Key Responsibilities

- Design, implement, and maintain a robust data platform on Databricks, supporting the ingestion, storage, transformation, and processing of large volumes of data from various sources.
- Design and develop Azure cloud data products (data warehousing, data integrations, dashboards, etc.) to enhance data services.
- Leverage Databricks capabilities to enhance data workflows, implementing advanced features and automations to support predictive modeling and machine learning.
- Gather, document, and clarify requirements; design solutions; develop code using the Microsoft BI stack; test and support UAT; monitor product functionality.
- Improve the data product development and support process through automation, including access requests, data refreshes, releases, source control, and knowledge management.
- Ensure functional and non-functional quality through data profiling, automated and manual tests, and performance testing (see the quality-check sketch below).

Key Skills

- Proficiency in SQL, Python, and PySpark for data processing and transformation (see the PySpark sketch below).
- Proven track record with Databricks for data engineering, including cluster management, Delta Lake, and other Databricks-specific features (see the Delta Lake sketch below).
- DevOps: experience automating aspects of product development and support, such as automated releases and calling APIs (see the release-automation sketch below).
- Experience with cloud platforms (AWS, Azure, or GCP) and associated data services.
- Excellent communication and collaboration skills, with the ability to convey technical concepts to non-technical stakeholders.
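To make the day-to-day work concrete, here is a minimal PySpark sketch of the kind of batch transformation the role describes: ingest from a landing zone, transform, and persist to a curated zone. The paths, column names, and "orders" dataset are hypothetical, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

# Ingest raw data from a (hypothetical) landing zone.
orders = spark.read.parquet("/mnt/landing/orders/")

# Transform: drop cancelled orders, then aggregate revenue per customer per day.
daily_revenue = (
    orders
    .filter(F.col("status") != "CANCELLED")
    .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Persist to a curated zone for downstream consumers.
daily_revenue.write.mode("overwrite").parquet("/mnt/curated/daily_revenue/")
```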
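For the Delta Lake experience mentioned under Key Skills, a typical task is an incremental upsert via MERGE. This is a sketch assuming the delta-spark package or a Databricks runtime where DeltaTable is available; the table path and join key are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical incremental source and curated Delta target.
updates = spark.read.parquet("/mnt/landing/customers_incremental/")
target = DeltaTable.forPath(spark, "/mnt/curated/customers/")

# Upsert: update rows matching on the key, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```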
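The DevOps item calls for automating releases and calling APIs. One concrete instance is triggering a Databricks job over REST as part of a release pipeline; this sketch uses the Databricks Jobs 2.1 run-now endpoint, with host, token, and job ID assumed to come from environment variables and error handling kept minimal.

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. the workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

# Trigger an existing Databricks job by ID.
resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": int(os.environ["DATABRICKS_JOB_ID"])},
    timeout=30,
)
resp.raise_for_status()
print("Triggered run:", resp.json()["run_id"])
```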
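Finally, the quality responsibility (data profiling and automated tests) often reduces to assertions over curated tables. A minimal sketch, written as plain assertions so it can run under pytest or in a notebook; the table path and the specific expectations are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/daily_revenue/")

# Basic quality gates: the table must not be empty and the key must be complete.
assert df.count() > 0, "daily_revenue is empty"
null_keys = df.filter(F.col("customer_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows have a null customer_id"
```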