Position Description – Data Engineer (MS Fabric)
Daily Rate Contract (12 Months)
Hybrid – 3 days per week in Melbourne CBD
Full working rights in Australia required

Position Purpose / Overview
As a Data Engineer, you'll play a pivotal role in building and managing the data platform leveraging Microsoft Fabric. You will develop robust and efficient data pipelines, maintain and enhance the data architecture, facilitate analytics and machine learning use cases, and support business intelligence initiatives. Working closely with database administrators, developers, data scientists, governance, and security teams, you will ensure data is reliable, secure, accessible, and optimized for performance and business value.

Responsibilities & Accountabilities

Data Engineering & Architecture
- Design, build, and maintain scalable and efficient data pipelines within Microsoft Fabric to ensure timely, accurate, and reliable data delivery.
- Develop integrations and manage ETL/ELT processes from external data sources, enhancing data ingestion and orchestration capabilities.
- Enhance and refine the existing medallion-based (Bronze, Silver, Gold) data architecture to optimize data quality, accessibility, and usability.
- Manage and optimize data storage infrastructure, including relational and non-relational databases and cloud storage services, particularly on Microsoft Azure.

Data Modelling & Migration
- Document existing data models thoroughly, providing clarity around current-state architecture, data flows, and schemas.
- Develop and propose future-state data models aligned with strategic business goals, analytics, and machine learning needs.
- Create comprehensive data migration strategies and actively participate in migration activities, coordinating with database administrators and development teams to ensure smooth execution and minimal disruption.

Analytics & Machine Learning Support
- Collaborate with Data Scientists and Analysts to design, build, and maintain datasets and environments optimized for machine learning and analytics initiatives.
- Support the development and deployment of machine learning models, specifically focused on analysing business outcomes, detecting malpractice, and identifying opportunities for improvement.
- Build and maintain data workflows using big data technologies such as Hadoop and Spark, optimizing data processing in distributed systems for complex analytical workloads.

AI & Product Innovation
- Assist in the integration and adoption of advanced AI features within enterprise systems and product offerings.
- Contribute to AI-based product enhancements and innovation projects, promoting data-driven decision-making across the organization.

Governance, Documentation & Security
- Collaborate closely with Data Governance Managers and Security teams to ensure data compliance with governance frameworks, security standards, and data privacy regulations.
- Support and document data governance standards, data lineage, data dictionaries, and metadata management processes.
- Implement methods to enhance data accuracy and consistency, supporting Business Intelligence functions.

Stakeholder & Cross-Functional Collaboration
- Provide technical support and data expertise to cross-functional teams, including business intelligence specialists, product teams, developers, and external partners.
- Clearly document processes, architectural decisions, and data strategies, ensuring transparency and effective knowledge transfer within the organization.
Required Skills & Qualifications

Technical Expertise
- Demonstrated experience building, maintaining, and optimizing data pipelines using Microsoft Fabric or related data platforms (Azure Synapse Analytics, Azure Databricks, Azure Data Factory).
- Proficiency with SQL, Python, and scripting languages relevant to data engineering.
- Strong understanding and practical experience with modern data architectures, including medallion architecture (Bronze-Silver-Gold), data lakes, and lakehouse paradigms.
- Familiarity with Azure-based integration tools and technologies, including Azure Data Factory, Azure Synapse pipelines, and API integrations.
- Experience supporting Power BI, developing reports and dashboards, and optimizing backend data models for business intelligence and analytics consumption.

Data & Machine Learning Skills
- Experience working in data science environments, specifically supporting model building, deployment, and performance monitoring.
- Familiarity with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch) and relevant Azure services (Azure ML, Cognitive Services, Azure OpenAI Service).

Data Governance & Security
- Understanding of data governance principles and data security best practices.
- Familiarity with regulatory frameworks, data privacy considerations (e.g., GDPR, HIPAA), and secure data management practices.

Methodologies & Frameworks
- Knowledge and hands-on experience with Agile methodologies and tools (Azure DevOps, JIRA, Git).
- Understanding of DataOps principles and best practices for continuous integration, continuous delivery (CI/CD), and data quality assurance.

Soft Skills
- Ability to clearly document technical requirements, architecture decisions, and data flows.
- Strong interpersonal and collaborative skills, capable of effectively engaging with stakeholders across multiple functional areas.
- Effective problem-solving skills with an analytical mindset.

Experience
- Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related field, or equivalent work experience.
- 5 years of experience as a Data Engineer or in a similar role, specifically in cloud-based data environments (preferably Azure).
- Experience working closely with data scientists, analysts, and other technical teams to deliver business-aligned data solutions.

Preferred Skills & Qualifications
- Microsoft certifications highly desirable:
  - Azure Data Engineer Associate (DP-203)
  - Azure Solutions Architect Expert (AZ-305)
  - Azure AI Engineer Associate (AI-102)
- Experience with Azure OpenAI and generative AI tools.
- Familiarity with Microsoft Purview for data governance.
- Exposure to big data frameworks and technologies (Spark, Databricks, Delta Lake).
- Previous experience implementing or optimizing medallion architecture.