- Design scalable data solutions using AWS, PySpark, and Databricks
- Drive innovation in data architecture with a focus on advanced cloud and big data technologies
- Collaborate with cross-functional teams to deliver high-performance data platforms

Company Overview

A leader in the wealth management sector is driving innovation through data engineering and cloud infrastructure. As part of the team, you will work with AWS, PySpark, and Databricks to develop data solutions that drive key business decisions. The focus is on leveraging the best in cloud technology and advanced analytics to build scalable, efficient data platforms that deliver real business value.

Role Overview

We are seeking a highly skilled Data Engineer/Data Architect with expertise in AWS Cloud (EMR, Athena, Airflow), PySpark, and Databricks. In this role, you will play a critical part in designing and implementing advanced data solutions on AWS and related technologies. As a forward-thinking professional, you will balance delivering technical outcomes with understanding the broader business impact. Your strategic approach will ensure that immediate challenges, future needs, and emerging opportunities are all considered in the solutions you design.

Key Responsibilities:

- Logical Data Models & Schema Design: Perform logical data modeling and schema design, leveraging PySpark and Databricks to build physical data models for enhanced data processing and analytics.
- Analyze Structural Requirements: Evaluate and determine the structural requirements for new software and applications, ensuring alignment with business needs.
- Guidance & Support: Provide guidance and support to application developers, ensuring adherence to data architecture standards and best practices.
- Data Migration & ETL: Lead the analysis, design, and execution of data migration and ETL tasks from legacy systems to modern solutions, utilizing PySpark for efficient data processing.
- Collaboration with Data Science: Work closely with the Data Science team to identify and address future data needs, ensuring alignment with business goals and analytics requirements.
- Application Data Mapping: Support data mapping, SQL query tuning, schema enhancements, and code expansion for application integration and optimization.
- Data Visualization: Use data visualization tools to present insights effectively, aiding decision-making across business teams.
- Data & Schema Standards: Maintain a strong understanding of data and schema standards, ensuring quality, consistency, and governance across all data activities.
- Data Mining & Segmentation: Apply advanced data mining and segmentation techniques using Databricks to extract meaningful insights from large datasets.
- Documentation: Create and maintain comprehensive documentation for data models, schemas, and data migration processes, ensuring clarity and consistency.

Skills & Experience

We are looking for a highly skilled technical professional with:

- Experience: At least 3 years of experience in data architecture or engineering, with a focus on application data modeling, DB schema design, ETL execution, and SQL query tuning. Significant experience with PySpark and Databricks is essential.
- AWS Cloud Expertise: Deep knowledge of AWS services, including EMR, Athena, Airflow, S3, Lambda, and Glue, with experience integrating cloud-based data solutions.
- PySpark & Databricks Proficiency: Proven experience using PySpark for data processing and Databricks for advanced analytics and large-scale data mining.
- SQL & PL/SQL: Strong expertise in SQL query optimization, schema design, and PL/SQL programming.
- SODA Knowledge: Experience working with Service-Oriented Data Architecture (SODA) to design scalable, service-oriented data solutions.
- ETL & Data Pipelines: Expertise in designing automated ETL pipelines for efficient data migration and transformation.
- Data Visualization Tools: Experience with tools such as Tableau, Power BI, or AWS QuickSight for delivering actionable insights.
- Education: Degree in Computer Science, Information Systems, or a related technical field.

What’s on Offer

This is a unique opportunity to join a highly technical, forward-thinking team working on large-scale data projects using AWS, PySpark, and Databricks. We offer a collaborative environment with a strong focus on professional development, innovation, and work-life balance. Key benefits include:

- Competitive daily rate
- Ongoing contract potential
- Opportunities to work on complex, high-impact data engineering projects
- Flexible working environment with a focus on work-life balance

Apply to Dave Marshall (David.marshall@profusiongroup.com) via the link today.

Profusion respects people, values diversity, and is committed to equality. We are committed to providing a supportive culture and positively contributing towards creating diverse and inclusive workplaces for our candidates and clients. We invite candidates of all ages, genders, sexual orientations, and cultural backgrounds, as well as people with disability, neurodiverse individuals, and Indigenous Australians, to apply.