This is a Data Engineer - Data Operations role with Commonwealth Bank, one of the leading companies in AU right now, with an amazing team. They are continuing to grow rapidly, and this is the chance to join right as the company takes off.

More About the Role at Commonwealth Bank

Do work that matters

To us, data is everything. It powers our cutting-edge features and enables us to provide seamless experiences for millions of customers, from app to branch. We are one of Australia's most advanced big data operations teams, doing work that matters. Here, you'll be part of a team of engineers transitioning from existing technology stacks into cloud-based, future-ready technologies and agile, DevSecOps ways of working.

See yourself in our team

We're responsible for CommBank's key analytics capabilities and work to create world-leading capabilities for analytics, information management, and decisioning. We work across the Group Data Warehouse, Hadoop Big Data Platform, Pega Decisioning, SAS, R, Tableau, Ab Initio, and DataStage.

We're interested in hearing from people who
- Have solid hands-on experience building ETL applications and strong production support experience deploying and managing large-scale, complex big data applications on a Hadoop platform
- Have strong knowledge of data governance areas such as data lineage, technical metadata, data quality, and reconciliation
- Pride themselves on their Big Data and Hadoop expertise and experience, with a focus on Spark, Hive, Ab Initio (or other query engines), big data storage formats (such as Parquet, ORC, and Avro), and data pipeline orchestration tools like Airflow
- Are confident with Spark SQL, with proven ability in tuning and optimisation
- Are knowledgeable and have proven hands-on experience in implementing data integration tools (such as Ab Initio, DataStage, or Informatica) and data warehouses
- Pride themselves on their AWS experience - any or all of EC2, S3, RDS, EMR
- Have strong data architecture expertise, including different data modelling techniques
- Communicate constantly with a variety of technical and non-technical stakeholders
- Are confident in production support, with a passion for providing data-driven platform solutions
- Are constantly ahead of trends and work at the forefront of Big Data platforms
- Pride themselves on their execution of state-of-the-art ITIL practices, driving high-quality outcomes that solve core business objectives and minimise risk
- Possess problem-solving abilities and a strategic approach to tackling data-related challenges

Tech skills:

We use a broad range of tools, languages, and frameworks. We don't expect you to know them all, but experience or exposure with some of these (or equivalents) will set you up for success in this team:
- Building and supporting data pipelines using Hadoop big data tooling
- Strong scripting knowledge in Unix and SQL
- AWS EMR and Glue
- Streaming frameworks such as Kafka
- Optimisation of big data ETL pipelines

If you don't think you're a perfect fit, you should still sign up to Hatch and create a profile; we'll match you to other roles that suit you. Hatch exists to level the playing field for people as they discover a career that's right for them. We model this in our hiring process for our partners like Commonwealth Bank.

✅ Applying here is the first step in the hiring process for this role at Commonwealth Bank.

We do not discriminate on the basis of gender identity, sexual orientation, cultural identity, disability, age, or any other non-merit factors.
To put it simply, Hatch is for everyone.