Data Engineer (Pyspark) — Denistone West, Ryde Area
Expired

Required skills and experience:
- Proficiency in Python and its data-manipulation libraries.
- Strong experience with Apache Airflow for workflow orchestration.
- Hands-on experience with Google Cloud Platform services, particularly BigQuery and Cloud Storage.
- Specialist experience using PySpark for large-scale data processing.
- Familiarity with SQL and database management.
- Knowledge of data warehousing concepts and data modelling.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

Key Responsibilities:
- Design, develop, and maintain data pipelines, using Apache Airflow to orchestrate ETL processes.
- Implement and manage data storage solutions on Google Cloud Platform, including BigQuery, Cloud Storage, and Cloud SQL.
- Write efficient, reusable Python code for data processing and transformation.
- Collaborate with data scientists and analysts to ensure data availability and quality for analytics and reporting.
- Monitor and optimise data pipelines for performance and reliability.
- Troubleshoot and resolve issues in data processing and cloud infrastructure.
- Document processes and maintain the data pipeline architecture for future reference.
- Stay up to date with emerging technologies and best practices in data engineering and cloud computing.
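As an illustration only, the Airflow-orchestrated ETL workflow described above might look like the minimal DAG sketch below. Every identifier here (DAG id, task ids, bucket, dataset, and table names) is a placeholder invented for this sketch, not a detail from the role; the Cloud Storage-to-BigQuery step assumes the apache-airflow-providers-google package is installed.

```python
# Hypothetical sketch of an Airflow DAG for the stack named in the ad:
# Cloud Storage -> BigQuery load, followed by a Python/PySpark transform step.
# All names below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)


def transform(**context):
    # Placeholder for the transformation logic (e.g. a PySpark job or
    # BigQuery SQL) that would run after the raw load completes.
    pass


with DAG(
    dag_id="example_daily_etl",          # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load raw CSV files from a Cloud Storage bucket into a BigQuery table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_to_bigquery",
        bucket="example-raw-bucket",                              # placeholder
        source_objects=["events/*.csv"],                          # placeholder
        destination_project_dataset_table="project.dataset.raw",  # placeholder
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    # Transform the loaded data once the raw load has succeeded.
    transform_task = PythonOperator(
        task_id="transform",
        python_callable=transform,
    )

    load_raw >> transform_task  # declare task ordering
```

A file like this would live in the Airflow DAGs folder; the scheduler picks it up and runs the two tasks daily, in the declared order.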

Applications close Sunday, 1 December 2024