About

JDP are helping one of our clients look for a Senior Data Engineer to join their team on an initial 3-month contract. If you're passionate about building scalable, high-performance data pipelines and architecting robust data platforms, this could be your next challenge.

What You'll Be Doing:

- Build scalable batch and streaming pipelines with Python and PySpark
- Design Delta Lake architectures on Databricks
- Orchestrate workflows and jobs in Databricks
- Tune performance and manage code with Databricks libraries
- Manage AWS S3 data lakes for secure data access
- Deploy infrastructure using Terraform or CloudFormation
- Automate AWS services using Boto3
- Collaborate across teams to ensure data reliability
- Maintain data quality and observability standards

What We're Looking For:

- Proven experience in data engineering with Databricks and AWS
- Strong programming skills in Python and PySpark
- Hands-on experience with Delta Lake and Structured Streaming
- Deep understanding of data lake architecture and ETL pipeline design
- Experience with Terraform, CloudFormation, or similar IaC tools
- Strong problem-solving skills and the ability to work autonomously in a fast-paced environment
- Excellent communication and collaboration abilities

Bonus Points For:

- Experience working in a multi-cloud or multi-account AWS setup
- Familiarity with data governance, data cataloging, and security best practices

If this sounds like your next contract gig, then get applying! [email protected]