Your mission
Design and develop scalable data pipelines on Databricks
Manage data ingestion from various sources into the Databricks environment
Build and optimize ETL/ELT workflows for efficient, reliable data transformation
Guarantee data quality, integrity, and security across pipelines
Monitor pipeline performance and implement enhancements when needed
Document processes and share best practices with internal teams
Conduct knowledge transfer to internal data engineers for long-term support
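The mission above follows a classic extract–transform–load loop with data-quality gates. A minimal sketch of that pattern in plain Python (the record fields, function names, and validation rules here are illustrative assumptions, not part of the role description; on Databricks this logic would typically live in a PySpark job):

```python
# Minimal ETL sketch: ingest raw records, apply a data-quality gate,
# transform the valid rows, and quarantine the rest. Illustrative only.

RAW_RECORDS = [  # stand-in for rows ingested from a source system
    {"id": "1", "amount": "19.99", "country": "be"},
    {"id": "2", "amount": "not-a-number", "country": "nl"},  # bad row
    {"id": "3", "amount": "5.00", "country": "fr"},
]

def is_valid(rec):
    """Data-quality gate: id present and amount parses as a float."""
    try:
        float(rec["amount"])
        return bool(rec["id"])
    except (KeyError, ValueError):
        return False

def transform(rec):
    """Cast types and normalise fields for the target table."""
    return {
        "id": int(rec["id"]),
        "amount": round(float(rec["amount"]), 2),
        "country": rec["country"].upper(),
    }

def run_pipeline(records):
    good = [transform(r) for r in records if is_valid(r)]
    bad = [r for r in records if not is_valid(r)]
    return good, bad  # bad rows are quarantined for later inspection

clean, quarantined = run_pipeline(RAW_RECORDS)
print(len(clean), len(quarantined))  # → 2 1
```

Keeping the quality gate separate from the transform step makes it easy to monitor rejection rates per pipeline run, which is what the monitoring responsibility above amounts to in practice.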
Your profile
Strong SQL and Python skills
Expertise in Databricks and AWS services
Solid experience with Data Warehouse / Lakehouse architectures
Familiarity with Kimball data modeling for Business Intelligence needs
Analytical thinker with strong troubleshooting skills
Minimum 3 years of experience in SQL and data warehousing
At least 2 years of hands-on experience with Databricks
Effective collaboration skills with both project and engineering teams
Able to clearly document technical work and facilitate knowledge sharing
Bachelor's degree in Computer Science, Engineering, Mathematics, or related field
Fluent in English
Fluent in Dutch or French
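The Kimball modeling mentioned in the profile means organizing warehouse tables as a star schema: one fact table of measurable events joined to descriptive dimension tables via surrogate keys. A small self-contained sketch using Python's built-in sqlite3 (all table and column names are illustrative assumptions):

```python
import sqlite3

# Star-schema sketch in the Kimball style: a sales fact table surrounded
# by product and date dimensions, joined on surrogate keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,
    calendar_date TEXT,
    year          INTEGER
);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    quantity    INTEGER,
    revenue     REAL
);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO dim_date VALUES (20240101, '2024-01-01', 2024);
INSERT INTO fact_sales VALUES (1, 20240101, 3, 30.0), (2, 20240101, 1, 25.0);
""")

# Typical BI query: aggregate the fact table, sliced by a dimension attribute.
rows = conn.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
""").fetchall()
print(rows)  # → [('Hardware', 55.0)]
```

The same star layout carries over directly to a Databricks Lakehouse, where the fact and dimension tables would be Delta tables queried with Spark SQL.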