Your mission
- Design and develop scalable data pipelines on Databricks
- Manage data ingestion from various sources into the Databricks environment
- Build and optimize ETL/ELT workflows to ensure efficient transformation
- Guarantee data quality, integrity, and security across pipelines
- Monitor pipeline performance and implement enhancements when needed
- Document processes and share best practices with internal teams
- Conduct knowledge transfer to internal data engineers for long-term support

Your profile
- Strong SQL and Python skills
- Expertise in Databricks and AWS services
- Solid experience with Data Warehouse / Lakehouse architectures
- Familiarity with Kimball data modeling for Business Intelligence needs
- Analytical thinker with strong troubleshooting skills
- Minimum 3 years of experience in SQL and data warehousing
- At least 2 years of hands-on experience with Databricks
- Effective collaboration skills with both project and engineering teams
- Able to clearly document technical work and facilitate knowledge sharing
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- Fluent in English
- Fluent in Dutch or French