Your mission

Design and develop scalable data pipelines on Databricks
Manage data ingestion from various sources into the Databricks environment
Build and optimize ETL/ELT workflows to ensure efficient transformation
Guarantee data quality, integrity, and security across pipelines
Monitor pipeline performance and implement enhancements when needed
Document processes and share best practices with internal teams
Conduct knowledge transfer to internal data engineers for long-term support

Your profile

Strong SQL and Python skills
Expertise in Databricks and AWS services
Solid experience with Data Warehouse / Lakehouse architectures
Familiarity with Kimball data modeling for Business Intelligence needs
Analytical thinker with strong troubleshooting skills
Minimum 3 years of experience in SQL and data warehousing
At least 2 years of hands-on experience with Databricks
Effective collaboration skills with both project and engineering teams
Able to clearly document technical work and facilitate knowledge sharing
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
Fluent in English
Fluent in Dutch or French