Build scalable data platforms with Databricks, Spark, and Azure.

LACO is looking for a Medior Databricks Data Engineer who enjoys turning raw data into reliable, production-grade pipelines and helping shape modern cloud data platforms for real clients.
What you’ll do
Design and optimise data solutions in Azure Databricks & Spark
Implement data pipelines (ETL/ELT) for batch and streaming workloads, ensuring performance, reliability, and maintainability
Improve performance, reliability, CI/CD, and cloud architecture
Collaborate with engineers, analysts, and stakeholders to deliver impact
What you gain
✔ Work on cutting-edge Microsoft data & AI projects
✔ Real hands-on ownership, not just maintenance
✔ Continuous coaching, learning, and career growth
✔ Supportive team with freedom to take initiative
✔ 40-hour week + 32 days off + hybrid flexibility
Your background
~2+ years in data engineering
Strong Databricks, Spark (PySpark/Spark SQL), SQL, and Python
Experience with ETL/ELT, data modelling, and cloud/CI-CD practices
Fluent in Dutch and/or French + English
This role is also open to freelance consultants.
Don’t meet every requirement? We still encourage you to apply.
Ready to build real data platforms with Databricks? Let’s talk.