Freelance Medior Microsoft Fabric Data Engineer
* Contract until end of December 2026, starting ASAP
* Brussels (hybrid)
Context
The client is in an active transition phase: parts of the data landscape are still on-premise, while Microsoft Fabric is becoming the target platform for integration, lakehouse and warehouse modelling, and downstream Power BI consumption. This creates a hybrid reality where stability and delivery matter as much as modernisation. The team is building new Fabric capabilities while maintaining continuity for existing reporting and operational needs.
Role
You will join our data engineering team as they modernise the data platform from an on-premise data warehouse to a cloud-native Microsoft Fabric environment.
As a medior data engineer, you will contribute hands-on to building reliable, scalable data pipelines and to delivering well-structured data products that support reporting and analytics.
You will work in a hybrid landscape where on-premise and cloud sources must coexist during the transition, so you are expected to be comfortable operating across both worlds. You are not being hired to redesign the entire platform, but to deliver high-quality engineering work, challenge weak implementation patterns and help the team mature its Fabric way of working through pragmatic technical input and clear documentation.
The role is not purely focused on data engineering: functional analysis, documentation and semantic model work are also part of the scope, as reflected in the profile below.
Profile
MUST HAVE:
* API ingestion patterns (REST, pagination, auth): 1 year
* Data quality engineering (checks, reconciliation, monitoring): 5 years
* Dataflows Gen2: 1 year
* DAX (performance-focused measures): 3 years
* Dimensional modelling and star schemas: 5 years
* Documentation, standards and knowledge sharing: 5 years
* Enterprise SaaS as a data source (e.g. a CRM platform): 3 years
* Fabric Data Factory pipelines: 1 year
* Fabric Data Warehouse (tables, views, performance): 1 year
* Functional analysis: 5 years
* Hybrid enterprise landscape (on-prem plus cloud): 3 years
* Lakehouse on OneLake (Delta): 1 year
* Medallion architecture (bronze to silver to curated): 3 years
* Microsoft Fabric (end to end): 1 year
* Owning delivery and proactively managing risks and dependencies: 5 years
* Power BI semantic models: 3 years
* PySpark and Spark DataFrames: 2 years
* Python for data engineering: 2 years
* Scrum hygiene (clean tickets, predictable delivery): 5 years
* Semantic layer design (dataset to model to report coherence): 5 years
* Source connectivity (ODBC, SQL, Salesforce, notebooks): 3 years
* SQL (T-SQL, ELT patterns, optimisation): 3 years
* Translating requirements into deliverable technical solutions: 5 years