Contract Details
* Rate: Competitive market rate
* Location: Remote - Must be based in Belgium
* Duration: 6–12 months with possible extension
* Start Date: ASAP
Responsibilities
* Design, build, and maintain scalable ELT/ETL pipelines to support analytics, reporting, and operational workloads (see the illustrative sketch after this list).
* Develop and optimize data models, data warehouse structures, and data lake architectures.
* Implement robust data ingestion, transformation, and integration solutions from multiple sources (APIs, databases, streaming platforms).
* Ensure data quality, lineage, security, and governance across pipelines and datasets.
* Collaborate with data analysts, data scientists, architects, and business teams to understand data requirements and deliver reliable solutions.
* Monitor and optimize pipeline performance, including troubleshooting, incident resolution, and continuous improvement.
* Contribute to automation, CI/CD pipelines, and infrastructure-as-code for data platform components.
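By way of illustration only (not part of the role description), the kind of batch ELT step referred to above might look like the following minimal PySpark sketch. The paths, table layout, and column names are hypothetical, and Delta Lake support is assumed to be available on the cluster (e.g. Databricks).

```python
# Illustrative sketch only: a minimal batch ELT step (extract, transform, load).
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Extract: load raw landing-zone data (Parquet assumed here).
raw = spark.read.parquet("/mnt/landing/orders/")

# Transform: basic deduplication, typing, and filtering before the curated layer.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write a Delta table, partitioned for downstream analytics queries
# (requires the Delta Lake libraries, e.g. a Databricks runtime).
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/curated/orders/"))
```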
Required Skills & Experience
* Strong experience with ETL/ELT development and distributed data processing.
* Expertise in one or more major cloud platforms: Azure (preferred), AWS, or GCP.
* Solid proficiency with:
  * SQL & relational databases
  * Python or Scala
  * Spark, Databricks, or similar frameworks
  * Data lakes, data warehouses, Delta Lake/Parquet
* Experience with Azure Data Factory, Synapse, Databricks, or equivalent cloud data services.
* Knowledge of data modelling, metadata management, and data governance principles.
* Experience working in Agile environments and using CI/CD tools (Azure DevOps, Git, etc.).
Nice-to-Have Skills
* Experience with streaming technologies: Kafka, Event Hubs, Kinesis, Pub/Sub (a brief sketch follows this list).
* Familiarity with containerization (Docker, Kubernetes) and infrastructure automation.
* Exposure to machine learning pipelines or MLOps.
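As a purely illustrative sketch of the streaming nice-to-have, the snippet below shows Spark Structured Streaming consuming a Kafka topic and appending to a Delta table. The broker address, topic name, schema, and paths are all hypothetical, and the Kafka and Delta connectors are assumed to be on the classpath.

```python
# Illustrative sketch only: Kafka -> Spark Structured Streaming -> Delta.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Expected JSON payload shape for the hypothetical "events" topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         # Kafka delivers the payload as bytes in the `value` column.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Append to a Delta table with checkpointing for exactly-once recovery.
query = (
    events.writeStream
          .format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/events/")
          .outputMode("append")
          .start("/mnt/curated/events/")
)
# query.awaitTermination()  # block until the stream is stopped
```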