We’re working with a small-scale tech company based in Ghent that is searching for an experienced Data Engineer to build scalable and reliable data platforms.
Responsibilities:
Design, build, and maintain scalable data pipelines and data platforms
Develop and optimize ETL/ELT workflows for ingesting and transforming data
Implement and maintain cloud-based data solutions on AWS, Azure, or GCP
Develop and maintain data models and data transformation layers
Ensure data quality, governance, and compliance within data pipelines
Optimize performance, scalability, and cost-efficiency of data infrastructure
Integrate data from multiple internal and external sources
Support AI/ML initiatives by delivering clean, reliable datasets
Contribute to monitoring, testing, and reliability of data pipelines
Your Profile:
3–5+ years of experience in Data Engineering
Strong expertise in SQL and relational databases (PostgreSQL, Oracle, etc.)
Proficiency in Python, Java, or similar programming languages
Experience building data pipelines using Spark, Hadoop, or similar big data frameworks
Hands-on experience with ETL/ELT tools and orchestration frameworks
Experience working with cloud data platforms (AWS, Azure, or GCP)
Experience building batch and/or real-time data pipelines (e.g., Apache Kafka)
Knowledge of data modeling and warehouse design
Experience supporting AI/ML data workflows
Familiarity with modern data architectures (Data Mesh / Data Fabric) is a plus
The Offer:
Competitive salary and benefits
Flexible working arrangements
Professional development opportunities
Collaborative and innovative engineering environment
If this opportunity excites you, apply today or send your CV and a short cover letter to ryan.martin@vividresourcing.com.