We're working with a small-scale tech company based in Ghent that is looking for an experienced Data Engineer to build scalable, reliable data platforms.
Responsibilities:
- Design, build, and maintain scalable data pipelines and data platforms
- Develop and optimize ETL/ELT workflows for ingesting and transforming data
- Implement and maintain cloud-based data solutions on AWS, Azure, or GCP
- Develop and maintain data models and data transformation layers
- Ensure data quality, governance, and compliance within data pipelines
- Optimize performance, scalability, and cost-efficiency of data infrastructure
- Integrate data from multiple internal and external sources
- Support AI/ML initiatives by delivering clean, reliable datasets
- Contribute to monitoring, testing, and reliability of data pipelines
Your Profile:
- 3–5+ years' experience in Data Engineering
- Strong expertise in SQL and relational databases (PostgreSQL, Oracle, etc.)
- Proficiency in Python, Java, or a similar programming language
- Experience building data pipelines using Spark, Hadoop, or similar big data frameworks
- Hands-on experience with ETL/ELT tools and orchestration frameworks
- Experience working with cloud data platforms (AWS, Azure, or GCP)
- Experience building batch and/or real-time data pipelines (e.g., Apache Kafka)
- Knowledge of data modeling and warehouse design
- Experience supporting AI/ML data workflows
- Familiarity with modern data architectures (Data Mesh / Data Fabric) is a plus
The Offer:
- Competitive salary and benefits
- Flexible working arrangements
- Professional development opportunities
- Collaborative and innovative engineering environment
If this opportunity excites you, apply today or send your CV and a short cover letter to ryan.martin@vividresourcing.com.