Requirements

What will you be bringing to the team?

- Master's degree in IT with a minimum of 11 years of relevant experience (or Bachelor's degree and a minimum of 15 years of relevant experience).
- At least 5 years of specific expertise with Apache Hadoop and Apache Spark for building data lakes.
- At least 8 years of specific expertise developing and maintaining data pipelines using tools such as Apache NiFi and Apache Beam, or scripting languages such as Perl or Python, to facilitate ETL processes and real-time data streaming.
- At least 10 years of specific expertise with data modelling and data governance, data modelling tools, and relational database systems using SQL.
- Excellent knowledge and experience of data lake/data lakehouse design and architecture (Apache Hadoop and Apache Spark, or AWS with Amazon S3).
- Excellent knowledge of relational database systems (Oracle or PostgreSQL).
- Knowledge of non-relational database technologies (MongoDB).
- Good knowledge of cloud-based business intelligence design.
- Knowledge of near-real-time data warehousing and/or change data capture technologies.
- Knowledge of Atlassian Jira / OpenProject, Atlassian Confluence / XWiki, and MS Teams.
- Ability to design and implement data governance strategies and open data protocols while ensuring compliance with EU regulations such as the GDPR.
- Fluency in English at level B2 or higher; French will be considered an advantage.