We are looking for 4 Data Engineers (Medior/Senior) to join a dynamic data team working on large-scale transformation and migration initiatives. You will be responsible for building and maintaining scalable, high-performance data pipelines in both on-premises and AWS cloud environments. You will work with modern data frameworks and tools to enable clean, automated, and efficient data flows that support business-critical analytics and reporting.
Language: English (required)
What are your responsibilities?
* Translate functional specifications into efficient technical designs
* Develop and maintain batch and real-time data pipelines using Python and PySpark
* Automate data workflows and orchestration using Apache Airflow (see the sketch after this list)
* Migrate legacy pipelines from on-premises to AWS cloud environments
* Perform data modeling, transformation, and reconciliation tasks
* Ensure high standards in code quality, testing, and deployment
* Monitor pipeline performance and improve processing efficiency
* Collaborate within Agile teams and contribute to solution architecture
* Support data-driven decision making across departments
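To give a feel for the day-to-day work, here is a minimal sketch of the kind of pipeline these roles involve: an Airflow DAG (assuming Airflow 2.4+) that runs a daily PySpark batch job reading raw events from S3 and writing a cleaned, partitioned Parquet dataset back. The bucket names, field names (`event_id`, `ts`), and schedule are illustrative placeholders, not a specification of our actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_batch_job():
    # Import Spark inside the task so the DAG file parses on workers
    # that only schedule, and the Spark session lives only in the task.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_events_clean").getOrCreate()

    # Hypothetical S3 locations; real jobs would read these from configuration.
    raw = spark.read.json("s3a://example-raw-bucket/events/")

    cleaned = (
        raw.dropDuplicates(["event_id"])               # basic reconciliation
           .filter(F.col("event_id").isNotNull())      # drop unusable records
           .withColumn("event_date", F.to_date("ts"))  # derive partition column
    )

    cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3a://example-clean-bucket/events/"
    )
    spark.stop()


with DAG(
    dag_id="daily_events_clean",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_batch_job", python_callable=run_batch_job)
```

In production, the Spark step would typically run on AWS Glue or EMR rather than inside the Airflow worker; the sketch keeps everything in one file only for readability.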
Who are we looking for?
* 2–5 years of experience for medior roles, 5+ years for senior roles in data engineering or software development
* Proficient in Python, PySpark, and SQL
* Strong experience with Apache Spark and Apache Airflow
* Solid knowledge of data modeling and data architecture principles
* Experience working in cloud environments, especially AWS (S3, Glue, Lambda), and with CI/CD tools
* Familiar with both on-premises and cloud-based data ecosystems
* Agile mindset with experience working in Scrum/SAFe teams
* Strong analytical thinking, autonomy, and problem-solving skills
* Clear communicator and team player with a proactive attitude
* Bonus: experience with Scala, JavaScript, Informatica, or NoSQL technologies (MongoDB, Cassandra, etc.)
PS: We also work with freelancers.