Data Engineer
We are looking for a skilled and motivated Data Engineer to join our team and help transform raw data into efficient, scalable, and high-quality data solutions. As a Data Engineer, you will be responsible for designing, building, and optimizing data pipelines, ETL/ELT processes, and data infrastructure. If you enjoy working with large datasets, modern technologies, and complex data challenges, we’d love to hear from you.
What will you be working on?
Data Processing and Analysis
Extracting, cleaning, and transforming data from various sources (databases, APIs, files).
Ensuring data consistency, quality, and integrity through validation and optimization.
Collaborating with analytics teams to define key metrics and KPIs.
Development and Optimization of Data Solutions
Designing, developing, and maintaining data pipelines using modern ETL/ELT tools (e.g., Apache Airflow and dbt).
Working with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB).
Implementing big data frameworks (Apache Spark, Kafka) for efficient data processing.
Ensuring scalability, reliability, and performance of data solutions.
Cloud & Infrastructure
Managing cloud-based data environments (AWS, GCP, or Azure).
Implementing best practices for data security, storage, and access management.
Automating infrastructure and deployments using Infrastructure as Code (IaC) tools.
Collaboration and Continuous Improvement
Working closely with data scientists, analysts, and software engineers to deliver actionable insights.
Staying up to date with the latest developments in data engineering, analytics, and cloud technologies.
Identifying and implementing improvements based on industry trends and emerging technologies.
What experience and skills should you have?
Bachelor's degree in Computer Science, Information Systems, Business Analytics, or a related field. A Master's degree is a plus.
Proven experience in data engineering, ETL/ELT development, and database management.
Strong skills in SQL and Python (experience with Scala or Java is a plus).
Proficiency in at least one major cloud platform (AWS, GCP, Azure).
Experience with big data technologies (e.g., Apache Spark, Kafka, Databricks).
Familiarity with data modeling, warehousing, and pipeline optimization.
Excellent analytical and problem-solving skills.
Strong communication and teamwork abilities.
Knowledge of data governance, security, and compliance is beneficial.
Relevant certifications in cloud computing or data engineering are a plus.
Salary
- Medior: from 2 000 EUR (depending on experience) + attractive benefits
- Senior: from 3 000 EUR (depending on experience) + attractive benefits