ZONTAL

Data Engineer

Location: Aachen
  • New
  • Published on 20.03.2026
  • Permanent position

ZONTAL Data Engineer Job Description

At ZONTAL, we pride ourselves on providing structured, interoperable data to our customers to help them make informed business decisions and reduce manual data-processing overhead. We're seeking an experienced, pipeline-centric data engineer to make this a reality for our ever-growing portfolio.

The ideal candidate will bring solid mathematical and statistical expertise combined with genuine curiosity and creativity. This role spans many diverse and evolving responsibilities but focuses on building our Python ETL processes and writing well-structured DAGs; a short sketch of that kind of DAG follows this paragraph. Beyond technical prowess, the data engineer will need the communication skills to explain complex data trends clearly to organizational leaders. We're looking for someone willing to jump right in and help our customers get the most from their data.
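To give a concrete flavor of that work, here is a minimal sketch of such a DAG, assuming Airflow 2.4+ and its TaskFlow API; the DAG name, schedule, and task bodies are illustrative stand-ins, not ZONTAL's actual pipelines.

    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
    def instrument_etl():
        """Illustrative extract-transform-load flow; the source is stubbed."""

        @task
        def extract() -> list[dict]:
            # Pull raw records from an upstream source (stubbed here).
            return [{"id": 1, "value": " 42 "}]

        @task
        def transform(records: list[dict]) -> list[dict]:
            # Clean and normalize fields before loading.
            return [{"id": r["id"], "value": int(r["value"].strip())} for r in records]

        @task
        def load(rows: list[dict]) -> None:
            # Write the cleaned rows to the warehouse (stubbed here).
            print(f"loading {len(rows)} rows")

        load(transform(extract()))


    instrument_etl()

Returning values from one task to the next moves small payloads through Airflow's XCom mechanism; larger datasets would normally be staged in external storage between tasks.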

Objectives of this role

  • Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes
  • Help streamline our data science workflows through thoughtful automation, reusable pipeline patterns, and AI-assisted development, adding value to our product offerings and building out our customer lifecycle and retention models
  • Work closely with the data science and business intelligence teams to develop data models and pipelines for research, reporting, and machine learning
  • Be an advocate for best practices and continue learning

Responsibilities

  • Use agile software development processes to make iterative improvements to our back-end systems
  • Use AI-assisted development tools to accelerate implementation and debugging, with a strong emphasis on test coverage, code review, security, and maintainability
  • Model front-end and back-end data sources to help draw a more comprehensive picture of user flows throughout the system and to enable powerful data analysis
  • Build data pipelines that clean, transform, and aggregate data from disparate sources (a short sketch follows this list)
  • Design, develop, and maintain robust ETL/ELT pipelines using tools such as Apache Airflow
  • Collaborate with data scientists, analysts, and software engineers to understand data needs and deliver high-quality datasets
  • Ensure data quality, integrity, and security through validation, monitoring, and governance practices
  • Automate data workflows and improve data-processing efficiency
  • Monitor and troubleshoot data pipeline issues and performance bottlenecks
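As a concrete illustration of the pipeline bullets above, the sketch below cleans, joins, and aggregates two hypothetical sources with pandas; the file names, columns, and segment logic are assumptions made for the example.

    import pandas as pd

    # Hypothetical inputs: a CSV export and a JSON dump from another system.
    orders = pd.read_csv("orders.csv", parse_dates=["created_at"])
    users = pd.read_json("users.json")

    # Clean: deduplicate and coerce the join key to a single type.
    orders = orders.drop_duplicates(subset="order_id")
    orders["user_id"] = orders["user_id"].astype("int64")
    users["user_id"] = users["user_id"].astype("int64")

    # Transform: join the disparate sources into one view.
    merged = orders.merge(users, on="user_id", how="left")

    # Aggregate: daily order count and revenue per user segment.
    daily = (
        merged.groupby([pd.Grouper(key="created_at", freq="D"), "segment"])
        .agg(orders=("order_id", "count"), revenue=("amount", "sum"))
        .reset_index()
    )
    print(daily.head())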

Required skills and qualifications

  • Three or more years of experience with Python and data visualization/exploration tools
  • Familiarity with the AWS ecosystem, specifically Lambda, Step Functions, SQS, DocumentDB, and RDS
  • Fluency with AI-assisted development workflows to accelerate delivery, paired with strong judgment around correctness, security, and data privacy
  • Communication skills, especially for explaining technical concepts to nontechnical business leaders
  • Ability to work on a dynamic, research-oriented team with concurrent projects
  • Experience with Apache Airflow, Kafka, or similar tools
  • Strong understanding of dimensional modeling and normalization
  • Knowledge of GDPR and best practices for data privacy and protection
  • Familiarity with Git, Docker, and CI/CD pipelines for data workflows
  • Proficiency in working with Linux-based systems for deploying, monitoring, and maintaining data infrastructure
  • Experience working with RESTful APIs for data ingestion, transformation, and integration with third-party systems (see the sketch after this list)
  • Ability to create clear, comprehensive technical documentation for data pipelines, architecture, and processes
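For the RESTful-API point above, a minimal paginated-ingestion sketch might look like the following; the endpoint, bearer-token auth, and paging parameters are hypothetical.

    import requests


    def fetch_all(base_url: str, token: str, page_size: int = 100):
        """Yield records from a hypothetical paginated REST endpoint."""
        headers = {"Authorization": f"Bearer {token}"}
        page = 1
        while True:
            resp = requests.get(
                f"{base_url}/records",
                headers=headers,
                params={"page": page, "per_page": page_size},
                timeout=30,
            )
            resp.raise_for_status()  # fail fast on HTTP errors
            batch = resp.json()
            if not batch:  # an empty page signals the end
                break
            yield from batch
            page += 1

Because the function is a generator, a caller can stream records straight into a transform step without holding the full dataset in memory.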

Preferred skills and qualifications

  • Bachelor's degree (or equivalent) in computer science, information technology, computational biology, engineering, or a related discipline
  • Experience in building or maintaining ETL processes
  • Experience with real-time data processing
  • Background in bioinformatics or clinical data analysis is a plus
  • Experience working in a GxP-regulated environment
  • Familiarity with deploying and managing containerized data applications in Kubernetes environments for scalability and reliability
  • Experience with Helm for managing Kubernetes applications is a plus
  • Experience automating infrastructure and deployments using Terraform, AWS CDK, or the Serverless Framework is a plus
  • Familiarity with NoSQL technologies such as MongoDB or DynamoDB (see the sketch after this list)
  • Experience with Elasticsearch for indexing and querying data is a plus
  • Familiarity with tools like Prometheus or the ELK Stack for pipeline observability
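To illustrate the NoSQL point, here is a small pymongo sketch; the connection string, database, and documents are invented for the example and say nothing about ZONTAL's actual setup.

    from datetime import datetime, timezone

    from pymongo import MongoClient

    # Hypothetical connection string and collection names.
    client = MongoClient("mongodb://localhost:27017")
    runs = client["pipeline_metadata"]["runs"]

    # Record a pipeline run ...
    runs.insert_one({
        "dag_id": "instrument_etl",
        "status": "success",
        "rows": 1042,
        "started_at": datetime.now(timezone.utc),
    })

    # ... and query the ten most recent failures.
    for doc in runs.find({"status": "failed"}).sort("started_at", -1).limit(10):
        print(doc["dag_id"], doc["started_at"])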

You can find a PDF document with our privacy notice for applicants at:

https://zontal.io/wp-content/uploads/2026/03/2025-12-08-Transpareztext-Applicants-ZONTAL-GmbH-EN.pdf

Location

ZONTAL, Aachen