
    Data Engineer (Azure / Databricks / Spark)

    Location: Warszawa
    Type of work: Full-time
    Experience: Mid
    Employment Type: Permanent, B2B
    Operating mode: Remote

    Tech stack

      English – B2
      Azure Databricks – regular
      Azure Data Factory – regular
      Azure Data Lake Storage – regular
      Azure DevOps – regular
      Spark – regular
      Spark SQL – regular
      Python – regular
      Git – regular
      CI/CD – regular
      SDLC – regular

    Job description

    Online interview

    Remote / Hybrid

    Contract type: B2B or Employment Contract


    About the Project

    Join a modern data engineering team focused on delivering scalable, cloud-native data solutions. You’ll work with Azure, Databricks, and Spark to build high-performance data pipelines that support business analytics, data science, and reporting. The environment is agile, collaborative, and quality-driven—with strong practices around CI/CD, testing, and performance optimization.


    What You'll Do

    • Design, develop, and maintain robust ETL/ELT data pipelines using Azure Databricks and Spark
    • Optimize data transformation workflows for performance and cost-efficiency
    • Build and deploy data pipelines through CI/CD workflows using Azure DevOps (or similar)
    • Work closely with analysts, data scientists, and product teams to deliver clean, reliable data
    • Implement and monitor automated unit tests, ensuring code quality and maintainability
    • Contribute to team standards through code reviews and knowledge sharing
    • Follow SDLC principles in agile team setups
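
    As a rough illustration of the CI/CD workflow mentioned above, an Azure DevOps pipeline for a project like this might look like the following sketch. The stage names, file paths, and the Databricks deploy step are assumptions for illustration, not details taken from this posting:

    ```yaml
    # Illustrative Azure DevOps pipeline: run unit tests, then deploy.
    # All names and paths here are hypothetical.
    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    steps:
      # Install Python and run the automated unit tests mentioned above
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '3.11'
      - script: |
          pip install -r requirements.txt
          pytest tests/
        displayName: Run unit tests
      # Hypothetical deploy step using the Databricks CLI (Asset Bundles)
      - script: databricks bundle deploy
        displayName: Deploy to Databricks
    ```

    The exact deployment mechanism (Asset Bundles, Repos sync, or notebook import) varies by team; the key idea is that tests gate the deploy.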


    Tech Stack

    • Azure Databricks – core data processing platform
    • Azure Data Factory, Azure Data Lake Storage, Azure DevOps
    • Spark (including Spark SQL) – for distributed processing
    • Python – for scripting, transformation, and orchestration
    • Git, CI/CD pipelines – version control and automation
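
    The posting emphasizes unit-testable transformation code. One common pattern (a generic sketch, not taken from this employer's codebase; all names are illustrative) is to keep transformation logic in plain Python functions that can be tested without a running Spark cluster, and only attach them to Spark at the edges:

    ```python
    # Sketch: transformation logic as pure functions, so it can be
    # unit-tested with pytest without a Spark cluster. Field names
    # are hypothetical examples.

    def normalize_record(record: dict) -> dict:
        """Trim and lowercase the email, standardize the country code."""
        return {
            "customer_id": record["customer_id"],
            "email": record["email"].strip().lower(),
            "country": record.get("country", "unknown").upper(),
        }

    def transform(records: list[dict]) -> list[dict]:
        """Apply normalization and drop rows without an email."""
        return [normalize_record(r) for r in records if r.get("email")]
    ```

    In Databricks, the same logic would typically be expressed as DataFrame operations or applied from a notebook; keeping the core transformation pure makes the automated unit tests required by the role straightforward.
    
    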


    Requirements

    • Hands-on experience with Azure Databricks (must-have)
    • Experience with Azure services such as Data Factory
    • Proficient in Python programming
    • Solid understanding of Spark, including Spark SQL and performance optimization
    • Experience with automated unit testing and code quality best practices
    • Working knowledge of CI/CD pipelines (Azure DevOps or similar)
    • Familiarity with SDLC and agile methodologies
    • English proficiency at B2 level


    Nice to Have

    • Experience with Snowflake
    • Knowledge of dbt, Airflow, or Data Mesh architecture
    • Background in regulated industries such as pharma or finance


    What We Offer

    • Opportunity to work on high-impact, data-driven projects with modern architecture
    • Long-term collaboration with flexible B2B or Employment Contract options
    • Private healthcare and sports benefits (available for both contract types)
    • Learning & development budget with time allocated for upskilling
    • Friendly, quality-focused team and cutting-edge tech stack


    Undisclosed Salary

