Big Data Developer with Python/Spark/ETL

Requirements

– Professional data engineering experience building batch and real-time data pipelines with Spark and Python
– Experience with data processing and transformation using ETL tools; Azure Databricks preferred
– Experience with cloud data warehouse solutions (Snowflake, Azure DW, or Redshift)
– Proactive approach to problem-solving with effective influencing skills
– Familiarity with Agile practices and methodologies

Responsibilities

We are looking for a Data Engineer to join the Enterprise Data Organization and build and manage data pipelines (data ingestion, transformation, distribution, quality rules, storage, etc.) for an Azure-based cloud data platform.

Nice-to-Have Experience

– Experience working in a DevOps model with CI/CD tooling
– Experience with the Azure cloud platform
– Hands-on Talend experience (a strong advantage)
– Experience with Apache Airflow or Azure Data Factory

What we offer

– Opportunity to work on bleeding-edge projects
– Work with a highly motivated and dedicated team
– Competitive salary
– Flexible schedule
– Benefits program
– Social package: medical insurance, sports
– Corporate social events
– Professional development opportunities
– Opportunity for long-term business trips to the US, with the possibility of relocation

About

Location: Kyiv, Lviv, Kharkiv
Type: Full-time

Our partner is an engineering services company known for transformative, mission-critical cloud solutions for the retail, finance, and technology sectors. They have architected some of the busiest e-commerce services on the Internet and have never had an outage during peak season.