Big Data Engineer with PySpark

Required skills

– MUST HAVE: hands-on experience with PySpark and shell scripting
– Hands-on experience with SQL and MySQL
– Ability to create stored procedures and functions in MySQL

Responsibilities

– Creating and managing new Spark jobs for existing data pipelines
– Performing storage infrastructure maintenance and necessary data migration
– Building SQL queries and stored procedures to ensure correct data representation
– Collaborating with cross-functional teams to define, design, and deliver correct data

Would be a plus

– Experience with Hadoop and Big Data toolsets: Hive / Impala
– Experience with AWS cloud services: EC2, EMR, RDS, Redshift

We offer

– Opportunity to work on bleeding-edge projects
– Work with a highly motivated and dedicated team
– Competitive salary
– Flexible schedule
– Medical insurance
– Benefits program
– Corporate social events
– Professional development opportunities

About the vacancy:

Location: Kyiv, Lviv, Kharkiv
Employment: full-time

Our partner is an engineering services company known for transformative, mission-critical cloud solutions for the retail, finance, and technology sectors. We have architected some of the busiest e-commerce services on the Internet and have never had an outage during peak season.
If you are excited about all aspects of modern engineering, from writing great code to creating architectures, designing components, interacting with clients, and delivering working systems to production, then you are the kind of person we are looking for. If you enjoy freedom and responsibility, creative thinking, and leading and mentoring others, then join our team of world-class developers, QA engineers, architects, and managers.