BigData engineer with PySpark
Requirements
– MUST HAVE: Hands-on experience with PySpark and shell scripting
– Hands-on experience with SQL and MySQL
– Knowledge of creating stored procedures and functions in MySQL
Responsibilities
– Creating and managing new Spark jobs for existing data pipelines
– Performing storage infrastructure maintenance and necessary data migrations
– Building SQL queries and stored procedures to provide correct data representation
– Collaborating with cross-functional teams to define, design, and deliver correct data
Nice to have
– Experience with Hadoop and Big Data toolsets: Hive / Impala
– Experience with AWS cloud services: EC2, EMR, RDS, Redshift
We offer
– Opportunity to work on bleeding-edge projects
– Work with a highly motivated and dedicated team
– Competitive salary
– Flexible schedule
– Medical insurance
– Benefits program
– Corporate social events
– Professional development opportunities