Big Data Engineer with PySpark
Requirements
– MUST HAVE: Hands-on working experience with PySpark and shell scripting
– Hands-on experience with SQL and MySQL
– Knowledge of creating stored procedures and functions in MySQL
Responsibilities
– Creating and managing new Spark jobs for existing data pipelines
– Performing storage infrastructure maintenance and necessary data migrations
– Building SQL queries and stored procedures to ensure correct data representation
– Collaborating with cross-functional teams to define, design, and deliver accurate data
Nice-to-Have Experience
– Experience with Hadoop and Big Data toolsets: Hive / Impala
– Experience with AWS cloud services: EC2, EMR, RDS, Redshift
What we offer
– Opportunity to work on cutting-edge projects
– Work with a highly motivated and dedicated team
– Competitive salary
– Flexible schedule
– Medical insurance
– Benefits program
– Corporate social events
– Professional development opportunities