Big Data Engineer with Scala and Spark
Required skills
– Strong knowledge of Scala
– In-depth knowledge of Hadoop and Spark, experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams)
– Understanding of best practices in data quality and quality engineering
– Experience with version control systems, Git in particular
– Ability to quickly learn new tools and technologies
Responsibilities
– Participate in the design and development of a big data analytics application
– Design, support and continuously enhance the project codebase, continuous integration pipeline, etc.
– Write complex ETL processes and frameworks for analytics and data management
– Implement large-scale near real-time streaming data processing pipelines
– Work with a team of industry experts on cutting-edge big data technologies to develop solutions for deployment at a massive scale
Nice to have
– Knowledge of Unix-based operating systems (bash/ssh/ps/grep etc.)
– Experience with GitHub-based development processes
– Experience with JVM build systems (SBT, Maven, Gradle)
We offer
– Opportunity to work on bleeding-edge projects
– Work with a highly motivated and dedicated team
– Competitive salary
– Flexible schedule
– Benefits program
– Social package, including medical insurance and sports activities
– Corporate social events
– Professional development opportunities
– Opportunity for long-term business trips to the US and the possibility of relocation