Senior Data Engineer

Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we organized the first Data Science UA conference, setting the foundation for our growth. Over the past 8 years, we have diligently fostered the largest Data Science Community in Eastern Europe, with a network of over 30,000 top AI engineers.

About the company:

Our client is the market-leading intelligence platform for paid search advertising. The unique combination of artificial search intelligence and insider industry knowledge will help you drive more value from your PPC budget, from maximizing ROI to protecting your brand.

About the role:

We are seeking a skilled and experienced Data Engineer to join a dynamic team.

Requirements:

– Bachelor's degree in Computer Science, a similar technical field of study, or equivalent practical experience.
– Commercial experience developing Spark jobs using Scala and Java.
– Experience in data processing using traditional and distributed systems (Hadoop, Spark, AWS S3) and in designing data models and data warehouses.
– Strong understanding and application of data structures and algorithms in building efficient solutions.
– Experience with SQL and NoSQL database management systems (PostgreSQL and Cassandra).
– Commercial experience using messaging technologies (RabbitMQ, Kafka).
– Experience using configuration management and orchestration software (Chef, Puppet, Ansible, Salt).
– Confidence building complex ETL workflows (Luigi, Airflow).
– Good working knowledge of cloud technologies (AWS, GCP) and monitoring software (ELK stack).
– Strong problem-solving skills, with the ability to bring ideas forward and adapt solutions to complex challenges.

Nice to have:

– Experience working with Python.

Responsibilities:

– Understand distributed technologies and the best practices around them.
– Build and maintain services, features, and libraries that serve as definitive examples for new engineers.
– Design and write efficient, complex Spark jobs (data processing, aggregations, pipelines) as well as asynchronous, highly parallel, low-latency APIs and processes (a brief sketch of such a job follows this list).
– Work as part of an Agile team to maintain, improve, and monitor the company’s data collection processes using Scala and Java.
– Apply industry practices such as TDD and SOLID.
– Understand and be able to apply data structures and algorithms.
– Understand the company’s data architecture and use appropriate design patterns.
– Design and implement databases for relational and non-relational storage technologies.
– Support the Data Science team in delivering their machine learning models into production environments.
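
To illustrate the kind of Spark work described above, here is a minimal sketch of a Scala aggregation job. All names in it (the EventAggregationJob object, the s3://example-bucket paths, and the user_id, event_time, and amount columns) are hypothetical placeholders and not part of the client's actual pipeline.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical example: aggregate daily spend per user from raw events.
object EventAggregationJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-aggregation")
      .getOrCreate()

    // Read raw events from S3 (placeholder path and schema).
    val events = spark.read.parquet("s3://example-bucket/events/")

    // Aggregate total spend and event count per user per day.
    val dailySpend = events
      .groupBy(col("user_id"), to_date(col("event_time")).as("event_date"))
      .agg(sum("amount").as("total_amount"), count(lit(1)).as("event_count"))

    // Write results partitioned by date for downstream consumers.
    dailySpend.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/aggregates/daily_spend/")

    spark.stop()
  }
}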

We offer:

– Good compensation;
– Benefits package;
– Strong team and career growth.
