Big Data Engineer
About us:
We are Data Science UA, a fast-growing IT service company. We have proudly developed the Data Science community in Ukraine for more than 7 years. Data Science UA unites researchers, engineers, and developers around Data Science and related areas. We conduct events on machine learning, computer vision, information science, and the use of artificial intelligence for business in various fields.
About the client:
Our client is disrupting real estate by empowering people to unlock life’s next chapter! Their Data Engineering team supports multiple lines of business and is responsible for implementing, operating, and improving data pipelines and creating data sets that empower group brands and customers. They achieve this by building and deploying highly scalable data pipelines, adhering to software and data engineering best practices, and ensuring data quality to the delight of their consumers.
About the role:
They are looking for a Big Data Engineer who can architect, design, build, implement, and support data pipelines and data products that serve machine learning and analytical use cases. The Big Data Engineer will design and build infrastructure for orchestrating the end-to-end machine learning lifecycle, from experimentation to production deployment. The role may involve front-end, back-end, or full-stack development, as well as data engineering.
Requirements:
- 4+ years of software development experience using Python/Scala/Java.
- Experience with machine learning, infrastructure, or backend services.
- Experience working with technologies such as Kubernetes, Spark, PySpark, Flink, Kafka, Airbyte, and Airflow, as well as a variety of BI/reporting tools.
- Experience architecting and developing complex technical solutions.
- Experience applying automation to data engineering.
- Experience with Python, Java, NodeJS, Go, and GraphQL for APIs and services.
- Experience with SQL Server, PostgreSQL, MySQL or other SQL/NoSQL databases.
- Experience building CI/CD processes and pipelines, and setting up monitoring.
Responsibilities:
- Architect, design, build, implement, and support data pipelines and data products that serve machine learning and analytical use cases;
- Collaborate with product managers, engineers, data scientists, and analysts on mission-critical property data needs to build world-class datasets;
- Identify opportunities to evangelize and support existing data processes;
- Contribute back to common tooling and infrastructure to enable self-service capabilities that expedite customer onboarding.
They offer:
- Competitive salary and perks;
- Working with cutting-edge technologies;
- Friendly team and nice environment;
- A positive atmosphere across the company.