Data Engineer (Big Data Platform)

We are looking for a Data Engineer to join our analytics development team.
The Data Infrastructure team is responsible for collecting and storing all company data and for developing and supporting analytical services and tools.

Job Responsibilities

  • development and support of the Hadoop-based data storage and processing infrastructure;
  • optimization of computations and elimination of platform bottlenecks;
  • development of tools for working with the metadata catalog;
  • development and support of the Hadoop cluster.

Key Qualifications

  • 3+ years of commercial development experience;
  • at least 2 years of experience with Hadoop/Hive/Spark/Kafka;
  • understanding of DBMS design and operating principles;
  • at least 1 year of commercial Scala development experience with Spark;
  • 2+ years of commercial Python development experience;
  • experience with scripting languages (e.g., Bash);
  • understanding of Git flow and how to apply it;
  • experience with Docker and docker-compose, including writing and optimizing Dockerfiles;
  • experience with Unix-like systems as a confident user or administrator.
 Nice to have:
  • experience with task schedulers (Airflow, cron);
  • experience with Vertica;
  • understanding of the principles of building services.

We Offer You

  • remote work;
  • a flexible schedule: we don’t require you to be online at 09:00 sharp, so you can start work at a time that suits you;
  • interesting, ambitious tasks that will take you to the next professional level;
  • learning opportunities: seminars, training sessions, and conferences. If you want to attend a conference, we will help organize it;
  • private health insurance;
  • team-building activities: movie nights, quizzes, themed parties, annual countryside trips, football and volleyball matches;
  • corporate discounts on hotels and other services;
  • a young, energetic team of top specialists.