We are looking for an experienced Data System Engineer. Our team not only keeps our Big Data platform (Hadoop ecosystem) up and running, but also steps into development by creating our own batch and streaming applications and by helping other teams and stakeholders improve and tune theirs. We work in a cross-functional team of system engineers and developers, following Scrum, with all tasks planned in two-week sprints.
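
To give a flavour of the streaming work this involves, here is a minimal sketch of a Spark Structured Streaming job reading from Kafka. The broker address, topic name and message schema below are purely illustrative assumptions, not our actual setup.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StringType, TimestampType

# Assumes the spark-sql-kafka connector package is available on the classpath.
spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Hypothetical message schema, purely for illustration.
schema = (StructType()
          .add("partner_id", StringType())
          .add("event_time", TimestampType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
          .option("subscribe", "transactions")                # hypothetical topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Count events per partner in 5-minute windows, tolerating 10 minutes of lateness.
counts = (events
          .withWatermark("event_time", "10 minutes")
          .groupBy(window(col("event_time"), "5 minutes"), col("partner_id"))
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```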


How about?

  • Employment contract? Of course. With us you do not have to worry about job stability.
  • Flexible working hours? Sounds great! We start work between 8 and 10 a.m.
  • Friendly atmosphere at work? Yes! At PAYBACK, people are our most important asset.
  • Dress code? We definitely say no. There are no rigid dress-code rules in our company; sneakers are more than welcome.
  • Convenient location? Sure! We invite you to our new office at Rondo Daszyńskiego, but we are currently also working remotely.
  • Benefits? We have them! Among others: a corporate incentive program, a sports card and private medical care.
  • Training? Of course. We provide training to develop both hard and soft skills.
  • Working in a hybrid model? Of course! You work with us 2 days a week from the office, 3 days a week from home.
  • Is something missing? Open communication is our priority, so dare to ask!

Tech stack: Kafka, Spark, Hadoop (very well)

  • Build, maintain and optimize the PAYBACK Big Data platform (MapR / HPE Ezmeral Data Fabric) and the Confluent Kafka cluster setup.
  • Monitoring, alerting, troubleshooting and optimization are part of the duty (see the sketch after this list).
  • You will be involved in data and process migration to Google Cloud.
  • Research and development on data-related tools and technologies is part of the team's activities.
  • Actively improve our Ansible automation.
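
As an illustration of the monitoring side of the role, the sketch below checks a Kafka cluster for under-replicated partitions with the confluent-kafka Python client; the broker address is a placeholder assumption and this is not our actual tooling.

```python
from confluent_kafka.admin import AdminClient

# Placeholder broker address; point this at the real cluster.
admin = AdminClient({"bootstrap.servers": "broker1:9092"})

# Fetch cluster metadata; the timeout guards against unreachable brokers.
metadata = admin.list_topics(timeout=10)
print(f"Cluster {metadata.cluster_id}: {len(metadata.brokers)} brokers, "
      f"{len(metadata.topics)} topics")

# Flag partitions whose in-sync replica set has shrunk below the replica count.
for name, topic in sorted(metadata.topics.items()):
    for pid, part in topic.partitions.items():
        if len(part.isrs) < len(part.replicas):
            print(f"ALERT: {name}[{pid}] under-replicated "
                  f"({len(part.isrs)}/{len(part.replicas)} replicas in sync)")
```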

  • You have a Bachelor's/Master's degree in IT or relevant work experience.
  • Experience with and a good understanding of Confluent Kafka.
  • Experience with and a good understanding of the Hadoop ecosystem (Spark, Hive, Hue, Livy, YARN, ZooKeeper, etc.).
  • Knowledge of MapR / HPE Ezmeral Data Fabric is an advantage.
  • Outstanding know-how in Linux administration and maintenance.
  • Good knowledge of Ansible automation.
  • You are curious and highly interested in expanding your know-how in new technologies.
  • Experience with analytical tools like JupyterHub.
  • Experience with containerization (Docker, OpenShift) is an advantage.
  • Experience with further database technologies (PostgreSQL, Redis, Cassandra) is an advantage.
  • Experience with cloud hyperscalers (GCP, AWS, etc.) is an advantage.
  • You focus on service and customers.
  • You work self-reliantly and autonomously within the team.
  • Good communication skills in English; please send your CV in English.

Packages and extras

  • Healthcare package
  • Healthcare package for families
  • Leisure package
  • Language courses
  • Training
  • Books

Amenities

  • Bicycle parking
  • Hot beverages
  • Integration events
  • Chill room

We are the largest multi-partner loyalty programme in Poland. PAYBACK points are collected by 8.5 million Poles.

The effectiveness of PAYBACK has been verified by numerous Partners, including bp, Kaufland, MaxiZoo, Multikino, Mrówka and CUK, as well as more than 250 e-commerce companies such as Allegro, EURO RTV AGD and Booking.com.

PAYBACK's mission is to increase the sales of the Programme's Partners by strengthening their relationships with customers and improving consumers' shopping experience by rewarding them for the choices they make.

PAYBACK's success would not be possible without the commitment and passion of our team. We therefore want our employees to be properly rewarded and to work in inspiring and comfortable conditions.