This job posting has expired.

Data Engineer (m/f/d) - Scala

  • Full-time
  • With professional experience

Earliest possible start date: January 2019

We are looking for a (Senior) Data Engineer - Scala to join our Product Intelligence (PI) team. As a Data Engineer, you will play a key role in designing, developing and scaling the systems used to classify our 13 million restaurant products, model customers’ preferences and enhance the user experience through personalisation.

What you tell your friends when they ask you what you do

“I work in a specialised team that owns the product intelligence domain from start to finish. I collect data and manage systems that my data scientist colleagues need in order to develop models that can explain and predict customer preferences. When they’re finished, I take care of exposing their models as services to provide users with more relevant offers on our site, apps, newsletters etc. I work at the cutting edge of applied machine learning and manage containerised microservices and distributed systems in the cloud. I am part of a tightly-knit collective of curious people with a heterogeneous set of skills and backgrounds. Our members believe that a good team is one that is inclusive, diverse and made up of open-minded, cooperative people.”

Your Profile

  • You have 2+ years of experience as a Data Engineer, including at least 1 year of work on Scala projects
  • You have first-hand experience in building RESTful services for production
  • You have extensive experience in developing, deploying and tuning Spark applications
  • You have experience with Spark cluster management and are keen to optimise performance and automate provisioning, scaling and alerting
  • You have an analytical mind and are a keen problem solver
  • You are a good communicator who enjoys sharing ideas and can articulate them clearly

The Position

  • Design, develop and test ETL jobs using Spark and the Spark Scala API to ensure that data is always available for exploratory data analysis, model training and model serving in the relevant storage systems (e.g. DWH, data lake, NoSQL)
  • Develop, test and deploy APIs that serve data scientists’ machine learning models as scalable services in the cloud
  • Help maintain and improve our team’s Spark infrastructure (we currently use AWS EMR) to ensure that it can serve all use cases that we use Spark for (model training, ETL, real-time predictions) at a uniformly high quality
  • Orchestrate model (re)training jobs and ETL pipelines using Airflow
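As a rough illustration of the ETL side of the role, here is a minimal sketch in plain Scala. It mimics the extract/transform/aggregate shape of such a job on standard collections; the record types, the `classify` helper and the sample data are all hypothetical, and a real job would run the equivalent logic through a `SparkSession` on DataFrames or RDDs and write the result to the DWH or data lake:

```scala
// Sketch of an ETL-style transformation in plain Scala (hypothetical data model).
// A real Spark job would read from a source system, apply the same
// filter/map/aggregate logic on DataFrames or RDDs, and persist the output.

object ProductEtlSketch {

  // Hypothetical raw record, e.g. one row of restaurant product data.
  final case class RawProduct(id: Long, name: String, priceCents: Int)

  // Hypothetical enriched record produced by the transform step.
  final case class ClassifiedProduct(id: Long, name: String, priceCents: Int, category: String)

  // Hypothetical classifier; in the real pipeline this would call a trained model.
  def classify(p: RawProduct): ClassifiedProduct = {
    val category =
      if (p.name.toLowerCase.contains("pizza")) "italian"
      else if (p.name.toLowerCase.contains("sushi")) "japanese"
      else "other"
    ClassifiedProduct(p.id, p.name, p.priceCents, category)
  }

  // Extract -> transform -> aggregate, expressed on Scala collections.
  def run(raw: Seq[RawProduct]): Map[String, Int] =
    raw
      .filter(_.priceCents > 0)                      // drop invalid rows
      .map(classify)                                 // enrich each record
      .groupBy(_.category)                           // aggregate per category
      .map { case (cat, ps) => cat -> ps.size }      // count products per category

  def main(args: Array[String]): Unit = {
    val sample = Seq(
      RawProduct(1L, "Pizza Margherita", 850),
      RawProduct(2L, "Sushi Box", 1200),
      RawProduct(3L, "Broken row", -1)
    )
    println(run(sample)) // counts per category, with the invalid row filtered out
  }
}
```

The same `filter`/`map`/`groupBy` calls exist on Spark's Dataset and RDD APIs, which is why Scala collections are a convenient way to prototype the logic before deploying it to a cluster.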

Our Offer

  • As you’ll be joining a young team, you will be constantly presented with novel puzzles and will be unfettered by technical debt or legacy code
  • You’ll become part of a team that prides itself on owning its stack, and you’ll have the opportunity to make your voice heard in technical and design discussions; your opinion will count
  • Company contribution toward your pension saving
  • Awesome (global) team events like the summer party, skiing trip and more!
  • Table football, billiards and a PlayStation in the kitchen
  • A contribution to your travel costs equal to the price of a BVG monthly pass
  • An attractive location with roof terrace close to Potsdamer Platz
  • Workshops, conferences and training seminars to support our growing team
  • Inhouse German language courses

Apply for this job

Apply now if you're interested in this vacancy!