You'll be a key contributor to our big-data processing pipelines, built on Spark with Scala/Java and hosted on AWS.
Having a say: You'll participate in problem definition and solution design with a diverse group of engineers.
Proper engineering: You'll write Java/Scala for our Spark-based processing pipelines, building infrastructure that is scalable, performant, well-tested, and easy to maintain.
Technical heavy lifting: You'll architect and implement major components of an ambitious technical roadmap, evaluating and adopting technologies such as Kinesis, Spark, and Redshift.
Analysing data: You'll measure and benchmark to improve our data pipelines and storage subsystems. We currently use IPython Notebooks and Zeppelin, but you're welcome to bring a tool of your choice.
- You've engineered software: You have hands-on experience with a good portion of our stack (Scala/Java, Spark, EMR, Elasticsearch, Cassandra, Redis, etc.), and for the parts you don't, you're eager to learn.
- You care about code: You take pride in your engineering and have a deep understanding of how to process terabytes of data through highly performant pipelines.
- You like moving quickly: We build and ship code to production multiple times a day.
Please apply now for immediate review.