Data Engineer

Posted: 04 May 2021
Location: Utrecht
Discipline: Technology
Contact Name: Javier Barnby

Job description

My client, based in Utrecht and operating within the machinery industry, is currently seeking to expand their Software department with an innovative Data Engineer to start ASAP.

Responsibilities for the role:

  • Develop, test, operate and support end-to-end batch and near-real-time data engineering solutions

  • Develop quality Python/PySpark code and adopt/implement software development best practices to ensure high-quality standards are met (a brief illustrative sketch follows this list)

  • Perform thorough code reviews of fellow developers' work and ensure code quality

  • Use DevOps knowledge, cloud and data engineering expertise to operate, monitor and troubleshoot the pipelines/codebases

  • Adopt testing frameworks to embed and codify tests effectively

  • Take technical ownership for the implementation of the DE solution at an allocated mining site

  • Clearly communicate and explain concepts to senior stakeholders and key teams during sprint reviews and other programme meetings

  • Have a clear understanding of agile best practices and proactively collaborate and support the team if needed during the whole sprint cycle

  • Document technical implementation details and artefacts created as part of the process
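
For illustration only, a minimal sketch of the kind of Python/PySpark work and codified testing described above; the function, column names and test data are hypothetical and not taken from the client's codebase:

    # Hypothetical sketch: a small PySpark transformation with a pytest-style test.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def clean_readings(df: DataFrame) -> DataFrame:
        # Drop rows with missing sensor values and parse the timestamp column.
        return (
            df.dropna(subset=["value"])
              .withColumn("event_time", F.to_timestamp("event_time"))
        )

    def test_clean_readings():
        # A local Spark session keeps the test self-contained and runnable anywhere.
        spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
        df = spark.createDataFrame(
            [("2021-05-04 10:00:00", 1.5), ("2021-05-04 10:01:00", None)],
            ["event_time", "value"],
        )
        assert clean_readings(df).count() == 1
        spark.stop()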

Requirements for the role:

  • Excellent programming skills in Python, Golang, Java or Scala (4+ years). You know the strengths and pitfalls of OO design patterns and functional programming.

  • Extensive experience building data and machine learning pipelines

  • Experience with developing scalable and performant solutions in the cloud (OpenShift, Azure, AWS, GCP, Kubernetes)

  • Experience in developing and configuring CI/CD solutions on-premises and in the cloud (Jenkins, GitLab, TFS, Azure DevOps, MLflow)

  • Knowledge of big data processing (Spark, Kafka, Flink, AWS Kinesis, Azure Event Hubs) and flow development (Airflow, NiFi); see the brief workflow sketch after this list

  • Knowledge of big data storage (such as HDFS, Hive, HBase, Cassandra, MongoDB, Neo4j, FlockDB) and associated techniques (SQL, graph, document)

  • Knowledge of search engines (Elasticsearch)

  • Experience applying machine learning techniques

  • Experience with modern data warehouse solutions (Snowflake, AWS Redshift)
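
Similarly, as a hypothetical illustration of the flow-development requirement, a minimal Airflow DAG could look like the sketch below; the DAG id, schedule and task callable are assumptions made purely for this example:

    # Hypothetical sketch: a single-task Airflow DAG triggering a daily batch job.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_spark_job(**context):
        # Placeholder body; in practice this would submit the PySpark job,
        # e.g. via spark-submit, Livy or a managed cloud service.
        print("submitting batch job for", context["ds"])

    with DAG(
        dag_id="daily_batch_pipeline",
        start_date=datetime(2021, 5, 4),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="run_spark_job", python_callable=run_spark_job)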

If you wish to be considered for the position, please e-mail an up-to-date CV with a contact number. Please feel free to pass this advert on to other suitable candidates.