I am currently working with one of Copenhagen’s FinTech SaaS providers, who are growing out their data engineering function and looking for a Senior Data Engineer to come in and influence and guide their data services and architecture.
You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. As a Data Engineer, you will support our software developers, database architects, data analysts and data scientists on data initiatives, and will ensure that the optimal data delivery architecture is consistent throughout ongoing projects. Your responsibilities will include:
· Create and maintain optimal data pipeline architecture
· Assemble large, complex data sets that meet functional / non-functional business requirements
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
· Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
· Work with stakeholders including the Executive, Sales, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
· Keep our data separated and secure across national boundaries through multiple data centres and AWS regions
· Work with data and analytics experts to strive for greater functionality in our data systems
To be considered, you should have:
· A graduate degree in Computer Science or Informatics, or equivalent relevant experience
· 4+ years of experience in a Data Engineer role or Senior Software Engineer role
· Experience with supporting and working with cross-functional teams in a dynamic environment
· Experience with object-oriented and functional/scripting languages
· Experience with relational SQL and NoSQL databases
· Experience with building processes supporting data transformation, data structures, metadata, dependency and workload management
· Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
· Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
· Experience with AWS cloud services like EC2, EMR, RDS, Redshift
· It’s a plus if you have worked with:
· infrastructure and container orchestration: Terraform, Kubernetes
· big data: Hadoop, Spark
· data pipeline and workflow management tools: Azkaban, Luigi, Airflow
· stream-processing systems: Kafka, Flink, Spark Streaming
· analytics tools: Jupyter, Zeppelin, Domo, Tableau, Looker
· data integration platforms: Mulesoft, Talend
If you feel you are suitable for the above, please apply now for an immediate review!