• Job Intensity: 40 hours
  • Location: NH, Amsterdam
  • Language: Dutch
  • Function: Data Engineer
  • Expertise: Medior (2-4 years)
  • Industry: Computer Software, Information Technology and Services, Internet
  • Coding: C/C++, Java, Python, Ruby, Scala
  • Big Data: Elasticsearch
  • Cloud: AWS, Azure, GCP
  • DevOps: Docker
  • Operating Systems: Linux
  • Areas of Expertise/Specialties: Engineering



    Data Engineer

    You are an engineer at heart, with a pragmatic mentality and the sense of responsibility of someone who maintains production systems. You switch easily between scripting and structured programming in typed languages. You understand failure modes in distributed systems, you are passionate about provisioning and automation, and you understand that building a robust system takes true skill and care.

    As a Data Engineer, you are responsible for the setup, deployment, and productionising of data-intensive systems. You are experienced in engineering systems from the ground up: OS-level configuration, distributed databases, big data clusters, and distributed indexes are all familiar to you. In addition, you understand how to develop data pipelines, including transformation and pre-processing.
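    To give a flavour of the pipeline work described above, here is a minimal extract/transform/load sketch. All names in it (the field names, the helper functions, the in-memory "store") are purely illustrative assumptions, not taken from any particular production stack:

```python
# Hypothetical minimal ETL sketch: extract raw rows, transform
# (clean and pre-process), and load into an in-memory store.

def extract(raw_lines):
    """Parse CSV-like lines into dicts (extract step)."""
    header, *rows = [line.split(",") for line in raw_lines]
    return [dict(zip(header, row)) for row in rows]

def transform(records):
    """Normalise fields and drop incomplete records (transform step)."""
    cleaned = []
    for rec in records:
        if not rec.get("user_id"):
            continue  # skip records missing the key field
        rec["country"] = rec["country"].strip().upper()
        cleaned.append(rec)
    return cleaned

def load(records, store):
    """Write records into a target store, keyed by user_id (load step)."""
    for rec in records:
        store[rec["user_id"]] = rec
    return store

raw = ["user_id,country", "42, nl ", ",de", "7,De"]
warehouse = load(transform(extract(raw)), {})
```

    In a real pipeline, each step would of course talk to external systems (object storage, a distributed database, an index), but the separation of concerns stays the same.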

    The Data Engineer role is a senior position with a central place in our clients’ teams, and we therefore require at least 2 years of relevant professional experience. Having said that, we like to be amazed: if you have done something outstanding during your studies, such as contributing to open source projects or starting your own company, we encourage you to apply regardless of your level of experience.

    Finally, since most of our customers operate in the Netherlands, a working knowledge of Dutch is a requirement.

    Preferred General Knowledge And Experience Includes

    • Hands-on experience managing distributed systems and clusters
    • Programming in scripting languages, e.g. Python, Groovy, Ruby
    • Programming in a statically typed language, e.g. Java, Scala, C++
    • Deployment and provisioning automation tools
    • Linux systems administration
    • Security, authentication and authorisation (LDAP / Kerberos / PAM)
    • Data management
    • Complex Extract Transform Load (ETL) pipelines
    • Cloud platforms (AWS, Azure, Google Cloud, etc.)

    Preferred Skills / Tool Experience Includes

    • Hadoop ecosystem
    • Elasticsearch
    • Ansible / Terraform
    • Docker
    • Java / Scala
    • Python
    • Shell Scripting