• Job Intensity 40 hours
  • Duration 12+ months
  • Location NB, Veldhoven
  • Language Dutch, English
  • Function Data Engineer
  • Expertise Senior (4-6 years)
  • Education BSc
  • Industry Semiconductors
  • Coding Java, Python, SQL
  • Big Data Airflow, Flink, Hadoop, HBase, HDFS, Kafka, Spark, Zookeeper
  • DevOps Docker, Kubernetes
  • Databases Cassandra
  • Methods CI/CD
  • Areas of Expertise/Specialties Engineering



    Senior Data Engineer


    Are you a hands-on, experienced data engineer who can implement and maintain complex Big Data architectures? Are you able to apply cutting-edge technologies and tools to build Big Data pipelines? If you have the drive to exceed customer expectations, then read on!

    Job Mission

    As a Senior Data Engineer you will apply the latest technologies and tools to implement complex Big Data platform architectures and pipelines. You will work closely with domain and technical experts, platform architects, IT specialists, software developers, data scientists and analysts to drive the successful implementation and maintenance of complex software solutions. On top of your profound technical capabilities, your success will strongly depend on your interpersonal skills and your ability to communicate across disciplines. You will engage in architecting the solution, in close cooperation with stakeholders and the Data Solution Architect.

    Job Description

    You will be working in a multi-disciplinary team of data scientists, physicists, computer scientists, and system architects to build Big Data platforms and data processing pipelines. You will develop and implement complex data management software solutions leveraging cutting-edge software frameworks and tools.


    Requirements

    MSc in Computer Science, Software Engineering, Big Data Engineering or related


    • Preferably >5 years of hands-on experience in building complex Big Data platforms and pipelines (ETL, both batch and stream data processing)
    • Solid knowledge of software system design and distributed architectures
    • Hands-on experience managing and using distributed data management systems and clusters (e.g. HDFS, Cassandra, Druid, HBase, MongoDB, Parquet, ZooKeeper, Airflow)
    • Hands-on experience managing and using Big Data processing frameworks (e.g. Hadoop, Kafka, Hive, Spark, Drill, Impala, Flume, Storm, Flink)
    • Knowledge of programming paradigms (e.g. declarative, imperative, dataflow, object-oriented, event-driven, functional)
    • Hands-on experience with Python, Java, SQL programming languages
    • Experience in setting up both SQL and NoSQL databases
    • Experience with deployment and provisioning automation tools (e.g. Docker, Kubernetes, OpenShift, CI/CD)
    • Experience with Bash scripting and Linux systems administration
    • Experience in secure deployment of data science models and building efficient APIs for them
    • Knowledge of security, authentication and authorisation (LDAP / Kerberos / PAM)

    Personal skills

    • Excellent communicator, who can convince across disciplines
    • Team player with good social and team leadership skills; proactive and customer-oriented
    • Practical approach; able to work in a highly demanding, result-driven environment
    • Creative, out-of-the-box thinker; self-starting fast learner
    • Senior, independent mindset; able to deal with complex and uncertain environments under pressure

    Other information

    ASML creates the conditions that enable you to realize your full potential. We provide state-of-the-art facilities, opportunities to develop your talents, international career opportunities, a stimulating and inspiring environment, and most of all, the commitment of a company that recognizes and rewards outstanding performance.
    An assessment will be part of the recruitment process.