• Job Intensity 40 hours
  • Duration 12+ months
  • Location UT, Utrecht
  • Language Dutch, English
  • Function Data Engineer
  • Expertise Senior (4-6 years)
  • Education BSc
  • Industry Telecommunications
  • Coding Python, Scala
  • Big Data Airflow, Hadoop, Hive, Kafka, Oozie, Spark
  • Cloud AWS

    Freelance

    Big Data Engineer

    Requirements:

    • Experience with deployment and provisioning automation tools (AWS CloudFormation, Ansible,
      Docker, CI/CD (GitLab), Kubernetes, or Helm charts).
    • 3+ years of programming experience (Python, Scala, etc.).
    • 3+ years of Big Data experience and deep understanding and application of modern data
      processing technology stacks (Hadoop, Spark, Hive, Kafka, etc.).
    • Deep understanding of streaming data architectures and technologies for real-time and
      low-latency data processing (Kafka).
    • Experience with data pipeline and workflow management tools such as Oozie or Airflow.
    • Ability to drive adoption of Data Engineering best practices and sharing your knowledge.
    • 1 year of recent experience working with AWS is a must (EMR, Glue).
    • Platform operations and DevOps experience on AWS is preferred.

      Nice to haves:
    • Experience with exposing APIs and API gateways
    • Experience with Data Science tools such as SageMaker and MLflow
    • Experience with AWS Athena, IAM policies, Connectivity

      Expectations:
    • We expect the engineer to bring their own device so they can develop and test code on
      their local machine.
    • Our main language is English; fluency is required.