• Location Noord-Holland
  • Language English
  • Function Data Engineer
  • Expertise Senior (4-6 years)
  • Education MSc
  • Industry Insurance
  • Coding Java, Python, Scala, Kotlin
  • Dashboarding & Visualisation Tableau
  • Big Data Hive, Kafka, MapReduce, Pig, Redshift, Spark, Storm
  • Cloud AWS
  • Databases NoSQL



    Data Engineer

    iptiQ is a risk tech start-up within Swiss Re Group. Swiss Re is one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime.

    iptiQ provides digital, bespoke and transparent L&H and P&C protection products in a B2B2C manner. Founded in 2014, we’re transforming the way consumers buy insurance with a unique digital insurance engine which incorporates the latest technology with world-class underwriting capabilities. We build strong partnerships to sell insurance via trusted brands.

    iptiQ offers a flexible working environment where curious and adaptable people thrive. Are you interested in joining us?

    About iptiQ EMEA P&C

    iptiQ EMEA Property & Casualty is a start-up within Swiss Re with the mission to make the insurance buying process easy, transparent and 100% digital. From cyber to smart-home insurance, we pride ourselves on bringing you innovative insurance products that are ready for the future today. We work with established B2C and B2B brands to build tailored insurance service at attractive prices.

    About the role

    Glad you’re interested! We’re looking for a determined data engineer to help shape our digital landscape and change the way insurance is bought online. No more insurance talk that only a lawyer understands; we’re here to make a difference.

    As a Platform Data Engineer you’ll work in a multi-functional environment to design, implement and operate our big data and reporting infrastructure seamlessly integrated with our customer-facing platform.

    If you’re passionate about improving how we make business and product decisions and about delivering awesome products, then read on.

    Your day-to-day activities will include

    • As part of the data engineering team, design, implement and improve our big-data infrastructure in the cloud (Amazon AWS). You’re passionate about balancing scale and performance with a pragmatic approach, and about devising highly automated solutions that guarantee the required level of reliability, performance and quality across multiple environments
    • Work closely with other data engineers to expand existing large-scale data systems or implement new ones, and with data scientists to support evolving data models, analytical products, multi-sourced data analysis, reporting capabilities and production machine learning algorithms
    • Deliver high-quality code and solutions, alongside Site Reliability Engineers, to improve data delivery, fault-tolerance and seamlessly integrate with the production platform running our customer-facing products
    • Take end-to-end responsibility for running the data platform, alongside the data and SRE teams, monitoring key metrics around availability, performance and data delivery. Drive incident resolution and root-cause investigation through postmortems, establishing a culture of collaboration between teams
    • Focus on delivering fully automated, tested and self-healing systems. Engage in capacity planning and explore solutions to increase the efficiency of the platform using the opportunities of the public clouds

    This is what we will look for

    • 6+ years of hands-on experience designing, building, scaling and monitoring highly distributed, large-scale data platforms, preferably using open-source technologies (Spark, Storm, MapReduce, Hive, Hadoop, Pig, Kafka, Redshift, HDFS, HBase, Teradata, Vertica …). Public cloud experience (Amazon AWS) is preferred
    • Experience in software engineering in any modern language (6+ years hands-on with Java, Python, Scala, Kotlin …), applying the best engineering practices to produce elegant, maintainable and scalable code. Your code is easy to read, test and re-use, and you constantly improve its quality over time
    • Strong experience with SQL/NoSQL datastores, time-series databases and other distributed systems
    • You can troubleshoot distributed architectures with an analytical approach and a structured problem-solving attitude, and you collaborate effectively during incident mitigation, triage and resolution
    • Experience with application containerization/orchestration (Docker, Kubernetes, Istio), configuration management (Terraform, Ansible, Chef, Puppet) and continuous delivery and deployment solutions (Jenkins, GoCD, TeamCity, Spinnaker …) is a plus
    • Standout colleague: you collaborate effectively with team members, show technical leadership by supporting your views and ideas while staying open to different opinions, and confidently contribute to the overall growth of the team
    • Avid learner who stays up to date with the latest trends and can vet the adoption of new technologies with pragmatism and a long-term vision
    • Master’s or PhD degree in computer science, engineering or equivalent working experience
    • Ability to speak and write English fluently

    What we offer

    We’re a start-up embedded within an organization known for its high-caliber talent and excellent benefits. So you get the best of both worlds: a fast-paced, challenging environment with a genuine work-life balance and more.

    Do you love thinking ahead, identifying new opportunities and anticipating future challenges? Do you like driving complex, cross-functional projects with agile principles and pragmatism? Do you enjoy pushing boundaries and have a passion for the latest technologies?

    That was a lot! Still interested? Now, that’s the spirit we’re looking for. Go ahead and apply today.