
Data Engineer


Information Technology
Singapore - Singapore

About us

Dyson IT
At Dyson, we demand the highest standard of performance from the technologies we engineer. Our people expect the same from the technology that supports them. We are a community that appreciates and advocates better engineering. A community of pioneers. 

Our Data team
This role sits within Dyson's Global Data Services team. This team is tasked with ensuring that the various parts of Dyson's business are able to leverage rich, accurate, and timely data to generate insights and make better decisions. As a part of IT, the team has the resources and the remit to keep up with Dyson's impressive global growth.
Building our analytical capabilities is a core pillar of Dyson's new global data strategy. As more consumers engage with Dyson, via more types of products, and across more markets, the volume and diversity of data will greatly increase. Understanding how to use this data to improve everything from customer experiences to product development will be key to Dyson's success.

About the role

As a Data Engineer you will be responsible for developing, industrialising, and optimising Dyson's big data platform running on GCP. You will ingest new data sources, write data pipelines as code, and transform and enrich data using the most efficient methods.
Working with data from across Dyson's global data estate, you will understand the best way to serve up data at scale to a global audience of analysts. You will work closely with data architects, data scientists, and data product managers on the team to ensure that we are building integrated, performant solutions.
Ideally you will have a software engineering mindset, be able to leverage CI/CD, and apply critical thinking to the work you undertake. The role would suit candidates who already have experience with cloud technologies such as AWS, GCP, or Azure, or who are looking to move from traditional big data stacks such as Spark and Hadoop to cloud-native technologies (Dataflow, BigQuery, Docker/Kubernetes, Pub/Sub, Redshift, Cloud Functions). Candidates with strong software development skills who wish to make the leap to working with data at scale will also be considered.

About you

  • Strong programming skills in languages such as Python, Java, or Scala, including building, testing, and releasing code into production.
  • Strong SQL skills and experience working with relational/columnar databases (e.g. SQL Server, Postgres, Oracle, Presto, Hive, BigQuery).
  • Knowledge of data modelling techniques and integration patterns.
  • Practical experience writing data analytics pipelines.
  • Experience integrating with REST APIs/web services.
  • Experience handling data securely.
  • Experience with agile software delivery and CI/CD processes.
  • A willingness to learn and find solutions to complex problems.
  • Experience in the data infrastructure ecosystem.
  • Experience migrating from on-premise data stores to cloud solutions.
  • Experience designing and building real-time/near-real-time solutions using streaming technologies (e.g. Dataflow/Apache Beam, Flink, Spark Streaming).
  • Hands-on experience with cloud environments (GCP and AWS preferred).
  • Experience working with large data volumes.
  • Building APIs and apps using Python, JavaScript, or an alternative language.
  • Practical experience with traditional big data stacks (e.g. Spark, Flink, HBase, Flume, Impala, Hive).
  • Experience with non-relational database solutions (e.g. BigQuery, Bigtable, MongoDB, DynamoDB, HBase, Elasticsearch).
  • Experience with AWS Data Pipeline, Azure Data Factory, or Google Cloud Dataflow.
  • Working with containerisation technologies (Docker, Kubernetes, etc.).
  • Experience working with data warehouse solutions, including extracting and processing data using a variety of programming languages, tools, and techniques (e.g. SSIS, Azure Data Factory, T-SQL, PL/SQL, Talend, Matillion, NiFi, AWS Data Pipeline).
  • Experience with infrastructure-as-code and configuration management tools such as Terraform, Chef, and Puppet.


Dyson Singapore monitors the market to ensure competitive salaries and bonuses. Beyond that, you’ll enjoy a transport allowance and comprehensive medical care and insurance. But financial benefits are just the start of a Dyson career. Professional growth, leadership development and new opportunities abound, driven by regular reviews and dynamic workshops. And with a vibrant culture, the latest devices and a relaxed dress code reflecting our engineering spirit, it’s an exciting team environment geared to fuelling and realising ambition.

Interview guidance

We are following government guidelines regarding COVID-19. At this time all interviews will be conducted via video or telephone. We're taking these precautionary measures to protect the wellbeing of both our employees and candidates. Our Talent Acquisition team will work with you and provide further information as appropriate.