- Data Science and Engineering
- Singapore - Technology Centre
Data & Analytics at Dyson
Data and analytics excellence at Dyson is delivered by a diverse, collaborative global community spread across Dyson locations from the UK to Chicago, Shanghai to Singapore. Domain-specific experts form ‘spoke’ analytics teams, enabled by a central team at the hub. All teams benefit from significant recent investment in cloud technologies and tools, combined with an expansive scope and no shortage of ambition and momentum; data and analytics are recognised throughout the organisation, at the highest level, as critical to all of Dyson’s strategic objectives.
With a ‘one-team’ approach, the global community are on a mission to:
- evolve existing solutions to stay ahead
- embed emerging solutions to capitalise on potential benefits
- deliver conceptualised and future solutions to introduce net-new capability
Our Data Team
As the ‘hub’ team delivering the data, technology and community provision that enables Dyson’s global data and analytics capabilities, the Global Data Function (GDF) has end-to-end responsibility for data: from foundations (data quality, master data management), through management (data platforms, integrations), to value realisation (analytics enablement and delivery).
It is a multi-disciplinary, global team providing round-the-clock development and operations, with expertise spanning product and project management, community enablement, governance, data architecture, data engineering, data science, and analytics.
Involved in every aspect of Dyson’s global business - from finance to product development, manufacturing to owner experience - data enjoys record-breaking investment and a clear mandate for 2021 and beyond, with a remit to deliver solutions that generate impressive and tangible business value.
About the role
As a Data Engineer you will be responsible for developing, industrialising, and optimising Dyson's big data platform running on GCP. You will ingest new data sources, write data pipelines as code, and transform, enrich and publish data using the most efficient methods.
Working with data from across Dyson’s global data estate, you will understand how best to serve data at scale to a global audience of analysts. You will work closely with the data architects, data scientists and data product managers on the team to ensure that we are building integrated, performant solutions.
You will have a software engineering mindset, be able to leverage CI/CD, and apply critical thinking to the work you undertake. The role would suit candidates looking to move from traditional big data stacks such as Spark and Hadoop to cloud-native technologies (Dataflow, BigQuery, Docker/Kubernetes, Pub/Sub, Redshift, Cloud Functions).
Candidates who have strong software development skills and wish to make the leap to working with data at scale will also be considered.
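By way of illustration, here is a minimal sketch - not Dyson’s actual code - of a ‘pipeline as code’ on this stack, using the Apache Beam Python SDK that underpins Dataflow. It reads events from a Pub/Sub topic, applies a simple enrichment, and appends rows to a BigQuery table; the project, topic, table and field names are hypothetical placeholders.

```python
# Minimal Apache Beam (Dataflow) streaming pipeline sketch.
# All project/topic/table/field names are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def enrich(event: dict) -> dict:
    """Illustrative enrichment step: flag high-severity events."""
    event["is_priority"] = event.get("severity", 0) >= 3
    return event


def run() -> None:
    # streaming=True because Pub/Sub is an unbounded source; pass
    # --runner=DataflowRunner (plus project/region) to execute on GCP.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/device-events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "Enrich" >> beam.Map(enrich)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.device_events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

In practice, a pipeline like this would be versioned, tested and released through CI/CD - exactly the software engineering discipline the role calls for.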
Key responsibilities:
- Designing and building end-to-end data engineering solutions on the Google Cloud Platform.
- Ensuring the platform is secure, compliant and efficient.
- Being a proactive member of a DevOps/Agile scrum-driven team, always looking for ways to tune and optimise all aspects of work delivered on the platform.
- Aligning work to both core development standards and architectural principles.
Person specification / Core Competencies:
- Resilient and comfortable with a high pace of change
- Strong programming skills in languages such as Python, Java or Scala, including building, testing and releasing code into production
- Strong SQL skills and experience working with relational/columnar databases (e.g. SQL Server, Postgres, Oracle, Presto, Hive)
- Knowledge of data modelling techniques and integration patterns
- Practical experience writing data analytics pipelines
- Experience integrating with REST APIs / web services
- Experience handling data securely
- Experience with DevOps software delivery and CI/CD processes
- A willingness to learn and find solutions to complex problems
- Experience migrating from on-premise data stores to cloud solutions
- Experience designing and building real-time and near-real-time solutions using streaming technologies (e.g. Dataflow/Apache Beam, Flink, Spark Streaming)
- Hands-on experience with cloud environments (GCP and AWS preferred)
- Practical experience with traditional big data stacks (e.g. Spark, Flink, HBase, Flume, Impala, Hive)
- Experience with non-relational database solutions (e.g. BigQuery, Bigtable, MongoDB, DynamoDB, HBase, Elasticsearch)
- Experience with AWS Data Pipeline, Azure Data Factory or Google Cloud Dataflow
- Experience working with containerisation technologies (Docker, Kubernetes)
- Experience working with data warehouse solutions, including extracting and processing data using a variety of programming languages, tools and techniques (e.g. SSIS, Azure Data Factory, T-SQL, PL/SQL, Talend, Matillion, NiFi, AWS Data Pipeline)
- Exposure to visualisation technologies such as Looker and Tableau
- Knowledge of and experience in automation technologies
Dyson monitors the market to ensure competitive salaries and pension contributions. Beyond that, you’ll also enjoy a profit-related bonus, generous leave and life insurance. But financial benefits are only the start of a Dyson career. Rapid professional growth, leadership development and new opportunities abound, driven by regular reviews and dynamic workshops. And with a vibrant culture, flexible working hours, the latest devices and a relaxed dress code reflecting our engineering spirit, it’s an exciting team environment geared to creativity, innovation and ambition.
At Dyson, it's about more than our machines. We recognise that our success comes from our inventive people. We believe in including everybody and supporting you on your journey with us.
We are following government guidelines regarding COVID-19. At this time, all interviews will be conducted via video or telephone. We’re taking these precautionary measures to protect the wellbeing of both our employees and candidates. Our Talent Acquisition team will work with you and provide further information as appropriate.