
Machine Learning Operations Engineer


Data Science and Engineering
Poland - Krakow Office

About Us

Dyson is a global technology company with a unique philosophy - to solve problems that others ignore, first. It transforms every category it enters with radical and iconic re-inventions that work, perform, and look very different.  

Data excellence at Dyson is delivered by a diverse and collaborative global community spread across Dyson locations from Bristol to Chicago, Shanghai to Singapore.

Domain-specific experts form 'spoke' data teams, enabled by a central team at the 'hub'. All teams benefit from significant recent investments in cloud technologies and tools, combined with an expansive scope, and no shortage of ambition and momentum; data is recognised throughout the organisation as critical to all of Dyson’s strategic objectives.

About the Role

This role is part of a new Data Science Centre of Excellence (CoE), co-located in Europe and Singapore. The mission of the Data Science CoE is to spark use cases that leverage data science and machine learning to deliver transformative value to Dyson. The CoE acts as a sounding board and sparring partner where teams already have their own capability, and as a development and recruitment partner where teams are growing that capability. Finally, the CoE is responsible for robust and effective data science platform design, ensuring that we can iterate rapidly through experiments and reach production with efficiency and confidence.

The role of MLOps Engineer is key to ensuring that this latter objective of the Data Science CoE is a success.

Key Responsibilities

  • You will work alongside CoE Data Scientists and be a key contact for the markets and functions as they take their models into production.

  • You will be central to forming the MLOps landscape at Dyson, shaping the ways of working and standards that will ensure models deliver their promised value across the organisation.

  • You will work with streaming data in a multi-cloud organisation.

  • You will be a trusted advisor on all aspects of MLOps, from scaling and throughput to infrastructure and deployment strategies.

  • As a key contributor in the CoE, you will design, develop, and manage secure, scalable, and user-friendly processes and code.

  • Taking code from experimentation and notebook-based working to production will require close collaboration with Data Scientists and SMEs from around the organisation, as well as colleagues in the Global Data Function dealing with cloud technology, DevOps, and data engineering.

  • Reusability and simplicity will be core tenets of our MLOps philosophy. You will implement and shape MLOps patterns, curating and contributing to our shared Kubeflow component library.

  • As part of the Centre of Excellence, the organisation will look to you to publicise and demonstrate best practice and ways of working as we grow our MLOps capability.

About You

  • Experience building and scaling MLOps on cloud platforms

  • Confidence in AI and ML disciplines

  • Strong experience in taking data science experimental code to production

  • 1-2 years' experience in CI/CD, machine learning or data engineering

  • Skilled programmer in Python

  • Solid understanding of cloud concepts

  • Experience building Kubeflow pipelines

  • Familiarity with Docker and building custom containers

  • Knowledge of dev/staging/production environment methodologies in cloud environments

  • Experience with model monitoring techniques and their implications for retraining decisions

  • Knowledge of TFX and ability to craft TFX pipelines

  • Clear communicator, happy to distil and simplify complex MLOps topics for audiences with a broad range of technical knowledge

  • A desire to socialise an MLOps platform across an organisation

Desirable experience

  • 1 year of working with Google Cloud AI Platform and Vertex AI

  • 1 year of building scalable, component-driven pipelines in Kubeflow

  • Strong knowledge of TFX

  • Familiarity with various model deployment strategies

  • Experience of building and maintaining robust, reliable, and critical production pipelines, serving both batch and high-throughput streaming data

What We Can Offer

  • Private medical care

  • MyBenefit cafeteria program

  • Discount on Dyson products

  • Insurance package

  • Retirement plan

  • Holiday allowance

  • Performance bonus

  • Training opportunities

  • Possibility of working from home


Dyson is an equal opportunity employer. We know that great minds don’t think alike, and it takes all kinds of minds to make our technology so unique. We welcome applications from all backgrounds, and employment decisions are made without regard to race, colour, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status, or any other dimension of diversity.

Interview guidance

We are following the government guidelines regarding COVID-19. At this time, all interviews will be conducted via video or telephone. We’re taking these precautionary measures to protect the wellbeing of both our employees and candidates. Our Talent Acquisition team will work with you and provide further information as appropriate.