- Data Science and Engineering
- Singapore - St James Power Station Headquarters
Dyson Connectivity is responsible for defining a world-class Connected strategy, product and owner experience in the Dyson Link App, helping to take this amazing technology to market and explaining how our machines are different and the benefits they bring. We never share the stage; where we demonstrate our machines and their superiority, we win. We are also responsible for the development and delivery of the Connected features within the Dyson Link App and Cloud.
Connectivity will rely heavily on harnessing the power of data to provide insights back to the various areas of the business – NPI, RDD, Customer Services, Owner Experience and GTM teams – to help make data-based decisions. Connectivity will also help us adopt a more agile, mission-based operating model, one which will enable us to continue our journey in a more structured and efficient way.
We are on an exciting data journey where our emphasis is on providing a better user experience for our Owners. To this end, we are investing heavily in bringing together machine, app and owner data to deliver bespoke recommendations – whether help, support, guidance, repairs, parts or machines – to our owners when they need them.
About the role
The Lead/Senior Data Engineer will be responsible for the data that is fed to the Data Scientists and Data Analysts. They will be charged with designing and building an end-to-end solution for the data stream used for analytics.
We’re looking for someone who can bring solid experience working with production-grade data at scale. While experience is a must for this role, we also need someone comfortable with uncertainty and innovation, able to drive into new territory, learning, growing and continuing to deliver along with the wider team.
You will work closely with Google and Google-certified partners to build out these capabilities, which are crucial to our five-year plan to double the business and grow the direct-to-customer proposition.
You will work amongst a growing, multi-skilled set of teams – app engineers, data engineers, data scientists and a top-class product team – to deliver a scalable machine learning platform and services. The majority of the projects you will work on will be greenfield.
Expected day-to-day responsibilities:
Designing and building end-to-end data solutions for analytics and data flow
Plan, design and build data systems, structures and schemas to be used by Data Scientists, Data Analysts and Data Engineers
Optimise current data pipelines.
Assist with migrating existing data pipelines onto the platform.
Strengthening data quality and reliability
Improving data lineage and governance, following a principle of ownership: you build it, you own it.
Progressing standards and best practices for the platform and operational excellence
Contributing to the current Data Engineering and MLOps framework
Lead a team of data engineers in designing, building, and maintaining scalable data infrastructure and pipelines.
Collaborate with cross-functional teams, including data scientists and analysts, to understand data requirements and implement effective solutions.
Architect and optimize data storage, processing, and retrieval systems to ensure efficient and reliable data operations.
Develop and implement data governance and quality control processes to ensure data accuracy, integrity, and compliance.
Evaluate and recommend new technologies and tools to enhance our data infrastructure and improve data processing and analytics capabilities.
Provide technical guidance, mentorship, and support to team members, fostering a collaborative and high-performing environment.
Stay up to date with industry trends, best practices, and emerging technologies in data engineering, and apply them to drive continuous improvement in our data operations.
Accountable for contributing towards and encouraging among the team a supportive and safe team environment.
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of relevant experience building data infrastructure, frameworks and ETL pipelines.
Strong proficiency in SQL and experience with database technologies (e.g., PostgreSQL, MySQL, or similar).
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, or Java).
Solid understanding of distributed systems, data modelling, and data warehousing concepts.
Extensive experience with cloud-based data platforms (e.g., AWS, GCP, Azure) and related services (e.g., S3, Redshift, BigQuery), including GCP Cloud Build, Vertex AI and GCR for CI/CD.
Experience with data integration and ETL tools (e.g., GKE, Apache Spark, Airflow) is highly desirable.
Strong problem-solving skills and the ability to analyze complex data-related issues.
Excellent communication and leadership abilities, with a track record of leading and mentoring technical teams.
Experience with Looker for data monitoring and dashboarding is a bonus.
Experience with Terraform for infrastructure as code.
Experience with Git for source control management.
Dyson Singapore monitors the market to ensure competitive salaries and bonuses. Beyond that, you’ll enjoy a transport allowance and comprehensive medical care and insurance. But financial benefits are just the start of a Dyson career. Professional growth, leadership development and new opportunities abound, driven by regular reviews and dynamic workshops. And with a vibrant culture, the latest devices and a relaxed dress code reflecting our engineering spirit, it’s an exciting team environment geared to fuelling and realising ambition.
Dyson is an equal opportunity employer. We know that great minds don’t think alike, and it takes all kinds of minds to make our technology so unique. We welcome applications from all backgrounds, and employment decisions are made without regard to race, colour, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or any other dimension of diversity.