About the Role:
We are seeking a passionate MLOps Engineer to join our team and drive the deployment, monitoring, and optimization of machine learning models in production. This role will be key in ensuring the reliability, scalability, and efficiency of our ML infrastructure while supporting the development and release of AI-driven solutions. If you have a strong background in cloud technologies, automation, and ML model deployment, this is an excellent opportunity to work on cutting-edge AI applications.
Key Responsibilities
- Design, build, and maintain scalable ML model deployment pipelines for real-time and batch inference.
- Manage and optimize cloud-based ML infrastructure, ensuring high availability and cost efficiency.
- Implement monitoring, logging, and alerting systems for ML models in production to track performance, data drift, and anomalies.
- Automate model training, evaluation, and deployment processes using CI/CD pipelines.
- Ensure compliance with MLOps best practices, including model versioning, reproducibility, and governance.
- Collaborate with data scientists, ML engineers, and software developers to streamline the transition of models from development to production.
- Optimize model serving infrastructure using Kubernetes, Docker, and serverless technologies.
- Improve data pipelines for feature engineering, data preprocessing, and real-time data streaming.
- Research and implement tools for scalable AI development, such as Retrieval-Augmented Generation (RAG) and agent-based applications.
Qualifications
- Hands-on experience with MLOps platforms (e.g., MLflow, Kubeflow, TFX, SageMaker).
- Strong expertise in cloud services (AWS, GCP, Azure, or other cloud providers).
- Proficiency in containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
- Experience in building CI/CD pipelines for machine learning models.
- Solid programming skills in Python, Go, or Shell scripting for automation.
- Familiarity with data versioning and model monitoring tools (DVC, Evidently AI, Prometheus, Grafana).
- Understanding of feature stores and efficient data management for ML workflows.
- Strong problem-solving skills with a proactive, self-motivated attitude.
- Excellent collaboration and communication skills to work in a cross-functional team.
- Fluent in Mandarin for effective communication within a multilingual team environment.
Why Join Us
- Work with cutting-edge MLOps and AI deployment technologies in a fast-growing industry.
- Be part of a dynamic and innovative team focused on AI and cloud solutions.
- Gain exposure to end-to-end machine learning workflows, from data processing to model deployment.
- Opportunities for professional growth in cloud computing, automation, and AI infrastructure.
Life at PatSnap
At PatSnap, we pride ourselves on knowing everything about technology and innovation. Whether it’s determining your competitors’ innovation strategy, identifying new areas to grow your business, or helping you navigate potential risks, we identify the technological opportunities that could affect the future growth and survival of your business.

With the use of AI technology, we connect and analyze data points from patents, journals, venture capitalists, startups, M&A, technology news, and more, so you can separate the signals from the noise at every step of your innovation funnel.

Close the innovation insights gap, improve collaboration between teams, and overcome strategic challenges with PatSnap.

Experience PatSnap first-hand - book a demo at www.patsnap.com
Thrive Here & What We Value
- Inclusive environment
- Lifelong learning
- Entrepreneurial spirit
- Openness and honesty
- Customer-centric approach
- Generous annual leave and bank holidays
- Paid parental leave
- Private healthcare
- Eyecare voucher scheme
- Perkbox benefit scheme