At Netomi AI, we are on a mission to create artificial intelligence that builds customer love for the world’s largest global brands.
Some of the world's largest brands are already using Netomi AI's platform to solve mission-critical problems, giving you the chance to work with top-tier clients at a senior level and build your network. Netomi is backed by leading investors such as Y Combinator, Index Ventures, Jeffrey Katzenberg (co-founder of DreamWorks), and Greg Brockman (co-founder and President of OpenAI), and you will join an elite group of visionaries who are defining the future of AI for customer experience.
We are building a dynamic, fast-growing team that values innovation, creativity, and hard work. You will have the chance to make a significant impact on the company's success while developing your skills and career in AI.

Want to become a key part of the Generative AI revolution? We should talk. Are you interested in building state-of-the-art NLP models and solving complex technical challenges? Do you want to be part of our journey to shape the future of automated customer service? If you are interested in working on some of the most challenging technical and programmatic problems in the space, we would love to talk with you about the exciting work and career opportunities at Netomi.
As a Lead Data Scientist at Netomi, you will drive NLP and machine learning projects and be responsible for developing methodology and solutions to support technical, analytical, and operational requirements.
Job Responsibilities
- Design, develop, and deploy machine learning models and algorithms at scale, spanning NLP/LLMs and deep learning, to solve complex business problems.
- Collaborate with Product & Engineering teams to integrate machine learning solutions into products and services, focusing on system design, coding, and MLOps practices.
- Work with large datasets, perform data analysis, and develop data pipelines to support model development, emphasizing efficiency and scalability.
- Architect and implement scalable, high-performance software systems and infrastructure, with a focus on optimizing existing systems and building new features.
- Develop and manage databases and caching systems using MySQL, Redis, Elasticsearch, and similar technologies to ensure efficient data storage and retrieval, taking MLOps requirements into account.
- Deploy robust infrastructure solutions on AWS or GCP, ensuring high availability, fault tolerance, and scalability, with a keen understanding of hosting large models and auto-scaling challenges.
- Conduct experiments, test hypotheses, and perform statistical analysis to validate models and drive improvements, collaborating closely with Data Science teams.
- Communicate findings and insights to stakeholders in a clear and concise manner, facilitating cross-functional understanding.
- Stay updated with the latest developments in machine learning, NLP/LLMs, deep learning, system design, and MLOps practices.
- Provide technical mentorship and guidance to junior team members, fostering a culture of continuous learning and development.
- Ensure system compliance with data security and privacy regulations, integrating best practices into development processes.
Requirements
- 3+ years of experience as a software engineer, with a focus on system design, coding, and MLOps practices, preferably in a product development environment.
- Strong programming skills in Python or other relevant programming languages, with experience in building scalable systems.
- Experience with machine learning libraries (e.g., scikit-learn, TensorFlow, PyTorch), focusing on deploying and managing models in production environments.
- Proficiency with key components of the tech stack, including Elasticsearch, Redis, MySQL, and AWS services (e.g., EC2, RDS, S3, Lambda), along with a deep understanding of system design principles.
- Excellent communication skills, with the ability to explain complex concepts to technical and non-technical stakeholders.
- Experience working closely with Data Science teams, understanding challenges related to hosting large models, inference, and auto-scaling.
- Strong problem-solving and analytical skills, with a keen interest in continuous learning and skill development.
- Hands-on experience with techniques such as transfer learning and with pretrained transformer models like RoBERTa.
- Experience in optimizing the infrastructure and deployment of machine learning models to reduce latency and improve cost efficiency.
- Optional: Experience with large-scale data processing technologies (e.g., Hadoop, Spark) and distributed computing systems.
Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, or other protected characteristics.