About Attentive:
Attentive® is the AI marketing platform for leading brands, designed to optimize message performance through 1:1 SMS and email interactions. Infusing intelligence at every stage of the consumer's purchasing journey, Attentive empowers businesses to achieve hyper-personalized communication with their customers on a large scale. Leveraging AI-powered tools, a mobile-first approach, two-way conversations, and enterprise-grade technology, Attentive drives billions in online revenue for brands around the globe.
Trusted by over 8,000 leading brands such as CB2, Urban Outfitters, GUESS, Dickey’s Barbecue Pit, and Wyndham Resort, Attentive is the go-to solution for delivering powerful commerce experiences between consumers and the brands they love. Attentive’s growth has been recognized by Deloitte’s Fast 500, LinkedIn’s Top Startups, and the Forbes Cloud 100, all thanks to the hard work of our global employees!
Who we are
We’re looking for a self-motivated, highly driven Senior Software Engineer to join our Machine Learning Operations (MLOps) team. As a team, we enable Attentive’s Machine Learning (ML) practice to directly impact Attentive’s AI product suite by providing the tools to train, deploy, and run inference on ML models with higher velocity and performance, while maintaining reliability.
We build and maintain a foundational ML platform spanning the full ML lifecycle for consumption by ML engineers and data scientists. This is an exciting opportunity to join a rapidly growing MLOps team at the ground floor, with the ability to drive and influence the architectural roadmap that enables the entire ML organization at Attentive. This team and role are responsible for building and operating the ML data, tooling, serving, and inference layers of the ML platform, and we are excited to bring on more engineers to continue expanding this stack.
Why Attentive needs you
- Analyze, troubleshoot, coordinate, and resolve complex infrastructure issues
- Build, operate, and maintain a low-latency, high-volume ML serving layer covering both online and batch inference use cases
- Orchestrate Kubernetes and ML training/inference infrastructure exposed as an ML platform
- Expose and manage environments, interfaces, and workflows to enable ML engineers to develop, build, and test ML models and services
- Manage and expand our feature store implementation that allows ML teams to self-service data labeling, feature engineering, and batch inferencing
- Close the latency gap between batch model inference and online, real-time model serving
- Develop automation workflows to improve team efficiency and ML stability
- Analyze and improve efficiency, scalability, and stability of various system resources
- Partner with other teams and business stakeholders to deliver business initiatives
- Help onboard new team members, provide mentorship, and enable a successful ramp-up on your team's codebases
About you
- You have been working in MLOps / Platform Engineering / DevOps / Infrastructure for 5+ years and understand gold-standard practices and best-in-class tooling for ML
- Your passion is exposing platform capabilities through interfaces that enable high performance ML practices, rather than designing ML experiments (this team does not directly develop ML models)
- You understand the key differences between online and offline ML inference and can articulate the critical elements for succeeding with each to meet business needs
- You have experience building infrastructure for an ML platform and managing CPU and GPU compute
- You have a background in software development and are passionate about bringing that experience to bear on the world of ML infrastructure
- You have experience with Infrastructure as Code using Terraform and can’t imagine a world without it
- You understand the importance of CI/CD in building high-performing teams and have worked with tools like Jenkins, CircleCI, Argo Workflows, and ArgoCD
- You are passionate about observability and have worked with tools such as Splunk, Nagios, Sensu, Datadog, and New Relic
- You are very familiar with containers and container orchestration and have direct experience with vanilla Docker as well as Kubernetes as both a user and as an administrator
Our environment
- We have access to Python, Snowflake, SQL, and dbt
- Our data visualization tool is Looker
- Our product backend is Java and Python microservices coupled with Spark, Kinesis, Airflow, Snowflake, and Postgres, all hosted on AWS
- Our team supports stakeholders across Client Strategy, Product Management, Sales, Marketing, Finance, Engineering, Design, and the Leadership Team
- We believe our company will win in the long run through product innovation and data-driven decision-making
You'll get competitive perks and benefits, from health & wellness to equity, to help you bring your best self to work.
For US-based applicants:
- The US base salary range for this full-time position is $160,000 - $240,000 annually + equity + benefits
- Our salary ranges are determined by role, level, and location
Attentive Company Values
- Default to Action - Move swiftly and with purpose
- Be One Unstoppable Team - Rally as each other’s champions
- Champion the Customer - Our success is defined by our customers' success
- Act Like an Owner - Take responsibility for Attentive’s success
Learn more about AWAKE, Attentive’s collective of employee resource groups.
If you do not meet all the requirements listed here, we still encourage you to apply! No job description is perfect, and we may also have another opportunity that closely matches your skills and experience.
At Attentive, we know that our Company's strength lies in the diversity of our employees.
Attentive is an Equal Opportunity Employer and we welcome applicants from all backgrounds. Our policy is to provide equal employment opportunities for all employees, applicants and covered individuals regardless of protected characteristics. We prioritize and maintain a fair, inclusive and equitable workplace free from discrimination, harassment, and retaliation.