About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
Anthropic is working on frontier AI research that has the potential to transform how humans and machines interact. As our models grow more powerful, securing them from exfiltration or misuse becomes critically important. In this role, you'll be helping to build and institute controls to lock down our AI training pipelines, apply security architecture patterns built for adversarial environments, and secure our model weights as we scale model capabilities.
Responsibilities:
- Design and implement secure-by-default controls as they relate to our software supply chain, AI model training systems, and deployment environments.
- Perform security architecture reviews, threat modeling, and vulnerability assessments to identify and remediate risks.
- Support Anthropic's responsible disclosure and bug bounty programs and participate in the Security Engineering team's on-call rotation.
- Accelerate the development of Anthropic's security engineers through mentorship and coaching, and contribute to company-building activities such as interviewing.
- Help build greater security awareness across the organization and coach engineers on secure coding practices.
- Lead and contribute to large efforts such as building multi-party authorization for AI-critical infrastructure, helping to reduce sensitive production access, and securing build pipelines.
You may be a good fit if you:
- Have 8+ years of software development experience with a security focus.
- Have experience applying security best practices, like the principle of least privilege and defense-in-depth, to complex systems.
- Are proficient in languages such as Rust, Python, and JavaScript/TypeScript.
- Have a track record of launching successful security initiatives and working cross-functionally to enact such changes.
- Are passionate about making AI systems safer, more interpretable, and better aligned with human values.
Strong candidates may also:
- Have experience supporting fast-paced startup engineering teams.
- Care about AI safety risk scenarios.
Deadline to apply:
None. Applications will be reviewed on a rolling basis.
The expected salary range for this position is:
Annual Salary: $300,000–$320,000 USD
Logistics
Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship:
We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, though, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification.
Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time.
As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.