Product Public Policy, Model Development and Research

San Francisco, CA | New York City, NY | Full-Time | Mid-level | Research


About Anthropic

  • Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Role Overview

  • As a member of the Product Public Policy team, you'll ensure that Anthropic's model development practices inform emerging AI governance frameworks and build trust with policy stakeholders. You'll own the policy dimensions of every major model release, translate our Responsible Scaling Policy into frameworks for transparency legislation, work closely with our Labs team on policy-informed research and product development, and lead initiatives that demonstrate our commitment to openness and responsible development.

In this role you will

  • Translate RSP components into emerging policy frameworks
  • Own policy engagement related to Claude's Constitution, translating our model values and behavioral frameworks into governance standards and demonstrating how principled AI development can inform regulation
  • Drive policy strategy and positioning for major model releases, including how to position developments in light of voluntary commitments, policy pressures, and external engagement and input
  • Partner with research, legal, and cross-functional teams to translate technical advances into policy narratives for regulators, AI Safety Institutes, and the technical policy community
  • Develop frameworks showing how our technical practices (interpretability, safety evals, development methodology) tie to public policy priorities
  • Represent Anthropic in standards bodies and regulatory consultations on model development and governance

You may be a good fit if you have

  • 8+ years in AI/tech policy with the technical depth to engage credibly on model development and evaluation
  • A strong understanding of LLM development, evaluation methodologies, and AI safety
  • A track record of translating technical practices into policy frameworks and regulatory strategies
  • Proven cross-functional experience working between technical teams and policy stakeholders
  • Expert stakeholder management skills, with experience working across policy, technical, academic, and civil society communities
  • Experience with AI governance frameworks (EU AI Act, transparency requirements, safety standards)

How we're different

  • We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
  • The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
