National Security Policy Lead

Remote-Friendly (Travel-Required) | Washington, DC; San Francisco, CA | Full-Time | Lead | Other


About Anthropic

  • Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

  • We are looking for a National Security Policy Lead to guide our work to address a range of national security challenges involving AI. This role will develop and lead engagements on policy approaches to national security during a critical period in AI development and governance. You will be a high-agency member of a team dedicated to national security, and your work will ensure that Anthropic supports the security of U.S. and allied democracies, their geopolitical strength and competitiveness, and their adoption of AI for defense and intelligence purposes. You will partner closely with colleagues across legal, trust and safety, product and sales, and research functions.

In this role, you will

  • Design policy proposals to address national security challenges related to AI, and lead associated policy engagements
  • Shape Anthropic’s own policies and approaches to mitigating national security risks involving its products
  • Develop strategies for AI to safeguard the geopolitical strength and competitiveness of the United States and allied democracies
  • Support and promote collaborations with national security partners, across the public and private sectors, on model testing and deployment for national security
  • Collaborate with technical teams to translate Anthropic threat model research into concrete policy proposals, stakeholder education, and meaningful contributions to public discussions and debates
  • Engage in thought leadership and planning for changes that very powerful AI may bring to the global national security landscape

You may be a good fit if you

  • Have 10+ years of experience in government or private-sector roles related to national security
  • Hold an active TS/SCI clearance (or have held one in the last two years) and are able to obtain and maintain one
  • Possess excellent written and verbal communication skills, and can translate technical concepts into accessible information
  • Have experience designing and advocating for concrete policy and regulatory proposals regarding technology and national security
  • Are adept at working with diverse cross-functional teams (including but not limited to trust and safety, legal, product, research, comms, and marketing)
  • Are high-agency and able to develop and execute strategy independently, accounting for dependencies on other cross-functional teams within an organization
  • Have demonstrated interest and experience in a complicated technical subject (ideally AI, though quantum computing, cryptography, or fusion power are other examples)
Compensation

  • The annual compensation range for this role is listed below.
  • For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning it includes both the sales commission/bonus target and the annual base salary for the role.

How we're different

  • We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
  • The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
