
Source document

May 2023 AI Risk Statement - Primary document

The primary one-line statement text, with source context and verification links for teams that want first-hand reference.

20 March 2026 · 2 min read

Author

sAIfe Hands Editorial Desk

Lead editorial voice

The primary authored voice for thesis episodes, signal posts and companion notes that connect technology to culture, risk and civic meaning.

AI futures · cybergovernance · culture · civic imagination

Themes

AI safety · governance · public understanding

Why this document matters

The May 2023 AI Risk Statement is a short text with outsized institutional weight. It marked a shift from technical alignment discourse into explicit public-risk framing, and it did so in language that policy, media, and executive audiences could repeat without specialist translation.

For sAIfe Hands, this page is a primary-source anchor: read the exact wording first, then follow the linked interpretation, signatory, and discussion pages.

Primary statement text

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

This is the one-line statement released by the Center for AI Safety in May 2023 and signed by a broad set of researchers and public figures.

Primary sources

Why this matters in the sAIfe Hands lens

  • The statement normalized extinction-risk language in mainstream institutions.
  • It created a common baseline sentence that different camps could debate.
  • It accelerated a policy transition from "is this speculative?" to "what controls are proportionate?"
  • It exposed a continuing tension between long-horizon catastrophic risk and present-day deployment harms.

Explore further

Related discussions

Signal Room bridge

23 Feb 2026

An AI Expert Warning: 6 People Are (Quietly) Deciding Humanity’s Future!

A high-signal conversation with Stuart Russell that maps the same risk framing to governance and institutional response.


Library deep dive

25 May 2021

Stuart Russell on the flaws that make today's AI architecture unsafe, and a new approach that could fix them

Foundational technical framing for understanding why extinction-risk language moved from fringe to central.


Public discourse linkage

11 Jan 2025

Yoshua Bengio - 2 Years Before Everything Changes

A mainstream-facing articulation of frontier risk and governance urgency from a core AI researcher.


Continue exploring

Signatory focus · 20 March 2026

Geoffrey Hinton - 2023 warning arc

A compact brief on how Hinton's 2023 public warnings changed mainstream risk language and why that still matters for governance.

AI safety · public understanding · governance
2 min read · By sAIfe Hands Editorial Desk
Primary source brief · 15 May 2025

Geoffrey Hinton - Nobel Prize Conversations

An official long-form Hinton source on AI capability progress, risks, and human-AI coexistence.

AI safety · governance · public understanding
1 min read · By sAIfe Hands Editorial Desk