Sam Altman published a new set of operating principles on Sunday, and the difference between what OpenAI said in 2018 and what it says now tells a story about how the company views itself, its competitors, and its responsibilities.
OpenAI’s original 2018 charter defined artificial general intelligence (AGI) as highly autonomous systems that outperform humans at most economically valuable work. In the 2026 charter, AGI is mentioned only twice.
The change is more than semantic. AGI was the organizing principle of the 2018 charter; the 2026 document is organized instead around the iterative deployment of AI within society.
The 2026 charter declares, “The world needs to grapple with each successive level of AI capability,” a framing that elevates today’s commercial applications, rather than any future superintelligence, as the central concern of OpenAI’s philosophy.
The 2018 charter included one particularly noteworthy provision: if another value-aligned, safety-conscious project came close to building AGI before OpenAI did, OpenAI committed to stop competing with and start assisting that project.
The 2026 charter drops this promise without comment. Instead, it acknowledges that, in some cases, “exchanging some empowerment for greater flexibility” may be necessary.
Business Insider reports that Anthropic’s valuation is close to $1 trillion, while OpenAI’s is in the $800 billion range.
The 2018 charter speaks in the first person: “We will,” “We commit,” “Our primary fiduciary duty is to humanity.” The 2026 charter, in contrast, addresses governments and the broader tech community, calling on them to consider innovation-friendly economies and the ethical use of AI technologies.
