The Cooperation Clause Is Gone
The most striking deletion from the original charter is a commitment that once defined OpenAI's public identity. The 2018 document stated: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project." That line has been removed entirely from the 2026 version.
In its place, the new principles strike a more competitive tone. Altman's blog post acknowledges that OpenAI is "a much larger force in the world than it was a few years ago" and states the company will be transparent when its operating principles need to change. But the idea of stepping aside for a better-aligned rival? Gone.
That shift is hard to read as anything other than a response to the current moment. Anthropic, founded by former OpenAI researchers including Dario Amodei, has seen a sharp rise in user and investor interest. Business Insider reports that investor demand has pushed Anthropic's secondary market valuation to around $1 trillion, overtaking OpenAI's mid-$800 billion range. The old cooperation pledge made more sense when OpenAI had no serious rivals. It makes considerably less sense now.
From Commitments to Suggestions
A subtler but equally meaningful change lies in how the language itself is framed. The 2018 charter was built on first-person commitments: "we will," "we commit," "we expect." The 2026 document shifts to broader recommendations for the ecosystem: governments should consider new economic models, the world needs huge AI infrastructure investment, key AI decisions should be made democratically.
Those are sensible ideas. But they are recommendations for others, not commitments OpenAI is making for itself. The 2018 charter stated that the company's "primary fiduciary duty is to humanity." The 2026 version does not include that phrase.
The new principles also introduce a flexibility clause that earlier versions lacked. "We can imagine periods in the future where we have to trade off some empowerment for more resilience," the document reads — meaning OpenAI is explicitly reserving the right to restrict access to its models if safety or security considerations demand it. That is a significant evolution in how the company talks about deployment decisions, and one that could affect developers who have built on OpenAI's APIs.
The Timing Is Not Accidental
Altman published the new principles the same week OpenAI was completing its conversion from a capped-profit entity to a fully commercial structure, a process that has drawn criticism from Elon Musk, former employees, and state attorneys general. Publishing a principles document that explicitly includes commitments against concentrating AI power in any single entity, including OpenAI itself, functions as a public answer to those critics.
Whether the document holds up as a genuine governance framework or functions primarily as a reputational reset will depend on what OpenAI actually does. The five principles are broad enough to accommodate almost any decision the company might make. That flexibility could be a feature — the adaptability principle acknowledges openly that OpenAI does not know what it will learn — or it could be the point.
What is clear is that the company publishing these principles is not the nonprofit research lab that wrote the 2018 charter. It is a commercial enterprise valued in the hundreds of billions, competing aggressively with well-funded rivals, and making deployment decisions at a scale its founders did not anticipate. The principles have caught up to that reality, even if some of the commitments that made the original document distinctive have not made the trip.
This analysis is based on reporting from Business Insider via Yahoo Finance and OpenAI.
Image courtesy of Jonathan Kemper.
This article was generated with AI assistance and reviewed for accuracy and quality.