Anthropic has taken a different stance, lobbying against the measure and advocating for stronger accountability requirements. “We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability,” said Cesar Fernandez, the company’s head of U.S. state and local government relations.
The disagreement highlights a widening divide in how leading AI firms approach regulation. OpenAI has supported policies that emphasize transparency without introducing new liability risks, while Anthropic has pushed for stricter oversight. In Illinois, Anthropic is backing an alternative proposal, Senate Bill 3261, which would require companies to publish safety and child protection plans subject to review.
The fight also reflects a broader shift in AI governance. After federal efforts to impose a nationwide pause on state-level AI regulation failed, individual states are moving forward with their own rules. That has opened the door for tech companies to engage directly in local policymaking, shaping how emerging AI laws are written and enforced.
OpenAI’s involvement follows a series of legal challenges tied to its technology, including wrongful death lawsuits connected to chatbot interactions. The Illinois proposal goes further than previous efforts: rather than merely avoiding the creation of new liability, it would actively limit companies’ existing exposure.
Anthropic, by contrast, has leaned into positioning itself as a proponent of stricter safeguards, arguing that AI companies should face scrutiny if their systems contribute to harm.
The outcome of the Illinois debate could influence how other states approach AI regulation, particularly as lawmakers weigh how to balance innovation with accountability in a rapidly evolving industry.
This analysis is based on reporting from Gizmodo.
Image courtesy of Eileen T. Melser/Chicago Tribune.
This article was generated with AI assistance and reviewed for accuracy and quality.