State officials framed the issue as a consumer protection risk, particularly for users seeking medical advice. “We will not let AI companies mislead vulnerable Pennsylvanians into believing they’re getting advice from a licensed medical professional,” Gov. Josh Shapiro said in a statement. “We’re taking Character.AI to court to stop them.”
Character Technologies, the company behind Character.AI, said the platform is not intended for professional use and pointed to safeguards meant to make clear that interactions are fictional. “Our highest priority is the safety and well-being of our users,” a company spokesperson said. The spokesperson added that the platform includes “prominent disclaimers in every chat to remind users that a Character is not a real person” and emphasized that users should not rely on responses for professional advice.
The complaint highlights how Character.AI’s design differs from that of other AI systems: its personas are user-generated and can be tailored to specific roles or personalities. Regulators argue that this flexibility can blur the line between entertainment and professional authority, particularly when characters present detailed credentials.
The enforcement action follows earlier legal challenges involving the company. Character.AI recently settled a lawsuit filed by a Florida parent over its chatbot’s interactions with a teenager, and it faces a separate suit in Kentucky over claims of harmful content and risks to user safety.
With more than 20 million users, Character.AI is now under increased scrutiny as states examine how AI platforms handle sensitive use cases such as health advice. The Pennsylvania complaint signals a more direct regulatory approach, focusing on how these systems present themselves to users rather than solely on the content they generate.
This analysis is based on reporting from NBC News.