Character AI, a platform that lets users roleplay with AI chatbots, has moved to dismiss a lawsuit filed against it by Megan Garcia. The suit holds the company responsible for the tragic suicide of Garcia’s teenage son, who Garcia alleges formed an unhealthy emotional bond with a Character AI chatbot named “Dany.”
Character AI’s lawyers argue that the platform is protected against liability by the First Amendment, much as computer code is. While the motion may not succeed in getting the case dismissed, it offers an early look at the company’s likely defense strategy.
The filing argues that the First Amendment bars liability for allegedly harmful speech, including speech alleged to have resulted in suicide, and that this does not change simply because the speech involves AI. The motion does not, however, address whether Character AI could also claim protection under Section 230 of the Communications Decency Act, the federal law that shields online platforms from liability for third-party content.
Character AI’s legal team also contends that Garcia’s lawsuit seeks not only to “shut down” Character AI but also to prompt regulation of similar technology platforms.
The lawsuit, which also names Alphabet, Character AI’s corporate backer, as a defendant, is one of several legal challenges the company faces over how minors interact with AI-generated content on its platform.
Character AI continues to strengthen its safety and moderation measures. Recent steps include new safety tools, a separate AI model for teen users, blocking of sensitive content, and clearer disclaimers that its AI characters are not real people.
Following the departure of its founders, Noam Shazeer and Daniel De Freitas, Character AI hired former YouTube executive Erin Teague as Chief Product Officer and named Dominic Perella, its former general counsel, Interim CEO.
Original source: Read the full article on TechCrunch