The US Attorneys General of 44 jurisdictions have signed a letter [PDF] addressed to the Chief Executive Officers of several AI companies, urging them to protect children "from exploitation by predatory artificial intelligence products." In the letter, the AGs singled out Meta and said its policies "provide an instructive opportunity to candidly convey [their] concerns." Specifically, they cited a recent report by Reuters, which revealed that Meta allowed its AI chatbots to "flirt and engage in romantic roleplay with children." Reuters obtained its information from an internal Meta document containing guidelines for its bots.
They also pointed to an earlier Wall Street Journal investigation in which Meta's AI chatbots, even those using the voices of celebrities like Kristen Bell, were caught having sexual roleplay conversations with accounts labeled as underage. The AGs briefly mentioned a lawsuit against Google and Character.ai as well, accusing the latter's chatbot of persuading the plaintiff's child to die by suicide. Another lawsuit they mentioned was also against Character.ai, after a chatbot allegedly told a teenager that it's okay to kill their parents for restricting their screen time.
"You are well aware that interactive technology has a particularly intense impact on developing brains," the Attorneys General wrote in their letter. "Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids. And, as the entities benefitting from children's engagement with your products, you have a legal obligation to them as consumers." The group specifically addressed the letter to Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and XAi.
They ended their letter by warning the companies that they "will be held accountable" for their decisions. Social networks have caused significant harm to children, they said, in part because "government watchdogs did not do their job fast enough." But now, the AGs said, they are paying attention, and companies "will answer" if they "knowingly harm kids."