Billed as “the highest priority,” superseding “any other instructions” Grok may receive, these rules explicitly prohibit Grok ...
India has threatened to revoke xAI's immunity and France has opened a criminal probe after Grok generated illegal CSAM and ...
For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could ...
A bipartisan group of senators is said to have asked Meta to explain Instagram's alleged failure to prevent child sexual abuse material (CSAM) from being shared among networks of pedophiles on the ...
AI-generated child sexual abuse material (CSAM) has been flooding the internet, according to a report by The New York Times. Researchers at organizations like the Internet Watch Foundation and the ...
Thousands of CSAM victims are suing Apple for dropping plans to scan devices for child sexual abuse material. In addition to facing more than $1.2B in penalties, the company could be ...
If you’re putting pictures of your children on social media, there’s an increasing risk AI will be used to turn them into sexual abuse material. The generative AI wave has brought with it a deluge of ...
Content warning: This article contains information about alleged child sexual abuse material. Reader discretion is advised. Report CSAM to law enforcement by contacting the ICAC Tip Line at (801) ...