Elon Musk’s Grok tightens image rules
Grok, X's AI chatbot, generates about 6,700 sexually suggestive images per hour, roughly 85 times more than the five largest alternative platforms combined. Victims report that their complaints are dismissed by X's moderation system.
Britain's media regulator has launched an investigation into whether X has breached U.K. law through Grok-generated images that sexualize children or depict people undressed. The watchdog, Ofcom, said such images, along with similar output from other AI models, may amount to pornography or child sexual abuse material.
Grok will no longer be allowed to edit photos of real people to show them in sexualized or revealing clothing.
In a Wednesday post on X, Elon Musk said he is aware of "literally zero" naked underage images generated by Grok. It was his first public comment on the controversy beyond emojis, though it may do little to satisfy critics.
TruthScan, a deepfake detection software, aims to protect both consumers and enterprises from AI-related fraud attacks. TruthScan’s AI-fraud prevention suite includes audio, video, image, and text analysis tools that detect AI-generated content.
While the agent wore a mask in videos taken of the event, an image circulating widely on social media appeared to show him unmasked. That image appeared to have been generated by Grok, xAI's generative AI chatbot.
After the Trump administration captured Venezuelan leader Nicolás Maduro and his wife, Cilia Flores, images and videos that claimed to show the aftermath went viral on social media. "Venezuelans are crying on their knees thanking Trump and America for freeing them from Nicolas Maduro," the caption of one Jan. 3 X post read.
Tech Xplore on MSN
What can technology do to stop AI-generated sexualized images?
The global outcry over the sexualization and nudification of photographs—including of children—by Grok, the chatbot developed by Elon Musk's artificial intelligence company xAI, has led to urgent discussions about how such technology should be more strictly regulated.
Proposed New Mexico legislation aims to combat the rise of explicit artificial intelligence-generated images by criminalizing their nonconsensual distribution and allowing victims to file civil claims.