Why Elon Musk is laughing off Grok’s flood of deepfake AI porn

From the moment Elon Musk’s artificial intelligence company, xAI, began rolling out its Grok chatbot to paid X subscribers in 2023, it pitched the tool as the bad boy of large language models. Grok would supposedly be authorized to say and do things that its politically correct competitors—primarily ChatGPT, produced by Musk’s old nemeses at OpenAI—would not. 

In an announcement on X, the company touted Grok’s “rebellious streak” and teased its willingness to answer “spicy” questions with “a bit of wit.” Although xAI warned that Grok was “a very early beta product,” it assured users that with their help, Grok would “improve rapidly with each passing week.”

At the time, xAI did not advertise that Grok would one day deliver nonconsensual pornography on demand. But over the past few weeks, that is exactly what has happened, as X subscribers inundated the platform with requests to modify real images of women by removing their clothing, altering their bodies, spreading their legs, and so on. X users do not need to be premium subscribers to avail themselves of these services, which are accessible both on X and on Grok’s stand-alone app.

Some images generated with Grok’s assistance depict girls between ages 11 and 13 topless or in otherwise suggestive poses, according to a U.K.-based child safety watchdog. One analysis of 20,000 images generated by Grok between December 25 and January 1 found that the chatbot had complied with user requests to depict children with sexual fluids on their bodies. On New Year’s Eve, an AI firm that offers image alteration detection services estimated that Grok was churning out sexualized images at a rate of about one per minute.

“I’ve been sexually assaulted in the past, and it almost felt like a digital version of that,” one woman told The Cut after Grok users transformed a picture of her posing next to a Christmas tree while wearing workout gear into a picture of her wearing a thong bikini. “It is unfathomable to me that people are allowed to do this to women.”

On Thursday, journalist and Bellingcat founder Eliot Higgins reported seeing Grok-generated images of Renee Nicole Good, the unarmed 37-year-old woman shot and killed by ICE agents in Minneapolis, altered to depict her dead body in a bikini. As Higgins put it: “Digital corpse desecration now available to the public.”

For all the potential use cases of AI chatbots that AI companies have touted in recent years, bespoke pornography was always the howlingly obvious one. (You don’t need to be a behavioral scientist to understand what certain demographics immediately think to do when presented with a tool advertised as capable of magically producing photorealistic images of anything one’s mind can dream up.) With varying degrees of success, platforms like ChatGPT and Google’s Gemini have at least tried to get ahead of this eventuality, building “guardrails” that try to limit users’ ability to customize NSFW images to suit their tastes.

A major difference between these companies and xAI, of course, is that xAI is helmed by Musk, whose ideological commitment to eradicating wokeness and censorship extends to offering amused, winking defenses of nonconsensual adult content published by his company’s flagship product.

On his X account, Musk has been firing off prompts treating what would be an existential crisis for any other company as a fun and funny meme. The fact that one of the victims was Ashley St. Clair, the mother of Musk’s 1-year-old son, did not dissuade Musk from declaring himself unable to stop laughing at an AI image of a bikini-clad toaster.

Both X and Musk have since issued statements reminding users that the X terms of service bar the creation of child sexual abuse material (CSAM) and pornography. X has also said that it removes CSAM and other illegal content, and permanently suspends accounts that create it. At the same time, it is sort of challenging for the company to position itself as taking a problem seriously when its owner, who is also the most-followed person on the platform, was logging on and treating the entire thing as one big joke.

Normally, the existence of an online tool capable of generating one-click CSAM would prompt widespread outrage and rapid responses from law enforcement. Regulators in countries in Europe, Asia, and South America have promised to investigate, and this week the European Commission extended an order that requires X to retain all Grok-related documents and data while officials take a closer look.

There are existing legal mechanisms in the U.S. for addressing the vile things Grok is doing, too. Less than a year ago, for example, Trump signed into law the TAKE IT DOWN Act, a bipartisan bill that requires websites to remove “nonconsensual intimate imagery” within 48 hours of a victim’s request. And although a provision of federal law known as Section 230 generally protects websites and social media platforms from liability for content published by their users, here, Grok itself is doing the “publishing” by generating the images.

Sen. Ron Wyden (D-OR), who helped write Section 230 three decades ago, weighed in on Bluesky, arguing that “AI chatbot outputs are not protected” under Section 230, and that “it is not a close call.” Along with two other Democratic senators, Wyden has also asked Apple and Google to remove Grok and X from their app stores for violations of the companies’ terms of service. This would be a significant step beyond what appears to be the only action taken by Apple thus far: raising its age rating of the Grok app from 12+ to 13+.

All that said, Musk, who spent four whirlwind months hacking away at the administrative state as head of the Department of Government Efficiency, has plenty of practical reasons not to be worried. Thanks to the political and financial support that Musk and his Silicon Valley peers provide to the Republican Party, the second Trump administration has been enthusiastic about integrating AI products—both from xAI and from other companies—into the workings of the federal government. 

The fact that Trump immediately designated David Sacks, a tech investor with significant AI and crypto interests (as well as close personal and professional ties to Musk), as his AI and crypto czar is a pretty good indication that meaningful regulation is not coming anytime soon.

Since 2019, states with both Democratic- and Republican-controlled legislatures have responded to the absence of federal action by passing more than 140 state laws regulating AI, according to a Brennan Center analysis. But in December 2025, the White House made what is perhaps its most promising gesture yet to the AI industry: an executive order reiterating Trump’s commitment to building a “minimally burdensome national policy framework for AI” that will “sustain and enhance the United States’ global AI dominance.”

Among other things, the order directs executive agencies to identify state AI regulations that the administration deems inconsistent with its agenda, and encourages Attorney General Pam Bondi to form an “AI Litigation Task Force” to challenge the offending laws in court.

Like most Trump executive orders, this one will not have the immediate impact that some breathless headlines suggest; as the order itself acknowledges, Congress would need to act in order for the substantive provisions to take effect. But for Musk, the message the White House is sending about its priorities is what really matters: Right now, the Trump administration is too preoccupied with starting illegal wars and executing unarmed protesters in the streets to worry about a few risqué images appearing on its social media platform of choice. 

When Musk left Washington last year, he did not do so quietly, lashing out at Trump for being insufficiently deferential to his preferences and insufficiently grateful for his support. But eight months later, the fact that the official response to Grok’s pornography and CSAM features is effectively an indifferent shrug demonstrates that the quarter-billion dollars Musk donated to Trump and other Republicans in 2024 was a sound investment in his company’s future.

By January 3, while Grok was still spitting out these images upon request, Musk and Trump had reconciled enough to have dinner together at Mar-a-Lago. Afterward, Trump called Musk “great” and “a good guy,” and Musk predicted that 2026 would be “amazing.”

Laws are only as strong as the willingness of the powers that be to enforce them. When you own the people who regulate you, there is no scandal too disgusting for you to laugh off.

source https://www.fastcompany.com/91472488/why-elon-musk-is-laughing-off-groks-flood-of-deepfake-ai-porn



Published by Veterans Support Syndicate

