
EU vs X: Grok’s Explicit-Image Mess Has Officially Crossed the Line
The EU has now singled out X, and this time it isn't about politics, misinformation or some nebulous free-speech argument.
It has to do with porn: specifically, the sexually explicit images that can be created with Grok, the AI built into Elon Musk's platform, and the question of whether some of them were being used to make "digital undressing" content.
This is the sort of thing that makes your stomach clench when you read it, because the harm isn't abstract. It's targeted, personal and, in some cases, possibly illegal.
And the mood matters, too. This isn't the EU being melodramatic. This is the EU saying, "Enough."
Regulators are worried about how fast this kind of content spreads online, and about the simple fact that once a deepfake explicit image is out there, it's not going to disappear.
The damage is done, even if the platform takes the image down, even if the account gets banned.
Now here's the kicker. People keep acting surprised when AI gets put to use for the worst things. But let's face it: are we really surprised?
You unleash a powerful image tool on millions of people and the internet does what it always does: grabs the shiny new toy and immediately starts looking for ways to hurt someone with it.
That's why this investigation isn't merely "EU angry at chatbot." It's happening under the Digital Services Act, which essentially requires big platforms to behave like responsible adults.
The core question is whether X took risk assessment seriously and put adequate safety guardrails in place. Not after the damage. Before.
X has apparently taken some steps in response, such as restricting certain features and tightening controls (for example, putting some image-generation functions behind a paywall).
That’s… something, I guess. But if you’re the one whose image was altered and circulated, it probably doesn’t feel like a win. It’s as if you’re locking the front door only after your house has been robbed.
And here’s another uncomfortable fact: Platforms today don’t simply “host content.” They amplify it. They recommend it. They push it into feeds.
That's why the EU isn't just concerned about the explicit images Grok generated; it also wants to know whether X's systems made that content travel faster and farther than it ever should have.
What’s frightening is that this is about to become the new normal.
AI-generated imagery isn't going anywhere. In fact, it's only getting better, faster, cheaper and more realistic.
Which is to say the “gross uses” are going to multiply as well. Today it’s Grok. Tomorrow it’s another model, another platform, another crop of victims.
And it's not just celebrities anymore; it's classmates, colleagues, ex-lovers and random women on the internet who posted one selfie in 2011 and now regret ever having existed online.
That's why the EU inquiry matters. And not because it's fun to watch Big Tech sweat (though, OK, that part is satisfying).
It matters because this is one of the first high-profile tests of whether governments can actually compel platforms to treat AI-driven harm as a real emergency and not just a side quest.
And if X fails this test? Expect regulators to get more aggressive across the board, because the next platform in their crosshairs may not get as many chances.