
Pangram Wants to Sniff Out AI Text — But Can It Really Catch the Bots?
A new player has entered the AI detection game, and it’s making waves. Pangram, built by former Tesla and Google engineers, bills itself as the “black light” for spotting AI-written text.
In tests, it clocked over 99% accuracy across multiple languages, including English, Spanish, and Arabic. But here’s the thing: accuracy on paper doesn’t always hold up in the real world.
What makes Pangram stand out is its claim to detect hybrid content—that tricky mix of human and AI writing.
Teachers, journalists, and even lawyers have been pulling their hair out trying to tell the difference, and Pangram’s developers are promising a fix.
Still, critics say detectors can unfairly flag human writing that happens to sound “too formulaic,” a problem that has already fueled complaints in the education sector.
The bigger story? Platforms like YouTube are also rolling out likeness detection tools to help creators protect their face and voice from deepfakes.
Just last week, YouTube announced it would extend this feature across its Partner Program, letting creators report unauthorized AI-generated lookalikes. This suggests detection isn’t just about essays or blog posts—it’s about safeguarding identity itself.
Meanwhile, lawsuits are piling up. A group of YouTube creators has already accused OpenAI of transcribing their videos without permission to train its models.
That tells you where this is heading: detection isn’t just tech; it’s becoming a legal shield in battles over ownership, consent, and revenue.
From my point of view, Pangram’s pitch is bold. If it works as advertised, it could restore a bit of trust in what’s “real” online. But trust is a fragile thing.
I’ve seen enough false alarms from earlier detectors to stay skeptical. Maybe the tech is maturing, or maybe it’s just another shiny gadget that looks good until it stumbles in the wild.
Either way, one thing’s clear: in this cat-and-mouse game, the stakes aren’t just academic—they’re cultural, financial, and personal.