
“Draw me a smiley face”: Old-school tests are tripping up new-school deepfakes
The Wall Street Journal reports that companies are quietly beating AI impostors with delightfully low-tech moves: ask the caller to draw a smiley face and hold it to the camera, nudge them to pan the webcam, throw in a curveball question only a real colleague would know, or hang up and call back on a known number. Simple, a bit cheeky, and—right now—surprisingly effective.
Here’s the thing I keep hearing from CISOs when I ask, “What actually works on a Tuesday afternoon?”
They say the combo move matters: blend basic human challenges with policy-mandated call-backs, and only then lean on detection software.
That’s not shade on the tools; it’s an admission that social engineering—not just silicon—is carrying these scams.
And yes, the numbers are grim: deepfake fraud losses topped $200 million in Q1 2025 alone, which helps explain why even very traditional firms are piloting call-back and passphrase protocols.
If you want a government-grade cross-check, NIST’s recent guidance on face-photo morph detection offers a timely reminder: detection algorithms can be misled, but provenance and layered checks can close the gap. It’s not about one magic detector; it’s about workflow.
Zoom out for a minute—because something else shifted this week. Google is baking C2PA Content Credentials into Pixel 10’s camera and Google Photos so images can carry cryptographic “nutrition labels” about how they were made.
That’s provenance, not detection, but it changes the default: prove it, or be treated as unverified.
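If you want to wire that default into an intake flow, the shape is simple even before you pick a C2PA SDK. Below is a minimal Python sketch with two loud assumptions: the byte-level probe only checks whether a JPEG carries the C2PA marker bytes (an APP11 segment plus the “c2pa” JUMBF label), which proves presence, not validity; and classify_inbound_image is an illustrative policy function I made up, not any vendor’s API. Real verification means validating the manifest’s signatures with proper C2PA tooling.

```python
# Minimal sketch of the "prove it, or be treated as unverified" default.
# ASSUMPTION: this only probes for C2PA marker bytes in a JPEG; it does
# NOT validate signatures. Swap in a real C2PA reader for production.

def has_c2pa_marker(path: str) -> bool:
    """Crude presence probe: C2PA manifests in JPEG ride in APP11 (FF EB)
    JUMBF boxes labeled "c2pa". Presence is not proof of validity."""
    with open(path, "rb") as f:
        data = f.read()
    return b"\xff\xeb" in data and b"c2pa" in data

def classify_inbound_image(path: str) -> str:
    """Illustrative intake policy (hypothetical, not a standard API):
    no credentials means unverified, which routes to the human checks,
    not an automatic 'fake' verdict."""
    if has_c2pa_marker(path):
        return "CREDENTIALED: validate signatures, then review edit history"
    return "UNVERIFIED: route to call-back / challenge protocol"
```

The design choice worth copying is the last branch: absence of provenance downgrades trust and triggers a procedure; it doesn’t render a verdict.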
You might ask: “Cute doodles aside, does any of this stop the headline-grabbing heists?” Sometimes—especially when people remember to slow down.
Law enforcement, for its part, is getting faster at clawbacks: earlier this year Italian police froze nearly €1 million from an AI-voice scam that impersonated a cabinet minister to shake down business leaders. It wasn’t perfect justice, but it was concrete.
Let me be candid: I used to roll my eyes at “analog defenses” because they felt… flimsy.
Then I watched a real incident review where a finance manager defused a suspicious video call by asking the “CFO” to angle the webcam toward the whiteboard.
The lag, the artifacts, the awkward silence—it was a tell. That exact tactic shows up in expert playbooks too: change the lighting, move the camera, hold up today’s newspaper (yes, that old chestnut still works). The point is to yank the attacker off their pre-rendered rails.
There’s policy momentum, not just street smarts. Provenance schemes like C2PA will only matter if platforms display and respect them, and if organizations wire provenance checks into intake flows.
YouTube’s early steps to label camera-captured, unaltered clips via Content Credentials hint at where this could go if more ecosystems play along.
Does this mean detectors are dead? Not at all. They’re just moving backstage while procedures move front-of-house.
The pragmatic read from standards bodies is: combine authentication (who/what created this), verification (did it change, and how), and evaluation (does this make sense in context?). It sounds fussy until you remember the stakes.
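Here’s roughly what that fussiness looks like when you squint at it as code. This is a minimal sketch under stated assumptions: the names (IntakeSignal, triage) and the flag strings are mine, and no standards body publishes this function. It just shows the layering: context problems outrank technical signals, and missing provenance triggers the human protocol rather than a rejection.

```python
# Hypothetical triage combining the three layers named above:
# authenticate (signed provenance?), verify (changed after capture?),
# evaluate (does the request make sense in context?).
from dataclasses import dataclass

@dataclass
class IntakeSignal:
    has_valid_credentials: bool   # authentication: provenance present and valid
    altered_after_capture: bool   # verification: recorded or detected edits
    context_flags: list[str]      # evaluation: e.g., "urgent wire", "new payee"

def triage(sig: IntakeSignal) -> str:
    if sig.context_flags:
        # Context problems trump technical signals: force the human protocol.
        return "ESCALATE: call back on a saved number before acting"
    if not sig.has_valid_credentials:
        return "UNVERIFIED: apply human challenge (camera pan, passphrase)"
    if sig.altered_after_capture:
        return "EDITED: review the recorded edit history"
    return "PASS: provenance intact, context clean"

# A signed but urgent wire request still triggers the call-back rule.
print(triage(IntakeSignal(True, False, ["urgent wire", "new payee"])))
```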
One more question I keep getting from readers: “Is this overkill for small teams?”
Honestly, no. Pick two moves you can train in an hour—verbal passphrases that change weekly and a hard rule to call back on a saved number for any money request.
Tape a reminder next to the monitor. It’s not glamorous, but neither is wiring out HK$200 million on a fake Zoom. The lesson is human: when they expect you to zig, you zag—on purpose, together, every time.
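And the weekly passphrase doesn’t even need a spreadsheet. Here’s a minimal sketch that derives the week’s phrase TOTP-style from a shared secret and the ISO week number; the secret, the wordlist, and the three-word length are my assumptions, and any shared-secret scheme your team will actually remember works just as well.

```python
# One way to get "passphrases that change weekly": derive the phrase
# from a shared secret and the current ISO week, HMAC/TOTP-style.
# ASSUMPTION: secret, wordlist, and phrase length are illustrative.
import hmac, hashlib, datetime

WORDS = ["ember", "falcon", "granite", "harbor", "juniper",
         "lantern", "meadow", "nickel", "orchid", "pepper"]

def weekly_passphrase(shared_secret: bytes, n_words: int = 3) -> str:
    year, week, _ = datetime.date.today().isocalendar()
    msg = f"{year}-W{week:02d}".encode()  # changes every ISO week
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    return "-".join(WORDS[b % len(WORDS)] for b in digest[:n_words])

print(weekly_passphrase(b"rotate-me-offline"))  # e.g., "harbor-ember-nickel"
```

Both sides compute the phrase independently, so nothing sensitive travels over the channel an attacker might already control.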
Bottom line: the smiley-face test isn’t a punchline—it’s a pattern interrupt. Pair it with call-backs, provenance checks, and a culture that rewards “slow down” over “rush it,” and you’ve got a fighting chance against fakes that look and sound uncomfortably real.