
Can You Hear the Future? SquadStack’s AI Voice Just Fooled 81% of Listeners
Imagine answering a call and chatting away, only to find out minutes later that the “person” on the other end wasn’t human at all. Creepy? Impressive? Maybe a bit of both.
That’s exactly what happened at the Global Fintech Fest 2025, where SquadStack.ai made waves by claiming its voice AI had effectively passed the Turing Test – Alan Turing’s classic benchmark for whether a machine’s conversation is indistinguishable from a human’s.
The experiment was simple but daring. Over 1,500 participants took part in live, unscripted voice conversations, and 81% couldn’t tell whether they were speaking to an AI or a human.
It’s the kind of milestone that makes even skeptics sit up straight. We’ve heard about AI art and chatbots, but this? This is AI talking – literally – and doing it well enough to blur reality.
It reminds me of when OpenAI unveiled its Voice Engine, a model that could generate natural speech from just 15 seconds of audio.
Back then, the internet went wild over the implications – creative, ethical, and downright unsettling.
What SquadStack seems to have done now is push that vision further, suggesting that conversational realism isn’t just about pitch and tone, but also timing, emotion, and context.
But let’s pause for a second – because not everyone’s celebrating. Regulators are starting to tighten the rules.
In Europe, policymakers are already pushing for stricter identity disclosure for AI-generated voices, echoing growing fears of deepfake scams and digital impersonation.
Denmark, for instance, is drafting a law against AI-driven voice deepfakes, citing cases where cloned voices were used for fraud and misinformation.
Meanwhile, the business world is cheering. Companies like SoundHound AI are reporting rapid revenue growth, showing that voice generation isn’t just cool tech – it’s good business.
If consumers can’t tell an AI from a real person, then call centers, virtual assistants, and digital sales agents might soon sound indistinguishable from their human colleagues. That’s efficiency in stereo.
There’s also a fascinating parallel here with Subtle Computing’s work on AI voice isolation – they’re teaching machines to pick out speech in chaotic environments.
It’s almost poetic, really: one startup making AI listen better, another making it speak better.
When those two threads meet, we’ll have AI that can hear us perfectly, talk back naturally, and maybe even argue convincingly.
Of course, that raises the big question: how much of this do we actually want? As someone who still enjoys small talk with the barista and phone calls with real people, I find the idea both thrilling and unnerving.
The technology is dazzling, no doubt. But part of me misses the stumbles, the awkward pauses, the little imperfections that make human voices feel alive.
Still, it’s hard not to be awed. Whether you see it as a step toward a seamless digital world or a warning sign of things to come, one thing’s undeniable – the voices of tomorrow are already speaking. And if you can’t tell who’s talking… well, maybe that’s the whole point.