
Echoes of Trust: Late Dr. Michael Mosley's Likeness Used in AI Deepfake Health Scams
Trust can evaporate in an instant when technology turns mischievous. That's the latest twist in the wild world of AI, where scammers are using deepfake videos of the late Dr. Michael Mosley, once a trusted face in health broadcasting, to hawk supplements like ashwagandha and beetroot gummies.
These clips circulate on social media, showing a synthetic Mosley earnestly delivering bogus claims about menopause, inflammation, and other health fads, none of which he ever endorsed.
When Familiar Faces Sell Fiction
Scrolling through Instagram or TikTok, you might stumble across a video and think, “Wait—is that Mosley?” And you’d be right… sort of. These AI creations stitch together clips from well-known podcasts and appearances to mimic his tone, expressions, and even his hesitations.
It’s eerily convincing until you pause to think: hold on—he passed away last year.
A researcher from the Turing Institute warned that these advances are happening so fast it will soon be nearly impossible to distinguish real content from fake by sight alone.
The Fallout: Health Misinformation in Overdrive
Here’s where things get sticky. These deepfake videos aren’t harmless illusions. They push unverified claims—like beetroot gummies curing aneurysms, or moringa balancing hormones—that stray dangerously from reality.
A dietitian warned that such sensational content seriously undercuts public understanding of nutrition. Supplements are no shortcut, and exaggerations like these breed confusion, not wellness.

The MHRA, the UK's medicines regulator, is looking into these claims, while public health experts continue urging people to rely on credible sources, such as the NHS and their GP, rather than slick AI promotions.
Platforms in the Hot Seat
Social media platforms have found themselves in the crosshairs. Despite policies against deceptive content, experts say tech giants like Meta struggle to keep up with the sheer volume and virality of these deepfakes.
Under the UK’s Online Safety Act, platforms are now legally required to tackle illegal content, including fraud and impersonation. Ofcom is keeping an eye on enforcement, but so far, the bad content often reappears as fast as it’s taken down.
Real Doctors, Fake Videos: A Worrying Trend
This isn’t an isolated hiccup; it’s part of a growing pattern. A recent CBS News report found dozens of deepfake videos worldwide impersonating real doctors and dispensing medical advice, collectively reaching millions of viewers.
In one case, a physician discovered a deepfake of himself promoting a product he had never endorsed, and the resemblance was chilling. Viewers were fooled; comments rolled in praising the doctor, all based on a fabrication.
My Take: When Technology Misleads
What hits me hardest about this isn’t just that tech can imitate reality—it’s that people believe it. We’ve built our trust on experts, voices that sound calm and knowledgeable. When that trust is weaponized, it chips away at the very foundation of science communication.
The real fight here isn’t just detecting AI—it’s rebuilding trust. Platforms need more robust checks, clear labels, and maybe—just maybe—a reality check from users before hitting “Share.”