Get Ready for Deepfakes

The Tom Cruise TikTok deepfakes last spring didn’t spur me to write about deepfakes, not even when Justin Bieber fell so hard for them that he challenged the deepfake to a fight. When 60 Minutes covered the topic last night, though, I figured I’d best get to it before I missed this particular wave.

We’re already living in an era of unprecedented misinformation/disinformation, as we’ve seen repeatedly with COVID-19 (e.g., hydroxychloroquine, ivermectin, anti-vaxxers), but deepfakes should alert us that we haven’t seen anything yet.

The trick behind deepfakes is a type of deep learning called a “generative adversarial network” (GAN): a generator network creates media (e.g., audio or video) while a discriminator network tries to tell the fakes from real examples. The two networks compete, and the more they iterate, the more realistic the generator’s output gets. They can be trying to replicate a real person, or to create entirely fictitious people.
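For readers who want to see the adversarial idea in miniature, here is a toy sketch under simplified assumptions: the “real” data are just numbers near 4.0, the generator is a single learned offset, and the discriminator is a tiny logistic model. Real deepfake systems use deep networks over audio or video, but the competitive training loop is the same shape.

```python
import math
import random

# Toy GAN sketch (illustrative assumptions, not a real deepfake model):
# "real" data are numbers drawn near 4.0, the generator is one learned
# parameter mu, and the discriminator is a one-weight logistic model.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, b = 0.0, 0.0   # discriminator parameters
mu = 0.0          # generator parameter (starts far from the real data)
lr = 0.05

for step in range(3000):
    x_real = random.gauss(4.0, 0.5)        # a genuine sample
    x_fake = mu + random.gauss(0.0, 0.5)   # a generated ("fake") sample

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: adjust mu so the discriminator scores fakes as
    # more "real" (the non-saturating generator objective).
    d_fake = sigmoid(w * x_fake + b)
    mu += lr * (1 - d_fake) * w

print(round(mu, 2))  # mu should drift toward the real mean of 4.0
```

As the generator’s fakes get harder to distinguish from the real samples, the discriminator’s advantage shrinks; that feedback loop is exactly why iterating makes the output more realistic.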

Audio deepfake technology is already widely available, and already fairly good. The software takes a sample of someone’s voice and “learns” how that person speaks. Type in a sentence, and the software generates audio that sounds like the real person.

Credit: Deepfake Challenge

The technology has already been used to trick an executive into sending money to an illicit bank account, by deepfaking his boss’s voice. “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” a company spokesperson said.

One has to assume that Siri or Alexa would fall for such deepfaked voices as well.

Audio deepfakes are scary enough, but video takes it to another level. As the saying goes, seeing is believing. As one cybercrime expert put it: “Imagine a video call with [a CEO’s] voice, the facial expressions you’re familiar with. Then you wouldn’t have any doubts at all.”

As is often the case, the porn industry is an early adopter of the new technology. Last month, a report described a site that allows someone to upload a picture of a face and see that face morphed into an adult video. The impacts on innocent victims are horrifying.

That particular site (which is now reportedly no longer available) was not the first porn site to use the technology, probably didn’t have the most realistic deepfakes, and won’t be the last. Sadly, though, deepfake porn is far from the biggest problem we’re likely to have with the technology.

We’re going to see mainstream actors in movies, old and new, that they never filmed. We’re going to see deepfaked business executives saying all sorts of ridiculous things (Mark Zuckerberg may already be a deepfake). We’re going to see politicians saying things that make their opponents look good.

Martin Ford is among those who have warned about where this leads.

Hany Farid, a UC Berkeley professor, says: “Now you have the perfect storm. I can create this content easily, inexpensively and quickly, I can deliver it en masse to the world, and I have a very willing and eager public that will amplify that for me.”

Nina Schick Credit: 60 Minutes

Similarly, technology consultant Nina Schick, who has written a book on deepfakes, told 60 Minutes: “The fact that AI can now be used to make images and video that are fake, that look hyper-realistic. I thought, well, from a disinformation perspective, this is a game-changer.”

Imagine what the COVID misinformation crew could do with a deepfake Dr. Fauci.

He has been, in many ways, the face of modern medicine and science during the pandemic. There are countless hours of video/audio of him over the last eighteen months. He’s usually been right, sometimes been wrong, but has done his best to follow the science. COVID-19 skeptics/deniers constantly parse his words looking for inconsistencies, for times when he was wrong, for any opportunity to challenge his expertise.

With deepfakes, we could have him telling people not to bother with masks or even vaccines. His deepfake could tout unproven and even unsafe remedies, and denounce the FDA, the CDC, even President Biden. Heck, they could have President Biden attacking Dr. Fauci and praising Donald Trump (conversely, of course, a deepfake Trump could urge vaccine mandates).

We struggle now to find the best health information, about COVID and anything else that worries us about our health. We look for credible sources, we look for reputable people’s opinions, and we use that information to make our health decisions. But, as Ms. Schick said on 60 Minutes, deepfakes are “going to require all of us to figure out how to maneuver in a world where seeing is not always believing.”

That will not be easy.

We’re just starting to realize how deepfakes may impact healthcare. In a recent article, Chen et al. warned about the risks of synthetic data in medicine.

The authors believe that there is a role for synthetic data in healthcare, but say: “it is urgent to develop and refine regulatory frameworks involving synthetic data and the monitoring of their impact in society.”

So it is generally. The technology for detecting deepfakes is improving but, of course, so is the technology for creating them. It’s an arms race, like everything in cybersecurity. As Ms. Schick pointed out on 60 Minutes, “The technology itself is neutral.” How it is used is not.

She also believes, though: “It is without a doubt one of the most important revolutions in the future of human communication and perception. I would say it’s analogous to the birth of the internet.”

I’m not sure I’d go that far.

Doctored audio/video have been with us for pretty much all of the time we’ve had audio/video; deepfake technology just takes it to a new, and more convincing, level. We still haven’t figured out how to use the internet responsibly, and, if they do nothing more, deepfakes remind us that we’d better do so soon.



Curious about many things, some of which I write about — usually health care, innovation, technology, or public policy. Never stop asking “why” or “why not”!

Kim Bellard