Deepfakes: Navigating the New Era of Synthetic Media

Remember “fake news”? The term has been used (and abused) so extensively at this point that it can be hard to remember what it initially referred to. But the concept has a very specific origin. Ten years ago, journalists began sounding the alarm about an influx of purported “news” sites peddling false, often outlandish claims about politicians and celebrities. Many readers could instantly tell these sites were illegitimate.

But many more lacked the critical tools to recognize this. The result was the first stirrings of an epistemological crisis that now threatens to engulf the internet—one that has reached its most frightening manifestation with the rise of deepfakes.

Next to even a passable deepfake, the “fake news” websites of yore seem tame. Worse yet, even those who believe themselves to possess relatively high levels of media literacy are at risk of being fooled. Synthetic media created with deep learning algorithms and generative AI have the potential to wreak havoc on the foundations of our society. According to Deloitte, this year alone such media could cost businesses more than $250 million through phony transactions and other varieties of fraud. Meanwhile, the World Economic Forum has called deepfakes “one of the most worrying uses of AI,” pointing to the potential of “agenda-driven, real-time AI chatbots and avatars” to facilitate new strains of ultra-personalized (and ultra-effective) manipulation.

The WEF’s suggested response to this problem is a sensible one: it advocates a “zero-trust mindset,” one that brings a degree of skepticism to every encounter with digital media. If we want to distinguish the authentic from the synthetic moving forward—especially in immersive online environments—such a mindset will be increasingly essential.

Two approaches to combating the deepfake crisis

Combating rampant disinformation bred by synthetic media will require, in my opinion, two distinct approaches.

The first involves verification: giving everyday internet users a simple way to determine whether the video they’re looking at is indeed authentic. Such tools are already widespread in industries like insurance, where bad actors can file false claims abetted by doctored videos, photographs and documents. Democratizing these tools—making them free and easy to access—is a crucial first step in this fight, and we are already seeing significant movement on this front.

The second approach is less technological in nature, and thus more of a challenge: raising awareness and fostering critical thinking skills. In the aftermath of the original “fake news” scandal in 2015, nonprofits across the country drew up media literacy programs and worked to spread best practices, often partnering with local civic institutions to empower everyday citizens to spot falsehoods. Of course, old-school “fake news” is child’s play next to the most advanced deepfakes, which is why we need to redouble our efforts on this front and invest in education at every level.

Advanced deepfakes require advanced critical thinking

Of course, these educational initiatives were somewhat easier to undertake when the disinformation in question was text-based. With fake news sites, the telltale signs of fraudulence were often obvious: janky web design, rampant typos, bizarre sourcing. With deepfakes, the signs are much more subtle—and quite often impossible to notice at first glance.

Accordingly, internet users of every age need to retrain themselves to scrutinize digital video for deepfake indicators. That means paying close attention to a number of factors. For video, the tells can include unnaturally blurry areas and shadows; stiff or unnatural facial movements and expressions; too-perfect skin tones; inconsistent patterns in clothing and movement; and lip-sync errors. For audio, they can include voices that sound too pristine (or obviously digitized), a flat or unconvincing emotional tone, odd speech patterns, and unusual phrasing.
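For readers who prefer something more systematic than a mental checklist, the same tells can be written down as a simple scoring aid. The sketch below is purely illustrative: the indicator names and weights are my own assumptions rather than a detection algorithm, and a high score means only “verify before sharing,” not “fake.”

```python
# A purely illustrative scoring aid based on the manual checklist above.
# The indicator names and weights are assumptions, not a detection
# algorithm; a high score means "verify before sharing," not "fake."

VIDEO_INDICATORS = {
    "blurry_areas_or_shadows": 1,
    "unnatural_facial_movement": 2,
    "too_perfect_skin_tone": 1,
    "inconsistent_clothing_or_movement": 1,
    "lip_sync_errors": 2,
}

AUDIO_INDICATORS = {
    "too_pristine_or_digitized_voice": 2,
    "flat_emotional_tone": 1,
    "odd_speech_patterns": 1,
    "unusual_phrasing": 1,
}


def suspicion_score(observed):
    """Sum the weights of every indicator the viewer says they noticed."""
    all_indicators = {**VIDEO_INDICATORS, **AUDIO_INDICATORS}
    return sum(weight for name, weight in all_indicators.items() if name in observed)


if __name__ == "__main__":
    # Example: a viewer flags two tells while watching a clip.
    flagged = {"lip_sync_errors", "flat_emotional_tone"}
    print(f"Suspicion score: {suspicion_score(flagged)}")
```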

In the short term, this kind of self-training can be highly useful. By repeatedly asking ourselves, Does this look suspicious?, we sharpen not merely our ability to detect deepfakes but our critical thinking skills in general. That said, we are rapidly approaching a point at which not even the best-trained eye will be able to separate fact from fiction without outside assistance. The visual tells—the irregularities mentioned above—will be technologically smoothed over, such that wholly manufactured clips become indistinguishable from the genuine article. What we will be left with is our situational intuition—our ability to ask questions like Would such-and-such a politician or celebrity really say that? Is the content of this video plausible?

It is in this context that AI-detection platforms become so essential. With the naked eye no longer a reliable judge, these platforms can serve as definitive arbiters of reality—guardrails against the epistemological abyss. When a video looks real but somehow seems suspicious—as will happen more and more often in the coming months and years—these platforms can keep us grounded in the facts by confirming the baseline veracity of whatever we’re looking at. Ultimately, with technology this powerful, the only thing that can save us is AI itself. We need to fight fire with fire—which means using good AI to root out the technology’s worst abuses.
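To make that workflow concrete, here is a hedged sketch of what querying such a platform might look like in practice. The endpoint, request fields, and response format are hypothetical placeholders of my own invention; any real detection service defines its own API and authentication scheme.

```python
# A hedged sketch of a verification workflow that calls a detection platform.
# The endpoint URL, request fields, and response schema are hypothetical
# placeholders, not any real provider's API.

import json
import urllib.request

DETECTION_ENDPOINT = "https://detector.example.com/v1/analyze"  # hypothetical URL


def check_video(video_url, api_key):
    """Submit a video URL to a (hypothetical) deepfake-detection service."""
    payload = json.dumps({"video_url": video_url}).encode("utf-8")
    request = urllib.request.Request(
        DETECTION_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


# Usage (with a real provider, the response might include a confidence score
# and a verdict such as "likely synthetic" or "likely authentic"):
#   result = check_video("https://example.com/clip.mp4", api_key="...")
#   print(result)
```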

Really, the acquisition of these skills in no way needs to be a cynical or negative process. Fostering a zero-trust mindset can instead be thought of as an opportunity to sharpen your critical thinking, intuition, and awareness. By returning, again and again, to a few key questions—Does this make sense? Is this suspicious?—you heighten your ability to confront not merely fake media but the world writ large. If there’s a silver lining to the deepfake era, this is it. We are being forced to think for ourselves and to become more empirical in our day-to-day lives—and that can only be a good thing.