This Video May Not Be Real

What should we really be worried about when it comes to “deepfakes”? An expert in online manipulation explains.


transcript

This Video May Not Be Real

What should we really be worried about when it comes to “deepfakes”? An expert in online manipulation explains.

Hello. Today I’m going to be talking to you about a new technology that’s affecting famous people. Remember when Obama called Trump a dipshit? “Complete dipshit.” Or the time Kim Kardashian rapped, “Because I’m always half naked”? Or when Arnold Schwarzenegger impersonated himself? “Get out of there! There’s a bomb in there! Get out!” Deepfake. Deepfake. Deepfake. This is a deepfake, too. I’m not Adele. But I am an expert in online manipulation. So “deepfakes” is a term used to describe video or audio files that have been created using artificial intelligence. My favorite is probably Lisa Vanderpump. It started as a very basic face-swapping technology, and now it’s turned into film-level C.G.I. There’s been this huge explosion of, “Oh my goodness, we can’t trust anything.” Yes, deepfakes are eerily dystopian. And they’re only going to get more realistic and cheaper to make. But the panic around them is overblown. In fact, the alarmist hype is possibly more dangerous than the technology itself. Let me break this down.

First, what everyone is freaking out about is actually not new. It’s a much older phenomenon that I like to call the weaponization of context, or “shallowfakes”: misleading content made with Photoshop and video editing software. There are so many of these. How about the time Nancy Pelosi appeared to be drunk while giving a speech? “But you never know. But this president of the United States.” Turns out that video was just slowed down to 75 percent speed. “It was very, very, very, very, very strange.” You can have a really simplistic piece of misleading content that can do huge damage. For example, in the lead-up to the midterms, we saw lots of imagery around this caravan of people who were moving toward the U.S. This photo was shared with captions demonizing the so-called migrant caravan at the U.S.-Mexico border in 2018. But a reverse image search showed it was actually Pakistani refugees in Greece. You don’t need deepfakes’ A.I. technology to manipulate emotions or to spread misinformation.
This brings me to my second point: What we should really be worried about is the liar’s dividend — the lies and actions people will get away with by exploiting widespread skepticism to their own advantage. So, remember the “Access Hollywood” tape that emerged a few weeks before the 2016 election? “When you’re a star, they let you do it. You can do anything.” “Whatever you want.” Around that time, Trump apologized, but more recently he’s actually said, “I’m not sure if I actually said that.” When anything can be fake, it becomes much easier for the guilty to dismiss the truth as fake. What really keeps me awake at night is less the technology. It’s how we as a society respond to the idea that we can’t trust what we see or what we hear. So if we are fearmongering, if we are hyperbolic, if we are waving our hands in the air, that itself can be part of the problem. You can see where this road leads. As public trust in institutions — like the media, education and elections — dwindles, democracy itself becomes unsustainable. The way that we respond to this serious issue is critical. Partly this is the platforms thinking very seriously about what they do with this type of content, and how they label it. Partly it’s the public recognizing their own responsibility. And if you don’t know 100 percent, hand on heart, “This is true,” please don’t share, because it’s not worth the risk.

Credit...The New York Times

Leah Varjacques and

In the video Op-Ed above, Claire Wardle responds to growing alarm around “deepfakes” — seemingly realistic videos generated by artificial intelligence. First seen on Reddit in pornographic videos doctored to feature the faces of female celebrities, deepfakes were made popular in 2018 by a fake public service announcement featuring former President Barack Obama. Words and faces can now be almost seamlessly superimposed. The result: We can no longer trust our eyes.

In June, the House Intelligence Committee convened a hearing on the threat deepfakes pose to national security. And platforms like Facebook, YouTube and Twitter are contemplating whether, and how, to address this new disinformation format. It’s a conversation gaining urgency in the lead-up to the 2020 election.

Yet deepfakes are no more scary than their predecessors, “shallowfakes,” which use far more accessible editing tools to slow down, speed up, omit or otherwise manipulate context. The real danger of fakes — deep or shallow — is that their very existence creates a world in which almost everything can be dismissed as false.

Claire Wardle (@cward1e) is executive director of First Draft, a nonprofit focused on research and practice to address misinformation and disinformation.
