Deepfakes

And you thought “fake news” was bad

Photo via UC Berkeley

In a world where we believe whatever we want to hear, is there any reliable way to stop the spread of misinformation?

These days, just about anyone can download deepfake software and create fake videos or audio recordings that look and sound like the real thing. While there has been plenty of fear about the damage they might do in the future, so far deepfakes have mostly been limited to superimposing faces into pornography or swapping out audio to make it appear that politicians said something controversial. And because the Internet is full of fact-checkers, most deepfakes have been outed almost immediately.

What’s at stake when it comes to deepfakes?

But people are justifiably concerned about their potential to do great harm in the near future. From ruining marriages to interfering with democratic elections, these fakes can have major consequences. Part of the problem is that people seek out and believe things that confirm their worldview, so even when someone debunks a deepfake video or audio recording, some people are bound to believe it anyway.

Of course, video and audio can already be easily manipulated or taken out of context, but the application of deep learning to create hard-to-identify fakes is something we need to take seriously, especially as they become more sophisticated.

Deepfakes rely on a technique called generative adversarial networks (GANs), in which two machine learning models are pitted against each other to produce a nearly impossible-to-detect forgery. One model, the generator, creates the fake video, while the other, the discriminator, tries to detect signs that it is fake. Once the discriminator can no longer find flaws in the forgery, the video is ready to be uploaded for all sorts of nefarious purposes.
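The adversarial loop described above can be sketched in miniature. This is a hypothetical toy example, not a real deepfake system: instead of generating video, the "generator" learns a single shift parameter so its samples match real data drawn from a Gaussian, while a logistic "discriminator" tries to tell real samples from fakes. All names and parameter values here are illustrative assumptions.

```python
# Toy GAN sketch: generator vs. discriminator on 1-D data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0       # generator parameter: fake sample = noise + theta
w, b = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05
batch = 64

for step in range(5000):
    real = rng.normal(4.0, 0.5, batch)   # "real" data centered at 4
    noise = rng.normal(0.0, 0.5, batch)
    fake = noise + theta                 # generator's forgeries

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: shift theta so the discriminator scores fakes as real.
    d_fake = sigmoid(w * fake + b)
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr * grad_theta

theta_final = theta  # should drift toward the real mean (4.0)
```

At equilibrium the generator's samples are statistically indistinguishable from the real data, so the discriminator's accuracy collapses to chance; real deepfake GANs follow the same dynamic, just with deep convolutional networks and images instead of a single scalar.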

Are deepfakes free speech?

There’s still the unresolved question of whether deepfakes are illegal. Are they covered by the First Amendment? Does intellectual property law come into play when a video is manipulated?

Companies such as Facebook and Microsoft have recently launched challenges encouraging people to build deepfake-detection tools before the technology becomes easy enough for anyone to use. DARPA has been working on deepfake detection since 2016, a year before the first deepfake videos appeared on Reddit.

There are two federal bills currently under consideration in the U.S., the Malicious Deep Fake Prohibition Act and the DEEPFAKES Accountability Act. California, New York, and Texas are also working on state legislation to regulate deepfakes. But one wonders: do politicians understand the technology well enough to regulate it? Will those laws conflict with First Amendment rights? Is there even a way to regulate the technology, or should we concentrate on its weaponization instead?

One way governments are trying to stop the creation and spread of deepfakes is by regulating social media, the most common platform on which they are shared. But tech companies have already proved themselves largely immune to this type of regulation.

So what can we do about this new high-tech wave of disinformation? Regulate it? Provide more resources for media literacy to make them less of a threat? Rely on sharing platforms to detect and ban them (or hold them accountable for their dissemination)?

Further reading:

Facebook AI launches its deepfake detection challenge (IEEE Spectrum, 2019)

Congress wants to solve deepfakes by 2020. That should worry us. (Slate, 2019)

Deepfakes and fake nudes could become a big problem. News organizations don’t have to publish them. (Slate, 2019)

It is your duty to learn how to spot deepfake videos (Slate, 2019)

You thought fake news was bad? Deep fakes are where truth goes to die (The Guardian, 2018)

Deepfakes, revenge porn, and the impact on women (Forbes, 2019)

Fighting deepfakes when detection fails (Brookings, 2019)

China makes it a criminal offense to publish deepfakes or fake news without disclosure (The Verge, 2019)