What Are Deepfakes, and How Can We Prevent Their Misuse?

In today’s society, most people know better than to believe everything they read in the news. They also understand how easily images can be manipulated. Videos, on the other hand, may seem more credible.

We know about special effects in the movies, but special effects don’t come into play in genuine news footage, do they? After all, if you can see a public figure talking in a video, you can match the voice to the movements of their lips.

The point is that most people find video footage more credible. And, until a few years ago, that assumption would have been fairly safe. Clever editing and special effects could manipulate the message of a video, but you could still trust the words someone spoke.

WHAT HAPPENED IN 2017?

It was then that we first started seeing Generative Adversarial Networks (GANs) coming onto the world stage. This type of deep learning was originally developed in 2014, but it took a few years for the world to catch on.

Prior to GANs, AI could only interpret existing content rather than create it. A GAN works by pitting two neural networks against each other: a generator that fabricates images, sound, or video, and a discriminator that tries to tell those fakes from real examples. Each round of the contest makes the forgeries harder to spot, which is how it becomes possible to make anyone appear to say anything you want.
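For the technically curious, the adversarial setup can be sketched in a few lines of PyTorch. Everything here, from the layer sizes to the stand-in "real" data, is an illustrative placeholder rather than anything from an actual deepfake system:

```python
# Minimal GAN training loop sketch (PyTorch). The noise dimension, layer
# sizes, and "real" data batch are placeholders for illustration only.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 64, 784  # e.g. a flattened 28x28 image

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, DATA_DIM), nn.Tanh())
# Discriminator: scores how likely a sample is to be real (as a logit).
D = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, DATA_DIM) * 2 - 1   # stand-in for real images
    fake = G(torch.randn(32, NOISE_DIM))      # the generator's forgeries

    # 1. Train the discriminator to separate real from fake.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the updated discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks improve in lockstep: every tell the discriminator learns to spot becomes the next flaw the generator learns to hide, which is exactly why the forgeries keep getting more convincing.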

As an example, look at this public service announcement by “Barack Obama.” It looks and sounds like the former president is warning you about deepfakes. Watch closely and something seems slightly off, but you might write that off as a lag between the sound and the lip movements, at least until you see the explanation behind it.

SHOULD WE WORRY ABOUT DEEPFAKES?

The discerning eye will be able to tell that the Obama video was a fake. We can’t be complacent, however. The technology is improving to a point where these videos will be indistinguishable from the real thing.

Should we be concerned? Yes. We only have to look at the anger stirred up by the killing of George Floyd here in America to understand why. There’s no question that this video was real, but what if someone wanted to further stir up racial tensions or discredit the victim, and posted a deepfake video?

Deepfakes can also serve plain financial fraud. The tech is thought to be behind a scam in which a company CEO was conned out of $243,000: he acted on what he thought was a phone call from his boss instructing him to make the transfer, and the voice was apparently a perfect match.

There are also cybersecurity implications. AI avatars are already being used as influencers. They could point followers to sites loaded with ransomware, viruses, or other forms of malware.

WHAT CAN WE DO ABOUT DEEPFAKES?

Interestingly enough, using AI is one of the most promising solutions. The alternative would be to have teams of human researchers scan millions of videos and sites to detect the fakes. Given the sheer volume of content and the number of factors to weigh, that isn’t a workable solution.

Analyzing content

We can, however, train AI to perform the same function in a fraction of the time. Companies have already built tools that assess shadows, the interplay of light, facial movements, and a range of other signals to identify possible deepfakes.

If the AI finds a piece of content suspicious, it flags it. At this stage, human researchers make the final call. As AI becomes more advanced, though, it will become capable of making that final decision itself.
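Vendors don’t publish their exact pipelines, but the decision stage tends to look something like the sketch below, in which the frame-scoring model and the threshold are hypothetical stand-ins:

```python
# Sketch of the decision stage of a deepfake-screening pipeline. The frame
# classifier is hypothetical; real tools train deep networks on lighting,
# shadow, and facial-movement artifacts to produce such scores.
from statistics import mean
from typing import Callable, Iterable

SUSPICION_THRESHOLD = 0.7  # illustrative cutoff, tuned per deployment

def review_video(frames: Iterable, score_frame: Callable[[object], float]) -> str:
    """Score every frame, then decide whether a human needs to look."""
    scores = [score_frame(frame) for frame in frames]
    if mean(scores) >= SUSPICION_THRESHOLD:
        # Today the system only flags; a human analyst makes the final call,
        # since false positives on genuine footage are costly.
        return "flagged_for_human_review"
    return "no_action"

# Usage with a dummy scorer standing in for a trained detector:
print(review_video(range(10), lambda frame: 0.82))  # flagged_for_human_review
```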

In the future, security measures will have to become more sophisticated. Bad actors usually find ways around countermeasures within a matter of months.

Protecting content before it’s released

AI can create new content from existing content, but not from scratch. Developers are therefore working on ways to protect content before its release. They may, for example, add a subtle filter over an image so that it can’t be used to create a deepfake.
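One way this can work, roughly speaking, is adversarial "cloaking": perturb the pixels just enough to confuse the face-encoding models that deepfake tools depend on, while a human viewer sees no difference. Here is a rough sketch of that idea, assuming a hypothetical face_encoder network; nothing here is a specific product’s method:

```python
# Sketch of a protective "cloak" via a single FGSM-style step: nudge the
# image so a face-encoding network misreads the identity. `face_encoder`
# is a hypothetical stand-in for whatever model a deepfake pipeline uses;
# `epsilon` controls how strong (and how visible) the perturbation is.
import torch
import torch.nn.functional as F

def cloak(image: torch.Tensor, face_encoder, epsilon: float = 0.03) -> torch.Tensor:
    target = face_encoder(image).detach()    # the identity as the model sees it
    perturbed = image.clone().requires_grad_(True)
    similarity = F.cosine_similarity(face_encoder(perturbed), target, dim=-1).sum()
    similarity.backward()
    # Step against the gradient: the direction that most disrupts the
    # encoder's reading of the face, kept small enough to be imperceptible.
    return (image - epsilon * perturbed.grad.sign()).clamp(0.0, 1.0).detach()
```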

Yet such technological solutions are unlikely to stem the spread of deepfakes over the long term. At best, they will lead to an endless cat-and-mouse dynamic, similar to what exists in cybersecurity today, in which breakthroughs on the deepfake-detection side spur further innovation in deepfake generation. The open-source nature of AI research makes this all the more likely.

Assigning legitimate keys to content

Companies are also working on digitally signed content. This works in much the same way as a security certificate on a website: content can be validated through a third-party provider or through the company that created it.
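Under the hood this is ordinary public-key cryptography. Here is a minimal sketch using Python’s cryptography package, with a byte string standing in for a real video file:

```python
# Sign content at release time, verify it later. The video bytes and key
# handling are simplified stand-ins; in practice the public key would be
# distributed through a trusted third party, like a TLS certificate chain.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

video_bytes = b"...raw video bytes..."        # stand-in for the real file

# The publisher signs the content once, at release.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(video_bytes)

# Anyone holding the publisher's public key can check integrity later.
public_key = private_key.public_key()
try:
    public_key.verify(signature, video_bytes)  # raises if even one bit changed
    print("Content is authentic and unmodified.")
except InvalidSignature:
    print("Content was altered after signing.")
```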

That’s unlikely to protect the average consumer much. In the case of revenge porn, for example, few viewers will care if the content isn’t digitally signed.

COULD BLOCKCHAIN TECHNOLOGY PROVIDE A SOLUTION?

As of right now, no, but it’s an interesting concept. For years, developers have explored protecting intellectual property with blockchain-based ledger systems. While such a system won’t prevent the misuse and spread of fake news, it could be a useful verification tool: if the original footage is registered on a ledger when it’s published, anyone can later check whether a circulating copy matches it.
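To make the idea concrete, here is a toy, single-machine sketch of such a ledger. Each entry stores a hash of the content and chains to the previous entry, so editing or back-dating a record breaks every entry after it; the sample content and author are invented for illustration:

```python
# Toy append-only ledger of content hashes. A real blockchain distributes
# this ledger across many nodes so no single party can rewrite it; this
# single-machine version only shows the shape of the idea.
import hashlib
import json
import time

ledger = []

def register(content: bytes, author: str) -> dict:
    """Record who published which content, chained to the prior entry."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "author": author,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def is_registered(content: bytes) -> bool:
    """Verification: does this exact content appear in the ledger?"""
    digest = hashlib.sha256(content).hexdigest()
    return any(e["content_hash"] == digest for e in ledger)

register(b"original press-briefing video", "hypothetical publisher")
print(is_registered(b"original press-briefing video"))  # True
print(is_registered(b"doctored copy"))                  # False
```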

CHANGES IN LEGISLATION

While companies work on countermeasures, we’ll also have to consider legal remedies. At present, spreading deepfakes isn’t necessarily a criminal act. If you create or spread a video that damages someone’s reputation, for example, they might have a civil case against you. Whether there’d be a criminal case is a lot murkier.

Aside from that, the creators of the original content that a deepfake was based on might sue for copyright infringement.

Either way, we need to reconsider the regulations regarding this kind of content. In a world where freedom of speech is prized, limiting people’s free expression isn’t ideal. At the same time, we have to find some way to make people more accountable for the information that they publish to or share on the internet.

If we don’t, we risk diluting the effect of real tragedies in a miasma of fake news.

FINAL NOTES

Deepfakes are a fact of life now, and beating the technology will only get harder. Consumers will have to be more careful about verifying the source of content. AI detection, digital signatures, and tamper-proofing original content will all help, but it’s up to each of us to stop the spread of fakes.

Written by Kamilla Akhmedova

June 19, 2020
