A Deep Dive on Deepfakes

The term ‘deepfake’ has only been around a couple of years, but there is a lot of interest—good and bad—in its potential.

What’s a deepfake? It’s a fake image or video produced using artificial intelligence. Feed enough data into an algorithm, and it will learn to create eerily realistic fabrications of whatever it was trained on.
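Under the hood, many face-swap deepfakes rely on a pair of autoencoders with a shared encoder: the encoder learns a pose-and-expression code common to both identities, and each person gets their own decoder. The sketch below is a toy illustration of that architecture using random, untrained weights—every name and dimension here is invented for the example, so this is a shape-level sketch, not a working deepfake:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" is a flattened 16x16 grayscale patch.
FACE_DIM, LATENT_DIM = 256, 32

# Untrained stand-ins for networks that would be learned from data.
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.05    # shared encoder
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.05  # decoder for person A
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.05  # decoder for person B

def encode(face):
    """Compress a face into a pose/expression code shared across identities."""
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    """Render a face for one identity from the shared code."""
    return np.tanh(W_dec @ code)

# The swap: encode person A's face, then decode with person B's decoder,
# producing B's identity wearing A's expression.
face_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (256,)
```

In a real system, both autoencoders are first trained to reconstruct thousands of frames of their own subject; only after that does decoding A's code with B's decoder produce a convincing swap.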

You might have seen some instances of this—Barack Obama saying something he’d never say. An app that gives you a glimpse of what you’ll look like when you get old. Or one person’s face superimposed onto someone else’s body.

These are just a few of the many ways this technology has already been applied, but they raise a number of questions about the future.

How might deepfakes affect society? Will uncertainty and doubt swallow all internet media? Or will we somehow grapple with fake content and establish some semblance of truth and trust? Will we at least get a good laugh out of it?


In 2016, one year before the term deepfake would first be used, FaceApp was released.

The app has a variety of options for altering your selfie, such as adding glasses, applying makeup, or changing the colour of your hair.

What caught people’s attention, however, was its ability to create realistic-looking aged versions of people.

Also in 2016, researchers from Stanford released Face2Face.

The software manipulates a target’s facial expressions to mimic those of someone else—the source. As the source, you can sit in front of your camera and yawn, laugh, or poke your tongue out, and watch as a character of your choosing moves in unison.

In 2017, researchers from the University of Washington created a lip-syncing algorithm.

While Face2Face alters expressions to mimic yours, this program takes recorded speech and modifies a target face to look as if it is speaking those words.

This technology is not all about manipulating existing faces.

This website will show you images of people who don’t exist, while this one pairs a fake person with a real one, challenging you to determine which is which.

Using a Deepfake for Good

Now that we’ve seen a few examples of this technology, let’s explore how it might be used, starting with the positive side of things.


Impersonations are funny. We’ve all done them. More specifically, we’ve all attempted an Arnold Schwarzenegger voice—“I’ll be back.” “Hasta la vista, baby.” Classic.

One of the best people at impressions is Bill Hader, and, not surprisingly, his Schwarzenegger is very good. It’s funny without any manipulations. But I hope you’ll agree that it’s even funnier when Hader becomes Arnold in more ways than one.

The creator behind this deepfake goes by Ctrl Shift Face. His YouTube channel is filled with comedic mashups, like Steve Buscemi as Jennifer Lawrence, Jim Carrey in The Shining, Sylvester Stallone as the Terminator, and Bruce Lee in The Matrix.

The comedian Kyle Dunnigan prefers to use his own face as the canvas on which to paste celebrities, and has filled his Instagram profile with mock portrayals of famous figures like Jeff Goldblum, Donald Trump, and Caitlyn Jenner.

While most of them are clearly fake, the idea isn’t to be real, it’s to be hilarious. But then there’s a clip of Dunnigan portraying Elon Musk, which is much more realistic (except for what he talks about):

For this clip, Dunnigan used the help of Dr. Fakenstein, who has a wealth of content on his account—such as this one of Nick Offerman in Full House:


Creative uses of deepfakes will go beyond making us laugh; they’ll also be used to make us think, or simply to make things aesthetically pleasing. One day AI might make art without the help of us humans.

…and it might reincarnate those artists who have already given us so much.

One group of programmers have taken images of the Mona Lisa, Marilyn Monroe, and Salvador Dali, and brought them to life. (They appear at the 4:17 mark in the video below)

While this is similar to the Face2Face program mentioned earlier, the significance here is that only a single image was required. Most other deepfakes need a lot of footage and angles of the subject for the result to be of decent quality.

While Dalí appears near the end of that video, that’s not his only digital revival.

At the Dalí Museum in Florida, you can find a big screen with a life-size reincarnation of the surrealist painter. He’ll move and talk just like the real Dalí, and will even take a selfie with you.

In the future, algorithms might create new artworks in the styles of these famous artists. Already there are apps that convert your photos into different types of art, but this is only the beginning.

With varying levels of success, there have been programs designed to mimic the music of Bach, computer-generated artworks that have fooled judges into thinking they were made by people, and even an attempt at getting AI to write a chapter of a new Harry Potter book.

The grave is not the end. We might soon see a world where our favourite artists live in digital form, interacting with us, and creating new works.

And then, of course, there are modern art projects that make use of deepfakes, such as Spectre.

For this project, Bill Posters and Daniel Howe put words in the mouths of Mark Zuckerberg, Kim Kardashian, and others, about the power and misuse of people’s data—to quote fake Zuckerberg, “whoever controls the data controls the future.”

The project premiered at Site Gallery in Sheffield, with the hope of showing us “how our behaviours are predicted, and influenced, both online and in the voting booth.”

Games and Movies

Then there is the entertainment business. What might directors and developers get out of deepfakes?

For one thing, instead of arduously creating characters or landscapes from scratch, we can let AI work its magic. All the designers will need to do is set the parameters and fine-tune the results.

An initial glimpse of this capability comes from Nvidia, who showcased a tool to draw and paint landscapes with the help of AI.

Using their website, you select brushes which correspond to different features, such as water, rocks, trees, and clouds. The algorithm will then convert your blotchy image into something a little more detailed.
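The input to a tool like this is essentially a semantic label map: an image where each pixel holds a class id (‘sky’, ‘water’, and so on) rather than a colour, which the generative model then turns into realistic texture. Here is a minimal sketch of that input format, with a trivial colour lookup standing in for the actual model—the labels and palette are invented for illustration:

```python
import numpy as np

# Semantic classes a user might "paint" with (hypothetical labels).
SKY, WATER, ROCK, TREE = 0, 1, 2, 3
PALETTE = {SKY: (135, 206, 235), WATER: (30, 90, 180),
           ROCK: (120, 110, 100), TREE: (34, 120, 60)}

# A coarse 4x6 "blotchy" label map, like the user's brush strokes.
label_map = np.array([
    [SKY,   SKY,   SKY,   SKY,   SKY,   SKY],
    [SKY,   SKY,   TREE,  TREE,  SKY,   SKY],
    [ROCK,  TREE,  TREE,  TREE,  ROCK,  ROCK],
    [WATER, WATER, WATER, WATER, WATER, WATER],
])

def render(labels):
    """Stand-in for the generator: map each class id to a flat RGB colour.
    The real model synthesises detailed imagery conditioned on the map."""
    h, w = labels.shape
    img = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, colour in PALETTE.items():
        img[labels == cls] = colour
    return img

img = render(label_map)
print(img.shape)  # (4, 6, 3)
```

The real tool replaces that flat colour lookup with a neural network trained on photographs, so a stripe of ‘water’ pixels comes back as rippling reflections rather than a solid blue block.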

My creations didn’t turn out that great, but I doubt it will be long before digital realms can be created in real time. Imagine an open-world game that you could never reach the end of.

My digital landscape

For those in the movie business, this tech should make dubbing films in different languages much easier.

No more subtitles or sounds that don’t match the mouth movements. You can have an algorithm convert the dialogue to another language, and alter the facial movements to match.

A bad performance or a change to the script post-recording can be edited without stepping back in front of the camera.

In fact, it will be possible to get by without the actors at all.

In 2016, the Star Wars film Rogue One caught attention for digitally recreating the actor Peter Cushing, who had passed away a couple of decades earlier. The filmmakers faced ethical concerns regarding Cushing’s inability to consent. But as this gets easier, it’s sure to happen more often.

Entire games and movies could be created with this technology. Humans might only need to write the story and pick the appropriate visual style. But then they might cede those responsibilities too, eventually.

A Digital Self

As interesting as it would be to watch a film or play a game that was produced by AI, how much better would it be if we were the main character?

Using deepfake tech, it might be possible. A couple of selfies from different angles and voila! Our face will be on the big screen, fighting the bad guys and saving the day, like we always imagined we would. 

But this digital self can take on other roles, such as picking out our clothes. Imagine being able to see yourself in an outfit without having to try it on. You can essentially become the model in the advertisements.

SuperPersonal is working along these lines.

Your digital self might also make you the star of another type of film: porn.

Naughty America is hoping to get you in on the action. You can pay to customise clips, which could be inserting your face onto one of the people in the scene, or changing the background to something you’re more familiar with.

The procedure can end up costing thousands, and it’s not clear if people are going to want to see themselves in these scenes, or if it will just make for a weird out-of-body experience.

More likely, people are going to want to replace the pornstars with other people of their choosing: other pornstars, celebrities, or people they know in person. This is where the idea of consent gets tricky, and where our conversation takes a turn…

Using a Deepfake for Evil


When the term deepfake was first used, it was applied to videos in which the faces of celebrities such as Scarlett Johansson and Emma Watson were cast onto the bodies of pornstars by someone on Reddit.

Reddit banned them, as has Pornhub, but as is custom with the internet, they have proliferated elsewhere.

“The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part,” says Scarlett Johansson.

Celebrities have been easy targets given the wealth of imagery they have online for the AI to learn from, but the tech is increasingly able to work with less.

For people who aren’t in the spotlight, these graphic deepfakes could be more damaging, as they might be more easily believed. As Johansson points out, “Clearly this doesn’t affect me as much because people assume it’s not actually me in a porno, however demeaning it is.”

How would you feel if you woke up one morning to find a video being shared around the web that depicted you in a compromising sexual act? Unfortunately, for some, this is already a reality.

There are online communities where people share, request, and pay others to create deepfake porn. This is revenge porn on a new level. As one woman who experienced this firsthand writes, “It’s this weird feeling, like you want to tear everything off the Internet. But you know you can’t.”

When a fake-but-realistic-looking video of you performing graphic acts gets shared around, who knows who might see it: your boss, your friends, your family.

The ability to create deepfake porn isn’t likely to stay within the bounds of a few perverted basement-dwellers. There might be easy-to-use apps that anyone can download.

Recently, an app called DeepNude briefly saw the light of day, until it was taken down by the creators, who said that “the probability that people will misuse it is too high.” Why they couldn’t have foreseen that before releasing the app is hard to answer—the app was designed to take pictures of clothed women and predict what they look like naked. 

How can we combat these types of deepfakes? As already mentioned, Reddit has removed communities that spread them, and porn sites have promised to take action. Virginia updated a 2014 law on revenge porn to make it illegal to share fabricated videos or images; offenders face up to a year in prison and a $2,500 fine.

But this is the internet, where anonymity is everywhere and content lasts forever. How successful will these responses be going forward, if we assume there is more of this content to come?

“You know how the internet is — once something is uploaded it can never really get deleted,” says another victim. “It will just be reposted forever.”

Political Interference

Many of the deepfakes gaining public attention feature presidents, which has sparked concern about what role they might play in future elections. If a well-timed, decent-quality video of a presidential candidate saying things they never said appears, will it convince people?

Disinformation has been around a while, and you don’t need anything near deepfakes to cause trouble. Rumours spread through Facebook have led to violence between Buddhists and Muslims in Sri Lanka, and in India, messages sent via WhatsApp wrongly accusing people of kidnapping have led to victims being beaten and killed.

Fake news sparked concern during the 2016 US election. We’ve grown used to seeing photoshopped models in magazines. We’ve all lied, or been lied to, at some point in our lives.

Altering information to serve an agenda is an age-old practice. Deepfakes will be a new extension of that. The same sceptical mindset we use when we see models in a magazine or read a “fact” on Facebook will need to extend to videos.

But maybe doubt is the point of deepfakes. Rather than convincing people of something that didn’t happen, we’re more likely to disbelieve what’s real. It’s about eroding trust.

“All the problems deepfakes could cause already exist. The true goal of misinformation is not to make you believe a lie; it’s to make you doubt the truth.” —Isabelle Roughol

Whether we’re believing the fakes or doubting what’s true, it’s difficult to see deepfakes having anything but a negative effect in this realm. We already have some examples.

The president of Gabon, Ali Bongo, was hospitalised after a stroke in October 2018. He wasn’t seen or heard from by the public for several months, which led to speculation about the state of his health.

But on 1 January 2019, he posted a video to social media. That video, according to some of Bongo’s critics, was a deepfake.

In another case, in Malaysia, a video of two men having sex went viral. A man claiming to be Haziq Aziz, who is the senior private secretary to the Primary Industries Deputy Minister, confessed that he was one of the men in the video, and declared that the other person, Economic Affairs Minister Datuk Seri Mohamed Azmin Ali, was not fit to be a leader.

But again, there are doubts as to whether this confession was made by the real Aziz.

Whether or not these videos were fake isn’t really the point. We’re at a time when digitally altered videos are convincing enough that any video can be accused of being fake.

I see this as being similar to how some people use the term ‘fake news’—it’s often just applied to whatever information someone doesn’t want to accept or wants to discredit. Soon, ‘deepfake’ will be used the same way.  

“If you want a vision of the future, don’t imagine an onslaught of fake video. Imagine an onslaught of commenters calling every video fake.” —NYMag

The Future is Fake

Combating the negative uses of deepfakes will be a challenge. It’s going to be difficult finding the creators and punishing them; social networks haven’t inspired much trust in their ability or willingness to remove questionable content; and as the technology improves, identifying deepfakes will become virtually impossible.

This isn’t to say we shouldn’t go down these routes, but that they won’t offer a full solution.

“As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature on the landscape.” —The Verge

As consumers of online content, we will do well not to believe everything we see. This shouldn’t be difficult; we’ve been putting up with misinformation for years. Most people don’t automatically trust every piece of content they’re exposed to. Confidence levels vary based on where the information comes from, how likely it is, and how well it meshes with what they already know.

As faith in news and institutions falters, we’re increasingly relying on our social circles for information. This makes it essential that we take care in who we surround ourselves with; our groups have a habit of reinforcing our confirmation biases by sharing our opinions and shielding us from outside ideas.

“Truth is no longer dictated by authorities, but is networked by peers.” —Kevin Kelly

Like deepfakes themselves, the future repercussions of this technology are uncertain. As much as they might divide us along certain lines, they should unite us along others. They have the power to bring out the best or worst in us. They’re going to reflect our desires and goals.

I hope the good uses will outweigh the bad. I hope people would rather watch themselves as the lead in a movie, or laugh over an utterly ridiculous mashup of celebrities, than try to harm people or destroy reputations. Is that wishful thinking? Only time will tell.
