It’s getting harder and harder to believe what’s in front of your eyes.
A new app called FaceApp can artificially add a smile to your selfie. It can also make you look older or younger, or swap your gender. It's pretty good, too: more realistic than the pixies and aliens you find on Snapchat. But it's all in good fun. For now.
Another company called Lyrebird cooked up a program that can mimic someone's speech. They advertised it with a delightful piece of fake conversation between Donald Trump, Hillary Clinton, and Barack Obama.
Yeah, they still sound like robots. But it's not far off, and that's the concern.
As Lyrebird themselves write in a statement:
“Voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries. Our technology questions the validity of such evidence as it allows to easily manipulate audio recordings. This could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.”
I love that last part.
So it will get more difficult to know who really said what. Now is the time to get into podcasts: you can interview whomever you want, and they'll say whatever you tell them to.
But this danger extends beyond audio. Soon you might not only be listening to Trump’s artificial impersonator, but watching him too.
A group of researchers recently built a program that manipulates faces. Called Face2face, it captures one person's facial movements and maps them onto a video of someone else. Here, watch:
A little ingenuity could have you talking out of Putin’s face. Wait a little longer and, as Alexandre de Brébisson notes, “…it’s likely that generating whole videos with neural nets will become possible.”
What on the internet are we going to believe!?
We’ll have to go back to believing only what we see, out there in the real world, with our own two eyes.
Well, maybe not. Maybe artificial intelligence will be better than us at spotting artificial people. It makes sense: machines can spot subtle patterns in huge amounts of data far better than we can. They should be able to pick out the markers of real and fake.
But then it could shape up to be something of a race, a battle between truth-seeking AI and deceitful AI. It wouldn't be the first AI vs AI battle to flare up. Already there are machines that can forge works of art by famous painters, and AI systems designed to detect such fakes, one of which managed a 93% success rate.
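The back-and-forth can be sketched in a few lines of code. This is a cartoon of the dynamic, not any real forgery-detection system: a "forger" nudges a single statistic of its output past the detector's current decision boundary, and the detector re-fits to catch it again. All the numbers and names here are illustrative assumptions.

```python
# Toy arms race: forger vs detector, each adapting to the other's last move.
real_stat = 10.0   # statistic of genuine samples (hypothetical)
threshold = 5.0    # detector's initial decision boundary (hypothetical)

for round_no in range(4):
    # Forger: shift its output statistic past the detector's current boundary,
    # halfway toward the real value.
    fake_stat = (threshold + real_stat) / 2
    fooled = fake_stat > threshold        # the forgery slips past, for now
    # Detector: re-fit the boundary using the new forgeries it has seen.
    threshold = (fake_stat + real_stat) / 2
    caught = fake_stat < threshold        # the re-fit detector catches it again
    print(round_no, round(fake_stat, 3), fooled, caught)
```

Each round the forgery briefly passes, the detector adapts, and the forger's output creeps ever closer to the real thing. That last part is the worrying bit: the better the detectors get, the better the fakes they train against become.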
“If a computer can fake a painting, can it also fool the computers designed to detect the fakes? How can the programs designed to spot fakes stay a step ahead of the programs designed to generate them?” writes The Atlantic.
That’s the big question. As for this Face2face technology, one of the creators, Justus Thies, is already developing a tool to spot the fakes. He told MIT Tech Review that “Intermediate results look promising.”
If we’re not prepared in the short-term, the current misinformation crisis could get a lot worse before it gets better. So, let’s hope the truth-seeking AI out there isn’t late to the party.
. . .
Check out more in the Digital Brain series here