Although the best-known deepfake videos are parodies featuring politicians and celebrities, there is growing concern the technology could be deployed to interfere in political processes or manipulate public opinion.
As videos faked using artificial intelligence grow increasingly sophisticated, experts in Switzerland are re-evaluating the risks that malicious use of the technology poses to society – and finding innovative ways to stop the perpetrators.
In a computer lab on the vast campus of the Swiss Federal Institute of Technology Lausanne (EPFL), a small team of engineers is contemplating the image of a smiling, bespectacled man boasting a rosy complexion and dark curls.
“Yes, that’s a good one,” says lead researcher Touradj Ebrahimi, who bears a passing resemblance to the man on the screen. The team has expertly blended Ebrahimi’s headshot with an online image of Tesla founder Elon Musk to create a deepfake – a digital image or video fabricated through artificial intelligence.
It’s one of many fake images – some more realistic than others – that Ebrahimi’s team has created while working with the cybersecurity firm Quantum Integrity (QI) to develop software that can detect doctored images, including deepfakes.
Using machine learning – the same process behind the creation of deepfakes – the software learns to tell genuine images from forged ones: a “creator” generates fake images, which a “detector” then tries to spot.
“With lots of training, machines can help to detect forgery the same way a human would,” explains Ebrahimi. “The more it’s used, the better it becomes.”
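The creator-versus-detector dynamic Ebrahimi describes maps onto the adversarial training loop popularised by generative adversarial networks (GANs). The sketch below illustrates that idea in PyTorch; it is a generic, minimal example, not the EPFL/QI software, and the network sizes, random stand-in data and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of an adversarial "creator vs. detector" training loop.
# NOT the EPFL/Quantum Integrity system -- a generic GAN-style illustration;
# all dimensions and hyperparameters are assumptions chosen for brevity.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # hypothetical flattened-image / noise sizes

creator = nn.Sequential(            # "creator": turns random noise into a fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
detector = nn.Sequential(           # "detector": scores an image as real (1) or fake (0)
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_c = torch.optim.Adam(creator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(detector.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, IMG_DIM)           # stand-in for a batch of real photos
    fake = creator(torch.randn(32, NOISE_DIM))

    # Detector trains to label real images 1 and the creator's fakes 0.
    d_loss = loss_fn(detector(real), torch.ones(32, 1)) + \
             loss_fn(detector(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Creator trains to produce fakes the detector misclassifies as real.
    c_loss = loss_fn(detector(fake), torch.ones(32, 1))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
```

In a real forgery-detection project the random tensors standing in for photographs would be replaced by batches of actual images, and the trained detector – not the creator – would be the end product. As each side improves, it forces the other to improve, which is the sense in which “the more it’s used, the better it becomes.”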
Forged photos and videos have existed since the advent of multimedia. But AI techniques have only recently allowed forgers to alter faces in a video or to make it appear that a person is saying something they never said. Over the last few years, deepfake technology has spread faster than most experts anticipated.