The software creates a 3D representation of a subject’s face, which can then be swapped with the 3D representation of another face. The process works even if one subject has facial hair or a different skin tone, but it fails if a person’s long hair covers the mouth.
Matthias Niessner, a Stanford University professor who contributed to the collaboration between the University of Erlangen-Nuremberg and the Max Planck Institute, warned that such technology means we should be more skeptical of what we see in videos.
Niessner wants the work to raise awareness that video fraud is another hazard for consumers.
“People get that an email could be fraud,” Niessner said. “This is a very similar thing. Now the only difference is people should know about it.”
Niessner recommends that viewers who suspect a video has been manipulated look closely for inconsistencies in its lighting. These hints are tough to sniff out in grainy footage but more obvious in high-resolution video. Niessner expects we’ll eventually have smartphone apps that help users figure out whether a given video is real.
Currently the researchers are considering commercializing the technology for TV shows that are dubbed into another language. Editing actors’ facial movements to match the new audio should make dubbed programs seem more natural, even though what’s onscreen is actually fake.