The world of movies and TV shows has always been fascinated by the idea of swapping faces. Whether it’s bringing a deceased actor back to life, creating a digital double for stunts, or de-aging an actor to make them look younger, the possibilities are endless.
For years, VFX companies have used traditional techniques like rotoscoping and 3D modeling to achieve these effects, but the emergence of artificial intelligence and deepfake technologies has changed everything. Face swapping deepfakes are revolutionizing the movie industry with their ability to create realistic and seamless face swaps, and to do it in a fraction of the time older VFX techniques required.
But new technologies always come with a whole host of ethical and privacy concerns that we must navigate as a society.
Rotoscoping: the traditional VFX face swapping technique
Before the advent of deepfakes, VFX (Visual Effects) companies employed various techniques to swap faces in movies and TV shows.
One of the most common methods was to use a technique called “rotoscoping.”
Rotoscoping involves manually tracing the subject’s face frame by frame in a video, using specialized software such as Adobe After Effects or Autodesk Maya. Once the face is traced, the VFX artist can apply a new image or face onto the traced area using various compositing techniques, such as keying or tracking.
Another method used by VFX companies was to employ 3D modeling and animation software to create a 3D model of the subject’s face. The VFX artist would then animate the 3D model to match the movements and expressions of the subject in the original footage, and then render the final result with the new face.
Both of these techniques required a lot of time and skill to achieve a convincing result, and were typically used only for major film or TV productions where the budget and timeline allowed for such extensive post-production work.
Deepfakes: the machine learning face swapping technique to rule them all
Face swapping deepfakes use a type of machine learning algorithm called a generative adversarial network (GAN) to swap the face of one person onto another person’s body in a video or image.
Here’s how it works:
- Training Data: The GAN is trained on a dataset of images and videos of both the person whose face will be used (the “source”) and the person whose face will be replaced (the “target”).
- Face Detection: The GAN uses facial recognition algorithms to detect the position and features of the source face in the video or image.
- Face Embeddings: The GAN then generates a “face embedding” for the source face, which is a mathematical representation of the unique features of the face. This is done using a type of neural network called an encoder.
- Body Swapping: The GAN then swaps the source face onto the target body using another neural network called a decoder. This creates a new image or video that shows the target body with the source face.
- Post-Processing: At EISKO, this fundamental step is performed by our expert 3D artists. It includes, for example, color correction and smoothing.
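The encode-then-decode step above can be illustrated with toy linear algebra. The sketch below is a deliberately simplified stand-in, not EISKO’s actual algorithm: real systems use deep convolutional encoders and decoders (often trained with GAN losses) on aligned face crops, whereas here the “encoder” is a fixed random projection and each identity’s “decoder” is a least-squares fit. All array sizes and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: each face is a flat pixel vector (e.g. an 8x8 crop = 64 dims).
# In a real pipeline these would be face crops found by a face detector.
D, K, N = 64, 8, 200                  # pixel dims, embedding dims, samples
faces_src = rng.normal(size=(N, D))   # stand-in for source-identity faces
faces_tgt = rng.normal(size=(N, D))   # stand-in for target-identity faces

# Shared "encoder": project any face into a K-dim embedding.
# Real systems learn this network; here it is a fixed random projection.
W_enc = rng.normal(size=(D, K)) / np.sqrt(D)

def encode(faces):
    """Map face vectors to low-dimensional embeddings."""
    return faces @ W_enc

def fit_decoder(faces):
    """Per-identity "decoder": least-squares map from embeddings back to
    that identity's pixels (a linearized shared-encoder / per-identity-
    decoder layout)."""
    Z = encode(faces)
    W_dec, *_ = np.linalg.lstsq(Z, faces, rcond=None)
    return W_dec

dec_tgt = fit_decoder(faces_tgt)      # decoder for the target identity

# Face swap: encode a *source* face, decode it with the *target* decoder,
# yielding target-styled pixels driven by the source face's embedding.
swapped = encode(faces_src[:1]) @ dec_tgt
print(swapped.shape)                  # one swapped 64-dim face vector
```

The key design point mirrored here is that the encoder is shared across identities while each identity gets its own decoder, which is what makes the swap possible: an embedding produced from one face can be decoded in the appearance of the other.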
One of the biggest advantages of deepfakes compared to traditional techniques is the speed and efficiency with which they can be created. Deepfake algorithms can generate realistic-looking videos in a matter of hours or days, depending on the target resolution, whereas traditional VFX techniques can take weeks to complete.
Another advantage of deepfakes is the level of realism they can achieve. Deepfake algorithms use machine learning to analyze and replicate the facial features, expressions and movements of the original subject, resulting in a more accurate and seamless face swap. In contrast, traditional VFX techniques can sometimes result in a less convincing or “uncanny valley” effect, where the swapped face looks slightly off or unnatural.
What about ethics and privacy?
Overall, the proliferation of deepfakes presents a significant challenge for society, as it requires us to rethink our approach to privacy, trust, and truth in the digital age. It also highlights the importance of responsible and ethical use of technology, as well as the need for education and awareness about the risks and implications of deepfakes.
This question has been at the core of EISKO’s vision for over 10 years: the protection of digital identity. Over those years, EISKO has developed its strategy and can now provide models that respect the privacy and security of digital identity.
EISKO’s AI Face Replacement Technology
At EISKO, our face swapping deepfakes cover a broad range of applications. Our deepfakes can be used to create digital doubles of actors or performers, which is useful for creating realistic stunts or action sequences without risking the actors’ safety.
In addition, our face swapping algorithms can be used to bring deceased actors or famous people back to life for a movie or television show. For example, in the film “Rogue One: A Star Wars Story,” the character Grand Moff Tarkin was brought back to life using a combination of motion capture and face swapping techniques.
Deepfakes can also be used to digitally manipulate the appearance of actors or performers to make them appear younger, a process known as de-aging. The Marvel Cinematic Universe has used de-aging technology in a number of its movies, including “Captain Marvel” and “Ant-Man and The Wasp,” where it was used to make actors Samuel L. Jackson and Michael Douglas appear younger in flashback scenes.
These are just some of the applications we have been working on. What makes us stand out from the crowd is our proprietary, state-of-the-art deepfake algorithm, which enables megapixel face swapping and is unique to EISKO.