Disney Research has released a white paper titled "Physical Face Cloning." Here is Disney's abstract for the paper:
We propose a complete process for designing, simulating, and fabricating synthetic skin for an animatronics character that mimics the face of a given subject and its expressions. The process starts with measuring the elastic properties of a material used to manufacture synthetic soft tissue. Given these measurements we use physics-based simulation to predict the behavior of a face when it is driven by the underlying robotic actuation. Next, we capture 3D facial expressions for a given target subject. As the key component of our process, we present a novel optimization scheme that determines the shape of the synthetic skin as well as the actuation parameters that provide the best match to the target expressions. We demonstrate this computational skin design by physically cloning a real human face onto an animatronics figure.
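To make the "optimization scheme" in the abstract a little more concrete, here is a minimal, purely illustrative sketch of that kind of inverse problem: solve for a skin rest-shape correction and per-expression actuation parameters so that a simulated deformation best matches captured target expressions. Everything here (the linear toy forward model, the array sizes, the regularization weight, the variable names) is an assumption made for illustration; the paper itself uses full physics-based simulation of the silicone skin, not a linear stand-in.

```python
# Illustrative sketch (not Disney's implementation): jointly fit a rest-shape offset
# and per-expression actuation values so a toy simulated deformation matches
# captured target expressions as closely as possible.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n_points = 30       # toy "skin" sample points (hypothetical size)
n_actuators = 4     # toy actuator count (hypothetical)
n_expressions = 3   # number of captured target expressions (hypothetical)

# Fixed linear response of each skin point to each actuator (stand-in for the simulator).
response = rng.normal(size=(n_points, n_actuators))

# Captured target expressions: displacements of the skin points (stand-in for 3D capture).
targets = rng.normal(size=(n_expressions, n_points))

def simulate(rest_offset, activations):
    """Toy forward model: rest-shape offset plus linear actuator-driven deformation."""
    return rest_offset + response @ activations

def objective(x):
    """Sum of squared mismatches between simulated and captured expressions."""
    rest_offset = x[:n_points]
    acts = x[n_points:].reshape(n_expressions, n_actuators)
    err = 0.0
    for k in range(n_expressions):
        err += np.sum((simulate(rest_offset, acts[k]) - targets[k]) ** 2)
    # Small regularizer keeps rest-shape changes and actuation values modest.
    return err + 1e-3 * np.sum(x ** 2)

x0 = np.zeros(n_points + n_expressions * n_actuators)
result = minimize(objective, x0, method="L-BFGS-B")
print("final matching error:", objective(result.x))
```

The point of the sketch is only the structure of the problem: one set of skin-shape variables shared across all expressions, one set of actuation variables per expression, and a single objective that measures how well the simulated face matches each captured target.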
Engineers and scientists have long sought to create robots that realistically reproduce the human form. Disney shows a new path to realistic human expression for animatronic characters by developing a pipeline that uses computational guidance in the design and fabrication of robots built with soft-tissue materials. Disney describes the purpose of their research like this:
The goal of this work is to automate this process, to increase the realism of the resulting character and, ultimately, to create an animatronic face that closely resembles a given human subject. In order to accomplish this task, we capitalize on recent developments from three areas in computer graphics: facial performance capture, physics-based simulation, and fabrication-oriented material design.
Disney Research’s white paper describes their use of facial scanning technology, modeling and simulation techniques, soft-tissue fabrication, and actuators on an underlying electromechanical base that deform the soft tissue to produce a realistic robot head.
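As a rough intuition for how an actuated base can drive soft-tissue deformation, here is a toy sketch that relaxes a 1-D mass-spring "skin strip" whose end nodes are pinned to hypothetical actuator positions. It is not the paper's simulation model (the paper uses a proper physics-based simulation of the silicone skin); the node count, stiffness, and actuator strokes below are assumptions chosen only to show the driven-boundary idea.

```python
# Illustrative sketch (not from the paper): a minimal 1-D mass-spring "skin strip"
# whose end nodes are attached to actuators on a rigid base. Moving the actuator
# attachment points deforms the rest of the strip.
import numpy as np

n_nodes = 10
rest_len = 1.0
stiffness = 50.0
positions = np.arange(n_nodes, dtype=float)   # initial node positions along a line

# Nodes 0 and n_nodes-1 are driven by actuators; interior nodes are free skin material.
actuator_targets = {0: -0.5, n_nodes - 1: n_nodes - 1 + 0.8}  # hypothetical actuator strokes

def relax(positions, iterations=2000, dt=1e-3):
    """Simple explicit relaxation of the spring chain toward static equilibrium."""
    pos = positions.copy()
    for _ in range(iterations):
        forces = np.zeros_like(pos)
        for i in range(n_nodes - 1):
            stretch = (pos[i + 1] - pos[i]) - rest_len
            f = stiffness * stretch
            forces[i] += f
            forces[i + 1] -= f
        for idx, target in actuator_targets.items():
            pos[idx] = target      # driven nodes follow the actuators, not the springs
            forces[idx] = 0.0
        pos += dt * forces         # free nodes move down the elastic energy gradient
    return pos

deformed = relax(positions)
print("deformed skin node positions:", np.round(deformed, 3))
```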
Limitations remain, as Disney Research’s paper notes:
Moreover, it is important to note that the motions of current animatronic characters are still limited as they are less expressive than most humans. Thus, it is important to develop retargeting algorithms that can map the expressions of a target human to the physical constraints of the electromechanical base and materials used. In this paper, we have not addressed the appearance aspect of the synthetic skin tissue including reflectance, subsurface scattering, and hair. This is also a very interesting direction for future work and we envision that similar computationally guided processes could be developed to solve these problems. Finally, while in this work we have only shown examples of replicating human faces, we predict that similar frameworks will be used to design realistic full-body characters.
As our relationships with robots evolve, so will their appearance. Disney has shown an extraordinary example of our attempt to make robots look more like us. And though their current research was based on the human head, Disney Research predicts similar progress in full-body robotic representations of us. We are nearing a threshold where robots may be indistinguishable from us and perhaps we from them. Consider the cross-cutting implications this research may have on the design and manufacturing of prosthetics and the robots we may soon use for day-to-day tasks. “Big things have small beginnings.”
Posted by Dennis Bonilla