
Monday, October 19, 2015

Virtual Emotions

Project Puts Real-Time Virtual Emotions on Your Stoic Face

Brittany Hillen | October 16, 2015

In a demonstration of what is perhaps the freakiest technology ever, Stanford researchers have managed to take the facial expressions from one person's face and transplant them onto another person's face...in real time through video. Imagine that you're channeling your inner stoic, sitting without expression in a chair, but a live video feed of your face shows you smiling, singing, sticking out your tongue, or any number of other things.

Source: http://www.slashgear.com/project-puts-real-time-virtual-emotions-on-your-stoic-face-16410081/

<more at http://www.slashgear.com/project-puts-real-time-virtual-emotions-on-your-stoic-face-16410081/; related links: http://graphics.stanford.edu/~niessner/papers/2015/10face/thies2015realtime.pdf (Real-time Expression Transfer for Facial Reenactment. Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger, and Christian Theobalt. [Abstract: We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.]) and http://fusion.net/video/216658/this-software-lets-other-people-control-your-face/ (+Video) (This software lets other people control your face. October 19, 2015)>
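The core transfer step described in the abstract (compute the expression difference in parameter space, then modify the target's parameters to match) can be sketched roughly as follows. This is only an illustration under assumed names and dimensions (transfer_expression, N_EXPR, a blendshape-style coefficient vector); it is not the authors' implementation, which also fits identity, skin reflectance, and scene lighting per frame and photorealistically re-renders the result into the target video.

import numpy as np

# Illustrative sketch (not the authors' code) of parameter-space expression
# transfer: expressions are coefficient vectors in a parametric face model,
# and the transfer applies the source actor's expression offset to the target
# actor, leaving the target's identity and reflectance parameters untouched.

N_EXPR = 76  # number of expression coefficients; the exact count is assumed


def transfer_expression(source_expr, source_neutral, target_neutral):
    """Return new expression coefficients for the target actor.

    source_expr    -- source actor's fitted expression coefficients this frame
    source_neutral -- source actor's neutral-pose coefficients
    target_neutral -- target actor's neutral-pose coefficients
    """
    delta = source_expr - source_neutral  # expression offset of the source
    return target_neutral + delta         # apply that offset to the target


# Toy usage with random vectors standing in for per-frame fitted parameters.
rng = np.random.default_rng(0)
source_neutral = np.zeros(N_EXPR)
target_neutral = np.zeros(N_EXPR)
source_expr = rng.normal(scale=0.1, size=N_EXPR)  # e.g. a smile this frame

new_target_expr = transfer_expression(source_expr, source_neutral, target_neutral)
print(new_target_expr[:5])  # the target face would then be re-rendered with these

In the actual system, these coefficients would come from jointly fitting the parametric model to RGB-D input every frame, and the modified target parameters would drive the photorealistic re-rendering into the live video stream.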
