Search Box

Thursday, October 22, 2015

Autocomplete Hand-Drawn Animations

Microsoft Research Debuts Autocomplete For Animation, And It’s Incredible (+Video)

Draw Something Once, and Software Can Predict, Guide, and Adjust What You Want To Do Next

Fast Company | October 20, 2015



Microsoft Research, working with the University of Hong Kong and the University of Tokyo, has a remarkable new technology that it calls "Autocomplete hand drawn animations."
The technology was unveiled at the SIGGRAPH Asia conference. A formal white paper does not appear to have been released yet, but there is this illustrative video of the tech. You can watch as someone draws a fish once; then, upon drawing a single line for the next frame, the software suggests a skeleton to trace. It’s responsive, too, shaping its wireframes around your sketches in real time.



<more at http://www.fastcodesign.com/3052463/microsoft-research-debuts-autocomplete-for-animation-and-its-incredible; related links: http://www.wired.com/2015/10/microsofts-badass-new-tool-is-like-autocomplete-for-drawing/ (Microsoft's Badass New Tool Is Like Autocomplete for Drawing. October 20, 2015) and http://www.liyiwei.org/papers/workflow-siga15/ (Autocomplete Hand-drawn Animations. Jun Xing, Li-Yi Wei, Takaaki Shiratori, and Koji Yatani. SIGGRAPH Asia 2015. [Abstract: Hand-drawn animation is a major art form and communication medium, but can be challenging to produce. We present a system to help people create frame-by-frame animations through manual sketches. We design our interface to be minimalistic: it contains only a canvas and a few controls. When users draw on the canvas, our system silently analyzes all past sketches and predicts what might be drawn in the future across spatial locations and temporal frames. The interface also offers suggestions to beautify existing drawings. Our system can reduce manual workload and improve output quality without compromising natural drawing flow and control: users can accept, ignore, or modify such predictions visualized on the canvas by simple gestures. Our key idea is to extend the local similarity method in [Xing et al. 2014], which handles only low-level spatial repetitions such as hatches within a single frame, to a global similarity that can capture high-level structures across multiple frames such as dynamic objects. We evaluate our system through a preliminary user study and confirm that it can enhance both users' objective performance and subjective satisfaction.]>
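The abstract's core idea, matching a partial stroke in the current frame against similar strokes in past frames and proposing a completion, can be illustrated with a toy sketch. This is a drastically simplified, hypothetical stand-in for the paper's global-similarity method: strokes are point lists, and the "suggestion" is simply the past stroke whose opening points lie nearest the partial input.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# A stroke is a list of (x, y) points; helper names below are illustrative,
# not from the paper.

def stroke_distance(partial, candidate):
    """Mean point-to-point distance between the partial stroke and the
    first len(partial) points of a candidate past stroke."""
    n = min(len(partial), len(candidate))
    return sum(dist(p, q) for p, q in zip(partial[:n], candidate[:n])) / n

def suggest_stroke(partial, past_strokes):
    """Return the past stroke most similar to the partial input; an editor
    could render its remaining points as an autocomplete suggestion that
    the user may accept, ignore, or modify."""
    return min(past_strokes, key=lambda s: stroke_distance(partial, s))

# Previous frame: two strokes, e.g. a curved body and a straight fin line.
past = [
    [(0, 0), (1, 1), (2, 1), (3, 0)],   # arc
    [(0, 5), (1, 5), (2, 5), (3, 5)],   # horizontal line
]

# The user starts a new stroke near the horizontal line...
partial = [(0, 4.9), (1, 5.1)]
suggestion = suggest_stroke(partial, past)
print(suggestion)  # the horizontal line is proposed as the completion
```

The real system works across many frames and captures higher-level temporal structure (per the abstract, extending the local-similarity method of Xing et al. 2014 to a global similarity); this fragment only conveys the match-then-suggest interaction loop.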
