Microsoft Research Debuts Another Project, Semantic Paint
Sean Cameron | July 1, 2015
Machine learning has attracted a great deal of attention over the last few years, and with products like HoloLens destined to hit the market in the near future, Microsoft in particular has gone to great lengths to ensure continued advancement in the field.
To this point, one of the most difficult tasks has been simply teaching a machine about its environment. A new project from Microsoft Research, Semantic Paint, promises to push thinking on the topic even further.
Video showing the basics of Semantic Paint. Source: http://jnack.com/blog/2015/07/07/vr-microsofts-semantic-paint/
The premise is simple: using the software, a user teaches the program about its environment, labeling objects and correcting mistakes to start with, after which the system can recognize its surroundings with greater accuracy than was previously possible.
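The interactive loop described above — label a surface by touch, get an immediate online model update, then see new surfaces labeled live — can be sketched as follows. This is an illustrative toy, not Microsoft's implementation: a running nearest-centroid classifier stands in for the streaming learner described in the paper, and the class names and feature values are invented for the example.

```python
# Hypothetical sketch of a SemanticPaint-style interactive loop: the user
# "touches" points to label them, an online model updates immediately, and
# unseen points are then labeled live. A nearest-centroid classifier stands
# in for the paper's learner; all names and data are illustrative.
import math

class OnlineCentroidLabeler:
    def __init__(self):
        self.sums = {}    # label -> running per-dimension feature sums
        self.counts = {}  # label -> number of labeled samples seen

    def teach(self, feature, label):
        """User touch: fold one labeled sample into the model (online update)."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(feature)
            self.counts[label] = 0
        self.sums[label] = [s + f for s, f in zip(self.sums[label], feature)]
        self.counts[label] += 1

    def predict(self, feature):
        """Label an unseen point by its nearest class centroid."""
        def dist(label):
            centroid = [s / self.counts[label] for s in self.sums[label]]
            return math.dist(feature, centroid)
        return min(self.sums, key=dist)

model = OnlineCentroidLabeler()
# Initial teaching by touch: color-like features for two surfaces.
model.teach([0.8, 0.7, 0.6], "table")
model.teach([0.2, 0.2, 0.2], "floor")

print(model.predict([0.75, 0.7, 0.65]))  # nearest the "table" centroid

# Live correction: the user relabels a point (say, a misclassified table
# leg) and the model incorporates the correction immediately.
model.teach([0.25, 0.2, 0.2], "table")
```

The key property the sketch mirrors is that teaching and prediction interleave: there is no offline batch-training phase, so corrections take effect on the very next query.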
<more at http://www.winbeta.org/news/microsoft-research-debuts-another-project-semantic-paint; related links: http://www.fastcodesign.com/3048111/innovation-by-design/love-ms-paint-heres-vr-paint (Love MS Paint? Here's VR Paint; Microsoft Research Labs has invented a virtual crayon that allows you to color the real world.); and http://jnack.com/blog/2015/07/07/vr-microsofts-semantic-paint/ (VR: Microsoft's Semantic Paint); further: http://www.bbc.com/news/technology-33379653 (Computers that see objects with touch); Microsoft Research paper on Semantic Paint http://research.microsoft.com/en-US/projects/semanticpaint/valentin2015semanticpaint.pdf [Summary: We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment, whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems, where capture, labeling and batch learning often takes hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing them to immediately correct errors in the segmentation and/or learning – a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user's environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.]>