Wednesday, December 9, 2015

Wearable Cognitive Assistant

Wearable Cognitive Assistant Whispers Advice, Step-By-Step Instructions

Byron Spice | November 3, 2015

Researchers at Carnegie Mellon University are building a computer system called Gabriel that, like the angel that is its namesake, will seemingly look over a person’s shoulder and whisper instructions for tasks as varied as repairing industrial equipment, resuscitating a patient, or assembling IKEA furniture. The National Science Foundation has awarded Carnegie Mellon University (CMU) a four-year, $2.8 million grant to further develop the wearable cognitive assistance system. Gabriel uses a wearable vision system, such as Google Glass, and taps into the ubiquitous power of cloud computing via a CMU innovation called a “cloudlet.”

<more at; related links: (Wearable Cognitive Assistance. Published July 10, 2015. [Summary: GPS navigation systems have transformed our driving experience. They guide you step-by-step to your destination, offering you helpful just-in-time voice guidance about upcoming actions. Can we generalize this metaphor? Imagine a wearable cognitive assistance system that combines a device like Google Glass with cloud-based processing to guide you through a complex task. You hear a synthesized voice telling you what to do next, and you see visual cues in the Glass display. When you make an error, the system catches it immediately and corrects you before the error cascades. Fortunately, this is not science fiction. To support this genre of applications, we have created a multi-tiered system architecture called Gabriel. This architecture preserves tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. We have gained initial experience on building Gabriel applications that provide user assistance for narrow and well-defined tasks that require specialized knowledge and/or skills. Specifically, we have built proof-of-concept implementations for four tasks: assembling 2D Lego models, freehand sketching, playing ping-pong, and recommending context-relevant YouTube tutorials. This talk will examine the potential and challenges of wearable cognitive assistance, present the Gabriel architecture, and describe the early proof-of-concept applications that we have built on it.]) and (Early Implementation Experience with Wearable Cognitive Assistance Applications. Zhuo Chen, Lu Jiang, Wenlu Hu, Kiryong Ha, Brandon Amos, Padmanabhan Pillai, Alex Hauptmann, and Mahadev Satyanarayanan. 2015. [Abstract: A cognitive assistance application combines a wearable device such as Google Glass with cloudlet processing to provide step-by-step guidance on a complex task. In this paper, we focus on user assistance for narrow and well-defined tasks that require specialized knowledge and/or skills. We describe proof-of-concept implementations for four different tasks: assembling 2D Lego models, freehand sketching, playing ping-pong, and recommending context-relevant YouTube tutorials. We then reflect on the difficulties we faced in building these applications, and suggest future research that could simplify the creation of similar applications.])>
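The guidance loop the abstracts describe — capture a frame on the wearable, offload it to a nearby cloudlet for compute-intensive analysis, and return a spoken correction before an error cascades — can be sketched as follows. This is a minimal illustrative sketch, not code from the Gabriel system; the function names, the dictionary-based frame representation, and the 100 ms latency budget are all assumptions made for the example.

```python
import time

# Hypothetical end-to-end latency budget for one frame round trip,
# in the spirit of Gabriel's "tight end-to-end latency bounds".
LATENCY_BUDGET_MS = 100.0

def analyze_frame(frame):
    """Stand-in for the cloudlet's compute-intensive vision step.

    For a toy Lego-assembly task: at step 1 the model calls for a
    red brick; anything else triggers a corrective instruction.
    """
    if frame.get("step") == 1 and frame.get("brick") != "red":
        return "Replace the brick with a red one."
    return "Step looks correct; proceed to the next step."

def offload_frame(frame):
    """Offload one captured frame and measure the round-trip latency.

    Returns (guidance, latency_ms, within_budget), where the guidance
    string would be synthesized as voice on the wearable device.
    """
    start = time.perf_counter()
    guidance = analyze_frame(frame)  # in a real system: network + cloudlet
    latency_ms = (time.perf_counter() - start) * 1000.0
    return guidance, latency_ms, latency_ms <= LATENCY_BUDGET_MS
```

A wearable client would run `offload_frame` in a loop over camera frames, speaking each guidance string and falling back (or degrading quality) whenever `within_budget` is false — the kind of latency-aware control the multi-tiered architecture is meant to support.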
