How Armbands Can Translate Sign Language
A research project explores how gesture-recognition armbands can help deaf and hearing-impaired people communicate more easily with those who don’t understand sign language.
Rachel Metz | February 17, 2016
Researchers at Arizona State University say they can do exactly that with a project called Sceptre. They use the armbands to teach software a range of American Sign Language gestures; then, when a person wearing the bands makes one of these signs, the software matches it to the corresponding word or phrase in Sceptre’s database and displays it as text on a screen.
The Myo gesture control armband recognizes muscle patterns to interpret sign language symbols. Source: http://www.newsweek.com/myo-armband-controller-translates-sign-language-text-428043
<more at https://www.technologyreview.com/s/600818/how-armbands-can-translate-sign-language/; related links: https://impact.asu.edu/Sceptre.html (Sceptre: A gesture and sign language recognition system) and https://impact.asu.edu/publication/Sceptre_Prajwal.pdf (SCEPTRE: a Pervasive, Non-Invasive, and Programmable Gesture Recognition Technology. Prajwal Paudyal, Ayan Banerjee, and Sandeep K.S. Gupta. ACM 978-1-4503-4137-/16/03. http://dx.doi.org/10.1145/2856767.2856794. 2016. [Abstract: Communication and collaboration between deaf people and hearing people is hindered by lack of a common language. Although there has been a lot of research in this domain, there is room for work towards a system that is ubiquitous, non-invasive, works in real-time and can be trained interactively by the user. Such a system will be powerful enough to translate gestures performed in real-time, while also being flexible enough to be fully personalized to be used as a platform for gesture-based HCI. We propose SCEPTRE, which utilizes two non-invasive wrist-worn devices to decipher gesture-based communication. The system uses a multi-tiered, template-based comparison system for classification on input data from accelerometer, gyroscope, and electromyography (EMG) sensors. This work demonstrates that the system is very easily trained using just one to three training instances each for twenty randomly chosen signs from the American Sign Language (ASL) dictionary and also for user-generated custom gestures. The system is able to achieve an accuracy of 97.72% for ASL gestures.])>