Microsoft researchers have started exploring ways to add sign language capabilities to the Kinect. The company is working to build sign language recognition into the technology. Is this really possible?

The Chinese Academy of Sciences' Institute of Computing Technology and Microsoft Research Asia have begun a partnership to explore the capabilities of Kinect. The project's primary goal is to build computer sign-language recognition into the device.

Progress reports indicate that Microsoft's Kinect can translate signs through body and hand tracking. Microsoft Research clarifies that the technology currently operates in "translation mode," meaning Kinect can convert sign language into either speech or text.
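The article describes this pipeline only at a high level. As a rough illustration, the sketch below shows what a sign-to-text (or sign-to-speech) flow could look like; the frame structure, joint names, and the `recognize_word` and speech-output stand-ins are assumptions made for illustration, not the Kinect SDK's actual API.

```python
# Illustrative only: a hypothetical "translation mode" pipeline that turns
# tracked hand positions into text or spoken output. The frame structure,
# joint names, and speech stub are assumptions, not the Kinect SDK.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class SkeletonFrame:
    # 3D joint positions in meters, keyed by a joint name such as "hand_right".
    joints: Dict[str, Point3D] = field(default_factory=dict)

def extract_hand_trajectory(frames: List[SkeletonFrame],
                            joint: str = "hand_right") -> List[Point3D]:
    """Collect one joint's position across frames into a 3D trajectory."""
    return [f.joints[joint] for f in frames if joint in f.joints]

def translate_sign(frames, recognize_word, output="text"):
    """Translation mode: track the hand, recognize the word, then emit it
    as text or hand it off to a speech synthesizer (stubbed here)."""
    word = recognize_word(extract_hand_trajectory(frames))
    if output == "speech":
        print(f"[speech synthesis would say] {word}")  # stand-in for TTS
    return word
```

A recognizer like `recognize_word` could be implemented with the trajectory-matching idea sketched further below.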

The team tested both isolated word recognition and continuous sentence recognition. Researchers are also adding a "communications mode" to Kinect, which will feature an avatar to help hearing-impaired individuals communicate with hearing people: the device translates signs into text, and text back into signs.

According to Microsoft Research, the translation process is made possible by a system called "3D trajectory matching." The Kinect for Windows software lets the device track hand movements, which are then matched against stored trajectories to find the right word for each sign. If the technology pushes through, the research initiative could provide improved and expanded access for people with hearing or speech impairments.
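Microsoft Research does not detail the matching algorithm beyond its name. One common way to compare movement paths of different lengths is dynamic time warping (DTW), so the sketch below uses DTW purely as an assumed stand-in to show how an observed hand trajectory could be matched against stored per-word templates; the template data and function names are hypothetical.

```python
# A minimal sketch of trajectory matching, assuming dynamic time warping
# (DTW) as the comparison method. The article does not specify the exact
# algorithm; the templates and names here are illustrative only.

import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping cost between two 3D trajectories."""
    n, m = len(traj_a), len(traj_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(traj_a[i - 1], traj_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in A
                                 cost[i][j - 1],      # skip a point in B
                                 cost[i - 1][j - 1])  # match the points
    return cost[n][m]

def recognize_word(observed, templates):
    """Return the word whose stored template trajectory is closest
    to the observed hand trajectory."""
    return min(templates, key=lambda w: dtw_distance(observed, templates[w]))

# Hypothetical usage: templates map words to recorded 3D hand paths.
templates = {
    "hello":  [(0.0, 0.0, 1.0), (0.1, 0.2, 1.0), (0.2, 0.4, 1.0)],
    "thanks": [(0.0, 0.0, 1.0), (0.0, -0.2, 0.9), (0.0, -0.4, 0.8)],
}
observed = [(0.0, 0.0, 1.0), (0.1, 0.1, 1.0), (0.2, 0.3, 1.0)]
print(recognize_word(observed, templates))  # -> "hello"
```

Matching against whole-word templates fits the isolated-word setting mentioned above; continuous sentence recognition would additionally require segmenting the signing stream into individual signs.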

"We believe that IT [information technology] should be used to improve daily life for all persons," Guobin Wu, Microsoft Research Asia's research program manager, explained.

"While it is still a research project, we ultimately hope this work can provide a daily interaction tool to bridge the gap between the hearing and the deaf and hard of hearing in the near future."