Saturday, July 11, 2009

Watch what you say!

With computer technology increasingly entering the space of interpreting multimodal interaction in inter-human communication, it is just a question of time before the vision of a computing system like Lt. Cmdr. Data of the Starship Enterprise becomes reality. Computers will be able to interpret voice, gestures, facial expressions, and posture in the context of a conversation. They will be able to derive not only syntax and semantics, but also the emotional and sociometric undercurrents between participants.

This capability will go far beyond the human capability of 'reading' other people. So the question remains: which applications will add value for a broad audience of users? Wouldn't it be very intrusive if everybody could glance at their handheld device and know whether you are nervous, distracted, or insecure when talking to them? How would we ensure privacy around our innermost sentiments?

For a start, computers are just learning to read sign language, in support of users who cannot speak: Computer learns sign language by watching TV - tech - 08 July 2009 - New Scientist
Not that computers' speech recognition capabilities are yet at a point we could call robust and mainstream...
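
The approach reported there matches candidate signs in broadcast footage against the words appearing in the subtitles. As a toy illustration of that weak-supervision idea (a minimal Python sketch over made-up data, not the researchers' actual method), one could score candidate gesture clusters by how often their occurrences coincide with a target subtitle word:

    from collections import Counter

    def best_sign_for_word(clips, target_word):
        """Toy weak supervision: pick the gesture cluster whose
        appearances best coincide with the target word showing up
        in a clip's subtitles (hypothetical data layout)."""
        hits, totals = Counter(), Counter()
        for gesture_cluster, subtitle_words in clips:
            totals[gesture_cluster] += 1
            if target_word in subtitle_words:
                hits[gesture_cluster] += 1
        # Rank clusters by the fraction of their occurrences that
        # co-occur with the target word in the subtitles.
        return max(totals, key=lambda g: hits[g] / totals[g])

    # Hypothetical clips: (gesture_cluster_id, words in the subtitles)
    clips = [
        (3, {"the", "cat", "sat"}),
        (3, {"a", "cat", "ran"}),
        (7, {"the", "dog", "barked"}),
        (3, {"my", "cat", "slept"}),
    ]
    print(best_sign_for_word(clips, "cat"))  # -> 3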
