Wednesday, July 15, 2009

Cross Reality: When Sensors Meet Virtual Reality

Reality is time-bound, costly, and sometimes dangerous. Virtual Worlds could, in theory, be a safer and more effective 'playground' to experiment with costly and potentially dangerous objects and situations, to meet and collaborate without travel, or simply to socialize anonymously. However, Virtual Worlds still see little real use due to a lack of real-world connectivity: the ability to tie real-world events (and thus data) to the Virtual experience. Cross-reality holds the promise of overcoming those shortcomings. Using a variety of sensors, media feeds, and input modalities, Virtual Worlds are becoming a true platform for telepresence applications, an alternate interface to synchronous collaborative applications in cyberspace: Cross Reality: When Sensors Meet Virtual Reality.

Saturday, July 11, 2009

Being all eyes ...

We already knew that cameras are getting smaller and smaller and cheaper and cheaper. It was therefore somewhat expected that cameras would eventually play a significant role in our culture; that there would be a time when we could record everything going on around us - possibly bypassing our minds in the instant - in order to play it back in the evening, for ourselves or others. Now, researchers at MIT have gone even further: a camera-like fabric that would allow for surround-image capture through your clothes: MIT develops camera-like fabric | Underexposed - CNET News. Combine this with recent developments in flexible display clothing (http://www.gizmag.com/go/3043/) and what do you get? ..... Cloaking! If the display on your back shows what the camera on your front sees, you become see-through. Can't wait. I was always a great proponent of transparency ...

Watch what you say!

With computer technology increasingly entering the space of interpreting multimodal interaction in inter-human communication, it is just a question of time before the vision of a computing system in the form of Lt. Cmd. Data of the Starship Enterprise becomes reality. Computers will be able to interpret voice, gestures, facial expressions, and posture in the context of the conversation. They will be able to derive syntax and semantics, as well as the emotional and sociometric communication between participants.

This capability will go far beyond the human capability of 'reading' other people. So the question remains what the applications will be that add value to a broad audience of users. Wouldn't it be very intrusive if everybody could look at their handheld device and know whether you are nervous, distracted, or insecure when talking to them? How would we ensure privacy around our innermost sentiments?

For a start, computers are just learning to read sign language in support of human users who cannot speak: Computer learns sign language by watching TV - tech - 08 July 2009 - New Scientist
Not that computers' speech recognition capabilities are yet at a point where we could call them robust and mainstream...