Sunday, August 30, 2009

Do New Media Hinder Literacy or Create New Literacy?

I have been very critical of New Media in the past for the way they affect tween and teenage literacy and communication capabilities. The limited way in which tools like Twitter, Facebook, and SMS support inter-human communication, and the way in which young audiences respond to those limitations by adopting an abbreviated, cryptic, and highly ambiguous code as language, seem frightening. This is especially true when that code then also makes it into traditional written language: a few months back I received a pile of thank-you letters from my son's class for a Show & Tell I had given. To my surprise, the letters of these then seventh-graders were full of orthographic and grammatical errors. Moreover, many of the orthographic mistakes the students made were clearly related to the way they communicate in the aforementioned New Media, with 'bcause', 'wat', and 'yu' being just a few of them.

However, maybe I am thinking too narrow-mindedly? Clive Thompson on the New Literacy references Andrea Lunsford, a Stanford University professor, who is studying this phenomenon and actually postulates that the New Media make children write more, write more voluntarily, and write for an existing audience instead of the teacher alone. All of these findings are clearly positive trends that might in the long term improve children's cognitive, communication, and problem-solving abilities.

If this is the case then does the language really matter as long as everybody agrees on it?

Or does the language of the New Media impose a limit on the depth of thinking and articulation, one that makes it suited only for newscasting but not for critical analysis and opinionated discussion?

I guess only time will tell. In the meantime, I am taking the safe route: exposing my children to New Media tools while encouraging and supporting them in enjoying old media such as books and dinner-table conversations.

Wednesday, July 15, 2009

Cross Reality: When Sensors Meet Virtual Reality

Reality is time-bound, costly, and sometimes dangerous. Virtual Worlds could theoretically be a safer, more effective 'playground' in which to experiment with costly and potentially dangerous objects and situations, meet and collaborate without travel, or just socialize in an anonymous way. However, Virtual Worlds still lack real use due to a lack of real-world connectivity, an inability to tie real-world events (and thus data) to the virtual experiences. Cross-reality holds the promise of overcoming those shortcomings. Using a variety of sensors, media feeds, and input modalities, Virtual Worlds are becoming a true platform for telepresence applications, an alternate interface to synchronous collaborative applications in cyberspace: Cross Reality: When Sensors Meet Virtual Reality.

Saturday, July 11, 2009

Being all eyes ...

We already knew that cameras are getting smaller and smaller and cheaper and cheaper. It was, therefore, somewhat expected that eventually cameras would play a significant role in our culture; that there would be a time when we could record everything going on around us, possibly bypassing our minds in the instant, in order to play it back in the evening, for ourselves or others. Now, researchers at MIT have gone even further: a camera-like fabric, which would allow for surround-image capturing through your clothes: MIT develops camera-like fabric | Underexposed - CNET News. Combine this with recent developments in flexible display clothing (http://www.gizmag.com/go/3043/) and what do you get? ..... Cloaking! If the display on your back shows what the camera on your front sees, you become see-through. Can't wait. I was always a great proponent of transparency ...

Watch what you say!

With computer technology increasingly entering the space of interpreting multimodal interaction in inter-human communication, it is just a question of time until the vision of a computing system in the form of Lt. Cmd. Data of the Starship Enterprise becomes reality. Computers will be able to interpret voice, gestures, facial expressions, and posture in the context of the conversation. They will be able to derive syntax and semantics, as well as the emotional and sociometric aspects of communication between participants.

This capability will go far beyond the human ability to 'read' other people. So the question remains what the applications will be that add value for a broad audience of users. Wouldn't it be very intrusive if everybody could look at their handheld device and know whether you are nervous, distracted, or insecure when talking to them? How would we ensure privacy around our innermost sentiments?

For a start, computers are just learning to read sign language in support of a human user audience that cannot speak. Computer learns sign language by watching TV - tech - 08 July 2009 - New Scientist
Not that computers' speech-recognition capabilities are at a point yet where we could call them robust and mainstream...

Saturday, May 23, 2009

Getting Virtually Better

And yet another project focusing on making virtual humans, or so-called avatars, more life-like, with a more natural look, more intelligent and context-specific interaction, and more emotional feedback. Somehow, I don't expect the great breakthrough, since over the past decades many groups have had a similar agenda (ICT, US; VHIL, Stanford, US; DFKI, Germany; MIRALab, Switzerland; you name them) and were only able to produce partial results, usually limited by the availability of technology, an insufficient body of science or, worst of all, an insufficient understanding of human behavior.

I wish them success anyhow, because the technology is important and has many real-world applications from healthcare to education and training: The Next Best Thing to You

Plug it in, plug it in ...

We have always been concerned about the cost of retrofitting existing homes with IT infrastructure to make many of our home-centered healthcare ambitions come true. Now there is light on the horizon. These inexpensive computers by Marvell Technology Group promise an explosion of innovation for the home through the combination of open-source software and powerful chips that are becoming available at very low cost. Home data centers, home entertainment centers, and home health centers seem to be a plausible beginning.

Plugging In $40 Computers - Bits Blog - NYTimes.com

Monday, April 27, 2009

Dull Networks – How microblogging might turn the wisdom pyramid upside down

According to Russell Ackoff, a systems theorist and professor of organizational change, the way humans process input from their environment can be classified into five categories:

  • Data represents a fact or statement of event without relation to other things.
    Ex: It is raining.
  • Information embodies the understanding of a relationship of some sort, possibly cause and effect.
    Ex: The temperature dropped 15 degrees and then it started raining.
  • Knowledge represents a pattern that connects and generally provides a high level of predictability as to what is described or what will happen next.
    Ex: If the humidity is very high and the temperature drops substantially, the atmosphere is often unable to hold the moisture, so it rains.
  • Understanding embodies an appreciation of 'why'; it is the cognitive and analytical process by which new knowledge is synthesized from previously held knowledge.
  • Wisdom embodies more of an understanding of fundamental principles embodied within the knowledge and is essentially systemic.

Underlying this theory is the assumption that there is enough content in each of the related input streams to create relationships, identify patterns as well as identify and understand principles.
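
Purely as an illustration of this hierarchy, here is a minimal Python sketch that maps the rain examples above onto the lower levels of the pyramid. The attributes (relations, predictive, systemic) and the thresholds are invented for illustration; this is a toy model, not a claim about how Ackoff would formalize his categories.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Level(Enum):
    DATA = auto()         # isolated fact or event
    INFORMATION = auto()  # facts related to one another
    KNOWLEDGE = auto()    # patterns with predictive power
    WISDOM = auto()       # systemic grasp of principles

@dataclass
class Observation:
    text: str
    relations: int    # links to other observations (cause/effect, context)
    predictive: bool  # does it generalize to situations not yet seen?
    systemic: bool    # does it express an underlying, general principle?

def classify(obs: Observation) -> Level:
    """Very rough, illustrative mapping of an observation onto the pyramid."""
    if obs.systemic:
        return Level.WISDOM
    if obs.predictive:
        return Level.KNOWLEDGE
    if obs.relations > 0:
        return Level.INFORMATION
    return Level.DATA

print(classify(Observation("It is raining.", 0, False, False)))
print(classify(Observation("The temperature dropped 15 degrees and then it started raining.", 1, False, False)))
print(classify(Observation("High humidity plus a sharp temperature drop usually leads to rain.", 2, True, False)))
```

The only point of the sketch is that each step up the pyramid presupposes richer context about how the inputs relate to each other, which is exactly what gets lost when input streams are reduced to isolated fragments.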

Recently, microblogging tools like Twitter have emerged, which forcefully restrict users’ input to 140 characters while still allowing users to reference original authors, tag keywords, and provide URLs for content and location.

News channels like CNN, celebrities like Oprah, and many companies are embarking on the microblogging adventure, to the point where it seems we can often read people’s comments before they have had the time to think about them.

Recently, a colleague attended his first event at which heavy underground twittering accompanied a formal, presentation-style conference. He claimed that this underground chatter – combined with occasional public outbursts from emerging, self-proclaimed representatives of the Twitter community – added a completely new and possibly valuable dimension to the knowledge exchange at these types of professional gatherings.

Well, I don’t know…

While I am sure there are some smart uses of microblogging tools, let me specifically inspect Twitter’s use for knowledge transfer and knowledge augmentation here:

Let’s look at it more closely: The knowledge and wisdom that a well-prepared speaker communicates to the crowd – based on a lifetime of experience and delivered in the form of simplified slides, multimedia materials, his or her voice, facial expressions, and gestures – is absorbed by a most likely less experienced attendee whose mind is simultaneously occupied with competing against others in the crowd to comment on that input in rapid sequence, 140 characters or less at a time.

Experiment: Turn on the TV, take a sheet of paper, and capture what’s being said…. Done? Easy, isn’t it!

Now: Instead of capturing content, comment on what you hear while listening to the TV show. Still easy? What if I asked you now about details of the show? Most likely, you would draw a blank, since your mind was so occupied with forming an opinion and putting it to paper that you had to stop following the show. We humans are just not good at parallel processing once we turn on our cognitive abilities and start thinking about the data we are absorbing.

In addition, since there is not enough space to include any contextual information and most twitterers don’t know each other personally, the communicated information is reduced to bits and chunks of data flying through the twittersphere. Since the speaker might not have had his Twitter handle on the first slide, he might not even get referenced appropriately for lack of space, thus removing the last hope of the data being interpreted in context.



So we have just reduced the wisdom of an individual down to a multitude of data chunks being broadcast through the networks. What makes matters worse is the way some people use microblogging tools like Twitter to seek information and build knowledge. Busy with keeping up with their legions of followees (people they are following) and trying to make sense of the multitude of parallel discussion threads they are engaged in, many people don’t seem to have the time anymore to reflect and to critically inspect the origin and context in which the data was presented. Yet this process is crucial to finding relationships between data samples in order to turn them into information.

As a consequence, any data is elevated to the level of trustworthy information, which then makes pattern recognition easy: knowledge is whatever is read more than once from different sources. However, that should usually mean – as in serious journalism practice, for instance – from independent sources. Unfortunately, microblogging sites are also social networking sites. The social networks propagate information multiple times, and it is very hard to ensure independence. Surowiecki’s book The Wisdom of Crowds emphasized that such wisdom requires independent knowledge contribution and aggregation rather than the unfiltered propagation of word-of-mouth data.
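
To make the difference between repetition and independence concrete, here is a small, hypothetical Python sketch. The message format, the authors, and the idea of carrying along an 'original author' field are all invented for illustration; the sketch simply counts how often a claim is seen versus how many independent origins it actually has:

```python
def corroboration(messages, claim):
    """Count raw repetitions of a claim vs. its independent origins.

    Each message is a (poster, original_author, text) tuple; a retweet
    keeps the original author, so propagation stays visible.
    """
    matching = [(poster, origin) for poster, origin, text in messages if claim in text]
    repetitions = len(matching)
    independent_sources = len({origin for _, origin in matching})
    return repetitions, independent_sources

# Hypothetical twittersphere traffic about one claim
messages = [
    ("alice", "alice", "Speaker claims X causes Y"),
    ("bob",   "alice", "RT @alice: Speaker claims X causes Y"),
    ("carol", "alice", "RT @alice: Speaker claims X causes Y"),
    ("dave",  "dave",  "Measured it myself: X causes Y"),
]

reps, indep = corroboration(messages, "X causes Y")
print(f"Claim seen {reps} times, but backed by only {indep} independent sources.")
# -> Claim seen 4 times, but backed by only 2 independent sources.
```

Counting repetitions alone would suggest strong corroboration; tracking origins shows that most of it is just propagation through the social graph.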

Don’t you think there are better ways to gain information and build knowledge?

And what’s wrong with listening to and trusting the wisdom of an expert?

Saturday, April 18, 2009

Healthcare and STEM Education – Siamese Twins in Reform and Innovation

The US has lost ground in recent years to other leading industrial nations in attracting new generations to science, technology, engineering, and math (STEM) careers. This has resulted in decreased enrollment rates in STEM college and university programs. Much of this trend is related to the global outsourcing of many of the related jobs in established industries, which eliminates the corresponding career incentives for high-school graduates. Industry and academia agree that this educational trend threatens the US economic and intellectual future and is one of the biggest challenges for current generations. Other reasons are related to an antiquated school system that has not changed significantly since the industrial revolution and is, therefore, still favoring universal education over academic excellence at the PK-12 level.
National reform of our public school systems will take a long time, so one shorter-term solution is to focus specifically on women and minorities who are traditionally underrepresented in these careers. We need to find new approaches to attract minority groups to STEM programs, plain and simple.
One way we might do that is to embed STEM curriculum into currently desirable career fields. For instance, in a previous career at the University of Rhode Island, I wrapped traditional computer science education into the context of game design in order to attract more students. Another related initiative combined colleagues’ traditional STEM education with industrially relevant experiences and international exchange in order to emphasize the diversity and breadth of related career paths.
I will admit, however, that while this kind of approach can demonstrate isolated successes, it cannot change fundamental issues in career choice that are closely tied to gender differences: Although girls increasingly outperform boys in K-16 education, the resulting female dominance does not seem to translate well into higher education or even STEM careers.
An extensive body of scientific research suggests that the apparent difference of career choice is in part related to gender differences in risk preferences, social preferences and competitive preferences. These differences have largely evolutionary roots, leading apparently – together with workplace discrimination and social acceptance pressure – to women’s ‘attraction’ to jobs with lower mean, lower-variance salaries. This relationship between evolved gender differences and occupational segregation might be hard to influence, so a bigger benefit may come from channeling these differences into new opportunities:

“The tendency of men to predominate in fields imposing high quantitative demands, high physical risk, and low social demands, and the tendency of women to be drawn to less quantitatively demanding fields, safer jobs, and jobs with a higher social content are, at least in part, artifacts of an evolutionary history that has left the human species with a sexually dimorphic mind. These differences are proximately mediated by sex hormones.”

What about healthcare reform?

It is widely accepted that healthcare reform will heavily rely on information and telecommunication technologies. Whether you are talking about electronic medical records or personal health records, telemedicine, telemonitoring or teleconsultation, online social communities of interest, remote caregiving, or Aging in Place, the trend from provider-centric healthcare to home- and individual-centered health and wellness is on the horizon.

This newfound demand is creating an unprecedented need for scientific, technological, and engineering innovations. The corresponding career paths have the potential to combine job recognition and safety with the social content and rewards that, according to the studies, are sought after by many women.
Could a potential to channel gender differences into new opportunities lie somewhere within healthcare reform? Early signs point to yes. Women are at the forefront of many of the emerging multidisciplinary research fields underlying the aforementioned healthcare IT R&D opportunities. These research fields combine aspects of Computer Science and Computer Engineering with Psychology, Social Sciences, Anthropology, Medicine and Communication and include Human-Computer Interaction, Affective Computing, Privacy Engineering, Health Communications (incl. Games for Health), Assistive Robots, and Online Social Networking, to mention a few. These women are the role models for new generations of women in STEM careers.
However, the emergence of such role models and the mere existence of the described opportunities for reform in health, wellness, and STEM education are not enough to catalyze the rapid change that is required. STEM education reformers are struggling to understand how to attract women and minorities to traditional STEM higher-education programs. There is also a struggle in determining how to develop frameworks for providing stronger workplace support for these underrepresented groups in STEM careers.
Where are the STEM curricula and initiatives specifically addressing the opportunities promised by home-centered healthcare and personalized health and wellness?
Where are the interdisciplinary centers communicating these opportunities to today’s high-school graduates?
And where are the public-private partnerships that can provide political decision makers with the implementation frameworks to link healthcare reform to STEM education reform?
The answers to these questions will not come from tweaking standardized testing or from adding an extra hour of health and wellness per week in schools … So where will they come from?

Wednesday, March 25, 2009

How to Lose Friends with Social Media

Recently a colleague complained to me that I’m not following him back on Twitter. Another one posted a comment to a Facebook application, offended that my wall was not accessible to her. And another one got annoyed when comments to my online status stayed unanswered for long periods of time, although there seemed to be plenty of activity on my profile.

This made me think about the different ways people use their online social networks and tools – and about their personal expectations.

In general, people don’t seem to realize that most social technologies, including Twitter and Facebook, are asynchronous communication tools. That means they are meant for posting information now that other people pick up at a later point in time, at their own discretion. Consequently, the builders of those tools have built in mechanisms and algorithms which, in an attempt to manage the communication load, display the newest status updates, photos, and news from the various ‘friends’ you follow: newest first, but in no particular order and without any particular ranking. Therefore, your profile may look active today when your updates are actually from a while ago. What makes matters worse is the fact that your profile also displays replies, posts, and comments by your friends, depending on your preference settings. So there may be recent activity on your profile although you haven’t logged in for weeks.
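
As a toy illustration of why this happens (the timestamps, names, and item format below are all made up), consider how a merged, newest-first activity feed behaves:

```python
from datetime import datetime, timedelta

now = datetime(2009, 3, 25, 12, 0)

# Hypothetical activity items: (timestamp, author, description)
my_activity = [
    (now - timedelta(weeks=3), "me", "status update"),
]
friends_activity = [
    (now - timedelta(hours=2), "colleague_a", "comment on my old status"),
    (now - timedelta(hours=5), "colleague_b", "post on my wall"),
]

# A typical profile page: everything merged and shown newest first,
# regardless of when the profile's owner last logged in.
feed = sorted(my_activity + friends_activity, reverse=True)

for timestamp, author, description in feed:
    print(f"{timestamp:%b %d %H:%M}  {author:12s}  {description}")

# The top entries are only hours old even though my own last post is
# weeks old, so the profile looks active while I have been away.
```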

What we need to remember is that people use these tools in different ways, depending on how they are able to access them throughout the day. For example, due to company security restrictions, I can only access most social media sites from my iPod touch during the day and from my home desktop at night. Consequently, I try to manage my Twitter stream by:
  • only following people who talk about things of interest to me (which at this point does not include when they take a shower or watch the sun rise) :)
  • only posting information and links on Twitter that I find particularly intriguing from a professional and intellectual perspective
Some good additional suggestions on social media etiquette were posted by Chris Brogan.


Consequently, I don’t prohibit anybody from following me but choose whom to follow based on the above criteria. Unfortunately but not surprisingly, other people use Twitter in different ways, which include building an online reputation as connectors or distributors of any kind of information, measured by the ratio of followers to followees (called ‘tweeciprocity’ on Twitter) or the like.

Sorry, guys, for virtually screwing up your cyber-reputation. I hope that the intellectual and informative value my posts provide compensates for that. :)

But back to asynchronous communication tools, old-fashioned email being one of them... They allow you to access and respond in a different-place/different-time manner, and thus the expectations of somebody waiting for a response should adjust accordingly. The fact that you may instantaneously see my post creates neither the need nor the ability for me to respond immediately; nor does it require me to respond at all. :)

In contrast, synchronous communication gives you instant feedback but also requires you to respond immediately. This direct feedback loop, however, helps to quickly overcome ambiguity, reach agreement, and minimize time, and is, therefore, a much better way to arrive at mutual consent and to make decisions.

So, why not pick up the phone if you actually want to accomplish something?

Or, if the person you want to talk to is actually sitting in the cubicle across the aisle: Why not get up, walk over, and talk to him or her?

You might actually make a real friend …

Monday, March 16, 2009

Attention Deficit Disorder: Personal Demise or the Next Step of Human Evolution?

According to modern evolutionary theories, evolution is based on two fundamental changes in life forms, both of which adhere to the process of natural selection: arbitrary mutation and adaptation to the environment.

Recent years have seen a remarkable increase in children diagnosed with attention deficit disorder (ADD) often in conjunction with hyperactivity (ADHD). ADHD is now thought to occur in 3-5% of school-age children and is more common in boys. It is not yet known what causes ADD, but there does seem to be a genetic influence.

As a condition, ADD is considered to be a deviation from 'normal' capabilities (which constitutes a challenge for educators and a burden for many parents). But what if, in classifying ADD exclusively as a state of reduced mental capacity, chemical imbalance or 'different wiring' (as often alluded to), we’ve gotten it all wrong? What if ADD was actually an adaptation to dealing with an increasingly complex environment? An environment where children grow up exposed to increased levels of external stimuli and information, at an ever-increasing pace. And an environment that presents them with an increased number of multimodal communication channels to be processed in parallel.

Some scientists even correlate rises in ADD among children to a nature-deficit disorder, i.e. the lack of exposure of today’s children to the need for involuntary attention processes as required in natural environments.

At the same time, popular new-age beliefs about the power of subconscious decision-making over thorough scientific proof are on the rise (see e.g. Malcolm Gladwell’s Blink), paving the way for societal acceptance of short-term focus and spontaneous, uninformed snap judgments.

Not surprisingly, this theory of “thinking without thinking” has been sharply criticized by supporters of evidence-based decision making such as Michael LeGault in Th!nk, who advocates the continuing need for critical-thinking and problem-solving strategies and points to a concerning decline of the related capabilities among younger generations.

So, are we dealing with a classic dichotomy of contrasting trends and opinions, both of which would support the hypothesis of natural selection through adaptation, with one, however, providing hope and justification for phenomena like ADD and the other dooming us to a future of self-inflicted mental decline?

Many aspects of the modern information society carry the risk of information overload for the human recipient, along with the need to quickly filter huge amounts of references rather than store significant amounts of information for longer periods of time.

Effectively navigating the jungle of online media has become essential to gaining and maintaining social connectivity and acceptance. Combing through a multitude of opinions in the form of blogs and discussion forums, while engaging in duels of rapidly fired bursts of microblogs to create situational awareness in an increasingly complex world, has replaced externally led knowledge acquisition and indoctrination. It is now truer than ever: "It's who you know, not what you know!"

So, are modern IT-based communication tools and techniques our response to reduced human capacity in focusing, critical thinking and long-term memorization or are those societal trends and challenges the result of the new and celebrated technologies?

Is the best medicine in this case, consequently, possibly no medicine at all, but rather better diagnosis that differentiates between true ADD/ADHD and the different learning styles and cognitive specializations of today's children?

Might it, therefore, require a revised educational system that embraces mental diversity so as not to stand in the way of human evolution and adaptation to ICT innovation?

Or should we – God forbid – be a little more critical about the adoption of modern technologies, and should we study and consider their impact more thoroughly before exposing future generations to them?

Monday, February 9, 2009

Nosy Displays

Watch out when you pass by these displays - small embedded cameras might be watching, profiling, and classifying you in order to then provide custom-tailored information. While this might be a good thing for many applications such as security and automated patient reception, the privacy concerns cannot be overlooked. Especially since it will not be long until every display will also be a camera by default, so that software could turn it into such a capturing and analysis device. The result: Spam staring you in the face ...

http://tech.yahoo.com/news/ap/20090130/ap_on_hi_te/tec_nosy_ads

MIT Students Turn Internet Into a Sixth Human Sense

This type of fluid interface is just another step in the direction that has made the iPhone so popular: integrating just-in-time data capture with intuitive interfaces and data manipulation. The difference here is that the mobile device is just the computing system behind the scenes - the display (i.e. projection) surface can be any object. Another step toward making it hard to distinguish between real and digital worlds and artifacts.

TED: MIT Students Turn Internet Into a Sixth Human Sense -- Video | Epicenter from Wired.com

Tuesday, January 13, 2009

Mind games

What started with monkey experiments and was targeted at enabling brain-damaged persons to interact with a computing system has now gained the interest of the toy and entertainment industry. Using your brain waves to DO things is no longer just something you read about in science-fiction novels but commodity reality, as this Mattel toy shows. Therefore, there is hope that this technology, which could help so many desperate differently-abled people, will now receive a major push - because it can also be commercialized in the mainstream market. If you remember, this is what already happened with the graphics and Virtual Reality industries, which struggled to find their killer applications - until commodity PC hardware and inexpensive gaming hardware and software made them accessible to the consumer market - maybe less sophisticated, but so much more fun, inexpensive, and lucrative.

New games powered by brain waves

Robot with a tender touch

So far, most robot applications have insisted on being 'hands-off' due to the overarching concern of people getting hurt by a robot's clumsiness and rigidity. However, there are many applications - from hospital patient management to elderly care - where some hands-on robot-human interaction will be required. Here now is a first attempt to tame the beast. We'll probably have to wait a long time before that makes it through FDA approval, though ...

Guide robot steers with a tender touch - tech - 12 January 2009 - New Scientist

AI-based virtual agent for call centers lowers costs, improves caller experience

It seems as if the work that DFKI in Germany did with their Persona project almost 20 years ago, and that others (like Pattie Maes) did on autonomous intelligent software bots, has finally made it into the mainstream. And it's a killer application! Call centers are at the forefront of any consumer experience with a service-oriented large business. And we probably all have our stories to share of waiting for hours on the line to talk to a human being, only to get very uninformed and generalized answers. Here now might be a solution to the problem. With these autonomous AI bots, theoretically no wait is necessary anymore and, due to the bot's ability to instantly access, correlate, and process much larger amounts of personal and other information, it might actually be able to give better, more personalized, and context-relevant advice.
So - if you suddenly have a drastically improved consumer experience with a call center - there might be a bot talking to you ....

http://www.kurzweilai.net/news/frame.html?main=news_single.html?id%3D9980

Wednesday, January 7, 2009

Live long, prosper: Joseph Coughlin on Longevity 3.0 | The Pop!Tech Blog | Accelerating the Positive Impact of Worldchanging People and Ideas

Joe Coughlin from the MIT AgeLab gives a great perspective on how aging concerns all of us and what role technology could play in making it the next stage of life rather than the end terminal.

Live long, prosper: Joseph Coughlin on Longevity 3.0 The Pop!Tech Blog Accelerating the Positive Impact of Worldchanging People and Ideas

Saturday, January 3, 2009

She-3PO: Canadian Inventor builds 'perfect woman' robot

http://www.thesun.co.uk/sol/homepage/news/article2023392.ece

I guess the motivation is questionable. However, it shows that robotic companions are not so far out in the future. And the more human-like those robotic companions can behave and communicate, the more intuitive the human-robot interaction will be, thus eliminating the need for extensive training or reading manuals and giving a broader public access to such anthropomorphic computing interfaces.

Clothing with a brain: 'Smart fabrics' that monitor health

http://www.physorg.com/news147928092.html

Location-based sensors (GPS and the like) and sensors that measure daily activity (such as those from Nike) were a good first step toward capturing one's just-in-time context. But with those 'Smart Fabrics' we will finally be able to know what people really need in every situation, rather than having to infer it from a few data points and other contextual information. This will be huge in the prevention of catastrophic medical events (such as strokes or heart attacks), but even more so in giving just-in-time personalized advice on how to optimize one's wellness. Can't wait to get into those clothes ...