From the first computer mouse to the Nintendo Wii remote
Posted by Michael Wolf on March 20, 2009
The first computer mouse, designed by Douglas Engelbart in 1964, is in principle similar to the ones we use today.
While the influence of digital technology on everyday life grows stronger, offering us new tools and possibilities, interaction designers, human-computer interaction specialists and media artists try to meet the demand for digital tools better adapted to human behaviour. As early as 30 years ago, technologists, designers and media artists began to rediscover the experience of body and space, letting users navigate and interact with multimedia content by means of gestures and body movement. Yet gesture-controlled interfaces have not come close to replacing the window, icon, menu, pointing device (WIMP) paradigm that has dominated how we interact with computers for decades.
Since 1973, the essentials of human-computer interaction have not changed. The Xerox Alto was the first computer to deploy the desktop metaphor on a monitor, in combination with a keyboard and a three-button mouse as input devices. Although graphical user interfaces have become more sophisticated, enabling users to manage an endless array of tools and tasks, the main principle, together with all its proven limitations, remains unchanged. For 35 years, WIMP has tied the vision of millions of people to a 17” rectangle and their hand movement to a patch of 14” or less. Co-located collaboration using WIMP interaction remains a major hurdle, as one person involuntarily becomes the scribe while the others do – well – something else. Revolutionary 35 years ago, WIMP today seems like a dinosaur that has somehow managed to disguise itself as a circus elephant by learning rope dancing and other tricks. Astoundingly, the same people who are responsible for 35 years of “mouse-clicking” have come up with a whole range of other ideas that have not yet conquered the workplace, but are slowly but surely filtering through to other areas of everyday life.
In 1991, Mark Weiser of Xerox PARC began his Scientific American article “The Computer for the 21st Century” with the words: “The most profound technologies are those that disappear.” He was referring to computers that work silently in the background, become part of our everyday life and extend the functionality of our other tools. According to Weiser, interaction with these invisible “workers” takes place through actions and behaviours that are well known to us. He refers to this as the “seamless integration” of the computer into the physical world. Weiser claims: “They weave themselves into the fabric of everyday life until they are indistinguishable from it.” He thus describes the emerging field of “ubiquitous computing”, a concept which has been used and discussed in many models and applications since the early 1990s. Examples such as the “intelligent house” (in which the fridge automatically orders new food from the supermarket via the internet) often lead to uncritical technology euphoria as well as to undifferentiated “Big Brother is watching you” fears. The concept of invisible computers extending functional aspects of daily life led to the concept of “augmented reality”. Instead of developing completely new applications, a group of researchers at Xerox PARC decided to extend the context of existing physical products with multimedia and computing technology. Gradually, books and other objects were equipped with tiny chips (electromagnetic ID tags) that refer and link to dynamic content. This information becomes accessible when the object comes close to output devices such as printers or screens.
In a project named “DigitalDesk”, a real physical desk was equipped with certain functions of a virtual desktop. This made it possible to interact manually with the computer while still using ordinary tools such as a pencil and a rubber. The model relied on a camera tracking system that detected movements on the desk and passed this information to the software, while the resulting changes were projected back onto the desk surface (Wellner, Newman, 1992).
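The core of such a setup is mapping what the camera sees onto coordinates on the desk. A minimal calibration sketch in Python can illustrate the idea; the three-point affine approach and every name below are my illustrative assumptions, not details of Wellner and Newman’s actual implementation:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve for a 2-D affine transform A (2x3) such that
    A @ [x, y, 1] maps each src point onto its dst point.

    src: three (x, y) calibration markers as seen in camera pixels.
    dst: the same three markers' known positions on the desk.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Design matrix: one row [x, y, 1] per calibration point.
    M = np.hstack([src, np.ones((3, 1))])
    # Solve the two linear systems (one per output coordinate).
    A = np.linalg.solve(M, dst)   # shape (3, 2)
    return A.T                    # shape (2, 3)

def camera_to_desk(A, point):
    """Map a camera pixel coordinate onto desk coordinates."""
    x, y = point
    return tuple(A @ np.array([x, y, 1.0]))
```

Once three markers have been matched, the same transform converts any tracked fingertip position into desk coordinates, so the projection can respond at the right spot.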
Projects by media artists explore interfaces that interpret gesture and facial expression. In “Murmuring Fields”, visitors to an architectural space can navigate through data spaces and interact with media content using full-body movement and gestures. The vision of this immersive installation is “a room furnished with data” (Fleischmann, Strauss, 2000).
A multitude of other models call for a fundamental change in human-computer interaction. In recent years, however, only the computer gaming industry has been able to bring a noteworthy new approach to a broader market. The Nintendo Wii console uses camera tracking and three-axis accelerometer sensors to create an interface that supports natural gestures, such as pointing at, moving and rotating objects in three-dimensional space. A growing community of developers and users employs the Wii system to control other applications or their desktop computers.
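How a three-axis accelerometer supports such gestures can be sketched in a few lines: when the controller is held roughly still, gravity dominates the reading, and its direction reveals the device’s tilt. The Python sketch below is illustrative only; the axis conventions are an assumption, and a real Wii remote reports raw sensor counts that must first be calibrated to units of g:

```python
import math

def tilt_from_acc(ax, ay, az):
    """Estimate pitch and roll (in degrees) from one accelerometer
    reading, given acceleration along the device axes in units of g
    and assuming the device is held still so gravity dominates.
    """
    # Pitch: rotation about the lateral axis (front edge up or down).
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    # Roll: rotation about the longitudinal axis (tilting sideways).
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

A device lying flat and still reads roughly (0, 0, 1) g and yields zero pitch and roll; streaming such estimates over time is one simple way software can follow a rotating hand.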
A key problem here, obviously, is the integration of a new interaction concept with software that has been designed for WIMP. Navigating the standard iTunes interface with the Wii remote, as one member of that community proudly demonstrates, merely replaces the standard mouse with a marginally different pointing device.