LAS VEGAS Developing the "perfect" human interface to a computing device is a matter of finding more-efficient ways to access the volume of content available on stationary and mobile computing platforms. Three presentations at the International Conference on Consumer Electronics, held here Jan. 11-14, exemplify this work.
Researchers at the Hitachi Human Interaction Laboratory in Tokyo collaborated with Interaction Design of the Royal College of Art in London to develop an interface that uses a poetry-reading metaphor to make a rich variety of content easier to access. The user searches for content by turning the pages of a book-like device, finding desired content without excessive scrolling, repeated button presses or the complex graphical user interfaces typical of remote controls. The prototype can control Internet browsing, a TV or radio, and media files stored on a hard-disk drive.
Reading poetry is a special case. Typically, the researchers said, a reader knows where a favorite page is located no matter how thick the book is. Usually the book will open to that spot because the reader keeps coming back to the favorite poem. Later, the reader may put the book face down for future reading.
The researchers emulated this scenario and developed a physical prototype of a book-like device. The device can detect three states: whether the book is open or closed, the page the book is opened to and whether the book is face up or down.
To distinguish these states, light-dependent resistors (LDRs) on each page and on the front and back covers measure brightness, producing gray-scale values from 0 (bright) to 255 (dark). When the book is closed, the value from the LDR on the exposed cover (LDR-f on the front or LDR-b on the back) is the smallest.
When page number "n" is opened, the value from LDR-n on page n becomes the smallest. When the book is face down, the values from LDR-f and LDR-b are the first and second smallest. A peripheral interface controller judges the state using the input value from each LDR and sends the state as output to the target device.
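The ranking logic the controller applies can be sketched as follows; this is a minimal illustration of the decision rule described above, and the function and variable names are hypothetical, not from the Hitachi prototype.

```python
# Hypothetical sketch of the book-state detection rule described above.
# LDR readings are gray-scale values: 0 = bright (exposed), 255 = dark (covered).
def classify_book_state(ldr_front, ldr_back, ldr_pages):
    """Return the book's state from one set of LDR readings.

    ldr_pages maps page number -> reading from that page's LDR.
    """
    readings = {"front": ldr_front, "back": ldr_back, **ldr_pages}
    ranked = sorted(readings, key=readings.get)  # brightest (smallest) first

    if ranked[0] in ("front", "back"):
        if set(ranked[:2]) == {"front", "back"}:
            # Both cover LDRs are the brightest: the open book lies face down.
            return ("face_down", None)
        # Only one cover LDR is exposed: the book is closed.
        return ("closed", None)
    # Otherwise the brightest LDR sits on an open page.
    return ("open", ranked[0])
```

In the actual device this judgment runs on the peripheral interface controller, which then forwards the state to the target device.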
The device was applied as a television remote control to change channels by turning the pages. Each channel is linked to a page. Files stored on a hard-disk drive, such as recorded TV programs or programs downloaded using video on demand, can be linked to the pages.
The interface can also search for digital radio stations by turning the pages and turning the volume on or off by turning the book face up or face down. Also, a digital photo viewer based on the book device allows a user to find his favorite photo by flipping pages. A bookmarked Web site can be assigned to a page without searching for it in the bookmark list.
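Conceptually, each page acts as a key into a table of device commands, and the face-down state maps to muting. The sketch below illustrates that binding; the specific page-to-content assignments are invented for illustration and do not come from the prototype.

```python
# Hypothetical page-to-action bindings for the book remote control.
PAGE_BINDINGS = {
    1: ("tv_channel", 4),
    2: ("tv_channel", 7),
    3: ("hdd_file", "recorded_program.ts"),
    4: ("radio_station", "Station A"),
    5: ("bookmark", "https://example.com/"),
}

def on_state_change(state, page=None):
    """Dispatch a control command for the detected book state."""
    if state == "face_down":
        return ("mute", None)       # turning the book face down silences audio
    if state == "open" and page in PAGE_BINDINGS:
        return PAGE_BINDINGS[page]  # turning to a page selects its content
    return ("idle", None)
```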
A magnified view
Another user interface, developed by researchers from Mitsubishi Electric Microcomputer Application Software Co. Ltd. (Nagaokakyo, Japan) and Ryukoku University (Otsu, Japan), changes what is displayed as the 640 x 480-pixel viewer itself is moved.
On most portable displays, a large amount of information must be presented in a small display area. As a result, a selected icon and the information behind it cannot be displayed at the same time.
Researchers at Mitsubishi have developed a viewer whose interface acts like a magnifying glass that the user can operate intuitively. The contents displayed on the viewer's small screen change in real time as the viewer moves, much as a magnifying glass reveals different parts of a page. The system takes its input from the physical movement of the device itself, without requiring special input tools such as a mouse or buttons.
An experimental system with an embedded gyro sensor and microcomputer confirmed that the user's handling of the viewer is reflected on the screen by detecting how far the viewer is tilted, slid and rotated.
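One way to picture the magnifying-glass metaphor is as a movable viewport over a larger virtual page, updated each frame from the sensed motion. The sketch below is a plausible model under that assumption; the class, scale factors and clamping limits are illustrative, not taken from the Mitsubishi system.

```python
# Hypothetical model of the viewer as a 640 x 480 window over a larger page:
# sliding pans, tilting zooms, twisting rotates. Parameters are illustrative.
class MagnifierViewport:
    WIDTH, HEIGHT = 640, 480

    def __init__(self, x=0.0, y=0.0, zoom=1.0, angle=0.0):
        self.x, self.y, self.zoom, self.angle = x, y, zoom, angle

    def update(self, slide_x, slide_y, tilt, rotation):
        """Apply one frame of sensed motion to the viewport."""
        self.x += slide_x                                        # slide pans
        self.y += slide_y
        self.zoom = max(0.5, min(4.0, self.zoom + 0.01 * tilt))  # tilt zooms
        self.angle = (self.angle + rotation) % 360               # twist rotates
        return (self.x, self.y, self.zoom, self.angle)
```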
Researchers at Kyungpook National University's Department of Information and Communications and School of Electrical Engineering and Computer Science (Daegu, South Korea) developed an interface that recognizes a user's sequence-action. The user interface recognizes a user's various hand gestures such as throwing, swinging, tilting, pushing, pulling and snatching, and it can estimate the user's various postures.
A state machine algorithm for sequence-action recognition is applied to the output signals of an embedded two-axis accelerometer (ADXL202EB). The output is signal-processed for gesture recognition and posture estimation. The resulting signal identifies the user's posture and gesture stage by stage, by comparing it against the typical accelerometer signature for each action.
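The stage-by-stage idea can be sketched as a small state machine that walks through the phases of one gesture as the accelerometer signal evolves. The example below recognizes a hypothetical "push" gesture on a single axis; the states, threshold and gesture profile are illustrative assumptions, not the published algorithm.

```python
# Hypothetical state machine for stage-by-stage gesture recognition.
# A "push" is modeled as forward acceleration, then braking, then rest.
IDLE, ACCEL, DECEL = "idle", "accelerating", "decelerating"

def recognize_push(samples, threshold=0.5):
    """Scan one axis of accelerometer samples for a push gesture."""
    state = IDLE
    for a in samples:
        if state == IDLE and a > threshold:
            state = ACCEL                 # hand starts moving forward
        elif state == ACCEL and a < -threshold:
            state = DECEL                 # hand brakes: gesture completing
        elif state == DECEL and abs(a) < threshold:
            return True                   # signal settles: push recognized
    return False
```

Each gesture in the vocabulary (throwing, swinging, tilting and so on) would get its own state sequence of this kind, which is what makes the approach cheap enough for low-end devices.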
The researchers concluded that the state machine approach is both effective for sequence-action recognition and suitable for low-end mobile devices, since an accelerometer can be embedded easily and inexpensively.