Manhasset, N.Y. -- George Robertson, a leader in the study of human-computer interaction, has been inducted into the CHI Academy for his contributions to the field.
Robertson, who coined the term "information visualization," began Microsoft Research's active involvement in human-computer interaction, or HCI, with his hiring in 1996.
Information visualization describes a way to present data in nontraditional, interactive graphical forms, using 2-D or 3-D color graphics and animation. These illustrations can show the structure of information, making it possible to navigate through and modify it with graphical interactions. Military leaders, businessmen and intelligence gatherers have adopted such visualization techniques to put information in a readable graphical format.
Robertson is a member of the National Visualization and Analytics Center Panel at the Pacific Northwest National Laboratory (Richland, Wash.), one of the national labs analyzing intelligence on terrorist activities. "The problem is huge," Robertson said. "In one database alone there are 120 billion documents, and roughly 1 million documents change every hour as analysts search for clues. That requires an enormous effort to show graphically."
Robertson was inducted in ceremonies at the recent CHI 2006 conference in Quebec, where he presented two papers that show the incremental directions in which Microsoft is headed. In the first, Robertson and his colleagues explored peripheral display techniques that help users maintain task flow, know when to resume tasks and reacquire tasks more easily. Specifically, they focused on two types of abstraction: semantic-content extraction, which displays only a window's most relevant content, and change detection, which signals when a window's content has changed.
In a study, Robertson and his colleagues compared four peripheral graphical software interfaces using different types of abstraction that provided varying types of task information: scaling, which showed a window's layout overview; change detection, showing whether a change had occurred; semantic-content extraction, displaying a small piece of the most relevant window content; and a combination of change detection and semantic-content extraction.
In a user study, 16 men and 10 women compared the four interfaces in simulated multitasking. Semantic-content extraction proved more effective than both change detection and scaling at improving multitasking efficiency, and it significantly benefited task flow, resumption timing and reacquisition.
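The four abstraction strategies can be pictured as different transformations applied to a background window before it is rendered in the periphery. The sketch below is purely illustrative -- the data model and function names are invented for the example and do not come from the paper:

```python
# Illustrative sketch of the four peripheral-display abstractions compared
# in the study. A background "window" is modeled as a list of text lines,
# each tagged with a relevance score; all names here are hypothetical.

def scale(window, max_lines=2):
    """Scaling: show a shrunken layout overview -- here, the first few lines."""
    return window["lines"][:max_lines]

def detect_change(window, last_seen):
    """Change detection: signal only whether the content has changed."""
    return "CHANGED" if window["lines"] != last_seen else "unchanged"

def extract_semantic(window):
    """Semantic-content extraction: show only the most relevant line."""
    return max(window["lines"], key=lambda line: line["relevance"])["text"]

def combined(window, last_seen):
    """Combination: a change signal plus the most relevant content."""
    return (detect_change(window, last_seen), extract_semantic(window))

window = {"lines": [
    {"text": "Build started...", "relevance": 0.2},
    {"text": "ERROR: 3 tests failed", "relevance": 0.9},
    {"text": "Uploading logs", "relevance": 0.4},
]}
previous = []  # nothing seen yet

print(extract_semantic(window))    # shows only the highest-relevance line
print(combined(window, previous))  # change flag plus that same line
```

The study's result maps onto this sketch directly: surfacing the single most relevant line (extract_semantic) told users more about whether a task needed attention than a shrunken overview or a bare changed/unchanged flag.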
A second paper, with input from the Redmond labs and the computer science department at the University of Maryland (College Park), described a novel way to search large data sets from a mobile phone. Existing mobile searches require keyword text entry and are ill-suited to browsing. The researchers' alternative hybrid model de-emphasizes tedious keyword entry in favor of iterative data filtering.
With nearly 780 million mobile phones expected to be sold this year, and with yearly worldwide demand projected to top 1 billion by 2009, phones are increasingly being used as front-end interfaces to ever-larger external data sets, including Web sites, traffic information and Yellow Pages. Nearly a dozen query-answer systems and Web browser interfaces that target mobile platforms have debuted in the last year.
While existing solutions cater to small screens and low bandwidth, they are modeled after desktop Web search, which poses usability problems on mobile devices. As an alternative, Microsoft researchers have developed a keypad-driven, compact query interface for browsing and searching large data sets from a phone. Called FaThumb, it uses a hybrid model based on hierarchical faceted metadata navigation and selection, with incremental text entry to further narrow results.
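The hybrid model amounts to iterative filtering over facet-tagged listings, with typed characters reserved for the final narrowing step. The sketch below illustrates that idea in the spirit of FaThumb; the listing data and function names are invented for the example, not drawn from the actual system:

```python
# Illustrative sketch of hierarchical faceted filtering plus incremental
# text entry, in the spirit of FaThumb's hybrid model. All data and names
# are hypothetical examples, not the system's actual API.

listings = [
    {"name": "Pike Place Chowder", "facets": {"category": "restaurant", "area": "downtown"}},
    {"name": "Elliott Bay Books",  "facets": {"category": "retail",     "area": "downtown"}},
    {"name": "Canlis",             "facets": {"category": "restaurant", "area": "queen anne"}},
]

def filter_by_facet(results, facet, value):
    """Narrow results by selecting one value within a facet hierarchy."""
    return [r for r in results if r["facets"].get(facet) == value]

def filter_by_text(results, prefix):
    """Narrow results further with incremental (prefix) text entry."""
    return [r for r in results if r["name"].lower().startswith(prefix.lower())]

# Navigate facets with the keypad first, then type a few characters
# only if needed to finish the query.
step1 = filter_by_facet(listings, "category", "restaurant")  # 2 matches left
step2 = filter_by_facet(step1, "area", "downtown")           # 1 match left
step3 = filter_by_text(step2, "pi")
print([r["name"] for r in step3])  # the single remaining listing
```

Each facet selection cuts the result set with a few keypad presses, so text entry is needed only when facets alone cannot disambiguate -- which is the trade-off the user study examined.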
In a study, researchers used the roughly 39,000 listings of the Seattle metropolitan-area Yellow Pages, but the design was intended to generalize to a variety of data sets, including personal data such as e-mail and contacts, Web pages and movie listings. The study confirmed the basic hypothesis: If you know something specific--the target name, let's say--text entry (even on a phone) is faster. If you know only data characteristics, facet navigation is faster. Ultimately, real-world tasks require both techniques.