PORTLAND, Ore. The National Science Foundation (NSF) is funding development of electronic visualization technology, providing $500,000 in seed money to the University of Illinois at Chicago (UIC) and the University of Central Florida in Orlando.
"Our budget is about one-fiftieth of a typical video game developer's," said researcher Jason Leigh, a UIC professor and director of its Electronic Visualization Laboratory (EVL). Still, "we think we can put together a working prototype by 2009."
The three-year effort will begin with graduate students spending several months shadowing an undisclosed NSF official, capturing his mannerisms, gait and lifestyle in a series of video and audio recordings. Back at the lab in Chicago, students will craft avatars that digitize all the visual aspects of the mystery NSF director.
[Photo: UIC researchers and their avatars]
Likewise, Florida researchers will encapsulate the NSF official's management expertise in an artificial intelligence knowledge base. "We're not supposed to disclose which NSF director we're digitizing," said Leigh. "But in the end, we hope to capture his knowledge and management style using AI, then let any NSF employee interact with his avatar to consult with him using natural language."
The result will be a methodology that enables future generations of computer scientists to archive human traits the same way that other types of data are archived today.
"Today we archive all types of data, the text in books, pictures of architecture, movies about important events," said Leigh. "Now we want to add people to the list of things you can archive."
The project will not, unfortunately, be able to resurrect legendary figures like Newton and Einstein, since the digitizing process requires a living subject to don a skin-tight black suit covered with tiny white balls. Once the process is perfected, however, any living person will be able to record their knowledge by answering a series of questions, and then be consulted later, virtually.
The motion-capture studio will be housed at the EVL. Subjects being digitized will have their appearance, voice, mannerisms and gait captured in real time and encoded into an avatar. Their knowledge can be captured remotely through a series of questionnaires and interviews, then archived in a database accessible by the talking-head avatar.