PORTLAND, Ore.—Next-generation augmented reality displays will employ smart recognition of their users, melding context-sensitive voice and gesture commands with awareness of the other people, places and things around them, according to John Kispert, CEO of Spansion Inc., who gave the keynote address at the Globalpress Electronics Summit 2012 last week. But smart memory will be needed to wean these advanced user interfaces off their dependence on cloud connectivity, Kispert said.
"The bottleneck to next-generation user interfaces is local memory," said Kispert. "The most advanced user-interfaces user cloud-assets to first recognize their user with facial recognition, then adjust the context by switching preferences and augmented-reality displays that are aware of a user's surroundings."
However, in the future, according to Kispert, local memory assets will substitute for cloud-based connectivity so that context-aware augmented-reality displays react faster and do not have to be online to function properly. The Hansen Report on Automotive Electronics (January 2012), for instance, cited the sluggish response of cloud-based user interfaces as the number one complaint of automobile users. Likewise, advanced voice-based user interfaces such as Apple's Speech Interpretation and Recognition Interface (Siri) require wireless cloud access to work and still provide response times measured only in seconds. Local memory assets, on the other hand, can provide context-aware augmented-reality displays that not only recognize their user in milliseconds, but also instantly recognize the other people, places and things in the immediate surroundings.
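To make the latency argument concrete, here is a minimal C sketch of the local-first pattern Kispert describes: a recognition request is answered from a table held in local nonvolatile memory, and the cloud is consulted only when no local match exists. All of the names, data layouts and values below are hypothetical illustrations, not an actual Spansion or platform API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical entry held in local nonvolatile memory (e.g. NOR flash):
 * a compact face signature paired with the user profile it selects. */
typedef struct {
    uint8_t  signature[8];    /* shortened feature vector for the sketch */
    uint32_t profile_id;      /* which preference set to activate        */
} recognition_entry_t;

/* Stand-in for a table stored in flash; contents are made up. */
static const recognition_entry_t local_cache[] = {
    { {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88}, 1 },
    { {0xA1, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7, 0xA8}, 2 },
};

/* Placeholder for the slow path: a cloud round-trip measured in seconds,
 * and only possible when connectivity exists. Always fails in this sketch. */
static bool cloud_recognize(const uint8_t *sig, uint32_t *profile_id)
{
    (void)sig; (void)profile_id;
    return false;
}

/* Local-first recognition: answer from local memory when possible,
 * fall back to the cloud only when no local match is found. */
static bool recognize_user(const uint8_t sig[8], uint32_t *profile_id)
{
    for (size_t i = 0; i < sizeof local_cache / sizeof local_cache[0]; i++) {
        if (memcmp(local_cache[i].signature, sig, 8) == 0) {
            *profile_id = local_cache[i].profile_id;  /* fast, offline path */
            return true;
        }
    }
    return cloud_recognize(sig, profile_id);          /* slow, online path  */
}

int main(void)
{
    const uint8_t seen[8] = {0xA1, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7, 0xA8};
    uint32_t profile;
    if (recognize_user(seen, &profile))
        printf("recognized locally, switching to profile %u\n", (unsigned)profile);
    else
        printf("no local match and no cloud connection\n");
    return 0;
}

The point of the pattern is simply that the common case never leaves the device; the cloud becomes an optional refinement rather than a prerequisite for the interface to respond at all.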
"User-interface convergence today has eliminated the need for manuals and instructions--people just know how things work," said Kispert. "But head-up displays, recognition algorithms and context switching needs to be able to rely on local memory to work well regardless of whether wireless cloud assess are available or not."
Not surprisingly, Spansion specializes in the NOR flash memories that provide nonvolatile configuration assets to modern user interfaces, which Kispert claims will be critical to weaning context-aware augmented-reality interfaces off exclusive dependence on cloud-based assets. By storing the information that allows devices to recognize their user, switch contexts, and recognize the other people, places and things in a user's surroundings, next-generation high-density NOR flash will solve the memory bottleneck that makes cloud-based augmented-reality user interfaces sluggish, according to Kispert, resulting in safer, more secure displays that superimpose relevant tactical information in real time.
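The role Kispert sketches for NOR flash--holding the configuration data that lets a device switch contexts without a network round-trip--can be illustrated with another small, hypothetical C sketch. The record layout, field names and values are invented for illustration and do not describe any real part's memory map or Spansion product.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-user context record kept in NOR flash. Because NOR
 * supports read-in-place, records like this can be read directly from a
 * memory-mapped region without first copying them to RAM. */
typedef struct {
    uint32_t profile_id;
    uint8_t  display_brightness;
    uint8_t  voice_feedback;   /* 0 = off, 1 = on */
    uint16_t hud_layout;       /* which overlay set to show */
} user_context_t;

/* Stand-in for the memory-mapped configuration region; on real hardware
 * this would be a fixed flash address, not a RAM array. Values are made up. */
static const user_context_t nor_context_region[] = {
    { 1, 200, 1, 3 },   /* default profile */
    { 2,  80, 0, 1 },
};
#define NUM_PROFILES (sizeof nor_context_region / sizeof nor_context_region[0])

/* Switch contexts by reading the matching record straight out of "flash":
 * no network round-trip, so the switch is near-instant rather than seconds. */
static const user_context_t *load_context(uint32_t profile_id)
{
    for (uint32_t i = 0; i < NUM_PROFILES; i++)
        if (nor_context_region[i].profile_id == profile_id)
            return &nor_context_region[i];
    return &nor_context_region[0];   /* fall back to the default profile */
}

int main(void)
{
    const user_context_t *ctx = load_context(2);
    printf("profile %u: brightness %u, HUD layout %u\n",
           (unsigned)ctx->profile_id,
           (unsigned)ctx->display_brightness,
           (unsigned)ctx->hud_layout);
    return 0;
}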
User interfaces have evolved from keyboard and mouse to user-aware voice control, but need smarter memory to make the jump to augmented-reality displays that don't depend on cloud connectivity.
I don't see anything in what is described that would indicate that this memory is "smart" in any way. I assume this is simply a spin put on the story because it comes from a company that makes memory, but the real story is an argument that more memory in a mobile platform is desirable to enable local rather than server-side processing.
I believe that years ago the term "smart memory" referred to the concept of embedding processing capabilities into the memory itself, an architecture that has yet to make sense.
We hope so. I have always thought that a new computing paradigm like neuromorphic computing would offer a better model for resolving the user-interface problem than anything else. Unfortunately, the industry has not taken any major interest in it.
We've all had the frustration of waiting for the network to catch up with our requests, especially when you are waiting for a cloud-based resource like Apple's Siri to respond. Smart-memory-based interfaces that use local resources instead of cloud connections are the answer, according to Spansion, but I suppose this is not surprising, since Spansion sells the memories that would enable next-generation user interfaces to operate in the absence of cloud connections.