Those videos are cool, but they kind of get me worried. Will people confuse the "user experience" with reality? I can imagine we'll bump into each other a lot more often on the street because we're all constantly distracted.
The photo of the prototype worn by the model reminded me of Commander La Forge on Star Trek: The Next Generation, which made me think how cool it would be if the camera could capture and render wavelengths not visible to the human eye, as was the case with the device he wore on the show.
Then I watched the video, and that iconic Star Trek memory was replaced with one from The Terminator: a little white box zooming in on the face of a person in the field of view, a brief pause for facial recognition, and then a message at the bottom of the screen indicating "target acquired".
To Rick's point, it's true the hardware and battery storage are not there yet, or even on the visible horizon, to make a device with the form factor and capabilities shown by Google. But it is not SO far out there that we should mock it.
Google and/or others will bring a product like this to market long before personal jet packs have an impact on urban planning. And when these smart glasses do hit the market, there is little doubt they will be a huge success...for better or worse.
A decade ago, stuffing the processing power of an iPad or today's iPhone into as small a PCB area as they do would probably have seemed just as impossible. Battery power per ounce is advancing more slowly than processing power, but as the chips shrink, the power requirements do as well. I can easily see Google Glass coming to fruition in a decade.
Like you, Rick, I'm OK with technology companies describing an innovative futuristic concept, but I would agree that we are a long way from this concept being a reality (augmented or otherwise). Plus, I still haven't gotten used to people walking around with Bluetooth headsets in their ears in case they get a call. It's going to take me decades to adjust to everyone walking around looking like Geordi La Forge (who, by the way, was blind, so if he wanted to see he didn't have much choice).
1. I'm reminded of Navin Johnson's "invention" that keeps glasses from sliding down the nose, and all the refund checks he wrote for cross-eyed customers. 2. Please do bring on this new tech to test on bleeding-edge-firsties, but with warnings not to use them while driving, etc. How much can we reasonably expect homo sapiens sapiens to multi-task? 3. Is Glass hubris or misdirection to competitors?
Google took some creative liberties with that video, but overall I'd say it's a good representation of what's possible.
The voice recognition was optimistic but acceptable. Remember that dictation and voice commands aren't the same thing: the program is expecting a small set of commands and will try to match what you say to one of them.
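That matching step can be sketched in a few lines. This is a minimal illustration of closed-vocabulary command matching, not anything from Google's actual system; the command list and the similarity cutoff are invented for the example, and a real recognizer would match acoustic models rather than text strings.

```python
import difflib

# Hypothetical command vocabulary for a Glass-style voice interface.
# The device only needs to pick the closest of these known phrases,
# not transcribe arbitrary speech -- which is why command recognition
# is so much easier than dictation.
COMMANDS = [
    "take a picture",
    "record a video",
    "get directions",
    "send a message",
    "make a call",
]

def match_command(heard, cutoff=0.6):
    """Map a (possibly misrecognized) utterance to the closest known
    command, or return None if nothing is similar enough."""
    matches = difflib.get_close_matches(heard.lower(), COMMANDS,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(match_command("take a pictur"))   # close enough -> "take a picture"
print(match_command("purple monkey"))   # nothing close -> None
```

The point of the cutoff is the same reason the video's recognition looked plausible: a garbled utterance still lands on the right command as long as it is closer to one phrase than to the others.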
The map of the bookstore, however, was too far-fetched. Indoor maps are coming along, but indoor positioning? That's going to take a bit more work.
The way they presented the walking instructions was good; it fit with the 2-3 meter precision provided by current GPS systems. So were the camera and the vision sharing: these are all things that can be done on a current smartphone. Battery requirements are an issue, but not a big one; the coming generation of 28/32 nm SoCs will only help.
I believe we aren't that far from realizing the vision Google has demonstrated. The challenge will be taking it beyond that, into the realm of in-scene augmentation with stereoscopic displays and real-time image processing.
@Rick: More importantly, when I look at an emerging visual technology like Google Glass, I examine it through the lens of a hearing impaired electrical engineer, namely, how can it help me, and millions of other hearing impaired Americans, cope with day-to-day living?
First off, even the basic heads-up display is a G-dsend: One of the things we depend on is CapTel (captioned telephone), in which a regular voice phone call is monitored by a relay operator who transcribes the other party's side, with the text appearing on our phone like this:
or on our mobile like this:
Now, let's say you're hearing impaired and walking down the street while talking to someone on your mobile: Instead of holding the phone up to your hearing aids or cochlear implant (CI) and missing many words, or looking down to read the captions, the words come into our ears and then appear in front of our eyes a second or two later. Pretty cool, ehh?
For the cognitively impaired, overlaying information on landmarks while walking about (as a sort of "heads-up GPS") would be very helpful.
Lastly, for those who are cognitively impaired when it comes to recognizing faces -- or, more accurately, connecting a familiar face to a name (which is absolutely maddening, as that's me) -- this would be a huge help.
For much more, talk to the good people at the Rehabilitation Engineering Research Center at Gallaudet in DC:
Editor, The Hearing Blog
In a lot of ways, I think Google is betting on the advancement of cloud computing and mobile networking speed to make many of the things shown possible. Voice transcription, mapping, alerts, face recognition, social tracking, and many other tasks will be performed primarily by Google's very large and optimized server farm, while the Glass device is primarily display and interface. Right now this is a slow and tedious process, but it's not inconceivable that speed increases and larger, smarter databases will make it a real (and useful) possibility.
It's a cool concept, but Project Glass will have to move the antenna away from the temple region if they don't want users concerned about SAR from the wireless components. Also, given that form factor, it will be fairly difficult to fit all of the electronics into the device without investing in custom ASICs: power supplies (including the battery), wireless devices with EMI shielding, a processor, storage, audio and video codecs, plus analog interfacing. It will be a challenging design task for sure.
Very challenging to build, but if Google manages it, most young people will wear it within a year or so...really cool. I am not that young but will buy it ASAP...even with limited functionality, since I bike frequently...Kris
The biggest issues are reliability, accessibility, and learning curves. Consider navigation. My paper map has never crashed, slowly rebooted, failed due to "no service", or experienced a battery failure. It is completely unobtrusive until I access it. The learning curve was relatively short (long ago in my youth) and the user interface has never changed. Bookmarks can be placed in a moment with a ballpoint pen. Of course, the paper map is useless in new locations. The learning curve on digital maps is long (placing a bookmark the first time was a very long and painful process), the reliability is miserable (as you enter the wilderness, "no service" terminates navigation access), and unexpected glitches make instant access undependable. In short, there are advantages to both the digital connections and the legacy systems. Somehow digital systems need to address these challenges; in the meantime it is prudent to carry legacy (paper) backup systems when traveling. Exactly the same lessons apply to banking (just try gaining access to historical financial records from a few years ago, especially for closed accounts). Emerging digital solutions have a half-life of about six months.
For someone like me, a digital map is easier than a paper one. I can easily search for a location I've never been to before (manual search takes forever on paper). Zooming in and out is easy, with different overlays for routes, topography, satellite, and even street view in some locations (zooming is possible on paper if you install the magnifying glass add-on). The cost of constantly updating paper maps is much higher than digital, and with a little bit of downloading it should be possible to support an offline mode (why doesn't my Android phone support offline maps?). And bookmarking (and the learning curve in general) is getting easier. How easy would it be just to say "remember this place for later. And add it to my favourite restaurant list. And check me in here. And Tweet it with hashtag #goodeats." That would be the power of a transparent, voice-operated interface like Glass is proposing.
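The chained request in that last example can be sketched as a toy intent dispatcher. This is purely illustrative: the clause splitting, keyword table, and intent names below are all invented for the example, and a real assistant would use proper natural-language understanding rather than substring matching.

```python
import re

# Hypothetical keyword-to-intent table for a voice-driven maps
# interface; each spoken clause is routed to one of these actions.
INTENT_KEYWORDS = {
    "remember": "bookmark_place",
    "add it to": "add_to_list",
    "check me in": "check_in",
    "tweet": "post_tweet",
}

def parse_utterance(utterance):
    """Split a compound spoken request on '.'/'And' boundaries and
    map each clause to an intent name (or 'unknown')."""
    clauses = [c.strip()
               for c in re.split(r"\.\s*(?:And\s+)?", utterance)
               if c.strip()]
    intents = []
    for clause in clauses:
        lowered = clause.lower()
        intent = next((name for kw, name in INTENT_KEYWORDS.items()
                       if kw in lowered), "unknown")
        intents.append((intent, clause))
    return intents

for intent, clause in parse_utterance(
    "Remember this place for later. And add it to my favourite "
    "restaurant list. And check me in here. "
    "And Tweet it with hashtag #goodeats."
):
    print(intent, "<-", clause)
```

The "transparency" the comment describes comes from exactly this kind of chaining: one spoken sentence fans out into several independent actions without the user ever touching a screen.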