Can you imagine a world in which personal artificial intelligence assistants prompt you with things to say and tell you how your audience is responding?
Recently, I posted a column about a session I'll be presenting at the forthcoming Embedded Systems Conference (ESC) in Boston, which will take place May 3-4, 2017 (see Futuristic Embedded Technologies at ESC Boston 2017).
This presentation will cover some of the advanced technologies that are starting to appear in real-world applications, such as cognitive (thinking / reasoning) systems, artificial neural networks, deep learning, machine vision, and virtual, augmented, hybrid, and hyper realities. As interesting as these technologies are individually, it's when they are integrated together that the fireworks really start.
Now, I love technology, but it has to be admitted that there are a lot of potential pitfalls, with one worst-case scenario being a Robot (Artificial Intelligence) Apocalypse. Hopefully, this won’t come to pass, but I have no doubt the devices and capabilities that are going to ensue from these new technologies will dramatically impact the way in which we live and the way in which we interact with the world and with each other.
My poor old noggin is bursting with ideas. For example, I'm reminded of The Naked Sun by Isaac Asimov (this was the sequel to The Caves of Steel). The bulk of the action takes place a millennium in the future on a colony world called Solaria, which is populated by only a handful of humans each served by thousands of robots.
The humans are all germaphobes (with regard to coming in close contact with other humans) and live as far away from each other as possible. However, they use sophisticated holographic projection systems to allow them to visit with each other: chat, share meals, go on walks, and so forth.
This all seemed so far-future when I first read these books way back in the mists of time, but there are now virtual reality applications that let you do this sort of thing using avatars in virtual worlds, and it won’t be long before augmented reality systems allow you to see and interact with computer-generated representations of people you know overlaid on the real world. By this, I mean the computer would generate a visual representation of a person, but that representation would be controlled by sensors imaging the real person's movements and gestures, and the voice you hear would be that of your friend -- not of the computer (or so we might hope).
The scary thing is that our artificial intelligence and machine learning systems are currently progressing in leaps and bounds. It's already possible for these systems to analyze your face to determine if you are happy or sad, and to analyze your voice to see if you are relaxed or stressed, and to pull everything together to determine whether you are lying or telling the truth. Now imagine all of this being used to supplement the capabilities of an augmented reality system.
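To make the "pull everything together" idea concrete, here's a minimal sketch of how per-modality signals (face, voice, word choice) might be fused into a single deception estimate. The modality names, the weights, and the logistic-squash fusion are purely illustrative assumptions on my part -- real systems would use trained models, not hand-picked numbers.

```python
import math

def fuse_scores(scores, weights):
    """Combine per-modality stress scores (each 0.0-1.0, 0.5 = neutral)
    into a single probability-like estimate via a weighted logistic squash."""
    z = sum(weights[k] * (scores[k] - 0.5) for k in scores)
    return 1.0 / (1.0 + math.exp(-8.0 * z))  # steepness of 8 is arbitrary

# Hypothetical readings: facial micro-expressions, vocal stress, word choice
scores  = {"face": 0.9, "voice": 0.8, "language": 0.7}
weights = {"face": 0.5, "voice": 0.3, "language": 0.2}

print(f"Estimated probability of deception: {fuse_scores(scores, weights):.2f}")
```

With all three hypothetical signals reading well above neutral, the fused estimate lands above 0.9 -- the sort of "99% probability that's a lie" verdict a future headset might whisper in your ear.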
I mean, suppose you were wearing some sort of augmented reality headset or goggles, and you were chatting with someone, and your system was feeding you information (visually and/or audibly via bone-conduction) about that person's truthfulness and state of mind. Suppose you are talking to a used-car salesman, for example, and he makes some claim, and your system reports "99% probability that's a lie." This might sound useful, except that his system might be informing him as to your responses to what he's saying, and it could be guiding him to say the things you want to hear. Hmmm, that doesn’t sound quite so good, does it?
The reason this sort of thing is currently at the forefront of my mind is that EETimes Community Member Seantellis posted a comment to a recent column -- FPGA-Based AI System Recognizes Faces at 1,000 Images per Second -- saying:
Once mnemonic spectacles become ubiquitous, not using them will be a badge of honor. The short story Norbert and the System by Timons Esaias (available online) explores this possibility in quite an amusing way.
Well, I immediately bounced over to Amazon and picked up a secondhand copy of The Best of Interzone from 1997 for only $0.01 (plus $3.99 shipping and handling). This is a compilation of 29 short stories, one of which is Norbert and the System.
Norbert and the System was first published in 1993. Your knee-jerk reaction may be that 1993 is quite recent. When you come to think about it, however, you realize that this was almost a quarter of a century ago. In fact, it was the year the general public first became aware of the Internet with the launch of the Mosaic web browser.
I don’t want to ruin the story for you, but a brief summary is that Norbert lives in the not-too-distant future when everyone has Personal System (PS) implants and wears augmented reality headsets (goggles and earpieces). One day, Norbert sees an attractive young lady whose "social beacon mounted above her left ear was flashing green," so he instructs his PS to pull up her personality profile and provide him with an introductory chat-up line. Unfortunately, Norbert has an older PS that takes a few seconds to respond, by which time he's missed his chance. Somewhat disgruntled, he ends up purchasing a new, state-of-the-art system.
There's a lot to this short story, but the main thrust of it is that Norbert ends up with a system that has an On/Off switch (this is really unusual). It's only when Norbert experiments by turning his PS off that he realizes the soundtrack that has accompanied him all his life isn’t actually there -- it was being generated by his PS (you know what it's like when you are watching a film and the background music prepares you that something scary is about to happen, or the rescue party is about to arrive, or... that sort of thing).
There's a great scene toward the end where Norbert is talking to a young lady. He asks about her interests, and she starts listing them and his PS "...barraged him with definitions and explanations in both earspeakers while filling both lenses with charts and graphs." The thing is that both of their systems are constantly offering suggestions for things for them to say, analyzing the other person's reactions, and then suggesting appropriate responses.
At some stage, Norbert decides to ignore his PS. He turns it off and answers the young lady's most recent question. The young lady is really surprised (her PS informed her that Norbert had taken his PS offline). She realizes that no one has ever given her an un-PS-prompted response before, and this excites her. When Norbert powers up his PS once again, it immediately evaluates what's going on and informs him: "Emotional Complications Pending."
When this story was written in 1993, I'm sure that both author and audience considered it to be a far, far future scenario. Now, only 24 years later, the potentiality for this type of system is almost upon us. I think many of us would have some level of interest in knowing whether our companions are engrossed or bored by what we are saying. Did they find our last joke droll or disagreeable? Do they agree with our stated views or do they dispute and/or disapprove?
I also know quite a few people who are awkward in social situations. I wonder how many of them would be interested in a system that could feed them with interesting observations to contribute to a conversation, coupled with hints and tips on how things were going and how they should respond.
What about you? Do you think this sort of scenario will never come to pass? Alternatively, do you fear that we are headed for a world full of Norberts and Norbertinas?
— Max Maxfield, Editor of All Things Fun & Interesting