In the beginning, there was time. Or was there? Scientists disagree about the moment the universe began, but most current theories support some kind of "big bang", and then we were off to the races. With the universe expanding quickly, and much too hot to support anything we would define as particles, let alone life, the concepts of time and space were only just beginning to take shape.
Fast forward about 13.7 billion years, and a band of clever primates known as Homo habilis manage to survive, despite being common prey for large leopard-like cats. They do so by extending their realm from just the “here and now” to the “what might be”. With more brain power than other life forms, their ability to think, socialize, and utilize tools allows them to not only survive, but thrive.
A mere two million years later, you and I have not only evolved to the top of the food chain, we are now grappling with the health of the Earth itself. Our view has broadened from “next meal”, to “next generation”, to “next planet”. We’ve taken the first steps into space, but the dimensions of space are so vast that they seem to mock our ability to even dream about interstellar travel. Our Voyager 1 spacecraft, launched in 1977, has only just cleared our solar system and is the farthest man-made object from Earth, yet it would require more than 70,000 additional years to reach Alpha Centauri, our star-system-next-door. At 4.22 light years, even if we had sufficient technology and energy to travel near the speed of light (we don’t), it would be quite a trip. Special Relativity would give us a little boost because of time dilation, but upon returning to Earth, our space travelers would find that much more time had passed at home.
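To make the time-dilation “boost” concrete, here is a rough illustration assuming a hypothetical cruising speed of 0.9c (a number chosen purely for the arithmetic, not anything the text claims is achievable):

```latex
% One-way trip to Alpha Centauri at an assumed v = 0.9c (illustrative only)
t_{\text{Earth}} = \frac{d}{v} = \frac{4.22\ \text{ly}}{0.9c} \approx 4.7\ \text{years}
\qquad
t_{\text{traveler}} = t_{\text{Earth}}\,\sqrt{1 - v^{2}/c^{2}}
                   = 4.7 \times \sqrt{1 - 0.81} \approx 2.0\ \text{years}
```

A round trip would thus take roughly 9.4 years as measured on Earth, but only about 4.1 years for the crew, which is exactly the mismatch the travelers would discover on their return.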
Which brings us to “wormholes”, “space-folding”, “warp engines”, and the “space-time continuum” - while these aren’t disallowed in General Relativity, there are yet many more questions than answers on how we might (or might not) be able to harness them for our travel convenience. This is the edge between science fact and science fiction, and for now, we’ll just have to be content within our own humble solar system.
Personal Computing and Social Computing
Just as simple stone tools for scraping and cutting enabled Homo habilis to prosper in the Pleistocene Epoch, the tools of today’s Information Age are equally well-defined, and they revolve around the recent personal computing revolution. In just the last 40 years, we’ve gone from larger-than-a-breadbox “personal” computers to pocket-sized smartphones with tens of thousands of times the computing speed. Storage has increased by a factor of ten to one hundred million, and while those represent the traditional “time” and “space” factors for computing, the even bigger revolution in personal computing has happened in just the last 10 years - we’ve now gone mobile! Computing has for the first time become truly personal - wireless computing in a form factor that’s “always available.”
The irony of our now truly “personal” computers, of course, is that the applications that we run on them are now more communication-based than ever, from messaging to voice and video calls, social networking, online collaboration, and web browsing. None of these are possible without reliable wireless communication. Even digital audio, photography, and video have become more social - the ability to create and share with others does indeed seem to be ingrained in our DNA.
But all of this sharing does have a cost - we are pushing against longtime boundaries of personal privacy and security. What formerly could be locked up behind a door with a key is increasingly digitally encoded and stored on a server in the cloud, and while electronic keys are theoretically highly secure, the weakest link has always been the human interface.
The Impact of Power
With all of this computing capability, and with our wireless freedom at stake, energy storage density (battery efficiency - the “supply”), and computational energy efficiency (low power computing - the “demand”), have become the recent focus for mobile computing and communication. After all, the world’s most sophisticated device with a dead battery is nothing more than a paperweight, and in our near-paperless society, even the utility of a paperweight is in question.
On the supply side, battery technology has been improving only slowly. Lithium-ion and lithium-ion-polymer technologies continue to be the choice for most portable electronics, due to advantages in energy density (weight), high voltage, lack of memory effect, and flexibility of manufacturing in custom sizes. But energy capacity has grown at only around 8% year over year, which looks nearly flat next to the exponential growth in microelectronics capacity and performance described by Moore’s Law. As a result, the major advances that have enabled today’s mobile computers have been on the “demand” side - doing more with less.
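The size of that gap is easy to quantify. Taking the 8% annual figure above, the doubling time for battery energy capacity works out to about nine years:

```latex
% Doubling time at 8% annual growth in battery energy capacity
T_{2} = \frac{\ln 2}{\ln 1.08} \approx \frac{0.693}{0.077} \approx 9\ \text{years}
```

Compare that with the roughly two-year doubling of transistor counts under Moore’s Law: over a single decade, battery capacity roughly doubles while the transistor budget grows by a factor of about 32.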
“Low power design” - building integrated circuits that consume far less energy - has been a focus in the chip design industry for only about a decade, but the results have been spectacular. From advancements in architectural design, to logical and physical implementation optimizations, all the way to fundamental changes in device fabrication, including recent moves to 3D transistors, or FinFETs, power-efficient design and verification techniques have rippled through every facet of integrated circuit design.
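Underlying nearly all of these “demand-side” techniques is the standard first-order expression for the dynamic (switching) power of a CMOS circuit, which shows where the big levers are:

```latex
% First-order dynamic power of a CMOS circuit
P_{\text{dyn}} = \alpha \, C \, V_{dd}^{2} \, f
% alpha: switching activity factor, C: switched capacitance,
% V_dd: supply voltage, f: clock frequency
```

The quadratic dependence on the supply voltage is why voltage scaling has been such a powerful tool, while techniques like clock gating attack the activity factor and architectural optimizations reduce the capacitance and frequency actually needed to get the work done.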