@measurementBlues: Thank you for making it clearer with the picture you provided. Clearly, as speeds increase, I/O and memory-fetch operations move closer to the chip. This could also mean that, in the future, entire operating systems could be moved onto the memory chip alone.
The possibilities are endless, and this approach to efficient data transfer between modules in a device could lower the average power consumption of a processor and its memory modules. If done correctly, such a system would allow increased parallel and pipelined performance, reducing both processor load and processor idle time. The remaining problem is the laser generators: as devices shrink, the laser generators will have to shrink as well, without compromising their operation.
Does anyone else see the problems we face in going faster as just more evidence that the future of integration is going to be much, much more difficult than anyone predicts? We have had it too easy for too long. More research $ needed when there are fewer and fewer $ available.
Maybe time to finally make the jump to software development from hardware...
I think that some decades of start-ups trying (and failing) to compute with photons have shown that, at least in the coming decade, we will have electrons for compute, ions for storage, and photons for communication.
Now we just need to scale optical technology down enough to work efficiently within the chips themselves, with no electronic medium whatsoever! Being able to do all of the signal processing optically, with devices small enough to be printed en masse on a single chip, would allow for the highest possible processing speed, would it not?