The more active a processor is, the more power it consumes. The more processors are running, the more power is consumed. That is the law. How can we achieve better performance without consuming more power?
On the other hand, parallelism is no doubt the way forward to hammer out the performance of next-generation computing and to balance performance and power consumption. What would be the best strategy - at the developer level, the compiler level, or the processor level?
Within a technology, more processor activity and more processors require more power. However, new technologies may require less power. If we still ran computers using tubes, activating the complexity of current smartphones would trigger power outages in our homes rather than operating easily on a small battery. The trick is to find new technologies that are even less power hungry.
The discussion on another story regarding parallel architectures brought out several good points, including the thought that maybe it's time to reexamine the Von Neumann architecture. Is it time to rethink these fundamental assumptions?
Yes, the time has come to re-examine these assumptions. However, we have to do it incrementally, because there are economic concerns here. Billions and billions have been invested in hardware, software and applications with the Von Neumann paradigm in mind. It is not reasonable to put all that aside in favour of a new paradigm, no matter how great the new paradigm is in theory.
Power consumption is a major criterion for electronics only in the case of battery-operated systems. Otherwise, I do not think the power consumed by the electronic components is going to make any major difference. I would like to see more research into better battery technologies and more alternative energy sources.
The performance requirements just keep increasing (games, video, ...). To make a system meet those requirements, there are only three solutions: raw clock speed, parallelism and dedicated processors.
Raw clock speed comes at the expense of power. In small-geometry processes there is a choice between different threshold voltages (Vt): the lower the Vt, the higher both the speed and the power consumption. Parallelism can be obtained in either hardware or software. The hardware solution uses multi-issue, out-of-order superscalar architectures; high-end processors also use power-hungry branch predictors. Software parallelism, however, comes power-wise for free.
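A minimal sketch of what "software parallelism" means here: the programmer decomposes the work into independent chunks up front, rather than relying on power-hungry out-of-order hardware to discover the parallelism at run time. The function names and chunking scheme are illustrative, not from any particular library.

```python
# Programmer-expressed parallelism: split an embarrassingly parallel
# reduction into chunks and hand them to a worker pool. (Illustrative
# only: in CPython, CPU-bound work would need a process pool rather
# than threads to actually scale across cores.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000) == sum(range(1_000_000)))  # True
```

The point is that the decomposition costs nothing in silicon: the same simple in-order cores can run each chunk, whereas extracting the equivalent parallelism in hardware requires wide issue logic and speculation.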
The last issue is dedicated processors, or rather the cost of legacy support. The extreme is the use of specialised processors for 3D graphics or even audio playback. Intel still needs to support the old 8086 instruction set, so it lays down complex instruction decoders. ARM, on the other hand, supports a more modern instruction set and is thus more efficient.
Parallel processing will help, but maybe we need more than that. Perhaps it's time to look not only at collections of equal parallel processing cores, but also at collections of tailored processors - for example, the Nvidia processor with a graphics core and a general computing core.
In the personal computer world, it's well established that systems benefit from a combination of a graphics processor and a general computing processor. Years ago, floating point was done in software, or with a discrete co-processor. Today, systems have the processing core(s) with built-in floating point, plus discrete graphics processors. Perhaps more efficiency could be gained by developing dedicated database cores, communications cores and the like.
The requirements for high-performance graphics work are different from those for databases, communications, signal processing and a number of other functions. Make each processing core as efficient as possible for the type of work it's designed for, and less computing power will be wasted bending a general-purpose core to the specific needs of these sub-systems.
For sure that would help, but there is always an issue of critical mass and economies of scale. Graphics can afford specialised, increasingly powerful, and relatively cheap processors because of the size of the market. Who is going to take the hit of a large investment for equivalent technologies in other applications? What is the killer app?
Using parallel architectures, we can trade area for dynamic power consumption. We can get the desired speed performance even while operating at a lower frequency. Because propagation delay is roughly inversely proportional to the operating voltage, if we decide to run the system at a lower frequency, we can also run it at a lower operating voltage. Since dynamic power dissipation is proportional to the square of the operating voltage, parallel processing results in lower power consumption. This basically allows us to trade area for power. But in certain problems - e.g., some signal-processing problems like digital filtering, where latency of computation is not the main issue - one can achieve lower power consumption by pipelining, with less area overhead than parallel processing.
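A back-of-the-envelope sketch of that trade-off, assuming the classic dynamic-power model P = C·V²·f and picking illustrative (hypothetical) numbers where halving the clock lets the supply drop from 1.0 V to 0.6 V, consistent with delay scaling roughly as 1/V:

```python
# Dynamic power model: P = C * V^2 * f, with C the switched capacitance.
# All numbers below are normalized and purely illustrative.

def dynamic_power(c, v, f):
    """Dynamic power dissipation of one processing unit."""
    return c * v * v * f

C = 1.0                    # switched capacitance per unit (normalized)
V_NOM, F_NOM = 1.0, 1.0    # baseline: one unit at full voltage and clock
V_LOW, F_LOW = 0.6, 0.5    # scaled: half the clock allows a lower supply

baseline = dynamic_power(C, V_NOM, F_NOM)
# Two parallel units at half rate give the same total throughput,
# at roughly twice the area.
parallel = 2 * dynamic_power(C, V_LOW, F_LOW)

print(baseline)  # 1.0
print(round(parallel, 2))  # 0.36 -> ~64% less dynamic power for ~2x area
```

The quadratic voltage term is what makes the trade pay off: doubling the hardware only doubles C·f, but the V² saving more than compensates. Pipelining exploits the same voltage headroom with less duplicated area, which is why it wins when added latency is acceptable.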