@HankWalker: In a talk that I heard John Cocke give in the mid-1980s, he said that his goal with 801 and his definition of RISC was an ISA designed to make it easier for the compiler to generate good code, while a CISC is an ISA designed to make it easier for a human to write assembler.
That accords with my memories. I was watching when RISC CPUs were the new hotness. The VAX architecture was held up as the exemplar of CISC, with a "super instruction set". The problem was that most of those instructions were never used. Increasingly, programmers didn't write in assembler. They wrote in a high-level language like C or Pascal, and the compilers didn't generate code that used all of those fancy instructions.
So designers said "Why have them? Most of those high level instructions can be implemented as combinations of simpler ones, so let's design CPUs with only the basic simple instructions, concentrate on making them run as fast as possible, and let the compiler do the heavy lifting and optimization."
The results included the DEC Alpha, Sun SPARC, and HP PA-RISC architectures, among others.
CISC won, but the reasons I could see had nothing to do with performance and everything to do with cost. DEC was already in trouble when the Alpha was released: the market for the VAX was eroding rapidly under pressure from super-micros based on off-the-shelf MC680x0 CPUs running flavors of Unix, which could do what a VAX did almost as fast at a tenth of the price. DEC tried to ramp up production and sale of Alpha-based workstations, but couldn't do so quickly enough to stem the bleeding. DEC competitor Data General had the same sort of woes with its RISC entry.
HP shifted from PA-RISC to x86 because they could get the required performance from off-the-shelf chips that were well understood, with a substantial ecosystem and a highly developed toolchain for creating software. They didn't have to spend money on design, manufacture, and updates. Sun stuck by SPARC, but hedged its bets with a line of Opteron-based x86-architecture models. Using x86 was simply cheaper. Performance might not have been at RISC levels, but it was good enough. The advantage from using RISC wasn't pronounced enough to justify the higher cost. The decisions were ultimately economic, not technical.
(And I was grimly amused at one point. AMD had a RISC processor called the 29000 back then. They came out with a new x86 compatible CPU, and from what I could see, it used the 29000 RISC core. x86 instructions in code were intercepted and converted on the fly to the underlying instructions the 29000 actually executed.)
ARM is winning in the mobile space because of lower power consumption, but ARM has the advantage of being fabless. Lots of folks license ARM designs and make chips based on them, and the market is large enough that Intel x86 chips don't have a cost advantage. OEMs making products using them can buy them off the shelf, and don't have the overhead of design and manufacture.
I think CISC vs RISC frames the question in the wrong terms. It's about the money, and the question is what the cheapest solution is that will do the job.
In a talk that I heard John Cocke give in the mid-1980s, he said that his goal with 801 and his definition of RISC was an ISA designed to make it easier for the compiler to generate good code, while a CISC is an ISA designed to make it easier for a human to write assembler. He noted that 801 was not really a RISC architecture in terms of instruction count. He preferred to talk about a streamlined instruction set. The apex of CISC was reached with VAX, and a lot of the VMS operating system had hand-written assembler. I used to point out that the VAX presented compilers with a difficult "reverse semantic gap" in that sometimes a whole procedure's worth of C/C++ code could be compiled into one instruction, but compilers were not clever enough to do so.
In terms of hardware, a CISC obviously has a larger overhead in instruction fetch and decode, but that is a small fraction of a processor. A bigger impact of CISC is the smaller register set that requires more memory accesses, which in some ISAs means more instructions. Bill Wulf once said that a big headache for compiler writers is the plethora of CISC addressing modes, and the difficulty in having the compiler efficiently use them all. Compilers didn't use them all, which is why most were dropped in RISC.
Jason, see my other post for the accepted definition of RISC. RISC ISAs certainly turned out to have a significant commercial advantage - just compare x86 with ARM volume: ~300 million vs 12 billion per year. You can also compare x86 CPUs with similarly performing ARM ones: which do you think turns out to be significantly larger, more complex, and more expensive?
So I don't believe this report is in any way conclusive - they would have to prove that two equally capable teams could develop RISC and CISC CPUs at similar cost/area/power/performance. The CPUs actually on the market clearly prove this is impossible. The fact is that Intel has pumped ~$10 billion into promoting x86 in phones and still has zero market share. So claiming the debate is over is wishful thinking at best.
Load/store architecture is generally considered the hallmark of a RISC, as are simple addressing modes, simple decode with few instruction formats, plenty of registers to reduce memory traffic, and simple instructions that can be executed in a single cycle.
Many RISCs do indeed support more complex instructions like multiply and divide, which can take multiple cycles, and several support load/store multiple (or at least 2 registers). The return-from-interrupt example is something that is complex on every CPU; on Cortex-M, transistors were saved by popping registers from memory rather than enlarging the register file. This is not really complex when you already support load/store multiple instructions.
I agree being purist is bad, and that goes both for RISC and CISC. Very CISCy architectures have all died (x86 is one of the least CISCy, it's very lucky in that compilers can completely avoid all the complex microcoded instructions, even load+operate and complex addressing modes are rarely used). Similarly very pure RISCs have not been successful.
TonyTib, you are correct, the earliest processors were slower than memory. I was thinking of the 32-bit RISC processors of the 1990s, as on-chip signals started to become faster than off-chip outputs.
In the mid-1990s, I took a graduate class in RISC processors, using Hennessy and Patterson's text, "Computer Architecture: A Quantitative Approach", if I recall correctly. Each chapter stated and developed a concept for making processors faster, and concluded with a postscript of famous cases where the concept failed to speed things up. One of the appendices in the back of the book apologetically covered the x86, criticizing the x86's nonorthogonality but noting that the x86's sales volume was so large as to be elegant in its own way.
I'd say the 6502 bet on memory speed (it really was faster back then), not modern RISC (maybe you could class the 6502 and PIC as pre-modern RISC?)
RISC did introduce a number of approaches that were then used by other CPU makers (some of this was probably done by the mini/mainframe guys), such as optimizing the instruction set for the compiler (e.g. regular instructions are much easier for the compiler; x86 has some odd-ball instructions that are probably never used by any compiler), analyzing actual programs to figure out which instructions programs actually use, designing the ISA for pipelining, and large, regular register sets (many early ISAs have all sorts of limits on which registers can be used to do what).
Two of the main RISC guys are still working, although in different roles: David Patterson (Berkeley, inspired SPARC) is still there, working on RISC-V (which EETimes covered), and John Hennessy (Stanford, MIPS) is President of Stanford University.
I believe the x64 architecture is much more RISC-like than the x86, e.g. more registers and such.
On the MCU side, the non-RISC processors would be ISAs like the 68000 (pretty much replaced by ColdFire and ARM at Freescale), Renesas RX (and earlier, including legacy Hitachi, NEC, etc. ISAs), 8051, etc.
The Data General Nova was a RISC machine years before the term RISC was invented.
When using a MIPS processor, be sure your startup code is in read-only memory, as some MIPS processors perform random writes when the caches are initialized.
My understanding is that the early RISC developers bet that speed would be cheaper than complexity as fabrication processes improved, and that RAM would remain faster than processors. Instead, complexity turned out to be cheaper than speed, and processors became faster than RAM.
Jason - You may know RISC originally meant a reduced instruction set architecture. Early RISCs were load/store register machines with single-word instruction encodings, fixed operand bit fields, and one instruction / one operation / one clock. Often there was little or no embedded immediate operand in the instruction word, often 3-operand form with a zero register. SPARC and MIPS are typical early RISC architectures.
Over time, those "pure RISC" architectures are becoming things of the past. Today's ARM Cortex handles multi-word instruction encodings that can carry fairly long immediates, and a single instruction can initiate a rather complex sequence - for example, a single "mov pc, lr" works as return from interrupt: pop several registers from the stack, then switch processor context. I agree with you that it is hardly a "reduced, simplified, one instruction / one operation" architecture in the sense of "RISC" back in the '90s.
Perhaps the only characteristic today's RISC architectures inherited from the early days is the load/store architecture. You have to mv r2, 4 / ld r1, [r3] / add r1, r2 / st r1, [r3]; you cannot write a single "add dword ptr [di], 4" as in x86 syntax. I don't think it is a fundamental difference in the RISC vs CISC argument - just a matter of flavor.