Not sure it is possible to argue "goodness" based on reading the spec. Things don't get much better with more quantitative analysis like kernel benchmarks. Silicon area, frequency, and issue width are first-order effects; other factors less so imho. Personally I certainly wouldn't argue too hard with Professor Patterson's design decisions :-) I learned pretty much everything I know about computer architecture from reading his books.
It's an interesting discussion for sure, but let's not forget that the REALLY big news is the fact that the world now has a first-rate, free-to-use, openly available ISA for all to use.
Who knows...maybe this will be the Linux equivalent moment for chip design...
"Thanks in part to the open-source Chisel hardware design system, one 64-bit RISC-V core is half the area, half the power, and faster than a 32-bit ARM core"
How exactly does Chisel enable that? I had a look at it, and it seems to be a Scala-based HDL generator. It probably makes design easier and faster, but the resulting area and power should be the same as with hand-written HDL.
The big OpenRISC enhancements in the last few years were the new implementations without branch delays, with much shortened pipelines, and the improvements to multicore support. The community is very active at present, so it can be hard to keep track of all the changes (see the #openrisc IRC channel on freenode.net for the ongoing conversations).
It's always hard to know with open processors who is using them - the community tends to hear after the event. OpenRISC is in some Samsung set top box chips, and in NXP/Jennic Zigbee chips (in the BA Semi variant). OpenRISC is used as a power controller in the AllWinner A1000, part of their A31 ARM based SoC. OpenRISC flew in NASA's TechEdSat a year or two ago (declaration of interest, we did a commercially robust version of the GCC C compiler for that project).
There is an ongoing project at the University of Genoa and ETH Zurich, supported by ST, which is developing a low energy multicore SoC based on OpenRISC.
I'd be interested if other readers had heard of new commercial uses.
The OpenRISC community has for a year or two considered specifying a future architecture, known as OpenRISC 2000. Perhaps we should be looking to RISC-V as the basis of that architecture? There would be a certain intellectual consistency, given OpenRISC 1000 was based on DLX. Doubtless this will be discussed at ORConf 2014 in October.
Well, to be honest I don't know where to start - a LOT of useful instructions are missing from RISC-V. E.g. load/store with indexed addressing, load/store of two or more registers, shift-and-add, conditional move/select, conditional compares, branch/call instructions with a larger range, immediate instructions that allow efficient two-instruction sequences, rotate, bitfield operations, multiply-accumulate (and yet there is REM???), add-with-carry, etc.
Then there are DSP, SIMD, CLZ, etc., and although these are less frequently used, they do provide significant speedups. It's important to understand that no instruction ever needs to improve performance across all applications - if you go down that path you end up with something like Alpha, i.e. no byte or halfword loads/stores because 80% of applications hardly need them! All you need is to show that the benefit of an instruction outweighs its cost.
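To make the cost/benefit point concrete, here is a purely illustrative Python sketch (the function name and the counting model are my own, and loads, branches and loop overhead are deliberately ignored). It compares idealized dynamic instruction counts for an n-element dot product with and without a fused multiply-accumulate:

```python
# Idealized dynamic instruction counts for an n-element dot product.
# Without MAC: one multiply plus one add per element.
# With MAC: a single fused multiply-accumulate per element.
# Loads, branches and loop overhead are ignored for simplicity.

def dot_product_ops(n, has_mac):
    if has_mac:
        return n          # n fused mul-add instructions
    return 2 * n          # n multiplies + n adds

n = 1024
baseline = dot_product_ops(n, has_mac=False)
with_mac = dot_product_ops(n, has_mac=True)
print(baseline, with_mac, baseline / with_mac)  # 2048 1024 2.0
```

Of course a real evaluation would measure cycles on real workloads, not instruction counts, but even this toy model shows why a cheap-to-implement instruction that halves the hot-loop count can pay for itself.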
Note also that code-size impact is an important consideration - I proposed several instructions in Thumb-2 solely to improve code density (ultimately that means lower power and improved performance). Given that RISC-V will have pretty bad code density due to all the missing instructions, adding a compressed form of the ISA should be a priority.
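As a back-of-the-envelope sketch of why a compressed form matters, assume base instructions are 4 bytes and some fraction of them can be re-encoded in 2 bytes. The 50% figure below is a hypothetical illustration, not a measured RISC-V number:

```python
# Back-of-the-envelope code-size model for a compressed ISA extension.
# Base instructions are 4 bytes; compressible ones shrink to 2 bytes.
# The compressible fraction is purely illustrative.

def code_size_bytes(n_insns, compressible_fraction):
    full = n_insns * (1 - compressible_fraction) * 4
    compressed = n_insns * compressible_fraction * 2
    return full + compressed

n = 10_000
print(code_size_bytes(n, 0.0))   # 40000.0 bytes, no compression
print(code_size_bytes(n, 0.5))   # 30000.0 bytes, i.e. 25% smaller
```

The general pattern holds for any two-width encoding: compressing half the instructions to half the size cuts total code size by a quarter, which matters directly for I-cache hit rate and fetch power.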
Sure, it may run Linux, but the Rocket variant that was compared with the A5 doesn't appear to have an FPU or any other extension beyond the very basic 64-bit ISA. So that variant most certainly can't run SPEC well, or do anything you'd expect of a modern CPU (multi-core, debug, performance counters, timers, interrupt controllers and so on). The Cortex-A5, while one of the smallest ARMv7-A cores, is far more advanced, significantly faster, and has good code density (unlike RISC-V).
So if you wanted to do a bare-bones CPU comparison you should compare with an M0 or M3 with an added MMU. That would be a reasonable comparison, as they have very similar features. Performance will be close on 32-bit code, but the M0/M3 obviously wins big time on power, code density and area.
On the other hand, if you wanted to compare a full SPEC capable Rocket version, you'd have to give the size/power for the full version including all the extensions, FPU, MMU, caches, etc. No bait and switch like giving area/power results for a minimal version while quoting performance of the most advanced version!
The thing is, when you aim for a similar level of performance to modern ARM CPUs, you'll need a real memory system - not the 1-way 8KB cache the very first Alpha used more than two decades ago! That means a much larger die and higher power.
There are SPEC scores available for various ARM cores, however you can buy pretty much any device nowadays and just run benchmarks yourself. You can use the NDK on Android devices (or Linux if you root it), but I find a Chromebook is a perfect ARM Linux development machine (lots of people run SPEC on it).
Anyway, if you actually ran the full SPEC2006 on Rocket, just publish the scores. I don't see why you have to run it first on ARM.
Dhrystone is a zombie benchmark indeed, partly because it is easy to use, gives a quick&dirty estimate of core-only performance, and all benchmarks that tried to replace it turned out to be far worse. So it won't die any time soon...
Here's an idea: to give people a feel for what the ISA is like, why not publish the Dhrystone disassembly? I bet not everybody has a spare week for downloading, building and troubleshooting the RISC-V tools...
"Cortex-A5 actually runs Android pretty well, and I bet RISC-V with its very basic ISA won't be able to keep up"
I'm curious. What instructions do you imagine we're missing in RISC-V G that would make Android run better? Nearly all compiled integer code I see is dominated by the very simplest instructions and it is extremely hard to add an instruction that really improves performance across all applications.
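The "dominated by the very simplest instructions" claim is easy to check for yourself: histogram the mnemonics in an objdump-style disassembly listing. The sketch below uses a made-up four-line fragment (the hex encodings are not real machine code) purely to demonstrate the parsing:

```python
from collections import Counter

# Tally instruction mnemonics from a disassembly listing (objdump -d style:
# address, raw encoding, mnemonic, operands). The fragment is fabricated,
# just to show the parsing; feed it real objdump output in practice.
sample = """\
   10074: 00a58533  add  a0,a1,a0
   10078: 0005a583  lw   a1,0(a1)
   1007c: 00b50463  beq  a0,a1,10084
   10080: 00158593  addi a1,a1,1
"""

def mnemonic_histogram(listing):
    counts = Counter()
    for line in listing.splitlines():
        parts = line.split()
        if len(parts) >= 3:       # need at least address, encoding, mnemonic
            counts[parts[2]] += 1
    return counts

print(mnemonic_histogram(sample))
```

Run over a whole compiled binary, a histogram like this makes the frequency argument quantitative: if adds, loads, stores and branches account for the overwhelming majority of executed instructions, an exotic new instruction has little dynamic count left to improve.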
As the technical report says, we intend RISC-V to be used for everything from systems as small as Internet of Things (IoT) devices to ones as large as the Warehouse-Scale Computers (WSCs) behind cloud computing.
As some of the other posts hint, RISC-V is a very modular ISA, while still being a reasonable target for compilers.
For IoT, you'd want to use 32-bit addresses, the integer instructions (I), and the compressed instructions (C). We call that RV32IC.
For WSCs, you'd want to use 64-bit addresses (and over the next decade maybe even 128-bit addresses), integer instructions (I), multiply/divide (M), atomic instructions (A), and single- (F), double- (D), and even quadruple-precision (128-bit, Q) floating-point instructions. We call that combination RV64IMAFDQ.
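The naming scheme is mechanical enough that a few lines of code can decode it. The sketch below is my own illustration, and it deliberately ignores version suffixes and multi-letter extension names: it simply splits an ISA string into the address width and the single-letter extensions.

```python
# Split a RISC-V ISA string such as "RV64IMAFDQ" into its address width
# and single-letter extension names. Version suffixes and multi-letter
# extensions are deliberately out of scope for this sketch.

def parse_isa_string(isa):
    isa = isa.upper()
    if not isa.startswith("RV"):
        raise ValueError("ISA string must start with RV")
    i = 2
    while i < len(isa) and isa[i].isdigit():
        i += 1                    # consume the address-width digits
    return int(isa[2:i]), list(isa[i:])

print(parse_isa_string("RV32IC"))       # (32, ['I', 'C'])
print(parse_isa_string("RV64IMAFDQ"))   # (64, ['I', 'M', 'A', 'F', 'D', 'Q'])
```

So the IoT and WSC profiles above differ only in the width digits and which extension letters follow them - the modularity is visible right in the name.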
We are in contact with some other OS developers, but are focused on finishing the privileged architecture specification and system binary interface specification to reduce the effort of porting OSs to different RISC-V implementations.