"Re: The point: a minimal RV32I implementation is very small in LUT usage, competitive with the smallest, and less capable, stack implementations."
And which stack machines were used for comparison, and what benchmarks were used?
"Re: von Neumann bottleneck: much hyped, wrongly named, non-problem"
And the room was filled with smoke from funny cigarettes.
"Re: Why not execute usual HLL statements? See last 35 years of Comp Arch literature."
The last 35 years have been dedicated to RISC/superscalar, and RISC was created because the compilers of that era were only capable of generating very simple machine code.
Meanwhile, compilers have evolved to generate some form of bytecode, using the RISC CPU to emulate a stack machine.
OOP has also evolved, with heavy use of methods that take parameters off the stack and return the result on the top of the stack.
Real-life computers have register files and tons of circuitry to execute out of order because of memory latency.
Over the last 35 years many people have invented ISAs, and the best that can be said is that they are different. But all have the same generic ALU operations once the operands are finally available; most of the differences are in load/store addressing.
And going from 32-bit to 64-bit addresses feeds the marketing hype generator, but the world will end before any memory of that size can be filled.
I survived the real world of computer development for 30 years.
Re: The point: a minimal RV32I implementation is very small in LUT usage, competitive with the smallest, and less capable, stack implementations.
Re: 64-bit: Yes, addressing is main reason to extend to 64-bits, and even smartphones use 64-bit addressing now. See Bell and Strecker, ISCA-3.
Re: Why not execute usual HLL statements? See last 35 years of Comp Arch literature.
Re: von Neumann bottleneck: much hyped, wrongly named, non-problem.
Re: all this to be independent of ARM? No - we did this to do things ARM doesn't support, like simple implementations, like openly sharing RTL between groups, or at the time, 64-bit addressing (ARM v8 happened later).
OK, the point was that there is more to an FPGA than LUTs.
The spec has a separate section for the 64-bit version, and it seems logical that sign extension would be automatic, just as it is for immediates.
Is there a real need for 64 bits aside from memory addressing (which is probably managed by the OS and MMU)?
Rather than a new RISC, compiler, debugger, etc., why not execute the usual HLL statements? Certainly the compiler would be simpler. In fact, the app could be debugged and then loaded onto the chip, analogous to using an FPGA and loading the bitstream.
And there are the memory, cache(s), MMU, I/O, memory wall, and von Neumann bottleneck.
And this is all justified to be independent of ARM?
RISC-V has 32 32-bit registers plus a 32-bit instruction register, so "350 LUTs" is marketing talk.
Also, the quoted speed is probably for the latest generation of FPGAs, while the 100 MHz figure is for older, less expensive generations.
I looked at the RISC-V ISA spec and it is still load/add/store kind of stuff, with a heavy emphasis on immediate-type instructions, so the 32-bit base uses a 32-bit instruction word to hold the immediate operand along with the source register addresses and a destination register address.
A fine academic exercise (hopefully there will be more HW types who know what goes on inside a computer). But on the other hand, HW design is not about computer design.
What is good about it is the point that so many can be put on a chip. Now we can think about dedicating a RISC-V to each thread run on an RTOS, with all the headaches of interrupts and scheduling. KISS may live on.