
DARPA Funds Development of New Type of Processor

World's 1st Non-Von-Neumann
6/9/2017 01:01 AM EDT
Re: Something still bothersome, Trung quote.
KarlS01   6/13/2017 11:22:09 AM
"Of course, some load ahead and some store behind are needed..........."  And it also depends on omputation intensive applications, not general applications where the control flow is full of branches.

This is the fallacy of superscalar assumptions: that there is a very high probability the data has already been put in a register. Can you quantify "some" in some way?

How about context switching, interrupts, and all the other unpredictable things?

In fact this whole topic is about the need to access data that exists in 8-byte chunks in global memory, so much so that caches are a problem, causing congestion and wasted power.

Re: Something still bothersome, Trung quote.
sw guy   6/13/2017 10:27:19 AM
@KarlS01

Here is what I meant, using a fantasy instruction set.

You said the sequence for a single addition is:
load  REG1,DATA1
load  REG2,DATA2
add   REG3,REG1,REG2
store REG3,RESULT
(4 instruction fetches)

But as soon as the optimizer is able to go full steam ahead, the actual sequence would be:
add   REG3,REG1,REG2

Of course, some loads ahead and some stores behind are needed, but some compilers are very smart at getting rid of memory storage for local variables, which may be awkward to debug but is good for:
- Program size
- Data set size
- Count of instructions to get same work done
(in short: good for performance from every side)
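
To make sw guy's point concrete, here is a minimal C# sketch (the method name and the register assignments in the comments are illustrative assumptions, not anything from this thread): with an optimizing compiler or JIT, both inputs and the result can stay in registers, so the addition itself needs no data loads or stores.

using System;

static class Demo
{
    // Under an optimizing JIT, 'a' and 'b' typically arrive in registers
    // (e.g. ecx/edx in the Windows x64 calling convention), so the body
    // usually compiles down to a single 'add' plus a return, with no
    // memory traffic for the local variable at all.
    static int AddLocals(int a, int b)
    {
        int sum = a + b;  // one ALU instruction; 'sum' stays in a register
        return sum;       // result returned in a register, never stored
    }

    static void Main() => Console.WriteLine(AddLocals(2, 3));
}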



Re: Something still bothersome, Trung quote.
KarlS01   6/13/2017 10:00:55 AM
@Colin:  There is zero chance of any interest.

1) It runs on "Windoze".

2) Not RISC 

3) Not Linux

4) No TCL scripting

5) Not from an EDA vendor

6) Not based on marketing hype and buzz words

Re: Something still bothersome, Trung quote.
KarlS01   6/13/2017 9:34:44 AM
@sw guy:  I think you mean that data fetches are reduced, because the instruction fetches specify which data operands to use even when they are in register files.

Re: Something still bothersome, Trung quote.
agoodloe   6/13/2017 9:13:06 AM
Previous attempts in the late 1980s and 1990s at pushing non-von-Neumann architectures such as Lisp machines, dataflow machines, and reduction machines all failed to gain acceptance in the marketplace. While such designs had a strong intellectual appeal in the research community, they would have required the wholesale adoption of programming paradigms that are foreign to most developers, and the OS and every application would have needed to be rewritten in those paradigms. Had advances in traditional processors stalled, the cost might have been deemed acceptable, but the big chip makers poured resources into fabrication techniques and deeper superscalar pipelines that allowed the old dusty deck of C programs to run faster and faster, and consequently non-von-Neumann architectures got a reputation as impractical.

Re: Something still bothersome, Trung quote.
sw guy   6/13/2017 7:24:40 AM
Even though I have already thought of a computer where processing power could be distributed near the memory areas, one must reckon that some sequences of code generate a single instruction fetch for an addition, because the inputs and the output all live in registers. Sure, this applies only under favorable circumstances, but with the right CPU, compiler, and compiler options, it happens often enough that the count of instruction fetches is noticeably decreased.

Not First non-von Neumann
rsmith   6/12/2017 12:13:18 PM
I would like to point out that the first non-von Neumann computer architecture was the Meta-Mentor. Patented in 2006, the architecture erased the differences between von Neumann and Harvard architectures by splitting the functions into two distinct parts. It is also referred to as Multi-Domain Architecture (MDA). It uses combinatorial mathematics to determine faults and prevent computer viruses. For example, using Byzantine fault mathematics, the number of systems required to detect a fault is n >= 3m + 1 (n = number of systems, m = number of faults). This means that using a von Neumann architecture, a minimum of four systems is required to detect and recover from a single fault. Using combinatorial techniques like Graeco-Latin squares, or even Latin squares, MDA needs a maximum of three reconfigurable systems. That is, three MDA systems can detect up to three different types of faults and remain functioning and fault tolerant after the failures. This is just one of its features.

Roger Smith
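
For readers who want to check rsmith's arithmetic, here is a minimal C# sketch of the classic n >= 3m + 1 Byzantine fault tolerance bound he quotes (the method name is mine, purely illustrative; nothing here is from any MDA source):

using System;

static class ByzantineCheck
{
    // Classic Byzantine fault tolerance bound: to tolerate m simultaneous
    // faults, at least n = 3m + 1 participating systems are required.
    static int MinSystems(int faults) => 3 * faults + 1;

    static void Main()
    {
        // m = 1 fault -> n = 4 systems, matching rsmith's example of four
        // von Neumann systems versus three reconfigurable MDA systems.
        Console.WriteLine(MinSystems(1));  // prints 4
    }
}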

Re: Something still bothersome, Trung quote.
R_Colin_Johnson   6/12/2017 11:03:58 AM
Thanks Karl. I'm sure you'll pique some interest in those Roslyn/CSharp parser APIs.

Re: Something still bothersome, Trung quote.
KarlS01   6/12/2017 9:47:40 AM

Thanks, Colin:  I wish we knew more about Microsoft's "FPGA in the cloud" (an evolution of Project Catapult), because they may be working on a similar problem.

Another aspect is that CPU ISAs assume that all data is in memory, so there must be 2 loads, an assign, and a store (4 instruction fetches, 2 data fetches, and a data store) just to add 2 numbers. Again, Microsoft Research's "Where's the beef" compared FPGAs to CPUs and concluded that there are too many instruction fetches.

I believe that a new processor that does if/else, for, while, do, and assignments while keeping variables local is doable by using the Roslyn/CSharp parser APIs. (I am in the process of doing it.)
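
For the curious, here is a minimal sketch of the parsing step using the real Roslyn APIs (it assumes the Microsoft.CodeAnalysis.CSharp NuGet package; the sample source string and the idea of treating each statement as a unit for a hardware back end are illustrative, not KarlS01's actual code):

using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

static class RoslynDemo
{
    static void Main()
    {
        // Parse a fragment with the constructs KarlS01 mentions:
        // if/else, a loop, and assignments on local variables.
        var tree = CSharpSyntaxTree.ParseText(@"
            class C {
                static int F(int x) {
                    int y = 0;
                    for (int i = 0; i < x; i++) {
                        if (i % 2 == 0) y += i; else y -= 1;
                    }
                    return y;
                }
            }");

        // Walk the syntax tree and list every statement node; each one
        // is a candidate unit for a non-von-Neumann back end to translate.
        foreach (var stmt in tree.GetRoot()
                                 .DescendantNodes()
                                 .OfType<StatementSyntax>())
            Console.WriteLine(stmt.Kind());
    }
}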

Re: Something still bothersome, Trung quote.
R_Colin_Johnson   6/12/2017 8:29:51 AM
Yes, that bothered me too; thanks for bringing it up. DARPA did the research, found what it calls attempts to create non-von-Neumann architectures, and decided that they were not successful. By what metric they measured "success" Trung did not say, and the BAA does not specifically call for a non-von-Neumann solution (we'll have to wait and see what Intel and Qualcomm propose as architectures; the examples given were just to clarify the problem). After speaking with Trung, I believe his measure of success is "widely popular": the pitch to Intel and Qualcomm is that if they create a non-von-Neumann architecture that cuts through Big Data like a knife through butter and thus becomes widely popular, then HIVE will become known as the first, even though there were previous attempts. DARPA wants to create unique firsts that become widely popular, like the Internet, even though there were networks before Arpanet. DARPA justifies its existence by the value it gives to civilian society after satisfying its military goals with the same technology.
