News & Analysis

Intel, Microsoft describe parallel progress

8/22/2008 02:00 PM EDT
3 comments
ALP76, User Rank: Rookie
re: Intel, Microsoft describe parallel progress
8/29/2008 6:30:43 PM
Parallel is "distributed". There is no new magic except more "cores" on a device. Our design has two 32-bit processors on a single FPGA from Actel, the AX2000. The on-chip memory holds 1024 instructions and 1024 operands (Harvard architecture). Our operating system is called FOPS (File Oriented Programming System). Each basic function of the language has an ALP76 assigned. Each processing statement makes several calls to various functions. These are distributed (parallel), and it is possible to execute the entire statement in one "fell swoop," as we used to call it. However, if a function is waiting for a call, a DDP bit says wait for interrupt. Any processor not activated is sound asleep with no clocks applied.

We are not heavy with registers in our pipeline (only four), but they can execute in parallel, and we have transparent Jump, Jump and Store Return (Call), and Conditional Jump instructions. Both of our processors, the ALP76 (32-bit) and the ERIN76 (64-bit), use schematic design rather than HDL. We decided to stay with the old school here; one instruction in eight is some sort of jump anyway. Questions? Ask at: Richard E. Hartney, President, Erin Greene & Associates LLC. Email: rhartney1@cox.net
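[Editor's note] The dispatch idea the commenter describes (every function of the language has its own processor, the independent calls of a statement fire at once, and a call to a busy unit waits) can be sketched in software. This is a hypothetical illustration, not the FOPS/ALP76 implementation; the unit names, the `dispatch` helper, and the sample statement are all invented.

```python
from concurrent.futures import ThreadPoolExecutor, wait

# One single-threaded executor per language function: the software
# analogue of "each basic function of the language has a processor
# assigned". A second call to a busy unit simply queues behind the
# first, playing the role of the "wait for interrupt" (DDP) bit.
UNITS = {name: ThreadPoolExecutor(max_workers=1)
         for name in ("ADD", "MUL", "STORE")}

def dispatch(calls):
    """Execute all calls of one statement "in one fell swoop".

    calls: list of (function_name, callable) pairs; each callable is
    routed to that function's dedicated unit, and calls that target
    different units run in parallel.
    """
    futures = [UNITS[name].submit(fn) for name, fn in calls]
    wait(futures)  # the statement completes only when every call does
    return [f.result() for f in futures]

# A statement whose three calls touch three different units runs them
# concurrently; two ADD calls in one statement would serialize.
results = dispatch([
    ("ADD",   lambda: 2 + 3),
    ("MUL",   lambda: 4 * 5),
    ("STORE", lambda: "stored"),
])
print(results)  # [5, 20, 'stored']
```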

ALP76, User Rank: Rookie
re: Intel, Microsoft describe parallel progress
8/26/2008 2:13:34 PM
Parallel processing is just another way of saying "distributed processing." We are currently dissecting a software operating system called FOPS (File Oriented Programming System) that was designed in 1972. At the same time, in order to keep costs to a minimum, we have designed a computer called ALP76 (A Language Processor) that is an emulation of the Honeywell DDP-516, which had one accumulator, one B-register, and one index register, expanded to 32 bits. Most of our present-day engineers don't remember why registers were added: a hardware stack is/was faster than core memory. New instructions have been added to complement data handling, such as Clear/Set/Test Bit and Load/Store Byte.

Our task is rather simple, as FOPS currently has only 9,976 instructions. (I counted them!) Subroutine lengths are less than 512 instructions. We are targeting the Actel AX2000-1 FPGA, containing two processors using on-chip memory for instructions and operands with a modified Harvard architecture. It is software-programmable with any combination of series, parallel, and orthogonal instruction execution, defined during the power-on boot load. We have not decided whether to make this feature programmable on the fly; gut feel says yes. Each of the FOPS operators shown below has its own processor assigned. Punctuation marks are grouped.

10. THE FOPS LEXICON
The FOPS language consists of vocabulary words, variable markers, and punctuation marks, which are used to form legal strings called statements for execution by the system. The language is file, form, page, and part oriented as used by the application programmer. The system programmer uses an assembler, not a compiler, for the entire system. Compilers are, in my mindset, very inefficient. The hardware uses schematic input, not HDL, which, again in my mind, is not very efficient.
COMMANDS: PRINT, PRIORITY, DISPLAY, DEMAND, SET, ENQUE, DELETE, COMPILE, WAIT, TO, DO, DONE, SEND, RECEIVE, BLINK, UNBLINK
OPERATORS: FILE, PAGE, PART, FORM; + (plus), - (minus), = (equals), => (greater), <= (less), & (concatenation operator)
MODIFIERS: IN, FOR, IF, ON
VARIABLE MARKERS: ?X? (literal marker), value marker, [X] (numerical position marker)
PUNCTUATION MARKS: / (step marker), : (range-of-values marker), ; (statement terminator), , (list delimiter), @ (keyboard input terminator), . (center dot, subfile marker), * (comment marker)
SYSTEM VARIABLES: LP1, LP2, LP3, LP4, READER, LIGHTPEN, STEP TIME, VIDEO CLOCK (reserved), $XX$ (1), T1, T2
(1) All 4-character variables starting and ending with $ are reserved for error handling and initialization.

I too will accept comments, and thanks for accepting mine. Richard E. Hartney, President, Erin Greene & Associates LLC, Business Information Management Systems. Email: rhartney1@cox.net
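[Editor's note] The lexicon above is enough to drive a trivial token classifier. The sketch below is purely illustrative: the vocabulary sets are copied from the commenter's lists, but the sample statement and its syntax are invented, since the comment never shows an actual FOPS statement.

```python
# Token classes taken from the FOPS lexicon above; membership in one
# of these sets decides a token's class, anything else is a variable.
COMMANDS    = {"PRINT", "PRIORITY", "DISPLAY", "DEMAND", "SET", "ENQUE",
               "DELETE", "COMPILE", "WAIT", "TO", "DO", "DONE", "SEND",
               "RECEIVE", "BLINK", "UNBLINK"}
OPERATORS   = {"FILE", "PAGE", "PART", "FORM"}
MODIFIERS   = {"IN", "FOR", "IF", "ON"}
PUNCTUATION = set("/:;,@.*")

def classify(token):
    if token in COMMANDS:    return "COMMAND"
    if token in OPERATORS:   return "OPERATOR"
    if token in MODIFIERS:   return "MODIFIER"
    if token in PUNCTUATION: return "PUNCTUATION"
    return "VARIABLE"

# Hypothetical statement, invented for illustration only.
statement = "PRINT FILE STOCK IN FORM F1 ;".split()
tagged = [(t, classify(t)) for t in statement]
print(tagged)
```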

threads, User Rank: Rookie
re: Intel, Microsoft describe parallel progress
8/22/2008 6:33:51 PM
My hunch is that for a parallel programming model to get broad adoption, it will have to be a relatively modest change. Learning a new language, or substantially restructuring an existing app with millions of lines of code, is too high a barrier to overcome on the timescales involved here. 8-core processors are around the corner, after all. (Here's an e-book on multicore programming covering various programming models, including OpenMP, Intel's TBB, Pthreads, Cilk++, and MPI: http://www.cilk.com/multicore-e-book)
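[Editor's note] The "modest change" argument is the pitch behind models like OpenMP, where a serial loop becomes parallel with one pragma. A minimal sketch of the same idea in Python, assuming an invented CPU-style workload function: only the loop's driver changes, the loop body stays untouched. (For genuinely CPU-bound Python code you would reach for ProcessPoolExecutor instead; ThreadPoolExecutor is used here to keep the sketch self-contained.)

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in for a loop body that would dominate the runtime.
    return sum(i * i for i in range(n))

inputs = [10_000, 20_000, 30_000, 40_000]

# Serial version: a plain loop over the inputs.
serial = [work(n) for n in inputs]

# Parallel version: the body is unchanged; only the driver differs.
# This is the kind of modest restructuring the comment argues most
# large codebases will actually tolerate.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(work, inputs))

assert serial == parallel  # map preserves input order
```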
