How much is the term ASSP = Application-Specific Standard Part actually used? Until today, I thought it stood for Application-Specific Signal Processor :-) I've looked at any number of data sheets for complex chips like the SMSC LAN9512 USB hub with Ethernet, and they don't seem to need the term. I would think that any non-customized chip you can buy from a vendor is by default one of their standard parts, so you only need an additional term like ASIC if it's customer-specific.
@betajet: How much is the term ASSP = Application-Specific Standard Part actually used?
I think it depends on one's audience. I'm reasonably confident that my mother and her friends don't use it at all. By comparison, I would say that a large number of the people with whom I rub shoulders on a daily basis use it very commonly indeed.
I'd be interested to hear what others have to say about this.
I've always viewed the differences between an ASIC, ASSP & standard product like this: an ASIC is designed for one customer and often includes some of that customer's IP. An ASSP is an ASIC that gets sold to several customers, competing in the same application space. It typically does not include IP from a specific customer, or if it does, that customer gets a specific time period of exclusivity. A standard product is designed to serve many customers in many different applications. You can buy standard products from distributors.
FPGA simply describes a type of implementation -- programmable, rather than hard wired in silicon. Some FPGAs are SoCs, others are not.
Wow Max, kudos for writing a short article that gets so much attention and debate. To your original question, I recall Altera being the first to use the term System on Programmable Chip (SOPC) about 15 years ago, back when they had the Excalibur product line. I always thought this term was sufficient for similar devices, including those from Xilinx. However, since these devices were not that popular back then, there wasn't much need for an industry term focused on a new breed of programmable silicon devices.
As George pointed out in an earlier post, today the lines are becoming much more blurred in terms of what Altera and Xilinx offer versus full custom devices. Traditional ASIC/ASSP development teams are now including FPGA capability in their devices. Some of this integration is occurring within one monolithic device and some is being done with 2.5D/3D techniques. With that in mind, I think we should revert to naming the device based simply on how it will be applied in the overall system and call it either an ASIC or an ASSP. Almost all of these will contain some sort of processor and include some memory... so it might be time to retire the term SoC too.
@Wnderer: I read documents coded in acronyms and write code documented with LongVariableNames.
Think yourself lucky -- I remember the days when you could only use 8-character variable names and the characters all had to be letters or numbers or underscores and you couldn't start with a number -- and we considered ourselves to be lucky!!!
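For anyone who never suffered under that rule, it's easy to state as a pattern. Here's a small Python sketch (the eight-character limit and character set are exactly as described above; the function name is mine):

```python
import re

# The old identifier rule described above: at most eight characters,
# only letters, digits, and underscores, and no leading digit.
OLD_ID = re.compile(r'^[A-Za-z_][A-Za-z0-9_]{0,7}$')

def is_old_style_name(name):
    """Return True if `name` satisfies the old 8-character rule."""
    return OLD_ID.match(name) is not None
```

So `is_old_style_name("counter")` passes, while `is_old_style_name("9lives")` (leading digit) and `is_old_style_name("toolongname")` (over eight characters) do not.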
> Think yourself lucky -- I remember the days when you could only use 8-character variable names and the characters all had to be letters or numbers or underscores and you couldn't start with a number -- and we considered ourselves to be lucky!!!
Max, think yourself lucky -- I remember the days when I was tasked with creating a world out of emptiness -- nowhere to store variables -- and I considered myself to be lucky.
Max wrote: I remember the days when you could only use 8-character variable names...
You got to use eight? Pfui, I only got 6. The Univac 1100 series had 36-bit words, so you could fit six 6-bit Fieldata upper-case characters in a single word and compare names as integers. Watch out for negative zeroes!
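The trick here -- packing a whole name into one word so that comparing names is just comparing integers -- can be sketched in a few lines of Python. The 6-bit codes are assumed inputs; a real Fieldata translation table is more involved:

```python
# Sketch of six 6-bit character codes packed into one 36-bit word,
# as on the Univac 1100 series. With names packed this way,
# comparing two names is a single integer comparison.
def pack36(codes):
    """Pack six 6-bit character codes (most significant first)."""
    assert len(codes) == 6
    word = 0
    for c in codes:
        word = (word << 6) | (c & 0x3F)  # keep only the low 6 bits
    return word
```

The first code lands in the top 6 bits of the word, so names sort in the same order as their packed integers.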
On the PDP-11 you could fit 6 characters in 32 bits. Magic, you ask? Yes indeed, a magic spell called DEC RADIX-50 which encoded 26 upper-case letters plus 10 digits plus a few punctuation marks into a number from 0-39, and then multiplied the first character by 1600, the second by 40, the third by 1, and added them all up. So why "RADIX-50" instead of 40? Well, we PDP-11 dudes always preferred thinking in octal :-)
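For the curious, that spell can be written out in a few lines of Python. The character table below follows the common PDP-11 assignment (space, A-Z, '$', '.', an unused slot, then 0-9); the exact punctuation slots varied between implementations, so treat it as illustrative:

```python
# DEC RADIX-50: each character maps to a value 0-39, and three
# characters pack into one 16-bit word as c1*1600 + c2*40 + c3
# (1600 being 40*40). Six characters therefore fit in two words.
RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_encode3(s):
    """Pack up to three characters into one 16-bit word."""
    s = s.upper().ljust(3)[:3]          # pad/truncate to 3 chars
    word = 0
    for ch in s:
        word = word * 40 + RAD50_CHARS.index(ch)
    return word

def rad50_decode3(word):
    """Unpack one 16-bit word back into three characters."""
    return "".join(RAD50_CHARS[(word // d) % 40] for d in (1600, 40, 1))
```

As a sanity check, `rad50_encode3("ABC")` is 1*1600 + 2*40 + 3 = 1683, and the largest possible value (for "999") is 63999, which still fits in 16 bits.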
And divide-by-zero? Multiplication is repeated addition, division is repeated subtraction. The concept of divide-by-zero simply means that one can subtract zero from a quantity forever without changing the value of that quantity.
So why do computing machines have such a problem with this?
(note: this is not intended to be completely serious)
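Tongue-in-cheek or not, the repeated-subtraction view is a real algorithm, and it shows exactly why machines balk: with a divisor of zero the loop below never terminates. A minimal Python sketch (non-negative integers only):

```python
# Division as repeated subtraction. Calling divide(x, 0) subtracts
# zero forever -- the condition never becomes false -- which is
# precisely the machine's "problem" with divide-by-zero.
def divide(dividend, divisor):
    """Return (quotient, remainder) by repeated subtraction."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend
```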
@betajet: Yes indeed, a magic spell called DEC RADIX-50 which encoded 26 upper-case letters plus 10 digits plus a few punctuation marks into a number from 0-39, and then multiplied the first character by 1600, the second by 40, the third by 1, and added them all up. So why "RADIX-50" instead of 40? Well, we PDP-11 dudes always preferred thinking in octal :-)
I remember RADIX-50 -- we used to use it on little GENRAD PCB functional testers (I think they were the 2225 testers) -- I thought the whole concept of RADIX-50 was absolutely brilliant.
@kfield: I am still having trouble over the definition of embedded. :-)
Fear not Oh leader of men (and boys) ... legendary embedded guru Jack Ganssle and yours truly are going to be having a debate about this very topic in a live "Radio Show" starting at 2:00pm Eastern on Friday 11th July -- after we've talked for 30 minutes, everyone else will be able to join us in an online chat. I'll be posting a blog about this in a day or so when it's all set up in the system.
An entire debate? Is the definition of embedded in the context of embedded systems that heavily varied? If it's not a relatively specialized, computerized system that is often embedded in a bigger system, then I have been misleading a lot of people over the years.
@Andrew: Is the definition of embedded in the context of embedded systems that heavily varied?
Oh to be young and innocent once again. If the definition were so simple, then experienced engineers wouldn't have so much trouble saying "This is definitely an embedded system while that certainly isn't."
So, we'll be seeing you at the chat on Friday 11th July, right? LOL
@maxthemagnificent Be there or be square! Love to chat on the chat about whether we should bring back the name "Embedded Systems Conference." After all, teenagers are wearing Old Spice these days, so retro is in!
I have worked on embedded systems for 30 years and I have the same problem.
My view is that if the system can't run games, it's not an embedded system ;)
That said, the Altera 'SoCs' are interesting. They are expensive; however, they cut down development risk. Then again, a $12 ARM processor and a separate $15-$25 FPGA will probably do the same job. But if you don't know what you are doing, then the Altera 'SoCs' clearly eliminate that risk.
The proper answer of course is C: none of the above or maybe it's D: all of the above.
While it's true that most SoCs are ASICs and vice versa, it's possible to have an ASIC without a processor, although these days that would be almost unheard of. On the other hand, I would guess that most ASSPs these days are SoCs, and I suppose that some FPGAs could be considered SoCs, although I'd prefer that my SoC have some analog I/O. And then there are the Cypress PSoC devices, which definitely qualify as SoCs, but could they also be considered ASSPs?
We at ChipPath have been working on mapping the Zynq-7000, SmartFusion-2, and SoC FPGA devices for over two years, and the term we use is FPASSP: Field/Factory-Programmable ASSP. These devices have fixed functional parts in the CPU subsystem, like an ASSP, and programmable parts in FPGA or metal-programmable blocks. There are two partitions, hence the combined acronym.
2) ST Spear - Metal programmable fabric blocks plus Cortex-A9 or A15. (FPASSP)
3) Future: ASSP like OMAP or NXP with FPGA blocks on board (fpASSP)
The capitalization of FP and ASSP reflects the dominant partition in the device. In Zynq, the CPU partition is less than 22% of the overall die: FPassp. In 3) we will see less than 20% dedicated to FPGA programmability: fpASSP. Metal-programmable fabrics tend to be more even.
The reason other acronyms don't work as well is that SoC implies a full mask set. This is clearly not the case here, since the NRE is low to zero. Second, the functions are fixed, which goes along with ASSP. Mapping architectures onto these new devices requires complex functional mapping as well as traditional FPGA resource mapping. All of this will lead to a new category of EDA tools and IP.
@George Janac: The capitalization of FP and ASSP reflects the dominant partition in the device.
This is very interesting -- something to think about -- the capitalization of FP and ASSP reminds me of the a/D (little 'a', big 'D'), A/D, and A/d (big 'A', little 'd') notation used to talk about mixed-signal chips and indicate the relative amounts of analog and digital functionality.
The important thing about FPASSP is that everyone is trying to expand their markets. FPGA vendors are trying to expand their reach by adding ASSP content that is small enough in die size not to radically increase their already-high device costs. The ASSP vendors are willing to add an FPGA or programmable die area to offset their high NRE costs by making their devices suitable for adjacent applications.
But all this is both a problem and an opportunity. FPGA customers now have to use IP for which they must write drivers and do software work they are not used to. ASSP users now have to understand how to deal with non-fixed resources and the tools that program them. Both sides need new support from EDA, IP, and software vendors. It is simply a matter of economics and how architectures map onto devices.
"This is very interesting -- something to think about -- the capitalization of FP and ASSP reminds me of the a/D (little 'a', big 'D'), A/D, and A/d (big 'A', little 'd') notation used to talk about mixed-signal chips and indicate the relative amounts of analog and digital functionality."
I too have often used the terms "Big A/Little D" and "Big D/Little A" for mixed-signal ICs -- by the way, isn't everything a mixed-signal IC? -- but those terms relate just as much to design methodology as they do to the relative amount of analog vs digital content. Big A/Little D can be thought of as "schematic on top" and Big D/Little A can be thought of as "netlist on top." The former refers to traditional analog design methodology and the latter refers to traditional digital design methodology.
Choosing the optimum methodology for a particular mixed-signal design can make a big difference in die size, performance and schedule.