Scott Elder wrote: Sure, if they took the time, their code would run a lot faster. But I don't think most people would care or pay for it.
I suspect that Apple hires people who understand computers at the lower levels, and that's a reason why their software works so well. Apple can afford to pay people to write elegant software. On this 30th anniversary of the Apple Macintosh, it's useful to point out that much of the original Macintosh OS was coded in 68000 ASM to fit in that 64KB ROM.
I understand your point. But do web site programmers need to understand the Intel x86 mnemonics? Do they need to know how many clock cycles a MOV BX, AX instruction takes to copy AX into BX?
Sure, if they took the time, their code would run a lot faster. But I don't think most people would care or pay for it.
In software, everyone has moved up to the next higher level of abstraction. And the software industry has exploded because of the speed with which an application can be written and distributed. I think hardware development would benefit from the same methodology.
Apple engineering works the same way. You only need to know one level away. Selling hundreds of millions of products worldwide with an impressively low field failure rate suggests it's not a bad methodology.
Scott Elder wrote: I wonder how much quicker products would be developed if each level of the development was done by a single collocated team whose deliverables were "shrink wrapped" to the next team working at the next higher level of abstraction.
This might work except that abstraction layers leak, so you end up with problems at the high level that can only be understood by looking one or more layers below. At that point, the engineer who knows about transistors, ground bounce, flyback diodes, metastability, machine language, and other subjects "nobody needs to understand any more" is the only one who can save the project.
Besides, I would hope that all digital engineers would be curious about how transistor circuits work -- and how they fail -- and would find joy in understanding these things. Part of being an expert in a subject is to have a grasp of one or two or more layers below where you usually work. That way it's not magic, and when you have noise or heat problems you have an idea where to start to fix them.
It's time to stop trying to develop large integrated systems with parallel teams.
Digital engineers no longer need to understand anything about transistors. Fewer and fewer engineers need to understand an HDL like Verilog. Pretty soon there will be no difference between an engineer who designs a digital circuit for an ASIC and a programmer who writes code for Windows 7 or Mac OS X. But curiously, all of the prior skills are still required.
We still need process designers, transistor designers, NAND gate designers, place-and-route programmers, and so on, all the way up to the programmers who wrote HDL Coder for Simulink. But we don't need them all to work in parallel.
The requirement to understand all of the details of a product development, from the pins down to how a transistor works, no longer exists. But it is very hard to convince team members that this is the case.
I wonder how much quicker products would be developed if each level of the development was done by a single collocated team whose deliverables were "shrink wrapped" to the next team working at the next higher level of abstraction.
As SoCs have become more complex, so has the task of verifying that what is implemented on chip is what the designer intended. No single verification approach can deliver the certainty that design teams need to tape out, but a suite of tools that each address a particular issue can help build that confidence. Applying the tools may also do more than avoid errors: better analysis early in the design process can avoid issues propagating, and, by highlighting which issues matter and which can be safely ignored, give designers the freedom to improve their designs.
Today an SoC is a sea of interfaces connecting different blocks and subsystems. Individual IP blocks may be as complex as entire SoCs of five years ago, and may have internal clocking and power-management strategies that SoC designers need to be aware of. Integrating these complex blocks means that signals may have to negotiate as many as 100 asynchronous clock domains as they cross block interfaces. Similarly, system-wide power management strategies may involve coordinating power management both within a block and among many blocks.
Managing the verification of such complex systems is challenging. The designs are large, so designers need tools with very high capacities. They need to be able to control the rising tide of uncertainty caused by clock signals that cross domains, and by power-management strategies that create unknown (X) logic states when blocks are turned on and off. Most of all, designers need these tools to tackle such problems at the highest level of abstraction possible, to speed up the verification process and stop the issues multiplying and becoming more obscure as the RTL design is decomposed into gates. Clock domain crossing (CDC) tools, engineered to recognize and analyze crossings for problems, are essential to help control the verification complexity involved in tackling a full SoC.
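As an illustration of the kind of structure CDC tools look for, here is a sketch of the classic two-flop synchronizer in Verilog. The module and signal names are my own, for illustration only; a single flip-flop sampling an asynchronous input can go metastable, and the second stage gives it one destination-clock period to resolve before the value is used:

```verilog
// Two-flop synchronizer: a common pattern for bringing a single-bit
// signal from one clock domain into another. The first flop may go
// metastable when async_in changes near a clk_dst edge; the second
// flop gives it a full clock period to settle to a valid 0 or 1.
module sync_2ff (
    input  wire clk_dst,   // destination-domain clock
    input  wire rst_n,     // active-low asynchronous reset
    input  wire async_in,  // signal arriving from another clock domain
    output reg  sync_out   // version synchronized to clk_dst
);
    reg meta;  // first stage -- may briefly be metastable

    always @(posedge clk_dst or negedge rst_n) begin
        if (!rst_n) begin
            meta     <= 1'b0;
            sync_out <= 1'b0;
        end else begin
            meta     <= async_in;  // sample the asynchronous input
            sync_out <= meta;      // second stage filters metastability
        end
    end
endmodule
```

Note that this pattern is safe only for single-bit signals; multi-bit buses crossing domains need handshakes or Gray-coded pointers, which is part of why automated CDC analysis across hundreds of crossings is so valuable.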