Hey Kurt - thanks for the reply. Indeed we DO agree - for the small slice of high-value, high-volume semiconductors. I think we can further agree that we cannot determine the shape of the iceberg from the visible tip. Trends at the top end do make a huge difference to semiconductor companies, especially those over-exposed to a small number of sockets, but the majority of big names are well-diversified. The decades-long trend of disaggregation has indeed allowed more systems companies to conveniently re-aggregate (per your examples), but this is, in my opinion, an exception that proves the rule rather than an industry-wide trend. So I agree with more of your article than I disagree with, Kurt, if you do not mind putting your conclusion and article title in the minority! Finally, I appreciate the willingness to put your views out there and in so doing to suffer the slings and arrows of... different viewpoints.
Jim, my point was to bring awareness to this trend. I'm sorry if you thought it was false advertising of my views.
I thought I was nuanced in explaining that this is occurring with big companies that are in growing markets with high-value semiconductor content and have a need to differentiate. Right now this is happening in mobile and servers. Although these markets may be a small slice of semiconductor volume, they do encompass a huge amount of the semiconductor industry's monetary value. Semiconductor companies making SoCs for these markets need to keep an eye on this.
I will leave it to analysts like Will Strauss, Jim McGregor, and Nathan Brookwood and firms like Semico, Linley Group, Gartner and IHS iSuppli to research this and supply the next level of quantitative detail and facts. They get paid to do that. I was stating my observations in my little part of the industry.
When I design a hardware/software system -- typically using an FPGA for the hardware -- I have limited resources (logic cells) for hardware and plenty of memory for software. Thus I put into hardware only what needs to be there because it has to have high performance and/or low latency. Everything else goes into the software. This keeps the hardware relatively simple and throws the complexity into software. Software is a cheap way to perform complex functions, but it's really hard to design for and test all the odd conditions that can occur, and to recover from errors gracefully.
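To make that partitioning rule concrete, here is a toy sketch; all names and thresholds are my own hypothetical placeholders, not anything from the actual design. The idea is simply that a function goes into FPGA logic only when its latency or throughput requirement is beyond what the software path can meet.

```python
from dataclasses import dataclass

@dataclass
class Function:
    name: str
    max_latency_us: float     # worst-case response time the system can tolerate
    throughput_mbps: float    # sustained data rate the function must handle

# Hypothetical limits for what software on the embedded CPU can meet.
SW_LATENCY_FLOOR_US = 100.0
SW_THROUGHPUT_CEILING_MBPS = 50.0

def assign_partition(fn: Function) -> str:
    """Put a function into FPGA logic only if software cannot meet its needs."""
    if (fn.max_latency_us < SW_LATENCY_FLOOR_US
            or fn.throughput_mbps > SW_THROUGHPUT_CEILING_MBPS):
        return "FPGA logic"
    return "software"

for fn in [
    Function("packet_filter", max_latency_us=5.0, throughput_mbps=400.0),
    Function("config_parser", max_latency_us=10_000.0, throughput_mbps=0.1),
]:
    print(f"{fn.name} -> {assign_partition(fn)}")
```

With those made-up numbers, the fast packet path lands in the fabric and the slow configuration code stays in software, which is the spirit of the rule above.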
With an SRAM-based FPGA, I have the luxury of being able to fix hardware bugs in later releases of the software. Even so, hardware bugs rarely survive long, and most releases are software updates that keep the same FPGA hardware.
Regarding software quality in general: it's pretty rare for any large program to work perfectly, and users long ago set their expectations accordingly.
I like the idea. The smart-a$$ in me thinks the reason why software is so buggy compared to hardware is because the software folks always say, "Just ship it now! We can fix the bugs with a patch later." If the hardware folks tried this, it would be bad. (Understatement intended.)
Kurt, it seems your background would make for a far more nuanced read of the situation. You cite Apple, Microsoft, and Google and draw conclusions from a small slice of the market. Systems companies are going to spend $3 or $100 a part and buy from semis if they can avoid the millions, the time, and the risk of designing their own chips. But yes, they will design their own if that is what they need to do to differentiate. There is no new trend here. No new threat to semi companies. You have to look at each industry and the volumes and the differentiation strategies and the newness of their markets to determine where the integration (and integration expense/risk) will occur. A wake-up call is not needed - the integration pendulum swings at different rates for different markets, and there are hundreds of these markets, each with its own answer.
Am I being overly dismissive of this story if I call it fluff with a hard-hitting tag line to get readers' attention? I guess it worked on me, so at least the second part of that is true.
For a future story, I am interested in seeing a discussion of why 70% of software projects fail or are plagued by endless bugs, while VLSI designs, which share many similarities with large software projects, can tape out successfully. What can the software guys learn from the hardware guys?
I recently heard David Patterson (computer architecture researcher) was teaching software engineering this year. Perhaps the hardware guys are already starting to teach the software guys a thing or two. :-)
There's no pay for play with EE Times. I don't advertise with them or pay any fees. I just love to write and love our industry. So they invited me to contribute on a monthly basis.
The coffee choking was probably due to the fact that almost all of my company's customers are semiconductor vendors rather than OEMs or systems houses. The customer base is changing though, hence this article.
I'm glad you liked the article. I need to think of a topic for next month. Any ideas?
I should have guessed this article would cause some good discussion when my CEO read a draft and choked on his coffee!
This "OEMs making their own chips" trend seemed innocuous to me, and it's been very obvious to me and my company's sales team that this is occurring.
I think there is sufficient evidence in the marketplace to claim that some of the most innovative consumer product companies are "re-verticalizing", at least for their most important products that require differentiation. What I don't have a clear answer for is, "Why?"
I have a hypothesis that it is actually the software that is driving systems companies to design their own chips. When I was at TI, we offered operating system board support packages and driver software along with our OMAP phone chips. Software was not a core competence (buzzword alert!) of TI, and it took many years and lots of money to do it sufficiently well. TI were experts on the chip, but not on software.
When we look at a company like Google, Facebook or Microsoft, these companies are experts at software, but are looking to create innovative battery-operated devices. If they buy merchant silicon, they have to buy a chip that was designed with no particular OS, tool chain, application, or form factor in mind. If they design their own chip, they have total control over all these things as well as exclusive access to the end product.
I don't think every OEM will choose to design their own chips, only the ones that can get an advantage through innovation (higher pricing) or significantly lower costs. I imagine the economic hurdle rate to design one's own chip is quite high. Apple and Microsoft have determined that some of their product lines meet this hurdle rate, and Google and Facebook may have, too.
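As a rough, back-of-the-envelope illustration of that hurdle-rate idea (the numbers below are purely hypothetical, not estimates of any real program): a custom chip only pays off when the per-unit advantage, multiplied across the product's volume, exceeds the one-time design cost.

```python
def custom_chip_pays_off(nre_cost, unit_volume, per_unit_advantage):
    """Return True if designing a custom chip beats buying merchant silicon.

    nre_cost: one-time design/engineering cost, in dollars.
    per_unit_advantage: extra margin or cost savings per device, in dollars.
    """
    return unit_volume * per_unit_advantage > nre_cost

# Hypothetical example: $50M of NRE, 100M units, $1 advantage per unit.
print(custom_chip_pays_off(nre_cost=50e6, unit_volume=100e6, per_unit_advantage=1.0))  # True
```

Only companies with the volumes (or pricing power) to clear that bar would bother, which is why the Apples and Microsofts show up first.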
In any case, we're lucky to be in an industry that innovates not only with technology, but also with new business models. It keeps all of us from being replaced with computers ;-)
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.