Buying FPGA technology is definitely a good portfolio extension. But should Intel buy a big player? Many of the base patents for SRAM-based FPGAs are expiring soon, and the customer base for standard FPGAs is quite different from Intel's potential target market for µP+FPGA. So neither Altera nor Xilinx is a good match. But FPGA know-how is also available from small vendors, e.g. Actel: a low investment for a base of know-how that is easy to scale up!
Imodu - A Xilinx FPGA is an innovation platform - and controlling such an established platform (similar to VxWorks) secures business in the future & prevents someone else from doing the same...
Maybe Intel is also shopping for ingredients for a future computer architecture where FPGA areas between the (many) cores offer flexible communication. That's what researchers use today: CPUs and FPGAs.
Guys, the answer is in Intel CTO Justin Rattner's slides from the ISCA 2008 keynote; see slides 12-17 on time to market. It's been two years; if Intel wants speed, acquisition is their best bet.
Over the past two years, they've ramped up energy efficiency with Atom, and they failed in GPUs with Larrabee. The ambition they have left is time to feature. So it makes sense to acquire an FPGA vendor, especially one with expertise in FPGA tools.
Much of the hardware in a PC was, as we say, "standard": the one-design-fits-all approach meant good business for Intel and scaled well.
Non-PC devices are more specialized, SoC hardware is diverse. Now for Intel to create a SoC for every niche and then compete with everyone else that has a SoC does not scale. However, if Intel combines FPGAs onto an already highly integrated Atom SoC, the result may be a standard platform again, a one-design-fits-many-SoC-markets. If it's priced low, it will change the SoC market and will scale well for Intel. Not good news for ARM and SoC companies, because many solutions can be implemented that way on a standard chip and will no longer require the NRE expenses of a SoC.
You can argue about who has the best process technology. Intel's is probably a little better as far as I can tell (even though TSMC is about to ship 28nm while Intel is at 32nm). My feeling is that a lot of that has to do with the fact that Intel's fab is proprietary, so they don't have to report their yield difficulties, while TSMC customers are quick to blame the foundry for yield issues. In either case, the difference won't make or break FPGA sockets.
But why would Intel want to get into the PLD business in the first place? FPGAs are by definition not used in the high volume consumer products that Intel wants a piece of, since the ODM would just migrate to ASIC. Look at a teardown for any random iPod and count Xilinx design wins. Besides, TI, BRCM, et al. are already on top of every embedded function you could possibly want, so I can't see what Intel brings to that table. Embedded is ARM, and Intel divested themselves of ARM a few years ago. It seems their strategy now is to cram x86 into the embedded space (hence Atom), but they are going to need a better OS partner than MS for embedded, since MS can't even write software that runs efficiently on Intel's high end processors.
Achronix is for niche applications (places where performance takes priority over power consumption and every other consideration).
If the "FPGA displacing ASIC" trend is for real, Intel is better off buying an FPGA company that can work in a broader range of applications, sizes, etc.
If incumbent: Xilinx or Altera, if startup: Abound Logic
The key FPGA patents have expired, so anyone who wants to compete in FPGAs can go for it. Intel has a huge process advantage if they want to do something like this. They also have great SRAM and peripheral modules. IMO, rather than buying an existing company and its legacy products for billions of dollars, Intel would be much better off with a small "skunk works" team doing it for a few million. However, I would expect that Intel, being a large company, would insist on doing it the big-company way and end up with the usual merger disaster.
Here's a relevant and amusing quote from Dr. Hermann Hauser, one of ARM's original founders: "When we decided to do a microprocessor on our own, I made two great decisions. I gave them two things National, Intel, and Motorola had never given their design teams: the first was no money, the second was no people. The only way they could do it was to keep it really simple."
Well said, SL325. When I first read the report I had a similar reaction. I'm still skeptical that it would happen, but I must admit I've received emails from several people about synergies, particularly the automatic advantage of moving to Intel's process technology and being a generation ahead of any other FPGAs. I'd be curious to know what others think.