@DrFPGA, @betajet: Discussions tend to be too superficial and full of acronyms and buzzwords.
Multi-core/parallel programming has not been very successful, yet the discussion is about 2 vs. 4 cores. Maybe multiple single cores are more like what DrFPGA has in mind.
The FPGA design methodology has not changed, because every register has to be placed and every connection routed. This is aggravated by the extra registers added for pipelining and timing closure. It is no wonder that "compilation" takes forever, since place-and-route is included.
When a CPU is involved there is an inherent performance limitation due to loading and storing operands. There is just too much serialization, even with dual/quad cores, because there is a single memory.
OK, what can be done? Let's start by looking at C source code. A debugger can single-step through the source and display variable contents as well as expression evaluations.
That is because the source line number is, in effect, the state of a state machine.
Implementing state machines is routine in FPGA design; their number is limited only by chip resources. Expression evaluation starts with 2 operands and combines the result of each operator with the next operator. Dual-port memory can be used, and since it is pre-placed, only the routing between the memory and the ALU is required.
DrFPGA asked: Any other thoughts on what the FPGA guys could be doing to differentiate themselves from the current processor architecture roadmaps?
IMO the problem with FPGAs is not that they need to differentiate themselves from standard CPUs, but that they're already too different in the following two ways:
1. It takes orders of magnitude longer to compile a design for an FPGA compared to the time it takes to compile a program of the same complexity for a standard CPU. The FPGA vendors IMO do not seem to think this is a problem, because they're competing with ASIC design.
2. Standard CPUs have open instruction sets so parties other than the manufacturer can design development tools for them. Thus we have FLOSS compilers that produce excellent code very fast for standard CPUs, but no way to solve problem #1.
FPGAs have much more flexibility than standard processors, so I'd like to see some architectural divergence from the traditional processor quad-core roadmaps.
Why not put in multiple dual-cores instead of going to a quad? FPGAs have always been better at distributed processing, so it makes more sense to me to have more elements more widely distributed on the die. With the ability to put processors and their associated fabric co-processors and peripherals into low-power modes when they are not needed, more separation would be better, right?
Any other thoughts on what the FPGA guys could be doing to differentiate themselves from the current processor architecture roadmaps?
I would prefer a dual-core A57 instead of a quad-core A53..., but I think I'm likely to be disappointed. Quad core looks more impressive on paper, but at least our applications would be easier to program on a dual-core A57.
@Max: For big.LITTLE they'd need to go for an A53+A75 combination. I suspect power-saving isn't so important in typical Zynq designs, so it would seem like a significant effort and complication. It seems unlikely that Xilinx are hoping to displace ASIC SoCs in smartphones...
Given the mention of 64-bit I think we can be fairly sure that A9s won't be present.
Will it be a dual or quad core -- or even more? What do you think?
We know it's going to be at least an A53 (next-generation, 64-bit). To match Altera they'll have to put down a quad-core A53. I'm hoping they leapfrog the A53 and jump to quad-core A57 processors.
Although more than 4 cores would provide some interesting possibilities (and compete with Cavium/Tilera), it seems unlikely, as ARM's Cortex-A5x designs support at most 4 cores per SMP cluster. To get more than 4 cores you need multiple clusters sharing AMBA 5 CHI or AMBA 4 ACE coherency, a risky jump from the relatively simple dual-core A9 of the current generation. We can but hope, though!