
Seeing FPGAs Through Nemos's Eyes

betajet
User Rank
CEO
Re: The hard parts about developing with FPGAs
betajet   12/26/2013 7:17:04 PM
JeffL_2 wrote: The two biggest issues about developing with FPGAs I find are 1) the "core" voltage is some low non-standard value which you may have to provide at a fairly high current, and ESPECIALLY 2) sockets for these devices either do not exist or are two orders of magnitude more expensive than the device itself! (Not that current MCUs and MPUs are devoid of this issue either, for reasons that I still find inexplicable.)

1) You want the voltage to be as small as possible to save power, but not so small that you lose performance.  Usually the voltages are standard values like 1.2V, 1.5V, and 1.8V, but generating a non-standard value is usually as simple as adding a couple of 1% resistors.  Needing a lot of power-on in-rush current can be a pain -- especially at very low temperatures -- but I think they're getting better at that.
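To make that arithmetic concrete (a generic sketch, assuming an adjustable regulator with a 0.8V feedback reference -- check the datasheet for your part, since the reference varies):

    Vout = Vref * (1 + R1/R2)
    R1 = 6.49K, R2 = 10.0K (both standard 1% values)
    Vout = 0.8 * (1 + 0.649) = 1.32V

Swapping in different standard 1% values gets you within a percent or two of almost any core voltage.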

2) Sockets are nice for 84-pin PLCC and smaller, but IMO aren't reliable for dense TQFPs and BGAs.  The sockets are expensive because very few people use them, because those who have tried them (like moi) have found that they're much more trouble than they're worth.  If you want access to signals, add some high-density headers.  As for MCUs and MPUs, manufacturers probably find that few customers come begging for larger packages.  Plus, larger packages have longer internal wires (this applies to FPGAs as well), and those longer internal wires add inductance, leading to ground bounce and similar electrical problems.

JeffL_2 wrote: Also if you pick a device that's large enough you could potentially add the VHDL to internally replicate an existing MCU...

The FPGA implementation is going to be a lot slower than a custom-designed CPU.  Plus, the logic cells needed to implement the CPU in an FPGA are probably going to cost a lot more than the CPU, and take more power.  If you don't need much CPU performance, though, a simple soft CPU is quite practical.

JeffL_2 wrote: Also I believe there's way too much "diversity" in the interfaces for these devices, something like the JTAG standard that caught hold for some MCUs isn't all that commonly used for FPGAs...

Most if not all current FPGAs have JTAG.  Xilinx does an excellent job of documenting its JTAG instructions and data so you can do your own programming and debugging using a wide variety of JTAG host devices, including MCU GPIOs.  I haven't looked closely at other vendors.

JeffL_2 wrote: I would NOT say that learning VHDL SHOULD be a problem since the benefits appear to outweigh the learning curve and it's very widely accepted (although I'm no expert in it yet either).

Personally, I prefer Verilog.  Its C-based syntax is more concise than VHDL, which is based on Ada.  Chacun à son goût (YMMV).
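To show what I mean by concise, here's a complete 8-bit counter in Verilog (my own generic sketch, names arbitrary -- the equivalent VHDL needs library clauses plus separate entity and architecture declarations on top of this):

    module counter (
        input  wire       clk,
        input  wire       rst,    // synchronous reset
        output reg  [7:0] count
    );
        always @(posedge clk)
            if (rst) count <= 8'd0;
            else     count <= count + 8'd1;
    endmodule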

JeffL_2 wrote: There's also issues about understanding how a particular device architecture "maps" its resources and how to best "tweak" your design to fit those resources but that probably deserves an entirely different article.

This is indeed a problem with VHDL and Verilog.  You have to write your source code carefully so that the synthesizer generates the hardware you really want, and if you don't get it right the synthesizer may do wildly unexpected things.
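A classic example (a generic sketch, not from the article -- q, d, and en are hypothetical signals): in a combinational always block, leaving out the else means q must hold its old value when en is low, so the synthesizer infers a transparent latch, which is almost never what you wanted:

    // Unintended: q has no assignment when en is low, so a latch is inferred
    always @(*)
        if (en) q = d;

    // Intended: a defined default in every branch, pure combinational logic
    always @(*)
        if (en) q = d;
        else    q = 1'b0;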



Paul A. Clayton
User Rank
CEO
Re: The hard parts about developing with FPGAs
Paul A. Clayton   12/26/2013 8:34:06 PM
JeffL_2 wrote: you could potentially add the VHDL to internally replicate an existing MCU for which compilers and assemblers already exist, but then you'd probably be in violation of someone's copyright, and most one-off projects aren't large enough to justify independently developing BOTH an instruction set and the support tools to develop the code with.

First a niggle: one would almost certainly not be violating copyright with an independently developed implementation, since one does not generally have access to the HDL source code. You presumably meant that you would probably be in violation of someone's patent. There are two dangers here. 1) That your design violates a valid patent. For the kinds of ISAs and microarchitectures likely to be implemented in an FPGA, this is unlikely. 2) That you will be unjustly sued (or threatened with suit) for patent violation. This also seems unlikely. Even ARM, which has the FPGA-targeted Cortex-M1, would generally have little incentive to pursue implementers of its ISA for low-volume internal use. Even apart from the ill-will generated by such actions, the benefit (perhaps a few parties would be frightened or compelled into licensing a core design) seems unlikely to justify the cost. (Trademarks are a different matter, but one does not need to claim that one implemented an ARM, MIPS, or other trademarked name-brand core. Trademarks also lose their power if not enforced, which pressures companies to pursue possible violators more aggressively; patents and copyrights are valid independent of any previous lack of active enforcement.)

I would also argue that producing an ISA definition is not that difficult when the ISA is simple (as appropriate for an FPGA soft core) and similar to established ISAs. If one is willing to accept the limits of GNU tools, even the porting of such tools is not (from what little I have read) overwhelmingly difficult (again assuming a simple ISA similar to existing ones). Of course, it seems odd that one would bother creating a new ISA and implementing a core when there are already cores available for free (unless one considers such part of the fun of the project). (I suspect licenses for the Nios II [Altera] and MicroBlaze [Xilinx] soft cores are not extremely expensive, but I have not looked into it.)

betajet
User Rank
CEO
Re: The hard parts about developing with FPGAs
betajet   12/26/2013 9:47:05 PM
There are a number of tested open-source CPUs available with software support.  I don't know the legal status of any of these (IANAL):

http://en.wikipedia.org/wiki/OpenRISC

http://en.wikipedia.org/wiki/Amber_%28processor_core%29

http://en.wikipedia.org/wiki/Atmel_AVR#FPGA_clones

The Amber core is ARMv2-compatible.  The name has been changed to protect ARM's trademark.  They chose ARMv2 to avoid patents in later ARM architectures.

Paul A. Clayton
User Rank
CEO
Re: The hard parts about developing with FPGAs
Paul A. Clayton   12/27/2013 10:01:15 AM
Broadly there will be 5-6 microarchitecture families, corresponding roughly to a Cortex-M4, Cortex-A7, Cortex-A53/57, Core i5/i7, a 4-12 core Xeon, and a 64-100 core Xeon Phi type HPC part. The instruction set is the Berkeley RISC-V.

It is neat that Berkeley's RISC-V is actually being used. Will the variable length encoding be used?

(I am disappointed about the instruction encoding, particularly with respect to supporting VLE. The length indication encoding is similar to something I thought of [my thought was a slight modification--using two bits like RVC--of a per-parcel end-of-instruction indicator bit, inspired by the similar predecode bit per byte in some x86 implementations; RVC puts those bits in the first parcel], but the placement of register fields is very different in the 16-bit and 32-bit instruction formats. [A tiny side benefit of greater compatibility between the 16-bit and 32-bit encodings could be greater similarity in placement within a parcel and bit pattern between the function field for the R-type format and the opcode field for the I-type format, as well as similar placement within a parcel of opcode field bits for 16-bit formats and function field bits for 32-bit instructions.] The register field packing also works against a simple extension to 64 registers, which might be useful for FP/SIMD. [The alternate encoding that I found would probably not improve decode efficiency significantly, but even trivial weaknesses bother me when I think I could do better.])
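(For readers who have not looked at the encoding: RISC-V indicates instruction length in the low bits of the first 16-bit parcel. A minimal Verilog sketch of the published scheme -- module and signal names are mine -- ignoring the reserved longer-than-32-bit formats:)

    module rv_len (
        input  wire [15:0] parcel,   // first 16-bit parcel fetched
        output wire        is_rvc,   // 16-bit compressed (RVC) instruction
        output wire        is_32bit  // standard 32-bit instruction
    );
        // Low two bits != 11 -> compressed; == 11 (with bits [4:2] != 111)
        // -> 32-bit; the remaining patterns are reserved longer formats.
        assign is_rvc   = (parcel[1:0] != 2'b11);
        assign is_32bit = (parcel[1:0] == 2'b11) && (parcel[4:2] != 3'b111);
    endmodule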

(I tend to disagree with some of the other design choices for RISC-V, but I do not feel I understand the trade-offs even as little as I understand the trade-offs for instruction encoding.)

While RISC-V may not be perfect (even for its design goals--I would be tempted to sacrifice some conceptual and implementation simplicity for other benefits), ISA fragmentation has significant costs (even with highly similar, RISCy ISAs). After watching the lack of progress in the OpenRISC 2k project, it is encouraging to read that RISC-V will be used outside of Berkeley.

KarlS01
User Rank
Manager
A truly "programmable" FPGA design
KarlS01   12/27/2013 11:05:56 AM
To All:  I would like to join in with a different perspective.  There are already soft cores available for FPGAs.  Their reputations boil down to "too big, too slow"; the real point is that an FPGA is not well suited for RISC implementation.  FPGAs are, however, ideal for a truly programmable design.  No, I am quite serious and have a lot of experience, so I will try to explain.

The "back end" of the compiler process where the intermediate language is mapped to a RISC architecfure is the weak link.  RISC uses many instructions with the assumptions that clock speeds can be infinite.  FPGAs have lower clock speed than ASICs of the same generation.  The solution is to reduce the number of cycles to execute HLL statements and to evaluate expressions. 

Some strong points of FPGAs:
  1. FPGAs have block memories with true dual-port capability, practically unlimited interconnect, and 6-input LUTs that can evaluate incredibly complex Boolean expressions in a couple of levels of logic (see the RAM sketch after this list).
  2. IBM used micro-code control for high-end mainframes with great success, and FPGAs have the memory available for a control store.
  3. Program control flow is done by evaluating relational expressions and choosing one of two execution paths.  Very straightforward.
  4. Expression evaluation operators require 2 operands, which can be supplied from a dual-port RAM if all variables and constants are kept in that RAM.
  5. The cycle time per operator is about the same as the typical cycle for the technology.
  6. For expression evaluation, operands are not loaded into registers from memory with the result stored back into memory.  They are local.
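(The RAM sketch promised above -- a generic true dual-port RAM template in Verilog that the synthesizers from both vendors will map onto a block RAM; module name and widths are arbitrary:)

    // Generic true dual-port RAM; synthesis infers one block RAM.
    module tdp_ram (
        input  wire        clk,
        input  wire        we_a, we_b,
        input  wire [9:0]  addr_a, addr_b,
        input  wire [31:0] din_a, din_b,
        output reg  [31:0] dout_a, dout_b
    );
        reg [31:0] mem [0:1023];

        always @(posedge clk) begin        // port A
            if (we_a) mem[addr_a] <= din_a;
            dout_a <= mem[addr_a];
        end

        always @(posedge clk) begin        // port B
            if (we_b) mem[addr_b] <= din_b;
            dout_b <= mem[addr_b];
        end
    endmodule

One port can feed operand A while the other feeds operand B every cycle, which is exactly the two-operands-per-cycle access pattern point 4 describes.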

The hardware design takes a couple of hundred LUTs and 3 block RAMs.

The software is C#: a parser and control-word builder that generates content for the control word memory, plus code that generates the operand memory content.

There are so many ISAs and variations available that if there were an ISA appropriate for FPGAs, chances are that it would exist already.  Going off to design still another one is probably just a waste of time and effort.

Notice that there is no cache.  Cache exists to hide some of the external program memory access time.  External memory is only for transient system data, which can be accessed as needed, probably via DMA to local memory.

 

Sanjib.A
User Rank
CEO
What about the SoCs? Cyclone V OR Zynq
Sanjib.A   12/27/2013 11:25:17 AM
It is great to see so many expert comments arising from extensive experience!  What is your opinion about the recently launched SoCs, for example the Cyclone V SoC from Altera (which has an ARM core built in) or the Zynq from Xilinx?  Has anybody tried those?  Any pros/cons?

KarlS01
User Rank
Manager
Re: What about the SoCs? Cyclone V OR Zynq
KarlS01   12/27/2013 1:27:43 PM
@Sanjib.A:  If you go to the LinkedIn FPGA group, Steve Leibson of Xilinx marketing has several posts about the Zynq.  Adam Taylor has an OS running on one core and bare metal on the other -- I have seen nothing about a real app.  Since the hard core on Zynq is not as fast as an ASIC, I guess they threw in a second core to see if anyone could use it.

They also have the standard mix of ARM interfaces on Zynq.  That may enable usage of some existing MCU tools.

The ARM is still a RISC, but it is a hard core rather than a soft core, so the benefit is not real clear -- except that you get the baggage of a memory controller and two levels of cache, whether you really want them or not.

For performance you would probably go for an optimizing C compiler, but if there is a problem, it is probably not debuggable.

Altera Forums has an SoC category where you might find what issues the users have.

betajet
User Rank
CEO
Re: What about the SoCs? Cyclone V OR Zynq
betajet   12/27/2013 2:03:40 PM
I haven't had a chance to check out Zynq, plus the eval boards are pretty expensive.  There are some cheaper ones coming in 2014, so we'll see.  Given that the Xcell Journal article in 2Q2011 claimed "a starting price below $15", the current chip price is still pretty high.  I noticed at the zedboard.org site that the original Zedboard is still US$395 for up to 5, but higher quantities have an "anti-discount" which raises the price to US$495.  The US$395 price includes "manufacturers' subsidies".  This is JMO/YMMV, but this engineer who is leery of being an "early adopter" wonders whether a manufacturer is having yield problems.

I hope Xilinx learned lessons from the Virtex-II Pro, which had built-in PowerPC cores.  We considered that chip at one time since we were using PowerPC SoCs, but we gagged on the price.  We later went with an IBM/AMCC 405EP and a Spartan-IIE, with a 33 MHz 32-bit PCI bus between them.  That was very cost-effective and worked out really well.

So I'd like to try Zynq some day, but I'm waiting for pricing to get closer to $15.

anon7632755
User Rank
Manager
Re: The hard parts about developing with FPGAs
anon7632755   12/27/2013 6:17:05 PM
A question is asked:

"To be honest, until recently I still had questions like 'Why should one consider FPGA technology when we have well-known microcontrollers available to us'?"

The answer is quite obvious, actually: it's because generally, FPGAs and microcontrollers/microprocessors solve different problems. 

The question our blogger asked is generally asked by folks coming from a host software background, or perhaps from an embedded micro background.

A processor presents a sequential instruction execution model, limited I/O, and a straightforward programming model which allows the engineer to write programs in C or another language. A microcontroller's peripherals and pin-outs are limited to what the chip designers thought would be a good balance between cost and flexibility.

An FPGA is a platform for general digital logic design. The engineer can implement whatever is necessary to meet the design goals. The designer isn't limited to what peripherals were chosen by the chip vendor, so if you need nine SPI ports, each with different word lengths, you design them and add them to the larger system (see the sketch below). If you need a wacky ping-pong asymmetric memory interface to handle sensor data, you design exactly what you need. Pinouts are limited by the device choice; if you need more pins or more logic, you choose a larger device.
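(A sketch of what that flexibility looks like in practice -- a generic, hypothetical SPI transmit shifter, mode 0, MSB first, with the word length as a parameter so each of those nine ports is just another instantiation; resets omitted for brevity:)

    module spi_tx #(parameter WIDTH = 8) (
        input  wire             clk,
        input  wire             load,     // pulse: latch data, start shifting
        input  wire [WIDTH-1:0] data,
        input  wire             sck_en,   // one pulse per SPI bit period
        output wire             mosi,
        output reg              busy
    );
        reg [WIDTH-1:0]           shreg;
        reg [$clog2(WIDTH+1)-1:0] cnt;

        assign mosi = shreg[WIDTH-1];     // shift out MSB first

        always @(posedge clk) begin
            if (load) begin
                shreg <= data;
                cnt   <= WIDTH;
                busy  <= 1'b1;
            end else if (busy && sck_en) begin
                shreg <= shreg << 1;
                cnt   <= cnt - 1'b1;
                if (cnt == 1) busy <= 1'b0;
            end
        end
    endmodule

Nine ports with nine word lengths are then just spi_tx #(.WIDTH(8)) u0 (...); spi_tx #(.WIDTH(12)) u1 (...); and so on -- try getting that out of a fixed-peripheral MCU.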

Of course, the flexibility of the FPGA comes at a cost, or at least with a requirement: you have to be a digital design engineer. It is not like writing firmware for a microcontroller. It is a different skill set. This needs to be understood.

Now of course the FPGA vendors are embracing High-Level Synthesis and "C to gates" methodologies as a way to "ease" the transition from high-level software and algorithm acceleration into hardware. But for mainstream logic design, where one really does need to be concerned about the cost of the FPGA device (as part of the overall BOM), HLS is still a non-starter.

And then there are the Systems-on-a-Chip, which embed a processor hard core with some amount of FPGA fabric. The intent here is a good one, since some applications require both a processor to handle stuff like an Ethernet or USB interface to a host, combined with the custom logic that is necessary to implement the design. Does every design require an SoC? Certainly not, but it's a useful tool for a designer to have (assuming the vendor design tools don't totally suck eggs).

anon7632755
User Rank
Manager
Re: What about the SoCs? Cyclone V OR Zynq
anon7632755   12/27/2013 6:22:41 PM
"I hope Xilinx learned lessons from the Virtex-II Pro, which had built-in PowerPC cores. We considered that chip at one time since we were using PowerPC SoCs, but we gagged on the price. We later went with an IBM/AMCC 405EP and a Spartan-IIE, with a 33 MHz 32-bit PCI bus between them."

We did a bunch of designs with the Virtex-4 FX with built-in PPC, and we had the same experience. The chips were more expensive than a standalone PPC and a smaller FPGA next to it, and the savings in board space going with the V4FX wasn't all that much. We didn't do PCI between the PPC and FPGA, we just did the processor local bus.

And all of that cost was part of the equation. The other part was the continually-"evolving" toolset. Let's go from OPB to PLB to AHB, making the customer rewrite the same custom cores THREE times. Let's provide REALLY AWFUL cores with the tools. And so on.
