I program in C++ .NET, C#.NET, assembler, and VB6. I am currently designing a PCB targeted at automotive M2M.
I am looking at a built-in self-test for the product. I do not have any 'battle' experience with FPGAs or ASICs to speak of. Can an FPGA/ASIC be employed as the core of the built-in self-test structure?
Any pointers or comments would be most appreciated.
@Ash88: Which one is getting more attention to enhance and improve and which one do you think will dominate :)
Historically there's been inertia when it comes to moving forward. When Verilog, VHDL, and synthesis first came out, designers said that they could create better designs by hand, working at the gate level. But that's only true when you are working with a limited number of gates. When you are dealing with 100,000+ gates, you can experiment with your RTL and synthesize it much quicker.
Now we have C/C++ and HLS (which, as I say, is used to generate VHDL/Verilog as output) -- some people are reluctant to use it, but others are using it to create really REALLY big designs...
@Ankit: "how all programmable devices consume less power?"
This is a complicated question. An ASIC/SoC uses the least power and gives the best performance, but it costs millions of dollars and takes at least a year (maybe two) to design and build. An FPGA consumes more power and offers less performance than an ASIC/SoC, but you can implement a design much faster and cheaper.
A microprocessor/microcontroller is really cheap, and great at implementing decision-making software, but it's very inefficient at performing algorithmic data-processing tasks. An FPGA can perform algorithmic data-processing tasks at a lower clock rate using less power because it can do things in a massively parallel fashion.
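To make the "lower clock rate, same throughput" point concrete, here's a toy back-of-the-envelope model (the FIR filter and all the numbers are made-up assumptions, not from any real device):

```python
# Toy model: the clock rate needed to hit a target throughput when you can
# complete 'parallel_units' operations per cycle. A processor does roughly
# one operation at a time; FPGA fabric can do many in parallel.

def required_clock_hz(samples_per_second, ops_per_sample, parallel_units):
    """Clock rate needed so that all operations complete in time."""
    total_ops_per_second = samples_per_second * ops_per_sample
    return total_ops_per_second / parallel_units

# Hypothetical 16-tap FIR filter at 1 M samples/s: 16 multiply-accumulates
# per sample.
cpu_clock  = required_clock_hz(1_000_000, 16, parallel_units=1)   # one MAC at a time
fpga_clock = required_clock_hz(1_000_000, 16, parallel_units=16)  # 16 MACs in parallel

print(cpu_clock)   # 16 MHz needed sequentially
print(fpga_clock)  # 1 MHz suffices in parallel
```

Lower clock rate is what buys you the lower dynamic power mentioned above.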
Thank you Max and Brian. Unfortunately I will miss tomorrow's live broadcast session on Tools and Methodologies, my area of expertise ... due to a late schedule change. Dang ... and I may not even be able to get back 'online' quickly enough for the post-broadcast Q&A ... but I will listen to the audio portion. Thank you for taking the time to address the Q&As.
HLS refers to a form of synthesis technology; RTL refers to a level of design abstraction. HLS takes a C/C++ description and generates RTL (in the form of Verilog or VHDL) -- this RTL is then "consumed" by regular logic synthesis.
Personally I find SystemVerilog quite comfortable to work with (my experience is mostly with synthesis, rather than verification). It has enough support for low-level stuff (down to the transistor level) and high-level abstractions (even structured data types and object orientation). Add to that general industry support (and investment) and you have a clear winner. I see the subject of Verilog vs. VHDL in the same way as the old C vs. Pascal conflict from the 80s.
In today's lecture, slides 4, 5, and 6 ... and later slide 12 ... were a bit unclear to me. I agree with what was said regarding slide 4 -- that at the end of execution, both registers will contain the value 6. However, it appears to me that this is NOT the case for slide 5 with a common clock (it seems the registers will swap values with each clock cycle). And what about slide 6 ... does it depend on whether the last 2 instructions are executed sequentially or concurrently? Similarly, for slide 12 (& slide 13). With a common clock, what happens to blocks B, C, & D during the first 3 clock cycles until the pipeline is "loaded"? Are we to assign some initial conditions to the appropriate registers (during power-on reset or equivalent) such that whatever values reside on the output registers are loaded into the successive register -- until the first 4 clock cycles are finally achieved? I am an analog engineer, dabbling here. Thanks.
@lleiva: "the synthesis from C/C++ is effective in terms of time and area?"
The big advantage of HLS (synthesis from C/C++) is that it allows you to explore lots of different implementation options -- you can make something very area-efficient (by resource sharing) but have lower performance, or use a lot of area (resources in FPGA terms) and have higher performance -- like all engineering it's a tradeoff.
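The resource-sharing tradeoff can be sketched with a toy scheduling model (the 8-multiply kernel and the one-area-unit-per-multiplier costs are illustrative assumptions, not output from any real HLS tool):

```python
import math

# Toy model of the HLS area/performance trade-off: a kernel needing
# 'total_mults' multiplications, scheduled onto 'n_mults' shared hardware
# multipliers.

def schedule(total_mults, n_mults):
    latency_cycles = math.ceil(total_mults / n_mults)  # time-share the multipliers
    area_units = n_mults                               # area grows with resources
    return latency_cycles, area_units

# Explore the design space for an 8-multiply kernel:
for n in (1, 2, 4, 8):
    latency, area = schedule(8, n)
    print(f"{n} multiplier(s): {latency} cycle(s), area {area}")
# 1 multiplier  -> 8 cycles, small area (slow, cheap)
# 8 multipliers -> 1 cycle,  large area (fast, big)
```

In a real HLS flow you steer this with directives/pragmas rather than by hand, but the underlying trade is the same.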
@Cristian: "Is EDA community waiting/seeking for a new better HDL?"
Hard to say -- there are some issues with VHDL and Verilog that are confusing -- like "assignments" -- you have to remember that these languages are 20 years old now. Languages like MyHDL are based on newer concepts that make them easier to understand and use -- but there's a lot of "inertia" when it comes to moving to something new
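One classic example of the confusing "assignments" is Verilog's blocking (=) vs nonblocking (<=) assignment. Here's a toy Python model of the two semantics for a two-register swap (a sketch of the idea, not a real simulator):

```python
# Blocking vs nonblocking assignment semantics, modeled in plain Python.

def clock_blocking(a, b):
    # Blocking: statements execute in order, like ordinary software.
    a = b        # a takes b's value...
    b = a        # ...so b just gets its own value back
    return a, b

def clock_nonblocking(a, b):
    # Nonblocking: all right-hand sides are sampled first, then all
    # registers update together at the clock edge -- so the values swap.
    next_a, next_b = b, a
    return next_a, next_b

print(clock_blocking(3, 6))     # (6, 6) -- no swap
print(clock_nonblocking(3, 6))  # (6, 3) -- swap every clock
```

The nonblocking case is exactly the "registers swap values with each clock cycle" behaviour that real clocked hardware exhibits.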
@lucava: Re your question: "why are the different blocks not loaded with data at clock #1?"
This is a good question -- in fact they are all loaded at clock #1; the problem is that before clock #1 they all contain random "stuff" -- on clock #1 we load the "good" values from the inputs into block A, and the random/unknown values from block A into block B ... as we keep on clocking things we get rid of the unknown/random values.
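The pipeline-fill behaviour can be modeled in a few lines of Python (four stages A..D, with None standing in for the simulator's unknown 'X' values; the input values are arbitrary):

```python
# Toy model of a 4-stage register pipeline filling up after power-on.

def clock(stages, new_input):
    # On each clock edge every register loads its predecessor's old value.
    return [new_input] + stages[:-1]

stages = [None, None, None, None]   # A, B, C, D: unknown junk at power-up
inputs = [10, 20, 30, 40]

for i, x in enumerate(inputs, start=1):
    stages = clock(stages, x)
    print(f"clock #{i}: {stages}")

# clock #1: [10, None, None, None]  <- B, C, D still hold unknowns
# clock #2: [20, 10, None, None]
# clock #3: [30, 20, 10, None]
# clock #4: [40, 30, 20, 10]        <- pipeline fully "loaded"
```

A power-on reset that forces every register to a known value (usually 0) just replaces the None values with zeros -- the fill latency is the same.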
Have been writing embedded software in an ASIC/SoC design environment for about 11 years. The software has been targeted at hardware validation, hardware debug, as well as production software for the final end product.
A testbench is simply an HDL module that instantiates the module you want to test and provides input stimulus. It's just another .vhd or .v file. Xilinx ISE can generate the basic skeleton for you. Maybe Quartus can as well.
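The testbench pattern itself is language-agnostic. Here's the idea sketched in Python instead of VHDL/Verilog -- the 2-bit adder "DUT" is a made-up example, and a function stands in for an instantiated module:

```python
# Testbench pattern: instantiate the device under test (DUT), drive
# stimulus, check the outputs.

def dut(a, b):
    """Device under test: a toy 2-bit adder with carry-out."""
    total = a + b
    return total & 0b11, total >> 2   # (sum, carry)

def testbench():
    # Exhaustive stimulus, like nested for-loops in an HDL testbench.
    for a in range(4):
        for b in range(4):
            s, c = dut(a, b)
            assert c * 4 + s == a + b, f"mismatch at a={a}, b={b}"
    print("all vectors passed")

testbench()
```

In a real HDL testbench you'd also generate a clock and a reset, and the "assert" would be a self-checking compare against expected values.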
The difference between synthesis and compilation: Compilation turns your code into an ordered sequence of instructions run by the processor. Synthesis turns your HDL into what is essentially physical hardware - more equivalent to the processor itself than the instructions being run on it.