@anon7632755: OK, now I'm confused. Judging by some of the other posts here, there's at least one other person who's getting worked up, but I'll settle for "confused".
I never denied USB2 works at 480 MHz, or 480 Mbps. What I DID say, way back in my first post, is that I need a simple FPGA to downsample a 120 MHz x 14-bit ADC in order to make effective use of a USB2 connection. I also pointed out that 480 Mbps is an ideal - once you take overhead into consideration, a more achievable goal for ACTUAL DATA TRANSFER is 300 Mbps.
When the data is downsampled, it goes to a "buffer" and is read by [whatever is handling the USB part]. If the downsampling is inadequate, you risk writing new data faster than old data gets cleared, resulting in buffer overflow [lost data].
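To put numbers on that, here's a back-of-envelope sketch in Python. The 16-bit word packing and the 300 Mbps goodput figure are assumptions taken from this discussion, not measurements:

```python
import math

# 120 MHz, 14-bit ADC; assume each sample is packed into a 16-bit word
# before hitting USB (a common, if slightly wasteful, choice).
sample_rate_hz = 120e6
bits_per_sample = 16
usb2_goodput_bps = 300e6   # practical USB2 throughput, per the thread

raw_bps = sample_rate_hz * bits_per_sample          # 1.92 Gbps off the ADC
decimation = math.ceil(raw_bps / usb2_goodput_bps)  # minimum downsample factor

print(raw_bps, decimation)  # 1920000000.0 7
```

So with 16-bit packing the FPGA has to decimate by at least 7 just to avoid overrunning the buffer; pack the 14-bit samples tightly and it's at least 6. Either way, something upstream of the USB interface has to throw data away or the buffer overflows.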
The confusing thing is - you seem to agree with this. Yet you seem to be harping on about 480MHz, and I don't know why. You take my attempt to reduce the data bandwidth, and deride it as "you don't understand USB".
I have to wonder if I'm being baited to respond. If so, judging by the forum threads I'm not the only one. I hesitate to use the term "Troll" but you sir/madam, are making some confusing / inflammatory posts.
Penguin: "Side note: @anon7632755, yes high speed USB is always 480 Mbps. IF the host controller isn't busy [mouse, keyboard etc. all eat USB bandwidth], it might even all be available to one device. That still leaves the overhead of a packetized protocol; Header, CRC and SOF (Start Of Frame) data all eat into actual throughput. There are a few papers at IEEE talking about practical throughput; they give 300 - 305 Mbps as a limit before you start losing data. Would you like me to look up the reference for you?"
I happen to have the USB 2.0 spec right here.
But again, you're missing the point. The wire speed is fixed. Bits toggle on it at 480 MHz.
The data throughput varies for a bunch of different reasons, notably bus utilization, the different transaction types, host buffering and drivers, and all of that. Even the hub type makes a difference (look up Single Transaction Translator vs Multiple Transaction Translator when merging multiple full-speed endpoints onto a high-speed bus in a hub).
And when you say "start losing data," that doesn't happen because of the wire rate. It happens because a host or device can't sink or source the data promised.
To start with: "If you've been doing the same job in the same industry for 20 years then you've proven you're not worth promoting." What, so you think that someone can't develop a valuable skill and continue doing it for most of a career? Why is that a bad thing?
And what do you mean, "same job in the same industry?" How do you know what I've done over the course of my career, and for whom?
I like what I do, and the boss values the work and we ship product to happy customers and everyone is happy. When previous bosses and companies no longer valued the contribution, I left and found employment where such work is valued. So, no, I haven't been locked up in the same lab for 20 years. (Believe me, I'm not your stereotypical engineer.)
So, really, you're just being silly. Or, have you not always been a compliance engineer?
Anyways: You're NOT a design engineer. You DON'T do FPGA design for a living. Nor do you do embedded firmware (which I do, too). You don't know what sort of stuff I work on, and to say that I'm "making up the specifications as I go along" is patently absurd.
This thread (and the entire "Programmable Planet" section here) is about FPGA design. It's about techniques and skills and tools. You have absolutely no experience with any of this. So how can you have opinions when you have no information upon which to base them? Go back. Re-read your posts. You've written a lot of stuff which is just plain wrong. You've been called out on it, and yet you can't say, "Uh, yeah, you're right, I was wrong."
So please, just stop, because you're embarrassing yourself. First rule of holes: when you find yourself in one, stop digging.
If you want to learn, then perhaps you should consider: less typing, more reading.
@KarlS01: What - you see my first steps into adding an FPGA to a circuit, and you want me to do the whole thing in a single [larger] FPGA? Ewww, and Errrk.
Ewww: Remove a functional low pin-count CPU with High Speed USB support [Atmel SAM3U series], and add a 256+ pin BGA that costs more, needs to be configured, and needs a 4 [or 6] layer board just to get to all the pins?
Errk: Those big chips aren't cheap. Come to think of it, neither are 4+ layer boards.
No, I think us raw beginners need to stick with pre-made boards when possible. If you need to do something fancy [like my ADC], keep it as small and simple as possible, then feed it to an established system. Cheaper, easier to configure / fault-find. If I could get a Logi-Bone [see Kickstarter], strip out all the PMOD connectors and have even a single [fast] ADC channel like the Red Pitaya, I'd have plenty to play with - on a BeagleBone.
Side note: @anon7632755, yes high speed USB is always 480 Mbps. IF the host controller isn't busy [mouse, keyboard etc. all eat USB bandwidth], it might even all be available to one device. That still leaves the overhead of a packetized protocol; Header, CRC and SOF (Start Of Frame) data all eat into actual throughput. There are a few papers at IEEE talking about practical throughput; they give 300 - 305 Mbps as a limit before you start losing data. Would you like me to look up the reference for you?
"In the FPGA world you get maybe 20-30 pages to describe an entire FAMILY of parts, with lots of "weasel words" like "the lower end of the family doesn't support the entire complement of tools" and if you need to know more you just contact your friendly local tech representative."
That is most certainly NOT TRUE. Let's use Xilinx Spartan-6 devices as an example. Go here for the list of S6 user guides and other documents. There are THOUSANDS of pages of documentation for this device family. And it's conveniently broken down into functional sections, such as the clocking infrastructure, BRAMs, the I/O stuff including the SERDES, and the DSP block.
Really, the documentation is quite comprehensive. (I have some particular nits to pick but that's not relevant here.) "In other words these guys figure they're mostly not just selling to but DOCUMENTING to a "Xilinx shop" or an "Altera shop", they're just telling you there's some parts on which you can't use the entire complement of tools YOU'VE PROBABLY ALREADY BOUGHT."
(Again, this is for Xilinx; Altera and MicroSemi are similar.) Most users can get work done using the WebPack. The only parts not supported by WebPack are the super-duper big devices that the vast majority of users won't even consider.
Of course. If I buy ARM tools from NXP, why would I expect those tools to support a Silicon Labs device? "Go back and look at the original premise of the article, it's to introduce the premise of development with FPGAs to people largely unfamiliar with them, and particularly to compare it with other types of development the reader may already be familiar with."
And the premise is somewhat misleading, because as I've noted elsewhere in this thread, FPGAs and processors are intended to solve different problems! The FPGA designer needs a serious background in digital logic design and doesn't need to know anything about C or assembly-language programming.
The processor firmware guy doesn't need to really know anything at all about digital electronics. He doesn't need to know about creating and meeting timing constraints. He doesn't need to know about crossing clock domains. He doesn't need to know about power consumption. He doesn't need to know about transmission lines on PCBs.
So, really, the two disciplines are for the most part entirely separate. "For you to imply "no one who knows what they're doing could possibly think of using brand X in a design because it's not 'mainstream' enough for me to use" only reinforces this attitude and does nothing to clarify anything about the original issue."
That is not at all what I said. You simply don't understand what is meant by "mainstream." AGAIN, ALL of the FPGA vendors have settled on JTAG as a standard way of programming and debugging their devices. Call your local Lattice guy and ask. Call the MicroSemi guy and ask. Or, better yet, spend five minutes on each vendor's web site. It's all there.
FPGAs which are "outside of the mainstream" are the space-grade things from Aeroflex and MicroSemi, which I would guess you'll never use, and anyways if you're spending $15,000 on a single FPGA, then you're playing in a different game. QuickLogic with their OTP devices are also in this category. They're "outside the mainstream" because they're intended for specialist applications. "The very last project I worked on was certifying the software for a safety-critical subsystem containing 10 PowerPCs and 8 DSPs scheduled in 1-msec intervals which communicated over military Firewire in real-time. At least the hardware people who devised this monster had the presence of mind NOT to try and stuff all of this into a couple of SoCs! (The "glue logic" that held them together however WAS stuffed into FPGAs, although not by me.)"
So, in other words, you know a little bit about software, but you're not an actual developer (you "certify the software," which means you push paper), and you know nothing about FPGAs, and then you have the balls to say to people who've been doing FPGA design for 20 years that our opinions and experience are wrong. Please, give us all a break.
When you've got a handful of FPGA designs under your belt, where you've done the design, written the HDL, verified it in simulation, built boards and shipped product, then perhaps we'll take your opinions seriously.
"Xilinx has something called ChipScope which (if I understand it correctly) makes it easy to view internal signals. I don't know how it uses JTAG."
ChipScope is simply a logic analyzer that is embedded in the FPGA. It uses BRAMs for sample storage. You use the Core Inserter to select which signals you want to analyze, just like when you hooked up the pods of your HP 1660 to a board. Those signals are sampled on the same clock they run on; in other words, signals are sampled synchronously. You can set it to trigger on events and patterns and pretty much everything you'd do on your 1660.
JTAG is used as the transport between the FPGA and the host computer, because it's convenient and always available. The ChipScope analyzer software runs on the host PC and manages all of that, and basically it presents the logic-analyzer interface most of us are familiar with.
If you do FPGA design for a living, the cost of the ChipScope license (or the cost of Altera's equivalent) is well worth it.
It's not at all a substitute for simulation at the HDL level, but when something isn't working, it's good to have it.
The Atmel devices are certainly not mainstream -- I think they bought someone else's deprecated line. Lattice's FPGAs use JTAG (their small PLDs are another story). Xilinx, Altera and MicroSemi all use JTAG.
No, I don't have a bias towards Xilinx. I don't know why you think that. "Also having a group of pins labeled 'JTAG' on a pinout doesn't REALLY mean the vendor really supports that interface for doing anything useful, and whatever it does represent it CERTAINLY doesn't hope to imply that a person could just plop down a few bucks on a Macraigor Wiggler or Demon or Raven and start debugging code, like you've been able to do for DECADES with standard embedded MPUs, in fact for FPGAs it doesn't really imply ANYTHING."
Honestly, you don't know what you're talking about. OK, so the FPGA vendors don't support the Wigglers and the software used by them, so you have to buy the NOT EXPENSIVE (as in "a few bucks") dongle from the vendor. Honestly, I don't see why this is a problem.
And considering that I use JTAG to program and "Debug" FPGAs every working day, I don't understand what you're on about. Maybe if your experience with the devices was practical and not theoretical, you'd understand. As for proto setups in the lab, in the Real World, where professionals do this for a living, part of the prototype is a board spin. And when we build prototypes, we don't build just one, we usually do at least three first-article boards. This way, if that lightning strike happens, we are not screwed.
And you mentioned "I'll agree over something close to 144 pins ..." which tells me you haven't built anything very complicated. Let me put it this way: when your design has 16 100-MHz/16-bit ADCs that need to be read and their data processed and then passed on to a host, then you'll quickly realize that a 144-pin QFP FPGA isn't an option.
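A rough pin budget makes the point. The one-clock-per-ADC and host-interface figures below are my assumptions, just to show the scale of the problem:

```python
# Pin budget for the hypothetical 16-ADC design described above.
n_adcs = 16
bits_per_adc = 16
data_pins = n_adcs * bits_per_adc   # 256 data pins alone
clock_pins = n_adcs                 # assume one forwarded clock per ADC
host_pins = 32                      # assumed host/memory interface width

total_io = data_pins + clock_pins + host_pins
print(total_io)  # 304 user I/O before config, power and ground
```

A 144-pin QFP has well under 144 usable I/O once power, ground and configuration pins are accounted for, so the data pins alone blow the budget before you've connected anything else.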
And this product line doesn't have 100K-unit production runs, so I don't know why that's even relevant.
You keep reading the FPGA vendor literature and YES, they all push these super-huge devices of which they sell a dozen a year to specialist customers. And the reality, which we keep telling the FAEs, is that the Spartan 3AN-50 (which is in a QFP) and 200 are excellent jellybean devices for a whole bunch of varied applications. We don't need the big devices, we don't care about them, we don't want to hear about them.
Now one thing the small company can do to minimize costs is to settle on a jellybean FPGA. Is the 200AN "too big" for a lot of things? Probably. But if you can buy them by the tray instead of individual piece parts, the price becomes somewhat less of an issue.
If those constant coefficients are truly constant (not programmable), check out Canonic Signed Digit (CSD) arithmetic, which also replaces multipliers with adders & subtractors. You're essentially hard-wiring those look-up tables into the logic that decides whether to add, subtract or do nothing, based on the non-zero bits of the CSD data.
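As an illustration of the idea, here's a minimal Python sketch. `to_csd` uses the standard modulo-4 recoding, and the shift/add/subtract loop in `csd_mul` is exactly the structure you'd hard-wire into the fabric (the function names are mine):

```python
def to_csd(n):
    """Recode a positive integer into CSD digits {-1, 0, +1}, LSB first.
    No two adjacent digits are non-zero, which minimizes add/sub stages."""
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n == 1 (mod 4), -1 if n == 3 (mod 4)
            n -= d            # n is now even
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def csd_mul(x, coeff):
    """Multiply x by a constant using only shifts, adds and subtracts."""
    acc = 0
    for shift, d in enumerate(to_csd(coeff)):
        if d == 1:
            acc += x << shift
        elif d == -1:
            acc -= x << shift
    return acc

# 7 = 8 - 1: two non-zero digits instead of three in plain binary.
print(to_csd(7))      # [-1, 0, 0, 1]
print(csd_mul(5, 7))  # 35
```

In hardware each non-zero CSD digit becomes one adder or subtractor fed by a fixed wire shift, so a coefficient like 7 costs one subtractor instead of a multiplier.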
And speaking of FIR filters, don't forget to exploit coefficient symmetry. That probably goes without saying, but that mistake still sometimes gets made and results in FIR designs doing twice as many multiplications as are actually necessary.
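For completeness, folding a symmetric FIR looks like this (plain Python sketch; the coefficients and sample window are made-up illustration values):

```python
def fir_direct(samples, coeffs):
    """Straightforward dot product: one multiply per tap."""
    return sum(c * s for c, s in zip(coeffs, samples))

def fir_folded(samples, coeffs):
    """Exploit coeffs[i] == coeffs[N-1-i]: pre-add the two samples that
    share a coefficient, halving the number of multipliers needed."""
    n = len(coeffs)
    half = n // 2
    acc = 0
    for i in range(half):
        acc += coeffs[i] * (samples[i] + samples[n - 1 - i])
    if n % 2:                       # odd-length filter: lone center tap
        acc += coeffs[half] * samples[half]
    return acc

coeffs = [1, 3, 5, 3, 1]            # symmetric, 5 taps
window = [2, 4, 6, 8, 10]
print(fir_direct(window, coeffs), fir_folded(window, coeffs))  # 78 78
```

Five taps, three multiplies instead of five; in an FPGA that's the difference between three DSP slices and five, per output sample.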