You complain about GPIB cables being stiff. True, but being able to screw the connectors in place can be desirable, especially in product test settings. GPIB connectors are far more robust than Ethernet and USB. USB is great for quick connections on the bench, but GPIB cables won't pull out. Yes, Ethernet cables have clips to hold them in, but they are flimsy compared to GPIB.
I would also argue that GPIB is easier to implement than you say. There was a time when GPIB was as hard as you say, but the software has gotten much better.
The problem, of course, is that no one ever upgrades either hardware or software. How many times have I had to add/replace a piece of equipment on a 10-15 year old rack that's being run by an equally old Sun Sparcstation (or similar)? Too many.
Update software? Not likely - requires a complete replacement of the machine and likely a major re-write of the code.
Replace all the stuff with newer stuff that gets along? Why would they do that, it's too expensive!
grrr... I've been spoiled by plug and play. I don't much use USB for the reasons you state, and on the rare instance I've had an Ethernet cable go bad, it was a lot easier to replace.
Even an Ethernet-to-serial converter box is preferable to GPIB... To me, anyway.
I actually did that once. I put (nearly) an entire rack into a single NI PXI chassis... For about 1/3 the cost, and a lot less cabling.
Yeah, I like the PXI stuff, but if that's not an alternative, I'll still stick with something like Ethernet. Even USB isn't too bad in some cases, especially if the connectors are not accessible by monkeys... (I've seen them screwed down with special hardware)
Very late response, just stumbled across this discussion.
I work for a company that still sells gpib, for Linux and Windows. When people claim things are flakey, almost always they're using cabling way out of spec. Originally it was 2 meters per load (generally per device), but in 1987 it was cut back to 1 meter per load, when the speed was jacked up to 1 MB/sec. After all these years, it's still astonishing how many people get it wrong. I've seen it wrong in HP/Agilent manuals, in NI documentation, elsewhere. People want the 'high' speed and the longer cabling, and get upset when it doesn't always work. There's actually quite a bit of margin in the definition, but some people always push it until it breaks.
And yes the plastic clips on Ethernet cables, and the friction fit of USB just aren't appropriate for industrial control applications. Setting dip switches for addresses can be a pain, but it pales in comparison to the frustrations of getting a LAN device working in some environments, particularly when the IT department is half way around the world.
Ha! I recently did a 10 minute presentation on GPIB for my degree. It was shocking to discover some students had never heard of GPIB before!! I did, however, end the presentation with the suggestion that VME and VXI could be taking GPIB's place in the industry.
HPIB was a very sound interface design that was clearly much ahead of its time. It endured generations of instruments and digital electronics technologies.
Designing interfaces for HPIB was a breeze, with clearly defined addressing, interrupt and talk/listen paradigm.
Of course, the address space is rather small, fixed addressing for nodes is limiting, and auto-discovery address resolution protocols are much more flexible, but HPIB networks still work very reliably.
A piece of sound electrical engineering from the 60's that still inspires my designs of embedded systems of today.
VXI was adopted by the mil/aero/defense business, where it remains today. Commercial industries either never tried it because of size/cost or gave up because of initial interoperability issues. VXIplug&play (now IVI) fixed that, but then came PXI, which was smaller, less expensive, and more interoperable, particularly because it was all made by National Instruments. Over time, PXI and now PXIe have become very popular for manufacturing test. Performance now rivals traditional box instruments except for power supplies.
Do Windows 7 and 8 have GPIB support? It's hard enough to get parallel port support for them.
I know that our company has moved toward using USB to GPIB adaptors rather than plug in cards so maybe those work. Or at least the person who did the setup made them work....
We have some older equipment in some test setups where we use LabVIEW test scripts to run a test using GPIB. We've got dedicated computers and data loggers for those setups so we don't have to go to the pain of reconfiguring. Most of these setups are thermal chambers which are still in almost constant use even with the new chambers we got in the last couple years...
No operating system has ever included support for GPIB. Well, maybe some HP controllers when GPIB (HP-IB) was first developed. GPIB interface manufacturers have always provided drivers for the hardware and still do.
If some of you folks think GPIB is a dinosaur, I wonder if there's still anyone around who even remembers HP-MSIB (Modular System Interface Bus)? This was introduced by HP out of Roseville just as GE was trying to define the CASS test system for the Navy, and HP had used MSIB for several of the instruments in the RF/microwave test rack. I can barely recall it now, but one of the instruments in the chassis served as a kind of protocol "bridge" so you could get to various MSIB components via GPIB. I was brought in under contract on a team down in Huntsville, AL that wrote drivers in 68000 assembly language to allow multiple components on the GPIB and the MSIB to be composed into "virtual instruments" which could be accessed in a custom variant of the ATLAS language. In many ways it was the ultimate "thankless task": GE was WAY behind schedule and wound up closing the Huntsville facility almost before the solder had cooled and transitioning the project to the "production phase" at Martin Marietta. It turned out that the entire PREMISE of "virtual instruments" actually DOOMED the scope of the project before it even got off the ground. What it demonstrated was that you could really extend the value of the capital you had poured into generating test programs in ATLAS by assembling instruments that met the test scenarios almost "on the fly", so the need for large, complex racks with high capital spend was obsolete before the project was even finished, and unfortunately for GE the Navy realized it as well! Ah yes, the "good old days"?
I never used it myself, but I thought HP-IL (HP Interface Loop) was pretty cute when I read about it. It's a token ring topology designed for medium speed communication between various controllers -- including battery-operated programmable calculators -- and test equipment. The links are transformer-coupled, so you don't need a common ground. The transformer turns ratio matches 5V logic with relatively weak current to low-voltage high-current differential signals for transmission on the links. Connectors and cables have gender so you can't hook it up wrong [*]. Nice piece of electrical engineering, I thought.
[*] after reading Jonny's comment below, I'd like to insert "without a concerted effort" :-)
Yes, I have used HP-IL!
When I joined HP Scientific Instruments Division, several of our lab instruments had HP-IL.
It was a very clever 2 wire differential signaling that simplified lab equipment networking of disk drives, printers, chromatographs, analyzers, to build test setups and analytical systems.
I worked as a systems engineer for petrochemical applications, amongst other things, and I used the HP5890 chromatograph with the HP7673A sampler robot, with chromatography integrators.
It had no-brainer D-shaped connectors that made it easy for anybody to set up the network, with automatic plug-and-play address discovery.
However, one sales manager once managed to connect the cables inverted by plugging the D-shaped connector in upside-down. It was a nightmare to discover why the network was completely disrupted.
Since then I've learned not to underestimate users' capacity for misusing perfectly designed interfaces.
I fully agree with the article. GPIB has to die in production environments. Its very narrow use case makes everything horribly expensive, with long delivery times, etc.
But it seems to me that a lot of people don't realize that when you switch to USB (USBTMC) or Ethernet (LXI), you only switch the physical interface. The IEEE 488.2 and SCPI standards are still valid and will remain valid for decades to come. So all your knowledge of how to talk to a measuring device is still true; you are just using another plug on your computer :-)
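A small sketch of that point: with a VISA library such as PyVISA, only the resource string naming the transport changes, while the SCPI query itself stays identical. The addresses below (GPIB address, USB vendor/product IDs, host IP) are hypothetical examples.

```python
# Sketch: the SCPI command set is transport-independent; only the VISA
# resource string that identifies the instrument changes.
# All addresses below are made-up examples.

def visa_resource(transport: str, **kw) -> str:
    """Build a VISA resource string for common physical interfaces."""
    if transport == "gpib":
        return f"GPIB0::{kw['address']}::INSTR"                        # classic IEEE-488
    if transport == "usb":                                             # USBTMC
        return f"USB0::{kw['vid']}::{kw['pid']}::{kw['serial']}::INSTR"
    if transport == "lan":                                             # LXI / VXI-11
        return f"TCPIP0::{kw['host']}::INSTR"
    raise ValueError(f"unknown transport: {transport}")

# With PyVISA, the same IEEE-488.2 query works over any of them, e.g.:
#   import pyvisa
#   inst = pyvisa.ResourceManager().open_resource(
#       visa_resource("lan", host="192.168.1.5"))
#   print(inst.query("*IDN?"))   # identical query on GPIB, USB, or LAN

print(visa_resource("gpib", address=12))
print(visa_resource("lan", host="192.168.1.5"))
```

The test code that sends `*IDN?`, `MEAS:VOLT:DC?`, and friends is untouched; the only thing you edit is the address string.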
In some cases that's true, in some cases not. Many vendors (especially NI) have made that set of drivers available on the software side for (relatively) easy porting of legacy software to new equipment.
However, there are plenty of devices out there that each require their own custom driver - which is also worrisome in the case of long-term test equipment. Better for things to have well-defined software interfaces that use industry-standard hardware interfaces and protocols. This way you can upgrade when necessary, even if at an increased cost.
While I share much of the sentiment regarding GPIB and its disadvantages, I have to disagree with the conclusion that it is ready to die.
GPIB's ace is latency. At around 30 times lower than Ethernet and 4 times lower than USB, GPIB still wins when speed is critical and data transfer sizes are small. This is generally the case in production testing. While 1000 microseconds of latency does not seem like much, a test sequence for a complex wireless device may have up to 20,000 measurement transfers of a few bytes each. 1000 us of latency each time adds 20 seconds of dead time to the test sequence, reducing throughput and increasing test cost by as much as 20%.
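The arithmetic behind that dead-time figure can be sketched directly. The per-transfer latencies below are derived from the commenter's rough ratios (Ethernet ~1000 us, USB ~4x lower, GPIB ~30x lower), not from measurements.

```python
# Dead-time estimate for a production test sequence, using the comment's
# illustrative figures: 20,000 small transfers, Ethernet ~1000 us per
# transfer, USB ~4x lower, GPIB ~30x lower. Not measured values.
transfers = 20_000
latency_us = {
    "ethernet": 1000.0,
    "usb": 1000.0 / 4,    # ~250 us
    "gpib": 1000.0 / 30,  # ~33 us
}

dead_time_s = {bus: transfers * us / 1e6 for bus, us in latency_us.items()}
for bus, t in dead_time_s.items():
    print(f"{bus}: {t:.1f} s of dead time")
```

Ethernet's per-transfer overhead alone adds the 20 seconds cited above, while GPIB's total stays under a second; for short, chatty transfers the link bandwidth barely matters.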
National Instruments has a number of papers on this subject on its website: http://www.ni.com/white-paper/3509/en/#toc2
PXI has the advantage of very low latency and high-bandwidth PCI Express, which makes it a great choice for speed-critical testing such as production test. For those using discrete instruments who are concerned about test times, GPIB is still the way to go.
Kudos to Hewlett-Packard for developing an interface that has endured for over 40 years. Calls for its demise are somewhat premature.
Three decades ago, my home and lab computers required a GPIB interface to the external hard drive and certain other peripherals. As noted, the cable was expensive, inflexible, and short, and it required a costly interface. The ability to stack and daisy-chain the (bulky) connectors was an advantage. I'd say that for a consumer, USB has rendered the GPIB interface obsolete and irrelevant. Computer manufacturers seem to agree (indeed most laptops are thinner than the connector). In a highly technical instrumentation lab where latency rules, I'll concede they may wish to continue connecting their test and measurement devices with GPIB interfaces.
The problem is that the equipment on the other side also needs thumb screws.
I've played a bit with USB high-retention-force connectors (available for standard size A and size B); they help. Even better, Amphenol makes a locking type A (PDF!) connector that works with any standard type A USB cable -- but neither type is very common, although I have seen some industrial equipment advertised with the high-retention connectors.
For one customer, we had to add a little machined (which means expensive!) bracket next to the connector so they could cable-tie the USB cable to the bracket.
"GPIB continues to amaze people with its persistence. While so much in the tech world is very fast to change to newer, cheaper, and better technology, GPIB remains surprisingly entrenched in the test and measurement arena."