Very late response, just stumbled across this discussion.
I work for a company that still sells GPIB, for Linux and Windows. When people claim things are flaky, almost always they're using cabling way out of spec. Originally it was 2 meters per load (generally per device), but in 1987 it was cut back to 1 meter per load, when the speed was jacked up to 1 MB/sec. After all these years, it's still astonishing how many people get it wrong. I've seen it wrong in HP/Agilent manuals, in NI documentation, elsewhere. People want the 'high' speed and the longer cabling, and get upset when it doesn't always work. There's actually quite a bit of margin in the definition, but some people always push it until it breaks.
And yes, the plastic clips on Ethernet cables and the friction fit of USB just aren't appropriate for industrial control applications. Setting DIP switches for addresses can be a pain, but it pales in comparison to the frustrations of getting a LAN device working in some environments, particularly when the IT department is halfway around the world.
"GPIB continues to amaze people with its persistence. While so much in the tech world is very fast to change to newer, cheaper, and better technology, GPIB remains surprisingly entrenched in the test and measurement arena."
The problem is that the equipment on the other side also needs thumb screws.
I've played a bit with USB high-retention-force connectors (available for standard size A and size B); they help. Even better, Amphenol makes a locking type A (PDF!) connector that works with any standard type A USB cable -- but neither type is very common, although I have seen some industrial equipment advertised with the high retention connectors.
For one customer, we had to add a little machined (which means expensive!) bracket next to the connector so they could cable-tie the USB cable to the bracket.
Three decades ago, my home and lab computers required a GPIB interface to the external hard drive and certain other peripherals. As noted, the cable was expensive, inflexible, and short, and it required an equally expensive interface card. The ability to stack and daisy-chain the (bulky) connectors was an advantage. I'd say that for a consumer, USB has rendered the GPIB interface obsolete and irrelevant. Computer manufacturers seem to agree (indeed, most laptops are thinner than the connector). In a highly technical instrumentation lab where latency rules, I'll concede they may wish to continue connecting their test and measurement devices with GPIB interfaces.
While I share much of the sentiment regarding GPIB and its disadvantages, I have to disagree with the conclusion that it is ready to die.
GPIB's ace is latency. At roughly 30 times lower than Ethernet and 4 times lower than USB, GPIB still wins when speed is critical and data transfer sizes are small. This is generally the case in production testing. While 1000 microseconds of latency does not seem like much, a test sequence for a complex wireless device may have up to 20,000 measurement transfers of a few bytes each. 1000 us of latency on each transfer adds 20 seconds of dead time to the test sequence, reducing throughput and increasing test cost by as much as 20%.
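The arithmetic above can be sketched in a few lines. The figures below are the illustrative numbers from the comment (20,000 transfers, ~1 ms LAN latency, GPIB roughly 30x lower), not measurements of any particular instrument or interface:

```python
# Dead time a test sequence spends waiting on interface latency alone.
# All numbers are illustrative, taken from the discussion above.

def dead_time_seconds(transfers: int, latency_us: float) -> float:
    """Total seconds of dead time: per-transfer latency summed over all transfers."""
    return transfers * latency_us / 1_000_000

transfers = 20_000            # small measurement transfers per test sequence
lan_latency_us = 1000.0       # ~1 ms per transfer over LAN (assumed figure)
gpib_latency_us = 1000.0 / 30 # "around 30 times lower" for GPIB

print(f"LAN dead time:  {dead_time_seconds(transfers, lan_latency_us):.1f} s")
print(f"GPIB dead time: {dead_time_seconds(transfers, gpib_latency_us):.1f} s")
```

At 20,000 transfers the LAN case accumulates 20 seconds of dead time, which is where the "as much as 20%" test-cost figure comes from when the rest of the sequence takes on the order of 100 seconds.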
National Instruments has a number of papers on this subject on its website: http://www.ni.com/white-paper/3509/en/#toc2
PXI has the advantage of very low latency and the high bandwidth of PCI Express, which makes it a great choice for speed-critical testing such as production test. For those using discrete instruments who are concerned about test times, GPIB is still the way to go.
Kudos to Hewlett-Packard for developing an interface that has endured for over 40 years. Calls for its demise are somewhat premature.