The Kansas City Hyatt Regency skywalk collapse on July 17, 1981, during the Friday Tea Dance in the lobby, killed 114 people due to a structural engineering error during construction, resulting in an infinite shear force applied to the concrete on the 4th-floor walkway.
The Wikipedia article to which the above link refers says: "The engineers employed by Jack D. Gillum and Associates who had approved the final drawings were convicted by the Missouri Board of Architects, Professional Engineers, and Land Surveyors of gross negligence, misconduct and unprofessional conduct in the practice of engineering; they all lost their engineering licenses in the states of Missouri and Texas and their membership with ASCE."
Actually, the civil engineers were responsible since they didn't verify that the structure was built as designed. Someone substituted a cheaper part for what should have been used, and the engineers didn't catch it.
No infinite forces here. What actually happened was that the original design called for a 40' threaded rod, from which two separate catwalks would hang. But the folks actually building the thing balked at the impracticality of that plan, so they proposed an alternative: two 20' rods, one from the ceiling to the higher catwalk, and one from that catwalk to the lower one. They got that ECO all the way through the approval process: the designer signed off, the builder, the inspector...the problem is, hanging the 2nd catwalk off the 1st one puts twice the force on the upper catwalk's washer/nut (the weight of the upper catwalk plus the weight of the lower, plus the 2nd set of 20' rods!). Obviously a bad idea, but it almost worked -- it was strong enough to hold the catwalks, but when the revelers packed the lower catwalk, that's when it all became too much.
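The doubled-load arithmetic described above can be sketched in a few lines. This is an illustrative model only (the weights are arbitrary units, not the real walkway loads):

```python
# Illustrative model of the Hyatt Regency hanger-rod change.
# W_upper, W_lower: weights carried by the upper and lower walkways.

def original_design(W_upper, W_lower):
    """One continuous rod: each walkway's nut carries only that walkway."""
    upper_connection = W_upper          # nut under the 4th-floor walkway
    lower_connection = W_lower          # nut under the 2nd-floor walkway
    return upper_connection, lower_connection

def as_built(W_upper, W_lower):
    """Two offset rods: the lower walkway hangs FROM the upper one,
    so the upper walkway's connection carries both walkways."""
    upper_connection = W_upper + W_lower
    lower_connection = W_lower
    return upper_connection, lower_connection

print(original_design(1.0, 1.0))  # (1.0, 1.0)
print(as_built(1.0, 1.0))         # (2.0, 1.0) -- upper connection load doubles
```

With equal walkway weights, the as-built change doubles the load on the upper connection without changing anything at the lower one, which is exactly why the failure started at the 4th-floor box beams.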
@j_brooks: Yes, you did miss something, and it'll take a Georgia Tech engineer to explain it. Although @rpcy below quoted the wikipedia article, in fact we covered this in our Statics class back in 1982.
What happened is that, as @rpcy correctly stated, the original suspension called for single rods, which meant the tensile load was supported entirely on the steel rods, with no additional shear forces on the 4th floor walkway.
However, because of the offset on the 4th floor walkway, a shear force was generated between the rod coming from the roof truss and the rod connecting to the 2nd floor walkway.
Now, let's look at this new, additional shear force on the 4th floor walkway concrete: The closer the rods, the higher the shear force; and in fact as the rods move closer, this shear force goes to infinity.
No shear (original):
The interesting question about the Hyatt Regency disaster is the one that wasn't asked. People debated the diameter and thread pitch of the supporting rods, and whether they should have been lapped or continuous. What nobody asked is why anyone would build an indoor foot bridge out of concrete in the first place. Isn't that what wood is for? Light, strong, and fatigue resistant.
The Tacoma Narrows Bridge in Washington state collapsed when winds coming up the canyon excited a resonant frequency in the bridge, leading to positive feedback and ever-increasing oscillation amplitude.
I was changing a light fixture in my bathroom over the sink, and all was going well until I began connecting the new fixture. The new fixture was larger than the old one and was blocking the light from the tub, so I couldn't see the screws for the connections. I got down from the sink and instinctively flipped on the light switch. I climbed back up on the sink counter and started making the connections until I touched both wires. My wife heard 2 loud bangs. The first was when I flew across the room and hit the shower wall; the second was when I dropped and hit the tub. I now always place tape over the switches when I do electrical work at home.
Vital steps missing.
Wear appropriate PPE.
Use approved meter.
Test live voltage.
Lock out Tag out.
Test for no or minimal voltage.
Next, re-test meter on live power, to verify meter is functional.
Now safe to work.
This is per NFPA 70E.
Used in the USA; Canada has similar rules.
Oh, don't wimp out! Live life on the edge. Hook it up live! When I lived in England some electrician showed up to repair flakey electricals (I was just renting) and he worked on the 240v they have without cutting the breakers. He said they trained them to take the buzz.
I still like a little safety. And stay off the aluminum ladder while you're hooking up the lights, too. I have fallen out of a few trees, though. I make sure my wife is within earshot when I do certain work. That way I can hear about it the rest of my life. But at least I have a continuing life.
Many, many years ago, when I worked for AST Research, I developed a motherboard ASIC that was basically a "garbage can" of random logic. After the systems were in high-volume production for more than a year, there was a particular legal software title that would not "beep" correctly when some kind of event occurred, due to a subtle bug in my design. I've never held lawyers in high regard, so I thought it was kinda funny that I was able to jab a bunch of them with my screw-up.
Nothing beats the Mars mission with the metric / US unit mix-up - no loss of life, but lots of lost prestige, etc.
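That failure was a plain unit mismatch: thruster impulse data produced in pound-force seconds was consumed by software expecting newton seconds. A minimal sketch of the boundary check that would have caught it (function name and unit strings are hypothetical; the conversion factor is the exact definition):

```python
LBF_S_TO_N_S = 4.4482216152605  # 1 pound-force second in newton seconds (exact)

def impulse_in_si(value, unit):
    """Refuse to mix units silently: convert everything to N*s at the boundary."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * LBF_S_TO_N_S
    raise ValueError(f"unknown unit: {unit}")

# The same number means very different impulses in the two systems:
print(impulse_in_si(100.0, "lbf*s"))  # ~444.8 N*s
print(impulse_in_si(100.0, "N*s"))    # 100.0 N*s
```

Carrying the unit with the value, instead of assuming it, turns a silent factor-of-4.45 error into a loud exception.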
Always remember, as engineers, the world does revolve around us. We pick the coordinate system. (Stolen from somewhere, I forget where.)
Let's not forget the fiasco of the original primary mirror on the Hubble Space Telescope, which was ground to the wrong shape and had to be replaced later by shuttle astronauts.
I have also heard rumors of a satellite that was launched, and during on-orbit checks, the command to power down was sent, but the craft never responded to a later command to power up again. When the same command sequence was tried on the Engineering Model back here on Earth, the same thing happened. Somebody apparently got over-zealous in deciding which circuits responded to the power down command.
But again, that's just a rumor I've heard, on and off over the years :)
The real sad part of the Hubble screw-up is that guys from JPL visited Perkin Elmer to check the mirror with the old-fashioned "knife edge" test, but PE wouldn't let them into the lab because the facility was also used for secret spy satellite optics. PE assured them that their sexy computerized testing was far better anyway.
The fix was not to replace the screwed up primary mirror--it was too big. The solution was to map out the aberrations and grind a secondary mirror to cancel them out.
Well... You've got the case of the commercial airliner that lost power because the pilot thought he had x kg of fuel, while they actually loaded x lb... Right in the middle of the flight, the engines shut down...
I wonder why such marvelous machines wouldn't include one of those simple fuel level indicators, at least to check the fuel level while on the ground (surely it won't work during aerobatic maneuvers or high-pitch climbs).
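This was presumably Air Canada Flight 143, the "Gimli Glider," which actually glided to a dead-stick landing. The kg/lb arithmetic shows how bad the shortfall was; a quick sketch using the widely reported 22,300 kg figure (treat it as illustrative):

```python
KG_PER_LB = 0.45359237                 # exact definition of the pound in kg

required_kg = 22300.0                  # fuel the flight plan called for, in kg
loaded_kg = 22300.0 * KG_PER_LB        # the crew loaded 22,300 "units" as pounds

print(f"loaded {loaded_kg:.0f} kg of the {required_kg:.0f} kg required")
# roughly 10,115 kg on board -- less than half the fuel the flight plan assumed
```

The same number on the paperwork, read in the wrong unit, left the aircraft with under half its planned fuel.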
The early VW Beetles had that little tap you could turn to open the gallon-or-so reserve tank. No gauge at all, but turn the tap and you've got another 20 miles to find a gas station. Simplicity works too.
Yeah, but if you forgot to turn the valve back off after a refill ... Later models sort of "built in" the reserve tank with a wedge-shaped line after the "E" to show you were on the way to running out. (We used to sometimes tell new drivers the "F" stood for "Fill 'er up" and "E" stood for "Enough").
I had one of those, a '58. It was a little lever between the driver and passenger. 12 o'clock was main, between 1 and 2 was off, and 3 o'clock was reserve.
My mates found out about it and used to turn it off surreptitiously with their feet. So I ran out of power in the middle of a busy main road out of town once and did some snappy lane changes so I could pull over if necessary...this was noticed by some passing cops who pulled me over and chewed me out good and proper. With my mates sniggering away....
What about the Air France plane that went down in the Atlantic? Airbus is incredibly arrogant in that they believe they know better how to fly the plane than the pilots do. When both pitot tubes became iced over, the computers believed the aircraft was flying so slowly that it was in danger of stalling. The pilots knew that wasn't the case. But the computers kept taking control from the pilots, putting the plane into a dive to pick up airspeed until it crashed into the Atlantic. Supposedly one of the last things the pilots said before they crashed was that they couldn't control the aircraft.
Needless to say the board of inquiry had to blame someone besides Airbus. The pilots' good names were thus smeared.
Thank goodness we didn't use Airbus for the next generation of in-flight refueling tanker aircraft!
You should find one of the excellently detailed reports of the incident and study it more closely. Almost everything you state above is wrong.
The two main causes were
(1) failure on the part of the plane manufacturer and the airline to act on a known problem with the airspeed sensor design, and (2) failure to train the pilots on how to handle this known failure mode.
"Indeed, introducing yourself as an engineer at a party likely won’t earn you the admiring ooh’s and ahh’s that, say, a fireman, or U.S. Navy Seal might get."
Hey Sylvie! Speak for yourself! I'd infinitely rather hear from an enthusiastic EE who is doing fun work, at a party, rather than a fireman. Sheesh.
Okay, so EE disasters - the ones that would really make me pucker - are, for instance, a ship running aground, or colliding with another ship, because of a design fault in the control systems. Ditto, obviously, with airplanes.
Or cases of "friendly fire" caused by faulty identification hardware or software.
Take for example the disaster of that Air France flight over the Pacific, a couple of years ago. That was reportedly caused by sensor failure that gave the automatic controls the wrong instructions. Could a human pilot be trained to notice the problem? Most likely yes. But more to the point: could the control system be designed to accommodate that type of sensor failure?
These are the things that might keep EEs awake.
"Not every Airbus that lands in the water is piloted by Captain Sullenberger"
~Dan Schwartz, 2009
It was an Air France Airbus A330 over the Atlantic; and although the engines were running at full throttle when the plane hit the water, the plane was in a stall and the pilots flew it right into the drink.
According to the black box, one pilot was pulling back on the yoke and the other was (correctly) pushing it forward, to bring the nose down and restore lift.
*These* are what I consider to be "EE disasters." Not burning your finger tip on a soldering iron, which really has nothing to do with DESIGN or EE.
Anyway, it's always best to have the control system deconflict the info provided by many different sensors, so as to not have to rely so much on pilot expertise.
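One standard way to "deconflict" redundant sensors is majority or median voting, so a single bad reading gets outvoted. A minimal sketch (the airspeed numbers are made up for illustration):

```python
def median_vote(readings):
    """Return the median of redundant sensor readings; with three sensors,
    one wild value cannot drag the result away from the other two."""
    s = sorted(readings)
    return s[len(s) // 2]

# Two healthy pitot probes and one iced-over probe reading far too low:
print(median_vote([270.0, 268.0, 40.0]))  # 268.0 -- the iced probe is outvoted
```

Note that on AF447 multiple probes iced up at once, a common-mode failure that simple voting can't fix, which is why flagging disagreement between sensors matters as much as voting on them.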
I heard a different story: that Airbus is incredibly arrogant and believes they know how to fly the aircraft better than the pilots do. When the pitot tubes iced up, the computers thought they were going slower than they actually were. When the pilots reacted correctly, the autopilot kept kicking in, taking control of the aircraft from them, because their reaction was not what should be done when the pitots report speeds near stall condition. Needless to say, the computers, not the pilots, crashed the plane into the Atlantic.
And who do you think "specifies what the software must do," never mind writes the software, tests, debugs, then validates that the control system operates as intended? Or are you among those who really believe that EEs spend all day using a soldering iron?
I can safely say that in decades of EE work, the only time I use a soldering iron is for my home projects. Engineering, by definition, is about design. The construction part is often not done by the engineer at all.
An EE disaster? Back in my sophomore days, some senior students were building/fixing/something their robot in a regular empty classroom. Of course they messed up and caused a power failure in the building. Don't ask me what they were working on; I only saw the smoke, coming from two doors away from the lab in which we all cursed to insanity because of the data loss on our computers.
Luckily nobody got hurt, but there were some nasty carbonized spots on a few desks.
Those are 2 disasters in one:
1: Whoever made the electrical plans for the 5-story building (something like 200 classrooms and a few offices) sucks. A short circuit at one endpoint shouldn't kill all of the building's power.
2: Having well-equipped and protected laboratories, why in blazes were they working in a regular classroom?
Every electromechanical student is a series of weird short circuits waiting to happen.
Before I was an engineer, I worked summers for a small TV cable outfit. We had a small bucket truck with no outriggers on it. I had to take a 1/4" stranded steel wire up our cable pole and attach it, right next to a power company pole carrying ~4700 V, feeding a small town.
I had a helmet on and gloves, and the bucket controls had rubber caps. I was VERY careful on the way up. That wire was buried 10 feet deep in the ground a couple thousand feet away. Evidently I wasn't so careful on the way down. Next thing I knew I was curled up in the bottom of the bucket and could smell something like burning hair.
When I finally was brave enough to raise my hand to the controls and get down, I asked if I still had my hair or eyebrows :) What I did have were 3 or 4 pinhole cauterizations on the fingertips of one hand (the hand on the controls). My gloves had cracks in them, the bucket controls had cracks in them: the voltage jumped from the dust-covered tires to ground... a very close call :( My supervisor at the time joked I'd just gotten my batteries charged :( Besides the burning smell, all I remember is a feeling like someone hitting the back of my neck REALLY hard!
The DoD published a very good document called the "Joint Software Systems Safety Engineering Handbook". It is easy to find via Google search and is a good reference for anyone doing embedded systems development. Appendix F is titled "Lessons Learned" and includes short case studies of some absolutely stunning engineering errors such as "Operator’s Choice of Weapon Release Overridden by Software Control".
F6 (Operator's Choice ...) was pretty scary, but how about the first one listed! I think I read about this one before.
"Eleven Therac-25 therapy machines were installed, ... The
Canadian Crown (government owned) company Atomic Energy of Canada Limited (AECL)
manufactured them. ... The software control was implemented in a DEC model PDP 11 processor using a custom
executive and assembly language. A single programmer implemented virtually all of the
software. He had an uncertain level of formal education and produced very little, if any
documentation on the software. ... Between June 1985 and January 1987 there were six known accidents involving massive
radiation overdoses by the Therac-25; three of the six resulted in fatalities. The company did not
respond effectively to early reports citing the belief that the software could not be a source of
failure. Records show that software was deliberately left out of an otherwise thorough safety
analysis performed in 1983, ... After a large number
of lawsuits and extensive negative publicity, the company decided to withdraw from the medical
instrument business and concentrate on its ___main business of nuclear reactor control systems.___ ...
Great! Just let them move from killing one person at a time to endangering a multitude! They shouldn't have been allowed to do janitorial duties after what they did!
When I was a young engineer, an electrician was changing a 220 overhead power connector. He thought he'd give the new engineer a few words of wisdom. "Electricity that goes across your heart is what will kill you, so keep one hand in your pocket when you are about to cut something." To show me, he put one hand in his pocket, then cut the wires with his other, upon which a large spark shot out.
"Hey Bob, I thought you shut off circuit 5!" He yelled at his assistant. "Oh, I thought you said 3!"
He made his point a bit better than he had intended to.
Apollo 13: The oxygen tank explosion was root-caused to damage caused by the ground crew connecting the tank to the wrong supply voltage (65 V ground power) rather than the 28 V the internal thermostat switches were rated for. Apparently there were devices using the same power connectors for different voltages; they should have used different connectors or keying to prevent this.
When I was in high school, a close friend and I worked on stage lighting. For the time we had a pretty nice system--36 50-amp dimmers, with a very nice patch panel. The AC feed was 3-phase 208 V, with each phase on 4/0 cable, and an even bigger neutral (750 MCM, as I remember). So lots of current available.
My friend needed to wire a rental fixture with one of the 3-pin stage plugs, but got the ground and hot leads reversed. He clamped the fixture to one of the pipes used for stage lamps, plugged the patch cable into a 6 kW test circuit, and flipped the breaker. Because the pipe was well grounded, as was the fixture, we now had a dead short from a very low impedance source to ground. I'm pretty sure the 20-amp breaker for that circuit had its "breaking current" exceeded--this is where the current through the breaker is so high that the contacts weld together before the breaker can open. Instead, the 50-amp breaker on the test circuit opened after about 2 seconds.
Meanwhile, the huge current caused the entire building to shake at 60 Hz. I was about 200 feet away in the booth at the rear of the auditorium. I heard this very loud hum, followed by a loud "thunk". Fortunately no one was hurt, nor anything damaged (other than that one 20 amp breaker, possibly).
So no scars, but a fun story to tell:)
The topic was "disasters in electrical engineering", but I offer up this example of an avoidable "multi-disciplinary" disaster.
A simple short-circuit, an oxygen-enriched environment, and a spam-in-a-can capsule (that required tools to open -- from the outside) spelled doom for NASA's Apollo I astronauts Grissom, White, and Chaffee, in what should have been a routine training drill.
Periodically I reread the words of Gene Kranz (Apollo program's Dir. of Flight Operations) for inspiration and guidance. According to wikipedia, Kranz spoke these words on the Monday following the accident.
"The Kranz Dictum" should be required reading for all engineering students and especially engineering-MBA candidates. Please read the full text on wikipedia or elsewhere:
"Spaceflight will never tolerate carelessness, incapacity, and neglect. Somewhere, somehow, we screwed up. It could have been in design, build, or test. Whatever it was, we should have caught it. We were too gung ho about the schedule and we locked out all of the problems we saw each day in our work...
...we will never again compromise our responsibilities. Every time we walk into Mission Control we will know what we stand for.
Competent means we will never take anything for granted. We will never be found short in our knowledge and in our skills. Mission Control will be perfect. When you leave this meeting today you will go to your office and the first thing you will do there is to write 'Tough and Competent' on your blackboards. It will never be erased. Each day when you enter the room these words will remind you of the price paid by Grissom, White, and Chaffee. These words are the price of admission to the ranks of Mission Control."
Sorry, there's no happy ending to my comment. Challenger, followed by Columbia proved that we often fail to learn from our past mistakes.
I remember from the sixties that one of the earliest manned orbital flights landed miles and miles from its intended spot. It turned out to be software: an engineer had used 365 days per year for the astronomical year, not the correct 365.25 - hence the need for "leap years" in our calendars.
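The claimed bug is easy to quantify: the dropped quarter-day accumulates quickly, and at orbital speed even tiny timing errors become large position errors. An illustrative sketch (the orbital speed is just a typical low-Earth-orbit value, not a figure from the incident):

```python
# How fast the 365 vs. 365.25 discrepancy accumulates.
error_days_per_year = 365.25 - 365.0
error_seconds_per_year = error_days_per_year * 24 * 3600
print(error_seconds_per_year)        # 21600.0 -- six hours of drift per elapsed year

# What even a sliver of that costs at orbital speed:
ORBITAL_SPEED_KM_S = 7.8             # typical low-Earth-orbit speed (assumption)
print(1.0 * ORBITAL_SPEED_KM_S)      # a mere 1-second timing error is ~8 km of ground track
```

A capsule timing its retro-fire even a few seconds off its computed position would easily land "miles and miles" from the recovery zone.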
When I was younger I had a summer job at a paper mill where my dad worked. I studied electronics so I worked as an apprentice automation mechanic. I was working with one engineering student and we did random odd-jobs at the factory: everything from changing light bulbs to small automation installations.
One day we got an assignment to replace a fan on a control cabinet of a huge measurement unit that measured the thickness of the paper. We went to change the fan, and we turned off the automatic fuse of the fan system, just in case.
Apparently something was not right, because when my work pal started to unscrew the broken fan, a loud, visible spark shot out from the fan to the screwdriver and the whole measurement unit died. A few seconds after that my dad called and asked whether we had done something, because the whole lower level of that paper machine had gone dark and was operating on emergency power. With some stuttering, we told him that we might have caused some sort of short circuit.
Later we found out that the spark extravaganza had burned the main fuse, even though that should have been impossible. The whole fan-changing fiasco resulted in a few kilometers of ruined paper that had to be scrapped. Fortunately the 200k€+ measurement device was not damaged.
Being an engineer requires ongoing study. Not only do you have to catch up with advancing technology, you also have to learn the new measures that you or your team members have figured out.
Mistakes in civil and power engineering can be fatal. Mistakes in systems and software engineering can cause a lot of trouble. You would think that after 10+ years of Internet development, most cloud-based systems would be secure. What happened to Yahoo yesterday has proven that wrong.
Nowadays, technology advances faster than anyone can get their hands on it, not to mention wrap their heads around it. For various reasons, people start using a technology and building new products on it without fully understanding it. We are in a culture where we learn while doing. No doubt we have to act in order to put ourselves into a better learning environment, yet proper measures should be taken. Knowledge from experienced engineers is definitely one of the key ingredients for keeping yourself out of trouble.
I was once interviewing a new EE at our company. The interview was almost over and as I walked him back to Personnel we stopped by the Engineering Lab to show him around. I told him to keep his hands in his pockets as we had lethal voltages out in the open. He asked what I meant by lethal voltages? I walked over to one of our products that was under development and pointed to the large buss bars saying "Touch this and you Die!" He retreated immediately from the Lab and walked out of the building never to be heard from again. Up to this point I had thought he was a bright Engineer worthy of a very good job offer. Boy, was I wrong...and he had a degree in Power Engineering!
I managed to destroy a Makita drill in short order by inserting a power pack in backwards. The smoke was pretty impressive. I offered to pay for the repairs. Later my friend comes back to me saying they don't make that drill anymore because this was a common problem. I simply bought my friend a new drill. This isn't the first time I've fried something by getting the terminals backwards.
If there is any way to insert a battery in backwards, it will be done.
I started as an electronics tech in the US Coast Guard and have a few stories from that. One that somewhat haunts me to this day is a call I had to fix a 'broken radar' on a 44 foot rescue boat at the Cape Disappointment, WA small boat school. This is where they teach guys and gals to take the small boats out when everyone else is coming in - the training ground is the Columbia River bar. The boats always go out in pairs in case something goes wrong.
I showed up, and the on-site senior electronics tech was convinced the antenna was shorted - because it was a 50-ohm antenna and the ohmmeter showed no resistance. While I knew better (impedance vs. resistance), we swapped the antenna. When that didn't work, the second boat got tired of waiting and went out in a brand-new 41-foot harbour boat. A freak wave caught them - some never came back.
I figured out the radar was misconnected - nothing wrong with it. Wish I'd been faster.
Google UTB 41332 for more details.
I was on a team rushing to finish a piece of equipment the night before a customer demonstration. After working through the night wiring, assembling and checking, the moment of truth came. We plugged in the device and flipped the power switch. What ensued was the equivalent of a DC 4th of July celebration all taking place within the space of about 8 cubic feet. But instead of oohs & ahhhhs, there were different words and sounds uttered. It turns out that the particular power strip chosen for the occasion unfortunately was homemade and had its line & neutral reversed internally, and since it was just a demo, it was decided that we would forgo the usual formality of an isolation transformer in the interest of expediency. Fortunately, we had 2 of everything and that night we needed it - except for the homemade power strip.
Back in high school, working on a balky scoreboard in the gym, I asked if the power was off and was told yes. When I reached in to pull a fuse to check it, I got zapped. I distinctly remember my body's reaction: my arm pulled back with the fuse still in my hand and flung it across the gym without me even thinking about it! Didn't even fall off the ladder!
Oh, many decades ago,
working on a project where we needed TTL voltages, 5 V, but for some reason we thought it a good idea to distribute 5 V around.
QED: some big kW power supplies, wired in parallel, with big copper bars connecting to the rack.
You want to see what happens when you leave a spanner across the bars when the power is turned on...
Wasn't me... really...
But a room sprayed in real copper is very impressive.
Many years ago I was working on a military contract for a portable test set and was responsible for selecting a battery to power the set. I selected an early sealed gel-cell battery, largely because of its low-temperature performance. The company decided to produce a commercial equivalent, but due to cost constraints used a vented-cell battery instead (fortunately not my decision). After transporting the commercial set across the country via air to a hangar, the test set exploded from the hydrogen and oxygen liberated during charging, hitting an air conditioner at the top of the hangar. No one was hurt, but that was the end of that project. I still am wary of the lithium batteries in my laptop.
At Norway's only technical university 30+ years ago we did not choose Telecom vs. Computers vs. Power Distribution until we had had a stab at all of them for a few years. After some "interesting" experiments in the High Voltage lab I decided 5 V was enough for me.
However, how was I to know that the 5V@300A PSUs needed for some quite advanced telecom/computing gear in the industry ca. 1980 also make for an interesting day in the lab? ECL and Schottky - not LS - get hot as well.
So 5V@30A and 74LS only was then my new limit for what I would touch - until I found out what happens to tantalum caps when connected the wrong way round! I have been an advocate of low-power CMOS and ceramic caps ever after!
Waaaay back when i was a teenager & messing around with electronics in our basement, i built a nifty little box. It had an AC plug on one end, a switch and a fuse (the screw-in type) on the box, and two big colored alligator clips coming out of it. Yep - i could hook 110 to wherever i chose, & switch it on and off "safely!" When I think of the number of times that i could have fried myself in that little basement room with that little box of torture, it just makes me amazed that i'm still alive! To top it off, my dad was an electrician - yes, he taught me well, but an enthusiastic teenager playing with electricity is not well-known for caution.
Stuxnet. We all agree it was sabotage, but the centrifuge designers were "leading with their chins." Even without the possibility of malice, any control system, especially one with computer control, should have an independent over-range safety shutoff. In this case one needs a completely independent tachometer that would stop the centrifuge if the speed exceeded a preset limit.
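The independent shutoff described above is deliberately dumb: its own tachometer, a hard limit, and no software path for the main controller (or malware living in it) to override the trip. A minimal sketch of that watchdog logic (class name is hypothetical; the frequencies echo widely reported Stuxnet figures and are illustrative):

```python
class OverspeedTrip:
    """Independent overspeed protection: reads its own tachometer and
    latches a trip once the limit is exceeded. Nothing here un-trips it,
    mirroring a hardware trip that needs a manual reset."""

    def __init__(self, limit_hz):
        self.limit_hz = limit_hz
        self.tripped = False

    def check(self, measured_hz):
        if measured_hz > self.limit_hz:
            self.tripped = True          # latch: stays tripped
        return not self.tripped          # False => cut drive power

trip = OverspeedTrip(limit_hz=1100)
print(trip.check(1064))   # True  -- normal speed, power stays on
print(trip.check(1410))   # False -- overspeed, trip latches
print(trip.check(1064))   # False -- still latched even after speed drops
```

The point is architectural: because the trip uses its own sensor and its own latch, compromising the main controller's reported speed (as Stuxnet did) doesn't defeat it.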
Many years ago, a young technician was working on a Racal SSB transmitter (all valve). He had an original set of spares. He decided to replace the main HT electrolytic capacitor, a rather large device, screw terminals etc. 450V rating. Turned the power on and the capacitor exploded with sufficient force as to coat the ceiling. Needless to say that the capacitor had never been powered up since manufacture and was in dire need of re-forming. It is a wonder that the floor under the technician was not coated as well.
I was once checking out a CCFL inverter, and after some tests I forgot to turn off the machine. So I grabbed the inverter, because I wanted to disconnect it, and suddenly I sensed a BBQ-like smell... which turned out to be my thumb burning on the HV connector on the PCA.
It didn't hurt at all, but it left a funny hole in my finger which took almost 2 weeks to disappear.
Sorry, no pics...
This reminds me of when I was working on a project which involved a relatively high power piezo-electric transducer. One suitable for the job used a much lower operating frequency and higher power than what the company itself had ever produced. My request to purchase a suitable transducer was denied and the VP of engineering decided we could make one of our in house units work by mechanically loading the piezo ring to lower the resonant frequency.
His plan was to machine a metal ring which would be heated and pressed over the cooled (approximately 3" diameter, 2" high, 1/4" thick) piezoelectric element, creating a tight fit at a common temperature. The metal ring was in the oven, and I had obtained some dry ice in which I was bathing the ceramic ring. I used some handy needle-nose pliers to pick up chunks of dry ice and place them in intimate contact, both inside and outside the ring.
Focused on the task at hand, I neglected to think about the ceramic ring contracting from the cold and the resulting voltage developing across the metalized electrodes on the inner and outer surfaces of the ring. Lo and behold, I managed to get my hands across that voltage, resulting in my arms flailing out to the side, losing the needle-nose pliers, and almost hitting my coworker, who was observing. I instantly realized what had happened, and luckily the shock was short-lived, coming from a DC source.
The experiment did lower the resonant frequency but killed the efficiency, resulting in no net gain. We ended up buying the proper transducer when all was said and done.
At my first job after college my boss/mentor had spent many years in medical electronics before moving to our oceanographic firm. One time when I was upset that management wouldn't listen to reason he told me that as an engineer many times you will see disaster coming, management won't listen to you, and people will die. You just have to make sure you have done your best and get over it.
Years ago I was building cryogenic systems that involved high-current circuitry. I had a customer's rep on-site observing a test, and he asked me what was the worst thing that might happen. Nonchalantly I told him 'it might blow up.' 'What should I do?' he replied. 'Run,' I said.
Of course the __mmed thing DID blow up, so after I got done dumping x liters of LHe and shutting down all power I looked around the (then) cloudy and smelly lab and nobody was there.
He came back into the building about 10 minutes later and said 'Well, you told me to run.'
Moral: don't answer the question 'What's the worst thing that can happen.' Because it will.
I was recently working on a medical infusion pump project for one of the bigger infusion pump companies (Baxter). The pump was ordered recalled and approximately 215,000 pumps were destroyed. There were deaths and injuries associated with this and other pumps. (google/bing/yahoo search for: "baxter colleague recall")
Midway through the project the FDA came out with new sets of rules on electrical/mechanical/software/human factors engineering and documentation. I had entered a low-priority bug in the bug tracking system regarding the GUI/human factors aspects. This bug was entered before the FDA directives but never acted upon until after the directive. Some of my engineering co-workers saw the FDA directives as "big government/Big Brother"; I saw this as an opportunity to learn from others' mistakes.
This is a story about having a good engineering process, the dangers of trying to get to market too quickly, and the value of people with a good attitude about what good engineering is.
John - I appreciate how you view that. I wish more medical equipment companies did. I took my EE and went into 'clinical engineering', dealing with how all this works in the hospital. I was on the end-user side of that recall.
Not a deadly failure, but more a minor nuisance. A co-worker had a flat tire at work. It was a large vehicle with big tires, a Durango SUV. I had an air compressor that I carried in my car and I let him use it. We hadn't quite fully inflated the tire when the compressor suddenly died. Closer inspection of the compressor revealed a label on its bottom surface which stated that the compressor should not be used for more than 10 minutes at a time. Okay, so perhaps we had used it beyond 10 minutes. Now I felt silly, because I had never really read the compressor's instruction manual. After all, it's just an air compressor and all I wanted to do was inflate a tire--do you really need to read the manual? ;) So I went back and read the manual, plus all the verbiage on the packaging. The 10-minute warning was nowhere to be found! Now I felt somewhat vindicated. Why hide a warning label like that on the underside of the compressor, where no one will ever see it unless they are scratching their noggins and inspecting the air compressor in detail because it has failed?
Caveat emptor! Don't buy an air compressor with a 10 minute warning label on it.
At Georgia Tech in the early 1980's, the head of system protection for the Southern Company (Georgia Power, Alabama Power and one of the Florida power companies), Clayton Griffin, PE, taught the undergrad & graduate system protection, coordination and relaying courses. We had just completed the graduate section on the sizing of distribution ground fault protection, including discussing why (at the time) monitoring for high-impedance ground faults was difficult, when, in south Georgia, three kids were killed by a downed line in their yard over a weekend.
That Monday in class, Griffin showed up, but he was as white as a ghost, visibly shaken.
For those of you who are electronics engineers: in the USA, NFPA 70E is about electrical safety. Three-phase power, if shorted phase to phase, can develop an arc flash that is fatal. Power engineers are trained to be aware.
The video shows arc flash events with mannequins.
NFPA 70E has many requirements, such as eliminating live work. After lockout/tagout, phase conductors must be tested with an approved meter using live-dead-live testing. NFPA 70E is very serious material. No laughing matter.
About Apollo 13. I did some reading up on what went wrong after it happened. There were two fundamental problems: being penny-wise but dollar-foolish, and poor documentation.

The tank that exploded was originally installed on an earlier Apollo flight. There was trouble pressurizing it during the launch sequence, so it was removed and replaced by a spare. Instead of tossing it, it was refurbished and approved for use on a later flight. When it was used on Apollo 13, again they had problems pressurizing it. But this time, instead of replacing it, they used an alternate approved procedure. It was approved, but it had never been used before.

Remember the Apollo 1 fire? The command module was extensively redesigned as a result of the fire to improve safety. As part of that redesign, the in-capsule DC bus voltage was lowered, while the gantry bus retained the original higher voltage. The alternate procedure included powering the tank from the gantry bus rather than the in-capsule bus, because it had been written for the earlier design, when the bus voltages were the same, and was inadvertently not updated when the capsule was redesigned. The higher voltage was thought to have damaged the insulation on the wiring for the stirring motor inside the tank. It was when one of the astronauts turned on this stirring motor that the tank exploded.
Back in the early days of advanced CMOS processes (74ACxxx), Fairchild hadn't quite worked out the details of parasitic SCR latchup. I had one circuit using a 74AC04 inverter that exploded on me one day. Seems a software engineer wearing a rayon shirt and no ESD strap accidentally rubbed up against the card, inducing enough voltage into an input of that IC to cause its parasitic SCR to turn on.
When I looked at the package, the top of the epoxy had been blown out, there were bond wires hanging in air, and no silicon anywhere. It had all vaporized. I called up Fairchild and gave them the component to analyze. I put a socket in that spot and put another inverter in there. From time to time I had to continue to replace them, as they subsequently also exploded.
Need I say any more?
I'm told by relatives that this could be a picture of me as I did the exact same thing at around the same age only with a fork.
As the story has been told to me by countless relatives, I was thrown backwards across the room at around 100 MPH before anyone could even yell out "No, don't do it!"
I guess I just had to find out why that magic box in the wall made the vacuum cleaner roar so loudly.
Needless to say, my hair is now curlier than it was before my stunt.
I think this is how I got into high power RF amplifier design, 120 volts just wasn't enough for me after this.
In the late 80's, the company I worked for was shipping systems internationally, so we had a fridge-sized high-current 120V/60Hz to 240V/50Hz converter. One of the techs was testing a new system, when out of the corner of my eye I saw the lab illuminated by an intense actinic blue flash and heard a loud Mrrrzzzphhssttttt! There happened to be a length of 2x4 handy (from a shipping crate being built) which I grabbed, so as to, should it be necessary, separate the tech from the burning (?) converter. No one was injured, but the converter was a charred wreck and was never repaired. We found a better way to test the systems.
I had a quick soldering job holding up progress on my project. Unfortunately, the only free soldering station in the lab was faulty. The light on the base unit was lit, indicating that it was powered, but the iron did not heat up. So, I fiddled around with it until it worked. Pleased with myself, I waited for it to heat up. I was just about to start my soldering job when a technician turned up, helpfully telling me that the soldering station did not work. I said it did. He said no it didn't, and I guess he wanted to prove his point rather theatrically. I only had time to start my next sentence with "but..." while he snatched the soldering iron out of its holder and, with a smug grin on his face, pressed the business end of the iron into his palm and closed his fingers on it. There was a searing sound, a bit of smoke rising, closely followed by the unmistakable smell of burning flesh. I think he learnt a valuable lesson in safety.
First, about engineers soldering and such: some engineers actually do work with reality instead of passing that all off to others. Next, it is entirely possible to work on live circuits and not be injured, BUT it takes more concentration than many people are able to have. Much more than the millisecond MTV generation could ever muster.
But the biggest engineering disaster was not an exciting explosion, although that option did exist. We built some fairly complex boxes to provide power for two film cameras and four lights, for recording crash tests inside a vehicle. Two 20-amp camera loads and four 15-amp lamp loads, all from a linear regulated supply powered by three 12-volt 7.5 Ah gel cells. Everything worked, almost, except that the contract engineer who designed it overlooked the voltage drop at 100 amps. So instead of providing the specified 28 volts regulated at 100 amps for 15 seconds, it would drop down to 25 or 26 volts, and the cameras would not come up to speed correctly. The fault was found to be just a bit too much voltage drop in a whole lot of different places, including too many 15-amp connectors. So the customer would not pay, and we had a whole load of very expensive anchor blocks. THAT was a disaster!
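The arithmetic behind that failure is unforgiving: at 100 A, every milliohm in the current path costs 0.1 V. A minimal sketch of the budget, using assumed illustrative resistances rather than measurements from the actual boxes:

```python
# Back-of-envelope check of why a 28 V / 100 A supply can sag to ~25 V:
# V_drop = I * R, so at 100 A each milliohm costs 0.1 V. The resistance
# values below are ASSUMED illustrative numbers, not measurements.

current = 100.0  # amps delivered to the cameras and lamps

# (label, series resistance in ohms) for each element in the path
path = [
    ("battery internal (3x gel cells)", 0.010),
    ("wiring harness",                  0.008),
    ("15 A connectors (several)",       0.009),
    ("regulator pass-element losses",   0.003),
]

total_drop = sum(current * r for _, r in path)
for name, r in path:
    print(f"{name:35s} {current * r:5.2f} V")
print(f"total drop at {current:.0f} A: {total_drop:.1f} V")
```

Just 30 milliohms scattered through batteries, wiring, and connectors is enough to turn 28 V into roughly 25 V at the load, which matches the sag described above.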
I just came across this EE Life article, so here is my entry.
Years ago, my parents went on a vacation and made good friends with a gentleman by the name of Dick Mendenhall. Dr. Mendenhall was a giant in the early years of vacuum tubes (valves) at Bell Labs, and he had the scars to prove it. He had lived through at least one accident involving high voltages and metal lab floors. He was missing toes on both feet, and the feet themselves were deformed, as a result of those accidents.
Some of you might remember the very powerful AM radio stations across the border in Mexico. Dr. Mendenhall designed the power tubes that ended up in some of those 500-kilowatt radio transmitters back in the 1930s. They were big, 8-foot-long vacuum tubes, with non-thoriated filaments that dissipated over 15 kW of filament power alone, per tube.
Dr. Mendenhall's daughter or granddaughter ran Jim Henson's Muppet Lab. FYI.