The power failure started in the afternoon - my buddy and I came back from a late lunch to a dark house and a beeping UPS. So they wouldn't have seen the blackout spread, but as night fell over North America they would have noticed a lot of missing cities. Some parts of Toronto didn't get power back for more than a week.
I understand the huge impact of this kind of massive power failure during winter (especially during a snowstorm) there in Canada and North America. It is a very rare event, and people are usually not well prepared for scenarios like this.
On a lighter note... I live in the southern part of India. Winter is very pleasant here, and we can survive without power for a few days... and moreover, we are used to experiencing regular power cuts :) (due to the supply-demand gap).
Here in New England, we often lose power during the winter due to snow, ice, and tree damage to the power lines. Even during the summer there is the all-too-frequent car accident taking out a power pole. One thing I noticed a number of years ago was that the NE power companies did not seem to be doing line maintenance: trimming back overhanging branches and cutting down failing trees. As a result, we had many more outages. It seemed they would rather wait for some nasty weather to do the trimming for them, only to have to send their poor crews out in the ugliest conditions. Recently, they came to their senses and started trimming and cleaning up the overhanging branches, and now we enjoy much better, more reliable power! This was a business decision, not operator error (except of course for the cars hitting poles!), that contributed to both the number and duration of power outages.
Excellent observation. Monitoring from space is a possibility. With the right imaging technology, the entire US or a magnified portion of it (e.g., the Northeast US and Canada) could have been monitored, and the lights could have been seen going out in sequence. Food for thought for power companies, Homeland Security, etc.
It's comments like this, "What should have been a manageable local blackout cascaded into widespread chaos on the electric grid," that make me a) glad I have a generator and b) wish I were totally off the grid.
How real-time can this be if it takes 10 years to surface? This would have been a great story in 2003, but 2013? It's clearly product placement, and it doesn't even pass the red-face test.
I find the excerpt above especially ironic: that it may "perhaps" avoid a blackout. They had monitoring that "perhaps" may have avoided it in 2003, and look how that worked out. Frankly, monitoring is a necessary condition, but not a sufficient one. The real problem is that we have been running the grid open-loop forever. Phasor Measurement Units are finally being deployed to where we can actually see what's really going on, but measurement isn't closed-loop control -- just the first step on the way.
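For readers curious what a Phasor Measurement Unit actually computes: at its core it is a synchronized single-frequency DFT that turns one cycle of waveform samples into a phasor (magnitude and phase angle). Here is a minimal sketch of that math in plain Python; the function name and parameters are mine for illustration, not any vendor's API:

```python
import math

def estimate_phasor(samples, fs, f0=60.0):
    """Estimate the phasor (RMS magnitude, phase angle in degrees) of a
    waveform at nominal frequency f0 by correlating one cycle of samples
    with cosine/sine references -- the single-frequency DFT that real
    PMUs refine with windowing and GPS time-stamping."""
    n = int(round(fs / f0))  # samples in one nominal cycle
    re = im = 0.0
    for k in range(n):
        theta = 2 * math.pi * f0 * k / fs
        re += samples[k] * math.cos(theta)
        im -= samples[k] * math.sin(theta)
    re *= 2.0 / n
    im *= 2.0 / n
    mag = math.hypot(re, im) / math.sqrt(2)  # RMS magnitude
    ang = math.degrees(math.atan2(im, re))   # phase relative to the reference
    return mag, ang
```

Real PMUs time-stamp these phasors against GPS so that angles measured hundreds of miles apart can be compared; that angle difference across the grid is exactly the "what's really going on" visibility that was missing in 2003.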
Of course this is a little like the orchestra on the Titanic -- doing what we know how to do instead of something that can actually change the outcome. Our entire load base has moved to direct current, but we are still grappling with how to monitor (and maybe someday close the loop on control in) the AC domain. When do we move on to the inevitable DC grid and the integrated energy storage that we really need?
@Some Guy: the 2003 blackout was the first time Genscape had this system in place. There have been many more blackouts since, of course, and the point here is not that the technology exists, but who chooses to use it and to work with companies like Genscape to refine the technology into a "smart" system geared towards prediction and prevention. I have not seen anyone do this yet.
Companies like Genscape have the tools, but if power companies choose not to use them or to expand upon their effectiveness, then we will continue to have blackouts.
While going through the detailed report published on EDN, I noticed that it is mentioned:
" Operators were unaware of the need to re-distribute power after overloaded transmission lines hit unpruned foliage."
So, more than the "software bug" that is cited as the root cause of the failure, the incident looks to me like a case of improper documentation or training for the users of the software.
Hello Sanjib.A: I believe you are correct. Training of the operators is paramount if utility companies are to minimize the effects of a substation failure or some other event that can trigger a "domino effect" and a blackout. The same goes for nuclear power plant operators.
In my vast experience working on quality problems, if you are blaming the operator, the training, or the documentation, you have failed to understand the problem or to take corrective action seriously. It's usually a cover-up for inadequate procedures or systems. If you have designed the procedures and systems correctly, they are resilient to non-malicious operator actions.
Blaming the operator, documentation or training, especially first and only, is the hallmark of an immature quality & reliability approach. It's a cop-out to fix the blame instead of fix the problem.
This is so very true in so many areas. I can't count the number of times I've dealt with programmers and web designers who couldn't wrap their heads around the fact that the end user shouldn't have to conform to their specific and peculiar way of doing things. When the majority of people find an issue in your system, it doesn't matter if you think it is correct... it isn't.
Like most aircraft accidents, and like Three Mile Island, it was an accumulation of a bunch of "little" errors and a loss of situational awareness by the operators that led to it. Behind it was also a poorly managed energy provider -- what they refer to in 8D problem solving as the root cause of the root cause.
To me the big issue going forward isn't reliability as much as security. With the move toward automated metering and control the chance of a 'hack' that tricks the system into thinking something is going wrong seems to be a much more important potential root cause of future failures...
Ah yes, DrFPGA: with more sophisticated technology come more creative hackers who love the challenge of defeating the security of a system. This will surely be one of the most daunting tasks we face as we enter this next phase of a smarter grid.
"Genscape's proprietary power monitors ... detected the blackout cascade with the loss of Homer City."
There wouldn't have been a certain Mr Simpson involved in this would there??
On a more serious note, I remember a blackout we had once in Zimbabwe that affected most of the country, and I think parts of the surrounding countries. The whole grid went unstable and the generators were loaded down till they almost stopped. I remember the fluorescent lights flashing slower and slower until they got down to around one flash a second, and I knew something was seriously wrong. I think it took over a day to get it all back on again.
@David Ashton: the fluorescent lights blinking slower and slower are the reason Genscape has its frequency monitoring sensor. That is an early indication that something bad is happening on the grid: the 50 or 60 Hz line frequency usually shows a sharp, brief drop when a substation or another power node goes offline.
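To illustrate the idea (this is not Genscape's actual method, just a sketch of the principle), grid frequency can be roughly estimated by counting zero crossings of the voltage waveform, and a sudden deviation from nominal flagged as an early warning:

```python
import math

def grid_frequency(samples, fs):
    """Rough grid-frequency estimate from positive-going zero crossings:
    count elapsed cycles between the first and last crossing and divide
    by the elapsed time. Real monitors use far more robust estimators;
    this just shows the idea."""
    crossings = [k for k in range(1, len(samples))
                 if samples[k - 1] < 0 <= samples[k]]
    if len(crossings) < 2:
        return None  # not enough signal to estimate
    cycles = len(crossings) - 1
    seconds = (crossings[-1] - crossings[0]) / fs
    return cycles / seconds

def frequency_alarm(freq_hz, nominal=60.0, tolerance=0.5):
    """Flag a reading outside nominal +/- tolerance Hz -- the kind of
    sag seen when a large generator or substation drops off the grid.
    The tolerance value here is illustrative, not a standard setting."""
    return abs(freq_hz - nominal) > tolerance
```

A grid sagging toward one fluorescent flash per second, as described above, would have tripped an alarm like this long before the lights told the story.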
@Steve... the silly thing is that I used to work for the precursor of the Zimbabwe Electricity Supply Authority (years before this happened). I ran a small design lab producing test gear, indicators, etc. One of my projects was a synthesised signal generator that supplied 240 VAC at 45-55 Hz, selectable on thumbwheels in 0.1 Hz steps. It was for testing under- and over-frequency relays which (I was told) were to trip a feeder if the frequency went out of spec. Guess they didn't work any more, or something...
Correction: it wasn't caused by a software bug. Rather, something a lot more important. But the fish in the sea doesn't know it's a water world. So...
In 1984 I worked at AEP in NYC, in a small group developing hardware and software infrastructure for achieving real-time monitoring of, and ultimately controlled recovery from, faults wherever they occur. It was ground-breaking, developing DSP-based metrics using symmetrical-components modeling of the 3-phase system. We had a working scaled-down physical model of a grid in a large room, complete with reconfigurability, to provide a test bed for the detection, monitoring, and recovery system under development. It was at a stage where the remote monitoring minicomputers in the field were accurately reporting fault types and locations as they actually occurred, along with simulated reaction commands.
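For context on the symmetrical-components modeling mentioned above: Fortescue's transform decomposes any three unbalanced phasors into zero-, positive-, and negative-sequence sets, and different fault types leave characteristic sequence signatures (a balanced system, for instance, has only a positive-sequence component). A minimal sketch of the transform itself:

```python
import cmath
import math

# Fortescue's 120-degree rotation operator "a"
A = cmath.exp(2j * math.pi / 3)

def symmetrical_components(va, vb, vc):
    """Decompose three phase phasors (complex numbers) into their
    zero-, positive-, and negative-sequence components. Negative- or
    zero-sequence content appearing where there was none is a classic
    unbalanced-fault signature."""
    v0 = (va + vb + vc) / 3              # zero sequence
    v1 = (va + A * vb + A * A * vc) / 3  # positive sequence
    v2 = (va + A * A * vb + A * vc) / 3  # negative sequence
    return v0, v1, v2
```

Feeding a perfectly balanced set (phases 120 degrees apart) yields only a positive-sequence term, so any v0 or v2 that shows up in field measurements points directly at a fault type and helps localize it, which is presumably what made the minicomputers' fault-type reporting work.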
What happened? Nearest I can tell, politics and a lack of leadership by higher-ups, above the actual project level. What amazes me is that these events happen and nobody realizes that our vulnerability to them is a product of the above-mentioned myopia at the highest corporate levels, where only the status quo and the bottom line matter. Such a new system as was in development would require rethinking, and new products displacing old ones as well. This is an obvious existence proof that it's been an industry without sufficient competition or pressure for innovation driving it. It's the Microsoft syndrome.
Well, folks, we got what we paid the utilities all those years for. Happy now?
@nanonical: thanks for your insight and commentary. Unfortunately, I know all about mismanagement at a utility. I live on Long Island in NY; we have LIPA; just ask Governor Cuomo or any of us rate-payers about their management.
Yes, I am happy, relatively speaking. In my 50-odd years here in America the power has been pretty good. The utility strategy and tactics we've tangled with over the last half century were not perfect, and we SHOULD expect better. Our forebears were not idiots, nor were their investors, albeit some were, and others naive.
Your group's technical achievement is impressive and laudable. It's a shame that the "best and the brightest" at higher levels were unable to overcome their risk aversion and return-on-investment mindsets in time to mitigate an inevitable disaster.
It is likely that significantly increased competition in the power market had/has other "sub-optimal" consequences none of us could/can foresee.
1) re: Happiness. Good point. I'd still rather live in the USA. Not sure of, for example, European performance. Jury is out, to me. You say "pretty good" ... against what? Consider the wealth extracted in terms of utilities' stock mkt gains and dividends, our unexploited tech. leadership & capabilities and *hidden* lack of investment leading to a condition where the effects *are yet* to be seen. Then come back with your 2nd assessment. A true balance sheet to include all liabilities would show, I believe, an absolutely criminal incompetence & lack of quality. Why? Wrong incentives.
2) re: Increased competition. I think you mean that it would cause corner-cutting and even worse outcomes to some extent. Right. That's why you'd have a level of regulation forbidding specific practices *and* deterministic bad outcomes (to subvert the schemers), promulgated by the independent experts, together with some true market-based competition. Have penalties like those for Brazil's banks: principals have their personal wealth and freedom at risk. Guess what? I believe you'd see a difference. Nothing like skin in the game.