From the “ignore all the severe weather outbreaks in the prior century, we’ve got something to prove” department. Imagine if climate-skeptic scientists released a 13-year study on ANYTHING related to weather or climate – they’d be excoriated for not having a long enough sample. Note that this is not a peer-reviewed study, just something published on their website. The fact that it cites a 2013 White House report suggests a political origin rather than a scientific one.
Berkeley Lab releases most comprehensive analysis of electricity reliability trends
New report finds that increasingly severe weather is linked to longer lasting power outages
From: DOE/LAWRENCE BERKELEY NATIONAL LABORATORY
In the most comprehensive analysis of electricity reliability trends in the United States, researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and Stanford University have found that, while, on average, the frequency of power outages has not changed in recent years, the total number of minutes customers are without power each year has been increasing over time.
The researchers pinpointed what utilities and their regulators refer to as “major events,” or events generally related to severe weather, as the principal driver of this trend. “This finding suggests that increasingly severe weather events are linked to a 5-10% increase in the total number of minutes customers are without power each year,” said lead author Peter Larsen, a Berkeley Lab research scientist and Stanford PhD candidate.

The researchers analyzed reports for a large cross-section of utilities representing nearly 70 percent of U.S. electricity customers, spanning 13 years from 2000 to 2012. Their report, “Assessing Changes in the Reliability of the U.S. Electric Power System,” is available at: http://emp.lbl.gov/publications/assessing-changes-reliabi
Although a 2013 White House report noted that major power outages and severe weather events are increasing, this study is the first of its kind to use econometric analysis techniques to statistically correlate these events with electricity reliability. Most studies of reliability have relied on information that reflects only the largest power outages. Yet, over the course of any given year, the largest events typically account for no more than 10 percent of all power outages. This study, by relying on information for all power outages, both large and small, conclusively identifies a trend that is linked directly to these larger events.
One surprise was that the study did not find a consistent link between reliability and utility transmission and distribution (T&D) expenditures. “We expected to find that increased spending on T&D would lead to improved reliability, but it is possible that a combination of proactive versus reactive utility maintenance policies may be offsetting this effect on reliability,” Larsen said. He anticipates that future research will be able to sort this out through more detailed analysis of utility spending practices.
###
Co-author and Berkeley Lab Staff Scientist Joseph Eto said: “We hope the findings from the study will provide a more solid basis upon which to ground future private and public decisions on the long-term reliability of the U.S. electric power system.”
This work was funded by the Office of Electricity Delivery and Energy Reliability, National Electricity Delivery Division of the U.S. Department of Energy. Other co-authors were Kristina H. LaCommare of Berkeley Lab and James L. Sweeney of Stanford.
Assessing Changes in the Reliability of the U.S. Electric Power System
Abstract
Recent catastrophic weather events, existing and prospective federal and state policies, and growing investments in smart grid technologies have drawn renewed attention to the reliability of the U.S. electric power system. Whether electricity reliability is getting better or worse as a result of these or other factors has become a material issue for public and private decisions affecting the U.S. electric power system.
This study examines the statistical relationship between annual changes in electricity reliability reported by a large cross‐section of U.S. electricity distribution utilities over a period of 13 years, and a broad set of potential explanatory variables including various measures of weather and utility characteristics.
We find statistically significant correlations between the average number of power interruptions experienced annually by a customer and a number of explanatory variables including wind speed, precipitation, lightning strikes, and the number of customers per line mile. We also find statistically significant correlations between the average total duration of power interruptions experienced annually by a customer and wind speed, precipitation, cooling degree‐days, and the percentage share of underground transmission and distribution lines. In addition, we find a statistically significant trend in the duration of power interruptions over time—especially when major events are included. This finding suggests that increased severity of major events over time has been the principal contributor to the observed trend.
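For readers unfamiliar with the method, the kind of correlation the abstract reports can be sketched with ordinary least squares on synthetic data (every number and variable name below is invented for illustration; nothing here is taken from the study):

```python
import numpy as np

# Synthetic panel: annual outage duration (minutes per customer) for 200
# utility-years, driven by wind speed plus a time trend plus noise.
rng = np.random.default_rng(0)
n = 200
wind = rng.uniform(5, 25, n)      # avg wind speed, mph (invented)
year = rng.integers(0, 13, n)     # years since 2000
minutes = 50 + 4.0 * wind + 2.0 * year + rng.normal(0, 10, n)

# Ordinary least squares via lstsq -- the core of what an econometric
# package does for a linear model: intercept, wind, and trend terms.
X = np.column_stack([np.ones(n), wind, year])
coef, *_ = np.linalg.lstsq(X, minutes, rcond=None)
print(coef)  # recovers roughly [50, 4, 2]
```

The study’s actual models control for many more variables; this only illustrates how a weather variable and a time trend can each show up as statistically significant contributors to outage minutes.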
UPDATE: WUWT reader Eric adds this in comments:
Having several friends working in the power industry they have repeatedly said that the biggest single reason for the length of outages has to do with utilities reducing the number of personnel available to respond to outages. This has been in response to investor pressures. Now outages which would have lasted a few hours 20 years ago are stretched into days.
I work for an electric utility and my storm assignment is Damage Assessor and Outside crew Coordinator.
Couple of things here.
1) Am I understanding the Abstract correctly, that what they were measuring is total customer outage minutes? If so, you could be maintaining or improving restoration times yet still see that number increase due to an increase in the number of customers.
2) Outage restoration increasing due to fewer resources. While it is true that the number of line crews a utility has on hand is decreasing – we have gone from 68 down to 42 – the impact is not as great as some here are saying. Utilities have outside crew resources available to call on, and for bigger storm events we do so. During the last big winter storm we had, I was running 6 crews up from California. Our company sent three crews to New England for the winter storm a couple of years ago and for Sandy recovery efforts.
3) Utility companies do not perform failure or root cause analysis following storms. They don’t have the time. The overriding objective is to get services restored, not to preserve evidence and evaluate why a particular pole failed. It is possible that pole failures are increasing because poles are increasingly overloaded as communications companies push to build out faster networks in a very competitive industry. (See Malibu Canyon fire.) It is notable that this study was a crunching of numbers and not a real root cause analysis effort.
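On point 1: the standard per-customer reliability metric (SAIDI, per IEEE 1366) divides total customer-minutes of interruption by the number of customers served, so customer growth alone does not inflate it, whereas a raw customer-minutes total does grow with the customer base. A toy illustration with made-up numbers:

```python
# Made-up numbers: identical restoration performance, growing customer base.
years = {
    2000: {"customers": 1_000_000, "outage_customer_minutes": 120_000_000},
    2012: {"customers": 1_300_000, "outage_customer_minutes": 156_000_000},
}

for yr, d in years.items():
    saidi = d["outage_customer_minutes"] / d["customers"]  # minutes/customer
    print(yr, f"total={d['outage_customer_minutes']:,}", f"SAIDI={saidi:.0f}")
# Total customer-minutes rose 30%, but SAIDI stays at 120 minutes per
# customer: per-customer reliability did not worsen.
```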
The necessity of conducting an after action report (AAR) following a restoration effort varies across the state public utility commissions (PUCs). The majority of PUCs in the Northeast and mid-Atlantic states require an AAR be submitted – typically within 30 days of the completion of the restoration effort. In addition, some states (e.g., the Massachusetts Department of Public Utilities) have the ability to assess financial penalties for “poor” storm response performance. The AARs, along with input from interested parties via public hearings, are used to determine the thoroughness of the storm response.
With Massachusetts, the penalties are capped at $20M per storm with no annual limit – https://www.cga.ct.gov/2011/rpt/2011-R-0385.htm . Assessed penalties paid by the utility are returned to the ratepayer as a one time adjustment to their monthly electric bill. The penalties must be paid by the shareholder (investor owned) and cannot be assessed to the ratepayers. Interestingly, municipal utilities in Massachusetts (there are no electric co-operatives or public power authorities) are exempt from these penalties. Keep in mind, though, that the penalties are IN ADDITION TO the loss of revenue from the storm itself (if the meters don’t operate, the utility makes no money) and the likely denial of full recovery of storm-related costs (if the performance was poor, then inefficiencies and waste likely occurred which shouldn’t be covered by the ratepayer).
Therefore, a utility – at least in Massachusetts – can lose significant amounts of money whenever a restoration effort is conducted. In 2012, Massachusetts utilities were penalized $24.8M for their responses to two separate storms in August and October 2011, penalties which were upheld in 2014 by the Massachusetts Supreme Judicial Court – https://www.boston.com/metrodesk/2012/12/11/three-utilities-facing-million-fines-for-faulty-response-tropical-storm-irene-and-halloween-snowstorm/SwiaTiLjeXKrO6XvXrmocJ/story.html . Legislators and regulators asserted that such an approach should “incentivize” a utility to improve its restoration effort; however, several variables that affect the global estimated time of restoration are typically beyond the control of the utility (e.g., resource acquisition during a National Response Event – http://www.eei.org/issuesandpolicy/electricreliability/mutualassistance/documents/ma_101final.pdf [p. 5]). As a result, the application of penalties could be construed as largely punitive rather than constructive, as implied by the public advancement of incentivizing the utilities.
Preposterous even to remotely connect weather and climate to the increasing length of outages, as the preceding comments show. A great example of irrelevant correlation.
During the period of this study, 2000-2012, U.S. wind turbine capacity increased from 2,578 MW to 60,005 MW. That represents in excess of 30,000 turbines (assuming 2 MW per turbine) by 2012 that feed the local grids through new and widely distributed transmission lines and control systems. I wonder how much of the delays were associated with the new renewables like wind and solar. Germany has about 25,000 turbines and a combined solar and wind capacity of about 72,000 MW. Operators there had to step in and stabilize the grid on about 3,000 occasions in 2014. With Obama calling for 28% renewable-generated electricity by 2030, there could be 100,000 more wind turbines. More brownouts and blackouts?
‘the length of outages has to do with utilities reducing the number of personnel available to respond to outages’
____
sounds reasonable – why should the utilities finance the MASKING of the devastating outcomes of green policies?
What the EPA orders, let the EPA pay.
Hans
We know Stalin’s answer to a reaction like
“what the EPA orders, let the EPA pay”:
“sabotage!”
Waiting for the EPA’s reaction.
Hans
Study: US Power Grid Has More Blackouts Than ENTIRE Developed World
The United States power grid has more blackouts than any other country in the developed world, according to new data that spotlights the country’s aging and unreliable electric system.
The data by the Department of Energy (DOE) and the North American Electric Reliability Corporation (NERC) shows that Americans face more power grid failures lasting at least an hour than residents of other developed nations.
And it’s getting worse
http://www.offthegridnews.com/grid-threats/study-us-power-grid-has-more-blackouts-than-entire-developed-world/
In a mature industry, it’s simply impossible to grow earnings at double digit rates, year in and year out, without shorting somebody. After you’ve cut staffing and wages as much as you possibly can, you must compromise the product; after you’ve cheapened the product as much as you possibly can, the only thing left is to short the end user. … Power outages in the affluent Dallas suburb where I grew up some decades ago lasted, at most, a few hours – and that was before “global warming” arrived to end winter as we knew it [sarc]. Last ice storm, a couple of years ago, my parents were without electricity for over a week. … It’s fashionable perhaps to blame regulation for utility shortcomings, but I think the regulators are doing the utilities’ bidding – they provide cover for management practices that are focused on producing ever higher earnings, not ever better service. … Did Nixon create the EPA to protect the environment? Or was the real goal to suppress competition by creating a maze of regulations that only the Big Boys could afford to comply with (while also restricting supply, which raised prices)? … Americans have simply become a cash cow for the corporate owners of the US government, which structures the business environment to promote their semi-monopolies, and consumers are being milked for all they’re worth. Case in point: the execrable quality and high expense of internet service in the country that invented the damn thing in the first place.
It’s disingenuous to assert profit as the motive with respect to electric infrastructure neglect, “extreme weather events,” and the duration of associated restoration efforts.
Given that they are heavily-regulated monopolies by necessity (because of the critical service they provide), the best a utility can achieve financially is a fully-allowed return on investment (ROI) rather than “producing ever higher earnings,” as if there is no limit. Utility accounting does not follow private industry accounting. The amount a utility can “make” in “profit” within a given year is typically capped by the state public utility commission. And even then, service quality indices (SQIs) nibble away at the cap, reducing the ROI even further, if the utility fails to accomplish the SQI targets and objectives.
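A rough sketch of the mechanic described above, with every figure invented purely for illustration (real rate bases, allowed returns, and SQI structures vary by state and utility):

```python
# Hypothetical regulated-utility numbers (all invented):
rate_base = 1_000_000_000   # $1B of capitalized distribution assets
allowed_roe = 0.095         # 9.5% commission-allowed return on rate base
sqi_penalty = 0.005         # 0.5% of rate base forfeited for missed SQI targets

best_case = rate_base * allowed_roe              # a ceiling, not a floor
after_sqi = best_case - rate_base * sqi_penalty  # SQI misses nibble it away
print(f"${best_case:,.0f} allowed; ${after_sqi:,.0f} after SQI penalties")
```

The point of the sketch is the asymmetry: the allowed return is the most the utility can earn, and service-quality misses only subtract from it.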
Investor-owned electric distribution companies represent conservative financial investments – low risk with low returns because of the regulated ROI caps. Higher returns are seen with electric transmission and power generation because of the increased volatility of those markets – higher risk but with higher returns, under differing federal oversight stemming from the need to insulate against market manipulation (e.g., the reasons behind the 2000/2001 California energy crisis – https://en.wikipedia.org/wiki/California_electricity_crisis ). However, damage to transmission and power generation assets following a major event is minimal compared to distribution assets. Thus, distribution utilities’ margins are historically thinner than those of their transmission and power generation counterparts.
RE:
“but I think the regulators are doing the utilities’ bidding”
Yeah, were it only true. I’ve been tasked as subject matter expert on one particular rule making by our Commission. So far commission staff has pretty much ignored all of our comments.
In our state the commission is appointed. In others they are elected. Who do you think elected regulators are most likely to side with, at least on rates – the utilities or the ratepayers (who also go by the name voters)?
Sorry, this study is mostly drivel with respect to its conclusions. Let’s start with this:
“We find statistically significant correlations…”
How many times does it have to be said? Correlation is not causation. The above comments have included a number of the real reasons for longer outages, including ageing infrastructure and more urban distribution infrastructure moving underground.
Here’s another. Urban transformer stations are carrying much higher load factors today than they were 30 years ago. Back in the 1960s and ’70s, the general rule of thumb was that you kept a TS at no more than 50% load. That way, if you lost a transformer, the other could carry the full load.
That was then. These days, the average TS is carrying above 80% of the peak daily load in large urban centres. Lose a transformer, in other words, and the utility is immediately into load shedding. Electrical distribution infrastructure today simply does not have the capacity margins that were in service 40 years ago. There are now entire districts in a large city near me with no local TS. Instead they are being served by long feeders from distant TSs. The effects on power quality can be pronounced depending upon what’s loaded on the system even if large specific loads have proper capacitor banks in place.
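The 50-percent rule described above is just N-1 redundancy arithmetic, and it is easy to sketch with illustrative ratings (the 50 MVA units and overload assumptions below are invented for the example):

```python
def shed_after_unit_loss(n_units, unit_rating_mva, load_factor):
    """Load (MVA) that must be shed if one of n identical transformers
    is lost, given station load as a fraction of installed capacity.
    Assumes surviving units carry at most 100% of nameplate
    (no emergency overload allowance)."""
    load = n_units * unit_rating_mva * load_factor
    surviving_capacity = (n_units - 1) * unit_rating_mva
    return max(0.0, load - surviving_capacity)

# Two 50 MVA units under the old 50% rule: the survivor carries it all.
print(shed_after_unit_loss(2, 50, 0.50))  # 0.0 MVA shed
# The same station loaded to 80%: lose a unit and 30 MVA must be shed.
print(shed_after_unit_loss(2, 50, 0.80))  # 30.0 MVA shed
```

This is why the comment's 80% figure implies immediate load shedding on any single-unit failure, where the old 50% rule rode through it.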
Why no local TS? Idiot downtowners who want parks instead of transformer stations. Idiot downtowners who think the transformers are still full of PCBs.
Which brings up another issue. The heavier the load, the more heat stress a transformer will be under. They need a good fire-resistant insulating liquid. PCBs were and still are the best performing insulating liquid for large transformers. But PCBs were denounced as carcinogenic in the 1980s and were all withdrawn from service. The substitutes for askarel (silicone or mineral oil) are all inferior in either electrical breakdown performance or cost. So nowadays, there are a lot more transformer fires than there used to be.
Here’s an example of ageing infrastructure at work. A large TS had a very large event in the summer of 1994. A metering transformer (quite small, about the size of a small garbage can) sprang a small leak and moisture got in. It collected in a bubble at the bottom of the container where it caused a short circuit. In the words of the official Toronto Hydro report, “it ejected the core and coils.” Meaning it blew up with extreme prejudice. It did so much damage that it weakened the support structures, causing the overhead busbars to crash down on top of the main power transformers, shorting them both out.
The eastern third of Toronto was blacked out for nine hours on a hot summer weekend.
The metering transformer that caused all the problems was over 60 years old.
“The metering transformer that caused all the problems was over 60 years old.”
Most distribution transformers have no moving parts, and power transformers have a separate load tap changer (LTC) – a mechanical moving part – that can be removed separately. Therefore, transformers usually endure well beyond their anticipated lifespans (about 20-25 years). Once a utility has capitalized the cost of the transformer (usually 15-20 years), the ratepayer benefits directly from the unit’s full depreciation over the extended lifespan (i.e., they’re not paying for the capitalization of a replacement unit).
With that said, a utility should also implement an asset management system that monitors performance and failure rates (especially beyond projected lifespans) to determine the ideal time for replacements (i.e., maximizing use while minimizing ratepayer cost). With increased regulatory scrutiny of asset management, as well as ever-expanding environmental regulations, the days of operating an electrical asset to failure have passed, although they were likely still practiced in the above 1994 example.
The great majority of utilities have “some” asset management system established today to mitigate failures resulting in reliability (i.e., customer outage) concerns. Anecdotally, I’m unaware of an increased number or rate of transformer fires attributable to core or casing heating caused by the performance of contemporary dielectric fluids. Ironically, though, the by-products of PCB combustion and oxidation (e.g., dibenzodioxins and dibenzofurans) present a very real health concern, as opposed to just the PCBs themselves. So contemporary transformers actually represent an improvement, which doesn’t mean they don’t fail, because they do. But again, an asset management system should mitigate those failures better than past practices.
“while, on average, the frequency of power outages has not changed in recent years, the total number of minutes customers are without power each year has been increasing over time.”
It’s a good thing we’ve been going through a period of fewer hurricanes and less extreme weather. If that trend doesn’t continue, not only will the total number of minutes customers are without power increase, but the frequency of power outages will also increase. Now, add to that a decrease in reliable fossil-fuel energy production and an increase in unreliable renewables, and the problem will only be compounded. The only question, if that happens, is whether alarmists and environmentalists will seek to deflect blame, or applaud the situation and then run for their lives.
I’m more likely to blame our education system for turning out lost human junk whose livelihoods are threatened by off-shore H-1B labor. I propose we don’t have as many skilled people working in infrastructure as we did not that many years ago (for a given unit of need), and that is why it takes longer to fix the damage. It should also be patently obvious that supply lines for energy are longer and spread deeper into remote areas than ever before. To put it another way, Bonneville Dam has far more customers than it did a few decades ago. There are lush neighborhoods now in areas where people once lived off the grid. The study is complete claptrap. Populations have grown, and grown into the forested lands where a windstorm has greater impact than ever. None of this has anything to do with global warming, right up to when it stopped 18 years ago. Global cooling, which ended in the 1970s, had no effect either.
“…This finding suggests that increased severity of major events over time has been the principal contributor to the observed trend…”
I read an article by a union there showing that the reason major events take longer to fix is that staffing levels of fully trained staff are lower. Equipment is more reliable on average, so six engineers are needed instead of twelve, but that means a major outage no longer has staff in reserve to cope with it by delaying routine tasks. They invited the management to respond but got no takers.
Where is the most basic piece of evidence: the number of staff and resources applied to dealing with the problem in each case? In the article it does not even merit a mention.
It also fails to mention how much of the increased spending was due to the cost of connecting to widely dispersed wind farms instead of a few power stations. Where do they show the relevant figure of resources available for tackling major events?
In short, a sloppy and amateurish piece of rubbish, even for a third-tier university.
If you are an academic, your career is based on publications. The criterion is simple: above the minimum cutoff for the tier your university is in, quantity counts more than quality.
One way to get another publication is to find data sources that provide a lot of variables. Run correlations among the variables, and then look for pairings that appear to be significant at the .05 probability level. (If you try enough variables, you will surely find such correlations even if the data came from a table of random numbers.) Examine these “significant” relationships to see if any can be made to fit into a narrative that some journal editors are emotionally committed to. Then write your paper and submit it.
Whatever you do, don’t reveal how many other correlations you saw that were not significant. There might be some referee with enough integrity and understanding of statistics to realize that the “significance” is probably an artifact of the shoddy research methods–data dredging–rather than a true indicator of a relationship.
This particular study looks like it may have been done with this approach. The authors couldn’t find a correlation between the number of outages and weather trends, so they looked for something else that did correlate. And they found it. Voilà, one more publication on the curriculum vitae.
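The data-dredging hazard described above is easy to demonstrate: correlate enough pairs of pure noise and roughly one in twenty will clear the p < 0.05 bar. A stdlib-only simulation:

```python
import math
import random

random.seed(1)

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Critical |r| for two-tailed p < 0.05 with n = 30 (df = 28) is about 0.361.
N, TRIALS, R_CRIT = 30, 1000, 0.361
hits = 0
for _ in range(TRIALS):
    x = [random.gauss(0, 1) for _ in range(N)]
    y = [random.gauss(0, 1) for _ in range(N)]  # independent of x by design
    if abs(pearson_r(x, y)) > R_CRIT:
        hits += 1

# Roughly 5% of pure-noise pairings clear the "significance" bar.
print(f"{hits} of {TRIALS} random pairings look 'significant'")
```

Which is exactly why a paper should report how many correlations were tried, not just the ones that came up significant.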