In the U.S. a furious debate has erupted among academic energy experts about whether the country could run 100% on renewable energy. Joshua D. Rhodes, Postdoctoral Researcher of Energy at the University of Texas, Austin, explains what is going on and offers some thoughts of his own. Courtesy The Conversation.
Science is messy, but it doesn't have to be dirty.
On June 19, a group of respected energy researchers released a paper in the journal Proceedings of the National Academy of Sciences (PNAS) that critiqued a widely cited study on how to power the U.S. using only renewable energy sources. This new paper, authored by former NOAA researcher Christopher Clack and a small army of academics, said that the initial 2015 study had "errors, inappropriate methods and implausible assumptions" about using only the sun, wind and water to fuel the U.S.
What followed was a storm of debate as energy wonks of all stripes weighed in on the merits of the PNAS analysis. Mark Z. Jacobson, a Stanford University professor who was the lead author of the 2015 study, shot back with detailed rebuttals, in one calling his fellow researchers "fossil fuel and nuclear supporters."
Why the big kerfuffle? As an energy researcher who studies the technologies and policies for modernizing our energy system, I will try to explain.
In general, getting to a clean energy system, even if it's 80 percent renewable, is a well agreed-upon goal and one that can be achieved; it's that last 20 percent, and how to get there, that forms the main point of contention here.
"Energy Twitter" on fire
Jacobson's seminal paper, which was also published in PNAS, tied together a significant amount of work of his own and others showing that all energy used for all purposes in the U.S. could come from wind, water and solar (WWS) by 2050.
What about when the sun doesn't shine, the wind doesn't blow or water is unavailable? His findings postulated that significant amounts of energy storage would be needed, mostly in the form of heat and hydrogen, to meet energy demand when there isn't enough renewable energy and to store it when there's too much. They also concluded this scenario would be cheaper than a world that relies on other technologies such as nuclear, carbon capture and other methods of reducing carbon emissions.
The Clack rebuttal was blunt and cut deep at the assumptions that underlie the work of Jacobson and colleagues. The same PNAS issue also included a counter-rebuttal to Clack from Jacobson.
Energy Twitter (that is, energy wonks like me on Twitter) exploded.
So why all the fuss?
Much of the heat from this debate seems to stem from Jacobson making some pretty bold claims in and about his paper, going so far as to tell MIT Technology Review that "there is not a single error in our paper." That is a very, very bold claim and, depending on how it is interpreted, could be read to say that the study authors' model is perfect, which of course it is not, as none are.
This debate may seem arcane, but it has significant political and societal implications.
Some celebrities have signed on to Jacobson's vision and have pressed for policies formed around his analyses of the feasibility of an entire energy system that runs 100 percent off of wind, water and solar. If policymakers buy into the technical and economic assumptions in the paper, it has big implications for the direction of state, local and national policies.
Detractors, meanwhile, have raised a number of concerns. In particular, they argue that decisions made based on Jacobson's analyses alone could lead to serious overinvestment in only the technologies considered, which could possibly backfire if the costs turn out to be higher than expected.
The nitty-gritty
To make projections around how the future energy system will work, researchers create computer-based models, input assumptions and then run simulations.
The rebuttal from Clack and co-authors focused on four major issues they saw with the WWS paper: 1) modeling errors, 2) implausible assumptions, 3) insufficient power system modeling and 4) inadequate scrutiny of the input climate model, which informs how much solar and wind power are available for power generation. Here are some highlights with my own thoughts sprinkled in.
Modeling errors
Clack takes issue with the amount of hydroelectric power that Jacobson assumes is available. In their rebuttals, they spar over the exact numbers, but Jacobson assumes there is about the same amount of total energy produced from hydropower in 2050 as today, although when, and at what rate, that energy is produced is a crucial question.
In Jacobson's model, there is a significant increase in hydropower capacity, up to 1,300 gigawatts (or about 10 times current capacity), which appears to run for at least 12 hours straight on some days of the model output. Jacobson says this is possible by installing more turbines and generators at existing dams, just not using them very often.
But dams are built with specific maximum flow rates because if you let too much water flow through a dam, you can flood areas downriver. Jacobson has since admitted that providing this much extra power from existing dams would be hard.
I recently took a tour of Hoover Dam. One of the first things they tell you is that the dam was built for irrigation and flood control, and that electricity production is a nice side product. So expecting that dams in the country could boost their output might be harder than the analysis implies.
Implausible assumptions
Clack questions a long list of input assumptions in Jacobson's model. A number are related to how quickly technologies can mature and be used at large scale, including underground thermal energy storage, phase-change materials to store solar thermal energy, and hydrogen as a usable fuel. Other critiques focus on assumptions about how flexible the demand for energy can be, a key consideration when dealing with variable sun and wind power. Then there's the amount of electric transmission infrastructure needed, the costs of all the capital required, the pace of investment needed and land use issues.
Some criticisms are probably fair. I tend to be bullish on the potential of technology to advance rapidly, but having worked in residential energy use, and energy retrofits in particular, I find the amount of geothermal energy storage retrofits for heating and air-conditioning in buildings Jacobson assumed hard to fathom.
I have some reservations about the ability of 67 percent of demand to be flexible. I also have some questions about the pace of investment required in Jacobson's scenario.
Insufficient power system modeling
Clack attacks LOADMATCH, the power system model in Jacobson's analysis, as being too simplistic. The main criticism of LOADMATCH is that it does not consider frequency regulation, the need to keep the frequency of the power grid steady at 60 Hz, which is a very important aspect of keeping the power supply reliable.
One piece of anecdotal information: Jacobson states in the paper's Supplementary Information that it takes LOADMATCH about three to four minutes to simulate an entire year. Our simulations of just the Texas electricity market can take hours to run, and significantly longer for simulations with high levels of renewables.
After reading both papers, both supplementary information sections, the counter-rebuttal, a lot of news articles and tweetstorms (from other energy folks I trust), I find myself thinking that the burden of proof is still in Jacobsonâs court. There are many lessons to learn here.
But, in the end, my view is that the body of scientific understanding will be stronger for it. The peer review process is slow, uses imperfect human volunteers and doesn't always get it exactly right the first time. The list of authors on the Clack rebuttal is impressive, and should be paid attention to. However, if Jacobson's work can survive this challenge, I figure it will stand the test of time.
Editor's Note
This article was first published on the website of The Conversation and is republished here with permission from the author and under the Creative Commons licence of The Conversation.
Joshua D. Rhodes (@joshdr83) is a Postdoctoral Research Fellow in The Webber Energy Group and the Energy Institute at the University of Texas at Austin. His current research is in the area of smart grid and the bulk electricity system, including spatial system-level applications and impacts of energy efficiency, resource planning, distributed generation, and storage. He is also interested in policy and the impacts that good policy can have on the efficiency of the micro and macro economy, especially policy that utilizes market forces to increase efficiencies.
Hans says
"In general, getting to a clean energy system, even if it's 80 percent renewable, is a well agreed-upon goal and one that can be achieved; it's that last 20 percent, and how to get there, that forms the main point of contention here."
So let's make this 80% the policy goal. In the meantime we can do R&D for the last 20%.
Tilleul says
There have been scientific papers explaining how to get to 100% renewables since 1976! Everyone should ask why they picked this unknown study. It reminds me of the Koch brothers-financed attacks on Michael Mann's "hockey stick" after it was shown in Al Gore's documentary. First they pick a study and spread false information that everything scientists say depends on this study, then they start to create doubt about the study, then at the end they start to attack the man with fake news (remember the hack just before Copenhagen?).
Karel Beckman says
You are poorly informed if you think Jacobson's is an "unknown study". He is famous in the U.S. energy debate; this is clearly explained in the article. Why don't you read it, or Google Jacobson, before you make a comment like this?
Arnold Roquerre says
The hockey stick has been shown to be a good example of how to cherry-pick a time interval to exaggerate temperature change. The trick used is similar to playing with statistics to get a desired result. Or, NOAA picking an extremely low temperature to make the temperature orange and red in the fall when temperatures are between 65 and 80.
Sadly, both sides are biased and are not value neutral in their analysis.
Paul Bryan says
I agree in principle, and I would not dispute that INDIVIDUAL scientists are sometimes guilty of “cherry picking” time intervals or some other basis for presenting their data.
However, when we look at past CO2 or T extremes, we know at least two things:
(1) Any past CO2 rise could not have been anthropogenic.
AND
(2) The T rise COULD have been caused by something other than CO2.
Indeed, in many (most? all?) cases, the CO2 rise occurs AFTER the initial T rise, so CO2 could not have been the primary cause. And we know also that our best scientific information and analysis does NOT support a cause for the present T rise OTHER THAN human activity.
Given that the modern CO2 rise preceded the T rise, that there is no evidence other than human activity to explain the size of the CO2 rise, and that there is no evidence to support a cause other than the CO2 rise for the observed T rise, I feel that the “Hockey Stick” is fair.
Additionally, for historical rises, there was nothing like today’s expanse and density of human populations on the Earth. Adaptation was a bit simpler when humans could carry everything they owned, and when they had built no great infrastructure. Sea rising? Move inland. Too hot? Increase altitude or latitude. Too dry? Walk until you find water.
And simple as it might have been to adapt, it’s quite possible that many humans DID die of climate change. We have no cave paintings, as far as I know, denying that climate change was responsible for the deaths of neighboring groups. Perhaps the Trump and Pruitt families just don’t go back that far.
martin says
How can people fail to see that Jacobson's cost assumptions for 2050 (for production and storage) were insanely pessimistic? In 2017, two years after his publication, actual production costs are lower than these 2050 assumptions, and storage costs will be in a few years. And yet Clack et al. find the assumptions unrealistic and optimistic (!!!).
Many new storage technologies see their costs falling at an abrupt pace, much faster than the most optimistic expectations. The use of storage in Jacobson's scenario may be questionable sometimes, but in the end I am certain he doesn't have access to the latest R&D findings, or he would have made way bolder claims for 2050 (with somewhat different technologies).
As for Clack's statement that frequency regulation may be a problem under Jacobson's electricity mix, he should have come to Europe, where industry, wind and storage have been massively providing frequency services for years, often at extremely competitive costs. No European TSO believes any longer that short-term frequency regulation under a 100% renewable mix will have a significant cost impact.
Poor Science.
Bob Wallace says
Any paper/study has to be read within the context of when it was written. Jacobson's paper was received for publication in May 2015. Since then we've seen major drops in solar and storage costs. I look at that paper as giving the least expensive mix for 2050 based on 2015 realities. I'm sure that neither Mark nor anyone else expects things not to change over time. Best to view Mark's results as a 'worst case' scenario while expecting better options to appear as time marches on.
Budischak did a similar study for the largest wholesale grid in the US in 2012. The numbers his group used now look “quaint” because wind, solar and storage costs have dropped so much.
—
The frequency regulation alarm tells us that the authors of the other (Clack/"21") paper don't know very much about generation from wind and solar facilities. That energy flows through inverters, which tightly lock down the waveform.
Frequency variation comes from large thermal plants encountering sudden load changes.
Helmut Frik says
Well, I think some of the criticisms that the model is too simplistic at several points are correct and false at the same time.
The answer depends on what you expect from the model.
The study produces an answer to the question of whether or not it is possible to run the US on 100% renewables if you restrict the kinds of supply to wind, solar and hydro only. And because time and money are always limited, it does so with a limited amount of detail in the model.
Is it reasonable to limit yourself to these ways of producing energy? Surely not. E.g., power from biomass (wastes) can surely be used too, and some pumped storage facilities can be built as well.
And even within the limited amount of power production and storage techniques the result is most likely not the optimum combination. It is one among very many.
So if someone would like to have a study based on a more detailed model, a second and third more detailed study would be necessary.
Or you keep the study as it is, and look at how things would roughly look at other extreme points, e.g. with no additional hydropower turbines, but with waste biomass + gas storage and plenty of generation capacity. This gives a rough overview of the whole giant multidimensional field of possible solutions.
And then do the practical realisation with revolving short- and mid-term simulation and modelling, to adapt to the constantly occurring changes and technical developments under market conditions. When it is known that even with extreme assumptions in one or the other direction a more or less feasible solution is possible at acceptable cost, it is quite certain that some much better solution is possible in between these extreme points.
Bob Wallace says
” expecting that dams in the country could boost their output might be harder than the analysis implies.”
First, let’s acknowledge that Jacobson is a Ph.D. civil engineer so he clearly knows how to measure stuff. He’s a professor in a very major university and has a lot of publications. He clearly understands the need for basing one’s claims on verifiable data. Mark has been working in the renewable energy field for many years, he’s not someone who wandered in from another discipline to offer an opinion.
Now, boosting dam output. It’s not hard. It would require construction work, but we are going to do a lot of construction work between now and 2050, regardless of how we are generating electricity in 2050. Almost all our existing plants (nuclear, coal, gas and wind) are age limited and will have to be replaced with something.
Here’s how one boosts dam output…
1) Build a secondary reservoir below the dam. That reservoir does not need to be large; it only needs to hold a few days of output water.
The secondary dam allows for a large discharge and then controls the flow of water downstream so that there’s no “flood”.
2) Upsize the existing turbines or install more turbines.
That's it. It's a construction project. Almost all dams already have a safety zone below them where no one lives and no buildings are built. The secondary reservoir doesn't need to have a dam at all. It can simply be a 'pond' excavated into the ground with an outlet pipe further downhill.
The “21” apparently don’t know about secondary reservoirs although some dams already have them.
This is how we convert existing non-electricity-producing dams into pump-up hydro storage facilities. Catchment reservoir. Pump/turbine combo. Hook up to the transmission lines. Storage achieved. The US has about 80,000 existing dams and only 2,500 or so are used for electricity generation. At least 10% (based on a survey of dams on federal lands) should be usable for pump-up storage.
Nigel West says
250 is out by a factor of 5. Actually across the USA there are 51 potential pumped storage hydro projects in the FERC pipeline that would provide 39GW of generating capacity. A broad estimate of storage capacity is 10GWh/GW generating capacity. So approx. 0.5TWh total PSH capacity. In a high renewables scenario that wouldn’t be enough to cover the storage needs of just California, never mind the USA.
The other unanswered question is who would pick up the bill for all this new PSH capacity? Private investors will not be interested unless the costs are underwritten with long-term contracts. Rather than falling on consumers, renewables developers should pay to solve the intermittency problem they have caused.
BTW civil engineers are unlikely to understand electrical power transmission systems.
Helmut Frik says
Nigel, the costs are included in the calculation, so ordinary construction companies will build such projects. It will be worth the money because, as soon as renewable generation is expanded, conventional generation will leave the market. This will result in lower prices during high renewable generation, and higher prices during low renewable generation.
If the effects you expect show up, i.e. a lack of power generation due to low renewable generation, this will provide a financial incentive to supply more CAPACITY which can be switched on at short notice, and which has low costs per kWh. And it will incentivize measures to shift generation away from times with high wind/solar generation, due to lower earnings per kWh at such times.
Additional turbines at existing hydro dams are among the cheapest possibilities to provide extra capacity, so it is reasonable to build them IF the market demands that additional capacity. Which MIGHT happen with high renewable penetration. So far in Germany, with around a 38% renewable generation share this year, this demand does not exist. So renewable shares obviously must be higher than that to create demand for this during low renewable generation. Nobody here sees the necessity for such projects below a 50% renewables share, and utilities do not see a need for them below a 60-80% renewables share.
So there is plenty of time to see which dam is best suitable for such systems. Or where a dam could be dual use.
I did read plans for two Spanish pumped storage projects, of 3 and 3.5 GW, which as a side effect should provide storage for irrigation too: store energy and water during the windy winter, and release energy and water during hot summer days with air-conditioning demand. They are not designed for seasonal storage, but water is quite valuable in Spain in summer, so part of the capacity is a dual-use backup (50%?). It can be released when there is a big shortage of electricity, providing a small push, or be released to help irrigation downstream, providing a tangible backup.
Bob Wallace says
(80,000 – 2,500) * .1 = 7,750 potentially convertible dams. At the minimum.
Add to that tens of thousands of abandoned rock quarries, open pit mines, subsurface mines, and places where closed loop PuHS could be installed.
Who would pick up the cost of new pump-up storage?
Who will pick up the cost of whatever new generation and storage we build? Between now and 2050 we basically have to rebuild everything we have now except for dams.
Bob Wallace says
Currently there are 33 PuHS projects underway around the world. 11 are in the US. Progress ranges from pre-contracting when permits are obtained, through contracting, and to construction.
Many of the US builds are west of the Rockies where coal is quickly disappearing. Obviously someone thinks that RE + storage makes sense. They’re spending money on it.
There's a monster being built in BC, Canada. A max output of 4 GW. Duration has not yet been revealed, but its role is time-shifting electricity, so there's likely to be some serious energy stored in the upper reservoir. The lower reservoir is a large river. It's being built with private money.
(There are also 76 battery storage projects currently being built.)
Bob Wallace says
"The main criticism of LOADMATCH is that it does not consider frequency regulation, the need to keep the frequency of the power grid steady at 60 Hz, which is a very important aspect of keeping the power supply reliable."
I’m not extremely knowledgeable about grid operation so perhaps someone can correct me if I’m wrong but here’s what I understand causes frequency variation.
A sudden change in load/demand can cause voltages to rise or fall, and that puts either more or less load on spinning generators. If the load is large, the combined input of all generators may not be sufficient to allow them to return to full speed (60 Hz); or, in the case of load drops, the momentum in spinning generators causes them to turn faster when unloaded.
Because of this, spinning generators are typically run at less than full power: "spinning reserve".
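The physics behind this is the textbook swing-equation relation: a power imbalance decelerates (or accelerates) the spinning machines, and the initial rate of change of frequency depends on the aggregate inertia. A minimal sketch, with all numbers purely illustrative (none come from either paper):

```python
def rocof(delta_p_pu, h_s, f_nom=60.0):
    """Initial rate of change of frequency (Hz/s) after a sudden imbalance.

    delta_p_pu: power deficit as a fraction of system rating
                (positive = load exceeds generation)
    h_s:        aggregate inertia constant of the synchronous machines, in seconds
    f_nom:      nominal grid frequency in Hz
    """
    # Swing equation: df/dt = -delta_P * f_nom / (2 * H)
    return -delta_p_pu * f_nom / (2.0 * h_s)

# Illustrative: losing 5% of supply on a grid with an assumed H of 5 s
print(rocof(0.05, 5.0))  # -0.3, i.e. frequency initially falls at 0.3 Hz per second
```

The more inertia (H) on the grid, the slower frequency falls after a disturbance, which is why the role of inverter-based resources in frequency control gets argued about below.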
Grids are now installing batteries to deal with voltage fluctuations. Batteries respond faster and have now become cheaper than paying for spinning reserve.
As we add wind and solar to grids we should have fewer problems with frequency fluctuation. Wind farms and solar farms can be used to lock down grid frequency.
Some wind turbines drive an AC/AC converter (which converts the AC to direct current (DC), then back to AC with an inverter) in order to match the frequency and phase of the grid. Solar farms generate DC, which is then converted with an inverter to AC. These systems can lock onto a specified waveform.
The "21" may not have known about the behavior of DC/AC inverters. If one is considering an (almost) all wind/solar/storage-fed grid, frequency variation is not an issue.
Nigel West says
Bob, when demand and generation become out of balance that affects system frequency, not voltage. The inertia of synchronous machines helps to arrest a fall in frequency while reliable reserve generation sources ramp-up to meet the deficit in generation relative to demand.
Grid operators are installing batteries to provide frequency response. That service should not be needed regularly, just when a large generation infeed is lost. Hence the charging duty is not onerous. If 50MW battery installations were charge-cycled daily, like a laptop/mobile phone, the batteries would soon lose >30% of their capacity, so they'd need replacing after a short time. Suppliers will not warrant batteries for daily cycling.
Pumped hydro storage plants are excellent for frequency response and don’t have a charge cycling problem.
Most wind/solar farms don’t provide reserve or frequency response services but will have to in future. Grid operators are cautious over asynchronous machines being able to provide response and reserve services. On the few days in Germany when the output of their wind/solar fleet is capable of meeting 100% of system demand I believe synchronous machines remain connected too to provide response and reserve so the grid remains stable.
Helmut Frik says
Nigel, take a deeper look into battery technology. There are many, many kinds of batteries, and those used in laptops are high-capacity-per-weight/low-cost batteries with very low cycle counts.
On the other end you have specialised LiFePO4 batteries, with significantly higher weight per capacity (irrelevant in stationary use), which can run 5,000-20,000 100% cycles. Those are used for grid services.
Accept that the engineers who build the batteries for such services, and who give the OK to finance them, know more about the topic than you. Silently, without much public notice, several hundred MW of such batteries have been rolled out in Germany, because they provide the grid services more cheaply than the old-fashioned and slow-to-react synchronous machine/rotating mass/electromechanical spring systems, with their tendency to swing and their catastrophic behaviour once the maximum possible additional power is exceeded for even a very short time.
Things are changing very fast in the market.
Nigel West says
Frequency response service is called on infrequently, so batteries are not cycled daily; hence no need for higher-cost battery technology. Installations just need to be kept topped up. But cycle them daily and they will not last long.
Several hundred MW of batteries are being rolled out across the UK to provide frequency response services to Grid. Grid tendered for the service, so the lowest-cost battery tech will be used. They only need a few hundred MW to go with existing PHS capacity. They will not be procuring more. There is no business case for more battery capacity for just arbitrage use to store surplus renewables.
Helmut Frik says
Nigel, there is no significantly higher cost per kWh for the heavier, high-cycle batteries, but there are lower operating costs. I do not really care what somebody buys as batteries in the UK. Here the batteries are also used to compensate short-term load and generation fluctuations, to maximise use of grid sections, and to provide reactive power. They finance themselves from these services.
Bob Wallace says
” But cycle them daily and they will not last long.”
20,000 / 365 = ?
(Hint: The correct answer is more than half a century. Longer than any nuclear reactor has ever lasted.)
Bob Wallace says
” when demand and generation become out of balance that affects system frequency, not voltage”
Are you sure? When I turn on a large piece of woodworking machinery there is a voltage drop. When I turn it off there is a voltage spike.
If the power is being supplied by a spinning generator (rather than a battery/inverter) the generator slows under load which, in turn, decreases RPM until more fuel allows the generator to come back up to speed. Frequency drop follows voltage drop.
“If 50MW battery installations were charge cycled daily, like a laptop/mobile phone, the batteries would soon lose >30% of their capacity so need replacing after a short time.”
Tesla has simulated over 500,000 miles on their batteries and they continue to operate at over 80% of their original capacity. Figure 250 miles per charge/discharge cycle; that's 2,000 cycles. Cycled daily, those batteries would last 5.5 years above 80%.
Toshiba’s SCiB lithium-titanate cells have undergone 40,000 cycles and remained above 80% original capacity. That would be 109 years of daily cycling.
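The cycle-life arithmetic in this thread is simple enough to check directly. A small sketch using the cycle counts quoted above (a naive estimate: real batteries also suffer calendar ageing and depth-of-discharge effects, which this ignores):

```python
def years_of_daily_cycling(rated_cycles, cycles_per_day=1):
    """Naive lifetime estimate: rated cycle count divided by cycles per year.

    Ignores calendar ageing and depth-of-discharge effects, so this is an
    upper bound, not a warranty figure.
    """
    return rated_cycles / (365 * cycles_per_day)

print(round(years_of_daily_cycling(2_000), 1))  # 5.5 years (the Tesla figure above)
print(round(years_of_daily_cycling(40_000)))    # ~110 years (the Toshiba SCiB figure)
```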
Nigel West says
Fancy battery technology to improve life will only make them even more expensive. Tesla’s Powerwalls use Lithium ion cells. Their warranty says down to 60% capacity after 10 years, or 18MWh discharge.
https://notalotofpeopleknowthat.wordpress.com/2016/07/02/teslas-incredible-shrinking-powerwall-warranty/
If their grid scale units use Lithium ion tech. performance will probably be similar.
Bob Wallace says
Lots of new cars come with 36-month, 30,000-mile warranties.
Do those warranties tell you anything about the expected lifespan of a car?
What we are seeing in the real world with Tesla batteries is that the vast majority are holding above 90% capacity as mileages exceed 100,000.
There are a few packs that fell below 90% early on. That’s where the warranty comes in. Those packs go back to the shop where the substandard cells are identified and replaced.
Bob Wallace says
"One piece of anecdotal information: Jacobson states in the paper's Supplementary Information that it takes LOADMATCH about three to four minutes to simulate an entire year. Our simulations of just the Texas electricity market can take hours to run, and significantly longer for simulations with high levels of renewables."
Yesterday I put together a spreadsheet model of 2014 ERCOT load and wind generation. One hour data (24 x 365).
In 2014 ERCOT generated about 10% of their total electricity with wind. Only one hour block out of 8,760 was fully supplied by wind.
The unsubsidized cost of onshore wind in the US is now under $0.03/kWh and heading lower. I think it’s a safe assumption that onshore wind will be at or below $0.02/kWh in the next few years.
If ERCOT added 10x more wind generation they would generate 103% of their annual demand/load. 47% of all annual one hour blocks would be fully supplied by wind. 30% of generated wind would be curtailed (I assumed no storage or dispatchable loads). With 30% curtailment the cost of wind jumps from $0.02/kWh to $0.026/kWh.
If ERCOT then doubled the amount of wind generation over the “100% generation” level 73% of all one hour blocks would be fully supplied by wind. 58% of all wind produced would be curtailed. The cost of the electricity used would rise to $0.032/kWh.
If ERCOT tripled the "100% generation" level, 83% of all one-hour blocks would be fully covered by wind. 70% of all wind generated would be curtailed. The cost of electricity used would rise to $0.034/kWh.
The short version: We can overbuild wind a lot without driving prices up very much. Overbuilding, at least in the ERCOT case, would be much cheaper than building storage.
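A toy version of that spreadsheet exercise can be written in a few lines. This is not Bob's model (his numbers come from real 2014 ERCOT hourly data, and his cost-accounting convention isn't spelled out); it just shows the mechanics of scaling hourly wind against hourly load and tallying coverage and curtailment:

```python
import numpy as np

def overbuild_stats(load, wind, scale):
    """Scale an hourly wind series and tally coverage and curtailment.

    load, wind: hourly series of equal length (e.g. 8,760 values for a year)
    scale:      overbuild multiplier applied to the wind series
    Returns (fraction of hours fully supplied by wind, fraction of wind curtailed),
    assuming no storage and no dispatchable loads.
    """
    w = wind * scale
    covered = w >= load                     # hours where wind alone meets demand
    used = np.minimum(w, load)              # wind actually consumed
    curtailed = 1.0 - used.sum() / w.sum()  # surplus wind thrown away
    return covered.mean(), curtailed

# Illustrative four-hour example (made-up numbers, not ERCOT data)
load = np.ones(4)
wind = np.array([0.5, 1.0, 2.0, 0.5])
frac, curt = overbuild_stats(load, wind, scale=1.0)
print(float(frac), float(curt))  # 0.5 0.25
```

Sweeping `scale` over a real hourly dataset reproduces the shape of the trade-off described above: coverage climbs while curtailment (and hence cost per delivered kWh) climbs more slowly.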
Just some rough data on the length of stretches when wind did not produce 100% of the hourly demand:
1 thru 6 hours = 49%
6 thru 12 hours = 29%
13 thru 18 hours = 9%
19 thru 24 hours = 3%
25 thru 48 hours = 9%
49 thru 57 hours (less than 2.5 days) = 1%. One 51-hour stretch and one 57-hour stretch. No low-wind periods greater than 2.5 days.
When a long stretch of inadequate hours was broken by only one or two hours of adequate generation, I counted the entire stretch as one event. A 7/1/8 pattern would have been counted as a 16-hour stretch of inadequate wind.
There were no “zero wind” hours. Some generation always happened. I took a quick look at a 43 hour low wind period in early August. Even though demand was not met wind still produced 59% of demand.
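For what it's worth, that streak tally is easy to reproduce on any hourly series. A minimal sketch (this version counts uninterrupted shortfall runs only; the tally above additionally merged runs separated by one or two adequate hours):

```python
def shortfall_streaks(load, wind):
    """Return the lengths of consecutive hours where wind output < load."""
    streaks, run = [], 0
    for l, w in zip(load, wind):
        if w < l:
            run += 1            # still inside a shortfall streak
        elif run:
            streaks.append(run)  # streak just ended
            run = 0
    if run:                      # streak still running at end of series
        streaks.append(run)
    return streaks

# Illustrative five-hour example: a 1-hour and a 2-hour shortfall
print(shortfall_streaks([1, 1, 1, 1, 1], [0.5, 2.0, 0.3, 0.4, 2.0]))  # [1, 2]
```

Bucketing the returned lengths (1-6 h, 6-12 h, etc.) gives a distribution like the one listed above.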
(BTW, my model does a rerun in less than 30 seconds. ;o)
Nigel West says
I realise this article is focussed on the US, but overbuilding renewables has been looked at for the UK here:
http://euanmearns.com/the-quest-for-100-renewables-can-curtailment-replace-storage/
Even if feasible, the required storage/wind/solar capacity would be prohibitively expensive for the UK. That is just to deal with the smaller issue of electricity demand, not total UK energy demand.
Helmut Frik says
Total UK energy demand would actually make things easier, since thermal storage and the batteries in BEVs would balance out the shorter low-wind periods, and solar would balance out the low-wind periods in summer, when they are more frequent.
Power imports and exports are also always possible to fill the gaps. There are a lot of factors the fake news site you reference ignores.
Nigel West says
Over-cycling BEVs to support the grid will ruin an expensive EV battery. If I owned one I would not permit a supply utility to use my car battery. Thermal storage is an expensive pipe dream and will not heat UK homes adequately during the winter.
The UK requires secure power supplies under national control, so forget over-relying on imports.
Helmut Frik says
So I guess you always fuel your ICE car at exactly the same minute of the day with exactly the same amount of fuel, whether you need it or not.
Otherwise, like most people, you would have flexibility in when you fill the battery of a car that runs, say, 400 km on one charge and is driven, say, 30 km per day: anywhere from every day to once every 13 days, the latter letting you bridge up to 12 days of high prices. No discharging to the grid necessary.
Also, I guess you live in a mass-free home made out of ultrathin foil like graphene. Otherwise your building has thermal mass, and using the ±2°C tolerance of ordinary thermostats, about which nobody complains, would allow many TWh of thermal energy to be stored in existing buildings without adding a single piece of equipment, only by using intelligent control algorithms (software, not hardware).
Bob Wallace says
If the grid paid you enough to use your EV batteries you’d play along.
It’s unlikely that we’ll see EV batteries play a significant role in grid supply. Stationary batteries are likely to be much cheaper than batteries designed to be hauled around in vehicles. Plus there would need to be inverters to turn the battery power into grid AC (plus meters to measure how much is sold).
EVs will almost certainly play a very important role as dispatchable loads. EVs will need, on average, three hours or less of charging per day from a normal 220VAC outlet. That means that they can be charged when electricity is the most available and not charged when supply is strained.
“Thermal storage is an expensive pipe dream and will not heat UK homes adequately during the winter.”
[…]
There is a company (Ice Bear) that cools/freezes water at night when electricity is cheap then uses that stored “cool” to run AC during the day when electricity is expensive.
Their system uses 450 gallons of water for a normal-sized US house: 60 cubic feet, a cube a bit less than 4 feet on each side.
Heat can be stored in the same way.
Perhaps there would need to be a bit of excavation in the garden. Bury a few thousand gallons of insulated water under the rose bushes.
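A quick unit check on those figures (US gallons; the gallons-per-cubic-foot conversion is the standard one):

```python
# 450 US gallons of water expressed as a cube.
GALLONS_PER_FT3 = 7.48052            # US gallons in one cubic foot
volume_ft3 = 450 / GALLONS_PER_FT3   # ~60 cubic feet, as the comment says
side_ft = volume_ft3 ** (1 / 3)      # ~3.9 ft per side
```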
Bob Wallace says
I don’t think you’re doing the right math. (And I trust nothing Mearns does.)
Onshore wind in the US is moving toward $0.02/kWh unsubsidized. Offshore wind in other parts of Europe is getting close to $0.04/kWh, IIRC.
If Hinkley Point came online today the cost would be a bit over $0.11/kWh. With each year its cost will rise while the cost of wind and solar will drop.
More than 50% of electricity used should come directly from wind turbines and solar panels; less than 50% should need to be stored. It’s hard to nail down the cost of PuHS (pumped hydro storage). Assume it’s $0.10/kWh (although the Swiss claim about half that).
50% at $0.05 and 50% at $0.15 ($0.05 for generation plus $0.10 for storage) averages out to $0.10/kWh, less than $0.11.
And I’d estimate that with modest overbuilding and load shifting the amount used direct could rise to around 70% (at <5c) and 30% stored (at ~15c). That's $0.08/kWh electricity and a 24/365 grid.
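Bob's blended-cost arithmetic, spelled out with his assumed prices ($0.05/kWh generation, $0.10/kWh storage adder); the function name is mine:

```python
# Average cost when a fraction of electricity is used directly and the
# rest passes through storage.
def blended_cost(direct_fraction, generation_cost, storage_adder):
    stored_cost = generation_cost + storage_adder
    return (direct_fraction * generation_cost
            + (1 - direct_fraction) * stored_cost)

print(round(blended_cost(0.5, 0.05, 0.10), 3))  # → 0.1  (under $0.11)
print(round(blended_cost(0.7, 0.05, 0.10), 3))  # → 0.08
```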
Nigel West says
You need to consider capital costs.
To meet UK demand, around 215 GW of offshore wind at c. £3m/MW and 137 GW of solar at c. £1m/MW would be required. That comes to roughly £800bn, so it can be dismissed as unaffordable for the UK when there are much cheaper options. At least 3 TWh of storage would be needed too – 100 times current storage capacity.
Nuclear, by contrast, costs about £3m/MW – about £150bn to cover UK demand. Nuclear would have operating costs to consider, but 215 GW of wind and 137 GW of solar would need investment in 3 TWh of storage (not feasible using PSH, and batteries highly unlikely) plus more extensive transmission works. And 137 GW of solar would not be feasible for a small island unless the UK gave up farming!
Helmut Frik says
Well, offshore wind is already well below £3m/MW – around £2m/MW – and solar is at around £0.7m/MW.
215 GW offshore, sited further out and with full load hours like in Germany, would provide around 860 TWh per year; 137 GW of solar would provide around 137 TWh per year, for a total of 997 TWh per year. Actual consumption is somewhere around 400 TWh per year, so there would be plenty of power to export.
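Helmut's back-of-envelope generation figures follow from capacity times full-load hours. The full-load-hour values below are my placeholders, chosen to be consistent with his totals (roughly 4,000 h/yr for far-offshore wind, 1,000 h/yr for UK solar):

```python
# Annual energy = nameplate capacity x full-load hours.
def annual_twh(capacity_gw, full_load_hours):
    return capacity_gw * full_load_hours / 1000   # GWh -> TWh

wind_twh = annual_twh(215, 4000)    # 860 TWh
solar_twh = annual_twh(137, 1000)   # 137 TWh
total_twh = wind_twh + solar_twh    # 997 TWh vs ~400 TWh UK consumption
```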
Storage would be better built as one or a few large units. Since the UK has little space on land but large areas of ocean around it, it would be reasonable to build such pumped storage units offshore – in case someone needs them, which is unlikely as long as the UK does not disconnect itself from the grid. Well, in the case of the UK nothing is sure, so maybe you will disconnect yourselves from the rest of the world.
Pumped storage has a huge economy of scale, so when “think big” is included the costs shrink dramatically. But so far there is no need for it, even with very big extensions of renewable power generation.
Bob Wallace says
One has to consider capital costs, financing costs, and operating costs. All of those are rolled into the cost per kWh/MWh. Cost per unit of electricity produced (or stored) is the best metric.
If you want to know why a per kWh price is what it is look into the capital, finance, and operating costs.
Nuclear is simply too expensive, no matter how many times you ignore the obvious.
Allan Hoffman says
As someone who has been thinking about energy issues since 1969, and who sees the U.S. and the world on the path to an energy future largely dependent on renewable energy, my reaction to the Jacobson-Clack debate is that I’m glad to have it. After many years of battling statements by traditional energy representatives that renewables are cute but can’t do the job, it is satisfying to see that achieving 80% renewables penetration is now being widely accepted. Admittedly, the final 20% will be a challenge, but we can see possible ways to make a significant dent in this final piece as well. We may not get there on Jacobson’s 2050 schedule, but we will make significant progress in the next few decades. All good news as far as I’m concerned.
Bob Wallace says
I suspect the world will hit 80% renewable before 2040.
Wind and solar have become so affordable that it now makes sense to turn off fossil fuel plants when the wind is blowing or the Sun shining. And the costs of wind and solar are continuing to fall, so the pressure to curtail FF will only increase.
The external costs of fossil fuels are becoming much better known to the public in general. People, on a large scale, are now understanding that fossil fuel emissions (power production and transportation) are killing us and costing us huge amounts in health costs. Now that we have (electricity) or soon will have (vehicles) more affordable alternatives public pressure will increase to get this stuff out of our lives.
And the world is becoming more concerned with climate change. People are going to ask, with louder and louder voices, why we are paying more for electricity and travel that increases global warming when we could pay less and lower the rate of global warming.
In 2014 the planet got 62% of its total energy from fossil fuels. Convert 3% of FF to RE per year and we replace 100% fossil fuels in 20 years.
Three percent is not an unreasonable target. We probably have 1% to 2% of all existing FF plants that age out each year and have to be replaced anyway. Replace them with RE, save significant “installed cost” money, use that savings to boost the rate of installing more RE.
Right away our electricity costs begin to drop. Our cost of driving per mile drops. Our spending on FF pollution health costs begins to drop.
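Bob's replacement-rate arithmetic, made explicit:

```python
# 62% of world primary energy came from fossil fuels in 2014 (per the
# comment); converting 3 percentage points per year to renewables:
fossil_share_pct = 62
conversion_rate_pct_per_year = 3
years_to_zero = fossil_share_pct / conversion_rate_pct_per_year
print(round(years_to_zero, 1))  # → 20.7, i.e. roughly 20 years
```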
Paul Bryan says
I have not read the study, and my expertise in the field is inadequate in any case to dispute either the study or the re^N-buttals since its release. I have read multiple accounts of both the study and the primary rebuttal, plus lots of comments like those above.
My interpretation is that Clack’s study is FAR off the mark on the level of hydro-power it requires. Any attempt at that level would consume more energy than it produces, even if we only count the gasoline burned by lawyers and activists going back and forth between hearings and protests. It is yet another classic example of the NAS as an academic-dominated organization with little feel for practical realities. NAS / NRC publications on “real-world” issues are frequently so deeply flawed that PNAS is probably the only place they could get published.
Clack also seems to be predicting scientific advances, and that is hard to do on any given timeline. Trends are useful up to a point, but if we believed trends without reservation, we’d predict solar cells becoming >100% efficient in a few decades. That’s a silly example, of course, but other technical improvements may reach fundamental barriers, or at least plateaus that are not so obvious.
All that being said, although Clack’s energy mix and timing may be far off, the GOAL is still one very well worth pursuing. I firmly believe that technology will advance, but I also believe that technology is very much a second-order effect here. Until we arrive at some way to have sensible, intelligent discussions of public policy on global, national, and local levels, the actual RESULTS will fall far short of and lag far behind what technology is capable of delivering. And those discussions seem to be moving in the WRONG direction. Trump is merely a severe, acute symptom of a long-term, chronic disease. Until society can overcome partisanship and financial self-interest, all the studies and all the technical advances in the world will make little difference.
Jim Shnell says
The decision whether to set 100% renewable energy as the climate change goal for the next few decades, as advocated by Professor Mark Jacobson and other authors in a 2015 paper on renewable energy, or to set 80% decarbonization as the goal, including carbon sequestration and nuclear power as well as renewable energy, as advocated in a more recent paper challenging some of the assumptions of the earlier paper, has been vigorously debated recently. As a practical matter, however, it does not seem that the outcome of the debate is likely to have great significance. It is unlikely that, once we reach 80% decarbonization, we will (or should) cease to pursue the goal of 100% renewable energy or that, if we need to continue to create carbon pollution for some time while pursuing the 100% renewable energy goal, we should not use any reasonable method of sequestering the carbon to minimize its damage until the 100% goal can be reached. The objective should in any event be to achieve the 100% goal as quickly as feasible. Even those who urge using 80% decarbonization as the goal acknowledge that 100% renewable energy can be achieved. Despite the widespread skepticism that greeted similar goals in years past, such goals have been achieved ahead of schedule, in large part because of changes in technology that were not foreseen when the high goals were set.
Indeed, we can change the terms of the debate from the “wind, solar and hydro only” restriction in which it is currently bogged down. We can add other forms of renewable energy and enable the achievement of the 100% renewable goal well before 2050, solving a number of problems quickly and inexpensively. For example, California is a leader in renewable energy in part because it is the foremost producer of geothermal energy in the world. Geothermal is important because it is baseload. Hydropower, the other main form of renewable baseload power, is becoming less reliable as climate change worsens. Recently, researchers have found that California’s geothermal resources are much greater than previously realized, not only in volume but also in quality. A significant part of those resources exceeds the critical point of water, both as to temperature (374°C) and as to pressure (221 bar). Supercritical water has dramatically different properties from liquid water. It can, therefore, generate electricity more efficiently and less expensively than current geothermal technology. More importantly, the electricity produced can, because of the temperature and pressure of the resources, be used, with additional geothermal heat, to power electrolysis using a ceramic proton exchange membrane to produce hydrogen efficiently and inexpensively. When intermittent wind and solar power need to be balanced, the electrolysis can be curtailed in a matter of seconds, and the electricity can be transmitted to balance the grid. When not needed to balance the grid, large amounts of hydrogen can be produced to replace fossil fuels currently used for transportation. In addition, the natural gas-fired generators needed to balance the grid today can be modified to use hydrogen as a fuel and expand the role of geothermal for balancing the grid. Hydrogen will become the primary mode for storing energy.
This use of supercritical geothermal cogeneration enables the addition of a large amount of geothermal to achieve the goal of 100% renewable energy, and also enables continued rapid expansion of wind and solar power because of geothermal energy’s balancing capability. It will unify the production of electricity and the supply of transportation and other fuels into a single, integrated energy industry. Moreover, while California is a logical place to pursue these developments, they can be used worldwide. The primary source of supercritical geothermal resources, which is the oceanic rift zone, stretches 60,000 kilometers around the world.
The Ocean Geothermal Energy Foundation is a nonprofit corporation formed to support research and development into high-enthalpy geothermal resources. We are working to advance supercritical geothermal cogeneration (i.e., using geothermal resources in multiple, coordinated ways) to solve multiple problems. Please see our website at http://www.oceangeothermal.org
Bob Wallace says
” Hydrogen will become the primary mode for storing energy.”
Hydrogen is a very inefficient way to store electricity. Well over half the energy input is lost. Compare to pump-up hydro which can be 85% efficient.
And hydrogen is difficult to store: it requires a lot of volume and, being a tiny molecule, easily escapes. Water, by contrast, we can store in a hole in the ground.
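The efficiency gap can be made concrete. The 85% pumped-hydro figure is from the comment above; the electrolysis and fuel-cell efficiencies below are rough assumptions of mine, not figures from the thread:

```python
# Round-trip efficiency of a power-to-hydrogen-to-power chain versus
# pumped hydro (assumed component efficiencies).
electrolysis_eff = 0.70   # electricity -> H2 (assumption)
fuel_cell_eff = 0.50      # H2 -> electricity (assumption)

h2_round_trip = electrolysis_eff * fuel_cell_eff   # 0.35
puhs_round_trip = 0.85                             # from the comment

energy_lost_h2 = 1 - h2_round_trip   # 0.65: "well over half" is lost
```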
Paul Bryan says
Agree that H2 per se is a horrible storage or transmission medium for energy. In some cases, though, such as where land is too flat for pumped hydro, or where natural gas can be produced and CO2 captured and sequestered at very remote sites, hydrogen may effectively be stored and transported in the forms of ammonia or methanol. Ammonia has advantages for power generation or direct use as a fertilizer; methanol (“liquid syngas”) is more useful if liquid fuels or chemicals are preferred end uses. Neither is terribly efficient, and perhaps some new battery technology will eventually emerge as a superior grid-scale storage medium, but both are superior in many ways to H2, and perhaps to any other option for making use of energy that would otherwise be 100% wasted (i.e., “0% efficiency”).
Bob Wallace says
It’s kind of hard to find extensive areas which are totally flat. And don’t have some old subsurface or open pit mines. Or abandoned rock quarries.
And are too far away from hilly places to be reached with HVDC transmission.
It’s looking like batteries will become the ‘short cycle’ technology for time-shifting electricity. They might become affordable enough for storage cycled, on average, once every three days. PuHS can store a lot of energy cheaply by just excavating larger holes in the ground.
Using batteries and PuHS together means that we wouldn’t need enough PuHS power to cover high demand hours. When an extended period of low wind/solar is getting underway just start running the PuHS ~24 hours a day. During low demand hours recharge the batteries and then the batteries + PuHS output can meet peak loads.
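The idea of undersized PuHS running flat-out while batteries shave the peaks can be sketched as a toy dispatch; all the numbers below are invented for illustration.

```python
# PuHS runs at full power every hour during a low wind/solar stretch;
# batteries absorb the surplus in low-demand hours and discharge to
# cover the peak, so PuHS power can sit below peak demand.
demand = [60, 50, 90, 120, 80]   # MW by hour (made-up)
puhs_power = 80                  # MW, deliberately below the 120 MW peak

battery_mwh = 0.0
for d in demand:
    if d <= puhs_power:
        battery_mwh += puhs_power - d   # charge from surplus PuHS output
    else:
        battery_mwh -= d - puhs_power   # discharge to top up the peak
    assert battery_mwh >= 0, "battery ran dry"

# Every hour was served even though PuHS power < peak demand.
```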
Helmut Frik says
Even if there is no hill, you can still build a PHS like this one: https://de.wikipedia.org/wiki/Pumpspeicherkraftwerk_Geesthacht in case there is plenty of low-cost space.
Or you build it directly into the ocean: http://www.spiegel.de/wissenschaft/technik/belgien-plant-kuenstliche-nordsee-insel-als-pumpspeicher-a-1041411.html That test model is economically uninteresting, but the effort to build such a ring rises linearly with its diameter while the stored energy rises with the square of the diameter. So if someone really needed some TWh of storage, it would in theory be possible to build one 60-100 km in diameter in the North Sea, maybe connected with the island TenneT wants to build there anyway. The amount of material moved would be in the same range as what German lignite mines move in 2 or 3 years today. So it’s damned big, but far from impossible.
IF someone wants to have a storage like this.
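Helmut's scaling claim checks out geometrically: a ring wall's length (and hence, roughly, the building effort) grows linearly with the diameter, while the enclosed area, and with it the stored energy at a given head, grows with the square.

```python
import math

def wall_length_km(diameter_km):
    """Ring-wall length: the circumference of the enclosure."""
    return math.pi * diameter_km

def relative_energy(diameter_km):
    """Enclosed area; stored energy at fixed head is proportional to it."""
    return math.pi * (diameter_km / 2) ** 2

# Doubling the diameter doubles the wall but quadruples the energy:
wall_ratio = wall_length_km(100) / wall_length_km(50)       # 2.0
energy_ratio = relative_energy(100) / relative_energy(50)   # 4.0
```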
Paul Bryan says
Yes, gravity is pretty well understood. But in, say, Kansas, even though it is not completely flat, to get enough elevation change for such a scheme to work, you need either a very deep hole or a very long pipe. If the “rise over run” for the pipe is too small, you wind up losing too much energy in frictional losses pushing the water uphill to store energy and then letting it downhill to recover it. And digging a hole large enough and deep enough for practical energy storage is not itself a practical idea.
In other places, there are perfectly good elevated spots available, but “Whoa!” someone has built a house there. Maybe several houses, with nice views enjoyed by the lawyers and rich friends of elected officials who live there. And guess what? Suddenly, that nice elevated spot is no longer available for a pumped-hydro pond the size of Lake Wobegon.
Don’t get me wrong, I am a big fan of pumped hydro, lakes in general, and gravity itself. But it simply is not an economically viable solution in every location where we’d like to be able to store off-peak renewable energy. In the U.S., some of the best regions for wind energy tend to be rather flat, as do some of the best regions for solar energy, and many are arid as well.
And if you don’t know lawyers and their friends in elected office . . . Welcome to California! A very ambitious 2013 proposal from the California Public Utilities Commission about grid-scale storage “. . . specifically excludes pumped hydro storage projects of 50 megawatts or more — the only storage technology with a proven track record of cost-effective, long term operation against natural gas-fired peaker plants — from being considered for the mandate” (and yes, I know megawatts are a RATE, not a CAPACITY, but apparently the CPUC does not). With California’s coastal hills and the Sierra Nevada range, combined with its large, energy-hungry population, it should be ideal for pumped hydro, but to quote from a movie about another man who tried to stick his nose into California water issues : “Forget it, Jake, it’s Chinatown.”
Here in California, we mock the orange-haired idiot in the White House, as well we should, but it turns out we can nearly match his prodigious ignorance in many cases. Sigh!
Bob Wallace says
Kansas is surrounded by states with plenty of mountains. Kansas is selling wind-electricity into other states. It can purchase stored electricity back from those states as needed. Wires can carry electricity in both directions.
Kansas also has abandoned open pit and subsurface mines, but I do not know if any are usable for storage. Apparently some of the abandoned open pit coal mines are 100 feet deep. That’s a good start on a storage facility. And, apparently that area is pretty messed up by the mining that happened early last century.
“I know megawatts are a RATE, not a CAPACITY, but apparently the CPUC does not”
The CPUC understands the difference between rate and capacity very well, as does the entire storage industry. The main consideration for large scale storage has been power (energy per unit time): if a large thermal plant suddenly goes offline, how capable is the storage facility of keeping grid supply up?
You will commonly see storage talked about in terms of MW, not MWh because the main consideration is how rapidly can the facility dump stored energy onto the grid. If you look at the DOE database for PuHS they list Power (in kW) for facilities. They don’t even have Duration (how one turns MW into MWh) data for all the world’s PuHS facilities.
Finally, yes projects can run into local resistance. Sometimes the projects are canceled or relocated. Sometimes the projects are built over local objections.
I give you the US Interstate Highway System for your consideration.
Paul Bryan says
Wow! If you want to “give me” the Interstate Highway System, then give me President Eisenhower and give me 1958, too. I’m afraid our country has changed a lot since 1958, and (mostly) not at all for the better. If you think that pumped hydro on a massive scale in the Rockies will be a practical way to store Midwestern wind energy in technical, economic, OR political terms, then you and I are living in worlds even further apart than 1958 and 2017.