A giant model of the entire US electricity sector, one that captures distributed energy resource (DER) potential in detail, has been getting a lot of attention. It estimates that distributed solar and storage deployed at scale (enough to power more than 25% of US homes) could save $473bn in system-wide costs. Rooftop solar is much more expensive per megawatt than grid-scale generation, but its location (on your own roof) can avoid a range of costly transmission and distribution investments. The sophisticated model digs deep, including estimates of how load and demand will change as the system grows. But Meredith Fowlie at UC Berkeley’s Energy Institute at Haas explains that the model, although valuable and impressive, makes assumptions that underestimate DER costs and presumes optimal deployment. In the real world, short planning horizons, the need for contingency planning, and the transaction costs of identifying and incentivising the most promising investments are likely to mean that cost-effective DER projects are fewer than the model assumes.
Policy makers, power companies, and a majority of Americans are coming around to the idea that the U.S. needs to accelerate its efforts to green the grid. Getting on the renewable energy train is one thing. Agreeing on how and where to ride this train is more complicated.
Discussions about the right mix of grid-scale and distribution-scale resources are getting polarised. At the crux are tricky questions about how distributed energy resources – such as rooftop solar and storage – will impact future grid operations and costs. We’ve blogged (and blogged) about what we know and don’t know about distributed generation benefits. But we’ve yet to dig into this high-profile work by Christopher Clack and co-authors at Vibrant Clean Energy (VCE).
Can Distributed Solar save $473bn in system-wide costs?
Clack et al. have built a giant model of the entire US electricity sector which captures distributed energy resource potential in some detail. Their national study found that building a lot more distributed solar and storage (enough to power more than 25% of US homes) would save $473 billion in system-wide costs. Last week, VCE released their California-focused study, which estimates that distributed solar + storage could save California ratepayers $120 billion over the next 30 years.
Energy pundits have been swooning over the high-powered modelling and the provocative punchlines. Distributed solar proponents want to take these findings and run with them. I can see that this modelling exercise is exciting – who doesn’t get excited about trillions of data points? But I also think some of the model’s key assumptions could significantly overstate the real-world cost savings potential.
Including all the costs
Grid-scale solar technology costs significantly less per megawatt than rooftop or community-scale solar technology. That’s a fact. But it’s also true that siting distributed energy resources (DERs) close to electricity demand could help us avoid expensive investments in local transmission and distribution. If we want to measure the full value of distributed solar, we need to assess this potential for reduced grid costs.
Before diving into a big and complicated model, let’s break down some fundamentals.
The first step of a forward-looking analysis involves forecasting how local demand patterns will change as we start to electrify more stuff.
Next we need to identify where and how grid system constraints will start to bind as electrification drives up electricity consumption. This isn’t easy, because there’s lots of variation across the system in terms of where there’s grid capacity to absorb new loads (as this cool map of my neighbourhood shows).
Finally, across locations on the grid where capacity is projected to get tight, we need to figure out where it could make economic sense to invest in distributed energy resources (e.g. solar plus storage) versus centralised generation and grid upgrades. This requires estimating deferrable distribution costs (which are notoriously hard to pin down).
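The economic screen at the heart of this last step can be sketched very simply. To be clear, everything below is a placeholder of my own, not anything from the VCE model: a hypothetical location passes the screen if serving incremental load with DERs costs less than the grid-side alternative, once you credit DERs with the distribution spending they would defer.

```python
# Hypothetical screening rule for one grid location (illustrative only;
# none of these numbers or names come from the VCE model).

def der_passes_screen(der_cost: float,
                      central_gen_cost: float,
                      deferrable_dist_cost: float) -> bool:
    """DERs make economic sense here if they undercut centralised
    generation plus the distribution investment they would defer."""
    return der_cost < central_gen_cost + deferrable_dist_cost

# A location where $180/MWh DERs beat $120/MWh central generation only
# because they also defer $80/MWh-equivalent of distribution spending:
passes = der_passes_screen(der_cost=180.0,
                           central_gen_cost=120.0,
                           deferrable_dist_cost=80.0)
```

Note that the whole decision turns on the deferrable-distribution-cost term, which is exactly the number that is hardest to pin down.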
How do you carry out this kind of exercise for many thousands of locations across the US grid? This is the challenge that the VCE team set out to tackle…
Digging into distribution cost modelling
This VCE model is impressive along many dimensions. Some of these are too technical for this non-engineer to really understand. But the part that I’ve been most interested in involves as much economics as engineering: How to estimate the grid costs that could be avoided with DER deployment?
[Wonk-alert]: This section digs into the details of the deferrable distribution cost modelling. These are important weeds to wade into! But if you just want to cut to the chase, skip ahead to the next section.
Ideally, the VCE model would incorporate location-specific information about distribution system constraints and costs to understand how DER benefits could vary across the US electricity system. But this detailed information is not readily available. So instead, the model opts for a much coarser approach.
To calibrate the model of distribution costs, the VCE team uses cost parameter estimates from this UT Austin study, which analyses annual distribution system spending reported by US utilities between 1994 and 2014. It’s important to note that these UT Austin researchers were not trying to disentangle the causal effect of one cost driver (e.g. peak load) from another (e.g. annual electricity consumption). Their report summarises average univariate relationships between utility reported distribution costs and each cost driver. So, for example, when they summarise how utility distribution costs vary with kW peak demand, their estimate captures not just the impacts of increasing peak demand, but also the effects of factors that are positively correlated with peak load (such as higher annual kWh consumption).
Getting back to the VCE model (Section 1.9.2 of the technical appendix for you dive deepers), the distribution cost implications of different load profiles are estimated as the sum of two parts. The first component multiplies peak load on the system in a given location by the cost parameter from the UT Austin study that captures the average relationship between distribution costs and peak demand. The second part multiplies annual grid electricity demand on the system by the cost parameter that captures average relationships between distribution costs and utility distribution electricity consumption.
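The two-part calculation described above is simple enough to sketch in a few lines. The parameter values here are placeholders I made up for illustration, not the UT Austin estimates or anything in the VCE technical appendix:

```python
# Sketch of the two-part distribution cost estimate described above.
# The parameter values are placeholders, NOT the UT Austin estimates.

COST_PER_KW_PEAK = 60.0   # hypothetical $ per kW-year of peak demand
COST_PER_KWH = 0.01       # hypothetical $ per kWh of annual consumption

def annual_distribution_cost(peak_load_kw: float,
                             annual_demand_kwh: float) -> float:
    """Sum of a peak-driven component and a volume-driven component."""
    peak_component = peak_load_kw * COST_PER_KW_PEAK
    volume_component = annual_demand_kwh * COST_PER_KWH
    return peak_component + volume_component

# Example: a location with 10 MW of peak load and 40 GWh of annual demand.
cost = annual_distribution_cost(10_000, 40_000_000)
```

Because each component applies a single nationwide parameter, any two locations with the same peak and annual load are assigned the same distribution cost, whatever their actual constraints.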
The VCE approach seems like a reasonable way to get the distribution cost model up and running. But it’s far from ideal:
- It assumes that all load reductions deliver the same cost savings, regardless of how constrained – or not – the system is likely to be in a particular location. This abstracts away from significant variation in cost deferral potential across locations.
- It uses average cost parameters to estimate marginal distribution system cost changes. These can be very different (my guess is that average cost exceeds marginal).
- Each of the two cost parameters from the UT Austin study captures the combined effects of correlated distribution cost drivers. It seems to me that adding the two components together – one that implicates peak load and one that implicates annual load – will over-estimate the cost implications of demand increases (and exaggerate the benefits of using DERs to offset an increase).
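That last concern about double-counting can be made concrete with a toy example. The data below are synthetic (not the UT Austin numbers): I assume a true cost relationship, make the two drivers move in lockstep, estimate each univariate slope the way a simple regression would, and then show that summing the two implied effects counts the same cost increase twice.

```python
# Toy illustration (synthetic data, not the UT Austin estimates) of how
# summing two univariate cost parameters double-counts correlated drivers.

def ols_slope(x, y):
    """Univariate OLS slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Suppose the true relationship is: cost = 2 * peak_kw + 1 * annual_mwh,
# and the two drivers move together across utilities (annual = 5 * peak).
peak_kw = [10, 20, 30, 40, 50]
annual_mwh = [5 * p for p in peak_kw]
cost = [2 * p + 1 * a for p, a in zip(peak_kw, annual_mwh)]

b_peak = ols_slope(peak_kw, cost)    # absorbs the correlated kWh effect
b_mwh = ols_slope(annual_mwh, cost)  # absorbs the correlated peak effect

# True cost of 1 extra kW of peak plus its associated 5 MWh of consumption:
true_delta = 2 * 1 + 1 * 5           # = 7
# Summing the two univariate parameters counts that same cost twice:
modeled_delta = b_peak * 1 + b_mwh * 5
```

In this stylised case the two-part sum comes out at exactly double the true cost impact. Reality won’t be that clean, but the direction of the bias is the point: each univariate parameter already soaks up part of the other driver’s effect, so adding them overstates costs and, by extension, DER savings.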
If we’re serious about assessing the potential for DER benefits, we need a better understanding of where DERs can offer real grid cost savings. Fortunately, one of Berkeley’s super-star graduate students is hard at work on this question…but I’ll leave that for a future blog.
Optimal versus actual DER deployment
It’s one thing to figure out how investments in distributed solar and storage could be optimally deployed to minimise costs. It’s another thing to make these distributed investments happen when and where we need them.
The VCE model is projecting savings under optimal deployment. So far, our track record with distributed generation deployment has been anything but optimal. Net metering incentives, for example, are available everywhere in California, regardless of whether there’s any potential for grid system benefits.
California has been trying for years to direct DER investments towards high value locations. This blog series provides a historical overview. An important lesson learned so far? The short planning horizon for distribution investments, the need for contingency planning, and the transaction costs associated with identifying and incentivising the most promising investments “leave a very narrow Goldilocks zone for procurement” of cost-effective DER projects.
Keeping it real
Wildfires, heat waves, a mega-drought, yikes. We’re getting daily reminders that we need to get our climate change mitigation act together. There’s climate urgency behind efforts to accelerate investment in renewable energy. There’s also a social obligation to keep costs contained and electricity rates affordable.
The VCE model is impressive along several dimensions. But there are some critical assumptions and blind spots that complicate the translation of optimistic findings to real-world policy priorities or prescriptions. I think we need a reality check on the deferred grid investment estimates. And we need to reckon with the fact that real DER deployment will likely be very different from the optimal DER deployment that the modelling assumes. That’s my take. Interested to hear what our blog-reader-brain-trust has to say…
Meredith Fowlie is an Associate Professor in the Department of Agricultural and Resource Economics at UC Berkeley. She is also a research associate at UC Berkeley’s Energy Institute at Haas and the National Bureau of Economic Research.
This article is published with permission.
Keep up with Energy Institute blogs, research, and events on Twitter @energyathaas