“Shape-Shifting” Load Curves


Dom Geraghty

Sub-Title: Achieving a “Net Zero” or “Net-Negative” Capacity Build

In previous dialogs we have analyzed and forecast the capital investment requirements of the power system through 2030 ($400 billion for SG applications and $1.6 trillion for power system infrastructure), and presented strategies for reducing those costs such as the 80/20 rule for SG applications deployment, and the potential for SG asset management applications to optimize the replacement of aging infrastructure.

We have also written about the shift to a “net” load forecast for dispatching decisions, and the increasing uncertainty of the “net” load forecast as self-optimizing customers make autonomous decisions about local energy use and distributed resource activation.

While replacement of failing aged equipment can’t be avoided, what about capacity needed for demand growth?  A major longer-term benefit of certain SG applications is the deferral of capital expenditures for new power plants through better use of existing plants and equipment.

How does that work? Many of us who have worked in the power sector look at the average utilization rates of 50% - 65% for existing capacity (generation, transmission, and distribution) and, even allowing for peak-to-average demand ratios, can’t help but think that the utilization rates could be increased, maybe substantially.

Could we even achieve a “net-zero” capacity build for the next decade or two with the help of SG applications? Could we reduce the amount of replacement required with a “net-negative” capacity build? What would the implications be for the current utility business model?

The New Load Curve

Dispatchers no longer dispatch to the traditional load forecast, i.e., the gross demand of the customer; they use the “net” load forecast: the load that the utility sees at the point of common coupling.

The utility or ISO can itself shape/manage the “net” load through dispatchable demand response, distributed generation (DG), distributed storage (DS), and Virtual Power Plant programs.

However, self-optimizing customers will shape their own load curve autonomously (with little to no visibility to the utility or ISO) by dispatching DG or DS, programming price-responsive devices (including high-speed M2M control loops), and/or charging EVs.
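To make the distinction concrete, here is a minimal sketch (Python) of how a gross load series becomes a “net” load series once behind-the-meter PV, storage dispatch, DR, and EV charging are netted out. All of the hourly profiles and numbers below are hypothetical illustrations, not data from this post:

```python
import numpy as np

# Purely illustrative hourly profiles (kW) for one feeder over a day --
# none of these numbers come from the post.
hours = np.arange(24)
gross_load = 700 + 300 * np.exp(-((hours - 18) ** 2) / 18)                  # evening-peaking gross demand
distributed_pv = 350 * np.clip(np.sin((hours - 6) * np.pi / 12), 0, None)   # midday behind-the-meter PV
storage_dispatch = np.where((hours >= 17) & (hours <= 21), 100, 0)          # evening battery discharge
dr_curtailment = np.where(hours == 18, 50, 0)                               # a demand response event
ev_charging = np.where((hours >= 22) | (hours <= 2), 150, 0)                # overnight EV charging

# The "net" load is what the utility sees at the point of common coupling.
net_load = gross_load - distributed_pv - storage_dispatch - dr_curtailment + ev_charging

print(f"Gross peak: {gross_load.max():.0f} kW at hour {gross_load.argmax()}")
print(f"Net peak:   {net_load.max():.0f} kW at hour {net_load.argmax()}")
```

Even in this toy example, the net peak moves later in the day than the gross peak, which is exactly the kind of shape-shifting discussed below.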

The uncertainty of the “net” load forecast will be increasing due to the unknown price-elastic behavior of customers in dynamic pricing regimes and the intermittency of distributed PV generation.

The transactive energy concept aims to create dynamic equilibrium in an integrated power market for all participants, including end-use customers, where prices clear at all nodes in the power system simultaneously (or close to real-time), resulting in a continuous matching of power supply and demand. We are very far from that capability at present (both technologically and policy-wise), and some less optimal but possibly more practical approaches may intervene, such as an orderly backlog management/queuing approach based on traffic engineering (see here).

By How Much Can Future Gross and Net Load Curves Differ?

It appears that the difference can be large, based on some fairly realistic scenarios, the most well-known of which is the CA-ISO “Duck Curve” – see graphic below. It depicts the net load curve that the CA-ISO will see after subtracting all of the planned PV and wind generation (centralized and distributed) from the “gross” load curve. Note that the “shape-shifting” is not technology-driven or economics-driven -- it is purely policy-driven, a result of the RPS mandate. No need to dwell here on the implications of the “Duck Curve” – they’re covered very well elsewhere -- primarily the need for more flexible capacity, intelligent grid capabilities (per CAISO), and load dispatch mechanisms.

[Graphic: CA-ISO “Duck Curve” – projected net load]

If this type of “net” load reflects the future displacement of the peaks and troughs of demand, we will need to re-think our approach to the design of time-of-use and other dynamic pricing regimes – traditional rate designs would likely exacerbate the situation.

It is clear that if we do nothing else, we will be going in an undesirable direction in terms of asset utilization by increasing our peak to average load ratio, and requiring base load plants to run inefficiently at partial capacity.

So What Does This Mean for Asset Utilization Rates And What Are Our Remedies?

To be thorough, we would need to look at the utilization rates of generation, transmission and distribution assets separately. Here, we will confine ourselves to examining generation capacity.

Since we are looking at long-lived assets, we’d prefer to characterize asset utilization rates on an annual basis, using traditional (and non-traditional) Load Duration Curves (LDCs) – see diagram below. In terms of asset management, these provide a more useful look than the snap-shot of a daily load curve. Here’s how to interpret an LDC: pick any load level on the curve -- the corresponding point gives the number of hours per year that the system load equals or exceeds that level.

In the LDC representation (see below), asset utilization improves when (1) the area under the curve increases as a percentage of the total area, (2) the left-hand peak decreases, and (3) the middle portion of the curve broadens (i.e., a “flatter” curve). Traditionally, utilization factors have been in the ball-park of ~55%.
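For readers who want to experiment with the idea, here is a minimal sketch (Python) that builds an LDC from an hourly load series and computes the utilization factor as the area under the LDC divided by the peak-load rectangle. The load series is synthetic and purely illustrative:

```python
import numpy as np

def load_duration_curve(hourly_load):
    """Sort hourly loads from highest to lowest; point i of the result is
    the load exceeded for roughly i hours of the year."""
    return np.sort(hourly_load)[::-1]

def utilization_factor(hourly_load):
    """Area under the LDC as a fraction of the peak-load rectangle --
    equivalently, average load divided by peak load."""
    return hourly_load.mean() / hourly_load.max()

# Synthetic year of hourly loads (GW) -- illustrative only, not actual system data.
rng = np.random.default_rng(0)
hours = np.arange(8760)
hourly_load = 60 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, 8760)

ldc = load_duration_curve(hourly_load)
print(f"Peak load: {ldc[0]:.1f} GW")
print(f"Utilization factor: {utilization_factor(hourly_load):.0%}")
```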

We will make the case that to improve asset utilization in the current energy policy and regulatory climate, SG applications deployment is a “must”.

Below we present conceptual (not to scale) LDCs – a traditional LDC (gross load), and non-traditional LDCs (“net” load) where only distributed renewables have been netted out. The LDC on the left reflects a traditional LDC for today’s power system. In the future, traditional LDCs will be a historical artifact. The Net LDC” in the middle represents the impact of policy initiatives enabling customer generation, dynamic pricing, smart appliances, and DR, based on aggregated “net” load. The Net LDC* on the right shows the potential impacts of SG applications introduced to increase asset utilization and operating efficiency.

We will use the areas under the curves as proxies for asset utilization rate. To simplify our analysis, we assume that no capacity is built except to replace failing infrastructure, or to create additional reserves to maintain system reliability as the proportion of variable generation increases.

[Diagram: conceptual Load Duration Curves – traditional LDC, Net LDC”, and Net LDC*]

On the left, the generation asset utilization rate = A/(A+B+C). Today, it might average 55% - 60% in the U.S.  Target capacity margin might be about 15%.

In the middle future scenario, the “Duck Curve” effect has occurred, albeit smaller than for the CAISO curve above, because we are taking only distributed PV into account in creating this Net LDC”. However, broad implementation of dynamic pricing policies is assumed to have occurred over an extended period of time, and it adds to the distortion of the Net LDC” relative to the traditional LDC on the left. Asset utilization falls (= A”/(A”+B”+C”+R), where R = the additional reserves added to offset anticipated intermittency). The annual peak increases. The middle part of the LDC narrows. Capacity margin (C”+R) increases.

For the middle Net LDC”, the peak-to-average demand ratio has increased substantially. Capacity margin has also increased markedly – in this scenario, planners did not curtail the addition of reserves because, while they anticipated that SG applications would eventually increase/optimize asset utilization factors, they expected that implementation to lag the policy-driven changes in load shape. Of course, some of these outcomes will have been anticipated by system planners and capacity plans adjusted accordingly.

The major challenge for us today is that energy policy is being implemented well ahead of:

  1. SG applications-enabling policies (such as RTP, standardization/interoperability requirements)
  2. Policies to enable market rationalization through the integration of wholesale and retail markets (including the ability of analytics to allocate costs on an individual customer basis)
  3. The control systems technology that will be necessary to manage the grid in real-time while maintaining acceptable service reliability

Yes, we are playing catch-up.

Let’s look at the right-hand Net LDC* -- this graphic purports to reflect the results of successful deployments of SG applications (see below) suitable for managing capacity and load in the context of an integrated power market. The utilization factor increases (= A*/(A*+B*+C*+R)). The annual peak decreases. The middle part of the curve widens. Capacity margin increases substantially (C*+R).
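To show the direction of movement implied by the three formulas, here is a rough numerical sketch (Python). The areas are made-up values chosen only for illustration, and the (peak x 8760) rectangle is held fixed across scenarios for simplicity:

```python
# Illustrative areas (arbitrary units) for the three conceptual LDCs above --
# the numbers are assumptions chosen only to show the direction of the effect.
def utilization(energy_area, total_area):
    return energy_area / total_area

A, total = 57, 100    # traditional LDC:   A / (A + B + C)
A2, R = 50, 10        # middle Net LDC":   A" / (A" + B" + C" + R)
A3 = 62               # right Net LDC*:    A* / (A* + B* + C* + R)

print(f'Traditional LDC:   {utilization(A, total):.0%}')        # ~57%
print(f'Net LDC" (middle): {utilization(A2, total + R):.0%}')    # falls to ~45%
print(f'Net LDC* (right):  {utilization(A3, total + R):.0%}')    # recovers to ~56%
```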

As would be expected, the broad enablement of SG applications can create much better outcomes in terms of asset utilization. This broad enablement will require three supporting elements: promulgation of supportive energy and regulatory policies, development and commercialization of interoperability standards, and broad-based advances in power system controls equipment and software.

A “Net Zero” Capacity Scenario

In the future SG-enabled Net LDC*, we see a generous “cushion” of capacity margin. Note that a base assumption for this Net LDC* scenario is that no new capacity has been built to fulfill incremental demand.

This Net LDC* scenario leads one to speculate that the excess margin can be used first to fulfill any increased demand requirements due to load growth and, second, once this is done, to offset some of the replacement requirements arising from failures of aging infrastructure. For a long interim period, it is likely that optimally deployed SG applications will improve asset utilization and reduce power wastage, creating a “net-zero” or even “net-negative” capacity requirement.

So the SG applications in a sense “create” capacity.

How Do SG Applications “Create” Capacity From Existing Assets?

Some examples of SG applications that can “create” capacity include:

  1. Smart sensors: these allow operators to run the system closer to the envelope, reducing the required capacity margin through better data, better communications, better controls, and faster response – e.g., if we increase the U.S. utilization factor by 10 percentage points through the approaches discussed above for Net LDC*, about 116 GW of capacity will be “created” -- at a cost of ~$1,500/kW, we will have deferred about ~$174 billion of new generation capacity (see the worked sketch following this list)
  2. Volt/VAR Management, including CVR: reduce energy waste through the reduction of reactive power and tighter management of voltage levels, reducing the amount of capacity needed to serve loads. Using average numbers for potential VAR and V reductions, we can estimate the capacity saved and the value of this application
  3. Reduction of delivery losses: provide the ability to coordinate and dispatch distributed energy resources (DER) and Virtual Power Plants (VPPs)/micro-grids as part of the dispatch stack, and facilitate DR programs – both DER and DR reduce transmission and distribution losses since they create capacity at the load, eliminating the need to deliver the power. This type of load dispatch/management can be thought of as part of a transition to DSM 2.0. Knowing that total delivery losses are ~8%, we can estimate the capacity saved, and its value, by estimating DG/DR as a percentage of total generation capacity and pro-rating it for the reduction in losses (also illustrated in the sketch following this list)
  4. Reduced SAIDI: SG OMS applications are being applied successfully today to reduce the length of outages through faster outage restoration (resulting in revenue re-capture, reduced penalties, and less unserved energy) plus improved reliability, stability, and security
  5. Preventative maintenance/asset management: use of smart monitoring and data analytics (“big data”) to support plant life extension, i.e., getting more out of existing capacity
  6. Better short-term “net” load forecasting: with better visibility behind the meter through to intermittent DG, M2M appliances, and EV charging, for instance, the ultra-fast collection of real-time data by the SG can increase the accuracy of the net load forecast, enabling tighter management of capacity and reduced reserves requirements, and improved power market protocols
  7. Improved distribution simulation models: smart sensors can be used to upgrade state estimation tools to near-real-time, providing the ability to maintain service reliability, security, and stability with less capacity margin; in conjunction with the sensors, the improved distribution models can then be used to locate SG applications at the highest value locations, reducing SG capex requirements
  8. More efficient market structures: SG data-related applications support more granular cost of service allocations, dynamic pricing programs, integration of wholesale and retail markets, monitoring of price-driven M2M appliances, provide data for improving power trading efficiency, and facilitate coordination between the physical power system and power markets
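Items 1 and 3 lend themselves to a quick back-of-the-envelope check. Here is a minimal sketch (Python) of that arithmetic; the ~1,160 GW installed-capacity figure is inferred from the 116 GW quoted in item 1, and the DER/DR penetration used for item 3 is a hypothetical placeholder:

```python
# Back-of-the-envelope arithmetic for items 1 and 3 above. The installed-capacity
# figure is an assumption implied by the post's "10-point improvement ~ 116 GW";
# the DER/DR penetration is hypothetical and for illustration only.

installed_capacity_gw = 1160      # implied U.S. installed generation capacity
utilization_gain = 0.10           # ten-percentage-point improvement (item 1)
cost_per_kw = 1500                # ~$1,500/kW for new generation capacity

created_gw = installed_capacity_gw * utilization_gain
deferred_capex_bn = created_gw * 1e6 * cost_per_kw / 1e9
print(f"Capacity 'created': ~{created_gw:.0f} GW")                      # ~116 GW
print(f"Deferred new-build capex: ~${deferred_capex_bn:.0f} billion")   # ~$174 billion

# Item 3: DER/DR sited at the load avoids ~8% delivery losses on the capacity it displaces.
delivery_losses = 0.08
der_dr_capacity_gw = 50           # hypothetical DER/DR penetration
loss_capacity_avoided_gw = der_dr_capacity_gw * delivery_losses
print(f"Additional capacity avoided via reduced losses: ~{loss_capacity_avoided_gw:.0f} GW")
```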

Net LDCs for Transmission and Distribution Systems?

The same type of analysis as above should be possible for improving utilization factors of T&D assets by constructing Net LDCs for each of these asset classes. These could be used to address congestion in transmission systems and stressed circuits in distribution systems. SG applications (e.g., dynamic line rating) can increase transfer capacity in congested locations, and smart sensors, coupled with improved distribution models, can be used to identify and assess distribution system stresses. Question: has anyone come across a reference or analysis of this type for T&D assets?

Parting Thoughts

“Creating” capacity -- sure, that’s great -- but there is an enormous amount of work to be done to implement SG applications that increase utilization factors by 5 - 10 percentage points. The work involves SG R&D, field demonstrations, standards development, testing, and certification, as well as energy and regulatory policy changes.

And SG applications are not free, but they cost far less per kW for “created” capacity than new power plants.

Comments, as always, welcome and appreciated.

 

One thought on "“Shape-Shifting” Load Curves"

  1. John Powers

    Great stuff as always, Dom. Load Duration analysis is an important tool in comparing the relative value of various options to address issues like the infamous Duck Curve.
    The notion that SG applications can “create capacity” is an important one, and one that needs to sink in more deeply at all the ISOs, not just California.
    A couple of other concepts should be considered as well. First, there is nothing cast in stone about “traditional” TOU rate structures. There is no reason that peak pricing periods could not be redefined to address a later net load peak.
    Second, a portion of the Duck Curve “problem” comes from the practice of mounting solar panels on south-facing roofs, where maximum energy production takes place too early in the day to address most high-load hours. West-facing solar panels produce about the same amount of total energy (depending on the specifics of local climate), but produce far greater output during the later afternoon hours of concern to the California ISO. See, for example, http://www.greentechmedia.com/articles/read/are-solar-panels-facing-the-wrong-direction. A mix of South- and West-facing solar would almost certainly provide a more valuable mix of benefits in California than South-facing alone.
    Both of these changes have essentially zero cost (no new hardware, no new technology breakthroughs, no preposterously expensive batteries, no multi-gigawatt gas-turbine-building binge), yet each could address important pieces of the issue.
    Just my two cents; other viewpoints welcome of course!
    Keep up the great work,
    John

