Smart Sensors for the SG: You Can’t Manage What You Don’t Measure

Dom Geraghty

An Integrated Measurement Strategy for the SG

Obviously, since the SG is inanimate, we don't expect it to intuit how to do "smart" things by itself (!). We have to provide it with data and with analytical rules. Even the AI-based algorithms recently introduced in some SG applications must still derive their rules from "learnings" based on empirical data.

At present, the deployment of SG applications can be characterized as "tactical" -- uncoordinated with other activities in the SG and special-purpose in nature -- certainly not following the holistic, long-term visions of SG architectures and power markets developed by such entities as GWAC, NIST, EPRI, IEC, IEEE, SGIP, etc. The result is a hodge-podge of application-specific sensors with different capabilities that don't communicate across applications and that operate in different time domains. But it does not need to be that way, as we outline below.

Let’s Define Measurement, Sensors, and Smart Sensors

Smart sensors are the fundamental building blocks for the implementation of a truly "smart" grid. They are an essential part of every SG solution. Conventional analog sensors become the "smart sensors" of the SG when intelligence is added to the measurement function: analog-to-digital conversion, processing power, firmware, communications, and even actuation capability.

We can think of smart sensors as the first link in a four-link SG decision-making chain that consists of:

(1) Location-specific measurement -- sensor function only

(2) Monitoring -- a sensor with one-way communications functionality

(3) Diagnosis at the “edge” -- a sensor with localized diagnostic intelligence based on data analytics and/or centralized diagnosis based on communicated sensor data

(4) Edge-embedded control actions (based on embedded algorithms, including Remedial Action Schemes (RAS)) -- a sensor with intelligence and control/actuator capability. The algorithms for this functionality could also be centralized, using two-way communications with an "edge" sensor/actuator, and/or they could drive peer-to-peer coordination of control actions at the "edge". However, a substantial amount of R&D still needs to be done to develop autonomous real-time or quasi-real-time control algorithms for power distribution systems.
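As an illustrative sketch only -- the class and method names below are hypothetical and not drawn from any SG standard or product -- the four links might be modeled as a single edge device that measures, reports, diagnoses locally, and, when so equipped, acts:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Measurement:
    node_id: str      # location of the sensor in the grid
    quantity: str     # e.g., "voltage_pu", "current", "temperature"
    value: float
    timestamp: float

class SmartSensor:
    """Hypothetical model of the four-link chain: measure -> monitor -> diagnose -> act."""

    def __init__(self, node_id: str, quantity: str,
                 diagnose: Optional[Callable[[List[Measurement]], Optional[str]]] = None,
                 actuate: Optional[Callable[[str], None]] = None):
        self.node_id, self.quantity = node_id, quantity
        self.history: List[Measurement] = []     # local buffer for edge analytics
        self.diagnose, self.actuate = diagnose, actuate

    def measure(self, value: float, timestamp: float) -> Measurement:
        m = Measurement(self.node_id, self.quantity, value, timestamp)   # link 1: measurement
        self.history.append(m)
        return m

    def monitor(self, m: Measurement) -> dict:
        # link 2: one-way telemetry payload sent to a head-end system
        return {"node": m.node_id, "q": m.quantity, "v": m.value, "t": m.timestamp}

    def step(self, value: float, timestamp: float) -> None:
        m = self.measure(value, timestamp)
        self.monitor(m)
        if self.diagnose:                         # link 3: edge diagnosis on local history
            fault = self.diagnose(self.history)
            if fault and self.actuate:            # link 4: embedded control action
                self.actuate(fault)

# Example: flag and act on a voltage sag below 0.9 p.u. at a hypothetical feeder node
sensor = SmartSensor(
    "feeder-12/node-7", "voltage_pu",
    diagnose=lambda hist: "undervoltage" if hist[-1].value < 0.9 else None,
    actuate=lambda fault: print(f"RAS action triggered: {fault}"),
)
sensor.step(0.87, timestamp=0.0)
```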

To Date, Smart Sensor-Based Measurement in the SG Has Been “Tactical”

Granted, as we've said before, there is a reason for this tactical approach to sensor deployment -- up to now, the choice of SG projects has been driven by energy and regulatory policies and rules that target a limited set of SG applications. Fair enough -- none of us expects the evolution of the SG to follow a "grand deployment plan"; it will be imperfect, following the zigs and zags of these real-world drivers.


“Shape-Shifting” Load Curves

Dom Geraghty

Sub-Title: Achieving a “Net Zero” or “Net-Negative” Capacity Build

In previous dialogs we have analyzed and forecast the capital investment requirements of the power system through 2030 ($400 billion for SG applications and $1.6 trillion for power system infrastructure), and presented strategies for reducing those costs such as the 80/20 rule for SG applications deployment, and the potential for SG asset management applications to optimize the replacement of aging infrastructure.

We have also written about the shift to a “net” load forecast for dispatching decisions, and the increasing uncertainty of the “net” load forecast as self-optimizing customers make autonomous decisions about local energy use and distributed resource activation.

While replacement of failing aged equipment can’t be avoided, what about capacity needed for demand growth?  A major longer-term benefit of certain SG applications is the deferral of capital expenditures for new power plants through better use of existing plants and equipment.

How does that work? Many of us who have worked in the power sector look at the average utilization rates of 50%-65% for existing capacity (generation, transmission, and distribution) and, even allowing for peak-to-average demand ratios, can't help but think that the utilization rates could be increased, maybe substantially.

Could we even achieve a “net-zero” capacity build for the next decade or two with the help of SG applications? Could we reduce the amount of replacement required with a “net-negative” capacity build? What would the implications be for the current utility business model?

The New Load Curve

Dispatchers no longer dispatch to the traditional load forecast, i.e., the gross demand of the customer; they use the "net" load forecast: the load that the utility sees at the point of common coupling.

The utility or ISO can itself shape/manage the “net” load through dispatchable demand response, distributed generation (DG), distributed storage (DS), and Virtual Power Plant programs.

However, self-optimizing customers will shape their own load curve autonomously (with little to no visibility to the utility or ISO) by dispatching DG or DS, programming price-responsive devices (including high-speed M2M control loops), and/or charging EVs.
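In its simplest form (a sketch with illustrative variable names and numbers, not a utility forecasting model), the net load the dispatcher sees is the gross demand plus any added load such as EV charging, minus behind-the-meter generation and storage discharge:

```python
def net_load_mw(gross_demand, dg_output, storage_discharge, ev_charging=0.0):
    """Net load at the point of common coupling, in MW (simplified sketch).

    gross_demand      -- native end-use demand
    dg_output         -- behind-the-meter generation (e.g., rooftop PV)
    storage_discharge -- distributed storage discharging to serve local load
    ev_charging       -- additional EV charging load (assumed not already in gross_demand)
    """
    return gross_demand + ev_charging - dg_output - storage_discharge

# Illustrative numbers only: 100 MW native load, 20 MW rooftop PV, 5 MW storage, 8 MW EV charging
print(net_load_mw(100.0, 20.0, 5.0, 8.0))   # -> 83.0 MW visible to the utility
```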

The uncertainty of the "net" load forecast will be increasing due to the unknown price-elastic behavior of customers in dynamic pricing regimes and the intermittency of distributed PV generation.

The transactive energy concept aims to create dynamic equilibrium in an integrated power market for all participants, including end-use customers, where prices clear at all nodes in the power system simultaneously (or close to real-time), resulting in a continuous matching of power supply and demand. We are very far from that capability at present (both technologically and policy-wise), and some less optimal but possibly more practical approaches may intervene, such as an orderly backlog management/queuing approach based on traffic engineering, see here.

By How Much Can Future Gross and Net Load Curves Differ?

The Elephant in the Room: Addressing the Affordability of a Rejuvenated, Smarter Grid

Dom Geraghty

Adding Up All Of The Costs

The cost of transitioning to Smart Grid (SG) 2.0 is high. The cost of replacing aging power system infrastructure is high. The cost of complying with new energy policy and regulations is high. For the most part, all three of these major cost burdens on the electric power system have been analyzed separately. We need to look at the whole picture.

In the tables below, we construct a 30-year cumulative budget and net benefits forecast for the U.S. power system, integrate all three of the above major cost burdens, and suggest a practical cost management strategy that could save as much as $750 billion.

Business-As-Usual Cost

If we take a business-as-usual approach to the replacement of aging power system infrastructure and concurrent deployment of the SG, the 30-year total net costs (costs minus benefits) for the transition to the SG are estimated to be about $1.2 trillion, based on our integration and curation of numerous credible studies -- see here and here. So, despite the considerable benefits generated by the SG, they are dwarfed by the costs of deployment.

Making Electricity Bills More Affordable in the SG Era

The net result is a large and economically debilitating increase in consumers' electricity bills, and questions around the grid operators' ability to maintain present-day levels of service reliability.

We were sufficiently concerned by the current "silo-ed" approach to costs, and by the size of the totaled costs, that we decided to explore an optimized least-cost deployment strategy, the goal of which would be to trade off ubiquitous deployment of the SG against a more "surgical" deployment that optimizes the value of SG applications.

According to our estimates, if we optimize the deployment of SG applications in a least cost deployment strategy, the net costs of this new smart/automated power system can be reduced substantially. In the calculations below, we estimate that the net costs over the 30-year period could be as low as $450 billion, plus or minus some as-yet unaccounted-for costs and savings as presented at the end of this dialog.

Even with a least cost strategy, electricity bills will not be reduced from today's levels (see also here) -- but they will grow at a slower rate, and it would seem that they will be manageable, based on the assumptions below. Note that in all scenarios, there is no avoiding the front-end loading characteristics of the transition to the SG. What we can do, though, is to first deploy those SG applications that have the highest potential for creating near-term cash savings.

Here is a sequence of seven fairly self-explanatory tables that tell the story. Assumptions used in the forecasts are provided after the tables at the end of this dialog. All dollar amounts in the tables below are in nominal $ billions, undiscounted.

Cost of Replacing Aging Infrastructure and Deploying SG 2.0

Tables 1 and 2 below present business-as-usual net cash flows for rejuvenating power system infrastructure and deploying SG 2.0.

Table 1: Budget Forecast for Power System Infrastructure*

Cash flow through each 5-year period (nominal $ billions):

| Item | Years 1-5 | Years 6-10 | Years 11-15 | Years 16-20 | Years 21-25 | Years 26-30 |
|---|---|---|---|---|---|---|
| Power system infrastructure investments -- aging infrastructure replacement + capacity to serve demand growth (G, T, and D) ($1.6 trillion in total) | -267 | -267 | -267 | -267 | -267 | -267 |
| Cyber-security ($22 billion in total) -- also, see here | -4 | -4 | -4 | -4 | -4 | -4 |
| Environmental compliance ($110 billion) -- see also here and here and here | -17 | -17 | -17 | -17 | -17 | -17 |
| Total infrastructure investment by period | -288 | -288 | -288 | -288 | -288 | -288 |
| Grand total (nominal, undiscounted $) | -1728 | | | | | |
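The arithmetic behind Table 1 is simply each stated 30-year total spread evenly across six 5-year periods. A quick check (noting that the tabulated per-period figures are rounded):

```python
# Stated 30-year totals, nominal $ billions (from the text above)
totals = {
    "Power system infrastructure (G, T, and D)": 1600,
    "Cyber-security": 22,
    "Environmental compliance": 110,
}

periods = 6   # six 5-year periods over 30 years

for item, total in totals.items():
    print(f"{item}: about -{total / periods:.0f} per 5-year period")
# -> roughly -267, -4, -18; Table 1 shows -17 for environmental compliance after its own rounding

per_period_total = 267 + 4 + 17                        # as tabulated
print(per_period_total, per_period_total * periods)    # -> 288 per period, 1728 grand total
```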

 

 

 

 

 

Table 2: Budget Forecast for Transitioning to SG 2.0*

Interoperability of Smart Grid (SG) Applications Is Mission-Critical, And Good For Business Too

Dom Geraghty

It is clear to us that energy policy and regulations, not technologies, are the key drivers of the business case for SG applications. These policies/regulations promote, for example, RPS mandates, dynamic pricing, demand response, competitive market structures, self-optimizing customers (e.g., distributed generation and storage, smart appliances, micro-grids), electric vehicles, cyber-security, and data privacy. It is a kind of "policy-push" market, with SG applications in a "catch-up" mode.

In order to implement the new policies and regulations in all of their complexities and not-designed-for impacts on the traditional electricity grid, while still maintaining the current levels of service reliability, stability and security, the grid needs to be smarter, and react faster. We will be operating “closer to the edge”.

The SG is at its core about automation, control, and optimization across the power system operations – both physical and market operations. For example, it comprises smart sensors, intelligent electronic devices, communications systems, M2M interfaces, data analytics, situation awareness and alerts, and control systems.

In its ideal form, the SG is a system of systems that in essence have the potential to optimize power system operations and capacity requirements. To realize this potential, i.e., for the grid to be “smart”, these systems ultimately need to be interoperable since the SG is an interconnected system from generation all the way to end-use of electricity.

The above new policies/regulations are out ahead of the SG in terms of technology, interoperability, and grid operations – the SG is playing “catch-up”. But more importantly, we also need the SG in order to realize the full benefits of these new policies and regulations.

The “catch-up” situation can lead to unintended/undesirable consequences related to the operation and reliability of the power system.

Fortunately, SG applications have the capability, if not yet the readiness, to mitigate these risks, provided they are interoperable.

The Transition to an “Ideal” SG Architecture Will Be Messy -- We Are Going To Feel Uncomfortable

As engineers, we like tidiness. In a perfect world, the transition to a fully-functional SG would be orderly and paced to accommodate new applications while protecting grid integrity: perhaps a three-stage transition -- from today's operations' data silos in utilities to a single common information bus, then to many common, integrated buses, and finally to a converged system.

But in a non-perfect world, i.e., reality, the SG will evolve as a hybrid of legacy and new systems -- it will not be an optimized process – there will not be a “grand plan” – clusters of interoperability will appear here and there across the SG.

The transition will take perhaps 30 years -- not for technology-based reasons, but because the "refresh cycle" for utility assets is lengthy -- so, there's time for a whole career for all of us in deploying SG applications!

A Supply/Demand Curve for “Grid Flexibility” Products – The Price of “Optionality”

Dom Geraghty

Operators of the power system are facing a paradox -- on the one hand, the grid is becoming smarter, which gives operators an extra "edge" in managing it; on the other hand, policy changes associated with the promotion of renewable power and customer choice are creating increased uncertainty in both supply and demand. Yet operators must still deliver power with an average service reliability of 3 x 9s (the LOLP or LOEP target in system planning models), while also maintaining short-term stability and security of supply, even as the inertia of the system decreases.

Supply Uncertainty

Increases in supply uncertainties are caused by the steadily increasing penetration of renewable energy production as a result of RPS mandates. Delivery uncertainties are on the rise due to lagging transmission construction leading to congestion.

Demand Uncertainty

Demand uncertainty is increasing as self-optimizing end-users install distributed energy and storage, smart appliances that use M2M controls, and chargers for EVs, and as they take advantage of time-differentiated pricing.

Electricity dispatch is based on minute-to-minute forecasting and clearing of customers' "net load", that is, the end-use customer's load minus any local power generation or discharge from energy storage devices.

But the utility or system operator usually does not have visibility into, or interconnection with, this generation behind the meter – creating demand uncertainty and an inability to take advantage of the distributed generation in reliability emergencies.

Price Volatility and Price Elasticity-Created Uncertainty

The market as structured also creates short-term operational challenges for the grid operators. Prices in wholesale markets can change rapidly over short periods of time, e.g., 15 minutes, leading to sharp changes in the availability of supply and in the level of demand, driven by price elasticity. These impacts of price volatility, combined with the increased percentage of intermittent resources, create the need for additional fast-acting reserves to maintain the grid operator's target service reliability level.

Managing Uncertainty

How can the power system operator cope with the increased physical- and market-driven uncertainties?

We suggest that a "least cost" coping approach can consist of a combination of (1) investments in the "smart grid" (smart sensors, advanced controls, data analytics), and (2) investments in flexibility products (to be defined below).

Is Service Reliability the Next Business Opportunity?

Dominic Geraghty

As described in our previous dialog, a number of new market factors are stressing utilities’ ability to deliver 3 x 9s reliability.

These factors fall into four categories: (1) new or expanded energy policies and regulations, (2) deployment of SG applications absent reliability-enhancing SG controls, (3) imperfect coordination between electricity market-clearing processes and the physical control processes of the power system, and (4) aging power system infrastructure.

Evolving Energy Policies and Regulations Have the Potential to Negatively Affect Reliability

Utilities dispatch power generation based on a net load forecast, where net load equals the native customer load minus any power generated by (1) a self-optimizing individual customer (e.g., distributed generation or energy storage discharge), (2) an aggregated self-optimizing set of customers, or (3) a micro-grid.

RPS energy policies as well as regulatory policies encouraging DG, EVs, distributed storage, CHP, and micro-grids are having increasingly significant effects on the shape of the net load and on the first derivative of that shape. For example, Mike Niggli, CEO, SDG&E, speaking at Distributech 2013's plenary session, referred to expected load ramp rates in March 2020 of 4,500 MW down in two hours and 12,500 MW up in two hours, on a 25,000 MW system.
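To put those ramp figures in perspective, a back-of-the-envelope calculation using only the numbers quoted above gives roughly 38 MW/min down and 104 MW/min up, i.e., a swing of about half the system's capacity within two hours:

```python
system_mw = 25_000
ramp_down_mw, ramp_up_mw, window_min = 4_500, 12_500, 120   # two-hour ramp window

print(f"Down-ramp: {ramp_down_mw / window_min:.0f} MW/min ({ramp_down_mw / system_mw:.0%} of system in 2 h)")
print(f"Up-ramp:   {ramp_up_mw / window_min:.0f} MW/min ({ramp_up_mw / system_mw:.0%} of system in 2 h)")
# -> about 38 MW/min down (18% of system), about 104 MW/min up (50% of system)
```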

In most cases, the utility does not have visibility into customers’ distributed generation decisions ahead of time. The challenge for the utility is to maintain its target level of service reliability despite the uncertainty associated with the ensuing net load.

To a certain extent, short-term volatility in the net load caused by intermittent generation (distributed PV) may threaten system stability, especially if aggregated. Some utilities have established rules of thumb for the maximum percentage of PV they will allow on a feeder, e.g., 15%. However, it appears that these rules of thumb/heuristics are overly conservative. One private study simulating a typical distribution system found that its feeders, even in low load situations, could tolerate PV capacity of more than 50% of the load when appropriate (and not too complicated) control equipment is put in place.

To decrease the uncertainty in the net load forecast, and to access additional existing capacity next to the load center that can help maintain reliability in tight supply situations, some utilities offer a “virtual power plant (VPP)” program to their customers. For example, ConEd, PGE, CPS Energy/San Antonio, Duke’s Microgrid Program, AEP, and Europe’s FENIX program offer VPP programs of different types.

In some of these VPP programs, the utility interconnects, maintains, and operates the customer-owned generation/demand reduction applications as a bundle of dispatchable capacity, in return for which the utility provides the customer with certain tariff concessions.

Jurisdictions offering dynamic pricing, e.g., TOU, CPP, and RTP, also create uncertainty in the load forecast. Automated customer price responses can produce large, rapid swings in the net load. If the consumer's price response is not automated, i.e., not "smart", the net load forecast uncertainty can likely be reduced over time based on increasingly accurate ("learned") estimates of the price elasticity of customer segments -- it helps that price responses will likely be diversified across the service area.
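As a toy illustration of that "learning" step (the price and demand numbers below are made up, and a real estimate would segment customers and control for weather and season), a constant-elasticity fit is just a log-log regression:

```python
import numpy as np

# Hypothetical observations for one customer segment: price ($/kWh) vs. demand (MWh)
prices  = np.array([0.10, 0.12, 0.15, 0.20, 0.25, 0.30])
demands = np.array([105., 101.,  96.,  90.,  85.,  82.])

# Constant-elasticity model: demand = a * price^e  =>  ln(demand) = ln(a) + e * ln(price)
elasticity, ln_a = np.polyfit(np.log(prices), np.log(demands), 1)
print(f"Estimated price elasticity: {elasticity:.2f}")   # roughly -0.23 for these made-up numbers
```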

To incentivize an acceptable level of service reliability, state regulators in over 50% of states have mandated penalties for SAIDI or CAIDI performance above a predetermined acceptable range, or have instituted service quality mandates with quantitative metrics. The penalties can be costly -- they provide a strong incentive for utilities to install equipment that improves reliability.

Naturally, these equipment costs are subsequently reflected in customers’ bills. However, the solutions simultaneously improve utility asset utilization and can even prolong the lifetime of some utility assets.

Somewhat Surprisingly, Initial Deployment of SG Applications Can Have a Negative Impact on Reliability

While SG applications can help enhance reliability through smart sensors and increased automation, it appears that the initial SG applications could negatively impact system reliability before subsequent D.A. applications provide ameliorating automation, i.e., SG can be first a sword against, and later a shield for, reliability.

Here we will address the negative impacts and follow up below with some business opportunities for SG applications that mitigate these negative effects on reliability.

Providing 99.87% Reliability* Is Going to Cost a Lot More – Are There Related SG 2.0 Business Opportunities?

Dom Geraghty

Historically, utilities have provided about “3 x 9s” reliability. The cost of this reliability is currently bundled into the price of electricity. This includes the cost of maintaining reserves, contingency plans, and automated generation control to cover the stochastic behavior of forced outages and electricity demand.
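As a rough translation of those availability figures into outage time (simple arithmetic only, not a SAIDI calculation), "3 x 9s" and the 99.87% figure in the title correspond to roughly 9 and 11 hours of interruption per year, respectively:

```python
HOURS_PER_YEAR = 8760

for availability in (0.999, 0.9987):    # "3 x 9s" and the 99.87% figure in the title
    outage_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.2%} availability -> about {outage_hours:.1f} hours of outage per year")
# -> 99.90%: ~8.8 h/yr;  99.87%: ~11.4 h/yr
```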

This cost is going up. Why?

Bulk Power Supply Uncertainty Is Affecting Reliability

The implementation of the RPS mandates is increasing the proportion of intermittent power production plants and, as a consequence, decreasing the inertia, i.e., the damping ability, of the power system. As a result, a substantial amount of extra generation reserves and ancillary services is required to cover the increased uncertainty of supply while maintaining "3 x 9s" reliability levels. Recognizing this, most ISO markets trade various reserve and ancillary service products.

The transmission system is becoming more congested and there is widespread resistance to building new transmission lines. As a result, to maintain target levels of reliability and system security, more contingencies and remedial action plans and systems are needed to cover the increased uncertainty of delivery capability. Recognizing this, the ERCOT wholesale market trades month-ahead "congestion revenue right" products.

Real-World Examples of Related Supply-Side Reliability Events

A recent article by Dr. Paul-Frederik Bach, an expert in power system operations, discusses the impact of renewables penetration on the German power grid. "The number of interventions has increased dramatically from 2010-2011 to 2011-2012 ...

Bottlenecks are often detected in local grids. It makes no difference to the owner of a wind turbine if local or national grids are congested ... In an attempt to establish an impression of the extent of interventions in Germany, EON Netz will be used as an example ...

During the first quarter of 2012, EON Netz has issued 257 interventions. The average length was 5.7 hours. Up to 10 interventions have been issued for the same hour. A total of 504 hours had one or more interventions. Thus, there have been interventions active for 23.1 percent of the hours during the first quarter of 2012 ...

The total amount of curtailed energy from wind and CHP is probably modest, but the observations seem to indicate that German grids are frequently loaded to the capacity limits. Strained grids have a higher risk of cascading outages caused by single events."
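The 23.1 percent figure quoted above follows directly from the hour counts; a quick check, assuming a 91-day first quarter:

```python
q1_hours = 91 * 24              # hours in January-March 2012
hours_with_interventions = 504
print(f"{hours_with_interventions / q1_hours:.1%} of Q1 hours had at least one intervention")   # -> 23.1%
```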

Another informative and very detailed analysis of a widespread outage in Europe in 2006 -- one which overloaded power lines and transformers in Poland by 120% and 140%, respectively -- can be found here. It includes a very interesting map of the European interconnected system showing voltage phase angle differences between substations varying from +60° to -50° across the region.

Demand-Side Uncertainty Is Also Affecting Reliability

Traditional power plants provide limited-bandwidth, short-term frequency and voltage control.

However, the power industry does not have closed-loop control between demand and supply.

How Do You Spend $400 Billion? Part V: A Business Opportunity – Identifying the “Top 20%” High-Value Nodes in the Power System

Dominic Geraghty

In Part IV of this dialog series, we presented a Least Cost Strategy for deploying SG 2.0. The present dialog discusses how we identify and screen the most valuable projects within the Least Cost Strategy and how we address the risks and uncertainties inherent in the deployment of these projects.

In all situations, it is the business case that provides the basis for the go/no-go investment decision of an SG 2.0 application in a particular location. As we've said previously, a complete business case will include market, regulatory, power system impact, and other technical considerations, as well as an assessment of the risks/uncertainties. It will also include an assessment of qualitative factors.

The primary prioritization metric for projects is the benefit-to-cost ratio of the business case, subject to meeting hard constraints such as service reliability and environmental compliance.
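As a minimal sketch of that prioritization logic (the project names, dollar figures, and budget below are invented purely for illustration), one can filter out candidates that fail a hard constraint and then rank the rest by benefit-to-cost ratio within the available budget:

```python
# Hypothetical candidate projects: (name, benefit $M, cost $M, meets hard constraints?)
candidates = [
    ("Feeder automation, substation A", 40.0, 10.0, True),
    ("Volt/VAR control, region B",      25.0, 10.0, True),
    ("AMI build-out, district C",       15.0, 14.0, True),
    ("Storage pilot, feeder D",          8.0,  9.0, False),   # fails a hard constraint -> excluded
]

budget = 25.0   # $M available in this planning cycle

eligible = [p for p in candidates if p[3]]                          # hard constraints first
ranked = sorted(eligible, key=lambda p: p[1] / p[2], reverse=True)  # then rank by benefit-to-cost

selected, spent = [], 0.0
for name, benefit, cost, _ in ranked:
    if spent + cost <= budget:
        selected.append(name)
        spent += cost

print(selected, f"spent ${spent}M")
```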

Identifying High-Potential SG 2.0 Opportunities

Developing a business case is a very resource-intensive task. We don't want to do it for every potential SG 2.0 application. Is there an efficient way to develop a first-cut list of the "Top 20%" (most valuable) applications? Yes -- power system operators can speed up the process.

These power system operators, i.e., vertically-integrated utilities, utilities, distribution companies, ISOs, and RTOs, will be the main users of SG 2.0 applications. They are the power system experts. They should be able to develop an initial list of high-potential opportunities for SG 2.0 applications based on their knowledge of stressed system nodes and likely optimal locations.

Project and power system simulation models can then be used to confirm (or not) the value of the applications for these locations. This is a business opportunity for software developers and SG 2.0 vendors.

As part of this identification process, the benefits that an SG 2.0 application may bring to the broader power system need to be considered (see the discussion on power system simulation models below).

Developing Business Cases

Having screened for the most valuable SG 2.0 application opportunities, we then develop detailed business cases for them. The results of those business cases will lead to some re-ranking of the initial list. Certain trade-offs will still be inherent in the process, based in part on the level of risk that decision-makers are willing to tolerate, e.g., their valuation of short-term versus long-term savings.

It is entirely likely that the business cases will exhibit our desired characteristic discussed in Part IV, i.e., conforming to the 80/20 rule, which projects that 20% of the projects will provide 80% of the benefits of SG 2.0 applications. This is a critical element in achieving our "least cost" goal of a significant reduction in cost relative to a less selective, less discriminating deployment approach.

How Do You Spend $400 Billion? Part IV: A Least Cost Strategy for SG 2.0

Dom Geraghty

Given the high costs and risks associated with SG 2.0, we recommended in Part III that SG 2.0 be implemented based on a “Managed Deployment Strategy”.

What is a Managed Deployment Strategy? It comprises:

(1) Picking SG 2.0 applications that provide the best benefit to cost ratio, prioritized within an affordable budget,

(2) Building-in sufficient flexibility to minimize the impact of major future uncertainties, and

(3) All the while maintaining the target level of service reliability.

That is, our Managed Deployment Strategy combines budgeted high-benefit-to-cost-ratio investment commitments (abbreviated to “Least Cost Strategy”) with a risk management strategy.

In this dialog, we look at the Least Cost Strategy component. In the following dialog we will talk about identifying high-value SG 2.0 applications and evaluating their business cases within the context of a risk-managed, Least Cost Strategy.

Developing a Least Cost Strategy for SG 2.0 Deployment

It has been estimated in several studies that a national deployment of SG 2.0 would cost about $400 billion and provide a benefit-to-cost ratio of 3:1, including the value of qualitative benefits (which comprised a substantial portion of the total benefits). The studies assume significant national market penetrations for different SG 2.0 applications.

Some studies of service-area deployments by utilities have been less optimistic about the benefits, suggesting a benefit-to-cost ratio of approximately 1:1. These studies have included fewer, or no, qualitative benefits, which likely accounts for the lower benefit-to-cost ratio. They also assume fairly significant market penetrations for different SG 2.0 applications.

In both the national and the utility cases, the costs of various SG 2.0 applications have been derived from similar data sources. We don’t think that there is a lot of disagreement about the “ball-park” costs of deploying various SG 2.0 applications. However, there is much less consensus about the benefits.

We have suggested that the SG 2.0 costs could be much less than the top-down forecast as a result of the 80%/20% rule, i.e., experience in similar situations indicates that one may be able to obtain perhaps 80% of the benefits of SG 2.0 applications by deploying into about 20% of the power system.

Assuming that this rather convenient rule applies here, we could reduce the investment budget for SG 2.0 deployment substantially below the projected $400 billion above, and yet still derive most of its benefits. We’ll see – it sounds somewhat optimistic, but we do buy into the idea of “surgical” deployment of SG 2.0 applications, i.e., focusing on, and limiting the deployments to, high-benefit situations.

Caveat Emptor!

We've received a warning: the original business cases for AMI have, to date, proven to be weak.

How Do You Invest $400 Billion? – Part III: Planning Assumptions and Trade-Offs

Dominic Geraghty

What Is the Transition to SG 2.0 About, in Reality?

The transition is not just about an orderly, four-stage SG architecture change, driven by logical change-outs of information and communications technology, as presented in Part II. It is important to have thought this through, but it is not enough, and the transition will likely look quite different. This road-map will have detours.

It won’t happen that way because economics will determine in what parts of the system the transition will occur – the changes will take place through “cherry-picking” the highest benefit-cost ratio SG 2.0 applications. Realizing cash benefits will be a priority to offset the high capital requirements of power systems, i.e., to create “avoided costs”.

Neither will the transition be just about architecture and economics -- it will be about the timeliness of supportive regulatory and policy changes. Regulations and policy deeply affect the economic incentives and outcomes of SG 2.0 applications.

The transition will also be mediated by tougher requirements on technology readiness. To date, the value-added applications that were supposed to be provided by AMI installations have had mixed delivery results. SG 2.0 vendors will increasingly be required to demonstrate, and to "prove", their value propositions. Demand for "pilot demonstrations" will increase, and "system acceptance tests" will become more stringent. Certification of functionality and interoperability will become the norm.

And lastly, a successful transition is about end-use customers, who are concerned about the size of their electricity bills and about receiving an adequate level of service reliability -- will the continued polarization of the political processes that govern power system investments create a reliability crunch in stressed locations? Customers have already exhibited hardening resistance to AMI installations, in part because they do not see benefits. Will customers increasingly "self-optimize" by building more DG in light of higher bills and the potential for lower reliability?

To understand how the transition will occur, and at what pace, we need to build in assumptions about all of the above determining factors as we develop corporate and individual business cases for SG 2.0 applications.

Competing for the Capital Required for Utility SG 2.0 Installations