Category Archives: Automation & Control

SG 2.0 is fundamentally about automation and control

In the Smart Grid, Über-Sensors Declare “When = Where”

Dom Geraghty

Transitioning to the power sector’s Smart Grid (SG) involves delivering the full continuum of functionality for SG applications, as follows:

  1. Sensing
  2. Monitoring
  3. Diagnosis
  4. Control
  5. Automation, and
  6. Optimization.

Of these functionalities, most of the SG applications today deliver sensing, monitoring (i.e., sensing plus communication), and some level of diagnosis which may require additional post-processing. The remaining three functionalities -- control, automation, and optimization -- are much more difficult to implement for a variety of reasons, not the least of which is the dearth of algorithms available to support these applications in a grid that is often operating in ways for which it was not originally designed and in electricity markets that are not fully integrated.

The Importance of High-Performance Sensors

All SG applications require some form of sensing – it is a prerequisite functionality upon which all of the other functionalities in the chain of applications depend.

But sensing, while basic on the surface, is not “a walk in the park” for a grid that is undergoing major physical and market structural changes even as it moves towards increasing automation.

Traditionally, we have used static optimal power flow (OPF) models for state estimation of the nodes in the distribution system and for designing power security protection schemes. With the computer processing capability available today, these models have become quite fast, with state recalculations for the entire distribution system now possible in seconds using blade servers. Today a full simulation can be completed well within a typical SCADA cycle. But the results are still “estimates” of the state of the distribution system.
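Underneath such tools, the classical state-estimation step is a weighted-least-squares (WLS) fit of the grid state to the measurements. Here is a minimal sketch on a linearized (DC) measurement model – the matrices and values are purely illustrative, not from any production system:

```python
import numpy as np

# Linearized measurement model: z = H x + noise, where x is the state
# (e.g., bus voltage angles) and z is the measurement vector.
H = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, 1.0]])            # illustrative measurement Jacobian
z = np.array([0.020, -0.015, 0.005])  # hypothetical measurements (per-unit)
W = np.diag([1e4, 1e4, 1e2])          # weights = 1/sigma^2 per meter class

# WLS solution: x_hat = (H^T W H)^(-1) H^T W z
G = H.T @ W @ H                       # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

residuals = z - H @ x_hat             # basis for bad-data detection
print("estimated state:", x_hat)
print("measurement residuals:", residuals)
```

In a production estimator this solve is iterated with a nonlinear (AC) measurement model, but the point stands: the output is a statistical estimate, which is exactly the limitation the sensors discussed below are meant to remove.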

While better OPF models are necessary, they are not sufficient to address the upcoming challenges of the SG because the distribution system is being operated in ways never contemplated when it was designed, due to the introduction of:

  1. Two-way power flows associated with distributed resources, micro-grids, and virtual power plants
  2. Intermittency introduced by PV installations in distribution systems
  3. Electric vehicle charging
  4. Automated demand response, and
  5. M2M appliances in end-user facilities

The most challenging impacts of these changes are:

  1. Volatility of distribution power flows is increasing significantly
  2. The rate of change of power flow metrics is accelerating, and
  3. Short-term load forecasts of “net demand” are exhibiting a broader range of uncertainty than before, creating challenges for operators to schedule production that meets minute-to-minute demand

As a result, even the fastest OPF models need to be supplemented (as well as re-calibrated) with real-world data that reflect these new operating regimes and uncertainties.

Bottom line: to maintain acceptable reliability, security, and stability levels as the physical grid and its associated markets are re-structured, we need to have a much more granular, real-time situational awareness of the distribution system.

How do we address this need for heightened situational awareness as we continue to implement interoperable SG applications?

Introducing the Über-Sensor Platform

Let’s assume that we have installed a set of ultra-accurate, ultra-fast sensor platforms in our distribution system, all synchronized to a common clock with nanosecond accuracy. All sensor measurements are time-stamped and concurrent.

What do the sensor platforms consist of? Each platform pairs a high-accuracy analog front-end (AFE) with a flexible, high-performance field-programmable gate array (FPGA), integrated as a system-on-module (SOM) and connected to, or embedded in, power measurement devices or intelligent electronic devices (IEDs).

Using technology available today, combined with some proprietary technology configured in a novel manner, the AFE can be highly accurate – providing, for example, a steady noise floor of ~-150 dB all the way out to 10⁵ Hz, and perhaps ~-120 dB at 10⁶ Hz. The AFE can be supplemented by fast digital signal processors (DSPs) and hardware accelerators. Each sensor could then provide wide-spectrum capture, power-system hardware DSP, programmable DSP cores, programmable response, a customizable IED firmware stack, and the ability to communicate securely in milliseconds. It is, in effect, a platform that can support all six of the SG functionalities listed at the opening of this dialog.

A sensor platform with these performance characteristics could be used in point-on-wave, phasor, power/energy, and power quality measurement applications. Electricity waveforms could be sampled with high accuracy at high frequency: for example, a 2 µs transient could be captured and displayed with high fidelity. For distribution system voltage levels, total vector error (TVE) would need to be below ~0.2%, which is attainable today, but not without some difficulty.
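For concreteness, total vector error per IEEE C37.118 is the magnitude of the difference between the estimated and true phasors, normalized by the true magnitude. A minimal check, with made-up phasor values:

```python
import numpy as np

def tve(estimated: complex, true: complex) -> float:
    """Total vector error per IEEE C37.118: |X_est - X_true| / |X_true|."""
    return abs(estimated - true) / abs(true)

# Hypothetical phasors: 1.000 pu at 0.0 deg (true) vs. a slightly-off estimate.
x_true = 1.000 * np.exp(1j * np.deg2rad(0.0))
x_est = 1.001 * np.exp(1j * np.deg2rad(0.1))

print(f"TVE = {tve(x_est, x_true) * 100:.2f}%")  # ~0.2% for this size of error
```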

With an Über-Sensor, We Can Equate “When” with “Where” in the Distribution System

Imagine what we could do with this Über-sensor platform/network in the electric distribution system.

Here’s an example: an incipient fault, or a 2 µs transient created by a lightning strike, has created an aberration in the waveform emanating from a particular location in the distribution system.

It spreads like the ripples from a stone cast into a pool. “I felt a great disturbance in the Force,” says Obi-Wan Kenobi. In Gridiopolis, the Engineer Ultra (“hyper” – and for the engineer, as the Greek has it) feels the hair stand up on the back of his Greek neck and reaches quickly for his krater. But seriously…

In the power system, just like in “The Force”, we know that everything is connected to everything in dynamic meta-stable equilibrium – it is a network of inter-dependent nodes in constant motion, maintained in equilibrium by grid operators.

Emanating outwards from the location of the above transient event, associated disturbances in the waveform propagate across the distribution system and are sensed by the distributed über-sensor platforms. Because all of the sensor measurements are synchronized and concurrent, the time taken for the propagation to reach each individual sensor is known, and thus the disturbance can be traced back to a particular location by a form of triangulation between the sensors. (Of course, such a locational algorithm has yet to be developed, and we would also need pattern-recognition and filtering algorithms to eliminate the modulations introduced into the disturbance by intermediate devices.)
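To make the triangulation idea concrete, here is a toy time-difference-of-arrival (TDOA) sketch: synchronized sensors at known positions, an assumed constant propagation velocity, and a brute-force search for the origin that best explains the differential arrival times. Real disturbances travel along conductors at construction-dependent velocities, so treat this strictly as a conceptual illustration, not the yet-to-be-developed algorithm itself:

```python
import numpy as np

# Synchronized sensors at known (x, y) positions (km); a hypothetical
# constant effective propagation velocity (real feeders are messier).
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
v = 2.0e5  # km/s, assumed

# Simulate time-stamped arrivals from a disturbance at a "true" origin.
true_origin = np.array([3.0, 7.0])
arrivals = np.linalg.norm(sensors - true_origin, axis=1) / v

# Locate by minimizing the mismatch between observed and predicted
# arrival-time differences (relative to sensor 0), via a coarse grid search.
obs_dt = arrivals - arrivals[0]
best, best_err = None, np.inf
for x in np.linspace(0.0, 10.0, 201):
    for y in np.linspace(0.0, 10.0, 201):
        pred = np.linalg.norm(sensors - np.array([x, y]), axis=1) / v
        err = np.sum((pred - pred[0] - obs_dt) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print("estimated origin:", best)  # recovers ~(3.0, 7.0)
```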

Yes, “When” Can Equal “Where” in the Distribution System

In effect, if we know very precisely the “when” of a disturbance, then, with a high-performance set of sensor platforms, we should be able to determine the “where” (the origin of the disturbance). In addition, with pattern recognition, the type and likely cause of the disturbance could also be determined.

And so, we wouldn’t merely detect a highly transient fault or an incipient fault, we would also locate its origin with the same sensors, and very quickly too.

With a network of über-sensor platforms, we would not need an OPF model to estimate states of the distribution grid, because we would “know” the states in real-time. We could archive and use this operations knowledge to re-calibrate the OPF model for off-line planning applications and post-mortems of disturbance events and remediation schemes.

The above application would be useful in terms of two high-value SG applications:

  1. High-speed fault detection, location, and causality
  2. Intelligent replacement of aging power system infrastructure through the identification of incipient failures

Availability of Über-Sensors Today

Über-sensor platforms like this are available today as “evaluation boards”. They go beyond ARM capability to add all of the additional functionality above, with a much-reduced chip count per module. After they’ve been thoroughly tested and refined in upcoming field applications, they will ultimately be turned into high-volume, low-cost ASICs, embeddable in the IEDs commonly used in distribution systems.

Why Do We Need Über-Sensors?

Policy-driven structural changes are occurring, and are expected to occur, in the power grid and its associated markets. These changes will have profound operational implications for the grid. The transition to “Smart Grid” applications is being driven by the need to cope with these operational challenges.

In this context, if we are to deliver the same or better reliability of service at an affordable cost to electricity customers, we will need the above high-performance/low-cost sensors as fundamental building blocks to support our ultimate goal -- an interoperable, automated, optimized SG capable of maintaining resilient dynamic equilibrium while under constant fire from millions of continuously-occurring potentially destabilizing events.

As always, comments are welcome and appreciated.

Implementation of Interoperability in the “Real World” of the Smart Grid (SG)


Dom Geraghty

Abstract

U.S. energy policy initiatives are changing the structure of the physical power system and power system markets. While achieving policy goals, they also create undesirable side-effects for service reliability and power costs. Smart Grid (SG) applications can mitigate these side effects. However, the SG can only work if its applications are interoperable because objects in the power grid network are inter-dependent.  In the “real world”, interoperability comes in many forms. Whatever form it takes, it is a prerequisite for the implementation of SG applications, which in turn are required to ameliorate the undesirable and/or unintended side-effects of salutary and broadly-supported energy policy initiatives.

Situation – Structural Changes in the Power Sector

Today, there are two primary structural changes occurring in the U.S. power grid: (1) physical changes (ongoing) and (2) power-market changes (early phases). These changes are being driven by the following energy policy initiatives:

  • Renewable Portfolio Standard (RPS) mandates, which introduce increasing amounts of intermittent/variable power production
  • Promotion of, and subsidies for, distributed energy resources (DERs), micro-grids, virtual power plants (VPPs)
  • Promotion of electric vehicles (EVs)
  • Availability of demand response (DR)/load dispatch programs
  • Availability of increased end-use customer choice/energy management options such as smart thermostats, “Green Button”, home automation systems, building automation systems, dynamic rates
  • Integration of wholesale and retail markets, and integration of physical and power market operations

Side-Effects of Energy Policies – There Is a Disturbance in the Force

The above structural changes resulting from energy policy changes have some undesirable side-effects that impact service reliability and create challenges for grid operators.

The operations-related side effects of policy-related structural changes in the grid include:

  • More volatile operations as a result of intermittent resources
  • Events/actuations happen faster – machine-to-machine (M2M) interactions, some automation – creating a need to manage system security/protection and system stability more tightly
  • Un-designed-for operation of traditional distribution systems, e.g., two-way flows in distribution systems, high-gain feedback loops due to price-responsive demand management programs
  • Visualization of the instantaneous “state of the grid” becomes more challenging
  • The power dispatcher’s job becomes more complex in terms of matching supply and demand on a quasi-real-time basis, e.g., load-following is more demanding
  • Forecasting the “net” load curve is more uncertain
  • More reserves are required to maintain service reliability targets

The side-effects occur because the electricity grid is an interconnected network. Energy policies can affect service reliability negatively because an undesigned-for change in one part of the grid’s operation affects other parts of the grid to an unknown extent – cf. Lorenz’s “butterfly effect”, the sensitive dependence on initial conditions in which a small change at one place in a deterministic nonlinear system can result in large differences in a later state. Everything in the electricity grid is interdependent – everything is connected to everything else.

Examples of this interconnectedness in action in the electric power grid include:

  • The November 2006 European grid collapse into three separate domains, as phase angles sharply separated between the north, south, and east due to insufficient coordination between transmission system operators and non-fulfilment of the N-1 criterion
  • The proven ability of a 120V wall socket in a University of Texas at Austin building to sense disturbances in the ERCOT grid over 350 miles away

Other, More Generic, Undesirable/Unintended Side-Effects of New Energy Policies

The policy-created structural changes in the power sector can also create other undesirable side-effects:

  • Increased costs, because the “first cost” of SG-related equipment is almost always higher than that of existing (less-smart) equipment
  • Reductions in system load factor

Bottom Line – Undesirable Side-Effects Need to Be Addressed

If unmitigated, the implementation of the above broadly-supported policy initiatives creates undesirable reliability, cost and asset utilization side-effects under business-as-usual power grid operations -- enter the SG with solutions to mitigate or even eliminate some of these side-effects.

A Brief Digression - Definition of the SG as an Intelligent Network

Before discussing how SG applications can mitigate or eliminate undesirable side-effects of new energy policies, it is important to define the SG.

The ultimate SG is a network of physical objects related to the generation, delivery, and utilization of electricity -- the objects are provided with unique identifiers and the ability to transfer data over a network.

Physical and Market Drivers in the Power Sector

Dominic Geraghty

The Power Sector's Transition to the Smart Grid

The power industry has developed a vision of the ultimate “plug-and-play” smart grid (SG). Many detailed and thoughtful architectures for this ultimate fully automated and optimized SG have been proposed.

There is a wide variety of opinions as to how we’ll get there from here, but there is consensus that it will take decades for a robustly implemented SG to come to fruition. There is also consensus that the total investment requirements will be enormous. Lastly, it is understood that the journey will not be without obstacles, because field implementation is always full of surprises.

A credible implementation scenario will not be just about technology; it will consist of business cases that take the strategic drivers of the power sector into account. To implement the SG, we need a thorough understanding of the strategic drivers of its evolution – as a basis for planning, evaluating, and investing risk capital in R&D&D, new products, and the infrastructure of this future grid.

In the past few years, various players in the industry have developed a profusion of lists of the strategic drivers of the evolving power sector. We have consensus, more or less, on the common elements of a “master list”, but not necessarily on their relative importance, given the differing agendas of impacted industry stakeholders.

This brief paper summarizes and clarifies the strategic drivers of the SG evolution, using a new, simplified categorization. The paper then presents the potential impacts of these drivers on service reliability and the cost of service – impacts that logically lead to the need for embedded intelligence in the power grid, i.e., the ultimate SG.

Strategic Drivers of the Power Sector

1. Structural Changes

The power industry is responding to the very real structural changes occurring in the electric sector, changes occurring in both its (1) physical and (2) market configurations.

Physical structural changes are occurring as a result of a broad set of energy policy mandates promoting:

  • Renewable energy production, distributed energy generation and storage, micro-grids, electric vehicles, energy efficiency, increased end-use customer choices

Market structural changes are occurring as a result of efforts to increase the efficiency of electricity markets:

  • New products such as demand response, frequency regulation
  • Promotion of peak-shifting wholesale generation and transmission rates and dynamic pricing for end-use customers
  • Competition from non-utility providers and end-use customers
  • Broader participation in, and pending integration of, wholesale and retail markets
  • Expansion of “incentive regulation” programs

2. Aging Infrastructure

The infrastructure of the power sector is not just aging – it is aged, with much of it now well past its original design life. Legacy control systems are the norm, providing far less functionality than automated intelligent digital devices based on today’s technology. There is a strong concern within the power sector that this aging infrastructure, unmitigated, will inevitably lead to lower levels of service reliability.

3. Cybersecurity

Lastly, there is continuing evidence of increased levels of cyber-based intrusions within the power sector. This has raised concerns in particular about the vulnerability of unprotected legacy operational technology (OT) -- the devices and software that control the grid. Legacy monitoring and control systems are widespread in our aging power system, designed and installed in an era when cybersecurity was not an issue.

So that’s it – our definitive list of strategic drivers of the SG – condensed into three categories.

How Do These Strategic Drivers Affect the Power Sector’s Performance?

The power sector’s performance is measured primarily by service reliability and the cost of service.

The physical structural drivers listed above, if unmitigated, will:

  • Decrease service reliability due to the intermittency of renewable power production, the increased uncertainty of “net” load (demand) resulting from unpredictable end-use customer use of on-premises energy production and management equipment, and the operation of the distribution system in ways for which it was not designed, i.e., two-way power flows
  • Increase the cost of service because:
    • The capital costs of renewable energy, energy storage, and some customer-owned production and energy management devices are currently much higher than traditional grid technologies (1, 2)
    • Increased spinning and regulation reserves are needed to maintain existing levels of reliability as the proportion of renewable energy increases in the production mix (3, 4)
    • The load duration curve’s shape is shifting unfavorably towards a lower asset utilization rate across the grid as the ratio of peak load to average load increases (5) -- a short numerical sketch follows this list
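To put a number on that last point: load factor (average load divided by peak load) is the usual proxy for asset utilization, and a peakier shape drives it down. A minimal calculation with hypothetical hourly loads:

```python
import numpy as np

# Two hypothetical days with identical energy (24,000 MWh) but different
# shapes: the peakier day forces more capacity to sit idle most of the time.
flat_day = np.full(24, 1000.0)                    # MW, constant all day
peaky_day = np.concatenate([np.full(20, 895.0),   # off-peak hours
                            np.full(4, 1525.0)])  # 4-hour evening peak

for name, load in [("flat", flat_day), ("peaky", peaky_day)]:
    load_factor = load.mean() / load.max()
    print(f"{name}: energy = {load.sum():,.0f} MWh, "
          f"peak = {load.max():,.0f} MW, load factor = {load_factor:.2f}")
# flat -> load factor 1.00; peaky -> ~0.66, i.e., lower asset utilization.
```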

Fortunately, we have shown elsewhere that SG applications can fully mitigate the above negative outcomes. (6, 7)

In contrast, the market structural changes, when implemented, will increase service reliability and decrease the cost of service. However, this implementation is subject to a lengthy political and analytical process involving regulators, various stakeholders, and likely the legal system as well.

Most of the structural market changes presented here have been under discussion for decades with very little progress being made in terms of implementation. Structural market changes represent a major opportunity to lower the deployment cost of the smart grid, while maintaining acceptable reliability levels.

We Are Driving Embedded Intelligence into the Power Grid

To mitigate the negative and support the positive impacts of the above strategic drivers, we will be embedding intelligence across the power grid. This intelligence will be supported by today’s and tomorrow’s advanced information, operational, and communications technology.

The ultimate SG will be an optimized, automated system meeting a prescribed level of service reliability and security, delivering commodity-priced electricity. To achieve this, SG applications will introduce, on a project-by-project basis, automated intelligent digital devices distributed across the grid. The transition will take multiple decades.

Initially, the SG will deliver sensing, monitoring, diagnosis, and control functionality. As we progress in our understanding of the grid and develop more sophisticated algorithms, we will progress to automation, and ultimately to optimized operations.

The SG applications will have shorter working lives than the long-lived assets of today’s grid, but they will be substitutable, because interoperability will be the norm for all produced SG devices and applications (8).

The good news: planned properly, the net cost of the transition over its multi-decade duration should be zero, relative to continuing on a business-as-usual basis (5).

As always, comments are welcome and appreciated.

References

  1. Namovicz, Chris, “Assessing the Economic Value of New Utility-Scale Renewable Generation Projects”, U.S. Energy Information Administration (EIA), EIA Energy Conference, June 17, 2013
  2. “Distributed Generation Renewable Energy Estimate of Costs”, NREL, August 2013
  3. CPUC, “33% Renewable Portfolio Standard: Implementation Analysis – Preliminary Results”, June 2009
  4. Gross, Robert, et al., “The Costs and Impacts of Intermittency”, U.K. Energy Research Centre, Imperial College London, March 2006
  5. Geraghty, Dominic, “Shape-Shifting Load Curves”, smartgridix.com, January 25, 2014
  6. Geraghty, Dominic, “The Elephant in the Room: Addressing the Affordability of a Rejuvenated, Smarter Grid”, smartgridix.com, November 21, 2013
  7. “Estimating the Costs and Benefits of the Smart Grid”, EPRI Technical Report 1022519, March 2011
  8. Geraghty, Dominic, “Implementation of Interoperability in the ‘Real World’ of the Smart Grid (SG)”, smartgridix.com, October 2014 (to be published, draft under review, available from the author)

 

Internet of (Smart Grid) Things – Achieving Interoperability

Dom Geraghty

In the previous dialog, we introduced the “Internet of (Smart Grid) Things”, or “Io(SG)T”, a real-world microcosm of Cisco’s IoT or IoE.  We make no excuses for accepting the admitted Cisco “spin” -- we have already been living this “spin” since the advent of the SG.

The SG is a microcosm of the IoT because we have defined the ultimate SG as an automated plug and play system, just like we increasingly plug and play today on the Internet, moving inexorably towards the universal plug and play IoT or IoE in the future. The concept is similar to “The Feed” in Neal Stephenson’s book, “The Diamond Age” – an ultra-reliable commodity mix priced at marginal costs.

Our ultimate SG replicates the IoT concept as a power sector subset or “vertical” – it is a textbook crucible because we can draw a clear unbroken boundary around the sector. Within this bounded system are the inter-operations of power production, transmission, distribution, and end-use of electricity, representing as a whole the networked infrastructure of the SG.

Here, we define this automated, interactive power sector vertical as the Io(SG)T. The power sector is the first of many networked industry subsets of the IoT likely to emerge over time. While we have not been explicitly calling it the Io(SG)T until now, as professionals within this sector we’ve been working on it for more than a couple of decades.

However, there is still a long way to go to automate this very complex machine -- our electricity grid. We posit a tentative time-frame of at least 30 years for the ultimate SG, the Io(SG)T, and, as we’ve said before, there will be many zigs and zags along the way.

Let’s Back Up a Little – Why Is the Io(SG)T Needed?

Energy and environmental policies, regulations, and mandates are creating operational challenges for the electricity grid – they call for the grid to be operated in ways for which it was not designed, e.g., accommodating intermittent renewable generation, two-way power flows in distribution systems, evolving integration of wholesale and retail power markets, autonomously operated distributed generation and storage, M2M-enabled smart appliances and buildings, and microgrids.

These new developments make it a challenge to maintain traditional 3 x 9s service reliability levels, increase the stresses on the aging infrastructure of the grid, increase electricity costs, decrease asset utilization factors, and add substantial uncertainty to the net load/demand curve against which grid operators dispatch generation and delivery assets.

Fortunately, today’s new SG technologies and applications can provide the processing power, reaction speed, bandwidth, accuracy, interoperability (sometimes), and instant situational awareness across the grid necessary to accommodate the new operational challenges above while operating the grid reliably and safely.

Actually, we don’t really have any option – intelligent automated functionality of the SG has become a prerequisite for economical, reliable, and safe operation of the power grid in the future.

The Costs of the Io(SG)T Can Be Managed

Turning to the costs, we’ve recently presented a comprehensive analysis of the elements of a least-cost Io(SG)T deployment strategy that includes (1) locational selectivity (using the 80/20 rule) in a deliberate shift from a “blanketing” approach for SG applications to a prioritized deployment approach, and (2) equipment rejuvenation -- retrofitting rather than “rip and replace”.

We estimate that this strategy has the potential to decrease initially estimated Io(SG)T costs by half an order of magnitude (roughly a factor of three) – that is, the deployment of the Io(SG)T would cost about $400 billion through 2030.

The approach recognizes that SG applications will offset capital requirements over time and reduce operating costs going forward. For example, SG applications can increase asset utilization (freeing up “latent” capacity), reduce power system losses, and increase the economic efficiency of power markets.

We Are Beginning to Stack Up the Benefits of the Io(SG)T

For many individual SG applications, we are still in early days in terms of calculating and crediting all of the benefits in the “stack” of benefits, but it is happening. For example, we are beginning to see numerous utilities leverage the very first SG applications, i.e., AMI systems, to improve outage management, reduce truck rolls, improve billing, identify and eliminate electricity theft, manage voltage, and monitor transformers. All of these applications add incrementally to the AMI benefits stack.

Other real-world examples of accruing multiple benefits from an SG application:

  • Some grid operators are using SG applications with energy storage to “smooth” intermittency, shift and shave peak, and create arbitrage opportunities in wholesale power markets, presenting a “stacked” benefits analysis of energy storage systems
  • The power industry is in the middle of deploying phasor measurement units (PMUs) across regional transmission grids to provide very sophisticated situational awareness and safely operate our systems much closer to their capacity limits. The benefits stack includes freeing up latent capacity in the power infrastructure, reducing reserve requirements for the increasing proportion of intermittent generation, and relieving congestion on transmission lines

Bottom line: the Io(SG)T is a prerequisite for accommodating energy and environmental policy goals and we continue to quantify in the field additional benefits of Io(SG)T applications that improve benefit-to-cost ratios.

But how do we connect all of these dispersed and opportunistic SG applications so that they work together seamlessly in an Io(SG)T?

APIs – Learn to Love Them in the Io(SG)T

Sub-Title: The Interoperability Continuum

The SG automation roadmap involves measurement (advanced sensors – we need these first, to enable the rest of the roadmap), monitoring (communications), diagnosis (analytics and visualization), and control (algorithms), all operating simultaneously across the physical and market nodes of the SG, and across the multiple operating and back-office systems of grid operators.

For the automation roadmap to be realized, the SG applications and utility systems upon which they operate must be interoperable.

To date, interoperability between applications or systems has been achieved by using standards developed by Standards Development Organizations (SDOs) or by purpose-built Application Programming Interfaces (APIs). It is actually a little more complicated than that, however. Picture an “Interoperability Continuum”: the left-hand side represents a situation where no standard exists for a proposed interface/integration, while on the right-hand side a mature standard exists which can be used for the interface. Moving from left to right denotes increased general interoperability.

Proprietary APIs

We define a proprietary API as a customized interface that is developed by a vendor, system integrator, or grid operator for a special-purpose application that is usually one-off. It usually connects proprietary systems to other proprietary or home-grown systems. For example, an AMI vendor may develop a proprietary API to connect its system to the utility’s home-grown OMS or billing system.  Vendors do not share proprietary protocols because they believe that they enhance their competitiveness, capture the customer for “up-sells”, and increase the value of their ongoing Service Level Agreements (SLAs).

The next step in the continuum of increasing interoperability is a hybrid interface consisting of the combination of a proprietary API and an existing standard that allows vendors, system integrators and grid operators to connect proprietary systems to systems with standardized interfaces. While a standard is utilized in the hybrid, the interface is still controlled by the developer of the API. In this case, an example would be the interface between a proprietary AMI system and the CIM standard.
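In code, the hybrid pattern amounts to a thin translation layer. The sketch below adapts an invented vendor meter-read payload to a simplified CIM-style object – the field names, class, and unit scaling are all hypothetical, and a real CIM mapping is far richer:

```python
from dataclasses import dataclass

@dataclass
class CimMeterReading:
    """Simplified stand-in for a CIM MeterReading (illustrative only)."""
    meter_mrid: str   # CIM master resource identifier
    timestamp: str    # ISO 8601
    kwh: float

def adapt_vendor_read(vendor_payload: dict) -> CimMeterReading:
    """Proprietary-to-standard adapter: the vendor's schema is isolated
    behind this one translation point, so downstream systems see only
    the standardized shape."""
    return CimMeterReading(
        meter_mrid=vendor_payload["deviceId"],         # hypothetical vendor field
        timestamp=vendor_payload["readAt"],            # hypothetical vendor field
        kwh=vendor_payload["registerValue"] / 1000.0,  # assume vendor reports Wh
    )

reading = adapt_vendor_read({"deviceId": "MTR-001",
                             "readAt": "2014-10-01T00:15:00Z",
                             "registerValue": 1523.0})
print(reading)
```

The design point: however the interface is built – proprietary, hybrid, or standard – the translation is concentrated in one replaceable component rather than scattered through both systems.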

Open APIs

By providing an Open API (i.e., an open-source API available to third-party developers), the vendor relinquishes control of its interface. In return, the vendor with a superior product will attract independent third-party developers who create interfaces for value-adding SG applications, thus increasing the demand “pull” for the vendor’s equipment or systems.

This is the business model used today by Facebook, Apple, and others to create customer “stickiness” through an interoperable applications portfolio that they do not have to develop themselves. It’s a strategy for increasing market share. As part of the Web 2.0 business model, it was initially conceived to (1) allow web-sites to inter-operate, (2) create virtual collaborative services environments to support, for example, the professional interface between a designer and an architect, and (3) expand social media platforms.

The Open API is the future of SG applications if they logically follow the IoT (r)evolution. As part of the interoperability continuum, an Open API can inter-operate with a widely deployed standard, or it can be offered as a stand-alone independent API. While this goes against vendors’ traditional business-protection instincts, it in effect “outsources” applications development relevant to the vendor’s offerings – development resources that it gets for free – in return for the internal cost of developing and offering the Open API itself.

Furthermore, the Open API, while provided free to developers, can have an associated “revenue license” where the vendor receives some portion of any revenue earned by a third-party developer commercializing a SG application on top of the Open API. Again, the Web 2.0 business model can apply. In the long run, the vendor may even decide to acquire some of the third-party developers of interfaces based on its Open API. The advantage to the vendor, along with the potential for increased product demand, is that the initial business/development risk and investment are undertaken by a third-party.

Mature Standards

Finally, at the right-hand end of the continuum above, there are scores of mature standards that already allow plug and play to happen between SG systems.

But, at least at present, it is common to find that a mature standard that facilitates some interfaces may be “partial”, in the sense that it does not cover (1) other (perhaps newer) interfaces with grid systems that the application also impacts, or (2) different layers within SG application interfaces which may not be compatible with layers in the systems to which they interface – thus requiring the combination of an API and a mature standard to accomplish the integration/interoperability for an advanced SG application.

So, a mature standard will remain a moving target as the Io(SG)T continues to evolve, and APIs will continue to be needed (and create value) as the automation of the SG progresses.

For a more detailed discussion of this last point, see a very interesting position paper that Scott Neumann wrote for GWAC.

A Glimpse into the Future: Could We Leapfrog the Development of SG Control Algorithms/Standards Altogether?

Compared to generation and transmission systems, the distribution system in the SG has less-well-developed grid control algorithms and virtually no associated standards. These algorithms will sit within distributed intelligent processors embedded in IEDs dispersed throughout the distribution system. Using state estimation tools, dynamic load flow modeling, and real-time state information from high-speed, ultra-accurate PMU chipsets, we are just beginning to develop control algorithms for actuators in the distribution system, and for remedial action scheme firmware aimed at protecting the system during contingencies.

Could we instead leap-frog today’s relatively mature system-on-chip (SoC) technology by substituting memristor chipsets embedded in IEDs? That is, adaptive learning chips modeled on how the brain’s synapses operate, conducting processing and memory activities simultaneously -- using this approach, couldn’t we derive control algorithms empirically? OK, assuming that we also had ultra-fast sensors.

But memristors and ultra-fast sensors already exist today…

So maybe we don’t need to worry too much about the last (most difficult) step in the Io(SG)T roadmap, i.e., the development of SG control algorithms, which would take place in earnest perhaps a decade from now. Why? Because we won’t need standard grid control algorithms per se -- memristors, with processing speeds six orders of magnitude faster than today’s best technology, will sort it all out for us empirically - we just need to interface them with those ultra-fast sensors that measure and communicate the requisite data to the memristors over interoperable SG networks.

More on these cutting-edge technologies in a future dialog…

As always, comments welcome and appreciated.

Smart Sensors for the SG: You Can’t Manage What You Don’t Measure

Dom Geraghty

An Integrated Measurement Strategy for the SG

Obviously, since the SG is inanimate, we don’t expect it to intuit how to do “smart” things by itself (!). We have to provide it with data, and with analytical rules. Even the AI-based algorithms recently introduced in some SG applications must still derive their “learnings” from empirical data.

At present, the deployment of SG applications can be characterized as “tactical” -- uncoordinated with other activities in the SG, and special-purpose in nature – certainly not following the holistic, long-term visions of SG architectures and power markets developed by such entities as GWAC, NIST, EPRI, IEC, IEEE, and SGIP. The result is a hodge-podge of application-specific sensors with different capabilities, which don’t communicate across applications and which operate in different time domains. But it does not need to be like that, as we shall outline below.

Let’s Define Measurement, Sensors, and Smart Sensors

Smart sensors are the fundamental building blocks for the implementation of a truly “smart” grid. They are an essential part of every SG solution. Regular analog sensors become the “smart sensors” of the SG when they add intelligence to the measurement function, i.e., analog-to-digital conversion, processing power, firmware, communications, and even actuation capability.

We can think of smart sensors as the first link in a four-link SG decision-making chain that consists of:

(1) Location-specific measurement -- sensor function only

(2) Monitoring -- a sensor with one-way communications functionality

(3) Diagnosis at the “edge” -- a sensor with localized diagnostic intelligence based on data analytics and/or centralized diagnosis based on communicated sensor data

(4) Edge-embedded control actions (based on embedded algorithms, including Remedial Action Schemes (RAS)) -- a sensor with intelligence and control/actuator capability. The algorithms for this functionality could also be centralized and use two-way communications with an “edge” sensor/actuator, and/or they could drive peer-to-peer coordination of control actions at the “edge”; however, a substantial amount of R&D still needs to be done to develop autonomous real-time or quasi-real-time control algorithms for power distribution systems
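The chain can be pictured as a pipeline in which each link consumes the previous link’s output. The sketch below is purely conceptual – stubbed signals, an invented feeder name, and a made-up voltage band stand in for real telemetry and real control logic:

```python
from typing import Optional

def measure() -> float:
    """Link 1: location-specific measurement (stubbed voltage, per-unit)."""
    return 1.06

def monitor(voltage_pu: float) -> dict:
    """Link 2: package the sample for one-way communication."""
    return {"node": "feeder-12", "voltage_pu": voltage_pu}  # hypothetical node

def diagnose(sample: dict) -> Optional[str]:
    """Link 3: edge diagnosis against an illustrative +/-5% voltage band."""
    v = sample["voltage_pu"]
    if v > 1.05:
        return "overvoltage"
    if v < 0.95:
        return "undervoltage"
    return None

def control(diagnosis: Optional[str]) -> str:
    """Link 4: embedded control action (illustrative remedial actions)."""
    actions = {"overvoltage": "lower regulator tap / absorb VArs",
               "undervoltage": "raise regulator tap / inject VArs"}
    return actions.get(diagnosis, "no action")

print(control(diagnose(monitor(measure()))))  # -> lower regulator tap ...
```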

To Date, Smart Sensor-Based Measurement in the SG Has Been “Tactical”

Granted, as we’ve said before, there is a reason for this tactical approach to sensor deployment – up to now, the choice of SG projects has been driven by energy and regulatory policies and rules that target a limited set of SG applications. Fair enough -- none of us expects that the evolution of the SG will follow a “grand deployment plan” – it will be imperfect, following the zigs and zags of these real-world drivers.


Interoperability of Smart Grid (SG) Applications Is Mission-Critical, And Good For Business Too

Dom Geraghty

It is clear to us that energy policy and regulations – not technologies – are the key drivers of the business case for SG applications. These policies/regulations promote, for example, RPS mandates, dynamic pricing, demand response, competitive market structures, self-optimizing customers (e.g., distributed generation and storage, smart appliances, micro-grids), electric vehicles, cyber-security, and data privacy. It is a kind of “policy-push” market, with SG applications in “catch-up” mode.

In order to implement the new policies and regulations in all of their complexities and not-designed-for impacts on the traditional electricity grid, while still maintaining the current levels of service reliability, stability and security, the grid needs to be smarter, and react faster. We will be operating “closer to the edge”.

The SG is, at its core, about automation, control, and optimization across power system operations – both physical and market operations. For example, it comprises smart sensors, intelligent electronic devices, communications systems, M2M interfaces, data analytics, situational awareness and alerts, and control systems.

In its ideal form, the SG is a system of systems that, in essence, has the potential to optimize power system operations and capacity requirements. To realize this potential, i.e., for the grid to be “smart”, these systems ultimately need to be interoperable, since the SG is an interconnected system from generation all the way to end-use of electricity.

The above new policies/regulations are out ahead of the SG in terms of technology, interoperability, and grid operations – the SG is playing “catch-up”. But more importantly, we also need the SG in order to realize the full benefits of these new policies and regulations.

The “catch-up” situation can lead to unintended/undesirable consequences related to the operation and reliability of the power system.

Fortunately, SG applications have the capability, if not yet the readiness, to mitigate these risks, provided they are interoperable.

The Transition to an “Ideal” SG Architecture Will Be Messy -- We Are Going To Feel Uncomfortable

As engineers, we like tidiness. In a perfect world, the transition to a fully-functional SG would be orderly and paced to accommodate new applications while protecting grid integrity: perhaps a three-stage transition -- from today’s operations’ data silos in utilities to a single common information bus, then to many common, integrated buses, and finally to a converged system.

But in a non-perfect world, i.e., reality, the SG will evolve as a hybrid of legacy and new systems -- it will not be an optimized process – there will not be a “grand plan” – clusters of interoperability will appear here and there across the SG.

The transition will take perhaps 30 years -- not for technology-based reasons, but because the “refresh cycle” for utility assets is lengthy – so, there’s time for a whole career for all of us in deploying SG applications!

Is Service Reliability the Next Business Opportunity?

Dominic Geraghty

As described in our previous dialog, a number of new market factors are stressing utilities’ ability to deliver 3 x 9s reliability.

These factors fall into four categories: (1) new or expanded energy policies and regulations, (2) deployment of SG applications absent reliability-enhancing SG controls, (3) imperfect coordination between electricity market-clearing processes and the physical control processes of the power system, and (4) aging power system infrastructure.

Evolving Energy Policies and Regulations Have the Potential to Negatively Affect Reliability

Utilities dispatch power generation based on a net load forecast, where net load equals the native customer load minus any power generated by (1) a self-optimizing individual customer (e.g., distributed generation or energy storage discharge), (2) an aggregated self-optimizing set of customers, or (3) a micro-grid.
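In arithmetic terms, the dispatcher’s problem is the simple-looking subtraction below; the difficulty is that every subtracted term is uncertain. All values are hypothetical:

```python
# Net load = native customer load minus behind-the-meter production,
# per the definition above (all MW values illustrative).
native_load = 25000.0       # native customer load
self_gen = 1800.0           # individual customers' DG / storage discharge
aggregated_gen = 650.0      # aggregated self-optimizing customers
microgrid_export = 120.0    # micro-grid contribution

net_load = native_load - (self_gen + aggregated_gen + microgrid_export)
print(f"net load to dispatch against: {net_load:,.0f} MW")  # 22,430 MW
```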

RPS energy policies, as well as regulatory policies encouraging DG, EVs, distributed storage, CHP, and micro-grids, are having increasingly significant effects on the shape of the net load and on its first derivative. For example, Mike Niggli, CEO, SDG&E, speaking at DistribuTECH 2013’s plenary session, referred to expected load ramp rates in March 2020 of 4,500 MW down in two hours and 12,500 MW up in two hours, on a 25,000 MW system.

In most cases, the utility does not have visibility into customers’ distributed generation decisions ahead of time. The challenge for the utility is to maintain its target level of service reliability despite the uncertainty associated with the ensuing net load.

To a certain extent, short-term volatility in the net load caused by intermittent generation (distributed PV) may threaten system stability, especially if aggregated. Some utilities have established rules of thumb for the maximum percentage of PV they will allow on a feeder, e.g., 15%. However, it appears that these rules of thumb/heuristics are overly conservative. One private study simulating a typical distribution system found that its feeders, even in low load situations, could tolerate PV capacity of more than 50% of the load when appropriate (and not too complicated) control equipment is put in place.

To decrease the uncertainty in the net load forecast, and to access additional existing capacity next to the load center that can help maintain reliability in tight supply situations, some utilities offer a “virtual power plant (VPP)” program to their customers. For example, ConEd, PGE, CPS Energy/San Antonio, Duke’s Microgrid Program, AEP, and Europe’s FENIX program offer VPP programs of different types.

In some of these VPP programs, the utility interconnects, maintains, and operates the customer-owned generation/demand reduction applications as a bundle of dispatchable capacity, in return for which the utility provides the customer with certain tariff concessions.

Jurisdictions offering dynamic pricing, e.g., TOU, CPP, and RTP, also create uncertainty in the load forecast. Automated customer price responses can produce large, rapid, swings in the net load. If the consumer’s price response is not automated, i.e., not “smart”, the net load forecast uncertainty can likely be reduced over time based on increasingly accurate (“learned”) estimates of the price elasticity of customer segments -- it helps that price responses will likely be diversified across the service area.
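A sketch of what “learning” an elasticity might look like: with hypothetical price/load observations for one customer segment, a log-log least-squares fit yields the elasticity as the slope. A real program would control for weather, day type, and segment mix:

```python
import numpy as np

# Hypothetical observations: dynamic price ($/kWh) vs. segment load (MW).
prices = np.array([0.10, 0.15, 0.22, 0.30, 0.45])
loads = np.array([520.0, 498.0, 471.0, 455.0, 430.0])

# Constant-elasticity model: log(load) = a + e * log(price),
# so the elasticity e is the slope of an ordinary least-squares fit.
e, a = np.polyfit(np.log(prices), np.log(loads), 1)
print(f"estimated price elasticity: {e:.2f}")  # modestly negative

# Using the fit to shrink net-load forecast uncertainty at a new price:
p_new = 0.35
print(f"predicted load at ${p_new}/kWh: {np.exp(a) * p_new ** e:.0f} MW")
```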

To incentivize an acceptable level of service reliability, regulators in over half of the states have mandated penalties for SAIDI or CAIDI performance outside a predetermined acceptable range, or have instituted service quality mandates with quantitative metrics. The penalties can be costly -- they provide a strong incentive for utilities to install equipment that improves reliability.

Naturally, these equipment costs are subsequently reflected in customers’ bills. However, the solutions simultaneously improve utility asset utilization and can even prolong the lifetime of some utility assets.

Somewhat Surprisingly, Initial Deployment of SG Applications Can Have a Negative Impact on Reliability

While SG applications can help enhance reliability through smart sensors and increased automation, it appears that the initial SG applications could negatively impact system reliability before subsequent distribution automation (DA) applications provide ameliorating automation, i.e., the SG can be first a sword against, and later a shield for, reliability.

Here we will address the negative impacts, and follow up below with some business opportunities for SG applications that mitigate these negative effects on reliability.

Providing 99.87% Reliability* Is Going to Cost a Lot More – Are There Related SG 2.0 Business Opportunities?

Dom Geraghty

Historically, utilities have provided about “3 x 9s” reliability. The cost of this reliability is currently bundled into the price of electricity. This includes the cost of maintaining reserves, contingency plans, and automated generation control to cover the stochastic behavior of forced outages and electricity demand.

This cost is going up. Why?

Bulk Power Supply Uncertainty Is Affecting Reliability

The implementation of the RPS mandates is increasing the proportion of intermittent power production plants, thereby decreasing the inertia, i.e., the damping ability, of the power system. As a result, a substantial amount of extra generation reserves and ancillary services is required to cover the increased uncertainty of supply while maintaining “3 x 9s” reliability levels. Recognizing this, most ISO markets trade various reserve and ancillary service products.
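One common sizing heuristic makes the reserve arithmetic concrete: if short-term load and wind forecast errors are roughly independent, their standard deviations combine in quadrature, and reserves are carried to cover a multiple of the combined error. Illustrative numbers only:

```python
import math

# Illustrative 1-sigma short-term forecast errors (MW).
sigma_load = 300.0
sigma_wind = 450.0  # grows as intermittent capacity is added

# Independent errors combine in quadrature; carry ~3 sigma of the total.
sigma_total = math.hypot(sigma_load, sigma_wind)   # ~541 MW
reserve = 3.0 * sigma_total                        # ~1,622 MW

print(f"combined 1-sigma error: {sigma_total:.0f} MW")
print(f"3-sigma reserve requirement: {reserve:.0f} MW")
# Versus load uncertainty alone (3 * 300 = 900 MW), the intermittency
# adds roughly 700 MW of reserves that must be carried.
```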

The transmission system is becoming more congested, and there is widespread resistance to building new transmission lines. As a result, to maintain target levels of reliability and system security, more contingencies and remedial action plans and systems are needed to cover the increased uncertainty of delivery capability. Recognizing this, the ERCOT wholesale market trades month-ahead “congestion revenue rights” products.

Real-World Examples of Related Supply-Side Reliability Events

A recent article by Dr. Paul-Frederik Bach, an expert in power system operations, discusses the impact of renewables penetration on the German power grid: “The number of interventions has increased dramatically from 2010-2011 to 2011-2012…

Bottlenecks are often detected in local grids. It makes no difference to the owner of a wind turbine if local or national grids are congested… In an attempt to establish an impression of the extent of interventions in Germany, EON Netz will be used as an example…

During the first quarter of 2012, EON Netz has issued 257 interventions. The average length was 5.7 hours. Up to 10 interventions have been issued for the same hour. A total of 504 hours had one or more interventions. Thus, there have been interventions active for 23.1 percent of the hours during the first quarter of 2012…

The total amount of curtailed energy from wind and CHP is probably modest, but the observations seem to indicate that German grids are frequently loaded to the capacity limits. Strained grids have a higher risk of cascading outages caused by single events.”

Another informative and very detailed analysis of a widespread outage in Europe in 2006 -- one which overloaded power lines and transformers in Poland by 120% and 140%, respectively -- can be found here. It includes a very interesting map of the European interconnected system showing voltage phase angle differences between substations varying from +60° to -50° across the region.

Demand-Side Uncertainty Is Also Affecting Reliability

Limited-bandwidth, short-term frequency and voltage control is provided by traditional power plants.

However, the power industry does not have closed loop control between demand and supply.

Business Case for the SG is About Automation and Control

Dom Geraghty

A Final "Pivot" in the Definition of SG 2.0

We’ve come some way in defining what the SG is, and what it is not, but we are not quite there yet – it is time for a (hopefully) final “pivot”, the purpose of which is to propose a definition of the SG that provides a solid, clear foundation upon which to develop our SG 2.0 business cases.

Here, we’ll first summarize our key conclusions derived from the series of previous dialogs about the “State of the Smart Grid”. Then we’ll propose a new (narrower) definition of SG 2.0 applications. Please click on the Smart Grid 2.0 “Category” to the right if you would like to see all of the previous SG dialogs.

Some Key Conclusions So Far About the SG, Based on Our Previous Dialogs

SG Costs/Deployment Duration

(1) It will cost about $400 billion to implement the SG nationally

(2) Required power system infrastructure replacement will cost about $1.6 trillion over the same period

(3) Full implementation of the SG will take about 30 years, and it will evolve as a hybrid of legacy and new systems, with increasing interoperability supported by a combination of custom APIs and the development and promulgation of new standards

(4) The total cost estimate above likely includes everything but the kitchen sink, and we might expect that the costs, while very substantial, will not be quite as high, based on a more thorough, more granular evaluation of a practical and economically viable deployment plan. We will suggest such an approach, which we are calling “A Managed Deployment Strategy for the SG”, in our next dialog

SG Definition

(5) In everyday conversations, the definition of the SG is plastic – the SG is viewed as including many elements that are only peripherally, at best, “smart”. For example, depending on the individual, the SG connotes or includes renewable energy, sustainability, CleanTech, electric vehicles, distributed generation, AMI, energy storage, distribution automation, and/or demand response

(6) We’ve pointed out that AMI is not the SG – it is infrastructure – see our previous presentation of the new definition of SG 2.0

Power System Control

(7) Power systems have used closed loop control for decades for generation and transmission, in the form of the AGC software application on an EMS. ISO dispatch decisions are based on load forecasts (every 5 minutes, hour, day) and tight, reactive management of Area Control Error (ACE) and system frequency (see the ACE sketch after this list). The electric distribution system does not use closed loop control.

(8) Demand forecasts have become increasingly uncertain and volatile as customers begin to self-optimize their power usage
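For reference, the Area Control Error mentioned in item (7) conventionally combines the tie-line interchange deviation with a frequency-bias term. A minimal computation, with illustrative bias and deviations:

```python
# ACE = (NI_a - NI_s) - 10 * B * (F_a - F_s)
# NI: net tie-line interchange (MW); F: frequency (Hz);
# B: frequency bias in MW per 0.1 Hz (negative by convention).
ni_actual, ni_scheduled = 1250.0, 1200.0  # MW (illustrative)
f_actual, f_scheduled = 59.98, 60.00      # Hz
bias = -150.0                             # MW per 0.1 Hz (illustrative)

ace = (ni_actual - ni_scheduled) - 10.0 * bias * (f_actual - f_scheduled)
print(f"ACE = {ace:.0f} MW")  # 50 - 30 = 20 MW; AGC nudges generation down
```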

Regulation

(9) Policy changes necessary to enable the realization of SG benefits have lagged the deployment of the SG, thus negatively impacting its ability to achieve its own fundamental policy goals

Policy and the SG

(10) The SG and CleanTech policies are symbiotic – while the SG is not CleanTech, some CleanTech elements, e.g., RPS mandates and end-use customer choice, require that the electricity grid be “smarter” if we are to maintain our present service-level reliability

(11) SG capability is also needed because of other policy-created changes in the power system, e.g., increasingly dynamic loads, increased intermittency of distributed power production, charging of EVs, penetration of ADR, smart appliances and HANs, and the increased potential for electricity distribution system instabilities -- we will discuss this latter concern in an upcoming dialog

OUR “PIVOT”: SG 2.0’S BUSINESS IS AUTOMATION AND CONTROL

As we’ve shown, the SG is not infrastructure, or CleanTech, or AMI.

The real business of the SG consists of automation and control systems:

  1. Sensors with embedded smart control firmware for local control
  2. Communications to enable systems control for a variety of time domains
  3. Control software with embedded algorithms for operations management
  4. M2M (fast response) and hybrid M2M/human control loops (slower response)
  5. “Big data” mining for critical control loop information
  6. Power system and sub-system control loop simulations and analysis (including customer response to market prices -- market response is one of the control loops and it interacts with, and affects, physical system control loops)

Thus, SG 2.0 provides the requisite control systems to support and integrate the operations of (a) CleanTech power installations, (b) the traditional power system infrastructure, and (c) power markets.

SG 2.0 automation sits on top of these three operations. It is a prerequisite to the success of the Smart Grid and power-related CleanTech policy, broadly defined.

Ironically, if we consider AMI to be a system of sensors, then it can be viewed as falling under the rubric of “automation”, since AMI provides data that can be used for control systems with slower required response times. That is, under our new “stripped-down” definition of SG 2.0 as automation and control, if the SG is really a smart control system, then AMI is part of the SG’s control infrastructure.

SG 2.0 As “Automation and Control”: Business Opportunities and Cases

Defining SG 2.0 as automation and control disentangles the evaluation of SG 2.0 applications businesses from investments in traditional power infrastructure, AMI, and CleanTech.

It provides us with a logical connection between SG 2.0 and existing AMI systems that provide some of the necessary inputs for SG 2.0 automation applications.

It clarifies and focuses the context within which we must develop and evaluate business cases for SG 2.0 applications.

There are numerous automation and control business opportunities across the entire SG value chain. We will present the more interesting of these in subsequent dialogs.

As always, comments are appreciated, in the box below.