Category Archives: Smart Grid IoT

The Smart Grid: A National Necessity (Part 1)

 

Part 1: “Real-World” Situational Analysis for a Smart Grid (SG) Transition

Dom Geraghty

Excerpts from Part 1 of "The SG National Necessity Series"

  • New energy and regulatory policy initiatives, especially the mandate to increase the percentage of renewable generation, are creating unintended reliability and cost-of-service consequences for the power grid that must be addressed – a list of the most important initiatives is presented and their impacts discussed
  • Aging electricity infrastructure threatens service reliability – replacement cost estimated to be in the region of $1.7 trillion over the next two decades
  • Implementation of the SG is necessary to address these two challenges, at an estimated cost of $400 billion over the same period
  • SG infrastructure can substantially offset the total $2.1 trillion cost with operating and capital cost savings -- a broad-based regulatory incentive framework, coupled with the elimination of current regulatory disincentives, would be an important catalyst in achieving these savings -- it is even conceivable that the operating and capital cost savings that the SG generates could pay for the replacement of aging infrastructure (see Part 5)
  • To address the unintended undesirable outcomes resulting from the new policy initiatives, we need to move from the current “nice to have” improvements created by selective use of SG applications to a “need to have” broadly-based, cyber-secure SG
  • More granular grid situational awareness is the first, prerequisite step in the transition to the SG -- especially awareness of the traditionally sparsely-monitored distribution system and behind-the-meter operations -- we must move beyond static optimal power flow models
  • Today, SG applications are delivering sensing, monitoring, and diagnosis – collectively, situational awareness, and also some control functionality -- as we increase our understanding of operations of the newly-configured grid, and develop more sophisticated algorithms, we will progress to automation, and ultimately to optimization of the grid’s operations
  • Some of the required SG technology is already commercial, e.g., high accuracy sensing, wide-spectrum capture, very low noise floors, miniaturized high-performance processing, low-latency communications links; some technology needs to be developed and/or demonstrated in the field, e.g., advanced control and optimization algorithms
  • Still, lags in enabling regulatory incentive policies are inhibiting the transition to the SG
  • National power sector goals are proposed that provide meaningful metrics and motivation for the transition to the SG, including, for example, targeting SAIDI at 60 – 70 minutes (4 x 9s is equivalent to ~53 minutes -- see the arithmetic below), and improving asset utilization percentage by five percentage points
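For reference, the “4 x 9s” equivalence in the last bullet is straightforward arithmetic (using a 365.25-day year):

```latex
\underbrace{(1 - 0.9999)}_{\text{unavailability}}
\times \underbrace{365.25 \times 24 \times 60}_{\approx\, 525{,}960 \text{ min/yr}}
\approx 52.6 \text{ minutes/yr}
```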

"Human, All Too Human"

           -Friedrich Nietzsche, 1878

“It is time for man to fix his goal. It is time for man to plant the seed of his highest hope…What is great in man is that he is a bridge, and not an end; what can be loved in man is that he is an overture…

Look here my brothers! Do you not see it, the rainbow and the bridges of the Übermensch?”

            -“Thus Spake Zarathustra”, Friedrich Nietzsche, 1885

As always, comments are welcome and appreciated.

In the Smart Grid, Über-Sensors Declare “When = Where”


 

Dom Geraghty

 

Transitioning to the power sector’s Smart Grid (SG) involves delivering the full continuum of functionality for SG applications, as follows:

  1. Sensing
  2. Monitoring
  3. Diagnosis
  4. Control
  5. Automation, and
  6. Optimization.

Of these functionalities, most of the SG applications today deliver sensing, monitoring (i.e., sensing plus communication), and some level of diagnosis which may require additional post-processing. The remaining three functionalities -- control, automation, and optimization -- are much more difficult to implement for a variety of reasons, not the least of which is the dearth of algorithms available to support these applications in a grid that is often operating in ways for which it was not originally designed and in electricity markets that are not fully integrated.

The Importance of High-Performance Sensors

All SG applications require some form of sensing – it is a prerequisite functionality upon which all of the other functionalities in the chain of applications depend.

But sensing, while basic on the surface, is not “a walk in the park” for a grid that is undergoing major physical and market structural changes even as it moves towards increasing automation.

Traditionally, we have used static optimal power flow (OPF) models for state estimation of the nodes in the distribution system and for designing power security protection schemes. With the computer processing capability available today, these models have become quite fast, with state recalculations for the entire distribution system now possible in seconds using blade servers. Today a full simulation can be completed well within a typical SCADA cycle. But the results are still “estimates” of the state of the distribution system.
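For readers who want the computational kernel behind those fast recalculations, here is a minimal DC power-flow sketch -- a linearized stand-in for full OPF, on an invented 4-bus network. Solving B·θ = P for bus angles is the step that modern servers repeat across thousands of nodes in seconds:

```python
import numpy as np

# Minimal DC power flow: a linearized stand-in for full OPF/state estimation.
# The 4-bus network below is invented for illustration; bus 0 is the slack bus.
# lines: (from_bus, to_bus, susceptance in per-unit)
lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 3, 5.0), (2, 3, 4.0)]
n = 4

# Assemble the bus susceptance matrix B.
B = np.zeros((n, n))
for f, t, b in lines:
    B[f, f] += b
    B[t, t] += b
    B[f, t] -= b
    B[t, f] -= b

# Net injections in per-unit (generation positive, load negative).
P = np.array([0.0, 0.5, -0.8, 0.3])

# Fix the slack angle at zero and solve the reduced system for the other buses.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow directly from the angle differences.
for f, t, b in lines:
    print(f"flow {f}->{t}: {b * (theta[f] - theta[t]):+.3f} p.u.")
```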

While better OPF models are necessary, they are not sufficient to address the upcoming challenges of the SG because the distribution system is being operated in ways never contemplated when it was designed, due to the introduction of:

  1. Two-way power flows associated with distributed resources, micro-grids, and virtual power plants
  2. Intermittency introduced by PV installations in distribution systems
  3. Electric vehicle charging
  4. Automated demand response, and
  5. M2M appliances in end-user facilities

The most challenging impacts of these changes are:

  1. Volatility of distribution power flows is increasing significantly
  2. The rate of change of power flow metrics is accelerating, and
  3. Short-term load forecasts of “net demand” are exhibiting a broader range of uncertainty than before, creating challenges for operators to schedule production that meets minute-to-minute demand

As a result, even the fastest OPF models need to be supplemented (as well as re-calibrated) with real-world data that reflect these new operating regimes and uncertainties.
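A minimal illustration of the third challenge above, with entirely synthetic numbers: subtracting volatile PV output from a smooth gross load dramatically widens the minute-to-minute ramps that operators must follow.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240)  # a synthetic 4-hour window at 1-minute resolution

# Smooth gross load vs. cloud-driven PV output (illustrative shapes only).
load = 100 + 10 * np.sin(2 * np.pi * t / 240)                 # MW
pv = np.clip(30 + 15 * rng.standard_normal(t.size), 0, None)  # MW

net = load - pv  # the "net demand" the dispatcher actually schedules against

# Minute-to-minute ramp volatility: std. dev. of the first difference.
print(f"gross load ramp volatility: {np.diff(load).std():5.2f} MW/min")
print(f"net load ramp volatility:   {np.diff(net).std():5.2f} MW/min")
```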

Bottom line: to maintain acceptable reliability, security, and stability levels as the physical grid and its associated markets are re-structured, we need to have a much more granular, real-time situational awareness of the distribution system.

How do we address this need for heightened situational awareness as we continue to implement interoperable SG applications?

Introducing the Über-Sensor Platform

Let’s assume that we have installed a set of ultra-accurate, ultra-fast sensor platforms in our distribution system. The sensors are synchronized to the same “very exact” nanosecond-accuracy clock. All sensor measurements are time-stamped and concurrent.

What do the sensor platforms consist of? Each sensor consists of a high accuracy analog front-end (AFE) and a flexible, high-performance field programmable gate array (FPGA) integrated as a system-on-module (SOM), connected to, or embedded in, power measurement devices or intelligent electronic devices (IEDs).

Using technology available today, combined with some proprietary technology configured in a novel manner, the AFE can be highly accurate – providing, for example, a steady noise floor of ~-150 dB all the way out to 10⁵ Hz and perhaps ~-120 dB at 10⁶ Hz. The AFE can be supplemented by fast digital signal processors (DSPs) and hardware accelerators. Each sensor could then provide wide spectrum capture, power system hardware DSP, programmable DSP cores, programmable response, a customizable IED firmware stack, and the ability to communicate securely in milliseconds. It is, in effect, a platform that can support all six of the SG functionalities discussed in the opening paragraph of this dialog.
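To make the noise-floor figures concrete, here is a small sketch of how a floor might be estimated from a captured waveform. The data are synthetic and the sampling rate and dB levels are illustrative -- this is not a claim about any particular AFE:

```python
import numpy as np

fs = 2_000_000           # 2 MS/s sampling rate (illustrative)
n = 1 << 18              # capture length
t = np.arange(n) / fs

# Synthetic capture: 60 Hz fundamental, a tiny 100 kHz artifact, wideband noise.
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 60 * t) + 1e-5 * np.sin(2 * np.pi * 100_000 * t)
x += 1e-7 * rng.standard_normal(n)

# Windowed FFT -> spectrum in dB relative to the fundamental (dBc).
w = np.hanning(n)
spec = np.abs(np.fft.rfft(x * w))
spec_db = 20 * np.log10(spec / spec.max() + 1e-300)
freqs = np.fft.rfftfreq(n, 1 / fs)

# Median level away from the tones approximates the noise floor.
floor = np.median(spec_db[(freqs > 1_000) & (freqs < 900_000)])
print(f"estimated noise floor: {floor:.0f} dBc")
```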

A sensor platform with these performance characteristics could be used in point-on-wave, phasor, power/energy, and power quality measurement applications. Electricity waveforms could be sampled with high accuracy at high frequency. For example, a 2 µs transient could be captured and displayed with high fidelity. For distribution system voltage levels, total vector error would need to be below ~0.2%, which is attainable today, but not without some difficulty.
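For reference, total vector error (TVE), as defined in the IEEE C37.118 synchrophasor standard, compares the estimated phasor X̂ against the reference phasor X:

```latex
\mathrm{TVE}
= \frac{\lvert \hat{X} - X \rvert}{\lvert X \rvert}
= \sqrt{\frac{(\hat{X}_r - X_r)^2 + (\hat{X}_i - X_i)^2}{X_r^2 + X_i^2}}
```

The standard’s steady-state compliance limit is 1% TVE; the ~0.2% figure above is considerably tighter.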

With an Über-Sensor, We Can Equate “When” with “Where” in the Distribution System

Imagine what we could do with this Über-sensor platform/network in the electric distribution system.

Here’s an example: an incipient fault, or a 2 µs transient created by a lightning strike, has created an aberration in the waveform emanating from a particular location in the distribution system.

It spreads like the ripples from a stone cast into a pool. “I felt a great disturbance in the Force”, says Obi-Wan Kenobi. In Gridiopolis, the Engineer Ultra (über, even for an engineer) feels the hair stand out on his Greek neck and he reaches quickly for his krater. But seriously…

In the power system, just like in “The Force”, we know that everything is connected to everything in dynamic meta-stable equilibrium – it is a network of inter-dependent nodes in constant motion, maintained in equilibrium by grid operators.

Emanating outwards from the location of the above transient event, associated disturbances in the waveform propagate across the distribution system and are sensed by the distributed über-sensor platforms. Because all of the sensor measurements are synchronized and concurrent, the time taken for the propagation to reach an individual sensor is known, and thus the disturbance can be traced back to a particular location by a form of triangulation between the sensors. (Of course, such a locational algorithm is yet to be developed, and we would also need pattern-recognition and filtering algorithms to eliminate the modulations of the disturbance introduced by intermediate devices.)
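To fix ideas, here is a minimal least-squares sketch of that triangulation (more precisely, multilateration on times of arrival). Everything in it is a simplifying assumption -- the 2-D geometry, the sensor positions, and the propagation speed -- since, as noted above, the real locational algorithm is yet to be developed:

```python
import numpy as np
from scipy.optimize import least_squares

v = 180.0  # assumed propagation speed, meters per microsecond (illustrative)

# Hypothetical synchronized sensor positions (meters, 2-D simplification).
sensors = np.array([[0, 0], [5000, 0], [0, 4000], [6000, 5000]], float)
true_origin = np.array([2200.0, 1700.0])

# Synthesize arrival times on the common clock (emission time t0 = 0 here,
# but treated as unknown by the solver, as it would be in the field).
t_arrive = np.linalg.norm(sensors - true_origin, axis=1) / v  # microseconds

def residuals(p):
    x, y, t0 = p  # unknowns: event location and emission time
    pred = t0 + np.linalg.norm(sensors - [x, y], axis=1) / v
    return pred - t_arrive

sol = least_squares(residuals, x0=[1000.0, 1000.0, 0.0])
print(f"estimated origin: {sol.x[:2].round(1)}  (true: {true_origin})")
```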

Yes, “When” Can Equal “Where” in the Distribution System

In effect, if we know very precisely the “when” of a disturbance, then, with a high-performance set of sensor platforms, we should be able to determine the “where” (the origin of the disturbance). In addition, with pattern recognition, the type and likely cause of the disturbance could also be determined.

And so, we wouldn’t merely detect a highly transient fault or an incipient fault, we would also locate its origin with the same sensors, and very quickly too.

With a network of über-sensor platforms, we would not need an OPF model to estimate states of the distribution grid, because we would “know” the states in real-time. We could archive and use this operations knowledge to re-calibrate the OPF model for off-line planning applications and post-mortems of disturbance events and remediation schemes.

This capability would support two high-value SG applications:

  1. High-speed fault detection, location, and causality
  2. Intelligent replacement of aging power system infrastructure through the identification of incipient failures

Availability of Über-Sensors Today

Über-sensor platforms like this are available today as “evaluation boards”. They go beyond ARM capability to add all of the additional functionality above, with a much-reduced chip-count per module. After they’ve been thoroughly tested and refined in upcoming field applications, they will ultimately be turned into high volume, low-cost ASICs, embeddable in IEDs commonly used in distribution systems.

Why Do We Need Über-Sensors?

Policy-driven structural changes are occurring, and are expected to occur, in the power grid and its associated markets. These changes will have profound operational implications for the grid. The transition to “Smart Grid” applications is being driven by the need to cope with these operational challenges.

In this context, if we are to deliver the same or better reliability of service at an affordable cost to electricity customers, we will need the above high-performance/low-cost sensors as fundamental building blocks to support our ultimate goal -- an interoperable, automated, optimized SG capable of maintaining resilient dynamic equilibrium while under constant fire from millions of continuously-occurring potentially destabilizing events.

 As always, comments are welcome and appreciated.

Implementation of Interoperability in the “Real World” of the Smart Grid (SG)


Dom Geraghty

Abstract

U.S. energy policy initiatives are changing the structure of the physical power system and power system markets. While achieving policy goals, they also create undesirable side-effects for service reliability and power costs. Smart Grid (SG) applications can mitigate these side effects. However, the SG can only work if its applications are interoperable because objects in the power grid network are inter-dependent.  In the “real world”, interoperability comes in many forms. Whatever form it takes, it is a prerequisite for the implementation of SG applications, which in turn are required to ameliorate the undesirable and/or unintended side-effects of salutary and broadly-supported energy policy initiatives.

Situation – Structural Changes in the Power Sector

Today, there are two primary structural changes occurring in the U.S. power grid: (1) physical (ongoing) and (2) power markets (early phases). These changes are being driven by the following energy policy initiatives:

  • Renewable Portfolio Standards (RPS) mandates, which introduce increasing amounts of intermittent/variable power production
  • Promotion of, and subsidies for, distributed energy resources (DERs), micro-grids, virtual power plants (VPPs)
  • Promotion of electric vehicles (EVs)
  • Availability of demand response (DR)/load dispatch programs
  • Availability of increased end-use customer choice/energy management options such as smart thermostats, “Green Button”, home automation systems, building automation systems, dynamic rates
  • Integration of wholesale and retail markets, and integration of physical and power market operations

Side-Effects of Energy Policies – There Is a Disturbance in the Force

The above structural changes resulting from energy policy changes have some undesirable side-effects that impact service reliability and create challenges for grid operators.

The operations-related side effects of policy-related structural changes in the grid include:

  • More volatile operations as a result of intermittent resources
  • Events/actuations happen faster – machine-to-machine (M2M), some automation – creating a need to manage system security/protection and system stability more tightly
  • Un-designed-for operation of traditional distribution systems, e.g., two-way flows in distribution systems, high-gain feedback loops due to price-responsive demand management programs
  • Visualization of the instantaneous “state of the grid” becomes more challenging
  • The power dispatcher’s job becomes more complex in terms of matching supply and demand on a quasi-real-time basis, e.g., load-following is more demanding
  • Forecasting the “net” load curve is more uncertain
  • More reserves are required to maintain service reliability targets

The side-effects occur because the electricity grid is an interconnected network. Energy policies can affect service reliability negatively because an undesigned-for change in one part of the grid’s operation affects other parts of the grid to an unknown extent, e.g., Lorenz’s “butterfly effect” -- the sensitive dependency on initial conditions in which a small change at one place in a deterministic nonlinear system can result in large differences in a later state. Everything in the electricity grid is interdependent – everything is connected to everything else.
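As a generic illustration of that sensitive dependency (the canonical Lorenz system itself, not a grid model): two trajectories started 10⁻⁹ apart end up macroscopically far apart.

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations (coarse, but fine here).
    x, y, z = s
    ds = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return s + dt * ds

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # a "butterfly"-sized perturbation

for _ in range(6000):  # 30 time units
    a, b = lorenz_step(a), lorenz_step(b)

print(f"separation after 30 time units: {np.linalg.norm(a - b):.2f}")
```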

Examples of this interconnectedness in action in the electric power grid include:

  • The November 2006 European grid collapse into three separate domains as phase angles sharply separated between the north, south and east, due to insufficient coordination between transmission system operators and non-fulfilment of an N-1 criterion
  • The proven ability of a 120V wall socket in a University of Texas at Austin building to sense disturbances in the ERCOT grid over 350 miles away

Other, More Generic, Undesirable/Unintended Side-Effects of New Energy Policies

The policy-created structural changes in the power sector can also create other undesirable side-effects:

  • Increased costs, because the “first cost” of SG-related equipment is almost always higher than that of existing (less-smart) equipment
  • Reductions in system load factor

Bottom Line – Undesirable Side-Effects Need to Be Addressed

If unmitigated, the implementation of the above broadly-supported policy initiatives creates undesirable reliability, cost and asset utilization side-effects under business-as-usual power grid operations -- enter the SG with solutions to mitigate or even eliminate some of these side-effects.

A Brief Digression - Definition of the SG as an Intelligent Network

Before discussing how SG applications can mitigate or eliminate undesirable side-effects of new energy policies, it is important to define the SG.

The ultimate SG is a network of physical objects related to the generation, delivery, and utilization of electricity -- the objects are provided with unique identifiers and the ability to transfer data over a network.

Internet of (Smart Grid) Things – Achieving Interoperability


 

Dom Geraghty

In the previous dialog, we introduced the “Internet of (Smart Grid) Things”, or “Io(SG)T”, a real-world microcosm of Cisco’s IoT or IoE. We make no apology for accepting the admitted Cisco “spin” -- we have already been living this “spin” since the advent of the SG.

The SG is a microcosm of the IoT because we have defined the ultimate SG as an automated plug and play system, just like we increasingly plug and play today on the Internet, moving inexorably towards the universal plug and play IoT or IoE in the future. The concept is similar to “The Feed” in Neal Stephenson’s book, “The Diamond Age” – an ultra-reliable commodity mix priced at marginal costs.

Our ultimate SG replicates the IoT concept as a power sector subset or “vertical” – it is a textbook crucible because we can draw a clear unbroken boundary around the sector. Within this bounded system are the inter-operations of power production, transmission, distribution, and end-use of electricity, representing as a whole the networked infrastructure of the SG.

Here, we define this automated, interactive power sector vertical as the Io(SG)T. The power sector is the first of many networked industry sub-sets of the IoT likely to emerge over time. While we have not been explicitly calling it the Io(SG)T until now, as professionals within this sector we’ve been working on it for more than a couple of decades.

However, there is still a long way to go to automate this very complex machine -- our electricity grid. We posit a tentative time-frame of at least 30 years for the ultimate SG, the Io(SG)T, and, as we’ve said before, there will be many zigs and zags along the way.

Let’s Back Up a Little – Why Is the Io(SG)T Needed?

Energy and environmental policies, regulations, and mandates are creating operational challenges for the electricity grid – they call for the grid to be operated in ways for which it was not designed, e.g., accommodating intermittent renewable generation, two-way power flows in distribution systems, evolving integration of wholesale and retail power markets, autonomously operated distributed generation and storage, M2M-enabled smart appliances and buildings, and microgrids.

These new developments make it a challenge to maintain traditional 4 x 9s service reliability levels, increase the stresses on the aging infrastructure of the grid, increase electricity costs, decrease asset utilization factors, and add substantial uncertainty to the net load/demand curve against which grid operators dispatch generation and delivery assets.

Fortunately, today’s new SG technologies and applications can provide the processing power, reaction speed, bandwidth, accuracy, interoperability (sometimes), and instant situational awareness across the grid necessary to accommodate the new operational challenges above while operating the grid reliably and safely.

Actually, we don’t really have any option – intelligent automated functionality of the SG has become a prerequisite for economical, reliable, and safe operation of the power grid in the future.

The Costs of the Io(SG)T Can Be Managed

Turning to the costs, we’ve recently presented a comprehensive analysis of the elements of a least-cost Io(SG)T deployment strategy that includes (1) locational selectivity (using the 80%/20% rule) in our deliberate shift from a “blanketing” approach for SG applications to a prioritized deployment approach, and (2) equipment rejuvenation -- retrofitting rather than “rip and replace” approaches.

We estimate that this strategy has the potential to decrease initially estimated Io(SG)T costs by half an order of magnitude – that is, the deployment of the Io(SG)T will cost about $400 billion through 2030.

The approach recognizes that SG applications will offset capital requirements over time and reduce operating costs going forward. For example, SG applications can increase asset utilization (freeing up “latent” capacity), reduce power system losses, and increase the economic efficiency of power markets.

We Are Beginning to Stack Up the Benefits of the Io(SG)T

For many individual SG applications, we are still in early days in terms of calculating and crediting all of the benefits in the “stack” of benefits, but it is happening. For example, we are beginning to see numerous utilities leverage the very first SG applications, i.e., AMI systems, to improve outage management, reduce truck rolls, improve billing, identify and eliminate electricity theft, manage voltage, and monitor transformers. All of these applications add incrementally to the AMI benefits stack.

Other real-world examples of accruing multiple benefits from an SG application:

  • Some grid operators are using SG applications with energy storage to “smooth” intermittency, shift and shave peak, and create arbitrage opportunities in wholesale power markets, presenting a “stacked” benefits analysis of energy storage systems
  • The power industry is in the middle of deploying phasor measurement units (PMUs) across regional transmission grids to provide very sophisticated situational awareness and safely operate our systems much closer to their capacity limits. The benefits stack includes freeing up latent capacity in the power infrastructure, reducing reserve requirements for the increasing proportion of intermittent generation, and relieving congestion on transmission lines

Bottom line: the Io(SG)T is a prerequisite for accommodating energy and environmental policy goals and we continue to quantify in the field additional benefits of Io(SG)T applications that improve benefit-to-cost ratios.

But how do we connect all of these dispersed and opportunistic SG applications so that they work together seamlessly in an Io(SG)T?

APIs – Learn to Love Them in the Io(SG)T

Sub-Title: The Interoperability Continuum

The SG automation roadmap involves measurement (advanced sensors – we need these first to enable the rest of this roadmap), monitoring (communications), diagnosis (analytics and visualization), and control (algorithms) all operating simultaneously across the physical and market nodes of the SG, and across the multiple operating and back-office systems of grid operators.

For the automation roadmap to be realized, the SG applications and utility systems upon which they operate must be interoperable.

To date, interoperability between applications or systems has been achieved by using standards developed by Standards Development Organizations (SDOs) or by purpose-built Application Programming Interfaces (APIs). It is actually a little more complicated than that, however. Think of it as an “Interoperability Continuum”. At the left-hand end, no standard exists for a proposed interface/integration. At the right-hand end, a mature standard exists which can be used for the interface. Moving from left to right denotes increased general interoperability.

Proprietary APIs

We define a proprietary API as a customized interface that is developed by a vendor, system integrator, or grid operator for a special-purpose application that is usually one-off. It usually connects proprietary systems to other proprietary or home-grown systems. For example, an AMI vendor may develop a proprietary API to connect its system to the utility’s home-grown OMS or billing system.  Vendors do not share proprietary protocols because they believe that they enhance their competitiveness, capture the customer for “up-sells”, and increase the value of their ongoing Service Level Agreements (SLAs).

The next step in the continuum of increasing interoperability is a hybrid interface consisting of the combination of a proprietary API and an existing standard that allows vendors, system integrators and grid operators to connect proprietary systems to systems with standardized interfaces. While a standard is utilized in the hybrid, the interface is still controlled by the developer of the API. In this case, an example would be the interface between a proprietary AMI system and the CIM standard.
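A toy sketch of what such a hybrid interface amounts to in code. The vendor field names and the CIM-flavored target shape are invented for illustration -- this is the adapter idea, not the real CIM model:

```python
from dataclasses import dataclass

# Invented stand-in for a vendor's proprietary AMI payload.
proprietary_reading = {"mtrID": "A-4411", "kwh": 1234.5, "ts": 1380000000}

@dataclass
class CimStyleMeterReading:
    # Simplified, CIM-flavored target shape (illustrative only).
    mrid: str
    value_kwh: float
    timestamp_utc: int

def adapt(raw: dict) -> CimStyleMeterReading:
    # The proprietary half of the hybrid interface: it knows the vendor's
    # field names; the output side conforms to the standardized shape.
    return CimStyleMeterReading(mrid=raw["mtrID"],
                                value_kwh=raw["kwh"],
                                timestamp_utc=raw["ts"])

print(adapt(proprietary_reading))
```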

Open APIs

By providing an Open API (i.e., an open-source API available to third-party developers), the vendor relinquishes control of its interface. In return, the vendor with a superior product will attract independent third-party developers who create interfaces for value-adding SG applications, thus increasing the demand “pull” for the vendor’s equipment or systems.

This is the business model used today by Facebook, Apple, and others to create customer “stickiness” through an interoperable applications portfolio that they do not have to develop themselves. It’s a strategy for increasing market share. As part of the Web 2.0 business model, it was initially conceived to (1) allow web-sites to inter-operate, (2) create virtual collaborative services environments to support, for example, the professional interface between a designer and an architect, and (3) expand social media platforms.

The Open API is the future of SG applications if it logically follows the IoT (r)evolution. As part of the interoperability continuum, it can inter-operate with a widely deployed standard, or it can be offered as a stand-alone independent API. While this goes against the traditional business protection instincts of vendors, it in effect “outsources” applications development relevant to its offerings – development resources that it gets for free -- in return for the internal cost of developing and offering the Open API itself.

Furthermore, the Open API, while provided free to developers, can have an associated “revenue license” where the vendor receives some portion of any revenue earned by a third-party developer commercializing a SG application on top of the Open API. Again, the Web 2.0 business model can apply. In the long run, the vendor may even decide to acquire some of the third-party developers of interfaces based on its Open API. The advantage to the vendor, along with the potential for increased product demand, is that the initial business/development risk and investment are undertaken by a third-party.

Mature Standards

Finally, at the right-hand end of the continuum above, there are scores of mature standards that already allow plug and play to happen between SG systems.

But, at least at present, it is common to find that a mature standard that facilitates some interfaces may be “partial” in the sense that it does not cover (1) other (perhaps newer) interfaces with grid systems that the application also impacts, or (2) different layers within SG application interfaces which may not be compatible with layers in the systems to which they interface – thus requiring the combination of an API and a mature standard to accomplish the integration/interoperability for an advanced SG application.

So, a mature standard will remain a moving target as the Io(SG)T continues to evolve, and APIs will continue to be needed (and create value) as the automation of the SG progresses.

For a more detailed discussion of this last point, see the very interesting position paper that Scott Neumann wrote for GWAC.

A Glimpse into the Future: Could We Leapfrog the Development of SG Control Algorithms/Standards Altogether?

Compared to generation and transmission systems, the distribution system in the SG has less-well developed grid control algorithms and virtually no associated standards. These algorithms will sit within distributed intelligent processors embedded in IEDs dispersed throughout the distribution system. Using state estimation tools, dynamic load flow modeling, and real-time state information from high-speed ultra-accurate PMU chipsets, we are just beginning to develop control algorithms for actuators in the distribution system and for remedial action scheme firmware aimed at protecting the system during contingencies.

Could we instead leap-frog today’s relatively mature “systems on chips” (SoCs) technology by substituting memristor chipsets embedded in IEDs? That is, adaptive learning chips based on how the brain’s synapses operate, conducting processing and memory activities simultaneously -- using this approach, couldn’t we derive control algorithms empirically? OK, assuming that we also had ultra-fast sensors.

But memristors and ultra-fast sensors already exist today…

So maybe we don’t need to worry too much about the last (most difficult) step in the Io(SG)T roadmap, i.e., the development of SG control algorithms, which would take place in earnest perhaps a decade from now. Why? Because we won’t need standard grid control algorithms per se -- memristors, with processing speeds six orders of magnitude faster than today’s best technology, will sort it all out for us empirically -- we just need to interface them with those ultra-fast sensors that measure and communicate the requisite data to the memristors over interoperable SG networks.

More on these cutting-edge technologies in a future dialog…

As always, comments welcome and appreciated.

Smart Sensors for the SG: You Can’t Manage What You Don’t Measure


 

Dom Geraghty

 

An Integrated Measurement Strategy for the SG

Obviously, since the SG is inanimate, we don’t expect it to intuit how to do “smart” things by itself (!). We have to provide it with data, and with analytical rules. Even the AI-based algorithms recently introduced in some SG applications must still derive their “learnings” from empirical data.

At present, the deployment of SG applications can be characterized as “tactical” -- uncoordinated with other activities in the SG, and special-purpose in nature – certainly not following the holistic, long-term visions of SG architectures and power markets developed by such entities as GWAC, NIST, EPRI, IEC, IEEE, SGIP, etc. The result is a hodge-podge of application-specific sensors with different capabilities which don’t communicate across applications and which operate in different time domains. But it does not need to be like that, as we shall outline below.

Let’s Define Measurement, Sensors, and Smart Sensors

Smart sensors are the fundamental building blocks for the implementation of a truly “smart” grid. They are an essential part of every SG solution. Regular analog sensors become the “smart sensors” of the SG when they add intelligence to the measurement function, i.e., analog to digital conversion, processing power, firmware, communications, and even actuation capability.

We can think of smart sensors as the first link in a four-link SG decision-making chain (a schematic sketch follows the list) that consists of:

(1) Location-specific measurement -- sensor function only

(2) Monitoring -- a sensor with one-way communications functionality

(3) Diagnosis at the “edge” -- a sensor with localized diagnostic intelligence based on data analytics and/or centralized diagnosis based on communicated sensor data

(4) Edge-embedded control actions (based on embedded algorithms, including Remedial Action Schemes (RAS)) -- a sensor with intelligence and control/actuator capability. The algorithms for this functionality could also be centralized and use two-way communications with an “edge” sensor/actuator, and/or they could drive peer-to-peer coordination of control actions at the “edge”; however, a substantial amount of R&D still needs to be done to develop autonomous real-time or quasi-real-time control algorithms for power distribution systems
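Here is the promised schematic sketch of the four links as a processing chain. All values and thresholds are invented for illustration:

```python
# Schematic of the four-link chain: measure -> monitor -> diagnose -> control.

def measure() -> float:
    return 1.07  # per-unit voltage from a (simulated) local sensor

def monitor(v: float) -> float:
    print(f"telemetry: V = {v:.2f} p.u.")  # the one-way communications link
    return v

def diagnose(v: float) -> str:
    return "overvoltage" if v > 1.05 else "normal"  # edge analytics

def control(condition: str) -> None:
    # Edge-embedded remedial action -- the link that still needs the most R&D.
    if condition == "overvoltage":
        print("action: lower regulator tap one step")

control(diagnose(monitor(measure())))
```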

To Date, Smart Sensor-Based Measurement in the SG Has Been “Tactical”

Granted, as we’ve said before, there is a reason for this tactical approach to sensor deployment – up to now, the choices of SG projects have been driven by energy and regulatory policies and rules that target a limited set of SG applications. Fair enough -- none of us expect that the evolution of the SG will follow a “grand deployment plan” – it will be imperfect, following the zigs and zags of these real-world drivers.


Interoperability of Smart Grid (SG) Applications Is Mission-Critical, And Good For Business Too


 

Dom Geraghty

 

It is clear to us that energy policy and regulations are the key drivers of the business case for SG applications, and not technologies. These policies/regulations promote, for example, RPS mandates, dynamic pricing, demand response, competitive market structures, self-optimizing customers (e.g., distributed generation and storage, smart appliances, micro-grids), electric vehicles, cyber-security, and data privacy. It is a kind of “policy-push” market, with SG applications in a “catch-up” mode.

In order to implement the new policies and regulations in all of their complexities and not-designed-for impacts on the traditional electricity grid, while still maintaining the current levels of service reliability, stability and security, the grid needs to be smarter, and react faster. We will be operating “closer to the edge”.

The SG is at its core about automation, control, and optimization across the power system operations – both physical and market operations. For example, it comprises smart sensors, intelligent electronic devices, communications systems, M2M interfaces, data analytics, situation awareness and alerts, and control systems.

In its ideal form, the SG is a system of systems that in essence has the potential to optimize power system operations and capacity requirements. To realize this potential, i.e., for the grid to be “smart”, these systems ultimately need to be interoperable since the SG is an interconnected system from generation all the way to end-use of electricity.

The above new policies/regulations are out ahead of the SG in terms of technology, interoperability, and grid operations – the SG is playing “catch-up”. But more importantly, we also need the SG in order to realize the full benefits of these new policies and regulations.

The “catch-up” situation can lead to unintended/undesirable consequences related to the operation and reliability of the power system.

Fortunately, SG applications have the capability, if not yet the readiness, to mitigate these risks, provided they are interoperable.

The Transition to an “Ideal” SG Architecture Will Be Messy -- We Are Going To Feel Uncomfortable

As engineers, we like tidiness. In a perfect world, the transition to a fully-functional SG would be orderly and paced to accommodate new applications while protecting grid integrity: perhaps a three-stage transition -- from today’s operations’ data silos in utilities to a single common information bus, then to many common, integrated buses, and finally to a converged system.

But in a non-perfect world, i.e., reality, the SG will evolve as a hybrid of legacy and new systems -- it will not be an optimized process – there will not be a “grand plan” – clusters of interoperability will appear here and there across the SG.

The transition will take perhaps 30 years -- not for technology-based reasons, but because the “refresh cycle” for utility assets is lengthy – so, there’s time for a whole career for all of us in deploying SG applications!

Business Case for the SG is About Automation and Control


 

Dom Geraghty

 

A Final "Pivot" in the Definition of SG 2.0

We’ve come some way in defining what the SG is, and what it is not, but we are not quite there yet -- it is time for a (hopefully) final “pivot”, the purpose of which is to propose a definition of the SG that provides a solid and clear foundation upon which to develop our SG 2.0 business cases.

Here, we’ll first summarize our key conclusions derived from the series of previous dialogs about the “State of the Smart Grid”. Then we’ll propose a new (narrower) definition of SG 2.0 applications. Please click on the Smart Grid 2.0 “Category” to the right if you would like to see all of the previous SG dialogs.

 

Some Key Conclusions So Far About the SG, Based on Our Previous Dialogs

SG Costs/Deployment Duration

(1)    It will cost about $400 billion to implement the SG nationally

(2)    Required power system infrastructure replacement will cost about $1.6 trillion over the same period

(3)    Full implementation of the SG will take about 30 years, and will evolve as a hybrid of legacy and new systems, with increasing interoperability being supported by a combination of custom APIs and the development and promulgation of new standards

(4)    The total cost estimate above likely includes everything but the kitchen sink, and we might expect that the costs, while very substantial, will not be quite as high, based on a more thorough, and more granular, evaluation of a practical and economically viable deployment plan. We will suggest such an approach in what we are calling “A Managed Deployment Strategy for the SG” in our next dialog

SG Definition

(5)    In everyday conversations, the definition of the SG is plastic – the SG is viewed as including many elements that are only peripherally, at best, “smart”. For example, depending on the individual, the SG connotes or includes renewable energy, sustainability, CleanTech, electric vehicles, distributed generation, AMI, energy storage, distribution automation, and/or demand response

(6)    We’ve pointed out that AMI is not the SG – it is infrastructure – see the previous presentation of our new definition of SG 2.0

Power System Control

(7)    Power systems have used closed-loop control for decades for generation and transmission, in the form of the AGC software application on an EMS. ISO dispatch decisions are based on load forecasts (every 5 minutes, hour, day) and tight, reactive management of Area Control Error (ACE) and system frequency -- see the ACE sketch after item (8). The electric distribution system does not use closed-loop control.

(8)    Demand forecasts have become increasingly uncertain and volatile as customers begin to self-optimize their power usage
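For reference, the ACE calculation mentioned in item (7) takes the standard form ACE = (NIa − NIs) − 10B(Fa − Fs). A minimal sketch, with an invented bias value:

```python
def area_control_error(tie_actual_mw: float, tie_sched_mw: float,
                       freq_actual_hz: float, freq_sched_hz: float = 60.0,
                       bias_mw_per_0_1hz: float = -50.0) -> float:
    # ACE = (NIa - NIs) - 10*B*(Fa - Fs), with the frequency bias B expressed
    # in MW per 0.1 Hz (negative by convention); the -50 value is illustrative.
    return (tie_actual_mw - tie_sched_mw) \
        - 10.0 * bias_mw_per_0_1hz * (freq_actual_hz - freq_sched_hz)

# Example: exporting 20 MW over schedule while frequency sags 0.02 Hz.
print(f"ACE = {area_control_error(520.0, 500.0, 59.98):+.1f} MW")
```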

Regulation

(9)    Policy changes necessary to enable the realization of SG benefits have lagged the deployment of the SG, thus negatively impacting its ability to achieve its own fundamental policy goals

Policy and the SG

(10) The SG and CleanTech policies are symbiotic – while the SG is not CleanTech, some CleanTech elements, e.g., RPS mandates, end-use customer choice, require that the electricity grid be “smarter” if we are to maintain our present service level reliability

(11) SG capability is also needed because of other policy-created changes in the power system, e.g.,  increasingly dynamic loads, increased intermittency of distributed power production, charging of EVs, penetration of ADR, smart appliances and HANs, and the increased potential for electricity distribution system instabilities -- we will discuss this latter concern in an upcoming dialog

OUR “PIVOT”: SG 2.0’S BUSINESS IS AUTOMATION AND CONTROL

As we’ve shown, the SG is not infrastructure, or CleanTech, or AMI.

The real business of the SG consists of automation and control systems (a minimal control-loop sketch follows the list):

  1. Sensors with embedded smart control firmware for local control
  2. Communications to enable systems control for a variety of time domains
  3. Control software with embedded algorithms for operations management
  4. M2M (fast response) and hybrid M2M/human control loops (slower response)
  5. “Big data” mining for critical control loop information
  6. Power system and sub-system control loop simulations and analysis (including customer response to market prices -- market response is one of the control loops and it interacts with, and affects, physical system control loops)
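Here is the promised minimal sketch of item 3’s control loop -- a discrete PI regulator nudging feeder voltage back to its setpoint. All gains and plant numbers are invented:

```python
# Minimal discrete closed-loop control sketch: PI regulation of feeder voltage.
kp, ki = 0.5, 0.1          # illustrative controller gains
setpoint, v = 1.0, 1.06    # target and initial voltage, per-unit
integral = 0.0

for step in range(10):
    error = setpoint - v
    integral += error
    u = kp * error + ki * integral   # control action (e.g., VAR injection)
    v += 0.5 * u                     # toy first-order plant response
    print(f"step {step}: V = {v:.4f} p.u.")
```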

Thus, SG 2.0 provides the requisite control systems to support and integrate the operations of (a) CleanTech power installations, (b) the traditional power system infrastructure, and (c) power markets.

SG 2.0 automation sits on top of these three operations. It is a prerequisite to the success of the Smart Grid and power-related CleanTech policy, broadly defined.

Ironically, if we consider AMI to be a system of sensors, then it can be viewed as falling under the rubric of “automation”, since AMI provides data that can be used for control systems with slower required response times. That is, under our new “stripped-down” definition of SG 2.0 as automation and control, if the SG is really a smart control system, then AMI is part of the SG’s system control infrastructure.

SG 2.0 As “Automation and Control”: Business Opportunities and Cases

Defining SG 2.0 as automation and control disentangles the evaluation of SG 2.0 applications businesses from investments in traditional power infrastructure, AMI, and CleanTech.

It provides us with a logical connection between SG 2.0 and existing AMI systems that provide some of the necessary inputs for SG 2.0 automation applications.

It clarifies and focuses the context within which we must develop and evaluate business cases for SG 2.0 applications.

There are numerous automation and control business opportunities across the entire SG value chain. We will present the more interesting of these in subsequent dialogs.

As always, comments are appreciated, in the box below.