
Internet of (Smart Grid) Things – Achieving Interoperability


Dom Geraghty

In the previous dialog, we introduced the “Internet of (Smart Grid) Things”, or “Io(SG)T”, a real-world microcosm of Cisco’s IoT or IoE. We make no apologies for accepting the admitted Cisco “spin” – we have already been living this “spin” since the advent of the SG.

The SG is a microcosm of the IoT because we have defined the ultimate SG as an automated plug and play system, just like we increasingly plug and play today on the Internet, moving inexorably towards the universal plug and play IoT or IoE in the future. The concept is similar to “The Feed” in Neal Stephenson’s book, “The Diamond Age” – an ultra-reliable commodity mix priced at marginal costs.

Our ultimate SG replicates the IoT concept as a power sector subset or “vertical” – it is a textbook crucible because we can draw a clear unbroken boundary around the sector. Within this bounded system are the inter-operations of power production, transmission, distribution, and end-use of electricity, representing as a whole the networked infrastructure of the SG.

Here, we define this automated, interactive power sector vertical as the Io(SG)T. The power sector is the first of many networked industry sub-sets of the IoT likely to emerge over time. While we have not been explicitly calling it the Io(SG)T until now, as professionals within this sector we’ve been working on it for more than a couple of decades.

However, there is still a long way to go to automate this very complex machine -- our electricity grid. We posit a tentative time-frame of at least 30 years for the ultimate SG, the Io(SG)T, and, as we’ve said before, there will be many zigs and zags along the way.

Let’s Back Up a Little – Why Is the Io(SG)T Needed?

Energy and environmental policies, regulations, and mandates are creating operational challenges for the electricity grid – they call for the grid to be operated in ways for which it was not designed, e.g., accommodating intermittent renewable generation, two-way power flows in distribution systems, evolving integration of wholesale and retail power markets, autonomously operated distributed generation and storage, M2M-enabled smart appliances and buildings, and microgrids.

These new developments make it a challenge to maintain traditional “four nines” (99.99%) service reliability levels; they increase stresses on the grid’s aging infrastructure, increase electricity costs, decrease asset utilization factors, and add substantial uncertainty to the net load/demand curve against which grid operators dispatch generation and delivery assets.
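
As a quick back-of-envelope check on what those reliability targets mean (a sketch only; the figures follow directly from the percentages):

```python
# Back-of-envelope: annual outage time allowed at each reliability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label:>11} ({availability:.3%}): ~{downtime_min:6.1f} minutes of outage/year")
```

Four nines leaves roughly 53 minutes of outage per year – a thin margin once intermittency and two-way flows enter the picture.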

Fortunately, today’s new SG technologies and applications can provide the processing power, reaction speed, bandwidth, accuracy, interoperability (sometimes), and instant situational awareness across the grid necessary to accommodate the new operational challenges above while operating the grid reliably and safely.

Actually, we don’t really have any option – intelligent automated functionality of the SG has become a prerequisite for economical, reliable, and safe operation of the power grid in the future.

The Costs of the Io(SG)T Can Be Managed

Turning to the costs, we’ve recently presented a comprehensive analysis of the elements of a least-cost Io(SG)T deployment strategy that includes (1) locational selectivity (using the 80%/20% rule) – a deliberate shift from a “blanketing” approach for SG applications to a prioritized deployment approach – and (2) equipment rejuvenation – retrofitting rather than “rip and replace”.

We estimate that this strategy has the potential to decrease initially estimated Io(SG)T costs by half an order of magnitude – that is, the deployment of the Io(SG)T would cost about $400 billion through 2030.
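
A quick arithmetic check on that claim (our own inference from the figures above, not an additional analysis):

```python
# "Half an order of magnitude" is a factor of 10**0.5, or about 3.16x.
reduction_factor = 10 ** 0.5
target_cost_billion = 400  # estimated Io(SG)T deployment cost through 2030

implied_initial_estimate = target_cost_billion * reduction_factor
print(f"reduction factor: {reduction_factor:.2f}x")
print(f"implied pre-strategy estimate: ~${implied_initial_estimate:,.0f} billion")
```

That is, the $400 billion figure implies a “blanketing” baseline estimate in the neighborhood of $1.3 trillion.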

The approach recognizes that SG applications will offset capital requirements over time and reduce operating costs going forward. For example, SG applications can increase asset utilization (freeing up “latent” capacity), reduce power system losses, and increase the economic efficiency of power markets.
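
A toy illustration of the “latent capacity” point, with entirely hypothetical numbers:

```python
# Hypothetical: a modest rise in asset utilization frees "latent" capacity
# on the same installed infrastructure. All numbers are made up.
installed_mw = 10_000
utilization_before, utilization_after = 0.55, 0.60

freed_mw = installed_mw * (utilization_after - utilization_before)
print(f"Latent capacity freed on the same assets: {freed_mw:,.0f} MW")
```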

We Are Beginning to Stack Up the Benefits of the Io(SG)T

For many individual SG applications, we are still in the early days of calculating and crediting all of the benefits in the “stack” of benefits, but it is happening. For example, we are beginning to see numerous utilities leverage the very first SG applications, i.e., AMI systems, to improve outage management, reduce truck rolls, improve billing, identify and eliminate electricity theft, manage voltage, and monitor transformers. All of these applications add incrementally to the AMI benefits stack.

Other real-world examples of accruing multiple benefits from an SG application (a toy “stacking” calculation follows the list):

  • Some grid operators are using SG applications with energy storage to “smooth” intermittency, shift and shave peak, and create arbitrage opportunities in wholesale power markets, presenting a “stacked” benefits analysis of energy storage systems
  • The power industry is in the middle of deploying phasor measurement units (PMUs) across regional transmission grids to provide very sophisticated situational awareness and safely operate our systems much closer to their capacity limits. The benefits stack includes freeing up latent capacity in the power infrastructure, reducing reserve requirements for the increasing proportion of intermittent generation, and relieving congestion on transmission lines
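
To make “stacking” concrete, here is an illustrative benefit-stack tally for a single energy-storage installation. Every number below is hypothetical; the point is only that the streams sum:

```python
# Illustrative only: hypothetical annual benefit streams (in $M) for one
# energy-storage installation, "stacked" into a single total.
benefit_stack = {
    "intermittency smoothing":    1.2,
    "peak shifting and shaving":  2.5,
    "wholesale-market arbitrage": 0.8,
    "deferred capacity upgrades": 1.5,
}

total = sum(benefit_stack.values())
for name, value in sorted(benefit_stack.items(), key=lambda kv: -kv[1]):
    print(f"{name:<28} ${value:4.1f}M ({value / total:5.1%} of stack)")
print(f"{'total stacked benefit':<28} ${total:4.1f}M")
```

An application that clears the cost hurdle on two streams alone looks far better once all four are credited.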

Bottom line: the Io(SG)T is a prerequisite for accommodating energy and environmental policy goals, and we continue to quantify, in the field, additional benefits of Io(SG)T applications that improve benefit-to-cost ratios.

But how do we connect all of these dispersed and opportunistic SG applications so that they work together seamlessly in an Io(SG)T?

APIs – Learn to Love Them in the Io(SG)T

Sub-Title: The Interoperability Continuum

The SG automation roadmap involves measurement (advanced sensors – we need these first to enable the rest of the roadmap), monitoring (communications), diagnosis (analytics and visualization), and control (algorithms), all operating simultaneously across the physical and market nodes of the SG, and across the multiple operating and back-office systems of grid operators.
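
To fix ideas, here is a minimal software sketch of that measure → monitor → diagnose → control loop. All class and function names are hypothetical; real implementations are vastly more involved:

```python
# Minimal sketch of the measurement -> monitoring -> diagnosis -> control
# loop described above. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Measurement:
    node_id: str
    voltage_pu: float  # per-unit voltage reported by an advanced sensor

def monitor(readings):
    """Communications layer: deliver sensor readings to the operator."""
    return list(readings)

def diagnose(readings, low=0.95, high=1.05):
    """Analytics layer: flag nodes outside the normal voltage band."""
    return [m for m in readings if not (low <= m.voltage_pu <= high)]

def control(anomalies):
    """Control layer: issue corrective setpoints (stubbed out here)."""
    for m in anomalies:
        print(f"adjust regulator near {m.node_id} (V = {m.voltage_pu:.3f} pu)")

readings = [Measurement("feeder-12", 1.01), Measurement("feeder-47", 0.92)]
control(diagnose(monitor(readings)))
```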

For the automation roadmap to be realized, the SG applications and utility systems upon which they operate must be interoperable.

To date, interoperability between applications or systems has been achieved by using standards developed by Standards Development Organizations (SDOs) or by purpose-built Application Programming Interfaces (APIs). It is actually a little more complicated than that, however. The graphic below presents the “Interoperability Continuum”. The left-hand side represents a situation where no standard exists for a proposed interface/integration. On the right-hand side, a mature standard exists which can be used for the interface. Moving from left to right denotes increased general interoperability.

Proprietary APIs

We define a proprietary API as a customized interface, developed by a vendor, system integrator, or grid operator, for a special-purpose application that is usually one-off. It typically connects proprietary systems to other proprietary or home-grown systems. For example, an AMI vendor may develop a proprietary API to connect its system to the utility’s home-grown OMS or billing system. Vendors do not share proprietary protocols because they believe these protocols enhance their competitiveness, capture the customer for “up-sells”, and increase the value of their ongoing Service Level Agreements (SLAs).
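
A sketch of what such a one-off interface looks like in practice. The field names and event codes below are invented for illustration; the point is that every mapping is specific to one vendor/utility pairing:

```python
# Hypothetical one-off adapter: translate a vendor's proprietary AMI
# "last gasp" outage event into the record format a home-grown OMS expects.
def ami_event_to_oms(vendor_event: dict) -> dict:
    """Purpose-built mapping; nothing here transfers to another vendor."""
    return {
        "meter": vendor_event["MtrSerialNo"],  # vendor-specific field name
        "event": "OUTAGE" if vendor_event["EvCode"] == 0x12 else "OTHER",
        "timestamp": vendor_event["EvTimeUTC"],
    }

print(ami_event_to_oms({"MtrSerialNo": "A1234", "EvCode": 0x12,
                        "EvTimeUTC": "2014-04-01T12:00:00Z"}))
```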

The next step in the continuum of increasing interoperability is a hybrid interface – the combination of a proprietary API and an existing standard – that allows vendors, system integrators, and grid operators to connect proprietary systems to systems with standardized interfaces. While a standard is utilized in the hybrid, the interface is still controlled by the developer of the API. An example would be the interface between a proprietary AMI system and the Common Information Model (CIM) standard.
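
A sketch of the hybrid pattern: the vendor side stays proprietary, while the output is shaped like a (greatly simplified) CIM meter reading. The structure below is illustrative only, not the actual CIM schema:

```python
# Hybrid interface sketch: proprietary input, standard-shaped output.
# The "CIM-style" structure is a simplified illustration, not the real schema.
def proprietary_to_cim(vendor_reading: dict) -> dict:
    return {
        "MeterReading": {
            "Meter": {"mRID": vendor_reading["MtrSerialNo"]},
            "Readings": [{
                "value": vendor_reading["kWh"],
                "ReadingType": "energy.forward.kWh",
            }],
        }
    }

print(proprietary_to_cim({"MtrSerialNo": "A1234", "kWh": 1042.7}))
```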

Open APIs

By providing an Open API (i.e., a publicly documented API available to third-party developers), the vendor relinquishes control of its interface. In return, the vendor with a superior product will attract independent third-party developers who create interfaces for value-adding SG applications, thus increasing the demand “pull” for the vendor’s equipment or systems.
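
One way to picture the mechanics: the vendor publishes a stable contract, and third parties build against it without the vendor’s involvement. A minimal sketch, with hypothetical names throughout:

```python
# Sketch of the Open API idea: the vendor publishes a stable contract
# (here, an abstract base class); third-party developers build on it.
from abc import ABC, abstractmethod

class MeterDataAPI(ABC):
    """The published, versioned contract the vendor commits to."""

    @abstractmethod
    def interval_reads(self, meter_id: str, day: str) -> list[float]:
        """Return 24 hourly kWh values for the given meter and date."""

class TheftDetector:
    """A third-party value-add built purely against the open contract."""

    def __init__(self, api: MeterDataAPI):
        self.api = api

    def suspicious(self, meter_id: str, day: str) -> bool:
        reads = self.api.interval_reads(meter_id, day)
        return sum(reads) == 0  # crude heuristic: a full day of zero usage
```

The vendor never sees `TheftDetector`; it simply benefits from the added pull on its platform.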

This is the business model used today by Facebook, Apple, and others to create customer “stickiness” through an interoperable applications portfolio that they do not have to develop themselves. It’s a strategy for increasing market share. As part of the Web 2.0 business model, it was initially conceived to (1) allow websites to inter-operate, (2) create a virtual collaborative-services environment to support, for example, the professional interface between a designer and an architect, and (3) expand social media platforms.

If SG applications logically follow the IoT (r)evolution, the Open API is their future. As part of the interoperability continuum, it can inter-operate with a widely deployed standard, or it can be offered as a stand-alone independent API. While this goes against vendors’ traditional business-protection instincts, it in effect “outsources” applications development relevant to their offerings – development resources they get for free – in return for the internal cost of developing and offering the Open API itself.

Furthermore, the Open API, while provided free to developers, can have an associated “revenue license” under which the vendor receives some portion of any revenue earned by a third-party developer commercializing an SG application on top of the Open API. Again, the Web 2.0 business model can apply. In the long run, the vendor may even decide to acquire some of the third-party developers of interfaces based on its Open API. The advantage to the vendor, along with the potential for increased product demand, is that the initial business/development risk and investment are undertaken by a third party.

Mature Standards

Finally, at the right-hand end of the continuum above, there are scores of mature standards that already allow plug and play to happen between SG systems.

But, at least at present, it is common to find that a mature standard that facilitates some interfaces is “partial,” in the sense that it does not cover (1) other (perhaps newer) interfaces with grid systems that the application also impacts, or (2) layers within SG application interfaces that may not be compatible with layers in the systems to which they connect – thus requiring the combination of an API and a mature standard to accomplish the integration/interoperability for an advanced SG application.

So, a mature standard will remain a moving target as the Io(SG)T continues to evolve, and APIs will continue to be needed (and create value) as the automation of the SG progresses.

For a more detailed discussion of this last point, see a very interesting position paper that Scott Neumann wrote for the GridWise Architecture Council (GWAC).

A Glimpse into the Future: Could We Leapfrog the Development of SG Control Algorithms/Standards Altogether?

Compared to generation and transmission systems, the distribution system in the SG has less-well-developed grid control algorithms and virtually no associated standards. These algorithms will sit within distributed intelligent processors embedded in IEDs dispersed throughout the distribution system. Using state estimation tools, dynamic load flow modeling, and real-time state information from high-speed, ultra-accurate PMU chipsets, we are just beginning to develop control algorithms for actuators in the distribution system and for remedial action scheme firmware aimed at protecting the system during contingencies.
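
For flavor, here is a deliberately simplified stand-in for one such control rule: a proportional tap-change decision driven by a fast, PMU-style voltage measurement. Real schemes layer state estimation and load flow on top of this; the 0.00625 pu/step figure is typical of 32-step voltage regulators:

```python
# Highly simplified control-rule sketch: choose a regulator tap change
# from a fast voltage measurement. Real algorithms add state estimation,
# load flow, and coordination across devices.
def regulator_tap_adjustment(v_measured_pu: float,
                             v_setpoint_pu: float = 1.0,
                             pu_per_tap: float = 0.00625) -> int:
    """Return the integer tap-step change that best corrects the error."""
    error = v_setpoint_pu - v_measured_pu
    return round(error / pu_per_tap)

for v in (0.97, 1.00, 1.03):
    print(f"V = {v:.2f} pu -> move {regulator_tap_adjustment(v):+d} taps")
```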

Could we instead leapfrog today’s relatively mature system-on-chip (SoC) technology by substituting memristor chipsets embedded in IEDs – that is, adaptive learning chips modeled on the brain’s synapses, conducting processing and memory activities simultaneously? Using this approach, couldn’t we derive control algorithms empirically? OK, assuming that we also had ultra-fast sensors.

But memristors and ultra-fast sensors already exist today…

So maybe we don’t need to worry too much about the last (and most difficult) step in the Io(SG)T roadmap, i.e., the development of SG control algorithms, which would take place in earnest perhaps a decade from now. Why? Because we won’t need standard grid control algorithms per se – memristors, with processing speeds six orders of magnitude faster than today’s best technology, will sort it all out for us empirically. We just need to interface them with those ultra-fast sensors that measure and communicate the requisite data to the memristors over interoperable SG networks.
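
In conventional-software terms, “deriving the control law empirically” looks something like the toy loop below: adapt a gain from observed outcomes instead of designing it analytically. This illustrates the idea only and says nothing about actual memristor hardware:

```python
# Toy analogue of empirical control-law learning: tune a single gain
# online from observed outcomes rather than deriving it analytically.
def learn_gain(plant, trials=200, lr=0.5, gain=0.0):
    for _ in range(trials):
        error = plant(gain)  # observe the outcome of the current gain
        gain += lr * error   # nudge the gain to reduce the error
    return gain

def plant(gain):
    """Hypothetical plant: residual error shrinks as gain approaches 2.0."""
    return 2.0 - gain

print(f"learned gain: {learn_gain(plant):.3f}")  # converges near 2.0
```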

More on these cutting-edge technologies in a future dialog…

As always, comments welcome and appreciated.