An Integrated Measurement Strategy for the SG
Obviously, since the SG is inanimate, we don’t expect it to intuit how to do “smart” things by itself (!). We have to provide it with data and with analytical rules. Even the AI-based algorithms recently introduced in some SG applications must still derive their “learnings” from empirical data.
At present, the deployment of SG applications can be characterized as “tactical” -- uncoordinated with other activities in the SG, and special-purpose in nature -- certainly not following the holistic, long-term visions of SG architectures and power markets developed by such entities as GWAC, NIST, EPRI, IEC, IEEE, and SGIP. The result is a hodge-podge of application-specific sensors with different capabilities that don’t communicate across applications and that operate in different time domains. But it does not need to be like that, as we outline below.
Let’s Define Measurement, Sensors, and Smart Sensors
Smart sensors are the fundamental building blocks for the implementation of a truly “smart” grid. They are an essential part of every SG solution. Regular analog sensors become the “smart sensors” of the SG when intelligence is added to the measurement function, i.e., analog-to-digital conversion, processing power, firmware, communications, and even actuation capability.
We can think of smart sensors as the first link in a four-link SG decision-making chain that consists of:
(1) Location-specific measurement -- sensor function only
(2) Monitoring -- a sensor with one-way communications functionality
(3) Diagnosis at the “edge” -- a sensor with localized diagnostic intelligence based on data analytics and/or centralized diagnosis based on communicated sensor data
(4) Edge-embedded control actions (based on embedded algorithms, including Remedial Action Schemes (RAS)) -- a sensor with intelligence and control/actuator capability. The algorithms for this functionality could also be centralized and use two-way communications with an “edge” sensor/actuator, and/or they could drive peer-to-peer coordination of control actions at the “edge”; however, a substantial amount of R&D still needs to be done to develop autonomous real-time or quasi-real-time control algorithms for power distribution systems
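As a rough sketch, the four-link chain above can be modeled in code. This is a minimal illustration, not a real sensor API -- the class name, thresholds, and actuator callback are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SmartSensor:
    """Toy model of the four-link SG decision-making chain (illustrative only)."""
    location: str
    readings: List[float] = field(default_factory=list)

    # Link 1: location-specific measurement -- sensor function only
    def measure(self, analog_value: float) -> float:
        self.readings.append(analog_value)
        return analog_value

    # Link 2: monitoring -- one-way communication of the latest reading
    def report(self) -> str:
        return f"{self.location}: {self.readings[-1]:.2f}"

    # Link 3: diagnosis at the "edge" -- simple localized analytics
    def diagnose(self, limit: float) -> bool:
        return self.readings[-1] > limit  # True = abnormal condition

    # Link 4: edge-embedded control action (e.g., a Remedial Action Scheme)
    def act(self, limit: float, actuator: Callable[[], None]) -> bool:
        if self.diagnose(limit):
            actuator()  # e.g., trip a breaker or shed load
            return True
        return False
```

Each method corresponds to one link; in a real deployment, link 4 might instead be driven by a centralized or peer-to-peer algorithm over two-way communications, as noted above.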
To Date, Smart Sensor-Based Measurement in the SG Has Been “Tactical”
Granted, as we’ve said before, there is a reason for this tactical approach to sensor deployment -- up to now, the choices of SG projects have been driven by energy and regulatory policies and rules that target a limited set of SG applications. Fair enough -- none of us expects that the evolution of the SG will follow a “grand deployment plan” -- it will be imperfect, following the zigs and zags of these real-world drivers.
The first SG sensors consisted of smart meters installed in AMI deployments, programs that were supported, if not encouraged, by most state regulators. The wireless backbones of these AMI systems were supposed to be “universal”/”future-proofed” in the sense that the various communication systems would also support demand response, distribution system automation applications, and other operational needs.
But it has become clear that the response times required for SG operational decision-making are too fast for AMI communication systems to support. They lack sufficient bandwidth, processing power, and, in some cases, memory capacity, and thus exhibit too much latency. Having said that, it’s been shown that they can supply important complementary data where time is not of the essence.
Lastly, most AMI systems and other utility operational applications are not interoperable, requiring substantial time and resources to develop customized APIs.
Shouldn’t we be thinking holistically about the ultimate operational vision of the SG to determine what types of data we’ll need to support that vision? We already know that sensors are critical to the successful deployment of a digitized, intelligent, inter-connected, resilient SG. But what mission-critical challenges that the power sector faces today can the deployment of the SG solve? What made us start down this (expensive, challenging, and multi-decade) path to an ultimate plug-and-play SG – an “Internet of (Smart Grid) Things”?
We would suggest that there are three key strategic challenges that the deployment of the SG must address:
- It is clear that some regulatory and energy policies (with worthwhile societal goals) have unintended negative effects on service reliability and electricity bills
- The grid’s infrastructure is aging -- asset management can get the most out of our existing capacity by supporting an “intelligent component replacement” (ICR) approach
- The cost of deploying the SG is enormous -- we have to be clever about how we create it -- employing a least-cost strategy can reduce the cost very significantly
Within This Strategic Context, How Do Smart Sensors Help?
Sensors have two basic roles in the SG:
- Providing data (and intelligence in some cases) for reliable and economic grid operations
- Monitoring equipment condition across the SG
Firstly, let’s look at grid operations. Smart sensors enable us to manage “closer to the edge” of the operating envelope. In essence, they can create “extra” capacity from existing equipment, and they also improve system reliability, stability, and security through, for example, increased efficiency, decreased delivery losses, volt/var optimization, and provision of extra regulation reserves.
Generation (G) and transmission (T) systems already have a significant amount of measurement/sensor systems in place (and transmission system operators are installing very sophisticated high-speed synchrophasor measurement systems for even greater situational awareness). We will not discuss G&T sensors here.
In contrast, distribution system operators are relatively “blind” between the substation and the meter, and grid operators have little or no sensing abilities beyond the meter to gauge the behavior of self-optimizing customers. These are important “gaps” in today’s SG – coincidentally, they represent a large opportunity for businesses providing smart sensing systems.
Secondly, let’s look at equipment condition monitoring. In addition to power measurement sensors, we will need to deploy other types of measuring devices that are capable of sensing incipient or actual equipment failures. These sensors may be based, for example, on RF, ultrasound, LIDAR, or magnetic fields.
Importantly, the “smart” processes in the sensors used for operational and asset management applications are the same: conversion of analog measurements to digital signals (A/D conversion), data compression and storage, time-stamping, analytical firmware, communications, and actuation. Of course, the time domain requirements will be different for different applications – a “universal” smart sensor needs to cover a time domain from sub-cycles to hours. Its sampling rate must be “tunable” for different applications.
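To make the “tunable” sampling-rate idea concrete, here is a small sketch. The application names and the specific intervals are our own illustrative assumptions -- the point is simply that a universal sensor must run at the fastest rate any of its active applications requires:

```python
# Required sampling intervals in seconds, from sub-cycle to hourly.
# These applications and numbers are hypothetical examples.
APP_SAMPLING_INTERVALS = {
    "synchrophasor": 1 / 240,       # sub-cycle waveform sampling
    "volt_var_optimization": 1.0,   # second-level control
    "asset_condition": 60.0,        # minute-level condition trending
    "billing": 3600.0,              # hourly interval data
}

def required_interval(active_apps):
    """The sensor must sample at the fastest (smallest) interval needed
    by any application currently active at its location."""
    return min(APP_SAMPLING_INTERVALS[a] for a in active_apps)
```

A sensor deployed initially for billing alone could later be re-tuned (over the air) to the much faster interval demanded by an added volt/var or synchrophasor application.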
OK. But flexible, sophisticated, universal, interoperable sensors are not cheap, and it looks like we may need a lot of them. Let’s discuss the possible elements of a least cost deployment strategy.
Least Cost Deployment: How Many Sensors Do We Need?
While it would be nice to have smart sensors everywhere in the grid, doing so would be cost-prohibitive -- and, in any case, unnecessary. How can we identify the highest-value, highest-priority locations?
Most distribution system operators have a good idea where the areas of stress are in their systems, i.e., the locations about which they would like to have better operational and equipment condition data. This field experience can be supplemented by load-flow simulation models, and, in the future, by increasingly sophisticated real-time visualization of power flows.
Based on this combination of operator experience and simulations, the initial most valuable locations for smart sensors can be determined. It is likely that the 80%/20% rule approximately applies, i.e., ~80% of the value of measurements can be derived from a deployment of sensors to ~20% of the possible locations in the distribution system.
As the sensors provide additional empirical data, and as the system configurations change, the locations can be continuously optimized.
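A toy illustration of the 80%/20% placement idea: rank candidate locations by an estimated “measurement value” score (derived from operator experience and load-flow simulations) and deploy sensors to the top ~20%. The scores below are invented numbers, deliberately skewed so that the top 20% of sites happen to carry roughly 80% of the value:

```python
def select_sensor_sites(value_by_location, fraction=0.20):
    """Pick the top `fraction` of candidate locations by estimated value."""
    ranked = sorted(value_by_location, key=value_by_location.get, reverse=True)
    n = max(1, round(len(ranked) * fraction))
    return ranked[:n]

# Hypothetical value scores for ten candidate feeder locations
scores = {"feeder-3": 12.0, "feeder-7": 11.0, "feeder-1": 0.9,
          "feeder-9": 0.8, "feeder-5": 0.7, "feeder-2": 0.7,
          "feeder-8": 0.6, "feeder-4": 0.5, "feeder-6": 0.4, "feeder-0": 0.15}

top = select_sensor_sites(scores)  # 2 of 10 locations (~20%)
covered = sum(scores[s] for s in top) / sum(scores.values())
```

As the sketch suggests, re-running the ranking as new empirical data arrives is how the placement would be continuously optimized.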
But do we need to have multiple different sensors at each stressed location to get the different measurements needed for different SG applications?
Well, we say: “no, we don’t” -- that’s where the real “smarts” of advanced sensors come into the picture.
Flexibility of the Smart Sensor Itself
So, we are asserting that to support grid operations or asset management we should not have to co-locate multiple independent sensors, that is, assuming that the sensors are “smart”. But what does “smart” mean in this context?
Let’s separate the functions of a smart sensor into two main categories: (1) analog measurements, and (2) post processing of the analog measurements (“the smarts”), including the initial step of converting the analog measurements to digital data. The analog measurements will require different sensing mechanisms, but the post-processing can apply the same functions to all analog measurements. One can imagine this occurring in an integrated IED of the future – one that will accept multiple analog measurements as inputs to be post-processed by a smart, embedded chip-set programmed to handle multiple applications (including autonomous controller actuations) across multiple time-domains simultaneously.
The power sensing part of the post-processing will be fairly constant from location to location -- we will always be measuring some attribute of electricity flowing at a location – for example, it may be kWh, kW, voltage, current, phasors, or the quality of the power. Similarly, the condition-sensing function will deliver analog data that might characterize, for example, RF, ultrasound, light, thermographic or magnetic emissions, to be post-processed into representative digital data.
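The shared post-processing chain can be sketched as follows. Whatever the analog sensing mechanism (power, RF, ultrasound, etc.), the same digital steps apply: A/D conversion, time-stamping, and compression for storage or communication. The function names, the 12-bit quantization, and the packing format are illustrative assumptions, not a real device interface:

```python
import struct
import time
import zlib

def quantize(sample, full_scale=10.0, bits=12):
    """A/D conversion: map an analog value onto a 12-bit digital code."""
    code = round((sample / full_scale) * (2**bits - 1))
    return max(0, min(2**bits - 1, code))  # clamp to the converter's range

def post_process(analog_samples, full_scale=10.0):
    """The same 'smarts' for any analog input: digitize, time-stamp, compress."""
    codes = [quantize(s, full_scale) for s in analog_samples]
    # Prepend a timestamp (double) to the packed 16-bit codes
    stamped = struct.pack(f"<d{len(codes)}H", time.time(), *codes)
    return zlib.compress(stamped)  # ready for storage or communication
```

The key design point is that `post_process` is indifferent to what the analog samples represent -- which is exactly why one integrated IED could serve multiple applications.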
Smart sensors, as defined above, can be flexible, i.e., as the requirements at a particular sensor location change, the firmware in the smart sensor can be re-programmed “over-the-air” through the sensor’s communications link. This is important because we know that a smart sensor will likely be installed to handle just one application to start with, and we expect that additional applications will be added over time. The smart sensor needs to be flexible regarding re-programming and have the embedded processing capacity, memory and communications capacity to handle an increasing number of applications at its location.
In making the business case for the installation of a smart sensor, its initial cost will be compared to the benefits derived from the initial application that it supports. The ultimate business case, however, rests on its eventual benefits stack, i.e., the benefits of all of the expected future applications less the cost of the incremental application functions added after the initial one.
We would expect that a “universal” smart sensor that can be programmed to support the post-processing of additional measurements will be a least cost solution, and certainly less costly than installing separate sensor systems for each application.
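The benefits-stack arithmetic can be made explicit with a toy calculation. All dollar figures here are invented purely for illustration:

```python
def benefits_stack(initial_cost, initial_benefit, future_apps):
    """Compare the initial application on its own versus the full stack.

    future_apps: list of (benefit, incremental_cost) tuples for applications
    added later via firmware, after the sensor is already in place.
    """
    standalone = initial_benefit - initial_cost
    stacked = standalone + sum(b - c for b, c in future_apps)
    return standalone, stacked

standalone, stacked = benefits_stack(
    initial_cost=5000, initial_benefit=4000,    # loses money on its own
    future_apps=[(3000, 500), (2500, 400)])     # later firmware add-ons
# standalone = -1000; stacked = 3600
```

A sensor that fails the business case on its first application alone can still be strongly positive once the incremental cost of later applications -- a firmware update rather than a new device -- is counted.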
Creating an Incentive to Operate More Efficiently
A strong case can be made that smart sensors, combined with smart grid applications, have the potential to reduce the costs of operating the smart grid and the amount of capital expenditures needed to replace aging infrastructure and meet demand growth.
But when we talk about operational and capital efficiency that smart sensors and a SG can deliver, there is always this one enduring, frustrating snag, namely:
The current rate-making structure does not provide adequate incentives for utilities to invest in cost-saving applications related to increased operational efficiency, reduced losses, and volt/var optimization. Why? Because the benefits, both economic and service-reliability based, accrue solely to utility customers, while the utility takes the investment risk.
Moreover, there is a disincentive to invest in SG applications that can defer capital expenditures, since doing so runs directly counter to today’s Averch-Johnson incentive to increase the rate base.
To capitalize on the enormous cost savings and capital deferment opportunities that can be created by the SG, and to fund it, regulators should consider creating incentives that encourage utilities to put capital at risk for SG application deployments, i.e., introduce Performance-Based Regulation (PBR). PBR can be structured so that the utilities and their customers share in the benefits of the SG, assuming that the performance saves money or defers infrastructure capital. So much for our “soap-box”…
Smart Sensor-Related Business Opportunities
Before we finish, we thought it’d be interesting to identify a few possible business opportunities related to smart sensors for the SG -- comments welcome below on other possibilities:
- A smart sensor company “roll-up”, e.g., like the LumaSense business model
- Aging infrastructure – an “intelligent component replacement” services company, e.g., Quanta Technology
- A company positioned to “create capacity” with smart sensors by facilitating operations “closer to the edge”, reducing losses, optimizing voltage, reducing the amount of reserves needed, etc., thus increasing the utilization factor of existing infrastructure -- maybe someone can think of a name for this, analogous to the use of “NegaWatts” for DR or DSM? e.g., "XtraWatts"?
Smart sensing will be one facet of the ultimate SG – the “Internet of (Smart Grid) Things” – a fully interoperable, IP-based, digitized system, tying together the physical and market structures of the power system of the future.
In our next dialog we will discuss the material role that smart sensors can play in improving short-term net load forecasting.
Disclosure: Dom Geraghty is Executive Chairman of Smart Energy Instruments, Inc. (SEI) -- a high-speed, ultra-accurate SG chipset developer.
As always, comments are welcome and appreciated in the comment box below.