
The Book on Energy Program Evaluation – Tssssss

By Jeff Ihnen | March 21, 2023 | Energy Rant

Although, or maybe because, Michaels Energy has provided research and evaluation services for about 25 years, I’ve been a consistent critic of evaluation, measurement, and verification (EMV), and I’ll tell you why in several chapters.

Chapter 1 Resource Acquisition

Evaluation methodologies are predominantly based on widget programs, also known as resource acquisition programs. The hypothesis is this: efficient equipment costs more, and that extra cost is the barrier. How to mitigate or remove the barrier? Pay down the incremental cost with a rebate after the purchase or an incentive before the purchase. It’s that simple and dumb.

The simpleton approach to evaluating widget programs includes calculating savings against a hypothetical minimum viable product, the minimum efficiency on the market, also known as the baseline or counterfactual. Read all about this in Evaluation’s Jurassic Dogma.

What will you, evaluator, do when the customer pays nothing and has no incremental cost to offset? I’m not talking about the cheesy home kits that can ship to my house for free. I’ll take the light bulbs and nightlights, but that showerhead and aerator are going in the trash. I’m talking about vastly different delivery models that leverage technology and services with zero out-of-pocket for the customer.

For example, we install an IoT[1] monitoring system at no charge to the customer, and we split the savings with them 50/50 by having eyes on their critical energy-consuming equipment. We immediately get savings and continuously tune and fix alarms set off by control points drifting out of the intended range. In my experience with evaluators, the labor to document every move and prove savings could easily exceed the cost of getting the savings in the first place.

Hundreds of measures or actions might be taken annually to reduce energy and/or demand by 30%. But what’s the baseline when something recurs four times a year? I don’t know. That’s for you, the evaluator, to figure out. Get that? YOUR problem. Don’t burden the implementation contractor.

Chapter 2 Fiefdoms

If a portfolio’s evaluation and implementation contractors have not changed for ten years, you must ask yourself, Mx. Commissioner, “What is going on here?” Innovation suffers when everyone involved is vested in greenwashing how fabulous everything is. Bring in a new evaluator for a new look. Set EMV policy that encourages a valid critique of results – and of lost opportunities.

Chapter 3 Rotating Doors

For other portfolios, the evaluator never changes (see a pattern here?), but implementation contractors change frequently. This puzzles me for a couple of reasons.

First, changing implementation contractors is a more significant disruption and hassle for the customer base. It’s like changing teachers three or four times a year in a K-12 school while the principal and administration never change. What are the customers (students) experiencing?

Second, if the implementation contractors or teachers are doing poorly, what are the evaluator, principals, and administration doing to fix the problem? Is this not the job of program evaluation? It is to me, and if it’s not in the scope of work, I don’t want that work, because other firms will take money to grind numbers year after year, delivering next to nil by way of improvement or value.

Chapter 4 The Devil We Know

Years ago, in Orlando, as I recall, I was in an elevator car with a couple of program administrators (utilities in this case). They were talking candidly about an RFP they had out for bids for an evaluation contractor. They agreed that “the devil we know” was the safer pick for the evaluation contractor. Wow, that must have been a high-performing, valuable provider of evaluation services.

Likely scenario: the evaluation contractor was unpleasant and provided little value, but, hey, the administrator and the commission liked the results!

Chapter 5 Don’t Bother

Certain states and program administrators are auto-no-goes for EMV bids. A discussion may go like this: “The [insert client name here] has requested bids for this evaluation eight times, and RSC[2] has won every time. They’ve had it forever, and [name of client] will never change. No go.”

The ratepayers aren’t getting what they deserve. And yes, they are ratepayers in this situation because they are paying for not much.

Chapter 6 Process Evaluation

Even though I’m an engineer, I’ve lost interest in most impact evaluation work unless it’s for the first rounds of a new program or technology launch. Producing impact evaluations that deliver statistical dead heats year after year is a waste of time and money.

We’ve had battle royales with competing implementation contractors because they don’t want us to see their proprietary calculation, tool, algorithm, engine, etc. Well, I don’t care about their methodologies, energy models, or calculations. I want to know how their program performs through the lived experience of customers, market actors, program managers, account managers, and program administrators. That(!) is where the value is.

This is another major downside to using RSC all the time. They may be blind to the problem, or unwilling to tell the emperor he has bad hair and his outfit clashes horribly.

[1] Wireless internet-connected system (internet of things, or IoT)

[2] Rubber Stamp CONsulting
