
Efficiency – With Tectonic Power and Pace

By Jeff Ihnen | August 27, 2018 | Energy Rant

I am no mountaineer, but without looking, I know the Himalayan range is growing taller. How do I remember this? Because the earth’s crust is made up of tectonic plates that are always moving. The edges of tectonic plates form fault lines for earthquakes. Did you know that at some point, coastal California will be neighbors with Alaska? It’s true. A hell of a lot of earthquakes will happen in between, giving “bumpy ride” new meaning. In Southern Asia, the plate that India sits on is slamming into the Asian landmass, thrusting Everest higher, adding roughly 2.4 inches per year to the 29,000-foot peak.

Energy efficiency programs evolve at a speed proportional to the growing summit elevation of Everest and the speed at which Californians are cruising to Alaska.

I am frequently reminded of efficiency’s tectonic speed. That fact, along with last week’s AESP Summer Conference and the AESP magazine article by Val Jensen of ComEd, Pay for Performance [utility program of the future], precipitated this post.

I don’t think fast under pressure, but when not under pressure, my brain runs faster than my mouth or hands can communicate thoughts. The latter was the case as I sat in a panel session last week at AESP, moderated by Gene Rodriguez with ICF. If you have a chance to see a Gene-Rodriguez-moderated panel, GO!

The Way It’s Always Been Done

To set the stage, I will swing back to Val’s article. Val wrote along the lines of my post from a couple of weeks ago: “Modern Efficiency and the Disappearance of the Clapping Seals.” The clapping seals are just one of many metaphors I have used over the years to describe how old-school efficiency programs simply throw money at customers to buy efficiency. As I wrote in Driving Ms. Free Rider Daisy, does a $300 rebate on an awesome $15,000 mini-split heat pump system motivate the customer?

  • Motivate paperwork for the $300? Yes.
  • Motivate adoption of efficiency? No!

The tell-me-what-I-want-to-hear program evaluator would roll this up and report that, wow, this program achieves savings at less than a penny per kilowatt-hour. The reality is the $300 isn’t making any difference at all. The customer is a 100% free rider.
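To show where a penny-per-kilowatt-hour figure like that comes from, here is a back-of-the-envelope sketch. Only the $300 rebate is from the post; the annual savings and measure life are placeholder assumptions of mine, not evaluated results.

```python
# Back-of-the-envelope "program cost of saved energy" calculation.
# Only the $300 rebate comes from the post; annual savings and measure
# life below are illustrative assumptions.
rebate = 300.0               # $ program incentive
annual_savings_kwh = 3_000   # assumed kWh/yr saved by the mini-split
measure_life_years = 15      # assumed effective useful life

lifetime_kwh = annual_savings_kwh * measure_life_years
print(f"Cost of saved energy: {100 * rebate / lifetime_kwh:.2f} cents/kWh")
# ~0.67 cents/kWh -- it looks dirt cheap, but only if the rebate actually
# caused the purchase. If the customer would have bought anyway, the true
# program savings are zero.
```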

Per Val’s article, with some linguistic liberties of my own rolled in, we have an efficiency institution obsessed with:

  • Identifying the next widget and the customers who could use the widget
  • Marketing and getting widgets installed
  • Spending millions on evaluation to determine what motivated the customer
  • Demonstrating to the regulators that the way it has always been done is firmly intact, with less progress than a tectonic plate

Vibrations of Something Different

Let’s move to Gene’s panel discussion, during which my brain went into overdrive with possibilities, fueled by the collection of ideas I have been writing about for months.

Carmen Best led off the panel by indicating that we need to measure savings at the meter. This is essentially (pause here to avoid nausea) M&V 2.0. I panned M&V 2.0 in the past because it doesn’t serve the current model well. To wit, when savings aren’t demonstrated at the meter, as determined by regression models, which is to say, compared against normal customer usage patterns, why did the savings not occur? On-site investigation is required to find out.
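To make “measure savings at the meter” concrete, here is a minimal sketch of the regression idea, assuming daily kWh and heating-degree-day data for a baseline year and a post-installation year. The model form, variable names, and numbers are placeholders of mine, not anything from the post or from any particular M&V 2.0 tool.

```python
# Minimal sketch of meter-based ("M&V 2.0") savings estimation.
# Assumption: a simple linear weather model (kWh vs. heating degree days)
# is adequate for this customer. All numbers are made up.
import numpy as np

def fit_baseline(hdd_pre, kwh_pre):
    """Ordinary least squares fit: kWh = intercept + slope * HDD."""
    X = np.column_stack([np.ones_like(hdd_pre), hdd_pre])
    coef, *_ = np.linalg.lstsq(X, kwh_pre, rcond=None)
    return coef  # [intercept, slope]

def meter_savings(coef, hdd_post, kwh_post):
    """Savings = what the baseline model predicts for post-period weather
    minus what the meter actually recorded."""
    predicted = coef[0] + coef[1] * hdd_post
    return np.sum(predicted - kwh_post)

# Baseline (pre-installation) period:
hdd_pre = np.array([30.0, 25.0, 10.0, 0.0, 0.0, 5.0, 20.0])
kwh_pre = np.array([900.0, 820.0, 560.0, 400.0, 410.0, 480.0, 730.0])
coef = fit_baseline(hdd_pre, kwh_pre)

# Post-installation period, same customer:
hdd_post = np.array([28.0, 22.0, 12.0, 0.0, 2.0, 6.0, 18.0])
kwh_post = np.array([760.0, 700.0, 520.0, 380.0, 400.0, 440.0, 640.0])
print(f"Estimated savings: {meter_savings(coef, hdd_post, kwh_post):.0f} kWh")
```

If the meter shows no savings relative to the weather-adjusted baseline, the model can’t tell you why; that is where the on-site investigation comes in.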

In the new paradigm, the program seeks impacts for whatever reason, in line with Val’s points. Consider a manufacturer that wants to make stuff faster and saves energy as a result. They hardly care about the savings, but they love the increased production. The program helped them achieve the production goal, and that goal captured the savings. The driver wasn’t energy savings? Who. Cares.

I could fill books with similar examples.

Program Evaluation Upended

In the new world, attribution studies must die. Heresy? Does anyone study the cost-effectiveness of supply-side resources? Not the way the cost-effectiveness of demand-side resources is evaluated. Essentially, supply-side resources are evaluated by demonstrating one of the following:

  1. If we don’t do this project, we can’t keep the lights on for customers.
  2. We can do this project while maintaining rates or even lowering rates.

Compared to what?

Crickets.

For example, what if a fully depreciated (paid-for) power plant is closed rather than run to the end of its useful life? What if we kept that plant operating and paid only its incremental cost of production? How does that compare to the new-asset scenario?

This is like finishing your car payments after five years and then buying a new car with marginally better gas mileage and payments about the same as the ones you just finished. The monthly cash flow feels the same. How many times have you heard people say, “it’s paid for”? Does the new car cost more or less than the old one after the old one is paid for?
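To put rough numbers on the analogy, here is a sketch with made-up figures; none of these numbers come from the post or from any actual filing, and the same arithmetic applies to retiring a depreciated plant in favor of a new asset.

```python
# Illustrative, made-up numbers for the "it's paid for" comparison: keep the
# old asset (payments done, slightly higher operating cost) vs. replace it
# (new payments, marginally lower operating cost).
MILES_PER_YEAR = 12_000
GAS_PRICE = 3.00              # $/gallon, assumed

old_mpg, old_payment = 28, 0      # paid off: no monthly payment
new_mpg, new_payment = 32, 450    # $/month financing on the new car

def annual_cost(mpg, monthly_payment):
    fuel = MILES_PER_YEAR / mpg * GAS_PRICE
    return fuel + 12 * monthly_payment

print(f"Keep the paid-off car: ${annual_cost(old_mpg, old_payment):,.0f}/yr")
print(f"Buy the new car:       ${annual_cost(new_mpg, new_payment):,.0f}/yr")
# The marginal fuel savings are tiny next to the new payments, which is the
# "compared to what?" question the supply side rarely has to answer.
```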

One question from Gene’s panel was, “What about process evaluation?” Glad you asked. In the new world, process evaluation would move to the implementation side of the coin. The implementers will WANT process evaluation because it will improve performance, fast. Suddenly, process evaluation gains lots of value because it focuses on flaws, bottlenecks, disconnects, and ways to improve rather than doing in-depth studies on program elements that are working.

About that M&V 2.0 thing, this too moves to the implementer side of the coin. They are motivated by impacts, NOT widgets, and therefore, their incentive is to figure out how to optimize products and services.

I think I’ll write more on this topic next week.

Jeff Ihnen
