
Program Evaluation – Most Excellent Avocado Practices

By Jeff Ihnen | July 31, 2017 | Energy Rant

Everyone has applied for health insurance[1], and many of you have applied for life insurance. Anyone over, oh, 40, 50, or for sure 60, knows health flaws start to accumulate like a dumpster’s worth of unwelcome gifts, used shoes, and, for those of you who stay current with the style trends of the day, outdated clothes.

Come to think of it, all you digital natives, back in the day before weddingregistry.com (where you shop and make other people fill your house with stuff you want), we just took whatever the aunts, uncles, or more especially, great aunts and uncles, decided to unload. It wasn’t pretty. You have no idea what “it’s the thought that counts” means.

Imperfections – Deal With Them

Ok. Efficiency programs are like flawed people. When applying for health or life insurance, I would not advise sweeping a few medication prescriptions under the rug or omitting the fact that your parents, grandparents, and great-grandparents had heart disease, sarcomas, or other types of cancer. Life happens.

Know what? It’s a good thing to face the music, even if it is for the loathed insurance company. Living in a world of denial and unicorns can be dangerous. That which is measured and reported to those responsible improves.

Here I pick up on last week’s Rant, Drive-By Evaluation, which described the primary driver for some evaluations. That driver is to make the program administrator look good by making the programs look good: mainly by doing process evaluation[2] and blowing past the impact evaluation[3].

Conflicts of Interest?

Once upon a time, I recall an industry veteran opining that evaluators should only work for regulators. This is the case in some states, like California and Wisconsin, two of the most mature, but diverging, efficiency markets in the country. California’s model has less conflict of interest because the utilities are the program administrators. The regulators, as with the supply side, are watchdogs on the demand side of ratepayer-funded programs. This makes sense, although I’m not going to say it is the best approach. There are issues, but readers will have to take those up with me offline.

While the Public Service Commission of Wisconsin is not the administrator, they hire the administrator, and they hire the evaluator. One could argue this isn’t much different from a utility hiring a third-party program administrator and also hiring an evaluation contractor.

It is fine for a utility to contract with an evaluation firm, but I think regulators need to be at the table for evaluation planning and certainly as the results are reported. Are regulators not fully engaged when utilities want to add a power plant, upgrade substations, or raise prices with a rate case? Do we not want demand side resources to compete with supply side resources? If so, why should evaluation be so opaque?

Efficiency as a Resource

Michaels provides program evaluation services in a couple of states where utilities need accurate impacts. One of these utilities, like many others, is shutting down a lot of coal-fired generation, and they need to know their efficiency impacts for resource planning. It matters to them and their customers.

Other utilities for which we provide evaluation services bid efficiency resources into forward capacity markets. We started evaluation services for one utility several years ago. In the meantime, they joined an RTO that uses the forward capacity model. Suddenly, we weren’t just verifying installations and, per direction, blindly taking the technical reference manual savings (next section). Now we develop hourly load shapes and savings shapes for every hour of the year. For reference, a couple of interesting load shape papers are included here and here.
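
For the curious, here is a minimal sketch of the mechanics, assuming the simplest possible case. The function and the numbers are hypothetical, for illustration only, not our production tooling: take the annual verified kWh for a measure and spread it across the 8,760 hours of the year with a normalized load shape.

```python
# Minimal sketch (hypothetical, illustration only): scale a normalized 8760
# load shape to an hourly kWh savings shape for capacity-market purposes.

HOURS_PER_YEAR = 8760

def hourly_savings_shape(annual_kwh, normalized_shape):
    """Scale a normalized 8760 shape (weights over the year) to hourly kWh."""
    if len(normalized_shape) != HOURS_PER_YEAR:
        raise ValueError("Expected one weight per hour of the year")
    total = sum(normalized_shape)
    return [annual_kwh * w / total for w in normalized_shape]

# Trivial case: a flat, always-on measure such as 24/7 refrigeration controls.
flat_shape = [1.0] * HOURS_PER_YEAR
savings = hourly_savings_shape(120_000, flat_shape)  # 120,000 kWh/yr is made up
print(round(savings[0], 2))  # about 13.7 kWh in every hour for a flat measure
```

A flat shape is the trivial case. Real measures, like lighting or cooling, get shapes that peak when the equipment actually runs, which is precisely what the capacity market pays for.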

Wrap Up and Excellent Practices

As I was researching technical reference manuals (TRMs), and because I always like to leave readers with more than they can stand to learn, I am passing along this ACEEE Summer Study paper for “process best practices” for maintaining TRMs.

Wow! Mark that down for evaluation too! Summarizing from that paper, I have revised the list somewhat to apply to evaluation.

  • Use of an open and transparent technical collaborative, including the implementation contractor(s), administrator, regulatory staff, and, if they aren’t a major pain, intervenors. This:
    • Fosters technical understanding between regulators and stakeholders.
    • Builds regulatory (and RTO) confidence in results.
    • Results in a collaborative consensus-building process.
  • TRM values need updating. Here is a secret: it costs money to do that. It costs money to get it right. It costs money to not allow ratepayers to be ripped off. Primary data collection and metering are required.
  • Out-of-date measures must be expelled. Those would be free riders in transformed markets. This is one of the few things net-to-gross is good for.
  • “Key measure parameters must be updated with current and timely EM&V.” It’s worth repeating.
  • “It is important to analyze which measure parameters have the greatest impact on key EE metrics such as savings and cost effectiveness, and spend greatest data collection resources on refining the most impactful measure parameters.” To us, this isn’t best practice; it is standard practice (see the sketch after this list).
  • Baselines and costs must be updated regularly. This costs money. News alert!
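
Since I just called that parameter-impact analysis standard practice, here is a hedged sketch of what it can look like. The lighting savings formula below is a common TRM convention, but the baseline values and uncertainty bands are hypothetical numbers I picked for illustration, nothing from the paper or from any particular TRM:

```python
# Illustrative sketch (hypothetical values): rank deemed-savings parameters by
# impact, i.e., sensitivity weighted by how uncertain each input actually is.

def lighting_kwh(delta_watts, annual_hours, interactive_factor):
    """Deemed kWh savings: wattage reduction (kW) x hours of use x HVAC interaction."""
    return delta_watts / 1000.0 * annual_hours * interactive_factor

baseline = {"delta_watts": 40.0, "annual_hours": 3500.0, "interactive_factor": 1.05}

# Plausible relative uncertainty for each input (made up): nameplate wattage
# is well known; hours of use usually is not, absent metering.
uncertainty = {"delta_watts": 0.05, "annual_hours": 0.30, "interactive_factor": 0.10}

def impact_ranking(params, bounds):
    """Swing in savings when each parameter moves across its uncertainty band."""
    base = lighting_kwh(**params)
    swings = {}
    for name, u in bounds.items():
        hi = dict(params, **{name: params[name] * (1 + u)})
        lo = dict(params, **{name: params[name] * (1 - u)})
        swings[name] = (lighting_kwh(**hi) - lighting_kwh(**lo)) / base
    return sorted(swings.items(), key=lambda kv: -kv[1])

for name, swing in impact_ranking(baseline, uncertainty):
    print(f"{name}: {swing:+.1%} swing across its uncertainty band")
# Hours of use dominates: that is why metering it earns its cost.
```

One-at-a-time perturbation is crude, but it is enough to show where the metering dollars belong.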

[1] Except for freeloaders 26 years of age and under – your turn is coming.

[2] Process evaluation = how does the program work for contractors, trade allies, and customers? How effective is marketing? Etc.

[3] Impact evaluation = What are the inspected, measured savings?
