Without fail, it seems that every custom efficiency or self-directed impact evaluation we do has a controversial, giant project accounting for 25% (or more) of the program’s savings. It turns out the project shouldn’t even have been allowed into the program because it doesn’t qualify, and in many cases, setting that aside, the savings calculation is demonstrably wrong by the laws of thermodynamics and by the utility meter readings.
An analogy to the message today might be a home energy assessment. Our house was built to our liking thirteen years ago. I laid out the floor plan in about fifteen minutes on an 8.5×11 sheet of paper (literally). The architect pretty well stuck to that but added a couple features (for the better) to mask the fact that it was designed by an engineer with no style points. I would do a few things differently today with dimensions, but primarily for better envelope characteristics – continuous insulation for sheathing, thermal breaks and insulation for the basement concrete walls, and larger soffits for better shading. When it comes to tightness, it’s probably mediocre.
If I hired a decent home assessor, I would expect a bunch of recommendations, especially for plugging air leaks. I’m sure a worthwhile home assessor has learned and honed skills for finding and plugging substantial air leaks.
Suppose I order up an assessment, wait around for a couple of hours, and the guy says, “This house is quite new. [duh] There is nothing that can be done to save energy.” So what is my response? “That’s wonderful. What do I owe you for providing no value whatsoever? Get out of here.” Then I’d call the program administrator to complain, and after listening to 20 minutes of Al Jarreau, Air Supply, and Lionel Richie, I’d be told a grunt would call me back.
What is the purpose of The Energy Rant (big picture)? As described in the recent post Program Evaluation – Nellie You are Toast, a major purpose of the rant is to improve the industry from within – like the home assessor is supposed to do.
This all leads up to a recent multi-million dollar portfolio evaluation report that was called to my attention. The portfolio and resultant report featured a typical menu of programs, but the results were shocking. Eleven out of twelve natural gas measure categories had realization rates of 100%. Twelve out of seventeen electric measure categories had realization rates of precisely 94%. It was as though savings were decided by flipping a coin – one side featuring 100% and the other 94%. Perhaps next time at least use a Magic 8 Ball.
These results are physically and statistically impossible. So I read the methodology description: desk reviews, site visits, data logging, the whole nine yards. Findings from a decent sample of projects should take the form of a normal distribution, the bell curve, except that there is typically a pile of outliers at each end of the spectrum: some with zero or near-zero realization rates, and some with double or triple savings (realization rates of 200% and up). Results should be distributed roughly symmetrically about 100%, meaning 100% should be the single most likely outcome in an ideal world. Even then, in a perfectly representative sample with random results, the probability of landing on exactly 100% for any one category of measures (say, commercial air conditioning) is probably less than 10%.
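A quick back-of-the-envelope simulation illustrates the point. This is a sketch, not a model of any real portfolio: the lognormal noise level and sample size per category are assumptions chosen only to be roughly plausible for evaluation findings.

```python
import random

random.seed(42)

SIGMA = 0.3  # assumed spread of per-project realization rates (illustrative only)

def category_rr(n_projects=30, sigma=SIGMA):
    """Mean realization rate for one sampled measure category.

    Per-project realization rates are drawn lognormally, centered so the
    mean is 1.0 (100%), mimicking noisy verified-vs-claimed savings.
    """
    mu = -sigma ** 2 / 2  # centers the lognormal mean at 1.0
    draws = [random.lognormvariate(mu, sigma) for _ in range(n_projects)]
    return sum(draws) / len(draws)

# Simulate 17 measure categories, as in the electric portfolio above.
rates = [category_rr() for _ in range(17)]

# Count how many categories land on "precisely 94%" (within half a point).
near_94 = sum(1 for r in rates if abs(r - 0.94) < 0.005)

print([round(100 * r) for r in rates])  # category realization rates, in percent
print("categories within 0.5 points of exactly 94%:", near_94)
```

Run it a few times with different seeds: the category results scatter around 100%, and getting even two or three categories to land on the same value is rare. Twelve out of seventeen at precisely 94% simply does not happen by honest measurement.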
What should the results look like in these cases? Some randomness and disorder, perhaps? I would expect that given this number of categories, at the low end there would be something around 60% and at the high end, possibly as high as 130%.
In this case, as the buyer of this report, I’d seriously but sardonically ask for my money back. This does no one any good, not even the implementers, because only a chump would put stock in the results.
This is my fear for energy efficiency programs and evaluation, specifically – that the industry is heading down the cheap and ratty path to a metaphorical dive motel in the red light district.
These results are not only less likely than running the table in college football, not only less likely than running the table in college basketball (Indiana, 1976), and not only less likely than running the table in the NFL (Dolphins, 1972). They would be something like running the table on the PGA Tour, or 162 straight wins in MLB, plus sweeping every playoff series for 43 straight wins, ending the day after Thanksgiving with a World Series championship.
If I were buying evaluation reports, I’d look at examples, and not just the ones provided. I’d find others, and I’d talk to the people behind them as references. Rubber stamps and superficial hand-waving, with a smattering of shiny objects as distractions, should be obvious.
Realization Rate is the adjusted gross (verified) savings divided by the originally estimated savings. For example, a project claimed at 1,000,000 kWh that verifies at 940,000 kWh has a realization rate of 94%.