This post is brought to you by the International Energy Program Evaluation Conference (IEPEC), circa 19… er, 2015. I moderated one session featuring four great papers and presentations on residential space heating and cooling, and I sat in on a concurrent session for nearly every timeslot of the conference. The theme I found, which pleased me greatly, is that doing useful research and evaluation is challenging and expensive.
The reason it pleases me is that, well, getting things right is everything, but it also levels the playing field. I hate losing bids, but it is less painful to lose to firms who do good work. It is painful to lose to a firm known to do lightweight, “tell me what I want to hear” work – at 70% of the cost required to do a good job.
One of my moderated papers was leaked on this blog, incognito, back in April. That post, Duct Leakage, The Results are In, referenced this paper by Applied Energy Group, now available for public consumption. The point of my April post was that contrary to popular thinking, duct leakage in homes rarely results in wasted energy and may even save energy. The AEG paper, at minimum, confirmed the lack of waste: their fine analysis and study found no savings from duct sealing.
Another paper in my session, by Navigant, demonstrated, as I described in Condensing Boilers – Test for Show, Ship for Dough, that savings often fall short because proper controls are not installed, or the installers didn’t set the controls for performance. We have found the same thing, more often than not, in the neighboring state of Connecticut and other random places.
Another paper by Nexant found that extensive training of literally thousands of residential air conditioning technicians resulted in no discernible impacts. They analyzed at least 10 installations before training and 10 after, installed by the same individuals, and found no difference in cooling performance. I have my own ideas about why this program didn’t work; I will cover that, along with other training failures, in another post someday.
Our good friend Bob Wirtshafter was sitting in the middle of the audience for our session. One of Bob’s comments/questions was in effect [paraphrasing], “I’ve been in this business [evaluation] for thirty years and I’ve never heard or seen anything like this. Have you guys been hiding this stuff?”
I don’t recall the answers, but my mental answer was that the panelists/authors did great work: knowing what to look for, getting it, analyzing it, and calling it like it is, not like someone might want it to be. Quite possibly, the research methodologies, and in these cases the engineering, are better or improving over time. More likely, in my opinion, there is real differentiation from firm to firm.
I concluded our session with this statement that I made up on the fly: “Great findings and great results are often, if not usually, two different things.”
Buyers Sort it Out
How does a buyer – a utility, regulatory agency or their consultant – determine the differences among bids/bidders? Certainly, reviewing publicly available evaluation reports is one way to do it. There is nothing more compelling than work examples. But another, more concise source is these conference papers. They provide insights, in an executive-summary format, into how the firm being considered thinks and produces value.
When we, at Michaels, interview potential hires, I don’t so much want to hear about them or what they’ve done. What do they know, and how do they approach things? We ask pointed, off-the-wall questions (yes, they are politically correct and HR-friendly). When I want to talk with references, I don’t call the contacts provided. Why would I? They are presumably prepped and have good rapport with the candidate.
Instead, find another employer or client for whom the candidate or bidder has worked and call them. Posted references have sealed-lips policies anyway, and they typically can’t say more than “I know them” or “yes, they worked for us”. Thanks. Send my love to your corporate legal department along with the science experiment growing in the back of the fridge.
At some point in my infinite spare time, I would like to aggregate evaluation data by service provider and draw some comparisons. I would be looking for realization rates, net-to-gross (wow – another reference to Bob Wirtshafter), and overall process results. Which providers are the best practitioners to improve programs?
Consider this: as well as being an efficiency nut, I am a running freak. I’ll go into the details in a future post, but I’m always trying new things to run more efficiently and relaxed (i.e., faster). Over the years, I have visited many a practitioner to help me with this, but they rarely do more than address the symptom of the day.
Poor evaluations.
I would rather they tell me a half dozen primary things to improve, and another list of things to do once those are mastered: a decent assessment and expert advice.
The same goes for evaluation. I’m not even close to my potential. Don’t tell me how great I’m doing. Tell me what needs improvement so I can improve!