See No Evil
In almost every evaluation RFP, there is a requirement that the evaluation target 90/10 confidence and precision levels. What does that mean? Most people (non-statisticians) will say it means we can be 90% confident that the results of the evaluation are within 10% of the “real” savings. But is that really the case?
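In statistical terms, 90/10 is usually a sampling requirement: the half-width of a 90% confidence interval around the estimate must be no more than 10% of the estimate itself (relative precision). Here is a minimal sketch of that calculation, assuming a simple random sample of project realization rates; the values are hypothetical:

```python
import math

# Hypothetical realization rates for a sample of evaluated projects.
rates = [0.98, 1.02, 1.00, 0.97, 1.01, 0.99, 1.03, 1.00]

n = len(rates)
mean = sum(rates) / n
sd = math.sqrt(sum((r - mean) ** 2 for r in rates) / (n - 1))  # sample std. dev.
se = sd / math.sqrt(n)              # standard error of the mean
z = 1.645                           # two-sided z-value for 90% confidence
relative_precision = z * se / mean  # the "10" in 90/10 caps this ratio at 10%

print(f"mean RR = {mean:.3f}, relative precision = {relative_precision:.1%}")
print("meets 90/10" if relative_precision <= 0.10 else "fails 90/10")
```

Note that this bound reflects only sampling error. It says nothing about systematic bias in the estimates themselves, which is exactly the problem described below.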
All reviews are not created equal
Deciding which methods to employ during an evaluation is a complex process that must balance costs and benefits. When weighing the costs, it is important to recognize that the evaluation method itself will directly affect the results of the evaluation.
A project desk review typically involves a review of the project paperwork to verify that the completed project (based on the supplied invoices and descriptions) is consistent with the claimed savings. In some cases, the desk review also includes customer phone interviews. Crucially, if the paperwork does not provide enough information to justify a change, no change is made. As a result, desk-reviewed projects tend to have realization rates (the ratio of evaluated savings to claimed savings) close to 100% and tight confidence/precision bounds. The phrase “ignorance is bliss” comes to mind.
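To make that mechanic concrete, here is a toy sketch of the desk-review default; the function and values are hypothetical, not from any actual evaluation tool:

```python
# Toy sketch of the desk-review default (names hypothetical): evaluated
# savings start at the claimed value and change only when the project
# file contains evidence that supports a change.
def desk_review(claimed_kwh, documented_kwh=None):
    """Return evaluated savings; keep the claim when nothing contradicts it."""
    return documented_kwh if documented_kwh is not None else claimed_kwh

claimed = 100_000
evaluated = desk_review(claimed)                        # no evidence to change
print(f"realization rate = {evaluated / claimed:.0%}")  # 100%, by construction
```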
An onsite evaluation includes a desk review plus an onsite inspection of the equipment installed for the project, and in many cases metering. This allows a much deeper understanding of the project, as well as the ability to adjust the analysis to better reflect the savings specific to that customer. This added knowledge can provide valuable feedback to the program, but it comes with some “risk”: the project realization rates can be (in some cases significantly) greater or less than 100%, and the confidence/precision bounds are wider.
This was shown clearly in a paper presented at IEPEC last year, in which the same projects were evaluated with both desk reviews and onsites. For each program reviewed, the realization rate determined by the onsites did not even fall within the 90% error bound computed for that program from the desk reviews. The 90/10 requirement was met, but the answer was wrong.
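To illustrate the pitfall with invented numbers (these are not the figures from the IEPEC paper): desk reviews that rarely change anything produce a tight 90% bound around 100%, and an onsite estimate can land far outside it:

```python
import math

# Hypothetical numbers, for illustration only (not from the IEPEC paper).
desk_rates = [1.00, 1.00, 0.99, 1.00, 1.01, 1.00, 0.98, 1.00]  # few changes -> tight bound
onsite_rr = 0.82                                               # onsite found real differences

n = len(desk_rates)
mean = sum(desk_rates) / n
sd = math.sqrt(sum((r - mean) ** 2 for r in desk_rates) / (n - 1))
half_width = 1.645 * sd / math.sqrt(n)   # 90% confidence half-width

low, high = mean - half_width, mean + half_width
print(f"desk-review 90% bound: [{low:.3f}, {high:.3f}]")   # narrow, near 100%
print(f"onsite RR {onsite_rr:.2f} inside bound? {low <= onsite_rr <= high}")  # False
```

The desk-review bound looks excellent precisely because the method suppresses variation, not because the estimate is accurate.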
But we have a statewide TRM!
One argument frequently heard is “We don’t need to measure X, Y, or Z because we have a statewide TRM. The TRM tells us what those values are.”
It is true that a Technical Reference Manual (TRM) will often define default values or assumptions to be used for prescriptive projects. However, it is important to remember that these are still assumptions, and they are not necessarily accurate for every project.
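As a hypothetical illustration, consider a typical prescriptive lighting calculation in which a TRM-style default for annual operating hours drives the deemed savings (all values below are invented):

```python
# A typical prescriptive lighting calculation (values and defaults hypothetical),
# showing how a TRM default assumption drives the deemed savings.
TRM_DEFAULT_HOURS = 3_500      # assumed annual operating hours from a TRM table

def lighting_kwh(fixtures, base_w, eff_w, hours):
    """Annual kWh savings: fixture count x wattage reduction x operating hours."""
    return fixtures * (base_w - eff_w) / 1000 * hours

deemed = lighting_kwh(200, 100, 50, TRM_DEFAULT_HOURS)   # with the TRM default
actual = lighting_kwh(200, 100, 50, 2_000)               # if metering shows 2,000 h
print(f"deemed = {deemed:,.0f} kWh, actual = {actual:,.0f} kWh")
print(f"realization rate = {actual / deemed:.0%}")       # ~57% -- the default was wrong here
```

A default that is reasonable on average can still be far off for an individual customer, and only a method that actually measures will reveal it.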
Balance
Neither a desk review nor an onsite evaluation is appropriate for every circumstance. Not every program needs all of its projects evaluated with onsites every year. However, most programs will benefit from onsites at least once every few years.