
Evaluation, Measurement and Verification: From a Neander Cave

By Jeff Ihnen | July 13, 2015 | Energy Efficiency, Energy Rant

A title like Lies, Damned Lies and Modeling: Energy Efficiency’s Problem With Tracking Savings sounds like a natural fit for The Energy Rant, and in this regard, it did not disappoint. The article’s contributions can be summarized as follows: professionals serving the evaluation, measurement and verification (EM&V) business are Neanderthals. Having just discovered the open flame, these grunting upright creatures appear to be working on their next great discovery: rolling objects.

Okay, smarty pants. The challenges for EM&V professionals boil down to two words:

  • Money
  • Access

Little Money

The article contends that EM&V – the verification-of-savings part, in particular – relies excessively on billing regression and self-report surveys. Billing regression is a sexy term for comparing energy bills after implementation of an energy efficiency measure[1] to energy bills prior to implementation. Self-report surveys essentially ask participants questions and take their word for it.
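For the curious, here is a minimal sketch of what a billing regression boils down to – a weather-normalized pre/post comparison. Every number and variable name below is hypothetical, and real evaluations control for far more than heating degree days:

```python
# Minimal billing-regression sketch (all data hypothetical).
# Fit monthly kWh against heating degree days (HDD) for the pre-retrofit
# year, predict what the building would have used under post-retrofit
# weather, and call the gap between prediction and actual bills "savings."
import numpy as np

hdd_pre  = np.array([1100, 950, 700, 400, 150, 30, 10, 15, 120, 450, 800, 1050])
kwh_pre  = np.array([2150, 1980, 1700, 1300, 1000, 870, 850, 860, 980, 1350, 1800, 2100])
hdd_post = np.array([1080, 990, 650, 420, 140, 25, 12, 20, 110, 430, 820, 1010])
kwh_post = np.array([1900, 1820, 1500, 1180, 950, 840, 830, 845, 930, 1220, 1620, 1880])

# Ordinary least squares: kWh = base_load + slope * HDD
slope, base = np.polyfit(hdd_pre, kwh_pre, 1)

# Predict post-period use from post-period weather, then difference the bills.
kwh_predicted = base + slope * hdd_post
savings = (kwh_predicted - kwh_post).sum()
print(f"Weather-normalized annual savings: {savings:,.0f} kWh")
```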

Why do these EM&V creatures engage in such primitive activity? Lack of sufficient money, for one. Like many industries in which it is impossible to compare oranges to oranges, we (yes, we at Michaels are part of the Homo sapiens subspecies) are our own worst enemy. Somebody will always provide EM&V, or any other EE service, for less money. You get what you pay for.

Buyers in many industries head down this slippery slope and then complain incessantly, as many in the above article do, about performance and results. Other examples of sliding services include architecture and engineering services, building construction, controls implementation, and building commissioning. Consumers of these services think the outcomes they are buying are all the same – that the answers are correct, that only the price differs, that the energy performance will be there. NO! It will not, because the market expects king crab legs at imitation crab prices, and somebody will always claim their fake crab is real crab. I’m not saying they are lying, but the value is priced out of the project.

Intermission

I will cover energy modeling and simulation in another post, but the article quotes Michael Blasnik, who indicates that traditional modeling results can be off by double digits. Oooo! Call the cops! Anyone who thinks they can consistently nail savings projections within single digits is living in Neverland.

He goes on to say the “scandal” is that utilities rely too much on Neanderthals using “projections, models, and widgets” rather than energy data. Mr. Blasnik is a senior building scientist for Nest Labs, so we can see where this is coming from. Just last week I cited two studies showing that programmable thermostats do not save energy. Nest is a learning thermostat, but the reasons programmable thermostats don’t save energy apply just the same to a learning thermostat.

And for full disclosure, I have five Nest devices in my house, one of which is the Nest thermostat.

Anyway, the only way to assess savings from envelope measures and thermostats is billing regression, not building simulations and modeling. This is the way it is done in the industry. If we want to learn why programmable thermostats don’t save energy, a lot more digging and expense is required. However, that would be a waste of time because it is not going to change the primary reason thermostats fail at the meter: human behavior before the new thermostat is installed.

Irony alert: I explain in the second paragraph above that the article says we rely excessively on billing regression. Which is it? Crummy models or billing regression? Having their cake and eating it too?

Tough Access

The article describes implementers taking matters into their own hands and monitoring savings via metering and the cloud – and using “intelligent efficiency”. This is good, but here is what often happens: they don’t account for the interactive effects of multiple programs influencing the same customer. One famous implementer is notorious for using billing regression, taking credit for behavior changes, and not accounting for the five measures implemented and credited to another program. Trust me; this isn’t the only place things can go wrong, either.

Evaluators need to investigate and tweeze these needles from the haystack, and this presents another problem: access to data. Many times implementers hide behind the “proprietary” red herring, as though they have devised a way to trick the second law of thermodynamics[2]. Their analytical model may very well violate the second law. Let us look at it. And no, we don’t trust it until we investigate it. That is our job.

Finally, as evaluators, we need customer information, including energy use. Every single time – always, always, always – waiting for this information causes delay and added cost. Interval data (readings every 15 minutes or so) are even more difficult to get. Oftentimes the utility needs to go out to the storage shed, so to speak, to retrieve the older data we need.

Last Word

All of these complaints can be ameliorated with more access and more money. You want more relevant, real-time evaluation? More than one sample (for the entire program year) is needed, or the results will not be representative, especially with the annual year-end rush of project applications. One or two large custom projects can make or break an entire portfolio – and they may not arrive until late in the year.
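To put hypothetical numbers on that last point: the sketch below invents a portfolio of 100 small projects, evaluated mid-year at a 90% realization rate, and one large custom project that lands in December at 60%. None of these figures come from a real program:

```python
# Hypothetical illustration: why a single mid-year evaluation sample can
# misstate portfolio results when large custom projects arrive late.
claimed_small  = [50.0] * 100   # 100 small projects, 50 MWh claimed each
realized_small = [45.0] * 100   # evaluation verifies 45 MWh each (90%)
claimed_big, realized_big = 5000.0, 3000.0  # one December project at 60%

# Mid-year sample: the big project has not arrived yet.
midyear_rate = sum(realized_small) / sum(claimed_small)

# Full-year result: the single custom project dominates the portfolio.
fullyear_rate = (sum(realized_small) + realized_big) / \
                (sum(claimed_small) + claimed_big)

print(f"Mid-year realization rate:  {midyear_rate:.0%}")   # 90%
print(f"Full-year realization rate: {fullyear_rate:.0%}")  # 75%
```

One late project drags the portfolio’s realization rate from 90% to 75%, which is why a single early sample cannot stand in for the whole program year.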

[1] An EE measure can be adding insulation, replacing a refrigerator, installing a control system, and so on.

[2] The second law says water does not run uphill without a pump, and your refrigerator won’t work unless you plug it in: heat does not flow from something cold to something warm without energy input.

Jeff Ihnen
