
Fish Fried Conversations of Efficiency

By Jeff Ihnen | August 10, 2021 | Energy Rant

As described last week, net savings and program attribution are measures of an efficiency program’s influence on making a project happen for utility customers. Energy savings can carry anywhere from 0% to 100% of the weight in motivating a customer to do a project, and yet accurate attribution results may be 90% or better. The role of energy savings in a decision can be largely irrelevant in determining attribution. How? Non-energy benefits!

The situation reminds me of fluid dynamics, a core course in mechanical engineering. There are major friction losses and minor friction losses. Major losses come from fluid friction in a straight pipe, or maybe on a surface in external flow, like an airplane fuselage. Minor losses are due to fittings and other things – elbows, tees, strainers, valves, or flaps and landing gear on airplanes. Minor losses can swamp major losses, just as non-energy benefits can swamp energy benefits.
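To see how the “minor” losses win, here is a minimal sketch using the Darcy–Weisbach relations, with assumed, illustrative numbers (mine, not from any particular system), in which a handful of fittings on a short pipe run out-loses the pipe itself:

```python
# Illustrative major vs. minor head losses via Darcy-Weisbach.
# All numbers are assumed for demonstration only.

g = 9.81   # gravitational acceleration, m/s^2
v = 2.0    # flow velocity, m/s
f = 0.02   # Darcy friction factor (assumed turbulent flow)
L = 10.0   # straight pipe length, m
D = 0.05   # pipe inside diameter, m

# Major loss: straight-pipe friction, h = f * (L/D) * v^2 / (2g)
h_major = f * (L / D) * v**2 / (2 * g)

# Minor losses: fittings, h = sum(K) * v^2 / (2g)
# Assumed coefficients: one globe valve (10), two elbows (0.9 each), one tee (2)
K_total = 10.0 + 2 * 0.9 + 2.0
h_minor = K_total * v**2 / (2 * g)

print(f"Major loss: {h_major:.2f} m of head")  # ~0.82 m
print(f"Minor loss: {h_minor:.2f} m of head")  # ~2.81 m
```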

The attribution questions are:

  1. What happened with the program?
  2. What would have happened without the program?

Poor Questions

Number 2 is certainly a challenging question to answer, and there is no correct answer. However, since energy savings, incentives, money savings, etc., are only part of the sauce, why do I see questions like the following in net-savings protocols? They ask about the “major” losses when the “minor” ones may be much greater.

  • Did you receive promotional materials from the program?
  • Did you participate in this program before?
  • Did you get information from a program representative?
  • And of course, on a scale of one to ten, how much did the program influence your decision to install efficient equipment?

Meeting Customers Where They Are

On the program side of the football, a portfolio should come as a united front of problem-solving for customers. Solutions include improving productivity, removing bottlenecks, reducing scrap and lost material, improving product quality, getting more customers in the door, or expanding the business. In fact, I surmise that customers are more interested in any of this stuff than they are in getting help to save energy or be more efficient. “Energy efficiency” may be a turnoff to customers.

A customer’s perspective may be, “Yeah, I don’t need any help with that. Get out of here. I have bigger fish to fry.”
A helpful program would reply, “Oh, yeah? Tell me about those fish. How can I help?”

So, we reel in the customer by solving their fish-fry issues. They couldn’t care less about energy savings initially, but they don’t mind the $40,000 incentive check that comes later. When the attribution team comes along with its single-variable battery of questions, what will this person say about a “program” or energy savings? “What program?” and “don’t care,” respectively. Attribution: 0.0

Wrong!

Program implementers and evaluators for commercial and industrial customers need to meet them where they are and not come in like an inquisition dropping the efficiency anvil on their heads. Once the ice is thawed, we can start to learn. At the appropriate time, the evaluator can ask about a project the customer did.

“I see here you ___. Can you tell me about that project and how it’s working for you? How did this project come about?”

Be Sneaky

Find out why something happened without asking why it happened. That is an art of writing, and maybe of psychiatry. It might require tons more work, like following up with designers, contractors, suppliers, and wholesalers, learning their standard practices, and having similar discussions with them – for every project in the sample. Sound complex and expensive? Do we want an accurate result or pin-the-tail-on-the-donkey’s ear? To me, it is better to learn something that moves the needle than to run a compliance-driven routine of poor questions that produces inaccurate results of no benefit.

Randomizing Shame and Pride Away

Swinging around to the social desirability bias again, I promised to describe some interesting ways to avoid it. As a reminder, social desirability bias is the tendency for people taking a survey to take credit for good things, blame others for bad things, inflate their desirable traits, hide their negative characteristics, etc.

I found this paper on ResearchGate to be interesting: Reducing Social Bias Error Through Randomized Response. In simplest terms, it makes it safe for respondents to provide a socially undesirable answer. For example, tell the respondent to flip a coin. If the result is tails, they must answer “little or no influence” (or the opposite), regardless of the truth. If it’s heads, they provide their honest answer. Because no one knows which respondents were forced, the results stay aggregated and anonymous, and participants feel freer to provide accurate answers. The same can be done for one-to-five-type responses; rather than a coin, use a die. Filtering out the fake numbers is simple math.
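Here is a minimal sketch of that simple math, with an assumed true rate and forced answer (my illustrative numbers, not the paper’s): since half the respondents are forced to give the undesirable answer, the observed share is 0.5 × 1 + 0.5 × true rate, and the true rate falls out by algebra.

```python
# Coin-flip randomized response: recover the true rate from the
# aggregate. Illustrative sketch with assumed numbers, not the
# paper's procedure verbatim.
import random

random.seed(42)
TRUE_RATE = 0.30   # assumed true share answering "little or no influence"
N = 100_000        # simulated respondents

observed = 0
for _ in range(N):
    if random.random() < 0.5:          # tails: forced answer
        observed += 1                   # always "little or no influence"
    elif random.random() < TRUE_RATE:   # heads: honest answer
        observed += 1

p_obs = observed / N
# P(observed) = 0.5 * 1 + 0.5 * p_true  =>  p_true = 2 * p_obs - 1
p_true = 2 * p_obs - 1
print(f"Observed: {p_obs:.3f}, recovered true rate: {p_true:.3f}")
```

The die-based version for one-to-five responses works the same way: a known fraction of answers is forced and spread uniformly across the scale, so subtracting that fraction from each response bucket recovers the true distribution.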

The bottom line is, I prefer the friendly investigative approach because there is a wide range of possible answers and insights to uncover. The randomized safety shields do not address this; they only provide cover to answer one question while there may be many.
