ACEEE just released its first assessment of energy efficiency potential studies (potential studies) across the land – its first in 10 years! Hallelujah! I’ve been waiting all this time. That may not be true, but certainly I am interested in potential studies, so this is a great excuse and opportunity to write about it.
Potential studies are used by states and utilities to determine technical, economic, and achievable energy savings for purposes of setting savings targets and designing EE portfolios by assessing key technologies and market applications…among other things.
Technical potential is the savings that could be achieved if all existing energy-consuming equipment were replaced with something that is more efficient and available in the marketplace today.
Economic potential is a subset of the technical potential and only includes “cost effective” replacements and upgrades.
Achievable potential introduces the reality that not every single economic / cost effective measure is going to be implemented. There are limits to what a program can influence.
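The nesting of these three tiers can be sketched with a few lines of arithmetic. All numbers below are invented purely for illustration – real studies build these figures up measure by measure from saturation, cost, and participation data:

```python
# Hypothetical illustration of how the three potential tiers nest.
# Every number here is made up; real studies derive them from
# measure-level data, not single top-down fractions.

baseline_consumption_gwh = 1000  # annual energy use of the study area

# Technical potential: every eligible efficient replacement installed
# everywhere it physically fits.
technical_fraction = 0.30
technical_gwh = baseline_consumption_gwh * technical_fraction

# Economic potential: subset of technical that passes a
# cost-effectiveness screen (e.g., benefit-cost ratio >= 1.0).
economic_fraction_of_technical = 0.70
economic_gwh = technical_gwh * economic_fraction_of_technical

# Achievable potential: discounted further for real-world limits on
# what programs can actually influence.
achievable_fraction_of_economic = 0.50
achievable_gwh = economic_gwh * achievable_fraction_of_economic

print(f"Technical:  {technical_gwh:.0f} GWh")   # 300 GWh
print(f"Economic:   {economic_gwh:.0f} GWh")    # 210 GWh
print(f"Achievable: {achievable_gwh:.0f} GWh")  # 105 GWh
```

The point of the sketch is only the ordering: achievable is a subset of economic, which is a subset of technical. The fractions themselves are exactly the kind of assumptions the ACEEE report says are rarely disclosed.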
Once again, I will provide behind-the-curtain insights into the sausage making with Tarot Cards. But Jeff, you’ve never conducted a potential study, so what do you know? I know this: I’ve projected savings potential for hundreds, if not thousands, of buildings. I’ve been in the business for 20 years, and I’ve been obsessed with convincing people that energy efficiency is the right thing to do – reading people, hearing their excuses, er, make that barriers, and working with many people in the research and evaluation community, some of whom don’t like performing potential studies. I’ll get to that next.
What I like about the ACEEE report is that it backs my assertions for the weaknesses of potential studies. From the executive summary:
Some of the more important inputs for which detailed information can be opaque or missing entirely include models for forecasting participation rates; assumptions about these rates; assumptions about incentive levels; the impacts of codes, standards, and emerging technologies; policy limitations; and utility avoided-cost assumptions. Many of these assumptions are inherent in the models used and in specific inputs, and as a result they are rarely disclosed or discussed, often for proprietary reasons. Lack of transparency about assumptions is a major issue for potential studies.
I’ll explain it more succinctly: Potential studies include vast quantities of guessing and pulling numbers out of the air.
But I am fair. The reason for the guesswork by Tarot Cards is limited resources, skimpy budgets, and in some cases, crazy-short timelines to do these things. One example: an RFP I read for a potential study limited the scope of work to secondary research only – you know, evaluating reports that have already been published.
Technologies, saturation, and market acceptance can change drastically over the 3-5 year period between potential studies. Moreover, prices of efficient technologies can change drastically over the same period. This greatly affects economic and achievable potential – the primary purpose of the potential study. It’s quite difficult to assess these things without primary research.
My opinion: if there is no primary data collection, that is, calling customers and doing site visits, particularly for large commercial and industrial users, throw down the Tarot Cards and just use the rear view mirror. Assess what is happening with programs now, via evaluation, to project where to go the next program planning cycle.
Potential studies can provide a reasonable assessment for widgets – consumer products, light bulbs, appliances, and heating and cooling equipment. But analysts must get in the field to inspect this stuff: talk to customers, read the hieroglyphics of equipment nameplates, and take a wild guess at remaining useful life (typically an assumption). We can’t expect customers to know these things. Otherwise:
Analyst: What is the displacement of your car’s engine?
Customer: It’s in front of the car under the hood.
Analyst: Ok. Do you know how big your engine is?
Customer: I don’t know. It’s pretty crowded in there.
Analyst: Do you know how many cylinders the engine has?
Customer: Cylinders? Yes.
Analyst: Alrighty then. Let’s move on to the tires…
It’s exactly the same thing for cooling equipment. Is it an air-cooled chiller or an air-cooled condensing unit? Is it air cooled at all? What is air cooled? You have to get reasonably knowledgeable people in the field to get decent data on this stuff.
This uncertainty is about the easy stuff. What about the difficult stuff, which is where all the beef remains? Utilities are increasingly relying on custom efficiency, retrocommissioning, new construction, and process efficiency to hit their goals, and this is where the future is. Potential studies are almost entirely widget based and include analysis of widget penetration, market saturation, and emerging technologies. The good stuff, the big stuff, the cost-effective stuff isn’t really harder to quantify; it is more difficult to achieve because it takes building- and application-specific customized know-how.
My experience reviewing potential studies for these portfolio elements is that analysts use the rear view mirror approach: regression models (i.e., curve fits), essentially consisting of a carefully placed ruler if past performance looks like a straight line, or if not, maybe something they could get from a French curve, available on Amazon.com for about five bucks.
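The "carefully placed ruler" amounts to a simple least-squares line through past program-year savings, extrapolated one cycle ahead. Here is a minimal sketch with invented data points (the years and savings figures below are hypothetical, not from any actual study):

```python
# A least-squares "carefully placed ruler": fit a straight line to past
# program-year savings and extrapolate one year ahead.
# The data points are invented for illustration.

years = [2015, 2016, 2017, 2018, 2019]
savings_gwh = [42.0, 45.5, 48.1, 51.9, 54.6]  # hypothetical first-year savings

n = len(years)
mean_x = sum(years) / n
mean_y = sum(savings_gwh) / n

# Ordinary least squares slope and intercept.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(years, savings_gwh))
    / sum((x - mean_x) ** 2 for x in years)
)
intercept = mean_y - slope * mean_x

projected_2020 = slope * 2020 + intercept
print(f"Trend: {slope:.2f} GWh/year; projected 2020: {projected_2020:.1f} GWh")
```

This is exactly the rear-view-mirror projection described above: defensible when the market is stable, and blind to the step changes in technology, prices, and saturation that primary research would catch.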