Evaluability is a new-ish term making the rounds in the DSM program evaluation community. While there is no dictionary definition, the general idea is that it is a measure of whether a program can be evaluated cost-effectively. A highly evaluable program has good data collection, clear processes, and transparent savings calculations. The opposite is a black-box program with little documentation and poor tracking data.
Making a program evaluable is not the main goal of any program manager, nor should it be (hint: the first goal should be a positive customer experience). However, program evaluation can be a very useful tool for program managers. Not only does it verify that impacts are being achieved, but customer satisfaction, program design changes, market changes, and customer behavior can all be examined — provided there is enough data to accurately assess those research goals (i.e., the program is highly evaluable). Without accurate customer information, technical information about baseline conditions, or project dates, evaluators struggle to complete the research that would benefit the program. This ultimately reduces both the cost effectiveness of evaluation and its usefulness in furthering program development.
Accurate project information is especially important for behavior programs. Program field staff or trade allies have the first, and usually the only, opportunity to determine and record current customer behavior. Considering evaluation data needs up front means information can be collected directly from customers when it is most relevant, keeping evaluability high. Unless evaluators are tagging along during initial customer contacts, collecting many pieces of information a year or more after program involvement can lead to questionable results. Failing to capture accurate assessments of current schedules, temperature settings, and usage patterns is a missed opportunity, because those details also help immensely with interpreting any results that evaluators might find.
Thinking through the evaluation strategy during program planning is the best way to ensure programs have high evaluability. Most evaluators are more than willing to work through concerns, data collection suggestions, and analysis methodologies with program staff as early as possible. Working together also fosters a more collaborative relationship with evaluation contractors and puts responsibility on evaluators to see the program through to a successful completion. More involvement means more engagement, more data, more results, more lessons learned, and ultimately more customer benefits.