Beyond the TRC: New Perspectives on Cost-effectiveness Testing

A Guest Contribution by Robin LeBaron, Managing Director of the National Home Performance Council

Good regulation, and ensuring that efficiency dollars are well spent, is in everyone’s interest – from ratepayers to the small contractors that help bring efficiency projects to life. The problem is that in most states, the current application of cost-effectiveness tests is actually impeding the success of energy efficiency programs, as well as broader public policy goals aimed at saving energy, reducing emissions, and growing jobs in the clean energy sector.

To make certain that ratepayer dollars are well spent on energy efficiency programs, program administrators (utilities or third-party administrators) are required to conduct cost-effectiveness screening, with the Total Resource Cost (TRC) test the most common of these screens. They have been doing this since they began offering rebates and other programs to help customers save energy and reduce their costs. But times have changed greatly since those early days, and in many cases the regulatory structures have yet to catch up. Evolving the ways that individual measures, programs, or even whole portfolios are screened could support strong energy efficiency and other demand-side programs that meet utilities’ energy needs at a lower cost than supply-side resources.

The problems with our current system of determining cost-effectiveness were brought home to me at the March 2011 national conference of Affordable Comfort Inc., where practitioners from across the U.S. gather to assess the state of the residential energy efficiency field and chart paths to move the industry forward.
I was particularly struck by how small business owners such as contractors and software vendors – not necessarily the people I would have expected to be concerned with the minutiae of techniques for measuring spillover – were asking public questions about the cost-effectiveness tests, and expressing real concern that the tests were strangling the industry. “The TRC can put you to sleep – and put you out of business,” one contractor lamented.

On the last day of the conference, the bubbling discontent appeared to hit the boiling point, and a number of stakeholders convened an impromptu meeting, headlined “whiteboard with beers,” to discuss the issue. Despite (or perhaps because of) the fact that the beer failed to materialize, the participants – including program administrators, utility representatives, former commission staff, U.S. Department of Energy staff, program implementers, consultants, contractors, businesses, and even a labor representative – seemed to agree: in their experience, the tests were constraining high-quality energy efficiency programs. And so they resolved to continue meeting as a national stakeholder group to solve the problem.

My organization, the National Home Performance Council, a national non-profit that works to support whole-house energy efficiency retrofits, assumed responsibility for convening the group. The stakeholders began by working to detail the problems that the tests were causing. It was clear from group members’ reports that the consequences of illogical and inconsistent test design and implementation were not trivial.
  • In some states, fear that an energy efficiency program “couldn’t pass the TRC” was enough to ensure that the program never moved past the early design phase;
  • For some programs, the tests imposed severe restrictions on program design, for example by constraining the measures that could be incorporated into a comprehensive whole-house or new construction program, even if the passed-over measures represented a one-time chance to achieve energy savings;
  • Field-level testing of specific jobs caused tremendous problems for one whole-house program by preventing contractors from closing the deal until the job had been screened, and by dictating what could be in a job scope, sometimes excluding measures even if a customer wanted them and was willing to pay for them;
  • For some programs, changes in the way the test was conducted required significant, mid-stream modifications in programs; and,
  • In extreme cases, program administrators were concerned that changes in the TRC might actually result in a program’s downsizing or elimination.
The stakeholders’ reports made it clear that whole-house energy efficiency upgrade programs were particularly vulnerable, but other types of programs, such as multifamily and new home construction, also had difficulties clearing the TRC.

The group began to explore potential solutions. Some stakeholders thought that changing the tests, or eliminating them altogether, might be the only way to fix the problem. But closer scrutiny suggested that many of the problems that energy efficiency programs faced were the result of test implementation that was illogical or inconsistent with the tests’ stated goals. It also became clear that, over the years, experts in the field had suggested many ways to address these problems, only to see their proposals languish at the hands of regulators.

The National Home Performance Council drafted a paper recommending a series of testing best practices based on many of these proposals. The paper stimulated considerable interest and critique, which made clear that a more comprehensive treatment of the subject was needed. Accordingly, with support from EFI (Energy Federation Inc.), in the spring of 2012 NHPC retained Synapse Energy Economics to write a detailed review of best practices for implementing cost-effectiveness tests. In July the Synapse team, led by its vice president and former Massachusetts Department of Public Utilities commissioner Tim Woolf, released Best Practices in Energy Efficiency Program Screening: How to Ensure that the Value of Energy Efficiency is Properly Accounted For. As the title suggests, the paper provided an exceptionally thorough review of how cost-effectiveness tests are currently conducted, and proposed a comprehensive set of best practices to improve testing.

One of the paper’s central arguments is that the goal of the Total Resource Cost (TRC) and Societal Cost (SCT) tests is to determine whether the benefits of a demand-side program to society outweigh its costs.
As a result, the test results make sense only if the test takes into account all costs and benefits of a program. Unless all costs and benefits are incorporated, the tests will provide inaccurate results – misleading at best, and detrimental to the interests of all stakeholders at worst.

The paper addresses a wide range of other issues. It calls for accurate measurement of all avoided costs – including not only energy, but also capacity, transmission and distribution costs, and the cost of complying with current or enacted environmental regulations. For net-to-gross calculations, the paper recommends measurement of spillover as well as free ridership. It also recommends review of the discount rate used in the test, with either a societal discount rate (for the SCT) or a risk-appropriate utility weighted average cost of capital (WACC) rate (for the TRC). For more on all these terms, NEEP’s EM&V Glossary of Terms and Acronyms provides a good crash course.

One of the primary reasons that implementation of the TRC and SCT tends to be biased towards costs rather than benefits is that the costs tend to be easier and less expensive to measure than the benefits. As a result, one of the paper’s recommendations deserves serious consideration: if a program doesn’t have the capacity or budget to arrive at a reasonable estimate of all costs and benefits, it should use the Program Administrator Cost (PAC) test, which has more easily quantifiable inputs, rather than the TRC or SCT.

At the program level, the paper recommends that the Societal Cost Test be used for screening (or the TRC with caveats, if the regulatory commission does not want to use the SCT). This gets to the key issue of ensuring that consumers benefit.
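The mechanics behind these recommendations reduce to a benefit-cost ratio: discount each year's avoided costs to present value, adjust for net-to-gross, and divide by program costs. The sketch below illustrates that arithmetic with entirely hypothetical numbers (real screens draw measure-level avoided-cost streams from utility planning data); it is not the Synapse paper's model, just the standard calculation its recommendations feed into.

```python
# Illustrative TRC-style benefit-cost screen. All inputs are hypothetical.

def npv(cash_flows, discount_rate):
    """Present value of a stream of annual values (year 1 = index 0)."""
    return sum(v / (1 + discount_rate) ** (t + 1)
               for t, v in enumerate(cash_flows))

# Gross annual avoided costs over a 10-year measure life ($/year):
# energy plus capacity, transmission/distribution, and environmental
# compliance -- the full set of avoided costs the paper calls for.
avoided_costs = [180.0] * 10

# Net-to-gross: subtract free ridership but also credit spillover,
# as the paper recommends, rather than counting free ridership alone.
free_ridership = 0.15
spillover = 0.10
ntg = 1 - free_ridership + spillover   # 0.95

discount_rate = 0.05    # e.g., a risk-appropriate utility WACC for the TRC
program_costs = 1200.0  # up-front participant plus administrator costs

benefits = ntg * npv(avoided_costs, discount_rate)
bcr = benefits / program_costs
print(f"Benefit-cost ratio: {bcr:.2f}")  # a ratio above 1.0 passes
```

Note how the result moves with the inputs the paper flags: omit an avoided-cost category or ignore spillover and the numerator shrinks, which is exactly the cost-side bias described above.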
At the portfolio level, however, the paper recommends that the programs be screened together using the Program Administrator Cost test, which provides assurance that the portfolio as a whole is delivering energy efficiency at a lower cost than the utility would incur providing the energy through alternative, supply-side resources. This recommendation, if implemented, could go a long way toward allaying regulatory concerns about the impact of programs on consumers.

Cost-effectiveness tests are a useful tool for ensuring that energy efficiency programs meet the public interest. But tests that are not truly comprehensive and well implemented only create inaccurate information, which, because it skews consistently against energy efficiency and demand-side resources, risks choking out high-quality programs before their potential is realized. Such testing is a real detriment to business and residential customers, who are deprived of opportunities to save on energy costs. The Synapse paper’s best practices section provides a practical way forward to remedy this problem, and NHPC looks forward to seeing commissions implement these best practices in the coming years.
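The portfolio-level logic can be made concrete with a toy calculation. In the sketch below (all figures invented for illustration), each program's present-value administrator cost is compared against the present-value avoided supply cost; one program falls short on its own, but the portfolio as a whole still clears the PAC screen, which is the flexibility the recommendation is meant to preserve.

```python
# Hypothetical portfolio-level PAC screen: do administrator costs come in
# below avoided supply-side costs? Program names and figures are invented.

portfolio = {
    # program: (PV administrator cost $, PV avoided supply cost $)
    "whole-house retrofit": (400_000, 550_000),
    "new construction":     (250_000, 240_000),  # fails in isolation
    "multifamily":          (150_000, 250_000),
}

total_cost = sum(cost for cost, _ in portfolio.values())
total_avoided = sum(avoided for _, avoided in portfolio.values())

# Screening the portfolio together lets a marginal program ride alongside
# strong ones, so long as the whole portfolio beats supply-side costs.
print(f"Portfolio PAC ratio: {total_avoided / total_cost:.2f}")
```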
