In our work to identify the most effective ways to help animals, ACE employs both qualitative and quantitative strategies. One way that we evaluate programs (or groups of programs) quantitatively is by assigning numerical values to their immediate costs and benefits in order to model their cost effectiveness. For instance, we might estimate the number of animals helped by a particular ad campaign and the costs that were invested in that campaign. We then use those numbers to calculate a cost-effectiveness estimate (CEE) in terms of “lives spared per dollar” or “years of suffering averted per dollar.” These estimates allow us to directly compare different programs and charities, which helps us identify those that help the most animals per dollar.
Our cost-effectiveness estimates represent approximations of the costs and benefits of each program. They are highly uncertain, in part because there is still very little evidence about the effects of most animal advocacy interventions. Our estimates are also subject to bias and other sources of error. We do our best to communicate the tentative nature of our CEEs to our audience, but we still worry that publishing them might give the impression that we have a higher degree of confidence in them than we actually do.
On this page, we describe our process for developing CEEs. We also describe the role that they play in our intervention reports and charity evaluations. We explain some of the challenges in creating CEEs and the risks of making them public. We conclude by explaining why we think the benefits of publishing CEEs are worth the risks.
Table of Contents
- How We Calculate Cost-Effectiveness Estimates
- Identifying Costs and Benefits
- Assigning Quantitative Values to Costs and Benefits
- Creating a Cost-Effectiveness Model with Guesstimate
- Other Examples
- How We Use Cost-Effectiveness Estimates
- Challenges and Uncertainties Associated with Cost-Effectiveness Estimates
- Our Cost-Effectiveness Estimates Are Approximations
- Assigning Quantitative Values Fails to Resolve Uncertainty
- Our Models Are Subject to Bias and Error
- There Are Other Possible Approaches to Making CEEs
- Risks of Making and Publishing Cost-Effectiveness Estimates
- The Risk of Appearing Overconfident
- The Risk of Obscuring the Research Frontier
- The Risk of Appearing Too “Calculating”
- Benefits of Making and Publishing Cost-Effectiveness Estimates
- Estimating Cost Effectiveness Supports Our Mission
- Estimating Cost Effectiveness Is Useful for Making Direct Comparisons
- Publishing Cost-Effectiveness Estimates Increases Transparency
- Our Current Thinking on the Appropriate Use of Cost-Effectiveness Estimates
How We Calculate Cost-Effectiveness Estimates
Identifying Costs and Benefits
The first step in estimating the cost effectiveness of a program is to list the relevant costs and benefits of that program. Suppose we want to estimate the cost effectiveness of campaigning for a school to adopt “Meatless Mondays” for one year. Suppose, for simplicity, that the school serves 100 lunches each day, and previously they all contained meat.
The primary costs of the Meatless Monday campaign are:
- Money invested in the campaign
- Staff time invested in the campaign
The primary benefits of the Meatless Monday campaign are:
- Meat-containing meals replaced by meatless meals
Assigning Quantitative Values to Costs and Benefits
The next step is to estimate the quantitative values of the costs and benefits of the program. We estimate the costs in terms of dollars spent. We estimate the benefits in terms of (i) lives spared, and (ii) years of suffering averted. In this illustration, we will focus on lives spared. We’ve recently shifted from expressing our estimates as point values to expressing them as “subjective confidence intervals,” or ranges of values.1 Our use of ranges reflects the uncertainty of our estimates; we use larger ranges for estimates about which we are more uncertain.
Estimating the quantitative value of the costs of a program is usually fairly straightforward. We often simply ask charities what portion of their budget (including staff salaries and “overhead” costs) is allocated to each of their programs. We might learn, for example, that the Meatless Monday campaign costs $1,000 plus 50 hours of staff time at $10 per hour, which would be a total of $1,500.
Estimating the quantitative value of the benefits of a program is usually more complicated. It’s not always clear how many animals are helped by a particular outcome such as switching to meatless meals. We consider all of the relevant evidence we can find in order to make a well-informed estimate. We look for data and statistics collected by reliable sources, opinions of experts whom we trust, and research conducted by academics or animal advocates.
Unfortunately, the available evidence is often inconclusive and we must exercise our judgment in order to estimate the quantitative value of the benefits of a program. In these cases, we frequently ask each member of the research team to evaluate the evidence independently and make an estimate. Then, we compare our estimates and discuss them until we arrive at an estimated value, or range of values, that each researcher feels comfortable with. The amount of agreement or disagreement among our team influences the size of the range we use to express our estimates. Greater agreement is expressed by a smaller range and greater disagreement is expressed by a larger range.
In the case of the Meatless Monday program described above, we might rely on some of our previous research, like our estimate of the impact of a vegetarian year that we originally calculated for our online ads report. This research is informed by data about the U.S. population and statistics about animal agriculture collected by the USDA. It’s also informed by the calculations of Harish Sethu at Counting Animals, which are available online. In addition, we draw lessons from economics research to determine how much the elasticity factor for animal products reduces the effects that individual dietary choices have on animals.
We estimate that switching from a typical diet to a vegetarian diet for one year spares approximately 30 land animals. We treat one meatless meal as the equivalent of between ⅓ and ½ of a vegetarian day.2 This leads us to conclude that switching to a single meatless meal spares between 0.027 and 0.041 animals. The program described above would result in 100 meat-containing meals being replaced by meatless meals for 36 weeks in the school year, for a total of 3,600 meals.
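The per-meal figures above follow from simple arithmetic, which can be sanity-checked in a few lines. This sketch uses only the estimates quoted in this section:

```python
# ACE's estimate (quoted above): one vegetarian year spares ~30 land animals
animals_per_veg_year = 30
animals_per_day = animals_per_veg_year / 365

# One meatless meal is treated as 1/3 to 1/2 of a vegetarian day
per_meal_low = animals_per_day * (1 / 3)
per_meal_high = animals_per_day * (1 / 2)

# Total meals affected: 100 lunches x 36 school weeks (one Monday per week)
meals = 100 * 36

print(round(per_meal_low, 3), round(per_meal_high, 3))  # 0.027 0.041
print(meals)  # 3600
```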
Sometimes we apply discount factors to certain values. For instance, if we know that the school from which we have data serves about twice as many lunches as average schools, and if we want to model the cost effectiveness of an average school, we might apply a discount factor of 0.5 to the number of meals affected per year. Thus, the model described above could come from an analysis of a school that we think is of average size and that serves around 100 lunches daily, or from an analysis of a school that we think is twice as large as average and that serves around 200 lunches daily. If we’ve applied a discount factor to data, it will be described in the model.
Creating a Cost-Effectiveness Model with Guesstimate
In 2016, we began using an online program called Guesstimate to create cost-effectiveness models. Guesstimate allows us to enter our quantitative estimates of the costs and benefits of a program as ranges, rather than point estimates.3 Then, Guesstimate uses Monte Carlo simulations to create a probability distribution for the outcome. We can set up functions within Guesstimate that represent the benefits of the program divided by the costs.
Once we enter our values, functions, and any applicable discount factors, Guesstimate produces a “subjective confidence interval” for the cost effectiveness of the program.
In this example, replacing 3,600 meat-containing meals with meatless meals spares an estimated 97 to 150 animals. If the program cost $1,500, it would spare 0.065 to 0.099 animals per dollar. We could do a similar calculation in terms of years of suffering averted.
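The Monte Carlo step can be mimicked in a few lines of Python. The sketch below is not Guesstimate’s actual implementation; it assumes a uniform distribution over the per-meal interval (Guesstimate supports several distributions and uses its own interval conventions), so its output will differ somewhat from the figures above:

```python
import random

random.seed(0)

N = 100_000
meals = 3600   # meals replaced over the school year
cost = 1500    # total dollars (budget plus staff time), from above

# Monte Carlo: sample animals spared per meal, propagate to animals per dollar
samples = sorted(
    random.uniform(0.027, 0.041) * meals / cost for _ in range(N)
)

# Report a 90% interval (5th to 95th percentile) on animals spared per dollar
low, high = samples[int(0.05 * N)], samples[int(0.95 * N)]
print(f"{low:.3f} to {high:.3f} animals spared per dollar")
```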
The cost-effectiveness model above is intended to illustrate our process, but it is much simpler than most of our cost-effectiveness models. Often, estimating the cost effectiveness of a particular program requires taking many different costs and benefits into account—which can introduce challenges and uncertainties. For an example of how we calculate the cost effectiveness of a more complicated program, you can read about how we evaluate social media impact. For examples of how we calculate the cost effectiveness of charities, which are often engaged in many different programs, see our 2016 cost-effectiveness models for Mercy For Animals or The Humane League.
How We Use Cost-Effectiveness Estimates
We sometimes develop CEEs for interventions and use them to identify promising ways to help animals. For example, our intervention report for online ads included a section on the estimated effectiveness of ads programs in terms of lives spared and years of suffering averted per dollar.
We do not rely solely on CEEs to identify promising interventions. We also consider other factors—such as the indirect or longer-term outcomes of the intervention. Additionally, we consider the strength of the evidence that informs our CEEs. We might be more likely to recommend an intervention if we have very strong evidence that it spares 5–6 lives per dollar rather than if we have very weak evidence that it spares 6–10 lives per dollar.
We develop CEEs for most of the charities we evaluate.4 These CEEs are one of seven criteria that we consider each year when we choose our Top and Standout Charities. We do not have a formal procedure for weighing a charity’s estimated cost effectiveness against its performance on our other criteria. Rather, we evaluate each charity on all seven criteria, each of our team members individually considers which charities to recommend based on those criteria, and then we discuss our reasoning until we all agree on a set of recommendations. (Read more about our evaluation process.)
A charity’s CEE may factor into our recommendation decisions, especially if it is extremely high or extremely low relative to the CEEs of other comparable charities. Suppose there are two charities that engage in similar programs. The first might have an estimated cost effectiveness of 8–10 animals spared per dollar while the second might have an estimated cost effectiveness of −2 to 2 animals spared per dollar. The CEE of the first charity is roughly in line with the CEEs of our other recommended charities, but the CEE of the second charity is lower. We would consider these estimates to provide a strong (though not necessarily decisive) reason against recommending the second charity.5
In many cases, however, our CEEs are not a major factor in our recommendation decisions. Suppose we are comparing three charities. The first might have an estimated cost effectiveness of 4–7 animals spared per dollar, the second 6–9 animals spared per dollar, and the third 2–12 animals spared per dollar. In this case, we would not consider our CEEs to be a strong reason to favor any of these charities over the others. The 90% subjective confidence intervals for the three charities overlap substantially, so we could not estimate with 90% confidence which of the three is most cost effective. We would rely more heavily on our other six evaluation criteria to decide which of these three charities to recommend.
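The interval comparison above can be made concrete. This sketch, using the hypothetical figures from the three-charity example, computes how much each pair of 90% intervals overlaps (any positive overlap means the intervals alone cannot separate the two charities at 90% confidence):

```python
from itertools import combinations

# Hypothetical 90% subjective confidence intervals from the example above
intervals = {"first": (4, 7), "second": (6, 9), "third": (2, 12)}

def overlap(a, b):
    """Length of the overlap between two closed intervals; 0 if disjoint."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

for (name_a, iv_a), (name_b, iv_b) in combinations(intervals.items(), 2):
    print(f"{name_a} vs {name_b}: overlap {overlap(iv_a, iv_b)}")
```

Every pair overlaps (by 1, 3, and 3 units respectively), which is why the CEEs alone would not drive the recommendation in this example.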
Challenges and Uncertainties Associated with Cost-Effectiveness Estimates
Our Cost-Effectiveness Estimates Are Approximations
Our cost-effectiveness models are approximations of the mechanisms that underlie a program’s actual cost effectiveness for at least three reasons.
First, we are never confident that we’ve identified every cost and benefit of every program. Consider the model of the Meatless Monday program presented above. It accounts for what we see as the primary costs and benefits: the number of meals affected and the money and staff time invested. However, there could be any number of additional costs and benefits that the model does not account for. Perhaps the program will inspire some students to eat more vegetarian meals at home or to go completely vegetarian. Perhaps it will backfire and lead students to eat extra meat after school. We may never be aware of all the costs and benefits of a given program.
Second, an outcome might have occurred even without the relevant program, or outside factors may have contributed to it. This difficulty arises often when we evaluate corporate outreach programs. Animal charities often work together to lobby corporations, and it is not always clear what portion of the results should be attributed to each individual charity.
Third, it can be difficult to determine which types of costs and benefits to include in a way that is both consistent and fair to all charities. Suppose we are modeling the cost effectiveness of an event that is 20% funded by donations and 80% funded by revenue from ticket sales. Since we are primarily interested in the cost effectiveness of donor dollars, it may seem that we should only include funding from donors as a “cost.” However, excluding revenue from the costs of revenue-generating programs is not something we are able to do consistently. One reason is that we often don’t know exactly how much revenue came from each of a charity’s programs. Another reason is that many revenue-generating programs are conferences, where the revenue comes from other animal advocacy charities paying to sponsor or attend the conference. When the source of revenue comes from within the animal movement, it seems misleading for us not to count revenue as an expenditure. Since we are unable to consistently exclude revenue as a cost, we consistently include it. In some cases, including revenue may lead us to underestimate the cost effectiveness of a donor dollar.
Assigning Quantitative Values Fails to Resolve Uncertainty
It’s not always clear how many animals are spared or how many years of suffering are averted by different interventions. While we feel relatively certain about some of our estimates—such as the number of farmed animals affected by dietary choices—we are much less certain about our quantitative estimates of the benefits of other programs.
To model the cost effectiveness of a leafleting program, for instance, we would want to account for the benefits of creating new vegetarians and vegans. However, there is little evidence about the rate at which distributing leaflets causes an individual to reduce their meat consumption. We coordinated a pilot study of the effects of leafleting in 2013, and several other animal charities have attempted to study the effects of leafleting—but the state of the evidence is still extremely weak. Therefore, we cannot make a very precise estimate of how much dietary change is caused by leafleting.
There are a few reasons why the conclusions we draw from animal advocacy research are often highly uncertain:
- There are still relatively few studies investigating the impact of any given intervention.
- Animal advocacy research is often underfunded, which may lead researchers to choose small sample sizes, resulting in studies that lack the necessary statistical power to detect the effects in which we are interested.
- Perhaps because of funding limitations (and perhaps because of lack of expertise), animal advocacy researchers sometimes choose not to use control groups—which may limit the causal conclusions that can be drawn from their research.
- Animal advocacy researchers are often invested in particular outcomes. For example, they may desire to find that particular interventions are effective. As a result, animal advocacy research may be subject to various sorts of bias.
- Animal advocacy researchers are often unable to directly measure the outcomes in which they are most interested, such as changes in participants’ behavior.
- Research on the effects of animal advocacy interventions often relies on data that is self-reported by participants. Self-reported data is subject to social desirability bias and other sources of error.6
We do our best to use the available evidence to assign ranges of values that are 90% likely to capture the value of each cost and benefit of an intervention. Our research staff regularly practices calibration exercises to improve our 90% subjective confidence intervals. Still, we want to stress that our CEEs remain highly uncertain.7
Of course, the cost effectiveness of an intervention or charity would be uncertain whether or not we created a quantitative model. Modeling cost effectiveness does not introduce uncertainty, but it often fails to resolve it.
Our Models Are Subject to Bias and Error
Our cost-effectiveness models are subject to bias, especially in favor of interventions or charities that have easy-to-measure benefits and/or difficult-to-measure costs. When we create a model, we try to account for all of the most important costs and benefits of a program, but we are more likely to account for an outcome if we know how to account for it. Recall that in the case of the Meatless Monday program illustrated above, we easily accounted for the number of meals influenced and the time and money invested, but we did not account for the number of students who may become vegetarian as a result of the program. One reason we might not account for such an outcome is that it would be difficult to measure. We recognize that our models are systematically biased. If an intervention has difficult-to-measure outcomes that are mostly positive, our estimate of its cost effectiveness might be too low. If an intervention has difficult-to-measure outcomes that are mostly negative, our estimate of its cost effectiveness might be too high.
When we assign quantitative values to the outcomes of a program, we are subject to other sources of bias and error. We may, for example, systematically overestimate the benefits of animal advocacy interventions. Everyone on the ACE team wants to find interventions that are effective, and we could be subject to something we might call a “wishful thinking bias.” Our process for developing an estimate almost always includes discussions about whether we might be biased in various ways, and each team member makes a conscious effort to address possible sources of bias in their own thinking and in their colleagues’ thinking. Still, we emphasize that our cost-effectiveness models are subject to bias—as well as other sources of human error.
Our evaluations would be susceptible to bias and error regardless of whether we made and published CEEs. We have no reason to think that quantitative reasoning is more likely to be biased than qualitative reasoning, though the influence of bias may be most apparent in our quantitative estimates. We find this to be one consideration in favor of making and publishing CEEs. Assigning numbers to outcomes may allow us (and our audience) to recognize and address the effects of bias more easily.
There Are Other Possible Approaches to Making CEEs
Estimating Marginal Cost Effectiveness
When we identify a charity’s net costs and benefits from the past year and divide the benefits by the costs, we are estimating average cost effectiveness from the past year (i.e., the cost effectiveness of the average dollar donated to each charity). Since our goal is to help donors choose where to allocate their funds, we are primarily interested in marginal cost effectiveness (i.e., the cost effectiveness of an additional dollar sent to each charity).
Most often, we think a charity’s average cost effectiveness from the past year provides a suitable approximation of its marginal cost effectiveness. However, in some cases, a charity’s average cost effectiveness may be quite different from its marginal cost effectiveness. For instance, a charity that invests in new programs and staff trainings in one year might have relatively low average cost effectiveness for that year but relatively high marginal cost effectiveness. We don’t currently attempt to model marginal cost effectiveness separately from average cost effectiveness.
Discounting Estimates with High Uncertainty
We currently make few formal distinctions between estimates on the basis of their varying levels of uncertainty. One distinction that we do make is that we avoid including in our cost-effectiveness estimates any very long-term or indirect outcomes, in part because of our high uncertainty about how to appropriately estimate long-term costs and benefits. We don’t make or publish cost-effectiveness estimates for charities whose impacts we think are mostly long-term or indirect, because we don’t think they would be comparable to our other estimates.
This isn’t the only way we could handle varying levels of uncertainty among our estimates. We could concentrate only on expected value, and not make any distinctions between estimates on the basis of uncertainty—in which case we would probably publish cost-effectiveness estimates for some programs and outcomes we currently consider too uncertain to approach in this way. This might make sense because we are not only interested in short-term or direct effects, so in principle we should also include long-term and indirect effects in our calculations. However, if we have little information about these effects, trying to include them could mean doing a lot of work for little improvement in the accuracy of our estimates.
On the other hand, we could add a step to our modeling process where we consider our level of certainty regarding the various inputs to our models and our confidence in the modeling process itself, and then discount the outputs of models where we identify higher uncertainty relative to those where we identify lower uncertainty. This could make sense because cost-effectiveness estimates often become less optimistic when more information is considered—and we might be able to identify which of our estimates are most likely to be revised downwards later as we learn more. However, it could also make it harder for readers to understand our estimates, by adding an additional step which would rely especially heavily upon our subjective viewpoint.
Using Bayesian Formalisms
Possibly the most accurate way to adjust for uncertainty and other factors that may affect the reliability of our estimates would be to use Bayesian techniques, treating our explicit cost-effectiveness estimates as updates to a prior distribution of effectiveness based on the effectiveness we would expect for other interventions or charities in the same reference class. This would be technically complicated and would require additional work, since we would need to develop the priors. However, this method is theoretically well suited to the uses we want to make of our cost-effectiveness estimates, as it allows us to choose among options that differ both in estimated expected value and in the underlying sources and amounts of uncertainty in those estimates.
The underlying reason to treat cost-effectiveness estimates as updates to a prior distribution, rather than accepting them at face value, is that they are noisy and uncertain. A very high estimate may arise from a high actual value, from a high degree of error in a positive direction, or from a combination of the two.8 Using Bayesian techniques correctly is a way of formally accounting for the error in an estimate within the calculation: a very high estimate where a high degree of error is expected will be reduced more than a very high estimate where a low degree of error is expected.
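One way to see this effect is a normal-normal conjugate update, the standard textbook model of this kind of shrinkage. This is only an illustration with hypothetical numbers; it is not a method ACE currently uses:

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Precision-weighted average of a normal prior and a noisy normal estimate."""
    w_prior, w_est = 1 / prior_var, 1 / estimate_var
    return (prior_mean * w_prior + estimate * w_est) / (w_prior + w_est)

# Hypothetical prior: similar charities spare ~5 animals per dollar (variance 4).
# Two CEEs both come in at 20, but with different amounts of expected error.
precise = posterior_mean(5, 4, 20, estimate_var=1)   # low expected error
noisy = posterior_mean(5, 4, 20, estimate_var=25)    # high expected error

print(round(precise, 1), round(noisy, 1))  # 17.0 7.1
```

The noisier estimate is pulled much further back toward the reference-class prior of 5, formalizing the intuition that a surprisingly high estimate with high expected error deserves less weight.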
An alternative approach to this problem—and the one that we currently use—is to treat cost-effectiveness estimates as only one input into our decisions, and to be aware that if a cost-effectiveness estimate is much higher than we would expect based on the other things we know about an intervention or charity, that may be due to an error in our estimate rather than to truly exceptional cost effectiveness. When we encounter this situation, we look back at the estimate to try to understand why it deviates from our expectation, and whether that deviation is due to a correctable error, an activity that really is exceptionally high impact, or nothing we can identify (in which case we may remain especially cautious in using the estimate).
For example, during our 2016 evaluation process, we noticed that our cost-effectiveness estimate for The Humane League, and especially their corporate outreach program, was much higher than we expected (and higher than those for similar charities and programs). We looked for the cause of this issue and found that it was due to one specific policy commitment which we thought really did have unusually large effects. We noted in the review that the estimate was unusually high, that this was due primarily to the effects of a single policy commitment, and that we felt this might not be repeated in future years. In our decisions, we balanced our enthusiasm for the real commitment that led to the high cost-effectiveness estimate with our uncertainty that such a single event would be repeated in future years.
Risks of Making and Publishing Cost-Effectiveness Estimates
The Risk of Appearing Overconfident
We worry that publishing our CEEs seems to imply that we place more weight on them than we actually do. We have consistently struggled to communicate to our audience the extent to which our CEEs are approximations, highly uncertain, and bias-prone. Indeed, we’ve heard from some readers who accept our CEEs with a higher degree of confidence than we believe they should. We do not advise readers to make decisions about which charities to support based solely on our CEEs. (For that matter, we do not advise our readers to make decisions based solely on any one of our criteria.) We hope that this page will serve to clarify the appropriate use and interpretation of our CEEs.
The Risk of Obscuring the Research Frontier
We also worry that publishing cost-effectiveness models for interventions makes it appear as though the evidence regarding the effectiveness of those interventions is stronger than it actually is. For instance, publishing a CEE for a leafleting program may seem to imply that there has been enough research on the effects of leafleting on diet change. In fact, there is very limited evidence available about the effects of leafleting. We make our best estimate using the evidence that’s available, but we hope that research in that area will continue.
The Risk of Appearing Too “Calculating”
Some animal advocates may feel that it’s inappropriate to assign quantitative values to outcomes that affect animals’ lives. They may worry that focusing too much on numbers will distract us from what is really important: concern for each individual animal. It’s also possible that our focus on numbers could distract our readers from concern for each individual animal, which might result in fewer donations to our recommended charities. Some evidence suggests that people will spend greater resources to help individual victims rather than “statistical” victims.
We don’t feel that our focus on helping as many animals as possible prevents us from feeling moral concern for each individual. It may be that publishing our CEEs is not the most effective way to solicit donations to our recommended charities, but when we need to choose between communicating in a fundraising mode and communicating in a transparency mode, we try to choose transparency. We have written more about our numbers-oriented approach on our page about the philosophical foundation of our work.
Benefits of Making and Publishing Cost-Effectiveness Estimates
Estimating Cost Effectiveness Supports Our Mission
Because we are committed to the principles of effective altruism, one of our primary goals is to identify the most effective ways to help animals, given limited resources. We consider all seven of our evaluation criteria to be indicators of cost effectiveness. If we were able to model charities’ actual cost effectiveness with very high confidence, we would make our recommendations based heavily on our CEEs. The most cost-effective charities are, after all, the ones that allow donors to have the greatest positive impact with their donations. Even given the risks and uncertainties described above, directly estimating cost effectiveness is one of the best ways we know of to identify highly cost-effective programs.
Estimating Cost Effectiveness Is Useful for Making Direct Comparisons
Cost-effectiveness estimates are sometimes useful for comparing different charities or interventions to one another. We develop CEEs using consistent methodology and data so that our CEEs for similar charities are meaningfully comparable. Though there are many sources of error that might influence our estimates of the effects of a given charity or intervention, some sources of error may be unlikely to influence our CEEs of charities relative to one another.
Suppose, for example, that Charities A and B both spend 100% of their funds on leafleting, but we estimate that Charity A spares 1–3 animals per dollar and Charity B spares 8–10 animals per dollar. Our estimate of the effectiveness of distributing leaflets might be too high or too low, but either way Charity B would appear more cost effective than Charity A; Charity B might simply be distributing more leaflets at the same cost.
It’s also possible that, in some cases, our use of CEEs skews our comparison of charities to one another. We are not able to make CEEs for every charity we evaluate. We do not attempt to estimate the cost effectiveness of charities that have mostly long-term or indirect outcomes. It’s not always clear how we should think about the effectiveness of these charities relative to the effectiveness of charities for which we’ve made CEEs.
Publishing Cost-Effectiveness Estimates Increases Transparency
We find that, in some ways, the quantitative components of our evaluations are easier for our readers to interpret than the qualitative components. Assigning numbers to uncertain values allows us to be clear about the effects we expect an intervention to have. It allows our readers to identify specific points on which they may disagree. If our evaluations were entirely qualitative in nature, it might be harder for people who disagree with us about the effectiveness of a program to pinpoint the source of their disagreement, since our qualitative statements are more open to interpretation than our quantitative ones.
Our Current Thinking on the Appropriate Use of Cost-Effectiveness Estimates
We want to be very clear: our cost-effectiveness estimates are approximations, they involve uncertain quantitative estimates, and they are subject to bias and error. They are not the only factor we consider when we evaluate interventions and charities—so our readers should interpret them carefully.
We do not advise our readers to accept our CEEs unquestioningly. As we’ve described, we exercise our judgment in many areas of our modeling, and our judgment is not infallible. In fact, our judgment is not necessarily better than the judgment of our readers. We do have some advantages: we have multiple team members who are able to work together, and most of us think about effective animal advocacy full-time. We believe there is value in publishing the results of our estimates so that others can use them as they see fit. Otherwise, any of our readers who are interested in comparing the cost effectiveness of different charities might have to develop a model on their own from scratch.
We do our best to communicate an appropriate level of confidence in our CEEs. In 2016, we began presenting our CEEs as ranges rather than point estimates, which we think better expresses our level of uncertainty. This page should also serve to clarify the tentative nature of our estimates. We hope that our readers will adjust our cost-effectiveness models however they feel is appropriate, and that the results will help inform their decisions.
A subjective confidence interval is a range of values that communicates judgments about an unknown quantity. We construct a 90% subjective confidence interval such that we feel 90% confident that the true cost effectiveness value falls within the range given.
We think most of the meals we’re accounting for are lunches, in this case. Whether a meatless lunch is closer to ⅓ or ½ of a vegetarian day depends on whether breakfast is counted as a meal at which one would normally eat meat.
We can select whether, if we sampled many different Meatless Monday programs, we would expect the outcomes (e.g., the numbers of animals spared) to be distributed normally, lognormally, uniformly, or according to a beta distribution. We almost always have uncertainty regarding which distribution is appropriate. Ideally our cost-effectiveness models would reflect this uncertainty—but we have not yet found a way to fully address this problem.
We would, of course, work to understand why the second charity appears less cost-effective than the first. The low CEE of the second charity may weigh more or less heavily in our decision depending on the reason.
“An important drawback of surveys is that they rely on self-report measures (i.e., measures that rely on respondents to report their own attitudes or behaviors). Self-report measures are subject to a number of limitations, including memory and social desirability biases, that may affect the accuracy of the results.” –ACE’s Survey Guidelines Project.
It’s worth noting that we are not the only organization that uses estimates to inform our work, despite the fact that they carry uncertainty. For example:
- GiveWell explains that their CEEs involve a number of uncertain inputs, including “subjective judgment calls” and “educated guesses.” Often, they rely on “poor-quality data” because relevant high-quality data is not available. Like ACE, GiveWell uses CEEs to compare effectiveness across different organizations. Also like ACE, they recognize the limitations of their CEEs and evaluate groups on other criteria as well—including track records.
- The Congressional Budget Office (CBO) provides Congress and the public with cost estimates of proposed legislation. They explain that their estimates are uncertain, in part because they only predict the effects of legislation under the current law; they do not attempt to account for possible changes in the law. Nevertheless, the CBO continues to publish their estimates and “endeavors to communicate to Congress the uncertainty of [those] estimates” (see their Processes page).
- The World Health Organization (WHO) uses cost-effectiveness analyses to guide health policy decisions. They explain that their estimates are subject to uncertainty for several reasons. For instance, their models include value judgments around which there is no consensus. There is also uncertainty regarding the appropriate functional form of their models, just as we sometimes have uncertainty about which distributions to use in Guesstimate. The WHO also notes uncertainty about the generalizability of their models. They recommend several methods of dealing with uncertainty and recommend using stochastic league tables to communicate uncertainty to policymakers.