Table of Contents
- Assigning Numbers to What We Believe
- Subjective Confidence Intervals
- Why Do We Use Subjective Confidence Intervals?
- Probability Distributions
- More Complex Examples
- Further Reading
In our work to identify the most effective ways to help animals, we use subjective confidence intervals (SCIs). We’ve received a lot of questions about SCIs from our community, so we decided to explain what our SCIs are and why we use them. We also provide some simple and detailed examples, and clarify possible points of confusion.
Assigning Numbers to What We Believe
Before explaining how we construct our subjective confidence intervals, it is important to outline why they are “subjective.” For many estimates that we make at ACE, there is not enough data to fully understand the situation. We often make estimates that may include some objective data, but they also rely on subjective assessments; this is what the “subjective” in “subjective confidence intervals” refers to.1
When making a subjective estimate of a probability, our knowledge and understanding of a topic inform our degree of belief about it. To use that degree of belief as a probability estimate, we need to assign it a numerical value. So how do we go about doing this with subjective degrees of belief?
Let’s consider the example of trying to predict whether it will snow on the weekend. To do this, we could use any knowledge we have of local weather systems, as well as the current conditions (how cloudy is the sky? how cold does it feel? etc.). The more confident we are that snowfall will take place, the higher our subjective estimate of the probability will be. But what value should we give this subjective probability? As the answer is not obvious, it will require some thought.
A useful place to start is a quick best guess based on our ‘gut feeling’ about the topic. Suppose we think it will probably not snow this weekend, but we aren’t sure. We can use subjective probability to describe just how likely or unlikely we think snowfall is; for this example, let’s say a 30% chance of snowfall seems reasonable. To improve this estimate further, we want to make some sort of assessment of how much knowledge we have on the topic. Asking ourselves specific questions that relate our knowledge to the topic can help. For example:
- Do we have any specific training in meteorology?
- How can we use that to improve our estimate?
- How long have we lived in this geographic location?
- How often have we seen it snow?
- Do we often follow local weather reports?
- Have they predicted any snowfall recently?
- How close is it to the weekend?
- The closer to the weekend we are, the easier it is to make an accurate prediction, as the weather will be less likely to change.
Answering questions such as these allows us to explore the topic more specifically than relying on a gut feeling alone. We can then update our probability. Perhaps, for example, because we have watched local weather reports and know they predict only a small chance of snow, we adjust our probability down to 20%.2
The examples so far cover how we would make an estimate for an event that either will or will not happen. But what about something more continuous, which can take on many different values? If, for example, it did start snowing on the weekend, how would we estimate how deep the snow will be when it stops? Instead of trying to guess a specific depth, we can give a range of values that we think the depth is likely to fall within. For example, if it has already begun to snow and is pretty cold out, we might think it is 90% likely that there will be 0.5–6 inches of snow over the weekend. This is the basis of a subjective confidence interval.
Subjective Confidence Intervals
An SCI is a range of values that we expect—with a certain degree of belief—some value to fall within. An SCI has an upper and lower limit, as well as a level of confidence that the unknown value will sit somewhere within the limits. We express our confidence level as a percentage.
We generally use 90% SCIs, which we construct such that:
- In 90% of cases, we expect the unknown quantity to be within the interval.
- In 5% of cases, we expect the unknown quantity to be below the interval.
- In 5% of cases, we expect the unknown quantity to be above the interval.
We do this because it is impossible to be 100% certain about most things we are trying to estimate. Before looking at an example of an SCI, let’s consider a simple non-subjective case, so we can see how an SCI is constructed:
Example: We have a jar containing 20 marbles numbered from 1–20. A 90% confidence interval for the number of the marble drawn from the jar would be 2–19, inclusive, as shown in Figure 1. In 5% of cases we draw a number lower than 2 (i.e. drawing a 1) and equally in another 5% of cases we draw a number greater than 19 (i.e. drawing a 20). In the remaining 90% of cases we draw a number between 2 and 19, inclusive:
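The jar example can be checked with a short simulation. The sketch below (in Python; the function name and trial count are our own choices for illustration) draws a uniformly random marble many times and confirms that about 90% of draws land inside the 2–19 interval:

```python
import random

def draw_in_interval(trials=100_000, lo=2, hi=19):
    """Simulate drawing a marble numbered 1-20 from the jar `trials` times
    and return the fraction of draws that land inside [lo, hi]."""
    hits = sum(lo <= random.randint(1, 20) <= hi for _ in range(trials))
    return hits / trials

# With 18 of the 20 numbers inside 2-19, the hit rate converges to 0.90.
```

Running `draw_in_interval()` returns a value very close to 0.9, matching the 90% confidence level constructed above.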
As this is a simple example, it is fairly intuitive to work out where the limits of the interval are best set. However, as mentioned, the estimates ACE makes are often much more complicated than this and require a degree of subjectivity. Let’s now consider a more typical example of ACE’s work and explore how we might go about developing a 90% SCI for it.
Example: Suppose we have estimated the number of farmed animals spared by a particular online ad campaign, and we report our estimate in the form of a 90% SCI of 100–200 farmed animals.3 This would mean that we feel 90% confident that the ad campaign spared between 100 and 200 farmed animals. It would also mean that we think there is a 5% chance that the campaign spared fewer than 100 farmed animals and a 5% chance that the campaign spared more than 200 farmed animals.
Typically, research that includes an SCI such as this is led by a single researcher, who is responsible for deciding on the range of values for the SCI. At the early stages of the project, they make an initial estimate. As their knowledge develops with the project, they form new estimates by combining their previous estimate with any new information. They may also request estimates from other members of the research team and factor these into their own estimate. With each update, the SCI will generally become narrower (more precise) and move closer to the true value (more accurate), although sometimes new information can instead increase our uncertainty about a given topic. We do our best to use the available evidence to assign ranges of values that are 90% likely to capture the value of what we are trying to estimate. Our research staff practices calibration exercises to improve our 90% SCIs.
Why Do We Use Subjective Confidence Intervals?
We have previously discussed the advantages that quantitative information can have over qualitative information in some instances. Single-figure (or “point”) estimates and SCIs are both examples of quantitative information. Although SCIs need more explanation than a single-figure estimate, they offer some distinct benefits. As mentioned, there is often not enough data to give a single-figure estimate with any degree of certainty. Additionally, our level of uncertainty can vary greatly depending on the topic.
In an SCI, the width of the interval itself describes how uncertain we are about an unknown quantity. Generally speaking, the more uncertain we are about that unknown quantity, the wider our SCI will be. Conversely, as we become more certain about an unknown quantity our SCI will generally become narrower. Using SCIs thus offers a way to not only provide an estimate, but to quantify our level of uncertainty about that estimate. At ACE, transparency is a core part of our philosophy; quantifying our uncertainty adds an extra dimension of transparency to our estimates that we believe is highly valuable.
In our simple jar example, because there is one marble for each number, there is an equal chance of drawing any particular number. This is often not the case in more complex examples: some numbers in the interval usually have a higher probability than others of being closer to the actual value. We can explore this further by considering a new jar:
Example: Consider a jar that again contains 20 marbles, but this time numbered from 1–7: one 1, two 2s, four 3s, six 4s, four 5s, two 6s, and one 7, as shown in Figure 2. As there are still 20 marbles, a 90% confidence interval takes a similar form, although this time it ranges from 2–6, inclusive. The main difference is that there is no longer an even chance of picking a particular number; there is a much higher chance that the chosen marble is numbered 4 or close to 4, at the center of the range. This example illustrates a concept known as a probability distribution.
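The second jar can also be expressed directly in code. This short Python sketch (the names are ours) records how many marbles bear each number and computes the exact probability of drawing within a given interval, confirming that 2–6 again covers 90%:

```python
# Marble counts in the second jar: number drawn -> how many marbles bear it.
JAR = {1: 1, 2: 2, 3: 4, 4: 6, 5: 4, 6: 2, 7: 1}
TOTAL = sum(JAR.values())  # 20 marbles in all

def interval_probability(lo, hi):
    """Exact probability that a uniformly drawn marble shows a number in [lo, hi]."""
    return sum(count for number, count in JAR.items() if lo <= number <= hi) / TOTAL

print(interval_probability(2, 6))  # 18 of the 20 marbles -> 0.9
```

Note that `interval_probability(4, 4)` alone is 0.3: the center of the range is far more likely than the edges, which is the probability-distribution idea described above.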
Often, we don’t think that the values within our 90% SCI are all equally likely. The confidence we place in any one of these values depends on our subjective probability distribution. One common probability distribution is the normal distribution, which has a similar shape to the second jar example. For SCIs of values that follow a normal distribution, the values closer to the middle of the interval are more likely to occur than those further away. We can display probability distributions using a graph.
Example: Let’s return to our online ad campaign example, in which we estimated that 100–200 animals were spared by the campaign. The following graph shows a possible probability distribution of the number of animals spared, with the 90% SCI included:
The area under the curve represents the probability of a given number of animals being spared. Thus, the blue shaded area between 100 and 200 animals spared is 90% of the total area under the curve, reflecting the 90% SCI. In this case, we estimate that the most likely outcome is 150 animals spared, as that is where the peak of the curve lies. Additionally, if we were more certain of the answer, the interval would be narrower (for example, 125–175), with a correspondingly narrower probability distribution.
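Under a normal-distribution assumption, a 90% interval of 100–200 fully determines the curve: the mean sits at 150, and the standard deviation follows from the fact that the 5th and 95th percentiles lie about 1.645 standard deviations from the mean. A quick sketch using Python's `statistics.NormalDist` (the distribution choice here is an illustrative assumption, not a prescribed method) verifies this:

```python
from statistics import NormalDist

# Assume the animals-spared estimate is normally distributed with a
# symmetric 90% interval of 100-200, so the mean is the midpoint, 150.
mean = 150
z90 = NormalDist().inv_cdf(0.95)   # ~1.645, the z-score of the 95th percentile
sigma = (200 - mean) / z90         # ~30.4

dist = NormalDist(mean, sigma)
print(round(dist.inv_cdf(0.05)), round(dist.inv_cdf(0.95)))  # 100 200
# A narrower interval such as 125-175 would halve sigma (~15.2).
```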
More Complex Examples
While the normal distribution is the most common probability distribution, especially for unknown quantities, there are a variety of other possibilities.4 Consider the following example of a positively skewed Beta distribution shown in Figure 4. This is still a 90% SCI for 100–200 animals spared, as the area under the curve between 100 and 200 is still 90% of the total area. However, now the most likely number of animals spared is lower than before, at around 130.
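A skewed distribution like this can be sketched by sampling. In the illustrative Python snippet below (the specific Beta shape and scaling are our own choices, not taken from the figure), a positively skewed Beta(2, 5) is scaled onto an animals-spared axis; reading the 5th and 95th percentiles off the samples gives a 90% interval whose most likely value sits left of center:

```python
import random

# Illustrative positively skewed distribution: Beta(2, 5) scaled onto a
# made-up animals-spared range. The peak (mode) lies left of the midpoint.
trials = 100_000
samples = sorted(60 + 260 * random.betavariate(2, 5) for _ in range(trials))
lo = samples[int(0.05 * trials)]   # lower limit of the 90% interval
hi = samples[int(0.95 * trials)]   # upper limit
median = samples[trials // 2]
# Positive skew: the upper tail is longer, so (hi - median) > (median - lo).
```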
It is common to want to combine two probability distribution-based SCIs to form a new one. Suppose it is hard to estimate the number of animals spared by an online ad campaign directly, so the researcher instead creates one SCI for the number of people reached and another for the number of animals spared by the average person reached. At ACE, we use an online program called Guesstimate to do this. Guesstimate allows us to enter the ranges of our SCIs and the types of distributions we want; it uses Monte Carlo simulations to create the corresponding probability distributions.5 We can then use the software to combine multiple distributions to form a new SCI, as can be seen in the cost-effectiveness and room for more funding estimates used in our charity reviews.
We can demonstrate this by looking at a possible Guesstimate model for the online ads example:
If we consider two factors that we have estimates for—“Online ad views” and “Animals spared per view”—we can multiply them together to get the total number of animals spared by the online ad campaign. Often, the estimates we make at ACE (particularly in our CEEs) combine many SCIs, and Guesstimate allows us to map out all of these connections to form complex models.
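The Monte Carlo combination Guesstimate performs can be sketched in a few lines of Python. This is a simplified stand-in for the tool, not its actual implementation, and the input ranges below are made-up numbers: each 90% SCI is treated as a normal distribution, the two factors are sampled and multiplied, and the combined 90% SCI is read off the product's 5th and 95th percentiles.

```python
import random

def normal_from_sci(lo, hi, z=1.645):
    """Interpret a 90% SCI as a normal distribution whose 5th and 95th
    percentiles sit at lo and hi (one possible modeling choice)."""
    mean = (lo + hi) / 2
    sigma = (hi - mean) / z
    return lambda: random.gauss(mean, sigma)

def combine_sci(factor_a, factor_b, trials=100_000):
    """Monte Carlo combination: sample each factor, multiply, and read the
    5th and 95th percentiles of the products as the combined 90% SCI."""
    products = sorted(factor_a() * factor_b() for _ in range(trials))
    return products[int(0.05 * trials)], products[int(0.95 * trials)]

# Hypothetical inputs: 10,000-20,000 online ad views, and 0.005-0.015
# animals spared per view. Both ranges are invented for illustration.
views = normal_from_sci(10_000, 20_000)
spared_per_view = normal_from_sci(0.005, 0.015)
lo, hi = combine_sci(views, spared_per_view)  # combined 90% SCI
```

Note that the combined interval is not simply the product of the endpoints: it is unlikely that both factors land at their extremes simultaneously, so the Monte Carlo interval is narrower than a worst-case multiplication would suggest.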
Sometimes, if we have a large amount of uncertainty about the effect of a factor—such as an intervention—our SCIs can go from a negative lower limit to a positive upper limit. If we consider the factor “Animals spared per view,” for example, animals are spared when the people who click on the ad are influenced enough to reduce their consumption of animal products. However, it is possible that it may have a negative influence and create a ‘backlash’ effect where people increase their food intake as a result of this exposure. If we felt that there was a chance of this, then our model may look more like this:
The range for “Animals spared per view” now stretches into the negative, which causes our overall estimate for animals spared to also stretch into the negative. This does not necessarily mean that we expect the actual overall effect to be negative, just that our uncertainty is large enough that it remains a possibility.
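A small variation on the same Monte Carlo sketch shows this propagation. Here (again with invented numbers) the hypothetical “Animals spared per view” SCI runs from -0.005 to 0.015 to allow for a backlash effect, and the resulting combined interval crosses zero:

```python
import random

# Hypothetical backlash scenario: "animals spared per view" has a 90% SCI of
# -0.005 to 0.015, while ad views stay at 10,000-20,000. Each SCI is treated
# as a normal distribution with its 5th/95th percentiles at the limits.
z = 1.645
spared_mean, spared_sigma = 0.005, 0.010 / z
views_mean, views_sigma = 15_000, 5_000 / z

trials = 100_000
products = sorted(random.gauss(spared_mean, spared_sigma)
                  * random.gauss(views_mean, views_sigma)
                  for _ in range(trials))
lo, hi = products[int(0.05 * trials)], products[int(0.95 * trials)]
# lo comes out negative: the combined SCI stretches below zero even though
# most of the distribution, and the best guess, remain positive.
```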
Further Reading
This page is intended to be a brief explanation of our 90% subjective confidence intervals. For further information about confidence intervals, probability distributions, or other topics discussed in this article, readers may like some of the following resources:
Hájek, A. (2011). Interpretations of Probability: 3.3 Subjective probability.
Jain, K., Mukherjee, K., Bearden, N., & Gaba, A. (2013). Unpacking the Future: A Nudge Toward Wider Subjective Confidence Intervals.
Klayman, J., Soll, J., González-Vallejo, C., & Barlas, S. (1999). Overconfidence: It Depends on How, What, and Whom You Ask.
Owen, S. (2015). Common Probability Distributions: The Data Scientist’s Crib Sheet.
Weisstein, E. W. (2017). Confidence Interval.
Wikipedia, The Free Encyclopedia. (2017, November 8). Confidence interval.
Winstein, K. (2017). What’s the difference between a confidence interval and a credible interval?: Response.
Please also feel free to contact us with any questions.
There is another technique based on betting that can be used to test this estimate to see if it needs refining further. Let’s consider a jar that contains 100 marbles, numbered 1–100. We can then consider the following two options:
- (a) Win $100 if it snows on the weekend.
- (b) Win $100 if the marble drawn is numbered 1–20.
Which one should we pick? If, for example, option (b) seems like the clear choice, then we are actually less confident snowfall will occur than the 20% estimate we previously made. We can then reduce the number range and retry. Would we still pick (b) if the range was 1–15? How about 1–10? The idea of this exercise is to adjust the range until both options seem equally likely. The size of the range then gives a better estimate of our subjective probability—if we can’t decide between snow and drawing 1–15 out of 100, then our subjective probability of snowfall is 15%.
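This adjust-until-indifferent procedure is essentially a bisection search, which can be sketched in Python (the function names are ours, and the “bettor” below is a hypothetical stand-in for a person answering the bet question):

```python
def calibrate(prefers_marbles, lo=0.0, hi=100.0, steps=30):
    """Bisect toward the marble range at which the bettor is indifferent.
    prefers_marbles(n) should return True if winning on a 1-to-n draw from
    the 100-marble jar seems more attractive than winning on the event."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if prefers_marbles(mid):
            hi = mid   # event seems less likely than mid% -> shrink range
        else:
            lo = mid   # event seems at least as likely -> grow range
    return (lo + hi) / 2 / 100  # implied subjective probability

# Hypothetical bettor whose indifference point for snowfall is 15%:
print(round(calibrate(lambda n: n > 15), 2))  # -> 0.15
```

In practice, of course, each `prefers_marbles` "call" is a human judgment rather than a function, but the narrowing logic is the same.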
Our consideration of subjective probability distributions marks the most significant difference between our SCIs and conventional frequentist confidence intervals. The frequentist approach regards the unknown quantity as fixed, so it cannot be described by a probability distribution: the probability that it sits within a given range is always either 0 or 1. Our use of probability distributions is more in line with Bayesian credible intervals; however, due to the subjective nature of our intervals, there is little difference in practice, and we feel that probability distributions better describe our process of thinking.
For a quick video introduction to Monte Carlo simulations, see here.