How to improve forecasting ability using findings from meteorology

In April 1950, the United States Department of Commerce published an issue of the Monthly Weather Review containing an article that would forever change the way the accuracy of weather forecasts is measured.

In that issue, Glenn W. Brier published his now-classic paper “Verification of Forecasts Expressed in Terms of Probability,” which articulated a novel formula for measuring the accuracy of weather forecasts.

Brier went on to have a tremendous influence on the way statistical methods are used in meteorology and climatology, and his contributions continue to shape these disciplines.

For investors and business owners, one of the key skills in achieving superior returns is the ability to make good predictions.

Can we use these findings from meteorology to improve our forecasting ability? Can individual investors take advantage of these techniques to make better forecasts and hence better investment decisions? Michael J. Mauboussin and Philip E. Tetlock think they can.

What does science say in relation to forecasting skill? 

One of the greatest experts in the forecasting world is, undoubtedly, Philip E. Tetlock, a professor at the University of Pennsylvania who, in his book “Expert Political Judgment”, analyzes how good we are at forecasting socio-political events.

It turns out we are pretty bad: not far from a dart-throwing chimpanzee in terms of accuracy, Tetlock says.

Is it possible then to drastically improve forecasting ability? Apparently it is. 

Together with various other academics, Tetlock has proposed a model known as the BIN model: B stands for Bias, I for Information and N for Noise.

Those are the three dimensions along which forecasts can be analyzed and improved upon. More on the BIN model later.

So far we have a framework to work on to improve forecasting ability. What else are we missing if we really want to improve business or investment decisions? 

One of the most important factors in getting better at any skill is the speed at which you receive feedback from the environment, because feedback tells you whether you are on the right track.

Learning to ride a bike has a relatively short learning curve because feedback is immediate: we either fall or we ride.

The problem with feedback loops in investment or business decisions is that the result of a decision made today can take three, four or five years to arrive.

The process of becoming a good investor or a good capital allocator is slow and takes years of accumulated experience.

What if the learning process could be sped up to improve the rate at which we receive the feedback? 

At the end of the day, the decision to invest in a company or not, to invest in a new technology or not, has to do with a forecast about the future cash flows of that company or of that investment project.

This leads us to the original question we want to explore: 

Is it possible to increase forecasting ability by increasing the frequency of the feedback loops under the BIN model? Let’s see.

Working with the BIN model to improve forecasting accuracy

As mentioned before, the BIN model improves forecasting ability by focusing on three dimensions:

Bias: a systematic and consistent difference between the predictions of a person or group and the actual outcomes.

Information: how forecasters make use of the available information to estimate the probability of an event occurring.

Noise: a large variation between different forecasts made by the same person for the same type of event, or between several forecasters estimating very different probabilities for the same event.

Using a dart-throwing metaphor, it is really easy to see the difference between bias and noise:

Noise vs Bias 

Reducing cognitive biases

As I said before, biases are systematic errors in predictions. To reduce biases and improve forecasts it is necessary to work on the following aspects of a forecast: 

  • Keep a list of the cognitive biases of which we are commonly victims. Wikipedia’s list of cognitive biases is quite complete. 
  • Build a good database of base rates for the metric we are trying to forecast; base rates provide the “outside view” that informs the forecast. 
  • Use signposts with intermediate metrics to know whether the forecast is going in the right direction. A signpost has to be measurable, have a set date and be relevant to the metric you’re trying to predict (e.g. the share price). 
  • Use pre-mortem analysis: imagine three or four outcome scenarios and work out the possible reasons that could have led to each one. 

The case of over- and underconfidence in retail investors

An interesting example of how cognitive biases influence investors’ decision making was observed in India.

A study carried out in that country, where by law all IPOs must allocate 35% of their shares to retail investors, uncovered certain cognitive biases.

When an IPO is oversubscribed, the lots destined for retail investors are raffled off, so some investors get access to the shares on offer and others do not.

This situation creates the ideal conditions for an experiment: a control group and a treatment group, randomly assigned.

What the researchers observed was that investors who were allocated IPO shares that then performed well ended up with an excess of confidence in their ability to make good investments, which led to an increase in their trading in the following months.

On the other hand, investors who received shares that then fell in value ended up trading less in the following months, having received a blow to their confidence as investors.

This study shows, under near-ideal experimental conditions, how cognitive biases influence investors’ decisions.

Improving the use of information

There are three key ways to improve forecasts in terms of the information processing dimension: 

  • Increase the frequency with which forecasts are updated as new information emerges. It basically means revising the forecasts as time goes by and new information becomes available. 
  • Be very clear in deciding which part of the information is noise and which part of the information is signal. 
  • Be very clear about the difference between strength and weight.
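The first item on the list, updating forecasts as new information arrives, can be sketched with Bayes’ rule. The scenario and the probabilities below are purely hypothetical, chosen only to show the mechanics:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of an event after seeing new evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: we start at 50% that a company will beat earnings estimates.
# A strong pre-announcement is 80% likely if it beats and 20% likely if it misses.
posterior = bayes_update(0.50, 0.80, 0.20)
print(f"Updated forecast: {posterior:.2f}")  # prints 0.80
```

Each new piece of evidence can be fed through the same function, so the forecast is revised as often as the information arrives.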

Key difference between strength and weight when analyzing information 

To analyze the concept of strength and weight I am going to use the example of the toss of a coin which, as we all know, can have only two results: heads or tails. 

Imagine a scenario where we flip the coin 10 times and it comes up heads 80% of the time and tails 20% of the time. This result has a lot of strength, since there is a very big difference between the two outcomes: heads came up four times as often as tails.

Situations where the difference between two results is this marked tend to generate overconfidence in investors, who assume that the same result will keep occurring in the future.

However, for a fair coin, a result of this type after 10 flips has a probability of occurrence of less than 5%.

This means that if we repeat the experiment 20 more times (flipping the coin 10 times on each occasion), the result of 8 heads and 2 tails should occur only about once. Investors who assume that this result occurs more frequently are falling into overconfidence.
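The under-5% figure is easy to verify with the binomial distribution; a quick check in Python (3.8+, for `math.comb`):

```python
from math import comb

# Probability of exactly 8 heads in 10 flips of a fair coin:
# C(10, 8) * 0.5**8 * 0.5**2 = 45 / 1024
p_8_heads = comb(10, 8) / 2**10
print(f"P(8 heads in 10 flips) = {p_8_heads:.4f}")  # prints 0.0439
```

Roughly 4.4%, so over 20 repetitions of the 10-flip experiment we expect the 8/2 split about once, as the text says.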

Clearly this is a simplified example, but you could extrapolate this insight to other situations. 

Now let’s see what happens with the phenomenon of weight.

If I flip a coin 1,000 times and get 550 heads and 450 tails, the difference between the two results is relatively small, but it carries a lot of weight.

Assuming the coin is fair, meaning that neither side is favored, the probability of getting exactly that result is approximately 1 in 5,900.

In other words, if we get that result after flipping the coin 1,000 times, it is most likely a skewed coin.
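The 1-in-5,900 figure can be checked the same way, again assuming a fair coin:

```python
from math import comb

# Probability of exactly 550 heads in 1,000 flips of a fair coin.
p_550 = comb(1000, 550) / 2**1000
print(f"P(exactly 550 heads in 1,000 flips) = 1 in {round(1 / p_550):,}")
```

The exact count confirms the order of magnitude: a 550/450 split on a fair coin is roughly a 1-in-5,900 event.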

While that observation (that the coin is skewed) is almost a certainty, investors would show underconfidence; that is, they would not be so sure that the coin is biased.

This is another example of how humans are generally not entirely rational when analyzing probability scenarios.

Reducing noise in forecasts

To reduce noise in predictions some of the actions that we can take are:

  • Combine the predictions of several informed forecasters to reach a consensus. The combination of several informed opinions tends to generate more accurate forecasts than individual predictions. 
  • Use an algorithm for forecasting. By following the same forecasting steps every time, it is possible to reduce the volatility of the forecasts. This is nothing more than a simple checklist with the steps to consider in each case. 
  • Use the Mediating Assessments Protocol (MAP). It involves identifying the dimensions that will most likely influence the result and assessing each one independently and analytically before combining them into a final prediction. 
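The first item, combining several forecasters, can be sketched as a simple average of probabilities; the estimates below are hypothetical:

```python
def consensus_forecast(probabilities):
    """Average several forecasters' probability estimates into one consensus."""
    return sum(probabilities) / len(probabilities)

# Four informed forecasters give their probability for the same event.
estimates = [0.60, 0.70, 0.65, 0.75]
print(f"Consensus forecast: {consensus_forecast(estimates):.3f}")  # prints 0.675
```

A plain mean is the most basic aggregation; weighting forecasters by track record is a common refinement.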

Using the BIN model and accelerating the feedback loops

Scott Young is a recognized author in the world of accelerated learning and education. Young has several feats to his credit: he learned four languages in one year, finished MIT’s computer science curriculum in one year instead of four, and learned to draw professional-quality portraits in just one month.

Ultralearning is Young’s approach to learning projects, one that places a strict focus on directness (practicing the skill in the context where it will actually be used) and on efficient deliberate practice to improve constantly.

The aim of this learning philosophy is to have very short feedback loops to accelerate learning. 

Is it possible to increase forecasting ability by using the BIN model as a framework and increasing the speed at which we receive feedback?

Enter the Brier Score. 

What is the Brier Score?

“If you cannot measure it, you cannot improve it.”

William Thomson, Lord Kelvin

British mathematician and physicist

The Brier Score is a metric used to assess forecasting ability.

Would it then be possible to use such a metric to rate how good an analyst is at forecasting a company’s earnings, operating margin or ROIC? What happens if that score is applied to regular business forecasts?

If so, we have a feedback loop to improve upon.

The Brier Score ranges from 0 to 1, and the lower it is, the better the forecast.

Basically, it is a number that tries to capture both how often the forecasts are correct and with how much conviction each forecast is made.

The score is calculated by adding the squared differences between each forecast and the actual result (counted as “1” if the event occurs and “0” if it does not) and then dividing by the total number of forecasts.

Let’s see some examples.

Brier Score formula and examples

The formula for evaluating forecasts about events that may or may not occur is very simple: 

Brier score = (f - o)^2, where:

f = forecasted probability

o = outcome (1 if the event occurs, 0 if the event does not occur)

If an analyst predicts that there is a 90% probability that company X will close a million-dollar contract with its client Z and it does occur, then the BS is (0.9 - 1)^2 = 0.01 (a very good score).

If an analyst predicts that there is a 90% probability that company X will close a million-dollar contract with its client Z and this does not happen, then the BS is (0.9 - 0)^2 = 0.81 (a very bad score).
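The formula and the two worked examples translate directly into code. A minimal sketch that also averages the score over several forecasts, as the definition above describes (the multi-forecast inputs are hypothetical):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes (1 or 0)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# The two single-forecast examples from the text:
print(round(brier_score([0.9], [1]), 2))  # prints 0.01 (very good)
print(round(brier_score([0.9], [0]), 2))  # prints 0.81 (very bad)

# A running score across several forecasts:
print(round(brier_score([0.9, 0.7, 0.2], [1, 1, 0]), 3))  # prints 0.047
```

Logging each forecast and recomputing the running score is what turns the Brier Score into a feedback loop.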

Calculating the Brier Score for each of the forecasts we make on a regular basis gives us clear feedback on whether our forecasting ability is improving.

Towards a world of superforecasters 

In his book “Superforecasting: The Art and Science of Prediction”, Philip E. Tetlock describes a new tribe of amateur forecasters whose accuracy in predictions about geopolitical events far exceeds that of the forecasts made by the intelligence services of the United States.

Is it possible to use these methods to improve the forecasts made as simple individual investors or business owners? 

Is it possible to generate superior business performance and superior investment results by using the BIN model framework while using the Brier Score to accelerate our feedback loops?

These are all questions that will surely be answered in the future through a growing body of research that is being published on the relationship between forecasting ability and investment and business returns. 

Time will tell.


Zach. (2020, March 4). What is a Brier Score? Retrieved December 11, 2020.

Brier, G. W. (1950, April). Verification of forecasts expressed in terms of probability. Monthly Weather Review, vol. 78. Retrieved December 11, 2020.

Brier score. (n.d.). In Wikipedia. Retrieved December 11, 2020.

Van Gelder, T. (2015, May 18). Brier score composition – a mini-tutorial. Retrieved December 7, 2020.

Housel, M. (2018, March 12). Expectations vs. Forecasts. Retrieved December 7, 2020.

Mauboussin, M. J., & Callahan, D. (2020, March 19). BIN There, Done That. Retrieved December 15, 2020.

Buttonwood: C’mon feel the noise. (2020, December 12). The Economist, 437(9224), p. 71.

Young, S. (2019). Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your Career. New York: HarperCollins.