Blog

An Agile Approach to Dealing with Four Different Types of Assumptions

From an agile perspective, we never let an important assumption live long in an unvalidated or low-confidence state. The risk in doing so is that we end up making decisions or creating work products on top of yet-to-be-validated or low-confidence assumptions, and that work becomes waste if we learn late that an assumption is false. Since it takes time and money to better model or validate assumptions, we want to focus on the most important assumptions first.

The purpose of this article is to provide a brief taxonomy of assumptions and then use that taxonomy to determine which assumptions are important enough to dedicate resources to validate early on. The taxonomy presented is one that I created and have found useful. It is a work in progress, so I expect it will evolve over time.

Assumption Taxonomy

An assumption is a guess or belief that is presumed to be real or certain, even though there is little to no evidence that it is true. For example, “we believe that if we build cool new feature X, then we will double product sales.”

Each assumption has an underlying assertion. In the previous example, the assertion is “if we build cool new feature X, then we will double product sales.” The assumption is our belief that this assertion is true, even though we may have no evidence to back up its truthfulness.

I have found it useful to classify assertions into one of four types: statements, cause-and-effect, predictions, and faith. Based on the type of assertion, we can then decide what course of action to take.

Let’s examine each assertion type.

Statement Assertions

Statements are assertions that on the surface appear to be facts. However, based on what we actually know, statements might be far from validated as fact. For example, “the bank closes at 3pm, so we have enough time to get there to make a deposit.” The bank may indeed close at 3pm. However, if we are just assuming that it will close at 3pm, and we don’t do anything to validate that assumption (like call the bank or look online to confirm the closing time), we could have a consequential problem. What if the bank actually closes at 2:30pm, we arrive too late to make the deposit, and we fail to have sufficient funds to cover the account debit we expect tomorrow?

In this example, the cost of validating the assumption is very low compared to the cost of being wrong, so this would be an important assumption to validate.
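To make that trade-off concrete, here is a minimal expected-cost sketch in Python. The probability and fee figures are invented for illustration; the point is the comparison rule, not the specific numbers.

```python
# Illustrative only: invented numbers for the bank-deposit example.
# Rule: validate when validating costs less than the expected cost
# of acting on a wrong assumption.
validation_cost = 1.00        # assumed: a quick phone call or web check
p_assumption_wrong = 0.10     # assumed chance the bank closes earlier
cost_if_wrong = 35.00         # assumed overdraft fee for a missed deposit

expected_cost_of_not_validating = p_assumption_wrong * cost_if_wrong  # 3.50

if validation_cost < expected_cost_of_not_validating:
    print("Validate the assumption (call the bank or check online).")
else:
    print("Let the assumption stand unvalidated.")
```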

On the other hand, some statements cannot be validated in a reasonable amount of time or with reasonable resources. For example, “we believe that all swans are white.” There is no experiment we can perform that will prove all swans are white. We can collect data over the course of centuries indicating that swans are white because we have only ever seen white swans. However, one contrary example, a black swan found in Australia in 1697, immediately invalidated this assumption.

The consequences of learning that not all swans are white were not earth-shattering. Large companies didn’t go out of business because this assumption proved to be false. Probably the most important outcome of learning that not all swans are white was Nassim Taleb’s creation of the Black Swan theory to describe consequential rare events.

To summarize, statements are assertions that appear to be facts. Some statements can and should be validated when the cost of validation is significantly less than the cost of learning later that the statement is wrong. Other statements can’t be validated at any reasonable time or cost, in which case we let them stand unvalidated, especially if the consequences of being wrong are low. If the consequences of being wrong are high, then we need to do an economically sensible amount of knowledge gathering to increase our confidence in the statement.

Cause-and-Effect Assertions

The example assertion I introduced earlier, “if we build cool new feature X, then we will double product sales,” is an example of a cause-and-effect assertion. Basically, these assertions take the form of “if we do X, then we should get Y.”

Many customer-facing design assumptions are based on cause-and-effect assertions. Do we really know if cool new feature X will double sales, or is it just a guess, a hope, a belief? If we are just making an assumption, then for these types of assertions we should create a testable hypothesis and determine the proper form of experiment or data collection activity (e.g., prototype, proof of concept, experiment, etc.) to perform that will help increase our confidence that the assertion is true. The goal is to cost-effectively confirm quickly or fail fast before we expend considerably more resources to build the wrong feature.
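As one concrete illustration of what such an experiment could look like, here is a hedged Python sketch of a simple A/B test on the feature-X assertion, using a standard two-proportion z-test. All visitor and buyer counts are invented, and this is a generic statistical check, not a method prescribed by this article.

```python
# A sketch of testing a cause-and-effect assertion via an A/B experiment:
# compare purchase conversion with and without "cool new feature X".
# All counts below are invented for illustration.
from math import sqrt, erfc

control_buyers, control_visitors = 120, 2400   # feature off (assumed data)
variant_buyers, variant_visitors = 168, 2400   # feature on  (assumed data)

p1 = control_buyers / control_visitors
p2 = variant_buyers / variant_visitors

# Two-proportion z-test: is the observed lift real or just noise?
p_pool = (control_buyers + variant_buyers) / (control_visitors + variant_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))
z = (p2 - p1) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"conversion: {p1:.1%} -> {p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Evidence the feature moves sales; confidence in the assertion rises.")
else:
    print("No clear effect yet; keep the assumption in a low-confidence state.")
```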

Over the past several years I have created and refined an Assumption Validation Model that I use to help validate (or invalidate) cause-and-effect assertions, as well as certain statement assertions. I will describe this model in an upcoming article.

Prediction Assertions

Prediction assertions are proclamations about possible future events. For example, “it will rain this afternoon.” A quick test for whether you are dealing with a prediction is whether the following statement holds: “If we do nothing but just wait, time will tell.” In this example, if we take no action to “validate” the prediction and just wait until tonight, we will know with 100% certainty whether or not it actually rained during the afternoon.

So, with most prediction assertions, time will tell – meaning there is a future point when the prediction will have proved to be valid or not. That future time might come quickly (like this evening), or it could be some indefinite or unpredictable time in the future.

Here is another prediction, “an earthquake in California will not be strong enough to knock out our data center.” Here we are predicting that some future event won’t happen, maybe because we want to avoid spending the capital to build a redundant data center in Arkansas.

In this example, like the prediction that it will rain this afternoon, there is no experiment we can run that will validate or invalidate our prediction. Time will tell if the prediction is right or wrong.

When dealing with predictions, the goal isn’t to validate or invalidate the prediction, since you mostly can’t. That leaves two other options:

  • Focus on better forecasting with the hope of becoming more comfortable with the probability that something might or might not happen
  • Focus on how best to address the impact if the event actually does or does not happen

Here’s another prediction: “our startup company will achieve 10% year-over-year (YOY) growth.” There is no experiment we can run to validate or invalidate this assumption; only time will tell whether the company actually achieves 10% YOY growth.

So, let’s look at the other two options we can pursue. We could choose to focus our time on forecasting by scrutinizing the information that was used to derive the 10% YOY number. As an angel investor in early-stage startup companies, I do this all the time. Basically, we ask the entrepreneurs to walk us through the details of how they arrived at the 10% number. Based on the body of evidence, we can decide whether, and at what confidence level, we agree with their prediction.

Of course, we can also focus on the impact if the prediction is wrong. Let’s say, for modeling purposes, we assume the company will only achieve 5% YOY growth. Is it still worth investing in this company? To summarize for this prediction, we would want to examine the likelihood that 10% YOY growth can be achieved along with the consequences (impact) if it can’t.
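To illustrate the impact-focused option, here is a minimal Python sketch comparing five years of compounding at the predicted 10% against the 5% downside case. The starting revenue and time horizon are invented for illustration.

```python
# Impact-focused option: what does the startup look like after five years
# under the predicted growth versus a downside case? All figures invented.
starting_revenue = 1_000_000  # assumed starting annual revenue
years = 5

for label, yoy in [("predicted 10% YOY", 0.10), ("downside 5% YOY", 0.05)]:
    revenue = starting_revenue * (1 + yoy) ** years
    print(f"{label}: ${revenue:,.0f} after {years} years")

# predicted 10% YOY: $1,610,510 ; downside 5% YOY: $1,276,282.
# If the investment still clears our bar at the downside number,
# being wrong about the prediction doesn't sink the decision.
```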

As a general rule, people are horrible at estimating probabilities. Last year, the Scrum Alliance Global Scrum Gathering was scheduled for May 11-13, 2020, in New York City. By late February, COVID-19 had become a significant reality in the US. On March 16, 2020, the Scrum Alliance cancelled the New York City conference.

I wasn’t involved in the discussions to cancel, but I can easily imagine part of the discussion was: “what do you think the chances are (the probability) that COVID-19 will still be an issue in May 2020?” I am sure smart people collected whatever reasonable data they could to try to estimate that probability. Almost certainly the estimate was wrong. Not because the Scrum Alliance people did a bad job, but because accurately estimating these types of probabilities is just really hard, and most of the COVID data available in March 2020 was more conjecture than fact.

Rather than worry too much about accurately calculating probabilities, the better approach would be to focus on the impact of COVID-19 still being a critical factor in May, forcing the Scrum Alliance to cancel the conference close to its planned start date. An analysis of the impacts almost certainly informed the Scrum Alliance that canceling late, rather than early, would have a severe economic impact on the organization and its constituents.

This approach of focusing more on the impacts than the probabilities is at the heart of the antifragile approach advocated by Nassim Taleb. Basically, never put yourself in a situation where you have more to lose than you have to gain. The Scrum Alliance had much more to lose by waiting to cancel the conference.
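Here is a hedged sketch of that asymmetry applied to the conference decision, with invented dollar figures. The takeaway is that even large errors in the probability estimate don't flip the decision, because the downside dwarfs the upside.

```python
# Asymmetry check for the wait-versus-cancel-early decision, with invented
# dollar figures. Upside of waiting: the event runs if COVID-19 fades.
# Downside of waiting: much larger losses from a late cancellation.
upside_if_wait_pays_off = 200_000    # assumed gain from holding the event
downside_if_wait_fails = 2_000_000   # assumed cost of canceling late

# We admit we can't estimate the probability well, so sweep a wide range.
for p_covid_still_issue in (0.2, 0.5, 0.8):
    ev_of_waiting = ((1 - p_covid_still_issue) * upside_if_wait_pays_off
                     - p_covid_still_issue * downside_if_wait_fails)
    print(f"p={p_covid_still_issue:.0%}: "
          f"expected value of waiting = ${ev_of_waiting:,.0f}")

# The sign never flips: at any plausible probability, waiting has negative
# expected value, which is why canceling early was the robust choice.
```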

To summarize, predictions are assertions about possible future events. Most predictions cannot be prospectively validated by running an experiment (like we can do with many cause-and-effect and statement assertions). With predictions, time will tell whether the prediction was a good one or not. So, when dealing with predictions, don’t focus on trying to validate them. Instead, focus on improving your forecasting to get a better handle on the likelihood the prediction will come true (e.g., try to better forecast the probability). And, more importantly, focus on the impact if the prediction is wrong and how you will protect yourself from significant downside or, worst of all, getting wiped out.

Faith

The last of the four types of assertions is faith assertions. For example, “I believe that if I do good during my lifetime then I will go to heaven in the afterlife.” Many of us believe this assumption is true and try to live our lives in accordance with it. There is no experiment we can execute to validate that this assumption is true. People either believe the assumption or they don’t. For those who believe it, they don’t need proof, they just take it on faith.

These assertions are important because they can and do govern how companies and people will act. Since we aren’t trying to validate faith assumptions, we just need to decide if we agree with them.

Summary

This blog defined an assumption as a guess or belief that is presumed to be real or certain. Underlying each assumption is an assertion, which can be one of four types: statement, cause-and-effect, prediction, and faith. By classifying our assumptions according to this taxonomy, we can determine what actions to take, as summarized below:

Statement

  • If a statement can be validated in an economically sensible way, then validate it.
  • If a statement cannot be validated in any reasonable time or at reasonable cost, let it stand unvalidated, especially if the consequences of being wrong are low. If the consequences of being wrong are high, then spend sensible resources to gather knowledge and increase your confidence in the statement.

Cause-and-Effect

  • Validate all cause-and-effect assertions that can be validated in an economically sensible way.

Prediction

Predictions usually can’t be prospectively validated, so instead:

  • Focus on better forecasting with the hope of becoming more comfortable with the probability that something might or might not happen.
  • Focus on how best to address the impact if the event actually does or does not happen.

Faith

  • Acknowledge faith assumptions and decide if you agree with them.

If you want to learn more about assumptions and how to apply agile principles to deal with them to generate outstanding business returns for your company, you can attend my half-day, live, instructor-led training class entitled “Agile 101: A Primer for Outstanding Returns.”