Output vs. Outcome—Measuring Business Success with Agile

These days I spend more and more of my time with executives in companies that are doing larger-scale agile adoptions. As trusted stewards of their companies’ resources these executives are rightfully interested in knowing how best to measure the success of their efforts.

Often I am asked for case study examples of how well other companies have fared with their agile initiatives. To supplement data from clients I have worked with, I have searched for publicly available data that illustrates the results of organizations that have adopted an agile mindset, both within and outside of IT and development groups. To be sure, there is public data available. However, much of it is output data rather than outcome data, and output data is not a very effective way to communicate the business success a company might achieve by adopting an agile mindset.

Output measures tell the story of what you produced (e.g., number of widgets created, or number of hamburgers served) or the services you delivered (e.g., number of clients). Output measures do not address the value or impact of your services to either your internal or external stakeholders. Outcome measures, on the other hand, are a more appropriate measure of meaningful results or the value we create. Outcomes better quantify the true performance and assess the success of the processes we put in place and thus are a better way to gauge the business success of your agile adoption.

Think of it this way. Which is more compelling? A company that increased its velocity by 400% after adopting agile or a company that increased its share price by 40% after adopting agile?

To further illustrate the power of outcome measures, I want to explain the difference between the various measures that I might use for my agile training and coaching business, Innolution. Then I’ll discuss how output and outcome measures relate to popular agile success measures.

The Problem with Output Measures

Two obvious output measures for a training company are the number of students trained over a period of time and the number of training days delivered over that same period. For example, let’s say Innolution trained 2,000 students last year while delivering 100 training days. What can you conclude about the training side of my business?

Do these output measures provide any meaningful insight to either internal Innolution stakeholders (e.g., me and my employees) or Innolution external stakeholders (e.g., the companies that purchased the training and the individuals who received it)? Not really. What if I double these output measures next year? Does that trend provide you with useful and actionable information to evaluate the success of my business and the results I delivered to my clients? Again, not really.

I’m not saying that output measures are worthless. If I told you I trained only 25 students over two days last year, I am sure that level of output would lead to a discussion as to why I trained so few people (especially if I trained 2,000 people the year before). Furthermore, since you would expect that I get paid when I train classes, output measures should at least be an indirect measure of my revenue and likely a poorly correlated measure of my profitability (and profitability is an important outcome measure for my company). But the output data alone is not sufficient to tell you anything meaningful about my profitability or any other important business outcome.

For example, what if I wanted to claim that I train more people in a given year than any other agile trainer? I could achieve this goal by giving away my training at little or no cost. In this case my revenue would be quite low or nonexistent, and my profitability would almost certainly be negative (I have real costs when I provide training). So, just measuring output—the number of students trained or the number of training days delivered—is arguably not a very good indicator of a business outcome like profitability. Worse, were I to devote a lot of time to improving these output measures, it may actually interfere with desirable outcomes, such as profitability and customer delight.

Though both profitability and customer delight are important outcomes, arguably the more important one is how well I helped my clients achieve meaningful business results through the agile training and coaching services that I provided. Delighting my customers isn’t just some altruistic goal. If I help my clients succeed, they will ask me back to do more training and coaching. And, as we all know, the easiest client to get is the one that you already have. When clients invite me back to do more work, my customer acquisition cost (CAC) for these clients is effectively $0. So measuring how well I am delighting my customers is an important outcome measure to me.

I can measure customer delight in a number of ways:

  • How much repeat business do I get from the client?
  • What is my Net Promoter Score (NPS) measured with my clients?
  • What do clients tell me, in emails and follow-up calls, about the business success they have achieved by applying agile?

Yes, some of these outcome measures might require more effort to collect and analyze than output data, but they are a much better indicator of the success of my business.
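Of the measures above, the Net Promoter Score is the most mechanical to compute: it is the percentage of promoters (scores of 9–10 on a 0–10 survey) minus the percentage of detractors (scores of 0–6). A minimal sketch, using made-up survey responses:

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 survey responses:
    % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical post-engagement survey responses
responses = [10, 9, 9, 8, 7, 10, 6, 9, 3, 10]
print(net_promoter_score(responses))  # 6 promoters, 2 detractors of 10 -> 40.0
```

Note that NPS compresses a distribution into a single number; the emails and follow-up calls carry the context that the score alone cannot.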

Why Agile Output Measures Fall Short

Now that I have illustrated the value of outcome measures over output measures, let’s discuss why the agile output measures that many companies use to gauge the success of their agile adoptions are less than ideal. Though many agile output measures exist, including throughput and cycle time, the king of them all is velocity, so let’s use it as an example.

Velocity is the rate at which a team completes work. Velocity is easy to measure (e.g., at the end of a sprint, sum the points on the completed stories). And most people intuitively believe that a faster velocity is better. So, if we double velocity, it implies we are getting twice the work done in the same amount of time. And if we get twice the work done in half the time, we have increased velocity by 400%. So are we succeeding?
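The measurement itself really is that simple: sum the points of the stories that met the definition of done, and only those. A sketch with invented story data:

```python
# Each story in the sprint: (name, story points, met the definition of done?)
sprint_backlog = [
    ("login page", 5, True),
    ("password reset", 3, True),
    ("audit log", 8, False),   # not finished, so it does not count
    ("search filter", 2, True),
]

# Velocity: sum the points of completed stories only
velocity = sum(points for _, points, done in sprint_backlog if done)
print(velocity)  # 10
```

The ease of this calculation is exactly why velocity is so popular, and why it is so tempting to mistake it for a measure of success.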

The answer depends on whether the velocity output measure can be correlated to better outcomes (the measures that truly matter). What if a team increases its velocity by 400% and the end result is it produces the same junk that it’s been delivering for years, but now it produces that junk 400% faster? Can we really say that that team has improved or is more successful?

In the pre-agile days, the popular output measures were lines of code and feature or function points completed, none of which was a very good indication of outcomes. It would seem ridiculous today to assume that if our team is writing 400% more code than it was a couple of months ago, any interesting outcome measure has changed in a positive, meaningful way.

Maximizing output could have little correlation with delivering products or services that delight customers. Also, producing more output might actually be in contradiction to a core Agile Manifesto principle: “Simplicity — the art of maximizing the amount of work not done…” From an economics perspective, you might argue that an organization that minimizes output while maximizing outcome (value to customer) is much more favorable.

So, we need to be wary of measuring the success of our agile adoption in how many new features are in the release, or how many story points were completed in a sprint. To be sure, Agile does have a delivery mindset (e.g., finish what you start, have a potentially shippable product increment each sprint). However, the goal is to deliver real customer value every sprint, not just to deliver stuff. Output measures gauge how much stuff we are producing. Outcome measures focus on the value that is delivered.

To truly measure value, we need to illustrate how applying an agile mindset affects key business metrics. And, our goal is to make a real change to these metrics every sprint. If we can correlate that increased output has a positive effect on interesting business measures such as delighting customers and increasing profitability, then increasing output is something we would certainly strive to achieve. But if I tell you that our business goal is to improve customer retention on our website by 10% and you respond by telling me that your team’s velocity is now 2x faster than it was three months ago, I have no idea as to whether or not you have achieved an important business goal for me. For all I know, a 2x increase in velocity means you are adding more features to the product faster, which might actually cause customer retention to decrease due to bloatware.

Measuring Agile Success

In 2007, Mike Cohn and I developed an agile assessment instrument called Comparative Agility (we have since handed off ongoing development to others). We developed Comparative Agility in part to address a question that an executive such as a CIO might ask: “After spending time and money on my agile adoption, how are we doing?”

To answer this question, we defined a number of dimensions along which we measured agility (e.g., teamwork, requirements, planning, technical practices, quality, culture, and knowledge creating). We made it clear to people who took the Comparative Agility assessment instrument (a survey) that the tool assesses how agile you are “doing” by comparing your answers, along the various dimensions, against others who have taken the assessment survey. You could also compare yourself with an earlier version of yourself (say, six months ago) to see if the changes you have made to your agile practices are having a positive effect along the various assessment dimensions.

What the original tool did not tell you was how changes made along these different dimensions of agility affected your business outcomes (the current Comparative Agility tool does ask outcome-related questions). This correlation between moving the needles on the agile gauges and the subsequent movement of the needles on the business outcome gauges is exactly what each company needs to establish to successfully measure its agile adoption.

To achieve this goal, I recommend that companies have a well-established set of business outcome measures that they use to gauge business performance. They also need to establish baseline values for each of these measures prior to beginning the agile adoption. The goal would then be to see how adopting agile principles and improving agile performance along these dimensions has a positive effect on the core business outcome measures.
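One lightweight way to operationalize the baseline-then-compare approach is to capture each outcome measure before the adoption begins and report changes against it later. Every name and number below is invented for illustration:

```python
# Baseline business outcome measures captured before the agile adoption
baseline = {"customer_retention": 0.82, "nps": 18, "profit_margin": 0.11}

# The same measures re-captured some quarters into the adoption
current = {"customer_retention": 0.87, "nps": 31, "profit_margin": 0.12}

# Report each outcome measure's change relative to its baseline
for measure, before in baseline.items():
    after = current[measure]
    pct = 100 * (after - before) / before
    print(f"{measure}: {before} -> {after} ({pct:+.1f}%)")
```

The important discipline is not the arithmetic but the timing: without a pre-adoption baseline, any later claim of improvement has nothing to be measured against.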

At the end of the day, telling me how many people you trained, or how much your velocity has improved, or by what percentage your cycle time has been reduced, doesn’t really give me an indication of how well your agile adoption is benefiting the core business outcome measures. If you can establish a correlation between improving an agile measure and a corresponding business outcome measure, then we have a much more profound understanding of what to measure when adopting an agile mindset and how to subsequently inspect and adapt the implementation.


Most companies that report on their agile adoption communicate results using output measures, not outcome measures. The only real measure of business success comes from making meaningful improvements in outcome measures. To understand the impact of adopting agile, each company should define which business outcome measures matter to it and then establish a correlation between improvements in agility-related measures and improvements in those business outcome measures.