The Definitive Guide to

DORA Metrics

Learn what DORA metrics are and why you need them. Find out how to measure and improve DevOps performance in connection with value stream management.

Introduction

Digital transformation has turned every company into a software company, regardless of industry. Companies are expected to react faster to changing customer needs while still delivering stable services. To meet these competing requirements, DevOps teams need to improve continuously.

DORA metrics address this need by providing objective data to measure the performance of software delivery teams and drive product improvement.

Read on to find out what these metrics are and how they can help you in your value stream management efforts.

 

What are DORA metrics?

DORA metrics are used by DevOps teams to measure their performance and find out where they fall on the spectrum from “low performers” to “elite performers”. The four metrics used are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).

  • Deployment Frequency: Refers to the frequency of successful software releases to production.
  • Lead Time for Changes: Captures the time between a code change commit and its deployable state.
  • Mean Time to Recovery: Measures the time between an interruption due to deployment or system failure and full recovery.
  • Change Failure Rate: Indicates how often a team’s changes or hotfixes lead to failures after the code has been deployed.

DORA stands for the DevOps Research and Assessment team. In a seven-year research program, this group, now part of Google, analyzed DevOps practices and capabilities and identified four key metrics to measure software development and delivery performance.

This has revolutionized the way DevOps teams operate as these metrics create visibility and deliver actual data that can be used as a base for improvements and decision-making.

To establish this useful guide for DevOps, the research group analyzed data from more than 32,000 professionals worldwide. In addition to its official State of DevOps report, it also released an ROI whitepaper on DevOps transformation as well as the book “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”, co-authored by DORA team lead Nicole Forsgren.

In the sections below, you will learn more about the four DORA metrics and why they are so useful in value stream management.


Deployment Frequency

As the name already suggests, Deployment Frequency refers to the frequency of successful software releases to production. In other terms, it measures how often a company deploys code for a particular application.

The metric, which uses the total number of deployments per day as a reference, was developed on the basis of manufacturing concepts that measure and control the batch size of inventory a company delivers.

Naturally, more successful companies tend to make smaller and much more frequent deliveries – or, in the world of DevOps, smaller but more frequent deployments.

In general, one deployment per week is standard, while a high-performing company may release up to seven deployments a day. Of course, the typical number of deployments differs by product.

For example, mobile applications, which require customers to download the latest update, usually see one or two releases per quarter at most, while a SaaS solution can deploy multiple times a day.
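As a minimal sketch, Deployment Frequency can be computed from a log of successful production deployments. The dates below are hypothetical sample data; in practice they would come from your CI/CD tooling:

```python
from datetime import date

# Hypothetical sample data: dates of successful production deployments.
deploy_dates = [
    date(2023, 5, 1), date(2023, 5, 1), date(2023, 5, 3),
    date(2023, 5, 8), date(2023, 5, 9), date(2023, 5, 9),
]

# Deployment Frequency: average deployments per day over the observed window.
days_in_window = (max(deploy_dates) - min(deploy_dates)).days + 1
frequency = len(deploy_dates) / days_in_window
print(f"{frequency:.2f} deployments per day")  # 6 deployments over 9 days -> 0.67
```

A team deploying on demand would see this number at or above 1; the window length should be long enough (e.g. a quarter) to smooth out release spikes.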

Question it answers: How often does your organization deploy code to production or release it to end-users?

  • Elite performers: On-demand (multiple deployments per day)
  • High performers: Between once per day and once per week
  • Medium performers: Between once per week and once per month
  • Low performers: Between once per month and once every six months

Source: 2019 Accelerate State of DevOps, Google

When DevOps teams realize that they fall into a low-performing category, they can introduce more automated processes for testing and validating new code, as well as reduce the time between error recovery and delivery.

Lead Time for Changes

This metric measures the time that passes for committed code to reach production. While Deployment Frequency measures the cadence of new code being released, Lead Time for Changes measures the velocity of software delivery. It is used to get a better understanding of the DevOps team’s cycle time and to find out how an increase in requests is handled. The lower the lead time for changes, the more efficient a DevOps team is in deploying code.

To measure Lead Time for Changes, two pieces of data (or timestamps) are required: the exact time of the commit and the exact time of the deployment – in other words, the start and finish of a change. The average of these times is then used as an indicator of overall performance.
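The two-timestamp calculation can be sketched as follows. The commit/deploy pairs are hypothetical sample data; the median is shown alongside the average because it is less sensitive to outlier changes:

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical (commit_time, deploy_time) pairs for individual changes.
changes = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 15, 0)),   # 6 h
    (datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 3, 10, 0)),  # 24 h
    (datetime(2023, 5, 4, 8, 0), datetime(2023, 5, 4, 20, 0)),   # 12 h
]

# Lead Time for Changes: elapsed time from commit to running in production.
lead_times_h = [(deploy - commit).total_seconds() / 3600
                for commit, deploy in changes]
print(f"mean: {mean(lead_times_h):.1f} h, median: {median(lead_times_h):.1f} h")
```

A median below 24 hours would place this hypothetical team in the elite tier for this metric.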

Question it answers: How long does it take to go from code committed to code successfully running in production?

  • Elite performers: Less than one day
  • High performers: Between one day and one week
  • Medium performers: Between one week and one month
  • Low performers: Between one month and six months

Source: 2019 Accelerate State of DevOps, Google

If a high lead time for changes is detected, DevOps teams can introduce more automated deployment and review processes and break products and features into smaller, more manageable units.

Change Failure Rate

This metric captures the percentage of code changes that result in incidents, rollbacks, or any other type of production failure. Change Failure Rate is thus a true measure of quality and stability, while the previous metrics, Deployment Frequency and Lead Time for Changes, indicate only the tempo of software delivery, not its quality. According to the DORA report, high performers fall somewhere between 0% and 15%.

Change Failure Rate is calculated by counting the number of deployment failures and dividing it by the total number of deployments. When tracked over time, this metric provides great insight into how much time is spent fixing errors and bugs versus delivering new code. Needless to say, a DevOps team should always strive for the lowest average possible.
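The division above can be sketched with a hypothetical deployment log, where each entry records whether that deployment caused a production failure:

```python
# Hypothetical deployment log: True marks a deployment that caused a
# failure in production (incident, rollback, or hotfix required).
caused_failure = [False, False, True, False, False,
                  False, True, False, False, False]

# Change Failure Rate: failed deployments divided by total deployments.
cfr = sum(caused_failure) / len(caused_failure)
print(f"change failure rate: {cfr:.0%}")  # 2 failures out of 10 -> 20%
```

The tricky part in practice is not the arithmetic but attribution: deciding which incidents count as caused by a given deployment, which is why this data usually has to be joined across deployment and incident-tracking tools.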

Question it answers: What percentage of changes to production or end-users results in degraded service?

  • Elite performers: 0-15%
  • High performers: 0-15%
  • Medium performers: 0-15%
  • Low performers: 46-60%

Source: 2019 Accelerate State of DevOps, Google

To improve a high average, teams should reduce deployment failures and the time wasted on delays.

Mean Time to Recover

The Mean Time to Recover metric measures the time it takes for a service to bounce back from a failure. No matter how high-performing a DevOps team is, unplanned outages and incidents happen. And since failures can’t be avoided, it is really the time it takes to restore or recover a system or application that makes the difference.

If a company has a short recovery time, leadership usually feels more comfortable with reasonable experimenting and innovating. In return, this creates a competitive advantage and improves business revenue.

The metric is important as it encourages engineers to build more robust systems. It is usually calculated by tracking the average time between a bug report and the moment the bug fix is deployed.
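This calculation can be sketched with hypothetical incident records, each pairing the time a failure was reported with the time service was restored:

```python
from datetime import datetime

# Hypothetical incidents: (time failure was reported, time service restored).
incidents = [
    (datetime(2023, 5, 2, 14, 0), datetime(2023, 5, 2, 14, 45)),  # 45 min
    (datetime(2023, 5, 9, 9, 0), datetime(2023, 5, 9, 10, 30)),   # 90 min
]

# Mean Time to Recover: average duration from failure to restoration.
durations_min = [(restored - reported).total_seconds() / 60
                 for reported, restored in incidents]
mttr = sum(durations_min) / len(durations_min)
print(f"MTTR: {mttr} minutes")  # (45 + 90) / 2 = 67.5 minutes
```

An MTTR of just over an hour would put this hypothetical team near the boundary between the elite tier (under an hour) and the high tier (under a day).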

Question it answers: How long does it take to restore service when a service incident or a defect that impacts users occurs?

  • Elite performers: Less than an hour
  • High performers: Less than one day
  • Medium performers: Less than one day
  • Low performers: Between one week and one month

Source: 2019 Accelerate State of DevOps, Google

To improve their MTTR, DevOps teams have to practice continuous monitoring and prioritize recovery when a failure happens. It also helps to establish a go-to action plan for an immediate response to a failure.


The benefits of DORA metrics tracking

So why should every DevOps team use DORA metrics? The answer is pretty simple: without data that measures performance, it is difficult or nearly impossible to make any improvements.

DORA metrics break down abstract processes in software development and delivery, making them tangible and visible, so engineering leaders can take specific steps toward more streamlined processes and increase the value their software delivers.

Below is an overview of the most compelling DORA metrics benefits.

Decision making

Companies that streamline their development and delivery processes increase the value their software delivers and are more successful in the long run. Tracking performance with the help of DORA metrics lets DevOps teams identify trends that form the basis of informed decisions and drive positive change.

Delivering value

In recent years, value stream management has become an important part of software development. In this context, DORA metrics play a big role as they show what kind of value is delivered to the customer and what performance level is necessary to reach desired business outcomes. Thus, once DevOps teams use DORA metrics, they usually see an increase in value over time.

Virtuous cycle

When performance is measured, there is a good chance it will be gamed: people who feel responsible for a certain metric will adjust their behavior to improve it. While this can have a distorting effect in other contexts, it is actually the desired effect in DevOps – it helps eradicate inefficient processes and reduce waste.


Key challenges of DORA metrics

While DORA metrics are a great way for DevOps teams to measure and improve performance, the practice doesn’t come without its own set of challenges. For most companies, the four metrics are simply a starting point and need to be adapted to the context of each application rather than applied uniformly per team or organization.

Below are four DORA metrics challenges to watch out for.

  • Decentralized environment and data: Data is scattered in different sources across the IT landscape.
  • Data extraction: Data is only available in raw format.
  • Data transformation: Data needs to be transformed and combined into calculable units.
  • Speed vs. Stability: Singular metrics should be put into context. A high Change Failure Rate could indicate that there is poor quality control while a high Deployment Frequency does not say anything about the quality of the code or product.

 

DORA metrics and Value Stream Management

Earlier, we mentioned DORA metrics and their importance in value stream management. Nowadays, more and more organizations not only use DORA metrics to streamline and optimize software development and delivery, but they also use Value Stream Management to create end-to-end visibility of the entire production process.

By monitoring every step through a proper value stream management platform like LeanIX VSM – that is, from customer request to product delivery – this management technique makes sure that the full value of software is delivered to the customer in the most efficient way.

Conclusion

To date, DORA is the best way to visualize and measure the performance of engineering and DevOps teams. However, organizations shouldn’t stop there. In order to unleash the full value that software can deliver to the customer, DORA metrics need to be part of all value stream management efforts.

By connecting service catalogs, agile planning, and delivery platforms through a platform like LeanIX VSM, a software organization will gain the holistic insight that is needed to reduce waste and streamline software development and delivery.


Answers to frequently asked questions on DORA metrics

What are DORA metrics?

DORA metrics are used by DevOps teams to measure their performance and find out where they fall on the spectrum from “low performers” to “elite performers”. The four metrics used are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).

What is Deployment Frequency?

Deployment Frequency refers to the frequency of successful software releases to production. In other terms, it measures how often a company deploys code for a particular application.

What is Lead Time for Changes?

Lead Time for Changes measures the velocity of software delivery. It is used to get a better understanding of the DevOps team’s cycle time and to find out how an increase in requests is handled.

To measure Lead Time for Changes, two pieces of data (or timestamps) are required: the exact time of the commit and the exact time of the deployment – in other words, the start and finish of a change. The average of these times is then used as an indicator of overall performance.

What is Change Failure Rate?

Change Failure Rate is a true measure of the quality and stability of software delivery. It captures the percentage of code changes that result in incidents, rollbacks, or any other type of production failure.

It is calculated by counting the number of deployment failures and then dividing it by the total number of deployments.

What is Mean Time to Recover?

The Mean Time to Recover metric measures the time it takes for a service to bounce back from a failure. It is calculated by tracking the average time between a bug report and the moment the bug fix is deployed.