Learn what DORA metrics are and why we need them. Find out how to measure and improve DevOps performance in connection with value stream management.
Digital transformation has turned every company into a software company, regardless of industry. Companies must react faster to changing customer needs while still delivering stable services. To meet these requirements, DevOps teams and lean practitioners constantly need to improve.
DORA metrics and Flow metrics address this need by providing objective data to measure the performance of software delivery teams and drive product improvement.
Read on to find out what these metrics are and how they can help you in your value stream management efforts.
DevOps teams use DORA metrics to measure their performance and find out whether they rank as “low performers” or “elite performers”. The four metrics are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).
Deployment Frequency: Refers to the frequency of successful software releases to production.
Lead Time for Changes: Captures the time between a code change commit and its deployable state.
Mean Time to Recovery: Measures the time between an interruption due to deployment or system failure and full recovery.
Change Failure Rate: Indicates how often a team’s changes or hotfixes lead to failures after the code has been deployed.
DORA stands for the DevOps Research and Assessment team. Over a seven-year program, this Google research group analyzed DevOps practices and capabilities and identified four key metrics for measuring software development and delivery performance.
This has revolutionized the way DevOps teams operate as these metrics create visibility and deliver actual data that can be used as a base for improvements and decision-making.
To establish this useful guide for DevOps, the research group analyzed data from more than 32,000 professionals worldwide. Alongside the annual State of DevOps report, it also released an ROI whitepaper on DevOps transformation as well as the book “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”, co-authored by DORA team lead Nicole Forsgren.
In the next sections below, you will learn more about the four DORA metrics and why they are so useful in value stream management.
As the name already suggests, Deployment Frequency refers to the frequency of successful software releases to production. In other terms, it measures how often a company deploys code for a particular application.
The metric uses the total number of deployments per day as a reference and was developed from lean manufacturing concepts that measure and control the batch size of inventory a company delivers.
Naturally, more successful companies tend to do smaller and much more frequent deliveries – or in the world of DevOps, more frequent but smaller deployments.
In general, one deployment per week is the standard, while a high-performing company releases up to seven deployments a day. Of course, the typical number of deployments differs by product.
For example, mobile applications that require customers to download the latest update usually make one or two releases per quarter at most, while a SaaS solution can deploy multiple times a day.
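As an illustration, deployment frequency can be computed from a simple log of production deployments. The dates and helper function below are hypothetical, not part of any specific tool:

```python
from datetime import date

# Hypothetical deployment log: dates on which a deploy reached production.
deployments = [
    date(2024, 3, 4), date(2024, 3, 4), date(2024, 3, 5),
    date(2024, 3, 6), date(2024, 3, 8),
]

def deployments_per_day(deploy_dates, period_days):
    """Average number of production deployments per day over a period."""
    return len(deploy_dates) / period_days

# 5 deployments over a 5-working-day week -> an average of 1.0 per day
frequency = deployments_per_day(deployments, period_days=5)
print(frequency)  # 1.0
```

An average of one or more deployments per day would place this team in the elite band of the table below.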
| Question it answers | Elite performers | High performers | Medium performers | Low performers |
| --- | --- | --- | --- | --- |
| How often does your organization deploy code to production or release it to end-users? | On-demand (multiple deployments per day) | Between once per day and once per week | Between once per week and once per month | Between once per month and once every six months |
Source: 2019 Accelerate State of DevOps, Google
When DevOps teams realize they fall into a low-performing category, they can automate more of their testing and validation of new code and shorten the interval between error recovery and delivery.
This metric measures the time that passes for committed code to reach production. While Deployment Frequency measures the cadence of new code being released, Lead Time for Changes measures the velocity of software delivery. It is used to get a better understanding of the DevOps team’s cycle time and to find out how an increase in requests is handled. The lower the lead time for changes, the more efficient a DevOps team is in deploying code.
To measure Lead Time for Changes, two pieces of data (or timestamps) are required: the exact time of the commit and the exact time of the deployment – in other words, the start and finish of a change. The average time is then used as an indicator of overall performance.
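A minimal sketch of this calculation, using hypothetical commit/deploy timestamp pairs:

```python
from datetime import datetime

# Hypothetical (commit_time, deploy_time) pairs for recent changes.
changes = [
    (datetime(2024, 3, 4, 9, 0),  datetime(2024, 3, 4, 15, 0)),   # 6 h
    (datetime(2024, 3, 5, 10, 0), datetime(2024, 3, 6, 10, 0)),   # 24 h
    (datetime(2024, 3, 6, 8, 0),  datetime(2024, 3, 6, 14, 0)),   # 6 h
]

def mean_lead_time_hours(pairs):
    """Average commit-to-production time in hours."""
    total = sum((deploy - commit).total_seconds() for commit, deploy in pairs)
    return total / len(pairs) / 3600

print(mean_lead_time_hours(changes))  # 12.0
```

An average lead time under a day, as in this sample, corresponds to the elite band in the table below.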
| Question it answers | Elite performers | High performers | Medium performers | Low performers |
| --- | --- | --- | --- | --- |
| How long does it take to go from code committed to code successfully running in production? | Less than one day | Between one day and one week | Between one week and one month | Between one month and six months |
Source: 2019 Accelerate State of DevOps, Google
If a high lead time for changes is detected, DevOps teams can implement more automated deployment and review processes and divide products and features into more compact and manageable units.
The Mean Time to Recovery metric measures the time it takes for a service to bounce back from a failure. No matter how high-performing a DevOps team is, unplanned outages and incidents happen. And since failures can’t be avoided entirely, it is the time it takes to restore a system or application that makes the difference.
If a company has a short recovery time, leadership usually feels more comfortable with reasonable experimenting and innovating. In return, this creates a competitive advantage and improves business revenue.
The metric is important as it encourages engineers to build more robust systems. It is usually calculated by tracking the average time between a bug report and the moment the bug fix is deployed.
| Question it answers | Elite performers | High performers | Medium performers | Low performers |
| --- | --- | --- | --- | --- |
| How long does it take to restore service when a service incident or a defect that impacts users occurs? | Less than an hour | Less than one day | Less than one day | Between one week and one month |
Source: 2019 Accelerate State of DevOps, Google
To improve their MTTR, DevOps teams have to practice continuous monitoring and prioritize recovery when a failure happens. It also helps to establish a go-to action plan for an immediate response to a failure.
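The averaging described earlier – the time between a bug report and the deployed fix – can be sketched as follows. The incident data is hypothetical:

```python
from datetime import datetime

# Hypothetical incidents: (reported_at, service_restored_at).
incidents = [
    (datetime(2024, 3, 4, 12, 0), datetime(2024, 3, 4, 12, 30)),  # 30 min
    (datetime(2024, 3, 7, 9, 0),  datetime(2024, 3, 7, 10, 30)),  # 90 min
]

def mttr_minutes(incidents):
    """Mean time to recovery in minutes across all incidents."""
    total = sum((end - start).total_seconds() for start, end in incidents)
    return total / len(incidents) / 60

print(mttr_minutes(incidents))  # 60.0
```

An MTTR of one hour, as in this sample, sits on the boundary of the elite band in the table above.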
This metric captures the percentage of code changes that result in incidents, rollbacks, or any other type of production failure. Change Failure Rate is thus a true measure of quality and stability, while the previous metrics, Deployment Frequency and Lead Time for Changes, indicate only the tempo of software delivery, not its quality. According to the DORA report, high performers fall somewhere between 0-15%.
Change Failure Rate is calculated by dividing the number of deployment failures by the total number of deployments. Tracked over time, this metric provides great insight into how much time is spent fixing errors and bugs versus delivering new code. Needless to say, a DevOps team should always strive for the lowest average possible.
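A minimal sketch of this calculation, with hypothetical deployment counts:

```python
def change_failure_rate(failed_deployments, total_deployments):
    """Percentage of production deployments that caused a failure."""
    return failed_deployments / total_deployments * 100

# Hypothetical: 3 of 40 deployments led to an incident or rollback.
rate = change_failure_rate(3, 40)
print(f"{rate:.1f}%")  # 7.5%
```

A rate of 7.5% would fall within the 0-15% band shared by medium, high, and elite performers in the table below.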
| Question it answers | Elite performers | High performers | Medium performers | Low performers |
| --- | --- | --- | --- | --- |
| What percentage of changes to production or end-users results in degraded service? | 0-15% | 0-15% | 0-15% | 46-60% |
Source: 2019 Accelerate State of DevOps, Google
To improve a high Change Failure Rate, teams should reduce deployment failures and the time wasted due to delays.
So why should every DevOps team use DORA metrics? The answer is pretty simple: If there is no data that measures performance, it is difficult or nearly impossible to make any improvements.
DORA metrics break down abstract processes in software development and delivery and make them more tangible and visible, so engineering leaders can take specific steps towards more streamlined processes and increase the value of software.
Below is an overview of the most compelling DORA metrics benefits.
Companies that streamline their development and delivery process increase the value their software delivers and are more successful in the long run. Tracking performance with the help of DORA metrics lets DevOps teams establish trends that form the basis of informed decisions and drive positive change.
In recent years, value stream management has become an important part of software development. In this context, DORA metrics play a big role as they show what kind of value is delivered to the customer and what performance level is necessary to reach desired business outcomes. Thus, once DevOps teams use DORA metrics, they usually see an increase in value over time.
When performance is measured, there is a big chance it will be gamed. This means that people who feel responsible for a certain metric will adjust their behavior to improve the metric on their end. While this can have a distorting effect in various contexts, it is actually the desired effect in DevOps – it helps to eradicate inefficient processes and reduces waste.
While DORA metrics are a great way for DevOps teams to measure and improve performance, the practice doesn’t come without its own set of challenges. For most companies, the four metrics are simply a starting point and need to be customized to fit the context of each application rather than each team or organization.
Below are four DORA metrics challenges to watch out for.
To date, DORA metrics are the best way to visualize and measure the performance of engineering and DevOps teams.
By connecting service catalogs, agile planning, and delivery platforms, a software organization will gain the holistic insight that is needed to reduce waste and streamline software development and delivery.
What are DORA metrics?
DevOps teams use DORA metrics to measure their performance and find out whether they rank as “low performers” or “elite performers”. The four metrics are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).
What is Deployment Frequency?
Deployment Frequency refers to the frequency of successful software releases to production. In other terms, it measures how often a company deploys code for a particular application.
What is Lead Time for Changes?
Lead Time for Changes measures the velocity of software delivery. It is used to get a better understanding of the DevOps team’s cycle time and to find out how an increase in requests is handled.
To measure Lead Time for Changes, two pieces of data (or timestamps) are required: the exact time of the commit and the exact time of the deployment – in other words, the start and finish of a change. The average time is then used as an indicator of overall performance.
What is Change Failure Rate?
Change Failure Rate is a true measure of the quality and stability of software delivery. It captures the percentage of code changes that result in incidents, rollbacks, or any other type of production failure.
It is calculated by counting the number of deployment failures and then dividing it by the total number of deployments.
What is Mean Time to Recover?
The Mean Time to Recover metric measures the time it takes for a service to bounce back from a failure. It is calculated by tracking the average time between a bug report and the moment the bug fix is deployed.