"Software development effectiveness鈥 has recently become an industry buzzword, frequently featuring in conferences, mentioned by tech giants and teams pursuing higher efficiency, such as Alibaba鈥檚 effectiveness 211 vision, Tencent's intelligent development platform and Baidu鈥檚 engineering capability white paper. But the question of what constitutes an appropriate effectiveness metric still remains. The 鈥渓ines of code鈥 metric used a decade ago is no longer applicable to the understanding of software development in the modern agile process, which has left the software industry of today with no unified or widely recognized method for measuring development effectiveness.听
听
What are the appropriate software development effectiveness metrics?

The question of which metrics are appropriate first requires figuring out what data to measure. Different metrics have been proposed for different concerns, but in our experience the commonly used metrics can be categorized as follows:

Planning progress: assessing progress, obtaining background information and context, knowing when a task is completed, predicting problems (future), and replaying and reviewing problems (past).
Burn down chart (sprint/release)
Velocity chart
Standard deviation
Throughput
Cumulative flow diagram
Control chart
Kanban WIP board

Fast feedback: continuous integration, continuous deployment.
Build and deploy speed
Test speed
PR (pull request) approval time
Unit tests passed
Integration tests passed

Team transformation: using specific metrics to measure different ways of working can influence people's behavior, for example by making it clear to management that something is unreasonable and needs to change, or that more time and budget are needed.
Pairing time
Time spent manual testing
PR (pull request) approval time
Fix red build time
Cost of fixing bugs in Dev/Prod
Test coverage
Effort allocation (New work / Unplanned work or rework / Other work)

Assisted decision-making: experiments can be conducted and new metrics continually explored to help make decisions.
Lead time
Number of bugs released (number of escaped bugs)
Effort allocation (New work / Unplanned work or rework / Other work)
Value delivered

Engineering capability: the four key metrics measure and identify weaknesses in the team's engineering practices (a small calculation sketch follows this list).
Lead time for changes
Deployment frequency
Change failure rate
Mean time to restore
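These four metrics are typically derived from deployment and incident records. As a minimal, hypothetical sketch (the record fields such as commit_to_deploy_secs, failed and restore_secs are assumptions for illustration, not a prescribed schema), they could be computed roughly like this:

```python
from datetime import timedelta
from statistics import median

# Hypothetical deployment records; the field names are assumptions for illustration.
deployments = [
    {"commit_to_deploy_secs": 26 * 3600, "failed": False, "restore_secs": 0},
    {"commit_to_deploy_secs": 50 * 3600, "failed": True,  "restore_secs": 2 * 3600},
    {"commit_to_deploy_secs": 12 * 3600, "failed": False, "restore_secs": 0},
]
window_days = 7  # measurement window covered by the records above

# Lead time for changes: how long a commit takes to reach production (median).
lead_time_for_changes = timedelta(seconds=median(d["commit_to_deploy_secs"] for d in deployments))

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deployments) / window_days

# Change failure rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean time to restore: average recovery time for the failed deployments.
failures = [d for d in deployments if d["failed"]]
mean_time_to_restore = (
    timedelta(seconds=sum(d["restore_secs"] for d in failures) / len(failures)) if failures else None
)

print(lead_time_for_changes, deployment_frequency, change_failure_rate, mean_time_to_restore)
```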
However, it's important to remember that there is no one-size-fits-all template. You need to analyze multiple factors to find the metrics best suited to your team and keep modifying and iterating on them based on value delivery. Having completely different metrics between teams is normal, as the measurement of development effectiveness largely depends on the team's type, scale, culture, projects and so on.

Faced with the multiplicity of metrics and tool chains, front-line developers might ask: Do I need to put these things into practice? What is my priority at this stage? In the next section, we'll look at a way to quickly select appropriate metrics and discuss how to calculate the start of lead time when the front end of the business is uncertain (the fuzzy front end).

Recommended metrics for three project types

In the software development process, there are generally three phases/types of projects: greenfield, yellowfield (or brownfield) and redfield, mirroring the lifecycle of a software system. Identifying the project type can be a quick and efficient way to find suitable metrics.

Greenfield: "developing a system for a totally new environment, without concern for integrating with other systems, especially not legacy systems. Such projects are deemed higher risk, as they are often for new infrastructure, new customers, and even new owners." The greenfield development team is the most likely to be close to the project's end user, and therefore the most likely to be able to measure end to end; free of "historical baggage," it can also design the architecture and choose the technology stack as it sees fit. Assisted decision-making and engineering capability can be selected as the primary metrics, focusing on the end-to-end value stream while ensuring that good engineering practices are in place from the beginning of the project. Planning progress and fast feedback can be used as secondary metrics to support end-to-end value flow measurement. Primary and secondary metrics generally influence and confirm each other, meaning that improvements in secondary metrics lead to improvements in primary metrics (e.g. an improvement in PR (pull request) approval time under fast feedback produces an improvement in lead time under assisted decision-making). The fuzzy front end can be defined as the moment when the end user puts a requirement to the team, while the moment when the user requirement is clearly recorded can be regarded as the beginning of lead time.

Yellowfield (brownfield): "new software must take into account and coexist with live software already in situ." For such projects, planning progress and engineering capability can be selected as the primary metrics; improving engineering capability speeds up delivery. Assisted decision-making can be used as a secondary metric to expand the measurement of the value generated by feature delivery and to promote end-to-end value flow measurement; measurement that expands sustainably can also drive the efficiency of the value stream. The fuzzy front end here can be defined as the moment the business is handed over to development and the story card is clearly defined. In our experience, taking over such projects usually requires extensive coordination and communication work. This type of project also tends to attract top-down change requests (especially for best-selling, long-lifecycle products), in which case team transformation and fast feedback can be added as primary metrics to support those change requests.

Redfield: The software system has entered the maintenance period; no new features will be developed, and only bugs discovered by end users are fixed. After a period of maintenance, it may be replaced entirely by a new system. For these projects, planning progress can be selected as the primary metric to ensure a good bug-fixing throughput. If the product team wants to increase deployment frequency and speed up its response to end users' bug reports, it can choose fast feedback as a secondary metric. The fuzzy front end can be defined as the moment when the end user clearly reports the bug to the customer service staff.

To quickly distinguish greenfield from yellowfield, ask whether integration and coexistence with legacy systems needs to be considered; to distinguish yellowfield from redfield, ask whether the system is in maintenance-only mode.

Greenfield: developing a system for a totally new environment, without concern for integrating with other systems, especially not legacy systems.

Yellowfield (brownfield): new software must take into account and coexist with live software already in situ.

Redfield: the software system has entered the maintenance period; no new features will be developed, and only bugs discovered by end users are fixed.

The selection of metrics is strongly related to the context of the team: choose the recommended metric set according to that specific context, then tailor the individual metrics within it.

Measurement debt and governance

As measurement progresses, and development moves from green to yellow or from yellow to red, you may have an "aha" moment: if measurement isn't done during the greenfield phase, you may incur measurement debt when you move to the yellowfield or redfield phases. The longer the delay, the higher the cost (or interest) of the initial measurement. These are the same concepts of debt and interest involved in technical debt.

For example, imagine a system that analyzes and counts the time a request spends in each stage in order to optimize it. If you avoided the initial cost of measurement, never creating a TraceID, tags or other attributes that follow the request through the various systems, you might find that you cannot measure the time spent in the middle of the process. When you then try to add the necessary information, you'll find that too many systems are involved, some of which you might not even know about, and that the workload is too large. It's the same with measurement debt: in the long term, setting up the necessary attributes during the greenfield stage is easier and more cost-effective than doing it in later stages, by which point it might not even be possible.
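To make that concrete, here is a minimal sketch of the kind of instrumentation described above, assuming a simple Python service; the header name, field names and functions are hypothetical and only illustrate the idea of attaching a trace ID early and timing each stage against it:

```python
import time
import uuid

TRACE_HEADER = "X-Trace-Id"  # illustrative header name, not a required standard

def ensure_trace_id(headers: dict) -> str:
    """Attach a trace ID to an incoming request if upstream didn't send one."""
    trace_id = headers.get(TRACE_HEADER) or uuid.uuid4().hex
    headers[TRACE_HEADER] = trace_id
    return trace_id

def record_stage(trace_id: str, stage: str, started_at: float) -> None:
    """Emit one timing record per stage; a collector can later group records by trace_id."""
    print({"trace_id": trace_id, "stage": stage, "seconds": round(time.time() - started_at, 3)})

def handle_request(headers: dict, payload: dict) -> dict:
    trace_id = ensure_trace_id(headers)

    started = time.time()
    validated = {"ok": True, **payload}            # stand-in for real validation work
    record_stage(trace_id, "validate", started)

    started = time.time()
    result = {"status": "processed", **validated}  # stand-in for downstream calls, which would
    record_stage(trace_id, "process", started)     # forward the same headers so the ID follows the request

    return result

handle_request({}, {"order_id": 42})
```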
听
How to manage measurement debt?

Measurement debt is similar enough to technical debt that we can borrow Martin Fowler's technical debt quadrant and apply the same terms to measurement debt: reckless, prudent, deliberate and inadvertent.

When you suggest to the team, "let's measure development performance and see if there is anything that can be improved," you may get a range of answers, each falling into one of these quadrants.

Unlike with real-life debt, some kinds of measurement debt are simply unavoidable, which makes it even more crucial to ensure the proper metrics are implemented early on.

In the absence of a universal metric, finding a way to measure development effectiveness relies on analyzing your team's process and the type of project you're working on. Metrics might vary wildly between teams, but every team should strive to implement its metrics early so as to incur as little measurement debt as possible.
听
Three principles of software development effectiveness metrics

Now that we've explored what constitutes an appropriate software development effectiveness metric and how to identify one quickly, I'd like to offer further recommendations, grounded in the business situation and the team's context, through three principles developed from observation.

Principle #1: Don't let the measurement become the target

The economist Charles Goodhart proposed in 1975 what is now known as Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." For example, when the French colonial government in Vietnam tried to cull the rat population by putting a bounty on their tails, it only encouraged people to breed more rats so they could cut off their tails and pocket the reward. While "the number of rats' tails collected" usually correlates positively with "the number of dead rats," which is a good metric, turning the former into the optimization goal produced the opposite of the intended effect. In our experience, the same principle applies to software development.

When you measure and set the number of story points the team needs to complete in each iteration, the team will be driven to split stories into more cards than usual to reach the goal. This can manifest in several ways. For example, the team might end up with a three-point Team Building card on the wall once measurement begins, because they want more story points per iteration. Estimates can also inflate once measurement is in place, such as the estimate for database creation going from one point to three points. Or the team could move a card back to the backlog when it is blocked by third-party dependencies, rather than keep tracking and chasing it, because leaving it on the board would increase the measured cycle time. The target might have been reached, but will this kind of measurement bring value to the business? Can it be translated into specific management or technical practices?

To make metrics and data collection as realistic as possible, we need to focus on trends and blockages. In the above case, what's needed is to observe the trend in the number of cards completed in each iteration. Generally, the cards completed per iteration fluctuate within a reasonable range, but observing the overall trend should let us identify blockages and their causes and take targeted action to accelerate the flow of cards. There's no need to compare yourself with other teams; setting an arbitrary target would only cause unexpected results.

From the chart above, which shows the number of story points completed in each iteration of a 24-iteration project, we can analyze the project's effectiveness:

The overall trend is a gradual increase, indicating that the team is becoming more proficient over time.

The drop between iterations seven and eight indicates that some story cards were too large, increasing the complexity of communication and the development time. Solution: the developers communicate more closely with the business analysts to get smaller, more independent story cards from the beginning.

The drop between iterations 14 and 15 was caused by problems with a third-party API, which delayed the delivery of cards totalling more than 10 points to the next iteration. Solution: closely track the status of dependent systems so development tasks can be adjusted, preventing blocks and excessive waiting.
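As a rough illustration of this kind of trend-and-blockage reading (rather than target-setting), a per-iteration series of completed points can be scanned for unusual drops against a simple moving average. The numbers and threshold below are invented for the example:

```python
from statistics import mean

# Hypothetical completed story points per iteration; values are invented for illustration.
completed_points = [8, 9, 11, 10, 12, 13, 14, 6, 12, 13, 14, 15, 15, 16, 5, 14, 16, 17, 17, 18, 19, 19, 20, 21]

WINDOW = 3        # look back over the last few iterations
DROP_RATIO = 0.6  # flag iterations that fall below 60% of the recent average

for i, points in enumerate(completed_points, start=1):
    if i <= WINDOW:
        continue
    recent_average = mean(completed_points[i - 1 - WINDOW:i - 1])
    if points < recent_average * DROP_RATIO:
        # A flagged iteration is a prompt to investigate blockages (oversized cards,
        # third-party dependencies, environment issues), not a performance verdict.
        print(f"Iteration {i}: {points} points vs recent average {recent_average:.1f} - investigate")
```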
听
Principle #2: A metric that cannot be decomposed is not a good metric

Metrics and results that can be decomposed are good metrics. Lead time for change and feature delivery time are examples, as they can drive the flow of value delivery. However, it can be difficult to capture the moment a requirement is raised and the moment the feature goes live, and to calculate the time between those two points for measurement and blockage identification, because the span between them is large and many factors are involved. As a result, getting a clear view of the causes of blockages is not easy.

Problem decomposition is the classic remedy: break a large, hard-to-visualize problem into several sub-problems, analyze each of them, and thereby solve the original problem. The same approach can be applied to development effectiveness: metrics can be measured and decomposed in stages, and each stage can be further subdivided.

Decomposing the feature delivery time lets us find more optimization points throughout the process. Look for features of similar size in the last iteration, decompose the time and proportion of each process metric, and observe whether each metric increases or decreases per iteration. For example, the feature design time increased by 2.18% compared with a similarly sized feature in the previous iteration. What caused this? Can it be improved? As it turns out, it was due to a lack of thorough analysis, which led to multiple design revisions as feedback came in, and hence a delay. Solution: add a clearer checklist for the output of feature analysis to ensure it has been analyzed thoroughly before being sent to design. This is how a decomposable metric can be applied to management and technical practices.
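A sketch of what that decomposition might look like in practice, assuming a team records how long each stage of a feature took; the stage names and numbers below are invented for illustration, not taken from the example above:

```python
# Hypothetical per-feature stage durations, in hours, for two similarly sized features.
previous_feature = {"analysis": 6.0, "design": 9.2, "development": 30.0, "testing": 12.0, "deployment": 1.5}
current_feature  = {"analysis": 5.5, "design": 11.0, "development": 29.0, "testing": 12.5, "deployment": 1.5}

def breakdown(stages: dict) -> dict:
    """Return each stage's share of the total delivery time."""
    total = sum(stages.values())
    return {stage: duration / total for stage, duration in stages.items()}

previous_share = breakdown(previous_feature)
current_share = breakdown(current_feature)

# Compare stage by stage: a rising share points to where delivery time is leaking.
for stage in current_feature:
    delta = (current_share[stage] - previous_share[stage]) * 100
    direction = "up" if delta > 0 else "down"
    print(f"{stage}: {current_share[stage]:.1%} of delivery time ({direction} {abs(delta):.2f} pp vs previous feature)")
```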
听
Principle #3: Only measurement that expands sustainably can drive the efficiency of the value flow

The measurement of development effectiveness often starts with a more global metric, such as feature delivery time, as it more intuitively reflects business value. However, measurement can also start locally and drive increasing value flow efficiency through continuous expansion. Take lead time as an example: here it measures the time from code commit to deployment to the production environment. In the second chapter of "Accelerate," Forsgren gives the following description: "Lead time is the time it takes to go from a customer making a request to the request being satisfied... In the design part of the lead time, it's often unclear when to start the clock, and often there is high variability... However, the delivery part of the lead time – the time it takes for work to be implemented, tested, and delivered – is easier to measure and has a lower variability."

What this passage illustrates is that we should treat lead time only as a starting metric: while a short lead time indicates that the team has strong engineering practices and solid CI/CD (continuous integration/continuous delivery), it doesn't necessarily mean the team can respond quickly to customer needs. We should continuously expand the measurement to drive value flow efficiency.
听
Let's look at an actual case: the team measured the lead time for change, which was usually about 10 minutes. However, some code commits took several days to be deployed. After commit, the code entered the pull request review phase, meaning it had to be reviewed by the client team; when that review didn't happen in time, the pipeline wasn't triggered, leading to a delay of a few days. The pull requests the team could review and merge by themselves were handled quickly, so the team extended the measurement of the lead time for change: the start time was moved left, from the moment of merging into the master branch to the first code commit in the pull request. Measurement then revealed the points in the pull request review process that could be optimized in coordination with the client team, accelerating pull requests.

Later, the start time of the lead time for change was moved further left, to the moment the story card was moved into the development column. Moving the story into the development column started the lead time clock, helping us identify potential blockages in communication with BAs/QAs and further points for optimization. If the developed feature is protected by a feature toggle and the toggle is not turned on once the code is committed and deployed, the end time of the lead time can also be shifted right, to the moment the feature toggle is turned on. We can then analyze whether business decision-making time is so long that it delays features going live.
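A minimal sketch of how such an expanding lead time measurement might be wired up, with configurable start and end events; the event names and timestamps are hypothetical and simply mirror the stages described above:

```python
from datetime import datetime, timedelta

# Hypothetical timeline of one story; event names are illustrative, not a standard schema.
events = {
    "story_moved_to_development": datetime(2024, 3, 1, 9, 0),
    "first_commit_in_pr":         datetime(2024, 3, 1, 14, 0),
    "merged_to_master":           datetime(2024, 3, 4, 10, 0),
    "deployed_to_production":     datetime(2024, 3, 4, 10, 10),
    "feature_toggle_on":          datetime(2024, 3, 6, 16, 0),
}

def lead_time(events: dict, start_event: str, end_event: str) -> timedelta:
    """Lead time between two configurable boundary events."""
    return events[end_event] - events[start_event]

# Narrow measurement: merge to production deploy (strong CI/CD shows up here).
print(lead_time(events, "merged_to_master", "deployed_to_production"))

# Expanded left: start the clock at the first commit in the pull request.
print(lead_time(events, "first_commit_in_pr", "deployed_to_production"))

# Expanded further: from the story entering development to the feature toggle going live.
print(lead_time(events, "story_moved_to_development", "feature_toggle_on"))
```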
听
I hope these three principles will help you as you use metrics for development effectiveness:

Don't let the measurement become a target. To make metrics and data collection as realistic as possible, we need to focus on trends and blockages.
A metric that cannot be decomposed is not a good metric.
Only measurement that expands sustainably can drive the efficiency of the value flow.