Rapidly increasing competition among software companies has led to a demand for shorter production cycles. Software testers are under a lot of pressure to release high-quality products within limited budgets and time frames. Effective test management is crucial for achieving these goals.
One important tool for test management is testing metrics that provide quantitative information about product and process quality, project progress, and the skill fit of people on the team.
There is a great variety of software testing metrics, but which metrics will provide you with valuable insights? What data do you need to collect to calculate them? What is the correlation between the data and your conclusions about software testing? What actions can you take to improve quality? These and many other questions are important to consider before using testing metrics.
This article contains recommendations on how to choose metrics for making proactive decisions. You’ll find the most important types of testing metrics followed by real-world examples with explanations. This article will be useful for test and quality assurance managers who want to improve their skills in defining, tracking, and reporting test activity metrics.
Testing metrics are an integral part of testing and test management. Using metrics, test managers can get objective and quantified information about the quality of software components and features during all stages of testing. Metrics also display past and present performance and can be used for predicting the future performance of a product’s components. They provide confidence in the quality of a software product and measure how close it is to meeting quality requirements.
In addition, testing metrics allow test managers to estimate schedules and required efforts, monitor testing progress, and improve processes. Thus, metrics have many advantages. They can provide information about:
- Characteristics of the product being tested (size, quality, risk, complexity)
- Test processes (progress, effectiveness, efficiency)
- Test results
- Problems with products and processes
However, testing metrics also have risks, challenges, and limitations.
Here’s a list of possible disadvantages of metrics:
- Tracking, researching, and reporting metrics requires time and money
- Risk of measuring the wrong things
- Risk of using wrong or outdated metrics
- One (or even two or three) metrics may not be enough for making decisions
- Collected data remains reliable for only a short time
- Risk of misinterpretation of metric reports
- Late availability of metrics in software testing
Nevertheless, using metrics is still better than dispensing with any measurements at all. Keeping in mind these disadvantages, test managers can more effectively apply metrics during testing.
There are no universal metrics or best practices that should be applied to every test process. It’s critical that test managers use metrics that align with the objectives of the particular test process.
Identifying the objectives for a test process and assessing actual software quality may be challenging, as test managers tend to have unrealistic expectations for their projects (such as finding and fixing all code defects) or to underestimate how many hours will actually be needed for testing.
In his work Metrics for Software Testing: Managing with Facts, Rex Black defines the four most common objectives of testing:
- Find defects, especially important ones
- Reduce the risk of post-release failures
- Build confidence in the software product
- Provide useful, timely information about testing and quality
You can also have other objectives for your project.
Metrics are part of many quality models and standards like Capability Maturity Model Integration (CMMI), Test Maturity Model Integration (TMMI), and Test Process Improvement (TPI). Using metrics properly to achieve test process objectives is an important skill for a test manager.
Your goals define the set of metrics you need. Once this set is formalized, you need to define your processes for tracking, reporting, and decision making. Rex Black suggests addressing metrics in the following order: defining, tracking, and reporting them.
When defining metrics, you should define a limited set of effective metrics based on specific objectives for your project, process, or product. Selecting them can be far from easy, as effective metrics should have the following attributes:
- Simple to understand and interpret
- Objective so they reflect the goals and objectives of the company
- Measurable so they can be reliably quantified
- Meaningful in order to provide valuable information
- Reliable so they can provide consistent results for a specified period of time
- Valid in order to ensure that they really measure what you need to know
- Easy to collect so they can be automated or applied without intrusion to the testing process
It’s better to start with basic metrics and use more complex metrics when you need more details. Use a set of metrics that will balance and complete each other so they reflect the entire picture. Once testing metrics have been defined, their interpretation must be agreed upon by all stakeholders and team members in order to avoid confusion when the numbers are discussed.
Tracking metrics is the process of collecting data and interpreting results. You should determine who will collect data, when and how they will collect data, and where that data will be stored. After that, you need to define methods of data analysis and visualization.
Keep in mind that some tools for data collection cost money. To minimize costs, automate metrics tracking as much as possible. You can also use existing free tools like filters in your ticket system or Google Sheets.
Data analysis may require a test manager’s patience and time to carefully investigate possible divergence from expectations in actual measurements and understand reasons for the divergence.
Track metrics at regular intervals during the testing process and even after the product release. You can also analyze results in comparison with previous releases and targeted values.
Test managers must verify the information that’s reported. They should analyze it for correctness as well as trends. The reason for this is that measurements taken for a metric may not reflect the true status of the test process or may convey an overly positive or negative trend.
Before any data is presented, a test manager must review it for both accuracy and for the message that it’s likely to convey. Thus, it’s reasonable to validate metrics before reporting them.
In addition, you need to validate measurements over time, as a metric on its own may come to be interpreted differently than was agreed during the metric definition phase.
The main objective of reporting metrics is to provide an immediate understanding of the state of a project for management purposes.
Metrics are usually reported during meetings of team members or stakeholders using visualizations. You can also share information about metrics using your corporate content management platform so it’s available whenever necessary.
Testing metrics can be classified in various ways depending on your objectives. According to Rex Black’s Foundations of Software Testing - ISTQB Advanced Level Test Manager, test activity metrics can be grouped into four categories:
- Project metrics are used for defining whether testing is on target to achieve the plan.
- Product metrics are used to measure whether product quality is on track for successful delivery.
- Process metrics measure the capabilities of product testing.
- People metrics assess the skill level of the team as a whole and each member individually.
Testing metrics can belong to one or more categories. For example, the rate of reported defects can be interpreted as a project, product, or process metric.
People metrics aren’t always used because of their sensitivity. Project metrics aren’t closely associated with software quality, in contrast to product and process metrics. However, people and project metrics can also be included in your metrics program as normalizing factors that measure the number of testers and their skills or the size of the project.
Let’s consider what testing metrics you can choose from each type.
These metrics measure progress toward established project completion criteria, such as the percentage of test cases executed, passed, and failed before a release, sprint, or some other event.
Percentage of test case execution
This metric indicates testing progress by providing the percentage of executed tests out of all planned test cases. For more detailed information, you can also measure a separate rate of passed, blocked, and failed test cases.
How to measure:
Percentage of Test Case Execution = ((Number of Passed Tests + Number of Failed Tests + Number of Blocked Tests) / Total Number of Test Cases) x 100
The metric above displays test case results grouped by Passed, Failed, In progress, Hardware problems, Not implemented, and Not Run. The total number of test cases is 169, representing 100 percent. Using the formula above, we can calculate that 81 percent of test cases were executed (63 percent of all cases passed), while 18 percent of test cases were not run and 1 percent are still in progress.
The percentage of test case execution increases as testing progresses, so at product acceptance it should reach 100 percent. If it doesn’t, the team should review each unexecuted test to determine why it wasn’t run.
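The calculation above can be sketched in a few lines of Python. This is a minimal illustration, not a tool recommendation, and the counts used are hypothetical:

```python
def execution_percentage(passed: int, failed: int, blocked: int, total: int) -> float:
    """Percentage of planned test cases that have been executed."""
    executed = passed + failed + blocked
    return executed / total * 100

# Hypothetical run: 106 passed, 20 failed, 11 blocked out of 169 planned.
print(round(execution_percentage(106, 20, 11, 169), 1))  # → 81.1
```

In practice these counts come straight from your test management tool’s export, so the metric is cheap to automate.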
Various project management methodologies provide universal project metrics, for example Earned Value Analysis.
Product metrics measure the quality of a product, for instance by measuring defect distribution across components, defect priority distribution, and customer satisfaction.
Total percentage of critical defects
Besides obvious absolute metrics such as the number of defects, one basic metric to understand the current quality level of a product or any of its components is the percentage of critical defects.
How to measure:
Total Critical Defects Percentage = (Critical Defects / Total Defects Reported) x 100
Distribution of defects across components
As bugs are fixed, the number of open defects decreases. However, some components may still have many flaws. Using defect distribution across components, test managers can identify these problematic functional areas. After analyzing this metric, both the development and test teams can focus more effort on addressing them.
The defect distribution metric shows the test manager a clear picture of the location and priority of all defects discovered during testing. You can also make two charts of total (including historical) and actual (opened) defects to compare how your team is meeting stated goals alongside the number of defects that still require fixing.
How to measure:
Defect Distribution = Total Number of Defects / Functional Area
The defect distribution pie charts above show the total number of bugs per component and the number of unfixed actual bugs. These charts demonstrate the components with the greatest number of defects. In the example above, we can conclude that such important components as the Reports module and the GUI require more attention from the test team.
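Both product metrics above reduce to simple aggregation over defect records. The sketch below uses hypothetical defect data (components, priorities, and fixed flags are invented for illustration) to compute the distribution across components and the critical defects percentage:

```python
from collections import Counter

# Hypothetical defect records: (component, priority, fixed?)
defects = [
    ("Reports", "High", False), ("Reports", "Normal", True),
    ("GUI", "Blocker", False), ("GUI", "Normal", True),
    ("Licensing", "Normal", True), ("Reports", "Normal", False),
]

# Defect distribution across components: total vs. still-open defects
total_per_component = Counter(component for component, _, _ in defects)
open_per_component = Counter(c for c, _, fixed in defects if not fixed)
print(total_per_component)  # Counter({'Reports': 3, 'GUI': 2, 'Licensing': 1})
print(open_per_component)   # Counter({'Reports': 2, 'GUI': 1})

# Total percentage of critical defects (here: Blocker and High priority)
critical = sum(1 for _, priority, _ in defects if priority in ("Blocker", "High"))
print(round(critical / len(defects) * 100, 1))  # → 33.3
```

The same grouping is usually available as a saved filter or pivot in a ticket system; the point is that the raw data is just a list of defects with a component field.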
Defect priority distribution
After compiling records regarding defects found in all functional areas of the product, test managers can see the priority of defects to discern which component requires more attention. The distribution of defects with certain priority levels across components can shine some light on the effectiveness of testing efforts.
The tables above show the number of total and actual defects along with priorities across product components. Here, all defects are grouped into Blocker, High, Normal, and Low priority. Licensing has only normal priority defects, which means that this part isn’t critical now. Though there are many unfixed defects in Reports and GUI, only a small number of them are high priority.
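A component-by-priority table like the one discussed above can be built by grouping defects on two keys. The data below is hypothetical and mirrors the narrative (Licensing has only normal priority defects):

```python
from collections import defaultdict

# Hypothetical open defects as (component, priority) pairs.
open_defects = [
    ("Reports", "High"), ("Reports", "Normal"), ("Reports", "Normal"),
    ("GUI", "Normal"), ("GUI", "Low"), ("Licensing", "Normal"),
]

# Build a component x priority table of defect counts.
table = defaultdict(lambda: defaultdict(int))
for component, priority in open_defects:
    table[component][priority] += 1

for component, counts in table.items():
    print(component, dict(counts))
# Reports {'High': 1, 'Normal': 2}
# GUI {'Normal': 1, 'Low': 1}
# Licensing {'Normal': 1}
```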
Customer satisfaction measures
It’s considered good practice in software quality engineering to take into account the customer’s perspective in order to help validate team efforts. Customer feedback is also important because customers may perceive the product differently than developers or QA testers. For example, it may turn out that a feature considered important by developers isn’t so useful for customers.
How to measure:
Customer Issues Level = Defects Reported by Customers + Help Desk Calls + Customer Surveys
The charts below show the results of a customer survey. This customer feedback provides information about customer satisfaction with particular features: GUI, Feature #1, and the Updating process.
The chart above shows that most customers are satisfied with the Graphical User Interface (GUI).
The chart for Feature #1 illustrates a feature that was considered important by developers but appears to be useless for customers.
Customer surveys can also reveal problems that aren’t usually reported by customers. Customers rarely complain about the inconvenience of the updating process, but a survey can reveal that customers aren’t satisfied with it.
This chart about customers’ readiness to recommend the product shows that the product quality is good enough.
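The Customer Issues Level formula above simply sums customer-facing signals per reporting period. A minimal sketch, assuming you collect per-release counts of customer-reported defects, help desk calls, and negative survey responses (the function name and inputs are hypothetical):

```python
def customer_issues_level(customer_defects: int, help_desk_calls: int,
                          negative_survey_responses: int) -> int:
    """Aggregate customer-facing quality signals into a single level."""
    return customer_defects + help_desk_calls + negative_survey_responses

# Hypothetical release: 4 customer defects, 12 calls, 7 negative responses.
print(customer_issues_level(4, 12, 7))  # → 23
```

A raw sum is only useful for spotting trends between releases; the individual components should still be reviewed separately.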
Process metrics measure the capabilities of the testing or development processes, such as progress in fixing bugs, the severity of quality problems, the percentage of defects detected during testing, and defect rates across releases.
Process metrics measure process capabilities, not the capabilities of individual test team members. The main objective of the test process these metrics evaluate is to find defects, especially important ones.
Absolute numbers and efficiency
Absolute numbers and efficiency can provide you with an overall picture of the testing process. Process metrics based on absolute numbers can include several measurements depending on your objectives and available data.
For instance, you can measure the following metrics:
- Percentage of fixed defects
How to measure:
Percentage of Fixed Defects = (Defects Fixed/Defects Reported) x 100
- Percentage of total critical defects
This metric characterizes the development team’s efficiency. A high percentage of critical defects should prompt a development manager to review development tactics.
- Test effectiveness using defect containment efficiency
How to measure:
Test Effectiveness Using Defect Containment Efficiency = (Bugs Found in Test / Total Bugs Found (pre- and post-shipping)) x 100
The higher the test effectiveness percentage, the better the test set and the less test case maintenance effort will be required in the long term.
The chart above shows that the percentage of fixed defects is 60 percent. Among total defects, every third bug was high priority or a blocker. The test effectiveness metric shows that only one out of ten defects was found by customers, so testers detected the majority of bugs. These metrics represent only general numbers, and it’s not obvious what we should do with them on their own. Seeing the dynamics of these rates across releases will show whether efforts toward test set improvement are paying off.
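The three process metrics above share the same shape: a ratio times 100. A minimal sketch, using the hypothetical numbers from the discussion (60 percent fixed, one in three critical, nine of ten defects caught in testing):

```python
def fixed_defects_pct(fixed: int, reported: int) -> float:
    """Percentage of reported defects that have been fixed."""
    return fixed / reported * 100

def critical_defects_pct(critical: int, total: int) -> float:
    """Percentage of defects that are high priority or blockers."""
    return critical / total * 100

def containment_efficiency(found_in_test: int, found_total: int) -> float:
    """Test effectiveness: share of all defects caught before shipping."""
    return found_in_test / found_total * 100

print(fixed_defects_pct(60, 100))       # → 60.0
print(critical_defects_pct(33, 100))    # → 33.0
print(containment_efficiency(90, 100))  # → 90.0
```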
Fixed defects rate over releases
This metric allows test managers to compare the results of testing from release to release and evaluate how effectively the development team has fixed defects. Additionally, it reveals general trends in bug fixing over releases that can influence further activity of the team. For more detailed analysis, test managers can also divide fixed defects into priority groups.
The higher the rate of fixed defects, the higher the quality of the product. However, you should understand that not all defects can be fixed. Some bugs can be fixed only after the product is ported to another platform; others require a long time or fundamental changes to the product. Unfortunately, customers are more interested in new features and don’t want to pay for refactoring. Still, any decrease in the bugfix rate is a negative trend, as unfixed bugs may pile up over time.
The charts above reflect a positive increase in fixing blocker, high, and normal priority bugs reported by the team and customers beginning from release 1.2, but they show that the team fixed fewer bugs after release 1.4. This trend shows that the team should fix more defects; otherwise, the number of open defects will grow over time. However, there’s a different trend for high priority defects and blockers: the effectiveness of fixing these defects was 90 percent from release 1.1 to release 1.2. After release 1.3, the team fixed fewer of them, but the rate remained above 80 percent.
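Spotting the kind of trend discussed above is a matter of comparing per-release rates. The sketch below uses hypothetical rates shaped like the narrative (improving up to release 1.4, then declining):

```python
# Hypothetical fixed-defect rates (percent) per release.
fixed_rate = {"1.1": 55.0, "1.2": 68.0, "1.3": 74.0, "1.4": 80.0, "1.5": 71.0}

releases = list(fixed_rate)
for prev, curr in zip(releases, releases[1:]):
    delta = fixed_rate[curr] - fixed_rate[prev]
    trend = "up" if delta > 0 else "down"
    print(f"{prev} -> {curr}: {delta:+.1f} ({trend})")
# 1.1 -> 1.2: +13.0 (up)
# 1.2 -> 1.3: +6.0 (up)
# 1.3 -> 1.4: +6.0 (up)
# 1.4 -> 1.5: -9.0 (down)
```

A single downward step isn’t alarming on its own; what matters is whether the decline persists over the next releases.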
People metrics measure things such as issues per reporter or test cases executed by each team member. Be careful with these metrics, as you also need to take into account employee motivation and professional skills before making any decisions.
Issues per reporter
This metric is used for monitoring how many issues are reported by each team member.
In the chart below, some team members reported few issues. However, this doesn’t mean that these members worked poorly. Most probably, they dealt with regression testing of bugs that were identified earlier.
Test cases executed by each team member
The chart showing test cases executed by each test team member can help in measuring the effectiveness of each team member.
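Both people metrics boil down to counting records per person, whether those records are reported issues or executed test cases. A minimal sketch with hypothetical reporter names pulled from a ticket export:

```python
from collections import Counter

# Hypothetical "reporter" field taken from each issue in the tracker.
reporters = ["anna", "boris", "anna", "dmitry", "anna", "boris"]

issues_per_reporter = Counter(reporters)
print(issues_per_reporter.most_common())
# → [('anna', 3), ('boris', 2), ('dmitry', 1)]
```

As the text cautions, these counts say nothing on their own about why one person’s number is low; interpretation requires context such as the type of testing each person was assigned.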
Testing metrics help to make test management effective, as they provide objective visibility of the quality of software products and show how the testing process can be improved.
However, using metrics isn’t an easy task, as not all metrics can provide you with valuable insights. You need to consider the disadvantages of metrics before creating a metrics program. The principles we’ve provided for using metrics can help you choose the most important metrics for your project and apply them effectively.
This post was prepared by the Apriorit QA team. Our specialists are ISTQB certified. ISTQB certification helps our teams build effective and efficient quality assurance practices for Apriorit projects as well as consult our clients in building test processes within their teams.