
This White Paper from our Testing Process Improvements series describes Impact Analysis as a tool for estimating the required scope of regression testing. One of our earlier articles was devoted to this method; here I will briefly recall what it is about and how we use it in our project work.

 

Written by:
Tatiana Kit,
Team Leader of Network Testing Team

Goal:

Define the necessary and sufficient scope and order of regression testing for each new build depending on the changes made by developers.

How it was:

Developers built the next product version and sent a testing request. In our project, several developers work independently on their own tasks; the build is the common result of their work. The testing request was sent by the developer responsible for the corresponding build. Usually, developers are aware only of their own changes, so the testing request contained no information about the project functionality that could be affected by those changes. At best, it contained a brief description of the changes introduced by the developer who initiated the build, plus some recommendations for deeper testing of the “bottlenecks” in the functionality he had modified.

The most complicated situations arose when a testing request was not accompanied by any additional information at all. The main problem for the testers was to correctly estimate the scope and order of testing.

Problem:

We made decisions about testing scope based on previous experience and our superficial knowledge of the project architecture. Of course, the testers’ knowledge of the architecture is inferior to the developers’. We did not know for sure which functionality we could omit from testing, which needed only a minimal test set, and so on. As a result, we had to run regression testing with the maximal test sets for all features as often as possible. This consumed a lot of time and resources, sometimes for no good reason.

Solution:

Impact analysis. The main idea is to systematically use the developers’ knowledge of the project architecture to detect the features affected by their changes.

Let’s provide a brief description of this method.

Testers and developers divide the project into separate features. As the two sides see the project differently, it is very important to produce a common feature list for the project as a whole. The items of this list become the table rows. The information represented in the columns depends on the project: it can be the testing environment, the data to be tested, subversions of the product built from individual version control branches, etc. Each cell holds information about the changes that affected the feature (a bug report number or a developer’s comment), possibly with additional comments from the developers about how deep the impact may be.
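As an illustration, such a table could be kept as a simple CSV file under version control. The feature names, branch names, and bug numbers below are hypothetical, and this is only one possible layout:

```python
import csv
import io

# Hypothetical Impact Analysis table: one row per feature, one column
# per product subversion (here: branches "trunk" and "release-2.1").
# Cells hold the bug report number or developer comment describing
# the change that affected the feature.
table_csv = """\
Feature,trunk,release-2.1
Login,#1042 (session handling reworked),
Reports,,#1107 (minor fix; shallow impact expected)
Export,,
"""

rows = list(csv.DictReader(io.StringIO(table_csv)))

# Features affected in at least one branch need regression testing.
affected = [r["Feature"] for r in rows if r["trunk"] or r["release-2.1"]]
print(affected)  # ['Login', 'Reports']
```

Keeping the table as plain text makes it easy to diff in the version control system and to parse from a build script.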

The table is stored in the version control system and can be linked to the automated build system.

The process of usage:

Developer:

  1. Works on his task.
  2. After finishing the task, fills in the Impact Analysis table for it.
  3. Any developer builds the product version and sends a testing request with the table attached. In the case of automated builds, the table and the testing request are generated automatically.

Tester:

  1. Studies the received information.
  2. Plans the testing activity and sets task priorities using data from the Impact Analysis table.
  3. Tests all product parts marked in the table.
  4. Writes a testing report.
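The tester’s planning step could be sketched as ordering the affected features by the priority the developers assigned. The feature names, changes, and priority values below are illustrative assumptions:

```python
# Hypothetical planning step: order the features from the Impact
# Analysis table by developer-assigned priority (1 = test first),
# so the changes the developers consider riskiest are tested first.
impact_table = [
    {"feature": "Network layer", "change": "#53123", "priority": 1},
    {"feature": "UI settings", "change": "comment: minor refactor", "priority": 3},
    {"feature": "Installer", "change": "#12042", "priority": 2},
]

test_plan = sorted(impact_table, key=lambda row: row["priority"])
for row in test_plan:
    print(row["feature"], "-", row["change"])
```

In practice the priorities come from the developers’ comments in the table, not from the testers’ guesswork, which is the whole point of the method.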

Benefits:

This approach decreases the probability of missing a serious bug, since developers take part in test planning and contribute their knowledge of the architecture. Developers also set clear testing priorities, so we can avoid the situation where the bug most critical for the developers is found last.

Time spent on test planning decreases: all the required data can now be found in one place, in an easy-to-use form.

Time spent on regression testing decreases: we avoid redundant testing without increasing the number of missed bugs.

See the details about deploying and using Impact Analysis at http://apriorit.com/our-company/qa-blog/252-impact-analysis

Regression testing planning + Testing result storage

Impact Analysis and the table for storing testing results (described in the previous white paper) can be used together.

In this case, we add two more columns to the testing results table: “Impact Date” and “Changes”. In the “Impact Date” column, we record the date when the developer made the changes that affected the feature, and in the “Changes” column we describe the essence of those changes (usually a bug report number or a comment).

 

| Feature  | Impact Date | Tested Build | Full | Smoke | Accept | Windows 7 x86 | Server 2008 x86 | Changes | Comment |
|----------|-------------|--------------|------|-------|--------|---------------|-----------------|---------|---------|
| Feature1 |             | 07.10.2011   | 178  | 30    |        | Client        | Server          |         |         |
| Feature2 | 12.11.2011  | 04.11.2011   | 8    | 8     |        | Client        | Server          | #12042  |         |
| Feature3 | 12.11.2011  | 11.10.2011   | 125  | 63    | 10     | Client        | Server          | #53123  | #53123  |
| Feature4 |             | 20.09.2011   | 22   | 15    |        | Client        | Server          |         |         |

When a testing request is received, the tester reviews the Impact Analysis data and modifies the testing result table accordingly (this can also be automated as an additional step of the automated builds). If the changes were introduced after the feature was last tested, the row is filled with blue, meaning “this feature needs regression testing”. The testing scope can be estimated from the Changes column, as it usually contains full bug report titles and comments.
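The rule for marking a row blue boils down to a date comparison; the function below is a minimal sketch of it, with illustrative field names:

```python
from datetime import date

def needs_regression(impact_date, tested_date):
    """A feature needs regression testing if it was changed
    after it was last tested (or was never tested at all)."""
    if impact_date is None:
        return False  # no changes recorded for this feature
    if tested_date is None:
        return True   # changed but never tested
    return impact_date > tested_date

# Feature3 from the table above: changed 12.11.2011, tested 11.10.2011.
print(needs_regression(date(2011, 11, 12), date(2011, 10, 11)))  # True
# Feature1: no recorded impact since the 07.10.2011 test run.
print(needs_regression(None, date(2011, 10, 7)))  # False
```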

After performing the regression testing, the tester removes the color filling from the corresponding table row, clears the corresponding cells in the Impact Date and Changes columns, and updates the date in the Tested Build column.
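If the table is scripted rather than maintained by hand, this clean-up step might look roughly like the following sketch (the row fields are assumptions mirroring the columns above):

```python
from datetime import date

def mark_tested(row, build_date):
    """After a feature passes regression testing, clear its impact
    data and record the build date it was tested against."""
    row["highlight"] = False    # remove the blue filling
    row["impact_date"] = None   # clear the Impact Date cell
    row["changes"] = ""         # clear the Changes cell
    row["tested_build"] = build_date
    return row

row = {"feature": "Feature3", "highlight": True,
       "impact_date": date(2011, 11, 12), "changes": "#53123",
       "tested_build": date(2011, 10, 11)}
mark_tested(row, date(2011, 11, 15))
print(row["highlight"], row["changes"] == "")  # False True
```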

In essence, this table is one more kind of test plan, as it shows what is to be tested and when, and what the scope of the work is. At the same time, such a test plan is flexible, fits on one page, and is easy to understand at first sight.