
Some time ago, at a conference, I heard the following phrase: “At any moment, the manager of a testing group must know who is busy, what their tasks are, what the project status is, and what is planned next.” Since then I have kept finding proof that wasted time is a direct consequence of a testing process that fails to meet these requirements. Things are further complicated by the fact that we are all creatures of habit: if a project or company has a historically established process, everybody keeps working by it until somebody decides to start a revolution. In this series of testing white papers I describe some problems we faced in different projects, summarize and analyze them, and then describe the innovations and improvements we introduced to resolve them on this endless journey toward the perfect process.


Written by:
Tatiana Kit,
Team Leader of Network Testing Team


Goal:

At any moment, every project member must be able to see which parts of the solution have been tested, what type of testing was performed and what results were obtained, as well as which parts still have to be tested and what type of testing they need.

How it was:

We had test sets of different sizes for testing particular features. We emailed the current testing results to the project team (tables recording which tests passed and which failed). At some point we started to store these tables in the version control system. The situation improved a bit, as at least all the results were kept in one place.

Problem:

It was hard to answer the manager’s question “How are things going in the project?” and to plan the next testing stages. Let me explain this a bit. When a manager asks “How are things going in the project?”, it means “What has already been tested and what is still to be tested? What testing results have been obtained?”

Then, to plan the next testing stage, the manager also needs to know what has already been tested and with what results, as well as what has not been tested yet, when it was last tested, what type of testing was performed, and what results were obtained.

Our project is rather complicated in this sense: it is large and, at the moment, split into 35 independently tested features.

Solution:

I created a table with feature names in the rows and test set names (full, smoke, acceptance) in the columns, plus columns for the OSes the testing was performed on. The cells in the latter columns contain the names of the components that were tested on the specified OS.

Color coding shows the general testing result (sketched as code right after this list):

- red – there are critical problems;
- yellow – there are normal-priority bugs;
- green – all tests have passed; there may be low-priority bugs.
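This rule is mechanical enough to write down as a tiny function. Here is a minimal sketch in Python; the function name and the input shape are illustrative assumptions, not part of our actual tooling:

```python
# Minimal sketch of the red/yellow/green rule above. The priority
# names ("critical", "normal", "low") come from the article; the
# function and the input shape are illustrative assumptions.
def feature_status(open_bug_priorities):
    """Map a feature's open bug priorities to a traffic-light color."""
    if "critical" in open_bug_priorities:
        return "red"      # critical problems remain
    if "normal" in open_bug_priorities:
        return "yellow"   # normal-priority bugs remain
    return "green"        # all tests passed; low-priority bugs are tolerated

print(feature_status(["normal", "low"]))  # yellow
print(feature_status(["low"]))            # green
print(feature_status([]))                 # green
```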

Here is an example of such a table:

| Feature  | Tested build | Full | Smoke | Acceptance | Windows 7 x86 | Windows 7 x64 | Server 2008 x86 | Comment |
|----------|--------------|------|-------|------------|---------------|---------------|-----------------|---------|
| Feature1 | 07.10.2011   | 178  | 30    |            | Client        |               | Server          |         |
| Feature2 | 04.11.2011   | 8    | 8     |            | Client        |               | Server          |         |
| Feature3 | 11.10.2011   | 125  | 63    | 10         | Client        |               | Server          | #53123  |
| Feature4 | 20.09.2011   | 22   | 15    |            | Client        |               | Server          |         |
What the columns mean:

Tested build – the identifier of the build in which the feature was last tested. It can be a date or a version number. (Note: you can also configure the table in Excel so that the cells of “testing-expired” features are automatically filled with a specific color.)
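That Excel note can also be scripted, for example with Python and openpyxl. The sketch below is an illustration under assumptions: the file name, the dates sitting in column B as dd.mm.yyyy strings, and the 60-day staleness threshold are all mine, not from our setup:

```python
# Hypothetical sketch: flag "testing-expired" features in the results
# table. Assumes dates live in column B as dd.mm.yyyy strings and a
# 60-day staleness threshold (both are illustrative assumptions).
from datetime import datetime, timedelta

import openpyxl
from openpyxl.styles import PatternFill

wb = openpyxl.load_workbook("test_results.xlsx")
ws = wb.active
stale_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE",
                         fill_type="solid")
cutoff = datetime.now() - timedelta(days=60)

for row in ws.iter_rows(min_row=2):   # skip the header row
    cell = row[1]                     # column B: "Tested build" date
    if cell.value is None:
        continue
    tested = datetime.strptime(str(cell.value), "%d.%m.%Y")
    if tested < cutoff:
        cell.fill = stale_fill        # mark the feature as overdue

wb.save("test_results.xlsx")
```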

Full/Smoke/Acceptance stands for the testing type, namely its scope. We have three test sets for each feature: Full is the most detailed, Smoke checks the basic functions, and Acceptance is the most general – and least detailed – check of the feature’s functioning. The cells show the number of tests in the corresponding test set, which helps to estimate time when planning the testing process. Color highlights the test set that was used for the latest tested version; this also helps with planning, as it shows what the testing scope was last time.
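To show how the counts feed into planning, here is a back-of-the-envelope sketch using Feature3’s numbers from the table above; the ten-minutes-per-test figure is an assumption for illustration only:

```python
# Rough planning sketch: turn the test counts from the table into a
# time estimate. 10 minutes per test is an illustrative assumption;
# the counts are Feature3's row from the example table.
MINUTES_PER_TEST = 10

feature3_sets = {"Full": 125, "Smoke": 63, "Acceptance": 10}

for name, count in feature3_sets.items():
    hours = count * MINUTES_PER_TEST / 60
    print(f"{name}: {count} tests, about {hours:.1f} hours")
```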

Windows7/Vista/Server2008/etc. is the name of the OS used for testing last time. Some components of our product, in particular, depend heavily on the OS. If a product depends not on the OS but on other environment elements – accompanying software, specific data, etc. – those should go in these columns instead. If there are no such dependencies, you can remove these columns altogether. The cells in these columns name the tested product component; client-server architecture adds configuration diversity.
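That configuration diversity is easy to picture as a cross-product of components and OSes. A small sketch, reusing the component and OS names from the example table; enumerating every pairing is purely an illustration, not a claim about our actual test matrix:

```python
# Sketch of the configuration space behind the OS columns: each
# component can, in principle, be tested on each OS. Names follow the
# example table; covering all pairs here is illustrative.
from itertools import product

components = ["Client", "Server"]
oses = ["Windows 7 x86", "Windows 7 x64", "Server 2008 x86"]

for component, os_name in product(components, oses):
    print(f"{component} on {os_name}")
```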

Comment – this column may contain any information not captured elsewhere that you want to share with the whole team, or simply not forget in a few months. In particular, you can record the numbers of the bug reports that caused the row to be highlighted red.

Each time testers finish testing some functionality, they write the results into the table. The table is stored in the version control system, so the change history can be reviewed if required.

This table structure doesn’t fit every project. For example, we have another product with a plug-in architecture: each plug-in works with its own data type, while all plug-ins share the same functionality – search, sorting, etc. This effectively turns the table into a three-dimensional one. The testers of that project still store their results in a similar form, though they had to rework the described table significantly.
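For what it’s worth, that “three-dimensional” variant can be pictured as results keyed by plug-in as well as by functionality and test set. A minimal sketch with entirely hypothetical plug-in names (the real table was reworked differently):

```python
# Hypothetical sketch of the three-dimensional variant: one result per
# (plug-in, shared functionality, test set). All names are invented
# for illustration.
results = {
    ("CsvPlugin", "search",  "smoke"): "green",
    ("CsvPlugin", "sorting", "full"):  "yellow",
    ("XmlPlugin", "search",  "smoke"): "red",
}

# One useful slice: everything recorded for a single plug-in.
for (plugin, functionality, test_set), status in results.items():
    if plugin == "CsvPlugin":
        print(f"{functionality}/{test_set}: {status}")
```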

For web projects, columns with browsers are used instead of the OS columns, and the build date is replaced with the date of the check-in to the test server.

This table is not universal, but what I want to emphasize here is the idea of storing the data in such a form.

Benefits:

At any moment, I can say which features contain critical bugs, which are OK, and which still need to be tested.

Thanks to the column with dates, I can see which features have not been tested for a long time, and so I can plan testing so that each feature is checked at least once in a given period, e.g. every two months. This helps to detect problems in time and avoid the situation where a feature that has not been tested for a long time produces a lot of bugs right before the release. The test counts help to estimate the time needed for testing.

Of course, this format for storing testing results is neither common nor universal, but it has helped me save a lot of time and made our work more convenient.