This article describes general approaches to software performance testing from scratch. It covers some theoretical aspects, testing stages, and practical implementation. The article will be useful for beginners in this type of testing, as well as for those who already apply performance testing to their product and want to systematize the process and learn more about its tools and techniques. Performance testing is commonly used in Apriorit practice, especially for distributed system testing.
1. What Is Software Performance Testing?
Performance testing is a type of testing intended to measure the speed, throughput, reliability, and scalability of a software system under a given load. It also includes load testing and stress testing.
It is usually conducted to:
- assess whether the project is ready for release;
- estimate performance parameters and resource requirements of the product;
- compare the performance of several systems or configurations;
- discover what causes performance loss;
- find possible ways to improve a system that is already in operation.
There are two types of performance testing:
- Desktop application testing. In this case, we measure resource consumption, application speed, etc. for a single user at one computer.
- Client-server application testing. In this case, we also measure overall system performance against the number of users, i.e. the number of simultaneously processed requests and the speed of their processing. Measurements can be performed on both the server and the client side, and we can also monitor the load on the network infrastructure, etc.
Below in this article, client-server applications are discussed.
2. Main Testing Stages
So, we’ve decided to conduct performance testing. Let’s start with the brief high-level plan:
2.1) Define test environment;
2.2) Define test exit criteria;
2.3) Plan and design tests;
2.4) Prepare performance testing environment;
2.5) Develop tests;
2.6) Execute tests;
2.7) Analyze results, restart tests if required.
And now let’s give more details for each step.
2.1) Define test environment
We must define what physical environment will be used to perform the testing and what environment the product customers will use. Here we should also define what tools and resources are available to us.
The decision concerning tools depends on many factors: tool features, the team's experience with the tool (if the team has never used it before, learning it will take some time), cost, project technologies, client and server machine OS, etc.
Frequently, distributed load generation is used to get more accurate test results. For example, we connect 10 client machines to the server, emulating the activity of 50 users on each of them.
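A single client machine's share of such distributed load can be sketched in Python with plain threads. This is only an illustration: the target URL, user count, and request count are hypothetical placeholders, and a real project would normally use a dedicated load tool instead of hand-written threads.

```python
import threading
import time
import urllib.request

# Hypothetical target; replace with your server's real URL.
TARGET_URL = "http://localhost:8080/"
USERS_PER_MACHINE = 50   # users emulated by this client machine
REQUESTS_PER_USER = 10

results = []
results_lock = threading.Lock()

def emulate_user(user_id):
    """Send a series of requests, recording the latency of each."""
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            ok = True
        except OSError:
            ok = False   # connection errors are recorded, not raised
        with results_lock:
            results.append((user_id, time.perf_counter() - start, ok))

threads = [threading.Thread(target=emulate_user, args=(i,))
           for i in range(USERS_PER_MACHINE)]
for t in threads:
    t.start()
for t in threads:
    t.join()

latencies = [lat for _, lat, ok in results if ok]
if latencies:
    print(f"avg latency: {sum(latencies) / len(latencies):.3f}s")
```

Running the same script on 10 machines would emulate the 500-user scenario described above; the load tool usually aggregates the per-machine results centrally.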
Physical environment includes software, hardware and network configuration.
The ideal case is when the performance testing environment and the deployment environment are identical. Unfortunately, this case is rather rare: usually the test environment is less powerful than the production one. One possible solution is to use transition coefficients. Let's define the transition coefficient as the numeric ratio between the performance of the real configuration and that of the test configuration.
For example, we know the configuration of the server where the solution will be deployed. We don’t have it physically, but we can get its characteristics from the vendor technical documentation, expert reviews, etc. Let’s suppose that we are interested in:
- CPU performance
- RAM capacity and performance
- Disk subsystem performance
We can also measure the performance of the test infrastructure using additional tools. Based on the obtained results, we can determine how much faster the real configuration is compared to the test one.
Obviously, the more the test configuration resembles the customer's one, the better. Sometimes it's rather hard to calculate the coefficient. For example, comparing DDR2 and DDR3 memory, we can't say that speed is proportional to frequency, as DDR2 and DDR3 have different latency standards. It's even harder to compare CPU performance: we would have to compare many parameters that influence performance, such as clock rate, cache capacity, system bus frequency, bus bandwidth, etc. To calculate the transition coefficient more accurately, it is recommended to make several independent estimations using independent data about hardware performance. Thereby we can estimate the system scaling coefficient with a 5–10% error.
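As a minimal illustration, per-component transition coefficients can be computed from benchmark scores of the two machines. The numbers below are invented for the example, and taking the geometric mean is just one simple way to combine several independent estimates into a single rough scaling factor.

```python
import math

# Hypothetical benchmark scores for the production and test machines,
# e.g. taken from vendor documentation or a benchmarking tool (higher = faster).
production = {"cpu": 9800, "ram": 21000, "disk": 450}
test_bench = {"cpu": 5600, "ram": 14000, "disk": 180}

# Per-component transition coefficients: how much faster production is.
coefficients = {name: production[name] / test_bench[name] for name in production}

# Combine the independent estimates; the geometric mean is one simple choice.
combined = math.prod(coefficients.values()) ** (1 / len(coefficients))

for name, k in coefficients.items():
    print(f"{name}: x{k:.2f}")
print(f"combined transition coefficient: x{combined:.2f}")
```

As the article notes, such a single number is only an approximation: a disk-bound workload will scale closer to the disk coefficient than to the combined one.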
Also, it’s better to choose software components for the test environment similar to the ones of the customer environment: OS version, service packs, database versions, etc.
2.2) Define exit criteria
Here we define which system parameters we will measure and which results will indicate that the testing is finished.
The most important performance testing parameters are:
- System response time.
- Throughput – the number of users that can work with the system simultaneously.
- CPU usage.
- RAM usage.
- RAM usage intensity.
- Disk subsystem usage intensity.
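A few of these counters can be sampled even from a short script; the sketch below is Unix-only and intentionally minimal (real load tests usually rely on a monitoring tool such as Windows Performance Monitor or the load tool's built-in counters).

```python
import os
import shutil
import time

def sample_system_state():
    """Take a rough snapshot of a few system counters (Unix-only sketch)."""
    load1, load5, load15 = os.getloadavg()   # CPU load averages as a proxy for CPU usage
    disk = shutil.disk_usage("/")            # disk subsystem capacity and usage
    return {
        "cpu_load_1min": load1,
        "disk_used_fraction": disk.used / disk.total,
        "timestamp": time.time(),
    }

sample = sample_system_state()
print(sample)
```

In a real test, such samples would be taken periodically on both the server and client machines and correlated with the load level afterwards.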
2.3) Plan and design tests
We must define the basic load profile. It includes user groups, main test scenarios, and load variations depending on the time of day and day of the week.
If the project is already deployed, we can use server logs and data provided by users. This will help to define the basic scenarios, bottlenecks, and critical elements of the project.
Let's suppose that we have a system with a web interface that provides users with access to remote applications on a server. There are 100 users connecting to the applications and 5 system administrators. The basic operations for admins are: add/delete a user and allow/block a user's access to an application. In one hour, 5 users are added/deleted and permissions for 20 applications are changed. Let's distribute the tasks among the administrators: one of them works with user accounts and the other 4 change access permissions. Then the intensity of operations for the first administrator is 5 add/delete operations per hour, and for each of the remaining 4 it is 5 change operations per hour.
Then we make calculations for the rest of the user groups by analogy. The obtained result is called the basic load profile:

| User group | Operation | Intensity of operations for one user per hour |
|---|---|---|
| Administrator (accounts) | Add/delete user | 5 |
| Administrators (permissions) | Allow/block access to an application for a user | 5 |
| User | Access application A | — |
| User | Access application B | — |
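The per-administrator intensities in the example above follow from simple arithmetic: total operations per hour divided by the number of administrators handling them. A sketch using the numbers from this section:

```python
# Operations observed for the whole admin group per hour (from the example).
total_user_ops = 5         # users added/deleted per hour
total_permission_ops = 20  # application permissions changed per hour

# Task distribution: 1 admin handles accounts, 4 handle permissions.
account_admins = 1
permission_admins = 4

profile = {
    "add/delete user": total_user_ops / account_admins,
    "allow/block access": total_permission_ops / permission_admins,
}

for op, per_hour in profile.items():
    print(f"{op}: {per_hour:.0f} operations per admin per hour")
```

The same calculation repeated for each user group and operation yields the full basic load profile.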
Here we also define what data will be used for user scenarios, design the tests, and describe the performance parameters to be measured.
Now let's proceed to load modeling. Based on the project's performance testing requirements and goals, we can create several models with different request intensities, user scenario execution times, and user connection schemes (one by one, by groups, or all simultaneously).
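The three connection schemes can be modeled as start-time schedules for the emulated users. A small sketch (the function and parameter names here are my own, not from any particular tool):

```python
def connection_schedule(num_users, scheme, interval=1.0, group_size=10):
    """Return start offsets (seconds) for each emulated user.

    scheme: 'one_by_one'   - users connect at a fixed interval,
            'by_groups'    - groups of users connect together,
            'simultaneous' - everyone connects at t=0.
    """
    if scheme == "one_by_one":
        return [i * interval for i in range(num_users)]
    if scheme == "by_groups":
        return [(i // group_size) * interval for i in range(num_users)]
    if scheme == "simultaneous":
        return [0.0] * num_users
    raise ValueError(f"unknown scheme: {scheme}")

print(connection_schedule(5, "one_by_one"))              # [0.0, 1.0, 2.0, 3.0, 4.0]
print(connection_schedule(5, "by_groups", group_size=2)) # [0.0, 0.0, 1.0, 1.0, 2.0]
```

The simultaneous scheme stresses connection handling the hardest, while one-by-one ramp-up helps locate the load level at which degradation begins.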
2.4) Prepare test environment
Planning is finished, let’s start test environment configuration:
- Prepare test bench hardware;
- If we plan to use virtualization tools for testing – prepare virtual machine pool;
- Install required software configurations (OS, service packs, databases, etc.);
- Build test network structure;
- Install and configure tools for test development and execution, as well as tools to gather test results.
If required, we create backups of the "clean" environment.
2.5) Develop tests
Now we have a test set to implement, the tools, and a prepared environment, so let's start developing autotests in accordance with the test design. At this stage, we record user actions, develop test code, and group tests into sets to be executed.
2.6) Execute tests
We start the tests, monitor their execution, and gather the results for further analysis.
2.7) Analyze results, restart test if required
The test results are relations between server load and performance. Let's consider a theoretical example of the relation between the number of users working in the system and its response time (delay).
As we can see, at the beginning the dependence is almost linear, but at some point it becomes exponential, and then the server stops responding to user requests.
The tests were run on a certain configuration. The point at which the delay fails to meet the project requirements can be used to define the minimal system requirements for this configuration.
Using this dependence and the transition coefficients (see section 2.1, Define test environment), we can forecast the behavior of the deployed system or propose hardware improvements to support more users.
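A rough sketch of this analysis: given measured (users, delay) points and the delay requirement, find the largest user count that still meets it, then extrapolate with a transition coefficient. The measurements and coefficient below are invented, and multiplying capacity by the coefficient is only a first-order approximation, as section 2.1 cautions.

```python
# Hypothetical measured points: (concurrent users, avg response delay in s).
measurements = [(50, 0.4), (100, 0.6), (150, 0.9), (200, 1.6), (250, 3.5)]
MAX_ACCEPTABLE_DELAY = 1.0   # from project requirements

def max_supported_users(points, limit):
    """Largest measured user count whose delay still meets the requirement."""
    supported = [users for users, delay in points if delay <= limit]
    return max(supported) if supported else 0

capacity = max_supported_users(measurements, MAX_ACCEPTABLE_DELAY)
print(f"test configuration supports up to {capacity} users")

# Very rough extrapolation to production using a transition coefficient.
TRANSITION_COEFFICIENT = 1.9   # production assumed ~1.9x faster (invented)
print(f"estimated production capacity: ~{int(capacity * TRANSITION_COEFFICIENT)} users")
```

In practice, the true capacity lies between measured points, so rerunning the test with a finer user-count step around the detected limit gives a more precise answer.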
Real-life dependencies for the server-side performance parameters of a web application are more complicated (example chart from VS 2010, where User Load is the number of users and Avg. Page Time is the average page response delay).
Using these data, we can also detect the parameters that influence system performance the most, and so avoid bottlenecks. Overall system performance is determined by the performance of its slowest element. For example, if the slowest part is the disk subsystem, then no matter what CPU clock rate or RAM capacity we use, overall performance will remain the same: data simply will not be read fast enough to be processed.
3. Load Testing
Here we can name two approaches: stability testing and volume testing.
Stability testing is performed to detect memory leaks and other issues that decrease performance and can be noticed only after some period of continuous system usage. It is usually performed using the basic load profile and lasts a long time, e.g. several days.
Volume testing is performed to determine future system scalability. It is conducted starting with the normal database size, which is then gradually increased. Usually a one-to-two-year forecast of the system data volume is used as the upper limit.
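The gradually increasing data volumes for such a test can be planned up front; the sketch below uses an invented current database size and growth rate purely for illustration.

```python
# Sketch: projected database volumes for volume testing. The current size
# and monthly growth rate are hypothetical and would come from real usage data.
current_gb = 50.0
monthly_growth = 0.06    # assumed 6% data growth per month
horizon_months = 24      # one-to-two-year forecast as the upper limit

steps = list(range(0, horizon_months + 1, 6))
volumes = [current_gb * (1 + monthly_growth) ** m for m in steps]

for m, v in zip(steps, volumes):
    print(f"month {m}: ~{v:.0f} GB")
```

Each projected volume then becomes a test run: populate the database to that size and execute the basic load profile against it.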
4. Stress Testing
Stress testing is performed to determine system availability under maximal load and in non-standard stress situations, as well as the system's ability to recover.
For example, there are tests for the basic load profile. We run them for our client-server application, increasing the activity intensity or the number of simultaneously connected users. At some point, the delay becomes so long that it no longer meets the requirements. We can use this point to determine the minimal system requirements for a certain number of users. Then we continue to increase the server load up to the maximal possible values. At this point, the server can stop responding, errors can appear in the server part of the application, etc. After that, we decrease the load back to normal. Then we check whether system functioning has fully recovered and whether system performance is affected. If the system has not recovered to its normal state, we look for the reasons for the performance problems or the system crash.
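The stepwise load increase can be sketched as a loop that raises the number of emulated users until the delay requirement or error rate is violated. Here `run_load` is a simulated stand-in for driving a real load tool, so the sketch is self-contained; its growth formula is invented.

```python
def run_load(users):
    """Placeholder for one load-test iteration; returns (avg_delay, error_rate).

    In a real setup this would drive the load tool against the server;
    here the behavior is simulated with an invented super-linear delay curve.
    """
    avg_delay = 0.2 * 1.5 ** (users / 100)     # delay grows super-linearly
    error_rate = 0.0 if users < 400 else 0.3   # server starts failing at 400
    return avg_delay, error_rate

MAX_DELAY = 2.0   # delay requirement from the project
step, users = 50, 50
while True:
    delay, errors = run_load(users)
    print(f"{users} users: delay={delay:.2f}s errors={errors:.0%}")
    if delay > MAX_DELAY or errors > 0:
        print(f"breaking point around {users} users")
        break
    users += step
```

After finding the breaking point, the load is dropped back to the basic profile and the recovery check described above is performed.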
5. Performance Testing and Web Testing Tools Review
Tool choice depends on many factors, e.g.:
- Tool functionality: the ability to generate the required load, perform testing with several physical machines where user actions are emulated, gather the necessary information about test results and the state of the tested system, ease of use, etc.
- Experience with the tool: if the tool has not been used before, it will take some time to train specialists.
- The ability to work under the required configurations: various OSes, server and client hardware configurations, compatibility with other applications, etc.
Below, there is a list of frequently used load testing tools:
Visual Studio 2010 web and performance testing tools
Apache Jakarta Project
Rational Performance Tester
HP Performance Center (Load runner)
Open System Testing Architecture
And now, let’s consider some tools in more detail.
Windows performance monitor
It is the simplest tool for measuring performance parameters, integrated into Windows.
Visual Studio 2010 web and performance testing tools
This tool is included in the VS 2010 pack. It allows using any .NET language to develop tests and is mainly intended for testing on the .NET platform. It is possible to perform distributed load testing with centralized management and real-time report generation. This tool can also emulate the behavior of most browsers. If you want to generate a load of more than 250 users or a distributed load (from several computers), you will need an additional license.
Apache JMeter
This tool is intended for performance testing of various servers and technologies. It is a small, easy-to-use application, and in addition it's free. It has fewer features than the VS 2010 web and performance testing tools.
One problem was spotted while working with JMeter: it can hang when 250 or more users are generated. This number may depend greatly on the environment, though.
See details at http://jmeter.apache.org/
HP LoadRunner
It is a utility for automated load testing included in HP Performance Center. It is a really powerful but commercial tool.
Silk Performer
A load and performance testing tool set from Micro Focus/Borland. It supports the most technologies of all the above-mentioned tools. With it, you can also use a great number of real-time counters from third-party applications, e.g. databases (Microsoft SQL Server, Oracle, IBM DB2, etc.).
See details at http://www.borland.com/us/products/silk/silkperformer/
This article has considered performance, load, and stress testing. I've described the most common stages and approaches to conducting software performance testing and forecasting system performance using transition coefficients. The particular implementation depends on project needs, technologies used, and testing goals. I have also listed the most widespread tools, both free and commercial, as well as some criteria for choosing the best one for you.
Used articles and books:
".NET Performance Testing and Optimization". Paul Glavich with Chris Farrell
"Performance Testing Guidance for Web Applications". Microsoft. J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, Dennis Rea
Read about performance testing on the code level in this Intel Vtune tutorial.