Reposted 2012-04-28 by Zhang Lin (张林)

Original article (English):
http://www.stickyminds.com/BetterSoftware/magazine.asp?fn=cifea&id=118
Chinese translation:
http://www.51testing.com/?uid-61753-action-viewspace-itemid-220565
http://www.51testing.com/?uid-61753-action-viewspace-itemid-220600

Understanding Software Performance Testing
by Dale Perry

This is the first part of a four-part series. Parts two through four will be published on StickyMinds.com.

Most software-related performance testing projects fail. When I say fail, I don’t mean the tests abnormally end. What I'm talking about is failure to provide any useful information to those who requested the test. 

Failure occurs for many reasons. In this series of articles, I describe the basic problems and how to avoid them so that if you are tasked with doing a software performance test, you will comprehend the set of problems and understand what you can and cannot accomplish. This series will not address all issues, nor is it intended for expert software performance testers, most of whom have probably learned these ideas through trial and error. 

Many of the ideas in this series are based on a course—developed with Ross Collard of Collard & Associates—that I teach for SQE Training. I cannot take full credit for all the concepts and ideas presented here. 

These are the general areas that we will investigate through this series of articles: 

Part 1
  • The role of the tester in the process
  • Identifying performance and business goals
  • How performance testing fits into the development process
Part 2
  • The system architecture and infrastructure
  • Selecting the types of tests to run
Part 3
  • Understanding load (operational profiles)
  • Quantifying and defining measurements
Part 4
  • Understanding tools
  • Executing tests and reporting results
These problem areas can be addressed in many different sequences. I will approach them in the order I have found to be most useful. This approach has been applied to all types of platforms: mainframes, client-server, embedded, Web, etc. Part one addresses the role of the tester, understanding performance goals, and how performance testing fits into the development process. 

The Role of the Tester 
In software performance testing, the role of the performance tester is much like that of a consultant—you guide, direct, and assist the technical staff in identifying and correcting problems. Performance testers test; they do not tune, debug, or fix identified problems. A good performance test is a team effort. All key players and stakeholders—anyone who might have to adjust, repair, debug, or tune a system or application component or make decisions relating to performance goals and objectives—need to be part of the performance test-planning process. 

A good performance test team comprises all of the key stakeholders:
  • Test/QA group
  • Users/customers
  • Managers
  • Marketing
  • Development staff
  • Network administrator
  • System administrator
  • Security administrator
Failure to involve these key personnel early in the process almost guarantees problems later, when results and conclusions are presented. Involving them in the software performance test-planning process from the start helps mitigate those problems, especially if the results are not optimal. 

Identifying Performance and Business Goals 
Next, we need to address the identification of the goals and objectives of the performance test. One of the earliest mistakes people make in software performance testing is thinking that all they need is a load-generating tool. (We will discuss tools in part four of this series.) While tools are essential, a tool provides an answer to a question, and you need to know what that question is. Not knowing the performance questions you need answered makes it very difficult to determine if the performance test is a success or failure. 

One of the first questions I typically get is "How do I interpret the results from my tool?" My response is "What were you expecting the tool to do?" Most people cannot answer this question. This is the first indicator that the performance test may be of little value. 

In many cases, the testers have general direction from management. However, management tends to focus on issues such as return on investment and customer satisfaction, and these are not performance goals. Without knowledge of the project’s performance goals and objectives, the tools will give you a lot of information that you will not be able to interpret or use to meet management's expectations. 

Gathering and defining performance goals and objectives can be one of the least enjoyable aspects of planning a performance test—it tends to get political at times. However, the time to tell someone you cannot do something is before—not after—you spend his time and money. If you cannot achieve the performance goals and objectives that have been requested, there is no sense in agreeing to run a test. 

The first step in understanding performance goals is to separate them from business goals. Figure 1 shows an example. 

Figure 1: Separate performance goals from business goals. 

To be a software performance goal, there has to be some element of the system or application that we can measure during the test. Poor performance typically occurs when some resource is under stress or reaching its maximum capacity (a bottleneck). If you cannot design a test for something, it is probably not a good idea to agree to accomplish the task. I find the following series of questions useful in determining the difference between performance goals and business goals:
  • What test would I run?
  • What data would I collect?
    • In order to measure "performance," you have to have information about what is going on in the system or application.
  • How would I "prove" the goal was achieved?
    • What measures will I use to prove that a specific goal or objective was met?
"Performance" refers to how something behaves under a defined set of circumstances. When you look at issues such as user productivity, there is no system element that you can measure to prove a customer will be more efficient. The system or application can be operating perfectly, and users will work at whatever pace they desire. You cannot force people to work more efficiently by "tuning" them. Promising to achieve these types of goals or objectives almost guarantees failure. 

How Performance Testing Fits into the Development Process 
Many people look at software performance testing as the last thing you do before you ship a product out the door to the client. This is both incorrect and dangerous. Figure 2 shows the traditional approach to performance testing. It is based on the idea that a complete system or application must exist in order to performance test. 

Figure 2: Traditional approach to performance testing. 

The main problem with this approach is that issues discovered late in the process may require changes to the application or system that will require retesting the affected functions—followed by regression testing—before the performance test can be rerun. It is important to note that changing more than one feature at a time can be very difficult to coordinate, as multiple technical groups will be involved. It is also possible that making multiple changes at once could obscure the solution to the problem; tuning the database and the application code at the same time could cause the two changes to cancel each other out. Therefore, many groups use a one-factor-at-a-time (OFAT) approach. 

Unfortunately, OFAT does not work well for the following reasons:
  • OFAT assumes factors are mutually independent.
  • There are many complex, poorly understood, and hidden interdependencies.
  • OFAT takes too long.
  • Other methods (typically multi-factor-at-once adjustments) are harder to apply well.
For example, once the database is "fixed" and the test rerun, we may discover new problems in the network or application that have to be fixed, which in turn creates more problems in the database. It is possible to end up in a continuous loop, going from one factor to the next and back again. 
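
As a rough illustration of that loop, here is a small, self-contained Python sketch. The three factors, the random "hidden interdependency," and the iteration cap are all invented for the example; the point is only that tuning one factor and rerunning the test can keep knocking another factor back out of bounds.

    import random

    random.seed(1)

    # True means the factor currently meets its performance target.
    factors = {"database": False, "network": False, "application": False}

    def tune_and_rerun(factor: str) -> None:
        """Tune one factor and rerun the full test; a hidden interdependency
        may push some other factor back out of bounds."""
        factors[factor] = True
        other = random.choice([f for f in factors if f != factor])
        if random.random() < 0.5:
            factors[other] = False

    iteration = 0
    while not all(factors.values()) and iteration < 10:
        failing = next(f for f, ok in factors.items() if not ok)
        tune_and_rerun(failing)
        iteration += 1
        print(f"iteration {iteration}: tuned {failing}, state = {factors}")

    # With real interdependencies this cycle may never settle, which is why
    # strict OFAT takes too long and can loop indefinitely.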

Yes, the performance test does require a stable environment and software. However, this does not mean all features have to be completed. Using a combination of preventive thinking (static testing) and incremental or iterative development, it is possible to start performance testing as soon as there are one or two functionally stable features available. 

Incremental and iterative styles of application development offer a great opportunity to implement performance testing early in a project. Identifying performance problems early in the development process leaves more opportunities to fix the design and architecture before the cost of those corrections becomes too high. Figure 3 shows an example. 

Figure 3: Incremental and iterative development allow early performance testing. 

Using simple review techniques focused on performance rather than on engineering the functionality, we can schedule features with potential performance impacts into the earlier builds and releases. This may affect project schedules, as features that affect performance may also be more complex to develop. 

Once the first build is functionally tested, the performance test group can start testing while the functional team continues to develop the next increment. The advantage to this approach is twofold: The infrastructure gets an early test and, as there is a limit to completed features, changes to the system architecture or design are far less costly. The drawback is the more complex coordination required between the functional test team and the performance test team: during a performance test, no activity can be taking place on the infrastructure or architecture that is not part of the controlled test. 

Below are lists of potential network, application, and database problems that can be detected during the early stages of the development process using static methods: 

Network issues:
  • IP connections and the use of HTTP
  • Excessive use of security features
  • Type and characteristic of load balancers
  • Possible over-interaction between application servers and database servers
  • Sizing of packets (size in application versus size on network)
  • Connection pools and connection sharing (see the sketch after this list)
  • Location of servers on the network
  • Increased latency and jitter
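For the connection-pool item above, here is a minimal Python sketch, using only the standard library and sqlite3 as a stand-in for a real database server, of the difference a static review looks for: a new connection per request versus a small shared pool. The pool size and file name are illustrative.

    import sqlite3
    from queue import Queue

    POOL_SIZE = 5  # illustrative; in practice sized from expected concurrency

    # Anti-pattern: opening a new connection for every request multiplies
    # connection setup cost and can exhaust server-side connection limits.
    def handle_request_no_pool(query: str):
        conn = sqlite3.connect("app.db")
        try:
            return conn.execute(query).fetchall()
        finally:
            conn.close()

    # Pooled version: connections are created once and shared across requests.
    pool: Queue = Queue(maxsize=POOL_SIZE)
    for _ in range(POOL_SIZE):
        pool.put(sqlite3.connect("app.db", check_same_thread=False))

    def handle_request_pooled(query: str):
        conn = pool.get()       # borrow a connection; blocks if the pool is exhausted
        try:
            return conn.execute(query).fetchall()
        finally:
            pool.put(conn)      # always return the connection, even on error
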
Application issues:
  • Excessive memory allocation and de-allocation
  • Improper task initialization and housekeeping
  • Improper garbage collection, especially after a fault or failure
  • Loss of sessions or sessions kept alive too long (timers)
  • Persistent applications like BlackBerry devices
  • Application configuration parameter conflicts
  • High resource consumption features (CAD/CAM, etc.)
Database issues:
  • Index design (see the sketch after this list)
  • Use of dynamic indexes
  • Potential deadlocks from locking contention
  • Inefficient use of cache
  • Overuse of cache
  • Use of resource-intensive features such as referential integrity
  • Use of stored procedures or trigger events
  • Table fragmentation (overuse of third normal form)
  • Improper timing of table and index reorganization
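For the index-design item in the database list, here is a small, self-contained Python/sqlite3 sketch of the kind of problem a static review tries to catch early: the same query runs as a full table scan until a supporting index exists. The table, column names, and row counts are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.executemany(
        "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
        [(i % 1000, float(i)) for i in range(10000)],
    )

    query = "SELECT * FROM orders WHERE customer_id = 42"

    # Without a supporting index, the planner has to scan the whole table.
    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print("before index:", row)

    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # With the index in place, the same query becomes an index search.
    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print("after index: ", row)
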
If you can identify performance problems before they get engineered into the system, they are far less expensive to correct. Granted, this may slow down the project as the requirements or designs are reworked and corrected. However, correcting design flaws later will have an even greater impact on the project, as a major redesign effort at the end of a project puts the schedule at risk. 

Regardless of the method chosen, the planning and analysis activities for performance testing can and must begin at the requirements stage, very early in the project. Waiting too long can spell disaster for the performance test team. The testers are the eyes and ears of the performance specialists (typically system administrators, DBAs, network engineers, and developers) who diagnose and fix problems. 

Due to the complexity and resources needed to set up massive performance tests, many companies may decide to outsource some or all of their testing. Typically, it is easier to outsource tests that do not require intimate knowledge of the business (e.g., performance testing) than tests that require the tester to understand a company's business rules (e.g., functional testing). 

Companies that specialize in performance testing bring many critical skills to the table. However, do not look to outsource until you have planned properly. Outsourcers will need to do the same tasks as you—and they will get the information from you. You will be paying someone to ask you questions you could have asked yourself. If you are going to bring in expensive consultants to help troubleshoot a performance problem, make sure your initial set of performance test scripts is ready to go before they arrive. This can save a lot of expensive "hanging around" time. 

If the performance testing is going to be conducted remotely, and the application or system is still in development or test and sitting behind the organization’s firewall, conduct an early feasibility test to ensure that the remote testers are able to pass through the firewall and access the system under test. 
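
The kind of early feasibility check described above can be as simple as confirming, from the remote tester's machine, that the system under test is reachable through the firewall before any scripting effort is spent. Here is a minimal Python sketch; the host name and port are placeholders for the system under test's externally visible address.

    import socket
    import sys

    # Placeholders: replace with the system under test's externally visible address.
    SUT_HOST = "sut.example.com"
    SUT_PORT = 443
    TIMEOUT_SECONDS = 5

    def can_reach(host: str, port: int, timeout: float) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print(f"Cannot reach {host}:{port}: {exc}", file=sys.stderr)
            return False

    if __name__ == "__main__":
        if can_reach(SUT_HOST, SUT_PORT, TIMEOUT_SECONDS):
            print(f"{SUT_HOST}:{SUT_PORT} is reachable; remote testing looks feasible.")
        else:
            sys.exit(1)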

Most outsourcing providers can be categorized into one or more of the following categories:
  • On-site consultants (supplemental staffing)—Consultants work at the client's site.
  • Remote consultants—Consultants work at their own site, while the system or application is hosted at the client’s site.
  • Test lab—Application or system is shipped to a remote testing location where it is installed and tested by the consultants.
  • Application service provider or management service provider—In-house staff utilize the hardware and software hosted at a third-party location to test the application or system hosted at their own site.
Understanding the various roles in a performance test and integrating performance testing into the development process early are essential to a successful test. If you do not get the key players involved and start the planning process early, the probability of success is not good. Once these key issues are under control, we can begin to analyze the problem and design the necessary set of performance tests. In the next part of this series, I will address the problems related to architecture and infrastructure and deciding on the types of performance tests needed. 

Read "Understanding Software Performance Testing" Part 1 of 4
Read "Understanding Software Performance Testing" Part 2 of 4
Read "Understanding Software Performance Testing" Part 3 of 4
Read "Understanding Software Performance Testing" Part 4 of 4.