Stress Testing involves using a load testing tool to attempt to break a system. This typically means taking the system out of its comfort zone. In some cases the amount of hardware needed to do so might be cost prohibitive. In other cases, it would take too long to provide the hardware resources needed to fully stress the system. In this situation, cloud computing really shines.
Cloud computing provides instantaneous, cost-effective access to significant computing resources and lets you scale them economically. In addition, it allows you to ramp computing resources up and down as needed.
The purpose of stress testing is to stress a system as far as possible and see when it breaks. This can often require a large hardware investment, but with the advent of cloud computing that is no longer the case. Stress testing in the cloud makes a lot of sense: the cloud is elastic and allows the tester to easily and instantaneously scale hardware resources so that stress testing can be properly accomplished. Moreover, if you use those resources for only a few days, you pay only for a few days of use. Cloud testing changes how we perform stress testing. We no longer have to worry about hitting a hardware resource bottleneck; we can scale the hardware much more easily without worrying about factors such as time and cost.
We work with many customers who use automated functional testing to test their systems every day, interacting with them through technical support, training, and consulting engagements. We have seen certain mistakes repeated more often than others. Here is a list of some of the most common ones.
1. Record & Replay Not Enough
Many testers seem to think of automated functional testing as little more than record and replay. In fact, effective automated functional testing requires you to customize the generated script. The record feature should be viewed as a way to generate a skeletal script, and rarely as the final step in script creation. Customization could involve data parameterization and adding checkpoints for validation. It could also involve modularizing the script so that several testers can work on it at one time.
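As a minimal sketch of what such customization can look like, here is a parameterized version of a hypothetical recorded login flow, written with the open-source Selenium WebDriver in Python. The URL, element ids, greeting text, and users.csv file are all assumptions for illustration, not a description of any particular tool.

    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def run_login_test(username, password):
        # The recorded skeleton: open the page, fill the form, submit.
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/login")          # hypothetical URL
            driver.find_element(By.ID, "username").send_keys(username)
            driver.find_element(By.ID, "password").send_keys(password)
            driver.find_element(By.ID, "submit").click()
            # Checkpoint added by hand: the landing page should confirm the login.
            assert "Welcome" in driver.page_source, f"Login failed for {username}"
        finally:
            driver.quit()

    # Data parameterization: drive the same recorded flow with many data rows.
    with open("users.csv", newline="") as f:
        for row in csv.DictReader(f):
            run_login_test(row["username"], row["password"])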
2. No Validation
It amazes me how many scripts are created and used without any validation. If you are testing a system that has a login page, you will want to find out whether that login succeeded. You can do this by validating the resulting page using checkpoints. Checkpoints can detect web objects, page parameters, or specific text on the page. Place checkpoints at as many points as possible.
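The following sketch illustrates the three kinds of checkpoints mentioned above, again using Selenium against a hypothetical login page; the locators, greeting text, and page title are assumptions.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")                       # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Text checkpoint: the resulting page should greet the user.
    assert "Welcome, testuser" in driver.page_source, "Expected greeting not found"

    # Web object checkpoint: the logout link should exist only after a successful login.
    assert driver.find_elements(By.LINK_TEXT, "Logout"), "Logout link missing"

    # Page parameter checkpoint: the title should reflect the logged-in area.
    assert driver.title == "Dashboard", f"Unexpected page title: {driver.title}"

    driver.quit()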
3. Visible Validation Only
The validations mentioned earlier should not be restricted to what is visible on the page (e.g. page text). If you are using an order entry system to place an order, you might want to query the database to ensure that the order was in fact saved successfully. Similarly, if a particular operation results in the creation of a file, you might want to validate the contents of that file.
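For example, a back-end checkpoint for the order-entry scenario might look like the sketch below, which assumes a hypothetical SQLite database file, orders table, and order id created through the UI in a previous step.

    import sqlite3

    ORDER_ID = "ORD-1001"   # hypothetical order placed through the UI earlier

    # Query the application database directly to confirm the order was persisted.
    conn = sqlite3.connect("orders.db")                     # hypothetical database file
    row = conn.execute(
        "SELECT status, total FROM orders WHERE order_id = ?", (ORDER_ID,)
    ).fetchone()
    conn.close()

    assert row is not None, "Order was accepted by the UI but never saved"
    status, total = row
    assert status == "CONFIRMED", f"Unexpected order status: {status}"
    assert total == 49.99, f"Unexpected order total: {total}"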
4. Improve but Don’t Replace Human Testing
Automation is a great way to augment your testing efforts. However, don't expect automation to completely replace human testing. Automation works best when you know exactly what to look for. A web page might, for example, render with incoherent fonts; unless a script is specifically checking for this (which is rather unlikely), the issue will only be caught by a human tester.
5. Inappropriate Test Cases
The test cases chosen for automation need to represent a significant proportion of user activity. There are an astronomical number of paths that can be taken by the user. However, the trick is to condense all possible paths to a small sample of highly representative test cases. This is more of an art than a science.
Using our performance testing tool we have tested many websites and helped our customers improve their performance. In doing so, we have employed many methods for improving website performance. Here I will present some of the most effective techniques.
Make sure that you have saved your images in the correct format to optimize size: PNG is usually best for solid colors and JPEG is usually best for photos. JPEG size depends on quality, so find your minimum acceptable quality and save your JPEG images at that setting to minimize file size. Also make sure that the saved image has the same dimensions as the displayed image; if the saved image is larger, you are unnecessarily using more bytes to store it.
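As an illustration, the following sketch uses the Python Pillow library to resize a photo to its displayed dimensions and save it as a moderately compressed JPEG. The file names, dimensions, and the quality value of 70 are assumptions; pick the lowest quality that still looks acceptable for your images.

    from PIL import Image

    # Resize a photo to the exact dimensions it is displayed at, then save it
    # as JPEG at the lowest quality setting that still looks acceptable.
    img = Image.open("hero_photo.png")                 # hypothetical source image
    img = img.convert("RGB")                           # JPEG has no alpha channel
    img = img.resize((800, 450))                       # match the displayed size
    img.save("hero_photo.jpg", "JPEG", quality=70, optimize=True)

    # Solid-color artwork such as logos usually compresses better as PNG.
    logo = Image.open("logo.bmp")
    logo.save("logo.png", "PNG", optimize=True)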
Server Side Compression
Gzip and Deflate are the two compression methods commonly available to minimize the amount of data that needs to be downloaded. Web pages usually compress very well with these methods; a 1 MB HTML page can easily be reduced to 200 KB or less.
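A quick way to estimate the savings before touching the server configuration is to compress a saved copy of the page locally, as in the sketch below (it assumes an index.html file on disk).

    import gzip
    import zlib

    html = open("index.html", "rb").read()             # hypothetical saved HTML page

    gzipped = gzip.compress(html)
    deflated = zlib.compress(html)

    print(f"original: {len(html) / 1024:.0f} KB")
    print(f"gzip:     {len(gzipped) / 1024:.0f} KB")
    print(f"deflate:  {len(deflated) / 1024:.0f} KB")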
HTTP Requests Minimization
Each HTTP request incurs an overhead. It always amazes me to see how many websites have pages that reference resources (images, stylesheets, etc.) that do not exist, resulting in a 404 error. Most performance testing tools will immediately uncover such requests. Combining multiple images into a single image will also reduce the number of HTTP requests; image maps and CSS sprites are among the techniques that can be used to do this.
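As a rough sketch of how such missing resources can be found outside a dedicated tool, the script below checks every referenced image, stylesheet, and script for a 404. It assumes the requests and beautifulsoup4 packages and a hypothetical page URL.

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    PAGE = "https://example.com/"                   # hypothetical page to audit

    html = requests.get(PAGE).text
    soup = BeautifulSoup(html, "html.parser")

    # Collect every resource the page references: images, stylesheets, scripts.
    refs = [img.get("src") for img in soup.find_all("img")]
    refs += [link.get("href") for link in soup.find_all("link")]
    refs += [script.get("src") for script in soup.find_all("script")]

    for ref in filter(None, refs):
        url = urljoin(PAGE, ref)
        status = requests.head(url, allow_redirects=True).status_code
        if status == 404:
            print(f"Missing resource: {url}")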
Redirects are sometimes a simple way to achieve certain tasks. However, they incur an overhead and should be avoided where possible. A very common mistake made by developers is to forget to add a trailing slash to the end of a URL (e.g. http://www.verisium.com/products is used instead of http://www.verisium.com/products/). This results in an extra round trip (a 301 redirect) that increases the page response time and thus degrades website performance.
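You can verify this behaviour with a one-off check such as the sketch below; the exact status code and Location header depend on the server configuration.

    import requests

    # The trailing slash is missing, so the server typically answers with a redirect first.
    resp = requests.get("http://www.verisium.com/products", allow_redirects=False)
    print(resp.status_code)               # e.g. 301
    print(resp.headers.get("Location"))   # e.g. http://www.verisium.com/products/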
Regular Website Performance Testing & Profiling
Performance testing tools are important weapons in your arsenal to test website performance and discover the source of the various bottlenecks. Performance testing tools will reveal the areas that need immediate attention.
We have worked with lots of test systems and had many organizations consult with us about their performance and load testing efforts. Over the years we have encountered many common mistakes. These mistakes often greatly compromise performance and load testing effectiveness. In this post I will discuss some of the more common mistakes that are made during performance and load testing and how to avoid them.
1. Testing on a system that does not resemble the production system: The system used for load testing needs to mimic the production system as closely as possible. If your production system has 8 GB of RAM but your test system has only 2 GB, it is highly likely that response times for a given number of virtual users will differ. In addition, the operating system and any installed software on the test system should be configured in the same way as on the production system. This lowers the likelihood that the test system and the production system will behave differently.
2. No page validation in scripts: Load testing tools will report errors when an HTTP request produces a response with an unacceptable response code, e.g. a 404 (page not found) or a 500 (server error). However, you may still have errors that produce perfectly valid HTTP response codes. For example, you may have a web page where you submit an invalid login name or password and the resulting page asks you to try to log in again. You might consider this an error, but the load testing tool has no way of knowing it (the page will probably return a valid HTTP response code of 200). You need to specifically direct the load testing tool to invalidate such a response. You can do this with a text checkpoint that looks for specific text on the page and uses that condition to validate it (a minimal sketch follows this list).
3. Not starting load testing early enough: Performance bottlenecks are best identified as early as possible. If you wait until development is complete, you might find performance issues that require an architectural redesign. This takes time and is likely to result in missed deadlines.
4. Test Cases do not represent real scenarios: This might seem obvious but I have seen many systems that underwent load testing but suffered significant performance problems and downtime after they were released. The most common reason is that the test scenarios that were used for load testing did not fully represent actual real world scenarios. The closer your test cases are to real world scenarios, the greater the likelihood that your production release will not encounter too many issues and will be successful.
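As an illustration of the text checkpoint described in point 2 above, here is a minimal sketch using the open-source Locust load testing tool as a stand-in; the /login endpoint, form fields, and error text are assumptions, and other load testing tools offer equivalent mechanisms.

    from locust import HttpUser, task, between

    class LoginUser(HttpUser):
        wait_time = between(1, 3)

        @task
        def login(self):
            # catch_response lets the script decide whether the response counts
            # as a failure, even though the HTTP status code is a valid 200.
            with self.client.post(
                "/login",
                data={"username": "testuser", "password": "secret"},
                catch_response=True,
            ) as resp:
                # Text checkpoint: a successful login should not bounce back
                # to the login form.
                if "Please try to login again" in resp.text:
                    resp.failure("Login was rejected despite HTTP 200")
                else:
                    resp.success()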
There are several other mistakes that are made but the ones listed above tend to be the most common and appear to have the greatest impact on the success of the project.
You started using test automation tools early in the development cycle and used them to create scripts that worked well and automated a significant portion of the user interface. A few weeks later several of those scripts failed: the user interface had changed, and you had to spend a significant amount of time creating new scripts. A few weeks later other scripts failed for the same reason. You wasted countless hours on this issue and ended up completely frustrated by it.
The scenario described above is a well known issue faced during test automation. How does this occur? Let’s consider a web page that is described by the HTML below.
<input id="submit" type="submit" value="Submit" />
Consider a scenario where the user loads the page above and then clicks the Submit button. When you record a test automation script against this page, many test automation tools will generate a script that extracts the "id" of the INPUT tag and uses it to identify the object at runtime. This is all well and good until the "id" changes. What happens then? The script fails. When we started building a test automation tool, we had faced this problem ourselves, so we designed our tools to handle it to the extent possible. In the example above, our tool would not identify the INPUT tag based on a single attribute. Instead, we created an object recognition system that uses a large number of factors to identify an object, so that if something simple changes, e.g. one of the element's attributes or its position in the web page, the test automation script will still work. What happens if the page changes to the point that it is no longer recognizable? I'll be honest: in that case nobody can do much, and you will have to re-record the script. However, most user interface changes are not that drastic; they are smaller changes through which the affected page naturally evolves over the development cycle. Many of these cases can be handled by a test automation tool built with this concern in mind.
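The fallback idea can be approximated even in a plain Selenium script with a helper that tries several independent locators in turn. This is only a sketch of the concept, not our tool's actual recognition engine, and the page URL and locators are assumptions.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import NoSuchElementException

    def find_resilient(driver, locators):
        # Try several independent locators and return the first match.
        # If one attribute changes (e.g. the id), the element can still be
        # found through another one (type, value, position in the form, ...).
        for by, value in locators:
            try:
                return driver.find_element(by, value)
            except NoSuchElementException:
                continue
        raise NoSuchElementException(f"No locator matched: {locators}")

    driver = webdriver.Chrome()
    driver.get("https://example.com/form")            # hypothetical page
    submit = find_resilient(driver, [
        (By.ID, "submit"),
        (By.CSS_SELECTOR, "input[type='submit'][value='Submit']"),
        (By.XPATH, "//form//input[@type='submit']"),
    ])
    submit.click()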
There are several other ways to avoid the issues associated with a rapidly changing UI. One very effective way is to keep developers involved in the test automation process and give them the test automation tools and your scripts so that they can run the tests themselves from time to time. This helps them appreciate your problems, and most of them will work to avoid causing them. It also allows developers to discover their own defects before they reach you. Another way to avoid this issue is to spend more time in the design phase, which will minimize large user interface changes during the project.
You need to use strategies to minimize having to recreate scripts. Simply using test automation tools is not enough. You need to ensure that you are using them in a manner that maximizes your productivity.
Agile automated testing is particularly important in the lifecycle of a project utilizing the agile development methodology. Agile software development involves a constant feedback loop among team members. This is in contrast to the waterfall style of development where software testing only begins once the development phase has been completed. In agile development, software testing activities are conducted from the beginning of the project. Software testing is done incrementally and iteratively.
Automated testing is an extremely important part of agile testing. After each change in the system, it is important to run a battery of automated functional and regression tests to ensure that no new defects have been introduced. Without this automated testing harness, agile testing can become very time consuming and this can result in insufficient test coverage. This will in turn affect software quality. Automated testing is necessary for the project to maintain agility. As a matter of fact, introducing automated processes such as automated builds and automated smoke tests is important in all aspects of agile development. As budgets shrink, time spent on repeatable automated testing becomes more and more necessary.
Many automated testing tools on the market don't focus enough on being resistant to user interface changes, which makes them difficult to use in an agile environment. Automated testing tools should be designed so that test automation scripts are highly resistant to user interface changes. Otherwise, agile automated testing teams will spend too much time keeping their scripts from breaking rather than automating important use cases. This can result in a scenario where automated testing becomes more of a liability than an asset.
Automated testing is most effective when it is conducted throughout the project lifecycle rather than exclusively in the later stages of the project. Defects found early on are less expensive and take less time to fix than the same defects discovered in a more advanced stage of the project. Moreover, there are cases when the only way of handling an important defect is through system design modifications. It is clearly much easier to handle such defects in an early phase of the project rather than in a later stage.
Software testing metrics provide visibility into both the quality of the test plan as well as the maturity of the product. They enable quantitative insight into the effectiveness of the software testing process and provide feedback as to how to improve the testing process. There is no general consensus on the metrics that should be used in software testing. However, there are several metrics that are in common use that can provide valuable insights.
Test Coverage is one of the most commonly used software testing metrics. Quantitatively, Test Coverage is often defined as the total number of test cases/total number of requirements. The Test Coverage metric can give you an idea of the completeness of your test plan. As new features are added, this metric will momentarily decrease until your test plan starts to incorporate test cases that cover the newer features. Keep in mind that the Test Coverage metric can be defined in other ways. We can also define Test Coverage as the total number of test cases/total number of possible identified paths. This metric should be defined based on organizational and team needs.
Another commonly utilized metric is the Quality Ratio. This typically refers to the number of successfully executed test cases/total number of test cases. This metric provides feedback on the current release quality and the presence or absence of defects. The Quality Ratio can also be categorized by functional areas. This will provide greater insight into where the defects were actually found. Another metric that is related to the Quality Ratio but calculated a little differently is the Defect Density. The Defect Density is usually calculated as the number of defects/number of lines of code. The Quality Ratio metric is a more software tester centric metric whereas the Defect Density is a more developer centric metric.
With respect to automated testing, a frequently used metric is the Automation Index. This is calculated as the number of automatable test cases/total number of test cases. This metric describes the extent to which software testing can be automated. This leads to another automated testing metric sometimes called the Automation Coverage. This is computed as the number of automated test cases/total number of automatable test cases. Just as the Test Coverage metric described above gives you an idea of the completeness of your test plan, the Automation Coverage metric can give you an idea of the completeness of your test automation plan.
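To make the arithmetic concrete, here is a small sketch that computes all four metrics from illustrative (made-up) counts.

    # Illustrative numbers only; plug in your own counts.
    requirements            = 120
    test_cases              = 300
    passed_test_cases       = 270
    automatable_test_cases  = 200
    automated_test_cases    = 150

    test_coverage       = test_cases / requirements               # 2.5 test cases per requirement
    quality_ratio       = passed_test_cases / test_cases           # 0.90
    automation_index    = automatable_test_cases / test_cases      # ~0.67
    automation_coverage = automated_test_cases / automatable_test_cases  # 0.75

    print(f"Test Coverage:       {test_coverage:.2f}")
    print(f"Quality Ratio:       {quality_ratio:.2%}")
    print(f"Automation Index:    {automation_index:.2%}")
    print(f"Automation Coverage: {automation_coverage:.2%}")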
Software testing metrics such as the ones above should not be used to compare different projects, since they are far less meaningful across projects. They are, however, extremely valuable as measures of progress within a single project.