We deal with lots of customers who use automated functional testing to test their systems every day, interacting with them through technical support, training, and consulting engagements. Over time we have seen certain mistakes repeated more often than others. Here is a list of some of the most common ones.
1. Record & Replay Not Enough
Many testers seem to think of automated functional testing as little more than record and replay. The fact is that effective automated functional testing requires you to customize the generated script. The record feature should be viewed as a way to generate a skeletal script, and should rarely be viewed as the final step in script creation. Script customization could involve data parameterization and adding checkpoints for validation. It could also involve modularizing the script so that several testers can work on it at one time.
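To make the idea of data parameterization concrete, here is a minimal, tool-agnostic sketch in Python. The `attempt_login` function and the CSV data are invented stand-ins for the application under test; a real script would drive the UI instead.

```python
import csv
import io

# Hypothetical system under test: accepts exactly one valid credential pair.
def attempt_login(username, password):
    return (username, password) == ("alice", "s3cret")

# Parameterized test data, as it might be read from a CSV file.
TEST_DATA = io.StringIO(
    "username,password,expect_success\n"
    "alice,s3cret,true\n"
    "alice,wrong,false\n"
    "bob,s3cret,false\n"
)

def run_parameterized_login_tests(data_file):
    """Replay the same scripted action once per data row and compare outcomes."""
    results = []
    for row in csv.DictReader(data_file):
        expected = row["expect_success"] == "true"
        actual = attempt_login(row["username"], row["password"])
        results.append((row["username"], actual == expected))
    return results

print(run_parameterized_login_tests(TEST_DATA))
# → [('alice', True), ('alice', True), ('bob', True)]
```

The point is that one recorded script, once parameterized, exercises many input combinations without being re-recorded.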
2. No Validation
It amazes me how many scripts are created and used without any validation. If you are testing a system that has a login page, you will want to find out whether the login succeeded. You can do this by validating the resulting page using checkpoints. Checkpoints can detect web objects, page parameters, or specific text on the page. Checkpoints should be placed at as many meaningful points as possible.
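As a minimal sketch of what a text checkpoint does, consider the following Python snippet. The HTML and the expected strings are invented for illustration; a real checkpoint would run against the live page.

```python
# A tool-agnostic sketch of a text checkpoint: after an action, inspect the
# resulting page source for text that proves the action succeeded.

def text_checkpoint(page_html, expected_text):
    """Pass if the expected text appears anywhere in the page source."""
    return expected_text in page_html

# Hypothetical page returned after a successful login.
page_after_login = "<html><body><h1>Welcome, Alice</h1></body></html>"

assert text_checkpoint(page_after_login, "Welcome")           # login succeeded
assert not text_checkpoint(page_after_login, "Login Failed")  # no error banner
print("login checkpoint passed")
```

Without a check like this, a script that merely replays clicks will report success no matter what the application actually did.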
3. Visible Validation Only
The validations mentioned earlier should not be restricted to what is visible (e.g. page text). If you are using an order entry system to place an order, you might want to query the database to ensure that the order was in fact saved successfully. Similarly, if a particular operation results in the creation of a file, you might want to validate the contents of that file.
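Here is a small sketch of that back-end validation idea using Python's built-in `sqlite3`. The schema and data are invented for illustration; the principle is simply to confirm in the database what the UI claims to have done.

```python
import sqlite3

# In-memory database standing in for the application's real back end.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)"
)
# Pretend the application just placed this order through the UI.
conn.execute("INSERT INTO orders (item, qty) VALUES ('widget', 3)")
conn.commit()

def order_saved(conn, item, qty):
    """Validate directly against the database that the order was persisted."""
    row = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE item = ? AND qty = ?", (item, qty)
    ).fetchone()
    return row[0] == 1

assert order_saved(conn, "widget", 3)
print("order persisted correctly")
```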
4. Improve but Don’t Replace Human Testing
Automation is a great way to augment your testing efforts. However, don’t expect automation to allow you to completely replace human testing. Automation works best when you know what to look for. Suppose, for example, that a generated web page renders with incoherent fonts. Unless a script is specifically checking for this error (which is rather unlikely), only a human tester is likely to catch the issue.
5. Inappropriate Test Cases
The test cases chosen for automation need to represent a significant proportion of user activity. There are an astronomical number of paths that can be taken by the user. However, the trick is to condense all possible paths to a small sample of highly representative test cases. This is more of an art than a science.
You started using test automation tools early in the development cycle. You used them to create scripts that worked well and automated a significant portion of the user interface. A few weeks later, several of those scripts failed: the user interface had changed, and you had to spend a significant amount of time creating new scripts. A few weeks later, other scripts failed for the same reason. You wasted countless hours working on this issue and ended up completely frustrated by it.
The scenario described above is a well known issue faced during test automation. How does this occur? Let’s consider a web page that is described by the HTML below.
<input id="submit" type="submit" value="Submit" />
Consider a scenario where the user loads the page above and then clicks the Submit button. When you record a test automation script against this page, many test automation tools will generate a script that extracts the “id” of the INPUT tag and uses it to identify the object at runtime. This is all well and good until the “id” changes! What happens then? The script fails. Having faced this problem ourselves in our past lives, we designed our test automation tools to handle this issue to the extent possible. In the example above, our tool would not identify the INPUT tag based on a single attribute. Instead, we created our own object recognition system that uses a large number of factors to identify an object. As a result, if something simple changes (e.g. one of the element’s attributes, or its position in the web page), the test automation script will still work. What happens if the page changes to the point that it is no longer recognizable? I’ll be honest with you: in that case nobody can do much, and you will have to rerecord the script. However, most user interface changes are not that drastic. Most are smaller changes through which the affected page naturally evolves over the development cycle, and many of these cases can be handled by a test automation tool built with this concern in mind.
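To illustrate the multi-attribute idea (this is a simplified sketch, not vTest's actual algorithm), the following Python snippet scores each candidate element against all recorded attributes and picks the best match, instead of keying on the “id” alone:

```python
# Attributes captured at record time for the Submit button.
RECORDED = {"tag": "input", "id": "submit", "type": "submit", "value": "Submit"}

def match_score(recorded, candidate):
    """Fraction of recorded attributes that still match the candidate."""
    hits = sum(1 for k, v in recorded.items() if candidate.get(k) == v)
    return hits / len(recorded)

def find_element(recorded, page_elements, threshold=0.5):
    """Return the best-scoring element, or None if nothing matches well enough."""
    best = max(page_elements, key=lambda el: match_score(recorded, el))
    return best if match_score(recorded, best) >= threshold else None

# The developer renamed the id, but the other attributes survived:
page = [
    {"tag": "input", "id": "btnSubmit", "type": "submit", "value": "Submit"},
    {"tag": "input", "id": "email", "type": "text", "value": ""},
]
print(find_element(RECORDED, page))  # still finds the submit button
```

With a single-attribute lookup on `id="submit"`, this script would have broken; with multi-attribute scoring, the renamed button still matches on three of four attributes.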
There are several other ways to avoid the issues associated with a rapidly changing UI. One very effective way is to keep developers involved in the test automation process and provide them with the test automation tools and your scripts so that they can run the tests themselves from time to time. This will enable them to appreciate your problems, and most of them will work to avoid causing these problems. It will also allow the developers to discover their own defects before those defects reach you. Another way is to spend more time in the design phase, as this will minimize large user interface changes during the project.
You need to use strategies to minimize having to recreate scripts. Simply using test automation tools is not enough. You need to ensure that you are using them in a manner that maximizes your productivity.
Automated testing is particularly important in the lifecycle of a project using the agile development methodology. Agile software development involves a constant feedback loop among team members. This is in contrast to the waterfall style of development, where software testing only begins once the development phase has been completed. In agile development, software testing activities are conducted from the beginning of the project, incrementally and iteratively.
Automated testing is an extremely important part of agile testing. After each change in the system, it is important to run a battery of automated functional and regression tests to ensure that no new defects have been introduced. Without this automated testing harness, agile testing can become very time consuming and this can result in insufficient test coverage. This will in turn affect software quality. Automated testing is necessary for the project to maintain agility. As a matter of fact, introducing automated processes such as automated builds and automated smoke tests is important in all aspects of agile development. As budgets shrink, time spent on repeatable automated testing becomes more and more necessary.
Many automated testing tools in the market don’t focus enough on being resistant to user interface changes, making them difficult to use in an agile environment. Automated testing tools should be designed so that test automation scripts are highly resistant to user interface changes. Otherwise, agile automated testing teams will spend too much time ensuring that their scripts don’t break rather than focusing on automating important use cases. This can result in a scenario where automated testing becomes more of a liability than an asset.
Automated testing is most effective when it is conducted throughout the project lifecycle rather than exclusively in the later stages of the project. Defects found early on are less expensive and take less time to fix than the same defects discovered in a more advanced stage of the project. Moreover, there are cases when the only way of handling an important defect is through system design modifications. It is clearly much easier to handle such defects in an early phase of the project rather than in a later stage.
Functional and regression testing is typically a mixture of manual testing and automated testing. Functional testing is used for many different tasks and at many different phases of the project. One form of testing for which functional test automation is indispensable is smoke testing. What is smoke testing? It is testing that covers the important features of an application without delving into details, and it is often done right after a build to ensure that the build is valid. Daily builds are often scheduled at odd hours. Allocating manual testers to the daily smoke test is usually time consuming, particularly when many obvious defects could easily be caught by a well constructed automated smoke test. A well designed automated smoke test will save you lots of time and allow you to spot basic issues with the build. A smoke test can also be run prior to code check-in by individual developers, ensuring that code that breaks the build is not mistakenly checked in.
How should a smoke test be designed? A smoke test is often really a set of automated regression and functional tests that are focused on very frequently used features. If you have a web application, you could create a smoke testing server. After each build the latest application is published to the smoke testing server and an automated smoke test suite is run against the smoke testing server. For an order entry system, this could involve a set of tasks such as logging into the system, viewing an order, updating an order, and logging out. It could also involve a few other tasks. However, it would be a mistake to dump every feature into this smoke test as a more comprehensive functional test (that involves both manual and automated tests) should be conducted separately once the smoke test has passed.
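The order-entry smoke test described above can be sketched as a short sequence of pass/fail steps. The step functions here are stubs invented for illustration; in practice each one would drive the real UI.

```python
# Stubbed smoke test steps for a hypothetical order-entry system.
def login():        return True   # stub: log into the system
def view_order():   return True   # stub: open an existing order
def update_order(): return True   # stub: edit and save the order
def logout():       return True   # stub: log back out

SMOKE_STEPS = [login, view_order, update_order, logout]

def run_smoke_test(steps):
    """Run each step in order; stop and report at the first failure."""
    for step in steps:
        if not step():
            return f"SMOKE TEST FAILED at: {step.__name__}"
    return "SMOKE TEST PASSED"

print(run_smoke_test(SMOKE_STEPS))  # → SMOKE TEST PASSED
```

Note how short the list of steps is: the suite deliberately covers only the most frequently used features, leaving deeper coverage to the full functional test that runs after the smoke test passes.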
Should smoke testing involve performance testing as well? That depends. If the web application has a significant amount of traffic, and performance metrics such as page response times are critical, it makes sense to run a small automated performance test in addition to the functional test. Once again, this smoke performance test should focus on commonly used scenarios and not try to replicate the more detailed performance test that would be conducted separately.
After you have successfully created and run a script in vTest, you may wish to automate its unattended execution. You might want it to execute at 4 AM every night so that you can inspect the results as soon as you get into work each morning. What are your options in this regard, you might ask.
vTest allows you to save a batch script by selecting the menu item File > Generate Batch Script. This saves a VBScript file to a location of your choice that automates the execution of the loaded project. Here is what the generated VBScript looks like:
Set vtObj = CreateObject("vTest.Document")
vtObj.SetValue "ReportBase", "C:\Users\Paul\Desktop\Misc"
The above script is mostly self-explanatory, with the exception of the line that calls the “SetValue” function. This function sets the value of the “ReportBase” variable, which specifies where the HTML reports should be saved. “SetValue” can also be used to determine whether to run through all the iterations specified in the project or to limit execution to only the first iteration. This is done using the “ShouldIterate” variable: a value of “0” limits execution to only the first iteration, while a value of “1” (the default) runs through all the iterations specified in the project.
Another function, “GetValue”, allows the caller to obtain several variables. One of these, the “ResultsXml” variable, provides an XML representation of the results. This can be useful if you wish to parse the results and save them in your own file or database. It is called as follows:
xml = vtObj.GetValue("ResultsXml")
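Once you have the results XML as a string, post-processing it is straightforward. Here is a hedged sketch in Python using the standard library; the element and attribute names in the sample are invented for illustration, so consult the actual report XML for the real schema.

```python
import xml.etree.ElementTree as ET

# Invented sample standing in for the string returned by GetValue("ResultsXml").
SAMPLE_XML = """
<results>
  <test name="login" status="passed"/>
  <test name="checkout" status="failed"/>
</results>
"""

def summarize(xml_text):
    """Count passed and failed tests in a results XML string."""
    root = ET.fromstring(xml_text)
    counts = {"passed": 0, "failed": 0}
    for test in root.findall("test"):
        counts[test.get("status")] += 1
    return counts

print(summarize(SAMPLE_XML))  # → {'passed': 1, 'failed': 1}
```

A summary like this could then be written to your own database table or appended to a nightly log.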
After saving and customizing your batch script, you can run it unattended using the Windows Task Scheduler at a specified time and frequency.
When you replay a vTest test automation script, vTest will execute the test and then display a report with the test results. In some cases, users are interested in viewing screenshots of the tested web pages. vTest enables you to automatically add screenshots to the resulting reports.
Screenshots can be generated in several ways. One way is to automatically add a screenshot after each page is loaded. Alternatively, users can insert a screenshot function at any point in the script; this takes a screenshot of the last displayed web page. Users can also take screenshots of the full page, including portions that are not visible because they have not been scrolled into view.
To add automatic screenshots after each displayed page, select the menu item Options > Replay and check the “Automatically save screenshots” option. For full page screenshots, also check the “Full Page screenshots” option. You can also add individual screenshots at any point in the script. For graphical scripts, select the menu item Edit > Insert Function and, in the tree under “Web Functions”, select “screenshot”. The screenshot function will be added. In text scripts, simply add the line “WebScreenshot();” at any point in the script.
Many customers use vTest to record a series of interactions with a web application. They then parameterize the script with data so that it can be tested against a large variety of inputs, and finally replay the script. Some of the parameterized data (e.g. a different username/password pair) might cause the application to reject the login. The customers are then confused when the test report shows that the test still passed.
This is where checkpoints come in. Whether a script passes or fails is user determined: there is no way for vTest to decide on its own whether a rejected login attempt should be considered a failed script. For some users, a rejected login may even be the expected outcome. The user needs to use checkpoints to validate the script and allow vTest to decide whether it should pass or fail the test. In the case of a rejected login attempt, the next page will often contain elements (e.g. text saying ‘Login Failed’) that indicate a failed login. The user could use one of the many checkpoints, such as the Web Object Checkpoint or the Text Checkpoint, to check the contents of the next page and determine success.
vTest provides a large number of checkpoints. A page checkpoint verifies the source of a page or frame as well as its properties; it can also be used to set thresholds for the loading time of a page. A text checkpoint verifies whether a given text string is displayed in a specified part of the web page. A web object checkpoint verifies the properties of a web object, e.g. the HREF value of an HTML A tag. A table checkpoint verifies the contents of the cells in a table displayed in a web page. An image checkpoint verifies the properties of an image on the web page. A database checkpoint verifies the integrity of data in the database used by your website. A file checkpoint can be used to find out if two files are identical. A string checkpoint can be used to compare any two strings in the script.
The next time you use vTest to test a web application, make sure you utilize checkpoints. This will allow vTest to accurately determine if a script should pass or fail the test.