June 1, 2007

Model-Based Test Engine Benefit #3: Automatic handling of application changes and bugs

Posted by Ben Simo


Automated tests based on models have one important capability that scripted tests do not: automated handling of application changes and bugs. I do not mean that model-based automation can think and make decisions the way a human tester does when they discover something unexpected. Instead, the automated selection of test steps supports working around the unexpected without special exception-handling code for each situation.


For example: if there are two methods for logging into an application and one breaks, the test engine can try the alternate option to reach the rest of the application. If a traditional scripted automated test encounters an unexpected problem, it cannot complete.

The model-based test engine (MBTE) can be coded to stop trying an action after a pre-defined number of failures. The MBTE's selection algorithm can then seek out other options that have not yet been found to fail. This also results in the MBTE reattempting failed actions, which can expose failures that only occur after specific sequences of actions.
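As a rough sketch of how that selection might look, assuming a hypothetical model where the engine tracks a failure count per action (the names, threshold, and random selection strategy here are illustrative, not from any specific MBTE framework):

    import random

    MAX_FAILURES = 3  # hypothetical threshold; tune for your application

    def select_next_action(available_actions, failure_counts):
        """Pick a random action among those that have not yet hit
        the failure threshold. All names here are illustrative."""
        candidates = [action for action in available_actions
                      if failure_counts.get(action, 0) < MAX_FAILURES]
        if not candidates:
            return None  # no viable action from this state; trigger recovery
        return random.choice(candidates)

With two login actions in the model, the broken one eventually drops out of the candidate list while the working one keeps the engine moving. Until the threshold is reached, the failed action is still retried, which is how sequence-dependent failures surface.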

To facilitate error detection, each action and validation should return a status to the MBTE framework. This allows error handling to be built into the framework instead of into each test model or script. Standard error codes -- either your own or the tool's built-in codes -- help standardize reporting.

For example: return zero (0) when an action completes successfully or a validation passes, return a negative number on failure, and return a positive number for inconclusive results that require manual investigation.
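In code, that convention might look like the following sketch. The constant names, the sample action, and the `app` object with its methods are all stand-ins for whatever driver your framework uses:

    # Hypothetical status convention: zero = pass, negative = fail,
    # positive = inconclusive (needs manual investigation).
    STATUS_PASS = 0
    STATUS_FAIL = -1
    STATUS_INCONCLUSIVE = 1

    def login_via_form(app):
        """Sample action that reports its status to the framework.
        'app' and its methods are illustrative stand-ins."""
        try:
            app.open_login_form()
            app.submit_credentials("user", "secret")
        except Exception:
            return STATUS_FAIL
        if app.current_page() == "home":
            return STATUS_PASS
        return STATUS_INCONCLUSIVE  # landed somewhere unexpected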

Code the test engine to detect the error status of each action and validation and take appropriate action. If an action passes, perform the validations for the action's expected end state. If an action fails, restart the application or do whatever other error recovery fits your situation.
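A minimal dispatch loop built on that status convention might look like this sketch; the `engine` methods are assumptions about your framework, not a prescribed API:

    def run_step(engine, action, app):
        """Execute one action and react to its status."""
        status = action(app)
        engine.record(action, status)
        if status == 0:
            # Passed: run the validations for the expected end state.
            for validation in engine.validations_for(action):
                engine.record(validation, validation(app))
        elif status < 0:
            # Failed: restart or do whatever recovery fits your situation.
            engine.restart_application(app)
        else:
            # Inconclusive: set aside for manual investigation.
            engine.flag_for_review(action)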

If a validation fails, you can either continue on to the next validation or flag specific validation failures that should stop further validation.

Validations can also be flagged as state-changing failures by adding a "fail state" column to the oracle/validation tables. Set this field to the name of the state the application will be in if the validation fails. You can even build standard states such as "restart" into the framework to indicate that the state is unknown and the application needs to be restarted. For example, a validation that an HTTP 404 error page is not displayed could have a "fail state" of "restart", indicating that the application should be restarted when this validation fails.
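As data, such a table might be as simple as the sketch below. The row fields, the "restart" convention, and the engine methods are illustrative assumptions:

    # Hypothetical oracle/validation table: each row may declare the
    # state the application falls into when the validation fails.
    VALIDATIONS = [
        {"name": "no_http_404", "fail_state": "restart"},
        {"name": "title_matches_model", "fail_state": None},
    ]

    def handle_validation_failure(engine, row):
        """Move to the declared fail state. 'restart' is the
        framework-standard state meaning the application state is
        unknown and must be restarted."""
        fail_state = row.get("fail_state")
        if fail_state == "restart":
            engine.restart_application()
        elif fail_state:
            engine.set_current_state(fail_state)
        # No fail state declared: stay in the current model state.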

Julian Harty has suggested that validations can be weighted and test execution varied based on the combined score of failures.
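One way to read that suggestion, as a rough sketch with invented numbers: assign each validation a weight, sum the weights of its failures, and change course once the total crosses a budget:

    FAILURE_BUDGET = 10  # invented threshold

    def add_weighted_failure(score, weights, failed_validation):
        """Accumulate weighted validation failures; the caller can
        vary or stop execution once the combined score crosses the
        budget."""
        score += weights.get(failed_validation, 1)  # default weight of 1
        return score, score >= FAILURE_BUDGET

A cosmetic validation might carry a weight of 1 while a data-corruption check carries 10, so a single serious failure can trigger the same response as many minor ones.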


Build error handling into the framework so that you can define the details with data instead of code.


1 Comment:

June 02, 2007  
Madhukar Jain wrote:

Hi Ben,

I really liked your article a lot, and everything mentioned here is really the situation we face in practice when building an automation framework.

Regards,
Madhukar.
http://madhukarj.blogspot.com/