February 7, 2007

People, Monkeys, and Models

Posted by Ben Simo

Methods I have used for automating “black box” software testing…


I have approached test automation in a number of different ways over the past 15 years. Some have worked well and others have not. Most have worked when applied in the appropriate context. Many would be inappropriate for contexts other than that in which they were successful.

Below is a list of methods I’ve tried in the general order that I first implemented them.

Notice that I did not start out with the record-playback test automation that is demonstrated by tool vendors. The first test automation tool I used professionally was the DOS version of Word Perfect. (Yes, a word processor as a test tool. Right now, Excel is probably the one tool I find most useful.) Word Perfect had a great macro language that could be used for all kinds of automated data manipulation. I then moved to Pascal and C compilers. I even used a pre-HTML hyper-link system called First Class to create front ends for integrated computer-assisted testing systems.

I had been automating tests for many years before I saw my first commercial GUI test automation tool. My first reaction to such tools was something like: "Cool. A scripting language that can easily interact with the user interfaces of other programs."

I have approached test automation as software development since the beginning. I've seen (and helped recover from) a number of failed test automation efforts that were implemented using the guidelines (dare I say "Best Practices"?) of the tools' vendors. I had successfully implemented model-based testing solutions before I knew of keyword-driven testing (as a package by that name). I am currently using model-based test automation for most GUI test automation, including release acceptance and regression testing. I also use computer-assisted testing tools to help generate test data and model applications for MBT.

I've rambled on long enough. Here's my list of methods I've applied in automating "black box" software testing. What methods have worked for you?

Computer-assisted Testing
· How It Works: Manual testers use software tools to assist them with testing. Specific tasks in the manual testing process are automated to improve consistency or speed. (A small data-generation sketch follows this list.)
· Pros: Tedious or difficult tasks can be given to the computer. A little coding effort greatly benefits testers. A thinking human being is involved throughout most of the testing process.
· Cons: A human being is involved throughout most of the testing process.
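As a rough illustration of what I mean by computer assistance, here is a minimal Python sketch that grinds out boundary-value data combinations for a manual tester to execute and judge by hand. The field names and limits are invented for the example.

import csv
import itertools
import sys

# Hypothetical boundary values around an assumed 20-character username limit
# and an assumed valid age range of 18..120.
USERNAME_LENGTHS = [0, 1, 20, 21]
AGES = [-1, 0, 17, 18, 120, 121]

def generate_cases():
    """Yield every combination of the candidate boundary values."""
    for length, age in itertools.product(USERNAME_LENGTHS, AGES):
        yield {"username": "x" * length, "age": age}

if __name__ == "__main__":
    # Write the cases as CSV so a tester can pull them into a spreadsheet.
    writer = csv.DictWriter(sys.stdout, fieldnames=["username", "age"])
    writer.writeheader()
    for case in generate_cases():
        writer.writerow(case)

The computer handles the tedious combinatorics; the human decides which cases matter and whether the results look right.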

Static Scripted Testing
· How It Works: The test system steps through an application in a pre-defined order, validating a small number of pre-defined requirements. Every time a static test is repeated, it performs the same actions in the same order. This is the type of test created using the record and playback features in most test automation tools.
· Pros: Tests are easy to create for specific features and to retest known problems. Non-programmers can usually record and replay manual testing steps.
· Cons: Specific test cases need to be developed, automated, and maintained. Regular maintenance is required because most automated test tools are not able to adjust for minor application changes that may not even be noticed by a human tester. Test scripts can quickly become complex and may even require a complete redesign each time an application changes. Tests only retrace steps that have already been performed manually. Tests may miss problems that are only evident when actions are taken (or not taken) in a specific order. Recovery from failure can be difficult: a single failure can easily prevent testing of other parts of the application under test. (A minimal sketch of a static script follows below.)
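For illustration, here is a minimal static scripted test in Python. A real record-and-playback tool would drive the GUI; the toy LoginForm class below is an invented stand-in so the example can run on its own.

import unittest

class LoginForm:
    """Invented stand-in for the application under test."""
    def login(self, user, password):
        return user == "admin" and password == "secret"

class StaticLoginTest(unittest.TestCase):
    """A static script: the same steps, data, and checks on every run."""

    def test_valid_login(self):
        self.assertTrue(LoginForm().login("admin", "secret"))

    def test_invalid_login(self):
        self.assertFalse(LoginForm().login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()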

Wild (or Unmanaged) Monkey Testing
· How It Works:
The automated test system simulates a monkey banging on the keyboard by randomly generating input (key presses, mouse moves, clicks, drags, and drops) without knowledge of the available input options. Activity is logged, and major malfunctions such as program crashes, system crashes, and server/page-not-found errors are detected and reported. (A rough sketch follows below.)
· Pros: Tests are easy to create, require little maintenance, and, given time, can stumble into major defects that may be missed by pre-defined test procedures.
· Cons: The monkey is not able to detect whether or not the software is functioning properly. It can only detect major malfunctions. Reviewing logs to determine just what the monkey did to stumble into a defect can be time consuming.
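Here is a rough Python sketch of the idea. The ToyApp class is invented so the example is self-contained; a real wild monkey would send random keystrokes and mouse events to an actual application and watch for crashes.

import random
import string

class ToyApp:
    """Invented stand-in that 'crashes' on one unlucky keystroke."""
    def handle(self, key):
        if key == "~":
            raise RuntimeError("simulated crash")

def wild_monkey(iterations=10_000, seed=None):
    """Fire random keystrokes with no knowledge of valid inputs,
    logging every action so a crash can be traced back afterwards."""
    rng = random.Random(seed)
    app = ToyApp()
    log = []
    for i in range(iterations):
        key = rng.choice(string.printable)
        log.append(key)
        try:
            app.handle(key)
        except Exception as exc:
            print(f"Crash after {i + 1} actions ({exc!r}); last keys: {log[-10:]}")
            return
    print("No crash detected in", iterations, "actions")

if __name__ == "__main__":
    wild_monkey(seed=42)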

Trained (or Managed) Monkey Testing
· How It Works: The automated test system detects the available options displayed to the user and randomly enters data and presses buttons that apply to the detected state of the application. (A sketch follows below.)
· Pros: Tests are relatively easy to create, require little maintenance, and easily find catastrophic software problems. They may find errors more quickly than an unsupervised monkey test.
· Cons: Although a trained monkey is somewhat selective in performing actions, it also knows nothing (or very little) about the expected behavior of the application and can only detect defects that result in major application failures.
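A sketch of the "trained" version, again in Python against an invented three-page wizard: the monkey asks the application which actions are currently available and picks randomly among them.

import random

class WizardApp:
    """Invented three-page wizard; each page exposes its available buttons."""
    def __init__(self):
        self.page = "start"

    def available_actions(self):
        return {"start": ["next"],
                "details": ["back", "next"],
                "confirm": ["back", "finish"]}[self.page]

    def do(self, action):
        transitions = {("start", "next"): "details",
                       ("details", "back"): "start",
                       ("details", "next"): "confirm",
                       ("confirm", "back"): "details",
                       ("confirm", "finish"): "start"}
        self.page = transitions[(self.page, action)]

def trained_monkey(steps=1000, seed=None):
    """Pick randomly, but only from the actions the current screen offers."""
    rng = random.Random(seed)
    app = WizardApp()
    for _ in range(steps):
        action = rng.choice(app.available_actions())
        try:
            app.do(action)
        except Exception as exc:
            print(f"Failure on '{action}' from page '{app.page}': {exc}")
            return
    print("Survived", steps, "random but valid actions")

if __name__ == "__main__":
    trained_monkey(seed=7)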

Tandem Monkey Testing
· How It Works:
The automated test system performs trained monkey tests, in tandem, in two versions of an application: one performing an action after the other. The test tool compares the results of each action and reports differences.
· Pros: Specific test cases are not required. Tests are relatively easy to create, require little maintenance, and easily identify differences between two versions of an application.
· Cons: Manual review of differences can be time consuming. Because two versions of an application must run at the same time, this type of testing is usually only suited to testing through web browsers and terminal emulators. Both versions of the application under test must be using the same data – unless the data is the subject of the test. (A minimal sketch of the comparison follows below.)
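A minimal sketch of the comparison idea: drive an "old" and a "new" version with the same random input and report any difference. The two rounding functions below are invented stand-ins for two builds of an application.

import random

def old_version(n):
    """Invented 'old' build: truncates toward zero."""
    return int(n)

def new_version(n):
    """Invented 'new' build: rounds to the nearest integer."""
    return round(n)

def tandem_monkey(iterations=1000, seed=None):
    """Feed both versions the same random input and flag the first difference."""
    rng = random.Random(seed)
    for i in range(iterations):
        value = rng.uniform(-10, 10)
        old, new = old_version(value), new_version(value)
        if old != new:
            print(f"Difference after {i + 1} inputs: {value:.3f} -> old={old}, new={new}")
            return
    print("No differences found in", iterations, "inputs")

if __name__ == "__main__":
    tandem_monkey(seed=1)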

Data-Reading Scripted Testing
· How It Works: The test system steps through an application using pre-defined procedures with a variety of pre-defined input data. Each time a test is executed, the same procedures are followed; however, the input data changes. (A sketch follows this list.)
· Pros: Tests are easy to create for specific features and to retest known problems. Recorded manual tests can be parameterized to create data-reading static tests. Performing the same test with a variety of input data can identify data-related defects that may be missed by tests that always use the same data.
· Cons: All the development and maintenance problems associated with pure static scripted tests still exist with most data-reading tests.
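A minimal Python sketch of a data-reading test: the procedure is fixed, and only the data rows change. In practice the rows would come from a spreadsheet or CSV file; an inline string keeps the example self-contained, and the LoginForm class is again an invented stand-in.

import csv
import io
import unittest

# In practice this would be an external CSV file or spreadsheet.
TEST_DATA = """username,password,expected
admin,secret,True
admin,wrong,False
,secret,False
"""

class LoginForm:
    """Invented stand-in for the application under test."""
    def login(self, user, password):
        return user == "admin" and password == "secret"

class DataReadingLoginTest(unittest.TestCase):
    def test_login_with_data_rows(self):
        # The same procedure runs for every row; only the input data changes.
        for row in csv.DictReader(io.StringIO(TEST_DATA)):
            with self.subTest(row=row):
                actual = LoginForm().login(row["username"], row["password"])
                self.assertEqual(actual, row["expected"] == "True")

if __name__ == "__main__":
    unittest.main()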


Model-Based Testing
· How It Works:
Model-based testing is an approach in which the behavior of an application is described in terms of actions that change the state of the system. The test system can then dynamically create test cases by traversing the model and comparing the results of each action to the action's expected result state. (A minimal sketch follows below.)
· Pros: Relatively easy to create and maintain. Models can be as simple or complex as desired. Models can be easily expanded to test additional functionality. There is no need to create specific test cases because the test system can generate endless tests from what is described in the model. Maintaining a model is usually easier than managing test cases (especially when an application changes often). Machine-generated “exploratory” testing is likely to find software defects that will be missed by traditional automation that simply repeats steps that have already been performed manually. Human testers can focus on bigger issues that require an intelligent thinker during execution. Model-based automation can also provide information to human testers to help direct manual testing.
· Cons: It requires a change in thinking. This is not how we are used to creating tests. Model-based test automation tools are not readily available.
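Here is a minimal Python sketch of the idea: the model lists, for each state, the available actions and the state each action should lead to; the test walks the model at random and compares the application's observed state to the model's expectation after every step. The one-item cart is an invented stand-in for a real application.

import random

class CartApp:
    """Invented system under test: a cart that holds at most one item."""
    def __init__(self):
        self.items = 0
    def add(self):
        self.items = 1
    def remove(self):
        self.items = 0

# The model: for each state, the available actions and their expected next state.
MODEL = {
    "empty": {"add": "has_item"},
    "has_item": {"add": "has_item", "remove": "empty"},
}

def state_of(app):
    """Map the application's observable state onto a model state."""
    return "has_item" if app.items else "empty"

def model_based_test(steps=500, seed=None):
    rng = random.Random(seed)
    app, state = CartApp(), "empty"
    for _ in range(steps):
        action, expected = rng.choice(list(MODEL[state].items()))
        getattr(app, action)()            # perform the action on the application
        actual = state_of(app)
        if actual != expected:            # the model is the oracle
            print(f"FAIL: '{action}' from '{state}' gave '{actual}', expected '{expected}'")
            return
        state = expected
    print("Model walk of", steps, "steps passed")

if __name__ == "__main__":
    model_based_test(seed=3)

Adding a new action or state to the model makes it available to every generated test, which is where most of the maintenance savings come from.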

Keyword-Driven Testing
· How It Works:
Test design and implementation are separated. Use case components are assigned keywords. Keywords are linked to create test procedures. Small components are automated for each keyword process. (A sketch follows below.)
· Pros: Automation maintenance is simplified. Coding skills are not required to create tests from existing components. Small reusable components are easier to manage than long recorded scripts.
· Cons: Test cases still need to be defined. Managing the process can become as time consuming as automating with static scripts. Tools to manage the process are expensive. Cascading bugs can stop automation in its tracks. The same steps are repeated each time a test is executed. (Repeatability is not all it's cracked up to be.)
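A minimal Python sketch of the structure: each keyword is a small automated component, and a test procedure is just data that strings keywords together. The LoginForm class and the keyword names are invented for the example.

class LoginForm:
    """Invented stand-in for the application under test."""
    def __init__(self):
        self.user = None
    def login(self, user):
        self.user = user
    def logout(self):
        self.user = None

# Small automated components, one per keyword.
def kw_login(app, user):
    app.login(user)

def kw_logout(app):
    app.logout()

def kw_verify_logged_in(app, expected_user):
    assert app.user == expected_user, f"expected {expected_user}, got {app.user}"

KEYWORDS = {"login": kw_login,
            "logout": kw_logout,
            "verify logged in": kw_verify_logged_in}

# A test procedure is just data: keywords and their arguments.
TEST_PROCEDURE = [
    ("login", ["alice"]),
    ("verify logged in", ["alice"]),
    ("logout", []),
]

def run(procedure):
    app = LoginForm()
    for keyword, args in procedure:
        KEYWORDS[keyword](app, *args)
    print("Procedure passed")

if __name__ == "__main__":
    run(TEST_PROCEDURE)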


6 Comments:

February 08, 2007  
Matthew wrote:

Hi Ben. Interesting list. I noticed that you are presenting at the SQAD conference this year; would you consider applying for GLSEC? (Nov 7-8; www.glsec.org.) Call for papers in the next couple months; don't worry, I'll post it on my blog.

Regards,

May 08, 2007  
Shrini Kulkarni wrote:

>> Keyword-Driven Testing

I don’t agree with this practice of appending the word testing to anything and everything....

Have you seen a human tester doing testing (in James Bach style) using "keywords" ?

Did you mean to say "Keyword driven Automation"? Then it might make sense...

I subscribe to James and Cem's definitions of testing -

- Questioning the product in order to evaluate it

- Empirical/Technical investigation to reveal quality related information to stake holders

- An infinite search for problems that bug someone who matters (Michael Bolton used this one in some conversation)

Now in this context - words like "Keyword driven testing", "Action based testing", “business process testing" - don't make sense to me ...

They actually demean or trivialize testing in general...

Can you question a product or undertake a technical investigation with "keywords" or "actions" or apply it to a business process (without a software application) ?

Shrini

May 09, 2007  
Ben Simo wrote:

Hi Shrini,

My use of "Testing" in the headers of this post refers to "Test Automation".

I agree with the "testing" definitions used by James, Cem, and Michael. The subtitle of this blog is "Questioning software" because that's how I usually define software testing. Although, my questioning of software involves lots of questioning of people to determine what the software should do.

I have not seen a human tester perform testing using keywords. ... and I hope I never do. Keywords are a tool that humans can use to organize test automation code.

When I use terms like "keyword-driven testing" to refer to test automation, I do not intend to imply that the whole of testing is performed by the automation. I view automation as a tool to be used by human testers -- not a replacement for cognitive testing.

A manager once expressed concern about my use of the term "monkey test" in that she thought it might imply that we testers are no better than monkeys. I certainly hope that no human testers are doing "monkey testing". If they are, I can likely automate them out of a job because they are not providing much value.

In using commonly-accepted terms, I do not intend to devalue testing. I use the terms because that's how they are commonly used. Words, names, and titles are models of ideas. They are less complex than the ideas they represent. Even though I don't always agree with the common use of terms, they usually give me a common starting point for communicating ideas.

I do understand your point. However, I believe it is better to encourage people to stretch the definition of the common terms than to make up new terms. I've tried arguing over definitions and have found that it can be an easy way to alienate myself from those I'm trying to engage in a discussion. I have discovered that different people use terms in different ways. Instead of arguing over definition, I usually prefer to figure out what the other person means when they use a word.

If I had the influence of James, Cem, or Michael ... then I might be pushing to redefine the language of testing.

:)

Ben

May 09, 2007  
Ben Simo wrote:

... and with the above being said ...

There are some terms that I no longer use as they are generally used: such as "best practices". :)

August 20, 2007  
Sarath Chander wrote:

Hi Ben,
Check out Pathmodeller, a tool that generates Test sequences from UML based code. It maximises the Test bench landscape suiting a Model-based approach. It also aids granularity and visibility of individual Test strands, which I think is (should be) an important benefit of the Model-based approach.

In fact, granularity controls redundancy in Manual Testing when Test volumes peak and have to be dealt with using limited resources.

August 20, 2007  
Sarath Chander wrote:

Pathmodeller is available at www.intremsolutions.com