March 25, 2007

Common Barriers to Model-Based Automation

Posted by Ben Simo

If modeling is as simple as the previous blog entry implies, then why isn’t everyone using model-based automated testing?

1. Model-based testing requires a change in thinking.

Most testers have been trained to transform mental models into explicit test scripts – not to document behavior in a machine-readable format. However, most testers will find that modeling is actually easier than defining and maintaining explicit test cases for automation.

2. Large models are very difficult to create and maintain.

Small additions to a model can easily trigger exponential growth in its size and complexity. This state explosion usually requires that large models be defined using code instead of tables. The large-model problem can be solved through the use of Hierarchical State Machines (HSMs) and state variables.

Most software states have hierarchical relationships in which child states inherit all the attributes of their parent states and add attributes specific to the child. Hierarchical state machines reduce redundant specification and allow behavior to be modeled in small pieces that can be assembled into the larger system. For example, the following HSM represents the same keyless entry system with fewer than half as many transitions defined. Actions that are possible from each state are also possible from all of its child states. Validation requirements apply to the parent and all child states. This greatly reduces the size and complexity of the model. Large systems can be modeled by merging many small models.

[Diagram: hierarchical state machine of the keyless entry system; image not reproduced.]
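To make the inheritance idea concrete, here is a minimal sketch in Python. It is only an illustration: the state names, actions, and helper function are stand-ins of my own, not the original model.

# A minimal sketch of hierarchical states: each child state inherits
# the actions of its entire parent chain. All names are hypothetical.
STATES = {
    "Unlocked": {"parent": None,     "actions": {"press_lock": "Locked"}},
    "Locked":   {"parent": None,     "actions": {"press_unlock": "Unlocked"}},
    "Armed":    {"parent": "Locked", "actions": {"open_door": "AlarmSounding"}},
}

def available_actions(state):
    """Collect actions from a state and all of its ancestors."""
    actions = {}
    while state is not None:
        node = STATES[state]
        for action, target in node["actions"].items():
            # A child's own action takes precedence over an inherited one.
            actions.setdefault(action, target)
        state = node["parent"]
    return actions

# 'Armed' inherits 'press_unlock' from its parent 'Locked'.
print(available_actions("Armed"))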
Defining some state information as variables instead of explicitly named states can also reduce state explosion. Sometimes it is easier to express a condition as a state variable than as a set of child states. These state variables can be used to define guarded transitions: transitions that are only possible when a specified data condition is met. A requirement that all doors be closed before the example keyless entry system will arm the alarm may be specified as shown below. Without guarded transitions, adding the difference in behavior based on whether doors are open or closed would require many new states and transitions.

[Diagram: keyless entry model with a guarded arming transition; image not reproduced.]
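A guarded transition can be sketched as an ordinary transition plus a predicate over the state variables. The sketch below follows the doors-closed example, but the variable and function names are illustrative assumptions, not a real framework.

# Sketch of guarded transitions: a transition is enabled only when its
# guard over the state variables is satisfied. Names are illustrative.
state_vars = {"open_doors": 0}

TRANSITIONS = [
    # (from_state, action, to_state, guard or None)
    ("Locked", "arm",    "Armed",    lambda v: v["open_doors"] == 0),
    ("Locked", "unlock", "Unlocked", None),
]

def enabled_transitions(from_state, vars_):
    """Return the transitions currently possible from a state."""
    return [(action, to) for (frm, action, to, guard) in TRANSITIONS
            if frm == from_state and (guard is None or guard(vars_))]

print(enabled_transitions("Locked", state_vars))  # arm and unlock
state_vars["open_doors"] = 1
print(enabled_transitions("Locked", state_vars))  # unlock only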
3. The leading test tool vendors do not offer model-based testing tools.

Modeling is not a “best practice” promoted by the tool vendors. Tool vendors often dictate the way that their tools are used. This results in automation practices being defined to fit the tools instead of making the tools fit the desired approach. The good news is that many test automation tools – both commercial and open source – provide enough flexibility to build new frameworks on top of the built-in functionality.

4. Model-based testing looks complicated.

The model-based testing literature often makes modeling look more complicated than necessary. The truth is that modeling does not require expert mathematicians and computer scientists. A relatively simple framework can support complex test generation and execution with less manual work than most other automation methodologies.


March 24, 2007

Finite State Machines

Posted by Ben Simo

Software behavior can be modeled using Finite State Machines (FSMs). FSMs are composed of states, transitions, and actions. Each state is a possible condition of the modeled system. Transitions are the possible changes in states. Actions are the events that cause state transitions. For example, the following FSM shows the expected behavior of a car keyless entry system.

[Diagram: finite state machine for the car keyless entry system; image not reproduced.]
Images like the one above are great for humans, but not for machines. State transitions and the actions that trigger them can also be defined in a table format that a computer can process. The FSM above can be represented using the table below.

[Table: state transitions and triggering actions for the keyless entry FSM; image not reproduced.]
The requirements for each state can also be defined using tables. The table below contains sample requirements for the example keyless entry system.
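Since the original tables are not reproduced, here is one plausible machine-readable form in Python. The specific rows are guesses at a typical keyless entry system, not the original data.

import csv, io

# A plausible machine-readable form of the transition and requirements
# tables. The rows are assumptions; the original table images are gone.
TRANSITIONS = """start_state,action,end_state
Unlocked,press_lock,Locked
Locked,press_unlock,Unlocked
Locked,wait_30_seconds,Armed
Armed,press_unlock,Unlocked
"""

REQUIREMENTS = """state,requirement
Locked,All doors are locked
Armed,Alarm sounds if a door is opened
"""

for row in csv.DictReader(io.StringIO(TRANSITIONS)):
    print(f"{row['start_state']} --{row['action']}--> {row['end_state']}")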


March 23, 2007

Artificial Intelligence Meets Random Selection

Posted by Ben Simo

Automated tests can be defined using models instead of scripting specific test steps. Tests can then be randomly generated from those models. The computer can even use the model to recover from many errors that would stop scripted automation. Although the computer cannot question the software like a human tester, the automation tool can report anything it encounters that deviates from the model. Thinking human beings can then adjust the model based on what they learn from the automated test’s results. This is automated model-based testing.

As with any explicit model, a model built for test automation is going to be less complex than the system it represents. It does not need to be complex to be useful. New information can be added to existing models throughout the testing process to improve test coverage.
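As a rough sketch of what "randomly generated from those models" can look like in practice: walk the model, picking a random available action at each step. The model and step count below are placeholders, not a real tool.

import random

# Sketch of random test generation: walk the model, choosing a random
# available action at each step. The model is a placeholder.
MODEL = {
    "Unlocked": {"press_lock": "Locked"},
    "Locked":   {"press_unlock": "Unlocked", "wait_30_seconds": "Armed"},
    "Armed":    {"press_unlock": "Unlocked"},
}

def random_walk(model, start, steps):
    """Generate a test as a random sequence of actions through the model."""
    state, path = start, []
    for _ in range(steps):
        action = random.choice(sorted(model[state]))
        path.append((state, action))
        state = model[state][action]
    return path

for state, action in random_walk(MODEL, "Unlocked", 6):
    print(f"in {state}: do {action}")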


Intelligent Design

Posted by Ben Simo

All testing is model-based. Good tests require intelligent design. Testers use mental models of system behavior when creating and executing tests. Scripted tests are created from the designers’ mental models prior to test execution and do not change based on the results. Exploratory testing starts with a pre-conceived mental model that is refined as tests are executed. Whether scripted or exploratory, human testers are capable of applying information they learn during test execution to improve the testing. Computers cannot adjust scripted test cases during execution. What if automated tests could apply behavioral models to generate tests that go where no manual tester has gone before?


March 21, 2007

Why does test automation often fail to deliver?

Posted by Ben Simo

Software Is Automation

Testers are continually asked to test more in less time. Test automation is often considered to be a solution for testing more in less time. Software is automation. It makes sense to automate the testing of software. However, functional black box test automation rarely does more in less time. Why does automation seldom deliver faster, better, and cheaper testing?


Common Test Automation Pitfalls

The following problems are often encountered during test automation projects.

1. Tests are difficult to maintain and manage.

Much GUI test automation attempts to replace just a single step of the manual testing process: test execution. This can work well when there is value in repeatedly executing the same steps and the application is stable enough to run the same script multiple times. However, in the real world, applications change and doing the same thing over and over again is rarely beneficial. A thinking human tester still needs to create the tests, code the automation, review the results, and update the automation code every time the system changes. The development, maintenance, and management of automated test scripts often require more time and money than manual test execution.

2. Test results are difficult to understand.

Test results need to be manually reviewed. Automated test results often do not contain enough information to determine what failed and why. Reviewing results takes time. Manually repeating the tests to determine what really happened takes even longer. Insufficient results reporting can easily negate any advantages of automation.

3. Application changes and bugs can prevent tests from completing.

Application changes and bugs that will not stop a human tester can easily stop automated test execution in its tracks due to cascading failures. A single failed step in an automated test can easily prevent execution of all later steps. Updating and restarting tests every time they encounter the unexpected is not an improvement over manual testing.

4. Tests retrace the same steps over and over, and don’t find new bugs.

Scripted automation will repeat the same steps every time it is run. It will not encounter bugs that are not on the pre-cleared scripted path. Sometimes consistency is good, but consistency will not find new bugs. Many testers make the mistake of believing that their automated tests are doing the same things as a good manual tester. Scripted automation will only do what it is coded to do.

Functional black box test automation frequently requires more manual work to do less, and do it worse, than a human tester. This is the result of attempting to duplicate manual testing with automation. Attempting to duplicate manual testing processes not only fails to reduce costs or improve coverage; it creates additional manual work.

Useful software rarely mirrors the manual process that it replaces. Word processors are much more than virtual typewriters. Spreadsheet programs are more than virtual calculators. The process needs to change to take advantage of the strengths of the machine. People and machines have different strengths. A machine cannot think. A human tester is unlikely to be happy running tests all night or performing tedious calculations. Most software is built to assist human users, not replace them. Test automation should assist human testers, not attempt to replace them.


Rules of Test Automation

Arguing that computers cannot think like a human being, James Bach proposed the following test automation rules in a blog entry titled "Manual Tests Cannot Be Automated".

Rule #1
A good manual test cannot be automated.

Rule #1B
If you can truly automate a manual test, it couldn't have been a good manual test.

Rule #1C
If you have a great automated test, it’s not the same as the manual test that you believe you were automating.


A great automated test is one that assists testers by doing what is not easily done manually without creating more manual work than it replaces. Great automation goes beyond test execution by assisting with test generation and providing useful information to manual testers. Combining Model-Based Testing (MBT) with a tester-friendly automation framework is one way to improve the effectiveness of test automation.


March 18, 2007

SQuAD 2007 Conference Presentation

Posted by Ben Simo

Click here to download my SQuAD conference presentation slides.

Please ask questions using the blog's comment feature or email them to me at ben@qualityfrog.com.

Ben


Software is more reliable than people!

Posted by Ben Simo

Software is 100% reliable. It does not break. It does not wear out.

We can depend on software to do exactly what it is coded to do, every time it runs.

This is why software quality stinks!

The consistent repeatability of software is both a blessing and a curse. We can depend on software processing the same data in the same way every time. We can also rely on it to never do what it was not coded to do.

The repeatability of software is both its greatest strength and its greatest weakness. A simple mistake in design or implementation will repeat itself forever, each time the program runs.

Over 50 years ago, three mathematicians wrote:


Those who regularly code for fast electronic computers will have learned from bitter experience that a large fraction of the time spent in preparing calculations for the machine is taken up in removing the blunders that have been made in drawing up the programme. With the aid of common sense and checking subroutines the majority of mistakes are quickly found and rectified. Some errors, however, are sufficiently obscure to escape detection for a surprisingly long time.

[R.A. Brooker, S. Gill, D.J. Wheeler, "The Adventures of a Blunder", Mathematical Tables and Other Aids to Computation, 1952]

The source of the problem is people. We are not reliable. We make mistakes. Software amplifies and repeats our successes and our mistakes equally well.

When our software encounters the unexpected, errors occur. As developers and testers, we need to expect the unexpected. Think about how requirements might be misunderstood and clarify any ambiguity. Think about how users might misuse a system -- either accidentally or intentionally -- and ensure that the system can handle that user behavior.

The software we use today is exponentially more complex than the software being developed 50 years ago. There must be more opportunities for blunders in today's software than there were half a century ago. And with that complexity, some errors become even more obscure and are more likely to escape detection by developers and testers. Users, however, seem to encounter these errors with ease.


A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.

-- Douglas Adams


March 13, 2007

Expecting the unexpected. Part 2

Posted by Ben Simo

How can we create automation that can deal with the unexpected?
The first step is to create test automation that "knows" what to expect. Most GUI test automation is built by telling the computer what to do instead of what to expect. Model-Based Automated Testing goes beyond giving the computer specific test steps to execute.

Model-Based Testing is testing based on behavioral models instead of specific test steps. Manual testers design and execute tests based on their mental models of a system's expected behavior.

Automated tests can also be defined using models instead of scripting specific test steps. Tests can then be randomly generated from those models -- by the computer instead of a manual tester. The computer can even recover from many errors that would stop traditional test automation because it knows how the system is expected to behave. And by knowing how it is expected to behave, it can detect unexpected behavior. Unexpected does not necessarily mean wrong behavior. The behavior could be wrong or it could be something that was not included in the model. The computer can report the unexpected behavior to human testers for investigation and future updates to the model.
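Detection can be as simple as comparing each observed state against the model's expectation. A minimal sketch, with a hypothetical model and an observed value standing in for hooks into a real application:

MODEL = {"Locked": {"press_unlock": "Unlocked"}}

def check_step(model, state, action, observed):
    """Report any difference between expected and observed behavior."""
    expected = model[state][action]
    if observed != expected:
        # Unexpected is not necessarily wrong: it may be a bug, or a
        # behavior missing from the model. Report it for investigation.
        print(f"DEVIATION in {state} after {action}: "
              f"expected {expected}, observed {observed}")
    return observed

check_step(MODEL, "Locked", "press_unlock", "Armed")  # reports a deviation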

For example, if one path to functionality to be tested fails, the MBT execution engine can attempt to access that functionality by another path defined in the model.
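Finding "another path" amounts to searching the model graph for an alternate action sequence that reaches the target state while avoiding the transition that just failed. A minimal breadth-first sketch, again with a hypothetical model:

from collections import deque

# Sketch of model-based recovery: when a transition fails, search the
# model for an alternate route to the target state. Model is hypothetical.
MODEL = {
    "Unlocked": {"press_lock": "Locked", "hold_lock": "Armed"},
    "Locked":   {"press_unlock": "Unlocked", "wait_30_seconds": "Armed"},
    "Armed":    {"press_unlock": "Unlocked"},
}

def find_path(model, start, goal, blocked=frozenset()):
    """Breadth-first search that skips transitions known to have failed."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in model[state].items():
            if (state, action) in blocked or nxt in seen:
                continue
            seen.add(nxt)
            queue.append((nxt, path + [action]))
    return None  # no alternate route exists in the model

# Suppose 'wait_30_seconds' from Locked just failed; try another route.
print(find_path(MODEL, "Unlocked", "Armed",
                blocked={("Locked", "wait_30_seconds")}))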

Of course, there will always be some cascading failures that stop both automated and manual tests. Even so, MBT inherently provides better error handling than scripted test automation.
