March 18, 2007

SQuAD 2007 Conference Presentation

Posted by Ben Simo

Click here to download my SQuAD conference presentation slides.

Please ask questions using the blog's comment feature or email them to me at ben@qualityfrog.com.

Ben


7 Comments:

April 16, 2007  
Pradeep Soundararajan wrote:

Hi Ben,

The slides and presentation look interesting. It would be great if you could also put up audio of your presentation or, if it's already over, record a session and post it.

Thanks

April 17, 2007  
Ben Simo wrote:

Hi Pradeep,

I hadn't thought of adding audio. I may do that.

I am -- although slowly -- covering the details of the presentation's content on this blog.

Ben

May 08, 2007  
Shrini Kulkarni wrote:

The following statement of yours caught my attention and needs more explanation:

"Too many testers think that their automated tests do exactly same things a manual tester do".

Note the key words in the above statement: "exactly," "same," "manual tester."

What do you see as the problem here? How can this lead to automation benefits not being realized?

Let me play devil's advocate for a moment to add some spice to this. As a manager, isn't it a fair expectation that automation (which I pay heavily for) does at least what a manual tester does: execute tests and declare pass/fail? Manual testing effort is expensive, and as a manager I have a mandate to cut QA costs (don't ask me why development isn't cutting theirs).

Role reversal: let me wear the hat of a "holistic automation consultant."

Problems I see:

1. Feasibility. As you rightly quote from James' rules of automation, it is NOT humanly possible to replicate the actions of a human tester in automation. So that is ruled out. Hence any expectation that automated tests replicate a human tester's work will lead to frustration. Hence, the failure.

2. More interesting here, managers are not really worried about how close automated tests can get to a manual tester. Their frustration comes from the fact that they are not able to "fire" a few "no longer required" manual testers and save some money. Coverage from automated tests is less due to tool and application testability issues, automation code breaks, investigation is time consuming, and code maintenance costs are high. Hence they later realize that they were better off without automation... That is the point of frustration.


The second line that caught my attention:

"Let testers document their mental model and allow the computer generate tests and execute"

Again, what are the key words here?
"mental model"
"document"
"generate"

What are the problems with this statement?

Mental models are often difficult to describe and document in plain English. Some are spontaneous and complex to describe. You cannot read a tester's mental model the way you read a bar code. And even if you manage to document a mental model to a reasonable degree on paper, you still cannot easily translate it into a formal representation, say a mathematical or programmatic one.

Even if you do that, it will be a narrow representation, like the FSMs that Harry Robinson talks about for MBT. It is surprising that you, in the presentation, jump from "mental model" and "how do we model behavior" to "finite state machine," implying that mental models, behavior models, and FSMs are one and the same thing. That cannot be right.

"All mental models are not Finite state machines where as all Finite state machines can be mental models"

Make a clear distinction between mental models and formal models like FSMs. Note that mental models represent the "universe," of which FSMs are a distinct subset suited to automation because of their closeness to a mathematical representation.

For a computer to generate tests from a model, it needs a machine-understandable (or interpretable) language. As I said earlier, it is difficult to translate a mental model into programming terms.

Moreover, if it is possible for a computer to read a model and then generate and execute test cases, why not make it a bit more intelligent and have it build the model itself? Then we would have a truly automated test...

What do you say?

Shrini

May 09, 2007  
Ben Simo wrote:

Hi Shrini,

Thanks for your comments. I'll respond in pieces. :)

"Too many testers think that their automated tests do exactly same things a manual tester do".

What do you see as the problem here? How can this lead to automation benefits not being realized?


The problem here is that scripted automation is going to do the same thing every time it executes a test case, and it is only going to see what it is coded to see. A human tester executing scripted tests is going to vary the test every time they execute it, whether intentionally or not. An attentive human tester is going to see things that they did not see the first time they executed a test. Basically, I was trying to say that a human tester thinks. Scripted automation simply replays what it was coded to do.

I have often heard management and testers ask why automation did not catch bugs that were obvious to human testers. The reason is that the automation was not coded to look for the presence or absence of that bug. Automation can only do what was thought up when the automation was coded.

Automation benefits are not fully realized when we go into automation with bad assumptions about what it can and cannot do.

Automation benefits are not fully realized if we treat it as a code-once, replay-many-times process. Good automation can be a part of the interactive testing process.

Ben

May 09, 2007  
Ben Simo wrote:

Let me play devil's advocate for a moment to add some spice to this.

As a manager, isn't it a fair expectation that automation (which I pay heavily for) does at least what a manual tester does: execute tests and declare pass/fail?

If you have the factory-school view that software test execution is executing pre-written steps, then it may seem reasonable to expect automation to do what a manual tester does. However, even the most scripted manual testing includes more than measuring whether the written criteria pass or fail. Even though we call black-box testing "functional" testing, human testers can and will judge the "quality" of software in non-quantifiable ways. This input can be lost when robots execute tests.

Manual testing effort is expensive, and as a manager I have a mandate to cut QA costs (don't ask me why development isn't cutting theirs).

Manual testing is expensive. Automated testing is also expensive. The tool vendors often prey on the above concerns of management. They often sell automation tools as time and money savers that can replace expensive manual testers. I remember magazine ads from a leading vendor several years ago that showed a tester's feet up on the desk as he watched the computer test for him. That kind of automation is rare.

Instead of looking at automation as a way to replace the entire test execution process, I propose looking for ways that automation can assist manual testers by doing things that computers do better than people: primarily process data for long periods of time without complaint.

Instead of looking at automation as just an execution tool, look for ways that automation can help generate tests. Pairwise and model-based testing tools are two examples of this.
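For example, here is a rough Python sketch of the pairwise idea (the parameters and values below are invented for illustration, not from the presentation): rather than executing every combination, greedily pick a much smaller set of test cases that still covers every pair of parameter values at least once.

    from itertools import combinations, product

    # Hypothetical test parameters, for illustration only.
    parameters = {
        "browser":  ["IE6", "Firefox", "Opera"],
        "account":  ["guest", "member", "admin"],
        "language": ["en", "de"],
    }
    names = list(parameters)

    # Every pair of parameter values that must appear in at least one test.
    uncovered = set()
    for n1, n2 in combinations(names, 2):
        for v1, v2 in product(parameters[n1], parameters[n2]):
            uncovered.add(((n1, v1), (n2, v2)))

    # Greedily pick full combinations that cover the most remaining pairs.
    tests = []
    while uncovered:
        best, best_covered = None, set()
        for combo in product(*(parameters[n] for n in names)):
            case = dict(zip(names, combo))
            covered = {p for p in uncovered
                       if all(case[n] == v for (n, v) in p)}
            if len(covered) > len(best_covered):
                best, best_covered = case, covered
        tests.append(best)
        uncovered -= best_covered

    for t in tests:
        print(t)

On this small example the greedy loop ends up with roughly nine or ten cases instead of all eighteen full combinations, while still exercising every pair of values.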

Instead of just looking for things that automation can pronounce as "passed" or "failed", look for ways that automation can help testers focus their manual testing. For example, have automation report failed heuristic-style rules when you don't have hard pass/fail requirements. Have automation control the application and take screen shots or generate data for review by human testers.
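As a rough sketch of what I mean (Python; the page fields and thresholds are invented assumptions, not rules from the presentation), the automation reports suspicions for a human to investigate instead of a hard pass/fail verdict:

    # Heuristic-style checks: each rule reports a "suspicion" rather than
    # a verdict. Fields and thresholds below are illustrative assumptions.
    def heuristic_checks(page):
        findings = []
        if page["load_seconds"] > 5:
            findings.append("Page took %.1f s to load (usually under 5 s)"
                            % page["load_seconds"])
        if "error" in page["body"].lower():
            findings.append("Body text contains the word 'error'")
        if page["status_code"] != 200:
            findings.append("Unexpected HTTP status %d" % page["status_code"])
        if not page["title"]:
            findings.append("Page title is empty")
        return findings

    # Example: a captured page that a human tester should probably review.
    page = {"title": "", "status_code": 200,
            "load_seconds": 7.3, "body": "An unexpected error occurred."}

    for finding in heuristic_checks(page):
        print("SUSPICIOUS:", finding)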

Let the machines do the tedious chores of testing and let the people do the thinking.

Ben

May 09, 2007  
Ben Simo wrote:

Hence any expectation that automated tests replicate a human tester's work will lead to frustration. Hence, the failure.

Coverage from automated tests is less due to tool and application testability issues, automation code breaks, investigation is time consuming, and code maintenance costs are high. Hence they later realize that they were better off without automation...

Exactly my point. Except that they could have been better off with automation -- if they had different expectations at the start.

Instead of trying to replicate human testers, good automation takes advantage of the computer's strengths and lets the people do the thinking.

May 09, 2007  
Ben Simo wrote:

Even if you do that, it will be a narrow representation, like the FSMs that Harry Robinson talks about for MBT. It is surprising that you, in the presentation, jump from "mental model" and "how do we model behavior" to "finite state machine," implying that mental models, behavior models, and FSMs are one and the same thing. That cannot be right.

That is not right. The slides are an incomplete model of my presentation. :)

It is impossible to make an entire mental model explicit. However, we can explicitly model portions of that mental model. My goal here is to get testers to realize that they already design test cases from models. Instead of writing test cases, I am proposing that mental models be modeled (in a form less complex than the original) in a way that automated tools can process.

I use finite state machines as the foundation and then move to hierarchical state machines with guarded transitions as a more manageable solution.

I completely understand that any explicit model is less complex than a tester's mental model, and that the tester's mental model is less complex than the real thing. Models don't have to be completely accurate to be useful.

I have found that machine-readable explicit models can be useful tools in confirming the mental models. I can model how I think something should work, then give it to a computer to test my model and report back anything it finds that differs from my model.
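Here is a minimal Python sketch of that idea (the login feature, its states, and the lockout rule are my own invented example, not the models from the presentation): the expected behavior is a small state machine with a guarded transition, and the tool walks the model, drives the application, and reports anywhere the application disagrees with the model.

    import random

    # A tiny explicit model of how I *think* a login feature should behave.
    MAX_ATTEMPTS = 3

    def model_next(state, action, failures):
        """Expected next state and failure count, per my mental model."""
        if state == "logged_out" and action == "good_password":
            return "logged_in", 0
        if state == "logged_out" and action == "bad_password":
            # Guarded transition: lock the account after too many failures.
            if failures + 1 >= MAX_ATTEMPTS:
                return "locked", failures + 1
            return "logged_out", failures + 1
        if state == "logged_in" and action == "logout":
            return "logged_out", 0
        return state, failures  # anything else should be ignored

    def application_next(state, action, failures):
        """Stand-in for driving the real application and observing its state.
        Here it simply mirrors the model; a real harness would query the app."""
        return model_next(state, action, failures)

    def random_walk(steps=50, seed=1):
        random.seed(seed)
        m_state = a_state = "logged_out"
        m_fail = a_fail = 0
        for i in range(steps):
            action = random.choice(["good_password", "bad_password", "logout"])
            m_state, m_fail = model_next(m_state, action, m_fail)
            a_state, a_fail = application_next(a_state, action, a_fail)
            if m_state != a_state:
                print("Step %d: after '%s' the model expected '%s' but the app showed '%s'"
                      % (i, action, m_state, a_state))
                return
        print("No disagreements found in %d steps" % steps)

    random_walk()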

For a computer to generate tests from a model, it needs a machine-understandable (or interpretable) language. As I said earlier, it is difficult to translate a mental model into programming terms.

Yes, it can be difficult. And most MBT implementations I've seen are intimidating and don't scale. Instead of defining models in code, I prefer to define them in tables using hierarchical states. I've found that this makes models easier to create and manage.
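A rough sketch of what I mean by table-defined models (Python; the shopping-cart rows and column choices are invented for illustration): each row gives a state, an action, an optional guard, and the expected next state; rows on a parent state apply to all of its children; and a small engine interprets the table instead of requiring the model to be written as code.

    # Table-defined model with a simple two-level state hierarchy.
    # Rows on the parent state "app" apply to all child states, so common
    # behavior (like "go home") only has to be written once.
    # Each row: (state, action, guard, next state).
    MODEL_TABLE = [
        ("app",      "go_home",     None,         "browsing"),  # inherited
        ("browsing", "add_item",    None,         "cart"),
        ("cart",     "add_item",    None,         "cart"),
        ("cart",     "remove_item", "items > 1",  "cart"),
        ("cart",     "remove_item", "items == 1", "browsing"),
        ("cart",     "checkout",    "items >= 1", "ordered"),
    ]

    # Child state -> parent state.
    PARENT = {"browsing": "app", "cart": "app", "ordered": "app"}

    def allowed_transitions(state, context):
        """Return (action, next_state) pairs whose row matches this state
        (or an ancestor) and whose guard holds for the given context."""
        states = [state]
        while states[-1] in PARENT:
            states.append(PARENT[states[-1]])
        allowed = []
        for row_state, action, guard, next_state in MODEL_TABLE:
            if row_state in states and (guard is None or eval(guard, {}, context)):
                allowed.append((action, next_state))
        return allowed

    # With one item in the cart, the model allows these moves:
    print(allowed_transitions("cart", {"items": 1}))

The guards are plain expressions evaluated against the current test context, so adding or changing behavior means editing a row rather than rewriting model code.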

Moreover, if it is possible for a computer to read a model and then generate and execute test cases, why not make it a bit more intelligent and have it build the model itself? Then we would have a truly automated test...

I'd like to try that... although it would not remove people from the process. I can envision a system that crawls an application (like a search engine spider) and builds a model based on what it finds. The automated tool could then report back what it finds for humans to review and feed into future automated test executions. I am doing something like this for the results validation of some model-based tests. However, without human review, all it does is confirm that the code does what it was coded to do. :)
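Something along these lines, as a minimal Python sketch (the fake application and its link structure are invented; a real spider would drive a browser or HTTP client instead): explore from a starting page, record the states and transitions actually found, and hand the discovered model back for human review and future test runs.

    from collections import deque

    # Stand-in for the application under test: which pages link where.
    FAKE_APP = {
        "home":    ["search", "login"],
        "search":  ["results", "home"],
        "results": ["item", "search"],
        "item":    ["cart", "results"],
        "login":   ["home"],
        "cart":    [],          # dead end: worth a human look?
    }

    def crawl(start):
        """Breadth-first exploration that records the discovered model."""
        model = {}                        # page -> pages reachable from it
        queue = deque([start])
        while queue:
            page = queue.popleft()
            if page in model:
                continue
            links = FAKE_APP.get(page, [])   # "observe" the page
            model[page] = links
            queue.extend(links)
        return model

    discovered = crawl("home")

    # Report the discovered model for a human to review and correct;
    # the reviewed model can then feed future automated test runs.
    for page, links in discovered.items():
        print("%-8s -> %s" % (page, ", ".join(links) or "(no links found)"))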