July 8, 2007

Too much testing?

Posted by Ben Simo

In a recent blog post, Jason Gorman provides some thoughts about the following question:

How much testing is too much?

To me, this is like asking "how much cheese would it take to sink a battleship?" There probably is an answer - a real amount of cheese that really would sink a battleship. But very few of us are ever likely to see that amount of cheese in one place in our lifetimes.

As Jason states, we may never encounter too much testing. However, I believe that we testers often include too much repetition in our testing and miss many bugs that are waiting to be discovered. This becomes especially likely when we limit our testing to scripted testing or put our test plans in freezers. Repeating scripted tests -- whether manual or automated -- is unlikely to find new bugs. To find new bugs, we testers need to step outside the path cleared by previous testing and explore new paths through the subject of our testing.


Executing the same tests over and over again is like a grade school teacher giving a class the same spelling list and test each week. The children will eventually learn to spell the words on the list and ace the test, but this does not help them learn to spell any new words. At some point, the repeated testing stops adding value.

If you have men who will only come if they know there is a good road, I don't want them. I want men who will come if there is no road at all. - Dr. David Livingstone

Like Dr. Livingstone, I want testers who are willing to explore paths that have not been trampled by the testers who went before them. I want automated tests to go out like probe droids and bring back useful information. I want each manual tester on a team to think about and explore the system under test in a different manner than the rest. There is a time and place for repeatable consistency, but that is only one part of testing. Real human users don't follow our test scripts. I don't want testing to be limited to testers and robots (automation) that follow scripts through pre-cleared paths.


Want to learn more about exploratory testing?


Try exploring the web with Google or your favorite search engine.


2 Comments:

July 08, 2007  
Joe Strazzere wrote:

"Executing the same tests over and over again is like a grade school teacher giving a class the same spelling list and test each week. The children will eventually learn to spell the words on the list and ace the test; but this does not help them learn to spell any new words. At some point, the repeated testing stops adding value."

I agree with the point you are trying to make, but this is a poor example. Learning to spell isn't really the point of software - being good enough for production is.

If executing the same tests were really like a teacher using the same spelling list and test each week, then the students would
- occasionally forget everything they previously knew
- have a batch of new schoolmates arrive every single day - some of whom don't even know how to tie their shoes
- periodically forget random words for no obvious reason
- be allowed to graduate when they knew every word on the list and nothing more - acing the test is the only requirement

July 08, 2007  
Ben Simo wrote:

Joe,

You have tested my example and found it wanting. In software, the "students" sometimes change without notice and forget how to do what they used to do well.

I just read a much better comparison of testing to music by Jonathan Kohl. It's worth a look.