July 25, 2007

For Better or For Worse

Posted by Ben Simo


I just stumbled across a 1986 quote from the then-president of the now-defunct Ashton-Tate. I think it applies to automation in software testing.

"A computer will not make a good manager out of a bad manager.

It makes a good manager better faster and a bad manager worse faster."

- Ed Esber

All software is automation. Therefore, all software testing involves some level of automation.

Automation has the potential to do good or bad faster. And faster is not necessarily better.

Let's be smart about how and what we automate.


Keys to Innovation

Posted by Ben Simo

Lee Copeland's CAST keynote address, referenced in a previous post, was not only about books. Good books were just one of the items on Lee's list of eight recent innovations in software testing. Lee's complete list is shown below.

Innovations in Software Testing
(Lee Copeland's List)
  1. Context-Driven School
  2. Testing Specialties
  3. Test-First Development
  4. Really Good Books
  5. Open Source Tools
  6. Session-Based Test Management
  7. Testing Workshops
  8. Certification

I was glad to see most of the items on this list. I am especially happy to see the Context-Driven School and Session-Based Test Management on the list. I believe that these have had a significant impact on software testing and have great potential that has not yet been realized.

Tester certification may be an innovation, but I don't think its impact has been good. In my opinion, the current certification options are bad. (There was a certification debate hosted by AST at CAST this year. Please take a look at Tim Coulter's review: AST Certification Debate.) Most certifications show nothing more than one's ability to pass a certification exam. And many of them are based on context-free and outdated views and techniques. I reviewed some ISTQB sample questions with a group of very smart testers, and we could not identify a correct answer for many of the questions. We could, however, make a good guess at what we thought the exam writers expected. Matters of opinion and guessing at implied contexts should not be the basis for any exam. I think the following statement summarizes this concern quite well.

We don’t mind that some people hold (and teach) views or techniques that we consider antiquated. We do mind that in prep courses that teach you how to pass “objective” exams, there is no place for presentation of controversy or thoughtful analysis of the fundamentals.
- Cem Kaner and Tim Coulter

Now back to Innovations...

In typical CAST style, Lee asked the audience for things they thought he missed. The audience came up with the following additions. Yes, Model-Based Testing was mentioned twice -- followed by applause from Harry Robinson. :)

  1. Collaborative groups
  2. Model-Based Testing - specifically model-based automation
  3. Testers help define what is correct - testing is more than comparing dictated expected and actual results
  4. Model-Based Testing
  5. Fluidity - not freezing plans - recognizing the need to be adaptable
  6. Study of software development and testing history - learning from the past
  7. Toolsmithing
  8. Ethnomethodology - the study of common sense (guess who this came from)
  9. Test management as project management
  10. High volume semi-random test automation
  11. Academic research in testing techniques
  12. Prediction based on source code

What would you add to the innovations list?

What innovations do you think might be just over the horizon?


July 12, 2007

Woodpeckers, Pinatas, and Dead Horses

Posted by Ben Simo

Here are some short blurbs about a few things I took away from CAST sessions.

From Lee Copeland's keynote address:

  • "It's nonsensical to talk about automated tests as if they were automated human testing."
  • Write or speak about something you're knowledgeable and passionate about.
  • Combine things from multiple disciplines.

From Harry Robinson's keynote address:
  • Weinberg's Second Law: If Builders Built Buildings The Way Programmers Write Programs, Then The First Woodpecker That Came Along Would Destroy Civilization.

From Esther Derby's keynote:
  • To successfully coach someone, they must want to be coached and want to be coached by you.

From James Bach's tutorial:
  • Pinata Heuristic: Keep beating at it until the candy comes out. ... and stop once the candy drops.
unless ...
  • Dead Horse Heuristic: You may be beating a dead horse.
yet beware ...
  • If it is a pinata, don't stop beating at it until the candy drops; but if it is a dead horse, your beating is bringing no value. It can be a challenge to determine if it's a pinata or a dead horse.

From Antti Kervinen's presentation:
  • Separate automation models into high-level (behavior) and low-level (behavior implementation) components so that test models can be reused on a variety of platforms and configurations. (A rough sketch of this idea follows below.)
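
Antti's separation might look roughly like the following sketch. It is only an illustration under my own assumptions: the login model, the action names, and the two keyword classes are invented here, not taken from his presentation. One abstract state-machine model is reused on two "platforms" by swapping the low-level adapter.

    import random

    # High-level model: abstract behavior as a simple state machine.
    # States and actions say nothing about any particular platform.
    MODEL = {
        "logged_out": {"log_in": "logged_in"},
        "logged_in": {"send_message": "logged_in", "log_out": "logged_out"},
    }

    class DesktopKeywords:
        """Low-level implementation of the abstract actions for one platform."""
        def log_in(self): print("desktop: click Login, type credentials")
        def send_message(self): print("desktop: type message, press Enter")
        def log_out(self): print("desktop: open menu, click Logout")

    class MobileKeywords:
        """The same abstract actions, implemented for another platform."""
        def log_in(self): print("mobile: tap Login, type credentials")
        def send_message(self): print("mobile: tap compose, tap Send")
        def log_out(self): print("mobile: tap profile, tap Logout")

    def walk(model, keywords, steps=5, state="logged_out"):
        """Drive the shared high-level model through a platform-specific adapter."""
        for _ in range(steps):
            action = random.choice(list(model[state]))
            getattr(keywords, action)()   # low-level behavior implementation
            state = model[state][action]  # high-level behavior

    # The same test model is reused on two platforms by swapping the adapter.
    walk(MODEL, DesktopKeywords())
    walk(MODEL, MobileKeywords())

When the platform or configuration changes, only the low-level layer needs to change; the behavior model stays put.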

More from James Bach's tutorial:
  • Testing does not break software. Testing dispels illusions.
  • Rational Unified Process is none of the three. (attributed to Jerry Weinberg)

From the tester exhibition:

  • Testing what can't be fixed or controlled may be of little value. Some things may not be worth testing.
  • There is great value in the diversity of approaches and skills on a test team.
  • It may be possible to beat a dead horse and test (and analyze) too much. Sometimes we should just stop testing and act on the information we have.

From Doug Hoffman's tutorial:
  • Record and playback automation can be very useful for testing the same behavior across many configurations. And once the script stops finding errors, throw it out. (A toy sketch of this idea follows below.)
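
As a purely illustrative sketch of that idea (the recorded steps, the configuration list, and the playback stub below are all hypothetical, not any particular tool's format), one recording can be replayed against every configuration in a loop:

    # A "recorded" script is just a fixed sequence of captured steps.
    RECORDED_SCRIPT = [
        ("open", "/login"),
        ("type", "username=demo"),
        ("click", "Submit"),
        ("assert_text", "Welcome"),
    ]

    # The same recording is replayed against many configurations.
    CONFIGURATIONS = [
        {"browser": "Firefox", "os": "Windows"},
        {"browser": "IE7", "os": "Windows"},
        {"browser": "Firefox", "os": "Linux"},
    ]

    def play_back(script, config):
        """Stub playback engine; a real one would drive a GUI testing tool."""
        for action, target in script:
            print("%s/%s: %s %s" % (config["browser"], config["os"], action, target))
        return "passed"  # pretend the recorded checks passed

    for config in CONFIGURATIONS:
        print(play_back(RECORDED_SCRIPT, config))

Once the recording stops surfacing new errors across the configurations, Doug's advice applies: retire it rather than keep paying to maintain it.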

From Keith Stobie's keynote:
  • Reduce the paths through your system to improve quality. Fewer features may be better.
  • Free web sites often have higher quality than subscription sites. This is because it is easy to measure the cost of downtime on ad-supported systems.

From David Gilbert's session:
  • People expect hurricanes to blow around and change path. We should expect the same with software development projects. (David has some interesting ideas about forecasting in software development.)
  • Numbers tell a story only in context. You must understand the story behind the numbers.
One more from James:
  • Keep Notes!


What did you take away from CAST?


Exploratory Scripted Automated Manual Testing

Posted by Ben Simo


Exploratory testing is often said to be the opposite of scripted testing. Automated testing is often said to be the opposite of manual testing. Instead of selecting one or the other, I find it helpful to look at these as opposite ends of spectra. I believe good testing contains components spread throughout multiple spectra.

I think we get into trouble when we apply these labels to “testing” as a whole. I believe all software testing has some aspects that are exploratory, some scripted, some automated*, and some manual. Hopefully all are driven by sapience.

Here are some things I remember about one of the best testing projects in my experience:

• Organizational testing processes were well documented
• Requirements were well documented
• We had enforceable quantitative requirements
• Tests were well planned
• Test plans were well documented
• Test cases were well defined and traceable to requirements
• Tests were well scripted
• Automation was an integral part of the testing

All of the above look like attributes of scripted, plan-everything-first testing. We did a lot of planning and scripting for this testing. However, the following things are also true for this testing project:

• Process and documentation requirements were adaptable to the context
• Test "script" execution was mostly exploratory and mostly manual
• Bugs were only reported and officially logged if they were not fixed quickly
• Many requirements were qualitative
• Automation was “driven” by human testers


This scripted exploratory automated manual testing worked very well in its context. It was both the most scripted and most exploratory testing I can think of at the moment: it was the best testing.

Exploration does not mean leaving the map at home. Exploration is not the same as wandering. Testing was well planned while being exploratory in nature and practice. Human beings were involved at every step of the test planning and execution. Introducing automation reduced each iteration's test execution time from weeks to days without removing the human tester.

This testing was exploratory. It was dynamic. It was also well scripted. The combination of static and dynamic components was central to the success of the project.

Instead of thinking of performing exploratory or scripted or automated or manual tests, find the balance of the four to create testing that works best in each situation.


* If you think no aspect of your testing is automated, try testing the software without a computer. And now that I wrote that, I am thinking of software tests that may not require a computer or software. It all depends on context. :)


July 10, 2007

Read any good books lately?

Posted by Ben Simo


"I've never read a book about software testing."

- too many testers

In a CAST keynote address about recent innovations in software testing, Lee Copeland relayed a story about asking all the testers at a large respected financial company about their favorite software testing books. Lee said that every one of the testers said they had never read a book about software testing.

Lee compared this to a surgeon informing a patient that they've never read a book about surgery, but not to worry because they are a good surgeon.

I too have asked a number of testers about their training to be a tester and have often received responses similar to those reported by Lee.

I want to pass on Lee's encouragement to read. Lee also heralded the benefits of applying lessons learned outside technology fields (e.g., philosophy and psychology) to software testing.

There was a time when there weren't many testing books from which to choose. That has changed: today there are many. There are some good books out there, but there are also some terrible books that promote practices that have not adapted to the past 30 years of advances in software development.

The list in the sidebar of this blog contains some books I've found useful in software testing. Inclusion in this list does not imply my endorsement of everything in a book. I don't have to agree with a book to like it. To me, a good book is one that makes me think.

What good testing books have you read?


July 8, 2007

Too much testing?

Posted by Ben Simo

In a recent blog post, Jason Gorman provides some thoughts about the following question:

How much testing is too much?

To me, this is like asking "how much cheese would it take to sink a battleship?" There probably is an answer - a real amount of cheese that really would sink a battleship. But very few of us are ever likely to see that amount of cheese in one place in our lifetimes.

As Jason states, we may never encounter too much testing. However, I believe that we testers often include too much repetition in our testing and miss many bugs that are waiting to be discovered. This becomes especially likely when we limit our testing to scripted testing or put our test plans in freezers. Repeating scripted tests -- whether manual or automated -- is unlikely to find new bugs. To find new bugs, we testers need to step outside the path cleared by previous testing and explore new paths through the subject of our testing.


Executing the same tests over and over again is like a grade school teacher giving a class the same spelling list and test each week. The children will eventually learn to spell the words on the list and ace the test; but this does not help them learn to spell any new words. At some point, the repeated testing stops adding value.

If you have men who will only come if they know there is a good road, I don't want them. I want men who will come if there is no road at all.
- Dr. David Livingstone

Like Dr. Livingstone, I want testers who are willing to explore paths that have not been trampled by the testers who went before them. I want automated tests to go out like probe droids and bring back useful information. I want each manual tester on a team to think and explore the system under test in a different manner than the rest. There is a time and place for repeatable consistency, but that's just a part of testing. Real human users don't follow our test scripts. I don't want testing to be limited to testers and robots (automation) that follow scripts through pre-cleared paths.
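
A "probe droid" can be as simple as the following sketch of high-volume, semi-random automation. Everything here is invented for illustration (the naive truncate function stands in for the system under test); the point is that the probe wanders, applies only generic expectations rather than scripted expected results, and brings its findings back for a human to judge.

    import random

    def truncate(text):
        """Hypothetical function under test: a naive text truncator."""
        return text[:10]

    def probe(runs=1000):
        """Fire semi-random inputs at the function and report back surprises."""
        surprises = set()
        for _ in range(runs):
            candidate = random.choice([
                "",                             # empty string
                " " * random.randint(1, 50),    # whitespace only
                "x" * random.randint(1, 1000),  # very long input
                None,                           # missing value
            ])
            try:
                result = truncate(candidate)
                if len(result) > 10:            # generic expectation, not a scripted oracle
                    surprises.add("result longer than 10 characters")
            except Exception as error:
                surprises.add(type(error).__name__)
        return surprises

    print(probe())  # e.g. {'TypeError'} -- a human tester investigates from here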


Want to learn more about exploratory testing?


Try exploring the web with Google or your favorite search engine.


July 6, 2007

I'm a user and I just did that

Posted by Ben Simo


Michael Bolton just blogged about the all-too-common exchange between testers and developers that often goes something like this:

TESTER: I found this really important bug. Look at this. Let me show you ...

DEVELOPER: No user would do that.

TESTER: But, I'm a user and I just did that.

DEVELOPER: But, the real users won't do that.

Michael states that what the developer really means is "No user that I've thought of, and that I like, would do that on purpose." This is very true. Michael also points out that we testers are not the real users and may do things that the real users are not expected to do.

Thinking of users that the developer did not think of is an important service we testers provide. This becomes especially important when we put applications on the Internet. We need to consider the users and user behavior that the developers did not consider. I believe it is our responsibility as testers to tactfully provide the development team with the information they need to make an informed decision about what real users might do. We can't stop the conversation at "I just did that and I'm a user." We can communicate the likelihood and impact of users doing what the developer's friendly users won't do.

Michael's post reminds me of two such recent exchanges.

The first was a bug that resulted from developers (and project managers, and business folks) assuming that all users would perform an activity in only one of several possible ways. The application worked as expected if users behaved as the development team expected. However, if a user did not behave as expected, that user would be locked out of future access to the system. Paying customers aren't likely to be happy when they can't access the service for which they paid. After an initial exchange that went much like the sample above, it took only a few minutes of my time to document how easy it was to get locked out of the system and why real users would not be happy about it. It also took only a few minutes to walk a project manager through the process of locking himself out of his own system. The design was changed to account for the user behavior that was not originally considered.

The second was a bug that caused errors in a web application when a user selected list items in an unexpected manner. Further investigation revealed that a single user's actions could impact the performance for all users of all applications residing on the application server processing that user's activity. A single user could peg the application server's CPUs with the flick of a finger. The cause of the errors and performance issues was that this method of selecting list items triggered numerous page refresh requests as a user viewed the available options. Development's first response was that no user would do that. I then explained that I am a user of web applications and I often do what caused the problem. I wasn't even intentionally testing the functionality with the bug when I discovered it. It was only after demonstrating how easy it was to cause the problem, and its impact on other users, that the problem was addressed. Had I stopped at the start, the problem would not have been fixed. Had I just stood my ground and pointed fingers while insisting that it be fixed, the problem might not have been fixed.

When I hear that no user would do something, I usually respond with "I just did that, and I'm a user," but I don't stop there. Like good attorneys, we testers need to convince our audience of our case for fixing a bug or adding an enhancement.

Bugs need good solicitors.
