September 30, 2008

The Antonym of Testing

Posted by Ben Simo


"... one usually encounters a definition such as, 'Testing is the process of confirming that a program is correct. It is the demonstration that errors are not present.' The main trouble with this definition is that it is totally wrong; in fact, it almost defines the antonym of testing."

- Glenford Myers,

Software Reliability: Principles & Practices, 1976

People keep telling me that testing is a validation activity -- that the purpose of testing is to validate that the software meets all the specifications, has no errors, meets performance SLAs, meets expectations of anonymous users, or some other lofty goal.

I read about testing processes designed to validate software. I use testing tools built to support validation. I listen to service companies pitch testing services to validate software. I read about testing metrics built on the assertion that software systems can be proved correct. I attend testing presentations explaining the presenters' best practices for validation.

The trouble is that we cannot prove software correct. We cannot prove the absence of bugs. We cannot test every possible state and input. We cannot evaluate every possible output. We cannot fully understand the desires of stakeholders. We cannot prove that customers will be happy. We cannot prove that a software product will solve the problems it was built to solve. If all this were possible, I suspect insurance companies would find a way to make a profit selling software quality insurance.

"If you think you can fully test a program without testing its response to every possible input, fine. Give us a list of your test cases. We can write a program that will pass all your tests but still fail spectacularly on an input you missed. If we can do this deliberately, our contention is that we or other programmers can do it accidentally."

- Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
Testing Computer Software, Second Edition, 1999

Now, thirty-two years after Glenford Myers called testing to prove correctness the opposite of testing, we're surrounded by testing practices and tools based on proving correctness. The myth of provable correctness is alive and well.

Activities designed to try to prove correctness are the antonym of testing.

So if testing is not validation, what is testing? Testing is investigation: exploring software and communicating useful information about quality to decision makers.

"Testing is the process by which we explore and understand the status of the benefits and the risk associated with release of a software system."

- James Bach,
James Bach on Risk-Based Testing, STQE Magazine, Nov 1999


"Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information."

- Cem Kaner, James Bach, Bret Pettichord,
Lessons Learned In Software Testing: A Context-Driven Approach, 2002


"A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions."

- Bret Pettichord,
Don't Become the Quality Police, StickyMinds.com, 2002


Once we admit that we cannot prove the software correct, we can refocus our efforts on finding useful quality-related information. Instead of pretending to assure quality or validate correctness, we can gather and communicate useful information. Investigate the software. Find information about threats to the quality of the systems under investigation. Communicate that information in terms that matter to stakeholders. Help managers make informed decisions.


July 26, 2008

Pause at the Pump

Posted by Ben Simo

  • My fuel gauge is on empty.
  • I don't want to stop, but pull into the gas station.
  • I tell the children to stay in the car.
  • I get out of the car.
  • The children are asking me questions from inside the car that I can't hear well enough to understand.
  • I swipe my credit card in the pump's card reader.

  • The pump responds by prompting me to "SELECT WINDOW OR OUTSIDE".
  • The children are still talking to me. I still don't understand.
  • I pause and stare at the keypad.





Is there a problem here?

  • I pause to think for a moment.
  • I hear the children asking me questions that I still don't understand through the closed car windows.
  • I scan the keypad again.
  • I can't find the "OUTSIDE" button.


Recognizing Bugs

At CAST last week, Pradeep Soundararajan gave a Lightning Talk about the importance of testers being able to recognize a bug. Tests may be of little use if the tester doesn't recognize the bugs triggered by the test.

Sometimes bugs are obvious. Sometimes bugs are not clearly violations of requirements documents. This is especially true when it comes to human computer interaction problems.

Requirements are not always clear and objective.

So, do you recognize why I may have paused when I read the prompt on the gasoline pump? It wasn't the $4-per-gallon price of the fuel. It wasn't because I had to think about how I wanted to pay.

I paused because I didn't see a button labeled "OUTSIDE". Besides, I had already swiped my credit card, indicating that I wanted to pay at the pump. And even if I were to pay at the cashier's window, I would still be outside.

I wonder how many minutes are wasted each month prompting customers to select where they want to pay after they have swiped their credit card. I wonder how many other people pause and read twice in search of the button to indicate that they want to pay outside.

The developers and testers of the software in this pump may not have recognized this problem. Maybe the makers executed test scripts -- either automated or manual -- and were blind to the problem. Or maybe they didn't deem it important enough to change.

Sometimes familiarity with the technical details of a system can hide problems that are obvious to those who don't know the technology, the requirements documents, and the test scripts. As testers, we must be careful not to let our familiarity with a system blind us to bugs -- the things that bug our users.

Will you recognize a problem if you see it?


July 19, 2008

Images of CAST 2008

Posted by Ben Simo

The third annual Conference of the Association for Software Testing (CAST) happened this past week in Toronto, Ontario. This was not your typical sit-down, shut-up, and listen conference. Everywhere I looked, I saw testers conferring -- and that makes a good conference.


Day 1 - Tutorials


Day 2 - Conference


Day 3 - Conference


July 12, 2008

Announcing FROSST 1 - Front Range Open Space Software Testing Conference

Posted by Ben Simo

ThinkTalkTest, Ltd., co-founded by Ben Simo and Heidi Harmes-Campbell, is pleased to announce the first FROSST (Front Range Open Space Software Testing) conference.

FROSST 1 will be held on October 10th and 11th in Westminster, Colorado. The conference will be held on a Friday evening and all day Saturday.

The purpose of this conference is to facilitate collaboration amongst software testers in Colorado and beyond. Invite your friends and colleagues.

FROSST is free to all participants. Anyone with something to share or learn about software testing is welcome. Participation is limited to the first 150 testers that sign up. Sign up now to claim a spot.

FROSST is an open space conference. This means that the participants will create the agenda for the conference. This will be done leading up to the event through a Yahoo group and through discussions the first evening of the conference. If you have something to present, propose it. If there is something you want to learn, ask about it. This is a participatory conference.

Please see www.FROSSTCON.com for more information. Join the FROSSTCON Yahoo Group for the latest information and to participate in planning for the event.



June 12, 2008

Let's Talk Testing at CAST

Posted by Ben Simo

Attend CAST
I attended the Conference of the Association for Software Testing (CAST) last year and discovered something new: a conference that does a great job of mixing presentations by testing thought leaders and practitioners with conferring. CAST is now my favorite testing conference.

In addition to learning from expert keynote presenters (including Jerry Weinberg and Cem Kaner), I look forward to time spent with people that care about software testing.

Testers tend to question things. The organizers of CAST recognize this and allow for questioning of every presentation. I saw this in action last year. I heard audience members add insight to what was presented. I heard assertions challenged and discussed. I even observed discussion move into another room and continue beyond the scheduled session. I observed testers teaching and learning from one another. Discussions I usually see happen in the hotel bar after scheduled conference activities occurred during the conference. Conferees conferred.

Although the early-bird registration has passed, you can still get the reduced pricing by joining AST. CAST is about half the cost of typical testing conferences. Read below and check out www.CAST2008.org for more information.

See you in Toronto.



FOR IMMEDIATE RELEASE

High Demand Causes Three-Day Software Testing Conference to Add Fourth Day

TORONTO -- June 9, 2008 -- The Association for Software Testing announced Monday an update to its traditional 3-day conference program -- a fourth day.

The decision came from conference organizers on Monday because of demand for Jerry Weinberg's Monday, July 14 tutorial titled "The Tester's Communication Clinic", which sold out last week. Weinberg has agreed to host the tutorial again on Thursday, July 17.

Weinberg, whom many consider the software testing industry's first tester, has a 50-year track record of influencing the craft of exposing bugs and issues in software. Among his notable accomplishments was establishing the first separate software testing group, aiding in producing life-critical software for Project Mercury.

Since then he has produced hundreds of articles and over 30 books. According to his website (www.geraldmweinberg.com), his book "The Psychology of Computer Programming" published in 1971 is considered "the beginning of the study of software engineering as human behavior." The conference will also be the site for the launch of his new book, "Perfect Software and Other Testing Myths."

Weinberg will also be a keynote speaker, presenting a talk titled "Lessons from Past to Carry into the Future", about the steps needed to make software testing a bona fide profession.

"Jerry's a legend in our business," said Jon Bach, AST Vice President for Conferences, "and by helping us in this way, he seems to understand that like a tester on a software project, you adapt your strategy to act on emerging information. He always seems to practice what he preaches, and maybe that's why he continues to be such an influence."

Weinberg isn't the only draw. The conference also features a keynote from industry influencer Cem Kaner, author of the best-selling book on software testing ("Testing Computer Software"). Kaner, a professor at the Florida Institute of Technology, coined the term "exploratory testing" and is actively involved in issues surrounding the integrity of electronic voting machines. His keynote is titled: "The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers."

Held in Toronto, Canada -- a city chosen for its diversity in culture, businesses, educational institutions and the arts -- the conference theme is: "Beyond the Boundaries: Interdisciplinary Approaches to Software Testing". In that spirit, it will include a keynote by Rob Sabourin and his wife Anne, titled "Applied Testing Lessons from Delivery Room Labor Triage" about their experiences helping women give birth at the Royal Victoria Hospital in Montreal.

Also following the interdiscipline theme is noted performance testing expert Scott Barber who will be presenting a talk titled "Lessons Learned from Civil Engineering." Weinberg, Kaner, Sabourin and Barber are among the few notables in the software testing industry in attendance at CAST. For the full program, see http://www.cast2008.org.

The conference expects to draw software professionals from 60 companies around the world. Registration is open to AST members and non-members. Fees are posted at http://www.associationforsoftwaretesting.org/drupal/CAST2008/Registration .


May 20, 2008

Is There A Problem Here?

Posted by Ben Simo




msn video

To use this product, you need to install free software

This product requires Microsoft Internet Explorer 6 with Microsoft Media Players 10 and Macromedia Flash 6 or higher versions, or Mozilla Firefox 1.5 with Macromedia Flash 8, or Safari 2.0.4 with Macromedia Flash 8. To download these free software applications, click the links below and follow the on-screen instructions.

Step 1: download firefox 1.5
download firefox 1.5

Step 2: Download Macromedia Flash Player
Macromedia Flash player is free to download.
If still having problems, uninstall Flash and then re-install Flash.


Once the installations are complete, reload this page.


May 10, 2008

Aggravation Testing

Posted by Ben Simo

An example:



How Long Do I Have To Wait?



A few hours? I don't have hours. I am sitting in the car using borrowed WiFi from a campground. I had to seek out Internet access to use software that came on a CD. I finally found Internet access, and now it says I may have to wait several hours. Can I abort if it takes longer than I have? What happens if I lose my Internet access while the firmware update is underway?

I'm already frustrated with this device. I'm already frustrated with the software. I was hoping that a firmware update might fix bugs and usability issues on the device itself. I have reached the tipping point. This thing is going back to the store.


May 4, 2008

Terrified by Improvisation

Posted by Ben Simo

[Improvisational comedy] involves people making very sophisticated decisions on the spur of the moment, without benefit of any kind of script or plot. That's what makes it so compelling -- and to be frank -- terrifying. ... What is terrifying about improv is the fact that it appears utterly random and chaotic. It seems as though you have to get up onstage and make everything up, right there on the spot. But the truth is that improv isn't random or chaotic at all. ... Improv is an art form governed by a set of rules... How good people's decisions are under the fast-moving, high-stress conditions of rapid cognition is a function of training, rules, and rehearsal.


Now, reread the quote above and replace improv with exploratory testing. See a connection? Just as improvisational theater may appear random and chaotic (although entertaining) to the uninitiated, exploratory testing can appear random and chaotic to those who have been taught to rely on scripts. Good improv and good exploratory testing are neither. There are rules -- heuristics.

Heuristics are rules of thumb that help solve problems. In improvisational comedy, there are rules. These are not hard rules that guarantee comedy. They are rules that skilled improv actors can use to help keep things funny. Sometimes these rules don't work, and actors have to adapt. And because they aren't following a script, they can adapt when things don't work out.

Some parts of improv are scripted. I am a fan of the television show Whose Line Is It Anyway? Each comedy sketch in the show is given a structure (think charter) to direct the improvisation. This structure defines and restricts (think script) specific components of each sketch while leaving the bulk of the activity open to each actor to adapt to what happens as the sketch plays out.

While we do not see it on screen, I suspect that a great deal of training, rules, and rehearsal went into the production of Whose Line. The show did suffer from an occasional guest participant (usually a trained script actor) who was not as skilled at improv as the regulars. However, other guests (sometimes not actors) who understood the rules of improv have helped produce some of the funniest sketches.

Good exploratory testing works in the same way. Skilled exploratory testers set out with a charter -- a goal for each testing session. Skilled exploratory testers use heuristics to help them learn about the systems they test. Skilled exploratory testers practice.

Improvisation can have scripted aspects and rules that guide it. It is not chaotic and random. It is smart people using simple rules to make quick decisions and adapt to a changing environment under pressure.


May 3, 2008

Don't be fooled by the green lights

Posted by Ben Simo


If we're uncertain about the reliability and value of code, writing more code seems like a highly fallible and paradoxical way to resolve the uncertainty.
- Michael Bolton

There is a disturbing trend in software testing. This is a trend towards redefining test to be code and testing to be a coding activity.

In his book, Testing Object Oriented Systems, Robert Binder writes "Manual testing, of course, still plays a role. But testing is mainly about the development of an automated system to implement an application-specific test design." While this book contains a great deal of useful information about test design and test automation, I wholeheartedly disagree with this statement -- and I make a living developing test automation.

I find Bob Martin's statements about manual testing being immoral to be very disturbing. I know that he is referring to manual scripted testing, but that's not what I hear repeated. I can also think of some scripted testing that requires thinking manual testers and would be immorally expensive or dangerous to automate. (Context matters.)

I am concerned when I hear Ken Schwaber refer to "QA" as people doing incredible things in a "hopeless profession". (I've also heard him praise smart testers.) I am also concerned by QA people trying to defend their role by becoming process police.

All too often, I hear and read the words test and testing being tossed around with the assertion that all things called tests are the same, and are therefore interchangeable. If all things called testing were equal, then I would be the first to lobby for replacing skilled testers with developers that can create code called tests. TDD may be a great tool for helping developers build whatever they decide to build, but it is not the same as testing focused on providing stakeholders with information about value. These are very different things. (See What is Software Testing? for a sampling of testing diversity.)

Developers create. Testers critique. Good developers test what they create. The Agile and TDD emphasis on developers testing their own work is wonderful. I believe that a developer who is good at testing their own work is more valuable than one that is not so good at testing. However, the idea that code-centered TDD can replace value-centered testing by skilled testers is bad.

Testing is much more than exercising code. It is about finding and communicating useful information about value with limited time and resources. At the heart of good testing is a thinking person that questions the software and the people designing and building the software. The tools of testing are secondary.

It is difficult for a person to be both creator and critic. These require different skills and focus. In my experience, leaning too much towards create or critique hampers the other.

As a teenager, I created a software management program for floppy disk based computers. I was proud of my creation. My creation took advantage of new technology and had features that similar products did not have. I tested my creation. I refactored my code many times. The code was clean. The program was fast. I used my creation on a daily basis for over a year. I shared it with close friends. I thought I had created something really cool. I submitted my creation to a company that was soliciting programs for publication. My baby was rejected. It was not rejected due to being poorly designed, coded, or tested. It was rejected because it was deemed to not be of value to enough of the publisher's customers. I may have discovered this earlier if I had requested input from more than my closest friends.

Just like an American Idol contestant that can't sing, we can save ourselves time, money, and embarrassment if we solicit the input of good critics before the world is watching.

I want great developers that can create beautiful music on my development team. I also want a few Simon Cowells and James Bachs to let us know when we may be fooling ourselves.


April 4, 2008

A Good Practice

Posted by Ben Simo


The Association for Software Testing (AST) is a professional organization dedicated to advancing the understanding and practice of software testing. The AST provides forums for academics, students, and testing practitioners to discuss testing. AST does this through online forums, workshops, education programs, and conferences. The third annual Conference of the Association for Software Testing (CAST) provides a great forum for face-to-face conferring. This is not your typical conference where experts talk at the masses. This is the software testing conference that puts the confer back in conference.

Ever sit in a presentation about testing and think anything like the following?

  • Yeah that works for you but it'll never work in my situation.
  • What do you mean by X?
  • She must work with idiots.
  • How does he know what he says? I want to see data.
  • My management would never go for it.
  • What planet is he from?
  • You're full of it.

You are not only free to think these things at CAST; you are free to question the presenters. Time is built into the program for facilitated discussion of every presentation.

If you'd like to become a better software tester, join AST and come to CAST.

If you'd like to meet and confer with peers from around the world, join AST and come to CAST.

If you'd like to meet and confer with testing experts, join AST and come to CAST.

If you'd like to be challenged, join AST and come to CAST.

If you'd like to hear Gerald Weinberg talk about the past, present, and future of software testing, join AST and come to CAST.

If you can't afford those other testing conferences, join AST and come to CAST -- it's about half the price of other conferences.

If you'd like to compete against other testers, join AST and come to CAST.

If you care about software testing, join AST and come to CAST.


Only through judgment and skill,
exercised cooperatively throughout the entire project,
are we able to do the right things at the right times
to effectively test our products.
context-driven-testing.com


While I am not a believer in best practices, I believe it's a really good practice to participate in CAST 2008. See you in Toronto.


March 2, 2008

Retraining the unskilled to code software

Posted by Ben Simo


I stumbled across a 46-year-old newspaper article about how automation is changing business. The following statement caught my attention.


"Unskilled workers can then be retrained to handle peripheral jobs in the EDP system such as coding, card punching and so on."
Computor Invasion Scares The Unskilled,
WINNIPEG FREE PRESS, 1962



Times sure have changed. If retraining unskilled workers to code software was a viable option, then they must have been coding some pretty simple software. Maybe "coding" really referred to the process of entering code designed by someone else. Maybe "coding" meant data entry.

Regardless of what "coding" meant in 1962, computers and the software we create today are more complex than they were 46 years ago. It is too bad that some who would never think of coding as unskilled work still seem to think of testing as work for the not-so-well-skilled. Good developers and testers are good thinkers.

"You've got to be smarter to run a company with a computor than without one. The information on which you will base your decisions comes at you faster. If you're going to take advantage of this, you have to think faster, more decisively and more clearly."
- George Aitken, Vice-President and Comptroller,
Great West Life Assurance Co, 1962
Computor Invasion Scares The Unskilled


January 14, 2008

Evidence That Quality Has Everything To Do With Value

Posted by Ben Simo

“Quality is value to some person.”

Gerald Weinberg,
Quality Software Management – Systems Thinking


This morning, Jason Gorman's blog post titled Proof That Value Has Little To Do With Quality? caught my attention. This title contradicts my definition of Quality. To me, Quality is all about value to stakeholders.

Quality is not about implementing the best development practices. Quality is not about writing solid code. Quality may not be about impressive features. Quality may have no relation to elegance. Quality may not even be reliable. Quality may be cheap or it may be expensive. Quality may be well planned or it may be haphazard.

Quality is all about value. Quality is about value to people that matter.

Jason references an interesting article about a web site that started as a learning exercise and "seems to come from the Anti-Perfectionist School of Design", yet is earning its creator millions of dollars annually. In spite of many flaws, this web site is profitable because users find it valuable and users bring advertising dollars. I consider this to be a high quality web site in spite of its obvious flaws because it has value to people who matter. Instead of viewing this as an example of value having little to do with quality, I see this as a great example of Quality having everything to do with value. I suspect that Jason and I define Quality differently. This story is an example of how value sometimes has very little to do with all the other things we often call Quality.

Perhaps my thinking is too touchy-feely for those who think we need to measure and assure Quality through quantitative metrics and process enforcement. By its very nature, quality is subjective. Sometimes we can quantify the results of Quality, as in the $10 million in annual advertising profits. I suspect that some of you are subjectively estimating how better metrics and process might lead to better profits.

The real measure of Quality is a measure of value (not necessarily quantitative value) to those who matter.

"As professionals, we have no real control over the ultimate value of the software we create. And neither do our customers, or requirements analysts, or product owners, or whoever it is who's been charged with figuring out what the best use of the budget would be. It's all guesswork, like choosing lottery numbers or selecting which horse to bet on."
- Jason Gorman,
Proof That Value Has Little To Do With Quality?

While there is guesswork in determining who matters and what they value, it is not as random as selecting lottery numbers. Better understanding of who matters and what they value can help us reduce the guesswork. Ongoing dialog can bring better understanding. If Quality is nothing more than a lottery, then we might as well limit ourselves to BUFD and scripted manual testing.

Interactions, collaboration, and responding to our changing understandings can help us take control over Quality.

"Above all, listen to what your customers are telling you about Quality. ... Your customers are in a perfect position to tell you about Quality, because that's all they're really buying. They're not buying a product. They're buying your assurances that their expectations for that product will be met. ... Your customers may not have all the hard business facts. They may not be aware of your specs and your standards and your inspection reports ... They may not be able to give you a precise definition of Quality, but one thing's for certain -- they know it when they see it."

-John Guaspari,
I Know It When I See It: A Modern Fable About Quality

And, as Jason rightly points out, what satisfies users (and the business) today may not satisfy them tomorrow. Keep the dialog going.


January 11, 2008

Regular Expressions

Posted by Ben Simo

(bb|[^b]{2}); [Tt]hat is the \?\.

Regular expressions are great tools for testers. I have found them useful for describing GUI objects to GUI test automation tools. I have found them useful for automation results validation. I have found them useful for extracting data I care about from voluminous log files. I've also found them useful for manipulating data.
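As a small illustration of the log-extraction use, here is a sketch in Python. The log format, field names, and pattern are invented for the example; the point is how one expression pulls only the lines and fields you care about out of a larger text.

```python
import re

# Hypothetical log lines -- the format here is invented for illustration.
log = """\
2008-01-11 09:15:02 INFO  login ok user=alice
2008-01-11 09:15:09 ERROR timeout user=bob host=db1
2008-01-11 09:16:44 ERROR timeout user=carol host=db2
"""

# Pull out only the ERROR lines, capturing the time and the user.
pattern = re.compile(r"^\S+ (\S+) ERROR \S+ user=(\S+)", re.MULTILINE)

for time, user in pattern.findall(log):
    print(time, user)   # prints "09:15:09 bob" then "09:16:44 carol"
```

The INFO line never matches because the literal ERROR fails, so the expression acts as both a filter and an extractor in one pass.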

What are regular expressions? Regular expressions are patterns for finding text of interest. They are supported by many test tools, system utilities, text editors, and programming languages.

Regular expressions can include the following metacharacters to define patterns.

  • ^ Matches the beginning of a line
  • $ Matches the end of a line
  • . Matches any single character
  • * Matches zero or more occurrences of the preceding character
  • \ Escapes the following metacharacter
  • ? Matches zero or one occurrence of the preceding character
  • + Matches one or more occurrences of the preceding character
  • [ ] Defines a character class
  • [^ ] Defines a negated character class
  • \{ \} Matches a specific number or range of occurrences of the preceding character
  • \( \) Treats the expression between \( and \) as a group
  • | Or; matches one of the alternative expressions
  • \< Matches the beginning of a word
  • \> Matches the end of a word
  • \b Matches a word boundary
  • \B Matches a position that is not a word boundary

* Many tools do not support all of these metacharacters
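The word-boundary metacharacters are easiest to see in a quick sketch. Python's re module supports \b and \B but not the \< and \> forms, so this uses \b on both sides of the word:

```python
import re

# \b matches only at a word boundary; \B matches only where there is none.
assert re.search(r"\bcat\b", "the cat sat")        # whole word: matches
assert not re.search(r"\bcat\b", "concatenate")    # embedded: no match
assert re.search(r"\Bcat\B", "concatenate")        # embedded only: matches
```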

Here are some example regular expressions:

“frog”
  • Matches “frog”, “bullfrog”, and “tree frog”; but not “Frog”

“^Frog”
  • Matches “Froggy went a courting”, but not “Quality Frog”

“frog$”
  • Matches “frog”, “bullfrog”, and “tree frog”; but not “froggy” or “The frog sat on a log.”

“.at”
  • Matches “cat”, “rat”, “bat”, “goat”, and “gnat”

“20*5”
  • Matches “2005”, “20005”, “20000000000000000000000005”, “25”; but not “2ABC5” or “2006”

“Spee?d”
  • Matches “Sped” and “Speed”; but not “Speeed”

“20+5”
  • Matches “2005” and “20005”, but not “25”

“200[5-9]”
  • Matches “2005”, “2006”, “2007”, and “2009”; but not “2004”

“199[0-9]|200[0-9]”
  • Matches years 1990 through 2009.

“[0-9][0-9]*\.[0-9][0-9][^0-9]”
  • Matches “1.29%” and “1234.55%”; but not “1.299” or “.29”. (The trailing [^0-9] must consume a character, so many tools will not find a match when the number sits at the very end of a line.)

“A[LKRSZ]|C[AOT]|D[CE]|F[LM]|G[AU]|HI|I[ADLN]|K[SY]|LA|M[ADEHINOPST]|N[CDEHJMVY]|O[HKR]|P[ARW]|RI|S[CD]|T[NX]|UT|V[AIT]|W[AIVY]”
  • Matches any valid 2-letter US postal state or territory name abbreviation.
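If you want to check these examples yourself, most of them can be tried directly in Python's re module (note that Python uses the unescaped { } and ( ) forms rather than the grep-style \{ \} and \( \) listed above):

```python
import re

# A few of the example patterns above, verified with Python's re module.
assert re.search("frog", "bullfrog")
assert not re.search("frog", "Frog")              # case-sensitive by default

assert re.search("^Frog", "Froggy went a courting")
assert not re.search("^Frog", "Quality Frog")

assert re.search("frog$", "tree frog")
assert not re.search("frog$", "froggy")

assert re.search("20*5", "25")                    # * allows zero zeros
assert not re.search("20*5", "2ABC5")

assert re.search("Spee?d", "Sped")
assert re.search("Spee?d", "Speed")
assert not re.search("Spee?d", "Speeed")

assert re.search("199[0-9]|200[0-9]", "2003")
```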


Want to learn more?

Take a look at my slides from last night's presentation to the Denver Mercury User Group. Check out Wikipedia. Or try a Google Search. If you ask bb|[^b]{2}, check out Think Geek.

Ha[p]{2}y T[ea]sting\.

