November 27, 2007

What is Software Testing?

Posted by Ben Simo



I know what I mean when I say testing, but what do you mean when you say testing?

Bill Cosby does a comedy routine on his 1964 album "I Started Out As A Child" about a patient in surgery (under local anesthesia) hearing the doctor say "Oops!" The patient says "What did you say?! What did you say?! Did you say oops?! I know what I've done when I say oops! What did you do saying oops there?!"

Hopefully none of us ever hears our doctor say "Oops!" while we are being treated. As ambiguous as "Oops" is, I suspect that any English speaker hearing it understands that it indicates that the speaker has accidentally done something bad. However, some other words with seemingly less ambiguous definitions can lead to misunderstandings and conflict amongst people.

This confusion can be caused by both the natural ambiguity of language and each person's experience and understanding. I believe the words "test" and "testing" are great examples of words that should cause us to ask what definitions are being used.

"Wittgenstein tried to show philosophers the way out of the fly bottle of philosophy by getting them to pay closer attention to the meanings of words. Just try to define a simple word like 'game'. Wittgenstein pointed out that if you say a game has to involve luck or have a winner or be fun, you will always be able to find an example of a game that doesn't fit your definitions. When philosophers give words overly precise or restrictive definitions, when they ignore the complexity and the irredeemable vagueness of language, they often fall into confusion."
-John Mighton,
The End of Ignorance

Some think we can create a common definition of testing. There are tester certification programs that attempt to standardize vocabularies. This may appear to be a good idea on the surface; however, software testing is too broad a subject to be stuffed into a one-size-fits-most box. First, the testing label is applied to many different things that serve different purposes. Second, testing can have different meanings in different situations. Third, there are differences of opinion.

Differences in purpose, context, and opinion lead to a vast spectrum of practices related to test planning, scripting, exploring, automating, managing, reporting, outsourcing, hiring, measuring, and many other activities. While I do believe that standardized vocabularies can be useful within specific contexts, I am not one to argue that we need to standardize on a single definition that meets everyone's needs. Instead, I argue that an understanding of the differences of context and opinion can help us better understand our own ideas and practices. Understanding does not require agreement. And as a bonus, we can learn to better relate to those with different ideas.

In my quest to better understand what we all might mean when we say testing, I have been recording various statements that I believe to be related to testing. Most of these statements came from published works and internet postings. Below is a list of quotations that I believe are related to testing -- presented in an approximate chronological order.

This list contains references to many different types of testing. This list contains things that apply to some situations but not others. This list contains ideas that can co-exist as part of a single project. This list contains some opposing views that cannot be logically combined with other ideas. Some of these statements ring true and confirm my own thinking. Some I find a bit ridiculous. Some challenge me to think about a time and place in which they might be true.

How do these statements confirm and challenge your thoughts about testing?

If you disagree with a statement, try to think of a situation in which the statement might make sense. Search this list for examples that don't match your definition. If you think a quote is not related to testing, consider how it might relate to testing.


"A man paints with his brains and not with his hands."
- Michelangelo
(1475-1564)


"No battle plan survives contact with the enemy."
- Helmuth von Moltke the Elder (1800-1891)


"Experience has shown that such mistakes are much more difficult to avoid than might be expected. ... Since much machine time can be lost in this way a major preoccupation of the EDSAC group at the present time is the development of techniques for avoiding errors, detecting them before the tape is put on the machine, and locating any which remain undetected with a minimum expenditure of machine time."
- Maurice V. Wilkes, David J. Wheeler, Stanley Gill,
The Preparation Of Programs For An Electronic Digital Computer, 1951


"Those who regularly code for fast electronic computers will have learned from bitter experience that a large fraction of the time spent in preparing calculations for the machine is taken up in removing the blunders that have been made in drawing up the programme. With the aid of common sense and checking subroutines the majority of mistakes are quickly found and rectified. Some errors, however, are sufficiently obscure to escape detection for a surprisingly long time."
- R. A. Brooker, S. Gill, and D. J. Wheeler,
The Adventures of a Blunder, 1952


"Special simplified data, such as trivial solutions or hand-calculated intermediate answers to one of the sets of actual problems for computer solution can be used to check out the computer. The data and program codes are entered into the computer memory. The computer is started, and when it comes to a breakpoint, the operator checks the results. If they don't check, the operator and/or programmer must determine what is wrong. Once the program has been checked out completely in this fashion, the programmer can have confidence in the results for other problem data submitted to the computer."
- Ivan Flores,
Computer Logic: The Functional Design of Digital Computers, 1960


"Errors plague us, but hidden errors make our job impossible. One of our recurring problems lies not in finding errors but in not finding errors. We must be alert for them; we can never be complacent about our results."
- Herbert D. Leeds and Gerald M. Weinberg,
Computer Programming Fundamentals, 1961


"In the hardware world, maintenance means the prevention and detection of component failure caused by aging and/or physical abuse. Since programs do not age or wear out, maintenance in the software world is often a euphemism for continued test and debug, and modification to meet changing requirements."
- David M. Weiss,
The MUDD Report: A Case Study of Navy Software Development Practices, 1975


"Probably the major weakness in all software testing methods is their inability to guarantee that a program has no errors."
- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"The goal of the testers is to make the program fail. If his test case makes the program or system fail, then he is successful; if his test case does not make the program fail, then he is unsuccessful."
- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"... the 'best' approach varies from organization to organization and from program to program." - Glenford Myers,
Software Reliability: Principles & Practices, 1976


"... one usually encounters a definition such as, 'Testing is the process of confirming that a program is correct. It is the demonstration that errors are not present.' The main trouble with this definition is that it is totally wrong; in fact, it almost defines the antonym of testing."

- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"A good test case is a test case that has a high probability of detecting an undiscovered error, not a test case that show that the program works correctly."

- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"Avoid non reproducible or on-the-fly testing."

- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"Never use throw-away test cases (except on a throw-away program). Also, test cases should be documented sufficiently and stored in such a form to allow them to be reused by anyone."

- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"Testing, particularly test case design, is the area of software development that demands the most creativity."

- Glenford Myers,
Software Reliability: Principles & Practices, 1976


"The development of software systems involves a series of production activities where opportunities for injection of human fallibilities are enormous. Errors may begin to occur at the very inception of the process where the objectives ... may be erroneously or imperfectly specified, as well as [in] later design and development stages. ... Because of human inability to perform and communicated with perfection, software development is accompanied by a quality assurance activity."

- Michael S. Deutsch,
Verification and Validation: Realistic Project Approaches, 1979


"Although one can discuss the subject of testing from several technical points of view, it appears that the most important considerations in software testing are issues of economics and human psychology. In other words, such considerations as the feasibility of 'completely' testing a program, knowing who should test a program, and adopting the appropriate frame of mind toward testing appear to contribute more toward successful testing than do the purely technological considerations."

- Glenford J. Myers,
The Art of Software Testing, 1979


"Testing is the process of executing a program with the intent of finding errors. ... This definition of testing has many implications ... it implies that testing is a destructive process, even a sadistic process, which explains why most people find it difficult."

- Glenford J. Myers,
The Art of Software Testing, 1979


"TESTING PRINCIPLES ...

* A necessary part of a test case is a definition of the expected output or result. ...

* A programmer should avoid attempting to test his or her own program. ...

* A programming organization should not test its own programs. ...

* Thoroughly inspect the results of each test ... Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions. ...
* Examining a program to see if it does not do what it is supposed to do is only half of the battle. The other half is seeing whether the program does what it is not supposed to do. ...
* Avoid throw-away test cases unless the program is truly a throw-away program. ...
* Do not plan a testing effort under the tacit assumption that no errors will be found. ...
* The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section. ...
* Testing is an extremely creative and intellectually challenging task."
- Glenford J. Myers,
The Art of Software Testing, 1979


"Test means execution, or set of executions, of the program for the purpose of measuring its performance. That a program was executed with no evidence of error is no proof that it contains no errors; program errors are sensitive to the specifics of the data being processed."

- Robert Dunn and Richard Ullman,
Quality Assurance for Computer Software, 1982


"The goal of testing ought to be the uncovering of defects within the program."

- Robert Dunn and Richard Ullman,
Quality Assurance for Computer Software, 1982


"One may prove the code correct with respect to its specification, but is the specification itself correct?"

- Robert Dunn and Richard Ullman,
Quality Assurance for Computer Software, 1982


"Testing is any activity aimed at evaluating an attribute of a program or system. Testing is the measurement of software quality."

- Bill Hetzel,
The Complete Guide to Software Testing, 1983


"1. Testing starts with known conditions, users predefined procedures, and has predictable outcomes. Only whether or not the program passes the test is unpredictable.
...
2. Testing can and should be designed and scheduled beforehand. ...
3. Testing is a demonstration of error or apparent correctness. ...
4. Testing proves a programmer's failure. ...

5. Testing should strive to be predictable, dull, constrained, rigid, and inhuman.
...
6. Testing, to a large extent, can be designed and accomplished in ignorance of the design. ...

7. Testing can be done by an outsider; ..."

- Boris Beizer,
Software System Testing and Quality Assurance, 1984


"Testing is the act of executing tests. Tests are designed and then executed to demonstrate correspondence between an element and its specification. There can be no testing without specifications of intentions."
- Boris Beizer,
Software System Testing and Quality Assurance, 1984


"If the objective of testing were to prove that a program is free of bugs, then not only would testing be practically impossible, but it would also be theoretically impossible."

- Boris Beizer,
Software System Testing and Quality Assurance, 1984


"Testing is like playing pool. There's real pool and kiddie pool. In kiddie pool, you hit the balls in whatever pocket they happen to fall into, you claim as the intended pocket. It's not much of a game and although suitable to ten-year-olds it's hardly a challenge. The object of real pool is to specify the pocket in advance. Similarly for testing. There's real testing and kiddie testing. In kiddie testing, the tester says, after the fact, that the observed outcome was the intended outcome. In real testing the outcome is predicted and documented before the test is run. If the programmer cannot reliably predict the outcome for a specified path, then the programmer has misconceptions as to what it is the routine should be doing and is doing. Such misconceptions perforce lead to bugs."

- Boris Beizer,
Software System Testing and Quality Assurance, 1984


"For certain kinds of testing, it is impossible to automatically record the results of tests. ... Whenever possible, however, an audit trail of both stimuli and results should be mechanized."

- Robert H. Dunn,
Software Defect Removal, 1985


"A simplistic criterion is that one has successfully completed running the full set of planned tests. Unfortunately, this ignores the opportunity to learn from the tests themselves how well they were designed to under latent defects. Moreover, it fails to account for the inability to successfully complete all tests."

- Robert H. Dunn,
Software Defect Removal, 1985


"Since certain tests may not be precisely repeatable with respect to input values, there is no guarantee that the final run through the test series will not produce problems previously unseen."
- Robert H. Dunn,
Software Defect Removal, 1985


"Both design and testing are components of software development. Design may be described as a thought-intensive task which is the pivotal point of any fair size project. ... The other component of software development is testing. It may be described as the process of executing a program with the objective of discovering software errors. ... In testing, the purpose of the software tester is to make the program under consideration fail."
- B. S. Dhillon,
Reliability in Computer System Design, 1987


"The Testing stage involves full-scale use of the program in a live environment. It is here that the software and hardware are shaken down, anomalies of behavior are eliminated, and the documentation is updated to reflect final behavior. The testing must be as thorough as possible. The use of adversary roles at this stage is an extremely valuable tool because it ensures that the system works in as many circumstances as possible."
- Henry Ledgard,
Software Engineering Concepts: Volume 1, 1987


"Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that is meets its required results."

- Bill Hetzel
The Complete Guide to Software Testing, 2nd Edition, 1988


"A separate group has the sole responsibility to devise, perform, and report on the results of tests. With no knowledge of the design, this group devises tests based on the requirements specification. It sends components that fail tests back to their developers with descriptions of failures and no attempts to diagnose the reasons for failures."
- Office of Technology Assessment, U.S. Congress,
SDI: Technology, Survivability, and Software, 1988



"The test process resembles a reversal of the design process. Subprograms are first tested individually, then combined into components for integration tests. Components are integrated again and tested as larger components, the process continuing until all components have been combined into a complete system."

- Office of Technology Assessment, U.S. Congress,
SDI: Technology, Survivability, and Software, 1988



"There's a myth that if we were really good at programming, there would be no bugs to catch. If only we could really concentrate, if only everyone used structured programming, top-down design, decision tables, if programs were written in SQUISH, if we had the right silver bullets, then there would be no bugs. So goes the myth."

- Boris Beizer,
Software Testing Techniques, 2nd Edition, 1990


"The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component."

- IEEE Standard Glossary of Software Engineering Terminology, 1990



"Consider this: If a small change made to the code produces no change in the results of any of the tests, we have evidence of insufficiency of the full set [of tests]."

- Robert H. Dunn,
Software Quality: Concepts and Plans, 1990


"In fact, typically the bulk of software quality work happens in testing, wherein attempt after partially successful attempt is made to get newly produced software to execute correctly. It seems to be human nature to err -- and frequently."

- Robert L. Glass,
Building Quality Software, 1992


"In many organizations, SQA is just a fancy name for testing. And testing is what programmers do to find bugs in their programs. Unfortunately, there are a number of problems with this classical view: ..."
- Edward Yourdon,
Decline and Fall of the American Programmer, 1992


"Testing is the execution of software to determine where if functions incorrectly."

- Robert L. Glass,
Building Quality Software, 1992


"There are three phases of [testing]: the design of test cases, the execution of test cases, and the analysis of test case results."

- Robert L. Glass,
Building Quality Software, 1992


1. A good test has a high probability of finding an error.
2. A good test is not redundant.
3. A good test should be 'best of breed'.
4. A good test should be neither too simple nor too complex.

- Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
Testing Computer Software, 2nd edition, 1993


"Because each input to and each response from the system is not clearly defined, disagreements can arise regarding the appropriateness of an input or the correctness of a response."

- Thomas C. Royer,
Software Testing Management: Life on the Critical Path, 1993


"Testing helps detect defects that have escaped detection in the preceding phases of development. Here again, the key byproduct of testing is useful information about the number of and types of defects found in testing. Armed with this data, teams can begin to identify the root causes of these defects and eliminate them from the earlier phases of the software life cycle."
- Lowell Jay Arthur,
Improving Software Quality: An Insider's Guide to TQM, 1993


"[Unit] Testing is a hard activity for most developers to swallow for several reasons: Testing's goal runs counter to the goals of other development activities. The goal is to find errors. A successful test is one that breaks the software. ... Testing can never prove the absence of errors -- only their presence. ... Testing by itself does not improve software quality. ... Testing requires you to assume that you'll find errors in your code. ..."
- Steve McConnell,
Code Complete: A Practical Handbook of Software Construction, 1993


"But, if we have no knowledge of the way a software component was constructed, then to be absolutely sure that it works, we must prepare a test case for every possible input condition. For anything other than the a completely trivial program, that is obviously impossible."

- Thomas C. Royer,
Software Testing Management: Life on the Critical Path, 1993


“One of the saddest sights to me has always been a human at a keyboard doing something by hand that could be automated. It’s sad but hilarious.”

- Boris Beizer,
Black-Box Testing: Techniques for Functional Testing of Software and Systems, 1995


"Testing is an unnecessary and unproductive activity if its sole purpose is to validate that the specifications were implemented as written. ... testing as performed in most organizations is a process designed to compensate for an ineffective software development process. It is unrealistic to develop software and not test it. The perfect development process does not exist ..."
- William Perry,
Effective Methods for Software Testing, 1995


"A tester is given a false statement ('the system works') and has the job of selecting, from an infinite number of possibilities, an input that contradicts the statement. ... [You want to find] the right counterexample with a minimum of wasted effort."
- Brian Marick,
The Craft of Software Testing: Subsystem Testing, 1995


"Testing is obviously concerned with errors, faults, failures, and incidents. A test is the act of exercising software with test cases. There are two distinct goals of a test: either to find failures, or to demonstrate correct execution."

- Paul C. Jorgensen,
Software Testing: A Craftsman's Approach, 1995


"The penultimate objective of testing is to gather management information."
- Boris Beizer,
Black-Box Testing: Techniques for Functional Testing of Software and Systems, 1995


"Planning entails the future, and in dealing with the future we are dealing with uncertainty. A fundamental reality of planning, then, is that it involves uncertainty. This means that our very best plans are estimates, mere approximations of what the future may hold. ... More often, though, our estimates are quite rough, because what we want to do has never before been done in precisely the way we need. This is especially true on information-age projects. In carrying out these novel projects, we are to a large extent trailblazers, and the maps we devise (our plans) are much like the maps of the fifteenth-century Portuguese explorers, filled with broad, vague spaces labeled terra incognita."
- J. Davidson Frame,
Managing Projects in Organizations: How to Make the Best Use of Time, Techniques, and People, Revised Edition, 1995


“Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.”
- James Bach,
Test Automation Snake Oil, 1996


"The most common quality-assurance practice is undoubtedly execution testing, finding errors by executing a program and seeing what it does."
- Steve McConnell,
Rapid Development: Taming Wild Software Schedules, 1996


Software testing is the action of carrying out one or more tests, where a test is a technical operation that determines one or more characteristics of a given software element or system, according to a specified procedure. The means of software testing is the hardware and/or software and the procedures for its use, including the executable test suite used to carry out the testing.

- NIST, 1997


"Run enough tests to ensure that the program meets all its requirements and runs a comprehensive set of tests without error."
- Watts S. Humphrey,
Introduction to the Personal Software Process, 1997


"Since situations (or the information available about them) continuously change, we must continue to adapt our plans as time allows. Planning is a process that should build upon itself—each step should create a new understanding of the situation which becomes the point of departure for new plans. Planning for a particular action only stops with execution, and even then adaptation continues during execution."

- U.S. Marine Corps,
MCDP 5: Planning, 1997


"If you think you can fully test a program without testing its response to every possible input, fine. Give us a list of your test cases. We can write a program that will pass all your tests but still fail spectacularly on an input you missed. If we can do this deliberately, our contention is that we or other programmers can do it accidentally."

- Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
Testing Computer Software, Second Edition, 1999


"It's tedious and unreliable to do much testing by hand: proper testing involves lots of tests, lots of inputs, and lots of comparisons of outputs. Testing should therefore be done by programs, which don't get tired or careless."
- Brian W. Kernighan and Rob Pike,
The Practice of Programming, 1999


"Software must be tested to have confidence that it will work as it should in its intended environment. Software testing needs to be effective at finding any defects which are there, but it should also be efficient, performing the tests as quickly and cheaply as possible."
- Mark Fewster and Dorothy Graham,
Software Test Automation: Effective use of test execution tools, 1999


"A mature test automation regime will allow testing at the 'touch of a button' with tests run overnight when machines would otherwise be idle. Automated tests are repeatable, using exactly the same inputs in the same sequence time and again, something that cannot be guaranteed with manual testing. Automated testing enables even the smallest maintenance changes to be fully tested with minimal effort. Test automation also eliminates many menial chores. The more boring testing seems, the greater the need for tool support."
- Mark Fewster and Dorothy Graham,
Software Test Automation: Effective use of test execution tools, 1999


"Testing is skill. ... Automating tests is also a skill but a very different skill from testing."
- Mark Fewster and Dorothy Graham,
Software Test Automation: Effective use of test execution tools, 1999


"In operational terms, exploratory testing is an interactive process of concurrent product exploration, test design, and test execution. The outcome of an exploratory testing session is a set of notes about the product, failures found, and a concise record of how the product was tested. When practiced by trained testers, it yields consistently valuable and auditable results."
- James Bach,
General Functionality and Stability Test Procedure, 1999


"Software testing is partly intuitive but largely systematic. Good testing involves much more than just running the program a few times to see whether it works. Thorough analysis of the program lets you test more systematically and more effectively."

- Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
Testing Computer Software, Second Edition, 1999


"Testing is the process by which we explore and understand the status of the benefits and the risk associated with release of a software system."
- James Bach,
James Bach on Risk-Based Testing, STQE Magazine, Nov 1999


"I view software testing as a problem in systems engineering. It is the design and implementation of a special kind of software system: one that exercises another software system with the intent of finding bugs. ... test design provides the requirements for the test automation system. This system automatically applies and evaluates the tests. ... Manual testing, of course, still plays a role. But testing is mainly about the development of an automated system to implement an application-specific test design."
- Robert Binder,
Testing Object-Oriented Systems: Models, Patterns, and Tools, 2000


"Software testing is a difficult endeavor that requires education, skill, practice, and experience. Building good testing strategies requires merging many different disciplines and techniques."

- James A. Whittaker,
IEEE Software (Vol 17, No 1), 2000


"Phrases like 'Zero Defect Software' or 'Defect Free Systems' are hyperbole, and at best can be viewed only as desirable but unattainable goals."
- John Watkins,
Testing IT: An Off-the-Shelf Software Testing Process, 2001


"Exploratory Testing, as I practice it, usually proceeds according to a conscious plan. But not a rigorous plan ... it's not scripted in detail. ... Rigor requires certainty and implies completeness, but I perform exploratory testing precisely because there's so much I don't know about the product and I know my testing can never be fully complete. ... To the extent that the next test we do is influenced by the result of the last test we did, we are doing exploratory testing. We become more exploratory when we can't tell what tests should be run, in advance of the test cycle."

- James Bach,
Exploratory Testing and the Planning Myth, 2001


"Software testing is the process of applying metrics to determine product quality. Software testing is the dynamic execution of software and the comparison of the results of that execution against a set of pre-determined criteria."
- NIST,
The Economic Impacts of Inadequate Infrastructure for Software Testing, 2002


"Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information."

- Cem Kaner, James Bach, Bret Pettichord,
Lessons Learned In Software Testing: A Context-Driven Approach, 2002


"Testing is a concurrent lifecycle process of engineering, using and maintaining testware in order to measure and improve the quality of the software being tested."

- Rick Craig and Stefan Jaskiel,
Systematic Software Testing, 2002


"Fact 35: Test automation rarely is. That is, certain testing processes can and should be automated. But there is a lot of the testing activity that cannot be automated."
- Robert Glass,
Facts and Fallacies of Software Engineering, 2002


"The result of building the prototype is a set of test cases that defines the acceptance tests. We will rerun them many times throughout the various development phases ... However, the final step is the formal acceptance test by executing all the documented acceptance test cases on the finished target system."
- Thomas Fehlmann,
Business-Oriented Testing in E-Commerce,
Software Quality and Software Testing in Internet Times, 2002


"The 'freezing' of requirements, remains an unachievable ideal in almost every project; requirements changes are inevitable. However, change must be controlled to avoid the potentially chaotic condition where software testing cannot proceed because test specifications cannot keep pace with requirements changes."
- Richard E. Nance and James D. Arthur,
Managing Software Quality: A Measurement Framework for Assessment and Prediction, 2002


"The difference between excellent testing and mediocre testing is how you think: your test design choices, your ability to interpret what you observe, and your ability to tell a compelling story about it."
- Cem Kaner, James Bach, Bret Pettichord,
Lessons Learned In Software Testing: A Context-Driven Approach, 2002


"With modern analysis and design tools, dynamic testing has moved up the life cycle, to earlier and earlier phases ... However, most developers view testing as the activity of executing software code to see how it performs."

- Robert T. Futrell, Donald F. Shafer, and Linda Shafer,
Quality Software Project Management, 2002


"A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions."
- Bret Pettichord,
Don't Become the Quality Police, StickyMinds.com, 2002


"Software testing is a process of analyzing or operating software for the purpose of finding bugs."
- Robert Culbertson, Chris Brown, and Gary Cobb,
Rapid Testing, 2002


"In most software-development organizations, the testing program functions as the final 'quality gate' for an application, allowing or preventing the move from the comfort of the software-engineering environment into the real world. With this role comes a large responsibility."
- Elfriede Dustin,
Effective Software Testing: 50 Specific Ways to Improve Your Testing, 2003


"For repeatability, consistency, and completeness, the use of a test procedure template should be mandated when applicable."
- Elfriede Dustin,
Effective Software Testing: 50 Specific Ways to Improve Your Testing, 2003


"Somehow, we must focus on the vital test conditions we should assess, removing from consideration the enormous set of relatively unimportant conditions we might assess. What separates what we might test from what we should test? Quality."
- Rex Black,
Critical Testing Processes: Plan, Prepare, Perform, Perfect, 2003


"All testing efforts require exploratory testing at one time or another, whether test procedures are based on the most-detailed requirements or no requirements are specified. As testers execute a procedure, discover a bug, and try to recreate and analyze it, some exploratory testing is inevitably performed to help determine the cause."
- Elfriede Dustin,
Effective Software Testing: 50 Specific Ways to Improve Your Testing, 2003


"The most powerful testing effort combines a well-planned, well-defined testing strategy with test cases derived by using functional analysis and such testing techniques as equivalence, boundary testing, and orthogonal-array testing, and is then enhanced with well-thought-out exploratory testing."
- Elfriede Dustin,
Effective Software Testing: 50 Specific Ways to Improve Your Testing, 2003


"How do you test your software? Write an automated test.
Test is a verb meaning 'to evaluate'. No software engineers release even the tiniest change without testing, except the very confident and the very sloppy. ... Although you may test your changes, testing changes is not the same as having tests. Test is also a noun, 'a procedure leading to acceptance or rejection'. Why does test the noun, a procedure that runs automatically, feel different from test the verb, such as poking a few buttons and looking at answers on the screen?"

- Kent Beck,
Test-Driven Development By Example, 2003


"Software testing practices have been improving steadily over the past few decades. Yes, as testers, we still face many of the same challenges that we have faced for years. We are challenged by rapidly evolving technologies and the need to improve testing techniques. We are also challenged by the lack of research on how to test for and analyze software errors from their behavior, as opposed to at the source code level. We are challenged by the lack of technical information and training programs geared toward serving the growing population of the not-yet-well-defined software testing profession."

- Hung Q. Nguyen, Bob Johnson, and Michael Hackett,
Testing Applications on the Web, Second Edition, 2003


"We are expected to check whether the software performs in accordance with its intended design and to uncover potential problems that might not have been anticipated in the design." - Hung Q. Nguyen, Bob Johnson, and Michael Hackett,
Testing Applications on the Web, Second Edition, 2003


"Exploratory testing is a process of examining the product and observing its behavior, as well as hypothesizing what its behavior is. It involves executing test cases and creating new ones as information is collected from the outcome of previous tests."
- Hung Q. Nguyen, Bob Johnson, and Michael Hackett,
Testing Applications on the Web, Second Edition, 2003


"Software testing is not such an exact science that one can determine what to test in advance, execute the plan, and be done with it. This would take god-like powers of foresight. Instead of a plan, intelligence, insight, experience, and a nose for where the bugs are hiding should guide testers."
- James A. Whittaker,
Foreword,
How to Break Software: A Practical Guide to Testing, 2001


"My conclusion is that testing is an intellectual endeavor and not part of arts and crafts. So testing is not something anyone masters. Once you stop learning, your knowledge becomes obsolete very fast. Thus, to realize your testing potential, you must commit to continuous learning."

- James A. Whittaker,
How to Break Software: A Practical Guide to Testing, 2003


"Scripted testing is, by definition, inflexible. It follows the script. If, while testing, we see something curious, we note it in a Test Incident Report but we do not pursue it. Why not? Because it is not in the script to do so. Many interesting defects could be missed with this approach."

- Lee Copeland,
A Practitioner's Guide to Software Test Design, 2004


"Exploratory testing is a good idea for any project. regardless of the amount of test planning you do, you are likely to uncover more defects, in short time, by doing exploratory testing."
- Gary Pollice, Liz Augustine, Chris Lowe, and Jas Madhur,
Software Development for Small Teams: A RUP-Centric Approach, 2004


"As a software tester, the fate of the world rests on your shoulders. This statement is not an exaggeration if you accept the dual premises that computer software runs the modern world and that all software has bugs ..."

- Scott Loveland, Geoffrey Miller, Richard Prewitt, and Michael Shannon,
Software Testing Techniques: Finding the Defects that Matter, 2004



"All software has bugs. It's a fact of life. So the goal of finding and removing all defects in a software product is a losing proposition and a dangerous objective for a test team, because such a goal can divert the test team's attention from what is really important. ... [The goal of a test team] is to ensure that among the defects found are all of those that will disrupt real production environments; in other words, to find the defects that matter.
"
- Scott Loveland, Geoffrey Miller, Richard Prewitt, and Michael Shannon,

Software Testing Techniques: Finding the Defects that Matter, 2004



"It's okay to not know something; it is not okay to test something you do not know.
"
- Scott Loveland, Geoffrey Miller, Richard Prewitt, and Michael Shannon,

Software Testing Techniques: Finding the Defects that Matter, 2004



"[A test plan] represents a testing strategy based on the best knowledge available at the outset of a project. It's not a carved stone tablet revealing divine knowledge that is immune to change. As the test progresses, more will be learned about the software's strengths, weaknesses, and vulnerabilities. Ignoring that additional insight would be foolish; rather, it should be anticipated and exploited.
"
- Scott Loveland, Geoffrey Miller, Richard Prewitt, and Michael Shannon,

Software Testing Techniques: Finding the Defects that Matter, 2004


"Testing
1. Starts with known conditions, uses predefined procedures, predictable outcomes
2. Should be planned, designed, scheduled
3. Is a demonstration of error / apparent correctness
4. Proves a programmer's 'failure'
5. Should strive to be predictable, dull, constrained, rigid, inhuman
6. Much can be done without design knowledge
7. Can be done by outsider
8. Theory of testing is available
9. Much of test design and execution can be automated"
- Nina S. Godbole,
Software Quality Assurance: Principles And Practice, 2004


"Software Testing [is] The act of confirming that software design specifications have been effectively fulfilled and attempting to find software faults during execution of the software."
- Thomas H. Faris,
Safe And Sound Software: Creating an Efficient and Effective Quality System, 2006

“Testing is the infinite process of comparing the invisible to the ambiguous so as to avoid the unthinkable happening to the anonymous.”
- James Bach,
Becoming a Software Testing Expert, 2006



"Software testing is a process where we check a behavior we observe against a specified behavior the business expects. ... in software testing, the tester should know what behavior to expect as defined by the business requirements. We agree on this defined, expected behavior and any user can observe this behavior."
- Andreas Golze, Charlie Li, Shel Prince,
Optimize Quality for Business Outcomes: A Practical Approach to Software Testing, 2006


"testers need to master proven techniques. Organizations like the British Computer Society and the International Software Testing Qualifications Board (ISTQB) have begun building standardized training and granting testing certification levels."

- Andreas Golze, Charlie Li, Shel Prince,
Optimize Quality for Business Outcomes: A Practical Approach to Software Testing, 2006


"I often criticize pre-scripted testing. It’s not a fundamentally bad idea, but it’s strangely over-hyped."
- James Bach,
Tools for Recording Exploratory Testing, 2006


"Unfortunately, intuitive testing based on undocumented expectations is a common approach in the industry today. This testing approach is limited because it only works with the completed product and does not allow testing to be done early enough in the software development life cycle or in parallel with development."

- Andreas Golze, Charlie Li, Shel Prince,
Optimize Quality for Business Outcomes: A Practical Approach to Software Testing, 2006


"Manual testing was originally the only method of software testing. ... Testing was mostly ad-hoc and inconsistent, and test coverage was poor. ... Automated software testing was invented to address these issues. ... As a matter of fact, based on what we see in the industry, about 80 percent of software applications are still testing manually."

- Andreas Golze, Charlie Li, Shel Prince,
Optimize Quality for Business Outcomes: A Practical Approach to Software Testing, 2006


"An organization is at a quality expert level when it has formal quality processes, often enforced and supported by products, and dedicated product and process QA experts ... It emphasizes a continuous improvement of process best practices, and uses metrics to measure quality."

- Andreas Golze, Charlie Li, Shel Prince,
Optimize Quality for Business Outcomes: A Practical Approach to Software Testing, 2006


"Testing is a process of gathering information by making observations and comparing them to expectations."
- Elizabeth Hendrickson,
From the Mailbox: What’s the Definition of Testing?, 2007


"[Testing is]

* An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product under test.

* A process focused on empirically questioning a product or service about its quality."

- Cem Kaner, 2007


"When writing software, we create individual functions, data structures, and classes and glue them together into a working system. Our main testing strategy is to exercise all this code and validate its behavior by writing more code -- test code. This forms a harness around the test subject that prods, pokes, and drives it, provoking it to respond and checking that its response is correct."
- Pete Goodliffe,
Code Craft: The Practice of Writing Excellent Code, 2007


"[Unit] Testing Isn't Hard . . . Unless you do it badly, and then it's really hard. It does take thoughtful effort, though."
- Pete Goodliffe,
Code Craft: The Practice of Writing Excellent Code, 2007


"An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product OR SERVICE under test."
- Scott Barber, 2007


Instead of trying to narrow your definition of testing, embrace the complexity of testing. Challenge those who challenge you. Discover what testing means to you in your situation. Test testing, learn more about testing, and adapt. Testing and learning are continual processes. Sometimes we have to decide when to stop testing, but we should never stop learning.



Two final quotes:

"There really are conflicts in this field. We need to deal with this fact. ... I believe our reluctance as a field to deal with these conflicts has not led to a strong field, but rather has kept us in a state of persistent underdevelopment. Historically, disputes among rival scientists and philosophers have often enriched the field."
- James Bach, Nov 2007


"Iron sharpens iron, and one man sharpens another."

- Solomon (1000 - 931 BC),
Proverbs 27:17, The Holy Bible, English Standard Version



See a testing idea above that you want to challenge? Challenge it!

How do you define testing?


November 24, 2007

The Bananananananana Principle

Posted by Ben Simo


... as the little boy said, "Today we learned how to spell 'banana', but we didn't learn when to stop." ... In honor of that little boy, we can elevate his idea to a principle, The Banana Principle:


Heuristic devices don't tell you when to stop.

- Gerald M. Weinberg,
An Introduction to General Systems Thinking

I just had the following exchange with my 12-year-old daughter Jessica.

Me: How do software testers know when to stop testing something?

Jessica: When you die! . . . Or when you get really tired of it.

The Banana Principle does not mean that heuristics cannot be useful in determining when to stop. It means that heuristics do not tell us when to stop using the heuristic. There is a tendency to start transforming the most useful heuristics into laws -- in our minds. Heuristics should help us think and not replace thinking. This includes continual questioning of even the most useful heuristics.


November 23, 2007

Maybe it would be better if code changes broke our tests

Posted by Ben Simo


For those who put faith in code coverage metrics:

Consider this: If a small change made to the code produces no change in the results of any of the tests, we have evidence of insufficiency of the full set [of tests].

- Robert H. Dunn,
Software Quality: Concepts and Plans
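
Dunn's observation anticipates what is now called mutation testing: make a small, deliberate change to the code and see whether any test fails. A surviving mutant is exactly the evidence of insufficiency he describes. Below is a rough Python sketch of the idea; the source file, the mutation list, and the run_tests.py script are hypothetical stand-ins, and real mutation tools do all of this far more carefully.

    import subprocess

    # Hypothetical mutants: small, deliberate changes to try one at a time.
    MUTANTS = [
        ("source.py", "balance + amount", "balance - amount"),
        ("source.py", "<=", "<"),
    ]

    def tests_pass():
        # Assumes the suite runs as a script that exits nonzero on any failure.
        return subprocess.run(["python", "run_tests.py"]).returncode == 0

    for path, old, new in MUTANTS:
        original = open(path).read()
        if old not in original:
            continue
        try:
            with open(path, "w") as f:
                f.write(original.replace(old, new, 1))  # plant the mutant
            if tests_pass():
                # The mutant survived: evidence that the test set is insufficient.
                print(f"No test failed when {old!r} became {new!r} in {path}")
        finally:
            with open(path, "w") as f:
                f.write(original)  # restore the unmutated code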


November 22, 2007

Arranging Abstract Absolute Artifacts

Posted by Ben Simo


For any system of interesting size it is impossible to test all the different logic paths and all the different input data combinations. Of the infinite number of choices, each one of which is worthy of some level of testing, testers can only choose a very small subset because of resource constraints.
- Lee Copeland,
A Practitioner's Guide to Software Test Design

The complexity of software makes it impossible to test all the possible things we could test for all but the simplest systems. (And I have often argued that even very simple systems cannot be completely tested: a screen with just ten independent on/off options already has 2^10 = 1,024 combinations.) This inability to test everything requires that we testers (and testing developers) identify the things that we believe are most likely to help us fulfill our testing mission. This makes test design very important. We not only need to design our tests in a way that supports our mission -- we need to communicate our testing in a way that supports our mission.

There are many ways that we can design and document tests: from very high level exploratory testing charters to the very specific step-by-step procedures of scripted automation. We often need to communicate a single test using various levels of detail.

A high-level test charter might be fine for communicating to project managers, but we may need to describe step-by-step tasks when we document how to reproduce a bug found during exploratory testing. (Tools like Test Explorer can help document exploratory testing.)

High-level test execution steps may be fine for some manual test execution, but these steps need to be made explicit for automation. And sometimes the details matter for tests executed by humans.

Detailed test procedures aren't very good for communicating functional coverage to product owners or managers. Sometimes we need to think about even the most scripted tests at a high level and not get bogged down in the details.

Sometimes we need to communicate tests designed and defined at a high level with great detail. Other times we need to communicate low-level automated tests at a high level. Different levels of detail are required by different people at different times.

There was a great deal of discussion at the Agile Alliance Functional Testing Tools Visioning Workshop about the desire to easily define tests using a variety of levels of abstraction and to communicate tests in different ways for different people. We considered how tools could be built to support the disparate needs of people involved in software development.

Elizabeth Hendrickson nicely summed up how tools can help support this by providing "A Place To Put Things". I am all for separating the essence of tests from automation code. Tools like xUnit and FIT aren't great because they do good testing. In fact, these tools don't really do the testing. They are useful tools because they give people a place to put things. When we have a place to put things, we are better organized. When we are better organized, we can communicate better.

Having a place to put things helps keep our testing organized and helps us communicate -- whether we are documenting tests as examples, designing FIT tests, scripting GUI automation, or documenting exploratory testing ideas.
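
As a minimal illustration of those places, here is a sketch using Python's unittest (an xUnit-family tool). The Account class is hypothetical, defined inline only so the example is self-contained; the point is that the tool gives the precondition, the action, and the expected result each a conventional home.

    import unittest

    class Account:
        # Hypothetical class under test, defined inline for the sketch.
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.balance += amount

    class AccountTest(unittest.TestCase):
        def setUp(self):
            # A place to put the preconditions.
            self.account = Account(balance=100)

        def test_deposit_increases_balance(self):
            # A place to put the action ...
            self.account.deposit(25)
            # ... and a place to put the expected result.
            self.assertEqual(125, self.account.balance)

        def test_rejects_nonpositive_deposit(self):
            # A place to put the invalid and unexpected input, too.
            with self.assertRaises(ValueError):
                self.account.deposit(0)

    if __name__ == "__main__":
        unittest.main()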

One group of us at the workshop broke off to discuss the things that we testers need a place to put. We considered the possibilities of defining parts of tests at different levels of abstraction.

What if we could easily define tests at the highest possible (or reasonable) level of abstraction and then add details only when and where details are required?

What if a test could be defined at a high enough level that automated test execution engines could run the same tests on different platforms, or with different user roles, or with different data?

We did a little brainstorming and wrote down things we use to document a test and then divided these into three categories -- or levels of abstraction: business requirements (goals), interaction design (activities), and implementation design (tasks). Some items ended up in the twilight zone -- between or occupying multiple levels.

Business Requirements (Goals)
  • Goal
  • Expectation
  • As a ... I want to ... so that ...
  • Exploratory testing charter
Interaction Design (Activities)
  • Present / Communicate Results
  • Communicate test to users, dev, business
  • Actor
  • Domain objects
  • Action
  • User Preferences
  • wait for so long
  • set up pre condition
  • model-based test generation
  • Given; When; Then
  • Wait until
  • orchestration
  • Roles
  • Time Passes...
  • Branding
Twilight Zone (Somewhere crossing over activities and tasks?)
  • Domain Models
  • Verify
  • System state
  • data
  • state transitions
  • user state
Implementation Design (Tasks)
  • objects
  • Check Results
  • Show
  • STATES
  • Do ...
  • Control GUI, API, test harness
I'm sure that there are many other things we testers would like a place to put that we didn't think of in our few minutes of brainstorming.

Traditionally, defining tests at various levels of abstraction has been difficult. I've seen people try to add abstraction to tests and spend more time maintaining and documenting the various abstraction levels than I think the benefits were worth. I've also successfully used abstraction in test automation to make the same tests executable on multiple platforms.
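
As a sketch of what that platform abstraction can look like (every name below is hypothetical), the test is written at the activity level while the platform-specific tasks live in interchangeable drivers, so the same test runs against a GUI, an API, or anything else with a driver.

    class WebUiDriver:
        """Hypothetical driver: knows which GUI tasks perform each activity."""
        def log_in(self, role):
            print(f"web: open the login page and submit the form as {role}")
        def search(self, term):
            print(f"web: type {term!r} into the search box and click Go")

    class ApiDriver:
        """Hypothetical driver: performs the same activities through an API."""
        def log_in(self, role):
            print(f"api: request a session token for {role}")
        def search(self, term):
            print(f"api: GET /search?q={term}")

    def test_member_can_search(driver):
        # The test is expressed as activities; the tasks are delegated
        # to whichever driver is plugged in.
        driver.log_in("member")
        driver.search("bananas")

    for driver in (WebUiDriver(), ApiDriver()):
        test_member_can_search(driver)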

If we can find the right abstractions to communicate the intent of an example, we might be able to finally break free of the perception of functional tests as brittle, hard-to-understand, write-only artifacts. Even better, we might find a way to layer new tools on top of these abstractions so that, if I want to write my examples in plain text and you want to drag boxes around on a screen and she wants to use the UML, we can each use the form that speaks most clearly to us.
...

I want more names for my common things. I want to deal in goals and activities not checkboxes and buttons. I want to give the system a few simple bits of information and have it tell me something I didn't know. I want to show my examples to everyone in the project community and have them lift up their understanding rather than drown it in permutations and edge cases and "what happens if the user types in Kanji?".
- Kevin Lawrence,
In Praise of Abstraction

Another group at the workshop worked on devising a framework to give us a common place to put things. If we had a common place to put things, then a variety of tools could use the same data and users could select whatever tools best work for their needs -- and desired level of abstraction. Thanks to Elizabeth for clarifying what I think many were thinking but did not express so clearly: we need a place to put things.

Whether you are trying to create a one-size-fits-many testing framework or a specialized tool to support a specific need: first develop places to put things.

Given the infinite testing possibilities, the best testing tools are those that help us organize, understand, and communicate our tests.
