November 22, 2007

Arranging Abstract Absolute Artifacts

Posted by Ben Simo

For any system of interesting size it is impossible to test all the different logic paths and all the different input data combinations. Of the infinite number of choices, each one of which is worthy of some level of testing, testers can only choose a very small subset because of resource constraints.
- Lee Copeland,
A Practitioner's Guide to Software Test Design

For all but the simplest systems, the complexity of software makes it impossible to test everything we could test. (And I have often argued that even very simple systems cannot be completely tested.) This inability to test everything requires that we testers (and testing developers) identify the things we believe are most likely to help us fulfill our testing mission. This makes test design very important. We not only need to design our tests in a way that supports our mission -- we need to communicate our testing in a way that supports our mission.

There are many ways that we can design and document tests: from very high level exploratory testing charters to the very specific step-by-step procedures of scripted automation. We often need to communicate a single test using various levels of detail.

A high-level test charter might be fine for communicating to project managers but we may need to describe step-by-step tasks when we document how to reproduce a bug found during exploratory testing. (Tools like Test Explorer can help document exploratory testing.)

High-level test execution steps may be fine for some manual test execution but these steps need to be made explicit for automation. And sometimes the details matter for tests executed by humans.

Detailed test procedures aren't very good for communicating functional coverage to product owners or managers. Sometimes we need to think about even the most scripted tests at a high level and not get bogged down in the details.

Sometimes we need to communicate, in great detail, tests that were designed and defined at a high level. Other times we need to communicate low-level automated tests at a high level. Different people require different levels of detail at different times.

There was a great deal of discussion at the Agile Alliance Functional Testing Tools Visioning Workshop about the desire to easily define tests using a variety of levels of abstraction and to communicate tests in different ways for different people. We considered how tools could be built to support the disparate needs of people involved in software development.

Elisabeth Hendrickson nicely summed up how tools can help support this by providing "A Place To Put Things". I am all for separating the essence of tests from automation code. Tools like xUnit and FIT aren't great because they do good testing. In fact, these tools don't really do the testing. They are useful tools because they give people a place to put things. When we have a place to put things, we are better organized. When we are better organized, we can communicate better.
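To make that concrete, here is a minimal xUnit-style sketch in Python's unittest (the Cart class is invented purely for illustration). The framework does none of the testing itself; it just gives the precondition, the action, and the expectation each a well-known place to live:

```python
import unittest

class Cart:
    """A hypothetical class under test, defined inline for illustration."""
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    def total(self):
        return sum(price for _, price in self.items)

class CartTest(unittest.TestCase):
    def setUp(self):
        # A place to put preconditions
        self.cart = Cart()

    def test_total_sums_item_prices(self):
        # A place to put the action...
        self.cart.add("book", 12.50)
        self.cart.add("pen", 1.25)
        # ...and a place to put the expectation
        self.assertEqual(self.cart.total(), 13.75)

if __name__ == "__main__":
    unittest.main()
```

The value here is organizational: anyone who knows the xUnit convention knows where to look for each part of the test.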

Having a place to put things helps keep our testing organized and helps us communicate -- whether we are documenting tests as examples, designing FIT tests, scripting GUI automation, or documenting exploratory testing ideas.

One group of us at the workshop broke off to discuss the things that we testers need a place to put. We considered the possibilities of defining parts of tests at different levels of abstraction.

What if we could easily define tests at the highest possible (or reasonable) level of abstraction and then add details only when and where details are required?

What if a test could be defined at a high enough level that automated test execution engines could run the same tests on different platforms, or with different user roles, or with different data?
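One way this could work, sketched with entirely invented names (LOGIN_TEST, FakeWebDriver, FakeApiDriver): the test itself is pure data describing abstract steps, and each execution engine knows how to carry those steps out on its own platform:

```python
# The test is defined once, at a level above any particular platform.
LOGIN_TEST = {
    "goal": "A registered user can log in",
    "steps": [
        ("login", {"user": "alice", "password": "secret"}),
        ("verify", {"logged_in_as": "alice"}),
    ],
}

class FakeWebDriver:
    """Stands in for a GUI automation engine."""
    def __init__(self):
        self.session = None

    def login(self, user, password):
        self.session = user  # imagine typing into a login form here

    def verify(self, logged_in_as):
        assert self.session == logged_in_as

class FakeApiDriver:
    """Stands in for an HTTP/API-level engine."""
    def __init__(self):
        self.token = None

    def login(self, user, password):
        self.token = f"token-for-{user}"  # imagine POST /login here

    def verify(self, logged_in_as):
        assert self.token == f"token-for-{logged_in_as}"

def run(test, driver):
    # Same abstract steps, dispatched to whichever engine we hand in.
    for action, args in test["steps"]:
        getattr(driver, action)(**args)

run(LOGIN_TEST, FakeWebDriver())
run(LOGIN_TEST, FakeApiDriver())
```

Swapping the driver swaps the platform -- or the user role, or the data -- without touching the test definition.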

We did a little brainstorming and wrote down things we use to document a test and then divided these into three categories -- or levels of abstraction: business requirements (goals), interaction design (activities), and implementation design (tasks). Some items ended up in the twilight zone -- between or occupying multiple levels.

Business Requirements (Goals)
  • Goal
  • Expectation
  • As a ... I want to ... so that ...
  • Exploratory testing charter
Interaction Design (Activities)
  • Present / Communicate Results
  • Communicate test to users, dev, business
  • Actor
  • Domain objects
  • Action
  • User Preferences
  • wait for so long
  • set up pre condition
  • model-based test generation
  • Given; When; Then
  • Wait until
  • orchestration
  • Roles
  • Time Passes...
  • Branding
Twilight Zone (Somewhere crossing over activities and tasks?)
  • Domain Models
  • Verify
  • System state
  • data
  • state transitions
  • user state
Implementation Design (Tasks)
  • objects
  • Check Results
  • Show
  • Do ...
  • Control GUI, API, test harness
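The three levels from the brainstorm can be sketched as layers of code, with all names invented for illustration: the goal reads as Given/When/Then, the activities speak the user's vocabulary, and only the tasks know about buttons and fields:

```python
import io
from contextlib import redirect_stdout

# --- Implementation design (tasks): the only layer that knows "how" ---
def click(button):
    print(f"click {button}")

def type_into(field, text):
    print(f"type {text!r} into {field}")

# --- Interaction design (activities): named in the user's vocabulary ---
def search_for(term):
    type_into("search box", term)
    click("Search")

def results_should_mention(term, results):
    assert term in results

# --- Business requirement (goal): readable without the layers below ---
def test_shopper_can_find_a_product():
    # Given a shopper on the catalog page
    results = "red bicycle - $120"  # stand-in for the real system's output
    # When she searches for "bicycle"
    search_for("bicycle")
    # Then the results mention a bicycle
    results_should_mention("bicycle", results)

test_shopper_can_find_a_product()
```

Retargeting the suite to a new GUI, an API, or a different platform would then mean rewriting only the task layer.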
I'm sure that there are many other things we testers would like a place to put that we didn't think of in our few minutes of brainstorming.

Traditionally, defining tests at various levels of abstraction has been difficult. I've seen people try to add abstraction to tests and spend more time maintaining and documenting the various abstraction levels than I think the benefits were worth. I've also successfully used abstraction in test automation to make the same tests executable on multiple platforms.

If we can find the right abstractions to communicate the intent of an example, we might be able to finally break free of the perception of functional tests as brittle, hard-to-understand, write-only artifacts. Even better, we might find a way to layer new tools on top of these abstractions so that, if I want to write my examples in plain text and you want to drag boxes around on a screen and she wants to use the UML, we can each use the form that speaks most clearly to us.

I want more names for my common things. I want to deal in goals and activities not checkboxes and buttons. I want to give the system a few simple bits of information and have it tell me something I didn't know. I want to show my examples to everyone in the project community and have them lift up their understanding rather than drown it in permutations and edge cases and "what happens if the user types in Kanji?".
- Kevin Lawrence,
In Praise of Abstraction

Another group at the workshop worked on devising a framework to give us a common place to put things. If we had a common place to put things, then a variety of tools could use the same data and users could select whatever tools best work for their needs -- and desired level of abstraction. Thanks to Elisabeth for clarifying what I think many were thinking but did not express so clearly: we need a place to put things.

Whether you are trying to create a one-size-fits-many testing framework or a specialized tool to support a specific need: first develop places to put things.

Given the infinite testing possibilities, the best testing tools are those that help us organize, understand, and communicate our tests.