October 17, 2007

Green Means Go?

Posted by Ben Simo

Traffic control devices are used on roads to help regulate the flow of traffic. When I taught defensive driving classes, I ensured that each class included a discussion about these devices. It is imperative that all drivers understand what each device means. My first driver's license test (in Germany) required that I properly identify 94 of 100 different signs to pass. There was a time when each local governing authority created its own traffic control devices. However, not long after the automobile became common, governments began working together to standardize these safety-critical devices. While no universal standard is followed everywhere (the USA differs from most countries), standardization within each country (and some continents) has led to safer streets and highways.

These devices include signs, signals, pavement markings, barricades, and policemen. They are tools used to help control the flow of traffic. However, they cannot really control it -- except for massive barricades and armed policemen. They provide information to drivers, but they cannot force drivers to be safe or legal. Sometimes drivers simply ignore the signals.

Traffic lights are mostly standardized around the globe. Green means go. Yellow means caution. Red means stop. ... except for the extraterrestrial visitor in the movie Starman, who learned by watching a bad example.

Red means stop;
Green means go;
and Yellow means go very very fast!

- Starman

The red, yellow, and green traffic light colors have become common in software development, testing, and production monitoring. Traffic light colors are regularly used to report the status of projects, systems, and individual tests.

I like simple status indicators -- in context. One of the first test execution tools I helped create used smiley faces to indicate passed tests and fulfilled requirements. I currently use color coding to indicate status in the test automation I develop. Colors help me quickly find test results that need attention. Simple color coding helps communicate test results at a high level.
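
For illustration, here is a minimal sketch in Python of the kind of color coding I mean. This is not the actual tool I built; the statuses, colors, and test names are invented for this example.

    # A minimal sketch of color-coded test status reporting.
    # The statuses, colors, and test names are invented for illustration.

    STATUS_COLORS = {
        "passed": "green",    # the coded criteria were met
        "failed": "red",      # a coded check detected a problem
        "warning": "yellow",  # something looked suspicious; needs a human
    }

    def summarize(results):
        """Print one color-coded line per test so results that
        need attention stand out at a glance."""
        for name, status in results:
            # Anything we don't recognize also needs attention.
            color = STATUS_COLORS.get(status, "yellow")
            print(f"[{color.upper():>6}] {name}")

    summarize([
        ("login_page_loads", "passed"),
        ("checkout_total", "failed"),
        ("search_response_time", "warning"),
    ])

The point is not the code; it is that a quick glance at the colors tells me where to look first.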

I like to see green. I don't like to see red. However, I am not a member of the Church of the Green Bar*. I do not worship the green light. I do not trust the green light. I find green lights and bars useful but wrong. Green lights remove the story from the status.

Essentially, all models are wrong, but some are useful.
- George E. P. Box

A green traffic light may tell us that it should be safe to go but it does not guarantee that it is safe to go. A green light does not indicate that the intersection is clear. A green light does not indicate that it is safe to drive through the intersection. Drivers need to wait for the green light but they still need to check the intersection, identify potential risks, and decide if they believe it is safe to drive through the intersection. A green light means go if it has been determined that it is safe to go.

In the same way, a passed automated test does not mean that the software is good. Green simply means that the coded criteria were met.

Every time we test software -- whether with human eyes and mind, or with automation -- we only monitor the things that we choose to monitor. A human tester may notice things about the quality of a product that are not scripted. Automated test execution will only notice things that are coded in the script -- no matter how many times we run the test. If a factor that might indicate a problem is not part of the automation's green/red (pass/fail) criteria, it cannot turn the light (or bar, or text) red.
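
To make that concrete, here is a hypothetical sketch in Python. The page content and function names are invented; the point is that the only coded criterion is the HTTP status code, so the garbled text in the body cannot turn the light red.

    # Hypothetical example: the check verifies only what it was coded
    # to verify. The page content below is invented for illustration.

    def fetch_home_page():
        # Imagine this response came from the application under test.
        return {
            "status_code": 200,
            "body": "<h1>Welcme</h1><p>Prices are currently unavailble.</p>",
        }

    def test_home_page():
        page = fetch_home_page()
        # The only coded criterion: the server returned HTTP 200.
        assert page["status_code"] == 200
        # Nothing here inspects the body, so the misspellings and the
        # missing prices cannot turn this light red.

    test_home_page()
    print("GREEN: the coded criteria were met. Is the page actually good?")

A human tester glancing at that page would notice the misspellings and the missing prices immediately; the automation will stay green forever.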

I also find that there is often a misunderstanding of what automation does and does not do. The coder of the automation may know what it does when they code it. But will they really understand what a green light does and does not mean six months later? How about a year later? Five years later? Do other testers understand what the automation does? Does management understand what the automation does do and what it does not do?

Software systems can easily become complex. Computers allow us mortals to create complex systems that are beyond our ability to fully understand. We testers seek out software problems based on what we understand. We cannot completely test the software. We use a variety of tools and approaches to learn as much as possible. However, we are unlikely to completely understand a complex software system.
- Ben Simo^

We mere mortals and the automation we create are unable to monitor everything that might matter. Therefore, I believe it is dangerous to conclude that green means good.

The same goes for project management and system monitoring systems that use quantifiable metrics to set status.

Traffic lights based on the judgment of the people involved in the project are better indicators of status. If a light is green because a person set it to green, that person should be able to tell me the story behind the decision to make the light green.

Beware automated traffic lights.

[Update]

Automated traffic light indicators in testing tools are only badometers+. A badometer tells us when something is suspected to be bad; it cannot tell us that it is good. Better traffic light indicators are set by people who consider the risks associated with the information reported by our tools.
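
As a sketch of the distinction (in Python; the function names and statuses are mine, not from any particular tool): the automation can report red or "no failures detected", but turning the light green remains a human judgment.

    # Sketch of a "badometer": automation can flag suspected badness,
    # but it cannot prove goodness. Names and statuses are invented.

    def badometer(checks):
        """Return 'red' if any coded check failed.
        Note: 'no failures detected' is NOT the same as 'good'."""
        if any(not passed for passed in checks.values()):
            return "red"
        return "no failures detected"

    def traffic_light(checks, human_judged_safe):
        """Only a person, weighing risk and context, turns the light green."""
        if badometer(checks) == "red":
            return "red"
        # Yellow: no failures detected, but still awaiting human judgment.
        return "green" if human_judged_safe else "yellow"

    checks = {"status_code_ok": True, "schema_valid": True}
    print(badometer(checks))             # no failures detected
    print(traffic_light(checks, False))  # yellow
    print(traffic_light(checks, True))   # green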

So, does green mean go? Yes, but only after a human being has judged it safe to go.


* I don't know who first coined the term, but I first heard it from Brian Marick.
^ Since I regularly quote other people, I think I am entitled to quote myself. :)

+ A term I think I first heard used by Gary McGraw. Or was it Kim Possible?


October 16, 2007

Problems: So What's On All Those Sticky Notes?

Posted by Ben Simo

In my previous post about the Agile Alliance Functional Testing Tools Workshop, I wrote the following:

After reviewing existing tools used by agile teams, we identified software testing issues that have been solved (yellow), those that have been partially solved (orange), and those that have not been solved (pink). As I recollect, most of the solved issues were technical problems and most of the unsolved problems were people problems. Many of the partially solved problems were those for which I believe we have technical solutions that have not yet been integrated and presented in ways that best support people.
In case you are wondering what problems we wrote down on these notes, Frank Maurer kindly transcribed them for the workshop participants and I have posted them below.

Looking at this list reminds me of the traffic safety problem lists I made in the defensive driving classes I used to teach. As unsafe driving practices were brought up by students, I would add them to a list on the whiteboard. I then asked the students to identify whether each item on my list was primarily due to driver skill or driver attitude. The students usually blamed most of the problems on driver attitude.

Skill and attitude play important roles in software development and testing. Team members need both. Brian Marick addresses this in his guiding values of discipline, skill, ease, and joy. Instead of looking at the functional testing problems as skill or attitude problems, I looked at them as man or machine problems. I asked myself if each problem appeared to be mainly a human problem or a technical problem.

Software development and testing involve a mix of people problems and technical problems. Interfacing people with people, and people with technology, is often harder than interfacing technology with technology. Most of the identified problems have both human and technical aspects, in both the problem and its possible solutions. Some are due to the nature of people or the nature of software and will likely never be completely solved.

I find that identifying whether a problem is primarily a people problem or a technology problem helps me identify possible solutions. I quickly scanned this list and identified whether I thought each item was primarily a human or a technical issue. My notes are included to the right of each item. I don't necessarily interpret each item as its author intended (a human communication problem), and I do not necessarily agree that each item is a problem or belongs in the specified group.

As you review this list, ask yourself if the problem and possible solutions are grounded in people or technology.

Unsolved

  • Define test -- Human
  • Organizing large sets of tests/expectations/actions/examples for a large, complex system so you can wrap your head around the whole thing. -- Human
  • Having tests survive handoffs: project team -> op support -> project team. -- Human and technical
  • Getting people to care -- Human
  • Transferability of ubiquitous language to other projects -- Human
  • Write software that is understood -- Human
  • Reducing uncertainty -- Human
  • Limitations of natural language -- Human
  • How would we act if we really believed code was an asset? -- Mostly human
  • Allowing customers to articulate their expectations in a format/tool/way that is comfortable for them -- Mostly human
  • Multi-model specification: text + table + graphic in one test. -- Mostly technical
  • Domain experts -- Human
  • Fully automated regression that does not reduce dev velocity. -- Mostly technical
  • Generate a domain model from tests. -- Human and technical
  • Testing usability as part of acceptance testing in incremental development. -- Human
  • Common language to express GUI-based tests. -- Mostly human
  • Conveying "experts'" perspective to the majority of the development team. -- Human
  • Automated software development -- Technical (machines aren't creative)
  • Functional tests that can be easily re-used later in the lifecycle. -- Mostly technical
  • Test business requirements independent of current interaction/API design. -- Mostly human
  • Composing tests into useful larger tests -- Technical and human
  • Test-first performance -- Human
  • Getting BAs to write the tests -- Mostly human
  • Having the customer be able to write tests and enjoy it. -- Mostly human
  • Different test notations for different user groups. -- Human problem, technical solution?
  • Acceptance/functional tests good for communication and automation. -- Human problem, technical solution?
  • Model the time domain: "... and 3 months later an email." -- Mostly technical? (Don't want execution to take 3 months.)
  • Change touch-point dynamically. -- ?
  • Terminology (test or not a test?) -- Human



Partially Solved

  • Understand what has not been tested. -- Human and technical
  • Trace tests into project management tools -- Human and technical
  • Getting buy-in for the need to automate (docs, tests, specs). -- Human and technical
  • Accurately & completely communicating requirements. -- Human problem, partial technical solutions
  • Satisfying every role's need/desire to be at the center, in control. -- Human
  • Having functional tests specify requirement specifications. -- Human and technical
  • Sustain a productive conversation with all stakeholders. -- Human problem, partial technical solution
  • How do we get across what the project would feel like if things were going well? -- Human
  • Write robust (UI) tests that are not brittle. -- Mostly technical
  • Fragile tests. -- Mostly technical
  • Valuing individuals and interactions over processes and tools. -- Human
  • Describing customer intent. -- Human
  • Executable (as tests) models (as specifications) -- Human and technical
  • Test partitioning -- Human and technical
  • Tests as support artifacts. -- Human and technical
  • Test generation automation. -- Human and technical
  • Running tests in parallel (fast feedback). -- Mostly technical
  • Reconciling preferred styles of abstraction. -- Human and technical
  • Dealing with size. -- Human
  • Finding the right words in which to write a test. -- Human
  • Express requirements in the domain language: graphical, word-based, table-based. -- Mostly technical
  • Functional test-driven development... not just for agilists. How to sell to waterfallists? -- Mostly human
  • How to integrate tools? -- Mostly technical
  • Common test case format. -- Human and technical
  • Cooperation and collaboration between tool developers (tool stack, tool platform). -- Human and technical
  • Change from one notation to another (graphic -> tabular). -- Mostly technical
  • Test/example -> model (generated model-based tests). -- Mostly technical
  • IDE for testers and BAs -- Human and technical
  • Get all roles actively involved -- Mostly human
  • View specs/examples/tests differently for different roles. -- Mostly technical
  • Different editors for different roles? -- Mostly technical
  • Super-power IDE -- Technical solution to human problems?
  • Test refactoring -- Human and technical
  • Refactoring tests -- Human and technical
  • Prioritize and execute tests to get faster feedback. -- Mostly human
  • Describe a test at an appropriate level of abstraction. -- Mostly technical
  • Choosing what to automate (when you can't automate everything). -- Mostly technical
  • Tools to support exploratory testing -- Human and technical
  • Reusable test artifacts (poor modularity/cohesion). -- Mostly technical
  • Build community with BAs -- Mostly human
  • Allow for refactoring from/to code <-> tests <-> requirements. -- Mostly technical
  • Set up a wiki to discuss smaller problem solving. -- People
  • Capture war stories + testimonials + experiences. -- Human, partial technical solution
  • Test maintenance -- Human and technical
  • Book: Patterns of Self-Testing Software -- ?
  • Shared vocabulary around parts of a functional testing solution ("Fixture", etc.) -- Human
  • Ensure adequate test coverage. -- Mostly technical
  • Communication of what has been tested -- Human and technical
  • Communication using the ubiquitous language. -- Human



Solved

  • Express automated/automatable tests in tables -- Mostly technical
  • Correctly implementing programmer intent -- Mostly technical
  • Deliver software to test that doesn't crash immediately -- Mostly technical
  • Provide traceability between story or requirement and acceptance/functional test -- Mostly technical
  • GUI testing (functional testing is more than GUI testing) -- Technical
  • Data-driven testing -- Technical
  • Driving apps -- Technical
  • Integrating test executors/drivers with the build process -- Technical
  • Edit Fit tests from Eclipse -- Technical
  • Report results -- Technical to report, human to be understood
  • Express expectations in code -- Human and technical
  • Unit testing -- Mostly technical



I think we can solve the technical issues and use technical solutions to help people manage some of the people problems. Applying discipline and skill to solve the technical problems may help add to testers' ease and joy.

Where do you think the solutions lie? Have a solution? Please share it.


October 14, 2007

Better Tools for Individuals through Collaboration

Posted by Ben Simo


Individuals and
interactions
over
processes and tools


I spent the second half of last week at the Agile Alliance Functional Testing Tools Visioning Workshop. (How's that for a long name?) Before the workshop, I was thinking that it seemed a little oxymoronic to have an agile workshop with a focus on tools. Perhaps my thinking was triggered by my concerns about those who seem to value "agile" processes and tools (often ones they sell) more than people.

Agile people are supposed to care about people and not care about tools. Right? Wrong.

while there is value in the items on
the right, we value the items on the left more

Software is developed by people for people. Agility involves building better software by adapting to the needs of people instead of letting processes and tools lead the way. Processes and tools do best in a supporting role. Better tools can support agility, but they cannot make anyone agile.

The tool-centric discussions at the workshop were driven by a desire to build better software for the people who build and test software. It is about people.

It then seems quite apropos that the book I indiscriminately grabbed off the shelf (well, I picked it for its size more than its content) to read on the airplane to and from the workshop is Ben Shneiderman's Leonardo's Laptop: Human Needs and the New Computing Technologies. The first chapter contains the following paragraphs that affirm my thinking about the role of automation in software testing. (Emphasis is mine.)
The first transformation from the old to the new computing is the shift in what users value. Users of the old computing proudly talked about their gigabytes and megahertz, but users of the new computing brag about how many e-mails they sent, how many bids they made in online auctions, and how many discussion groups they posted to. The old computing was about mastering technology; the new computing is about supporting human relationships. The old computing was about formulating query commands for databases; the new computing is about participating in knowledge communities. ...

The second transformation to the new computing is the shift from machine-centered automation to user-centered services and tools. Instead of the machine doing the job, the goal is to enable you to do a better job. Automated medical diagnosis programs that do what doctors do have faded as a strong research topic; however, rapid access to extensive medical lab tests plus patient records for physicians are expected, and online medical support groups for patients are thriving. ... Natural language dialogs with computerized therapists have nearly vanished, but search engines that enable users to specify their information needs are flourishing. The next generation of computers will bring even more powerful tools to enable you to be more creative and then disseminate your work online. This Copernican shift is bringing concerns about users from the periphery to the center. The emerging focus is on what users want to do in their lives.
- Ben Shneiderman, Leonardo's Laptop
Although many think of software testers and developers as eccentric nerds, we are human too. Like other humans, we desire tools that help us do a better job. This was the theme of the workshop: envisioning ways that tools can help us do a better job testing software.

After reviewing existing tools used by agile teams, we identified software testing issues that have been solved (yellow), those that have been partially solved (orange), and those that have not been solved (pink). As I recollect, most of the solved issues were technical problems and most of the unsolved problems were people problems. Many of the partially solved problems were those for which I believe we have technical solutions that have not yet been integrated and presented in ways that best support people. Much of the "what's next" discussion at the workshop focused on how to integrate existing tools that each partially solve problems but together could move problems to the solved group.

Once the technical problems are solved, we can work on the tools to help with the people problems: we can move from old computing to new computing.

In Leonardo's Laptop, Ben Shneiderman presents a framework for integrating creative activities of people. This framework for mega-creativity consists of four activities:
  • Collect: Learn from what exists
  • Relate: Consult with peers and mentors
  • Create: Think; explore solutions
  • Donate: Disseminate the results and contribute
This is not a waterfall process. It is an iterative, interactive framework for innovation. Shneiderman's book focuses on the need to develop software to support this framework. The participants in the functional test tool workshop focused on the need to develop testing software to support those developing software to support this framework. And in doing so, we exhibited this framework in action -- without even identifying the framework. (I read about the framework on the plane home from the workshop.)

Gathering people that are interested in and working on solutions together accelerates the collection, creation, and donation. I expect great things to come from this gathering.

My thanks and appreciation go to the Agile Alliance for sponsoring this workshop, and to Ron Jeffries, Elisabeth Hendrickson, and Jennitta Andrea for organizing it.

Let's keep the innovation ball rolling and build "new computing" tools.





October 5, 2007

Are you smarter than a 3rd grader?

Posted by Ben Simo

"I guess you could say I like to figure out how stuff works, I just like new adventures."
- Carson Page, 8 year old junior beta tester
[Photo: Carson Page, 8, junior beta tester. Credit: Rodolfo Gonzalez, American-Statesman]

Good testers can be hard to find. It looks like Actel Corp has found a good one. He is young. He is smart. He has excellent growth potential. And he works cheap -- for now.

Check out these stories:
I suspect that this kid does not know many testing buzzwords. I suspect he doesn't know much about testing tools and processes. However, Carson knows how to ask "why?" and communicate with engineers.
"We would ask what he liked and didn't like about it and he could explain it on a very high-end level."
- Mark Nagel, Actel Corp, Field Applications Engineer

A tester that can think, ask questions, and communicate can go far.


October 2, 2007

Imaginary Testing

Posted by Ben Simo

Ever thought of a test that you haven't executed? Ever wonder if it might be valuable to add a use case, test case, or test charter? Ever feel like you spend more time talking about how to solve problems than it would take to try some of the proposed solutions?

I've been involved in discussions about software bugs that took more time than it would have taken to fix the bug and retest. I understand that it is important to consider the risks associated with a bug and any proposed solutions. However, sometimes we just need to do it.

I've been in test planning meetings that take more time than it would take to execute the proposed tests. I've also wasted time performing unnecessary tests. The problem is that the usefulness of a test is usually not known until we have the results from that test.

When it makes sense, stop hypothesizing and start testing.

Imaginary testing is unreliable.
