May 30, 2007

When testers create bugs

Posted by Ben Simo

How come dumb stuff seems so smart while you're doing it?
- Dennis the Menace

Debasis Pradhan's blog entry Testers don't make Bugs. Oh Really? got me thinking about a time when I, as a tester, actually introduced a bug into a system. Debasis' post is about bugs that slip by testers and escape into the wild. That is not the case in my story: I asked the developers to put a bug in the software, and they followed my instructions.

I was testing a data mastering system that assembled and converted data from a data repository's format to a variety of other formats for distribution to customers and inclusion in a variety of software products. I created a data validation tool that was used to inspect the huge volume of transformed data: comparing the actual output of the mastering system to the expected format and presentation. The validation tool also performed some heuristic-based tests that alerted testers and developers to data that may require manual inspection.
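The tool described above did two kinds of checking: an exact comparison of actual against expected output, and heuristic checks that flagged data for manual inspection. A minimal sketch of that idea might look like the following. This is purely illustrative; the function name, record structure, and the particular heuristics (smart quotes and stray control characters, the sort of thing copy-and-paste tends to introduce) are my assumptions, not the actual tool.

```python
# Hypothetical sketch of a data validation tool of the kind described:
# compare transformed records against expected output, and separately
# flag records that match exactly but trip a heuristic worth a manual look.

def validate(records, expected):
    """Compare actual transformed records to expected ones.

    records, expected: dicts mapping a record key to its text content.
    Returns (mismatches, warnings): hard failures and heuristic flags.
    """
    mismatches = []
    warnings = []
    for key, actual in records.items():
        want = expected.get(key)
        if actual != want:
            mismatches.append((key, want, actual))
            continue
        # Heuristic checks: not failures, but likely signs of text pasted
        # in from other applications (smart quotes, control characters).
        if any(ch in actual for ch in "\u201c\u201d\u2018\u2019"):
            warnings.append((key, "smart quotes -- inspect manually"))
        elif any(ord(ch) < 32 and ch not in "\t\n" for ch in actual):
            warnings.append((key, "control characters -- inspect manually"))
    return mismatches, warnings
```

The point of the split is that a mismatch is a defect report waiting to happen, while a warning only routes a record to a human reviewer rather than failing the run outright.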

Over the course of many months, I reported numerous data transformation defects. Most of these were due to input data that the developers did not expect. (Interesting things happen when data authors copy and paste text from a variety of other applications.) Some of my reported defects were fixed, but many were rejected as "data issues". I eventually figured out that this was the label the development group gave to problems that could be resolved by changing the data. Instead of implementing a lasting fix in the code, they insisted that the users change the input data.

In some cases, another development group created post-processing scripts to fix the data created by the first system -- defeating one of the goals for the development of the first system: consolidating a multitude of mastering processes into a single system.

I continued reporting these problems and worked with development to fix the most important of the "data issues". Then one run of the data validation tool reported thousands of instances of a new error. I reported the problem and was told that the data was formatted exactly as I had requested.

Sure enough, I had previously listed the badly formatted data I was seeing as the "expected result" in a bug report. One of the few times that one of these "data issues" was fixed in the code, it was fixed wrong, and it was my fault. Normally, the project team would have reviewed the report and verified my expected transformation before coding any changes. This time, however, my improving credibility with development hurt me: I was trusted, and my mistyped expected result was implemented as written. I had worked hard to earn the respect of some of the developers and feared that this mistake could set back some of that good will.

I worked with a developer to undo my mistake. The developers got a good laugh out of it. I was humbled. The bug was removed in a subsequent build.

Be careful what you ask for. You just may get it. Double-check those bug reports. And if you make a mistake, admit it and help fix it.

All men make mistakes, but only wise men learn from their mistakes.
- Winston Churchill



May 31, 2007  
Anonymous wrote:

Hi Ben,

I think I can claim to be a regular reader of your blog, considering that I visit and read your posts at least once a week. But I was informed about this particular post of yours by one of my regular readers.

I am honoured to be mentioned by you in your blog. But the credit for my post should actually go to Mr. James Lyndsay. I gathered the insight after reading his excellent article "Things Testers Miss". I just wanted to clarify that.

Thanks & Regards,
Software Testing Zone

May 31, 2007  
Ben Simo wrote:

Thanks for reading.


And for anyone looking for James Lyndsay's article: it may be found here.