July 6, 2007

I'm a user and I just did that

Posted by Ben Simo


Michael Bolton just blogged about an all-too-common exchange between testers and developers that often goes something like this:

TESTER: I found this really important bug. Look at this. Let me show you ...

DEVELOPER: No user would do that.

TESTER: But, I'm a user and I just did that.

DEVELOPER: But, the real users won't do that.

Michael states that what the developer really means is "No user that I've thought of, and that I like, would do that on purpose." This is very true. Michael also points out that we testers are not the real users and may do things that the real users are not expected to do.

Thinking of users that the developer did not think of is an important service we testers provide. This becomes especially important when we put applications on the Internet. We need to consider the users and user behavior that the developers did not consider. I believe it is our responsibility as testers to tactfully provide the development team with the information they need to make an informed decision about what real users might do. We can't stop the conversation at "I just did that and I'm a user." We can communicate the likelihood and impact of users doing what the developer's friendly users won't do.

Michael's post reminds me of two such recent exchanges.

The first was a bug that resulted from developers (and project managers, and business folks) assuming that all users would perform an activity in only one of several possible ways. The application worked as expected if users behaved as the development team expected. However, a user who did not behave as expected would be locked out of future access to the system. Paying customers aren't likely to be happy when they can't access the service for which they paid. After an initial exchange that went much like the sample above, it took only a few minutes of my time to document how easy it was to get locked out of the system and why real users would not be happy about it. It also took only a few minutes to walk a project manager through the process of locking himself out of his own system. The design was changed to account for the user behavior that was not originally considered.
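
To make the pattern concrete, here is a minimal hypothetical sketch of this class of bug. The real system's states and events were never detailed, so every name below is an illustrative assumption; what the sketch does share with the real bug is its shape: the design assumes one happy path, and one unconsidered transition dead-ends the user.

```typescript
// Hypothetical sketch of the failure pattern. The real system's states and
// events were not published; these names are illustrative only.
type AccountState = "new" | "activating" | "active" | "locked";

function nextState(state: AccountState, event: string): AccountState {
  switch (state) {
    case "new":
      return event === "start" ? "activating" : state;
    case "activating":
      if (event === "finish") return "active";
      // The design assumed every user finishes activation in one sitting.
      // Abandoning the flow (closing the browser, clicking Back) is treated
      // as a failed attempt, and nothing ever transitions out of "locked".
      return "locked";
    default:
      return state;
  }
}

// A user who wanders off the happy path is locked out for good:
let s: AccountState = "new";
s = nextState(s, "start"); // "activating"
s = nextState(s, "back");  // "locked" -- and no event can undo it
```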

The second was a bug that caused errors in a web application when a user selected list items in an unexpected manner. Further investigation revealed that a single user's actions could degrade performance for all users of all applications residing on the application server processing that user's activity. A single user could peg the application server's CPUs with the flick of a finger. The cause of the errors and performance issues was that this method of selecting list items triggered numerous page refresh requests as the user viewed the available options. Development's first response was that no user would do that. I then explained that I am a user of web applications and I often do what caused the problem. I wasn't even intentionally testing the functionality with the bug when I discovered it. It was only after demonstrating how easy it was to cause the problem, and its impact on other users, that the problem was addressed. Had I stopped at the initial exchange, the problem would not have been fixed. Had I just stood my ground and pointed fingers while insisting that it be fixed, the problem may also not have been fixed.
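
For readers who want to see the mechanism, here is a minimal sketch of the pattern. The element id and the delay value are my own illustrative assumptions, not details from the actual application: wiring a server request to every change event means arrow-keying through a long list fires one round trip per keystroke, while deferring the request until the selection settles sends only one.

```typescript
// Minimal sketch of the bug pattern; the element id and the 300 ms delay
// are illustrative assumptions, not details from the actual application.
const list = document.querySelector("#itemList") as HTMLSelectElement;

function refreshPage(): void {
  // Full page refresh keyed to the current selection.
  window.location.search = `?item=${encodeURIComponent(list.value)}`;
}

// Problem pattern: one refresh request per "change" event. Arrow-keying
// through the options fires this once per keystroke, so a "flick of a
// finger" queues dozens of server round trips.
// list.addEventListener("change", refreshPage);

// One possible fix: let the selection settle before asking the server,
// so only the final selection triggers a request.
let settleTimer: number | undefined;
list.addEventListener("change", () => {
  window.clearTimeout(settleTimer);
  settleTimer = window.setTimeout(refreshPage, 300);
});
```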

When I hear that no user would do something, I usually respond with "I just did that, and I'm a user," but I don't stop there. Like good attorneys, we need to convince our audience of our case for fixing a bug or adding an enhancement.

Bugs need good solicitors.


3 Comments:

July 06, 2007  
Anonymous wrote:

Sometimes I feel like I'm in a time warp. Bolton's conversation is something I heard often ten years ago, but rarely - if ever - today. I am sorry (and embarrassed for the profession) that it still happens.
It doesn't matter whether a user would ever do that: too many of these types of bugs end up as exploits or denial-of-service attacks for this argument to ever hold up.

July 06, 2007  
cloosley wrote:

Ben,
This is so familiar. About 20-25 years ago, when I worked at IBM Santa Teresa Lab (now renamed to Silicon Valley Lab), my friend and colleague Mike Golding always used to send me his latest programmer tools to try out.

Now I am pretty sure that anyone who knows Mike (now retired from IBM) would agree that he is a very good programmer. But since I usually had very little idea of his tool's intended use, I would often crash his code after entering a few random responses.

Naturally, Mike would then protest that "you're not supposed to do that." But he never complained, and always sent me his latest code with the expectation that -- as a dumb user -- I would uncover bugs that he could not even imagine.

Maybe that attitude is what separates a good developer from a great one, because I have no doubt that Mike Golding was one of IBM's most talented developers.

July 07, 2007  
Ben Simo wrote:

@Alan,

Some organizations mature better than others. I don't see any reduction over the past ten years in the frequency of these conversations; but I do see that it is often easier to get bugs fixed by bringing up the security implications. People seem to understand security risks once they are explained.

These are exactly the types of bugs that become security holes, allowing unfriendly users to do things that impact business operations. Sometimes these things can also be triggered by friendly users doing what they were not expected to do. I agree that anyone in software development these days should be aware of the security risks. The truth is that too many people don't seem to think of security; they put on their "functional" development and testing blinders. Applications need to do what the functional requirements require AND they need to not do what they should not do.

I remember testing a system many years ago that had a flaw in its search functionality. It was possible for a user to "crash" the server by performing a search that returned "too many" documents. Instead of fixing the bug, users of this internal application were instructed not to perform searches that returned more than a number of results that had been determined to be safe. These SME users were likely to have a better idea than I, as a tester, of how many results a search might return. However, it was silly to expect users to always know how many results to expect; and it was foolish to expect all users (even internal users) to behave in a friendly manner.
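
The obvious defense is to cap result sets in the server code rather than in a memo to users. Here is a minimal sketch of that idea; the function names, types, and the limit of 500 are my illustrative assumptions, not the actual system's design:

```typescript
// Minimal sketch of a server-side defense: cap result sets in code instead
// of asking users to guess which searches are "safe". All names and the
// limit are illustrative assumptions, not the original system's design.
const MAX_RESULTS = 500;

interface Doc {
  id: string;
  text: string;
}

interface SearchPage {
  results: Doc[];
  totalMatches: number;
  truncated: boolean; // lets the UI say "showing first 500 of N matches"
}

function searchDocuments(query: string, docs: Doc[]): SearchPage {
  const matches = docs.filter((d) => d.text.includes(query));
  return {
    results: matches.slice(0, MAX_RESULTS), // never return an unbounded set
    totalMatches: matches.length,
    truncated: matches.length > MAX_RESULTS,
  };
}
```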

Let's see... Joe is late completing his document edits. To create an excuse for missing his deadlines, he knows he can perform a search that will take the system down. That should give him some time to catch up, and he can paste his new data into the system after it has been restarted. The problem is that if this unscrupulous Joe were to follow through with such a plan, he could impact other important work and data. And that's just an example of selfish ambition, not malicious action against the company or others.

This problem was eventually fixed after some "important people" encountered the "limitation".
