June 15, 2007

Modeling the Windows Calculator: Part 2

Posted by Ben Simo

Adding Basic Validations

In the previous post, I created a simple model for starting and stopping the Windows calculator, and for switching between standard and scientific modes. I then wrote the code needed to execute the model and ran a test that hit each of the defined actions once.
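The model itself is little more than a state-transition table. Here is a rough sketch of that structure in Python; the state and action names echo Part 1, but the representation is my illustration, not the MBTE's internal format:

    # Minimal sketch of the calculator model as a state-transition
    # table: each entry maps (current state, action) to the next state.
    MODEL = {
        ("stopped", "start"): "standard",
        ("standard", "stop"): "stopped",
        ("scientific", "stop"): "stopped",
        ("standard", "switch_to_scientific"): "scientific",
        ("scientific", "switch_to_standard"): "standard",
    }

    def actions_from(state):
        """Return the actions available in a given state."""
        return [action for (s, action) in MODEL if s == state]

    print(actions_from("standard"))  # ['stop', 'switch_to_scientific']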

As the next step, I reran the test with the MBTE configured to capture GUI object information as it executed. This produced a checkpoint table, which I ran through a script that removes duplicate rows and combines rows that are identical across multiple states. I also manually reviewed the table to verify that the captured results matched my expectations, and made a few tweaks where they did not. I can now use this checkpoint table as input for the next test execution. You may view the edited checkpoint file using the link below.
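The dedup-and-combine step is simple in principle. Here is a rough sketch, assuming the captured table is a CSV with columns named state, object, property, and expected (hypothetical names; the MBTE's actual headers may differ):

    import csv
    from collections import OrderedDict

    def merge_checkpoints(in_path, out_path):
        """Remove duplicate checkpoint rows and combine rows that are
        identical except for state into one row listing all states."""
        merged = OrderedDict()
        with open(in_path, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["object"], row["property"], row["expected"])
                if key in merged:
                    states = merged[key]["state"].split(";")
                    if row["state"] not in states:
                        merged[key]["state"] += ";" + row["state"]
                else:
                    merged[key] = dict(row)
        with open(out_path, "w", newline="") as f:
            out = csv.DictWriter(
                f, fieldnames=["state", "object", "property", "expected"])
            out.writeheader()
            out.writerows(merged.values())

    merge_checkpoints("captured.csv", "checkpoints.csv")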

The checkpoint table is one of two table formats that I use for defining test oracles. I call the other format a state table. State tables contain one validation per row and have additional fields for creating user-friendly descriptions of each validation; they can also reference external code for complex results validation. Checkpoint tables contain one GUI object per row, with columns defining the properties to validate and their expected values. While not as user-friendly as state tables, checkpoint tables are easy to generate automatically during test execution and to reuse as input for future tests.
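For instance, a few hypothetical checkpoint rows, using the same illustrative columns as the sketch above (the object names and values are made up, not captured data):

      state       object    property  expected
      ----------  --------  --------  --------
      standard    btn_sqrt  enabled   1
      standard    btn_sqrt  text      sqrt
      scientific  btn_sin   enabled   1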

My calculator checkpoint table currently contains only positive checks: it verifies that expected objects appear as expected, but contains nothing to ensure that the unexpected does not occur. For example, it has no check to confirm that the calculator actually stops when the window is closed.

I then created a state table and added two oracles stating that the calculator window should exist when running and not exist when stopped. I gave each of these a failState value of "restart" to indicate that if these checks fail, the application should be restarted to resume testing.
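A test driver honoring failState might look something like this sketch; the field and function names are mine, not the MBTE's:

    def check_state_oracles(oracles, app_state, calc_window_exists):
        """Evaluate the state-table oracles that apply to the current
        application state; return 'restart' if a failed oracle's
        failState says the application must be restarted."""
        for oracle in oracles:
            if oracle["state"] != app_state:
                continue
            if calc_window_exists() != oracle["expected"]:
                print("FAIL:", oracle["description"])
                if oracle["failState"] == "restart":
                    return "restart"
        return "ok"

    oracles = [
        {"state": "running", "expected": True,  "failState": "restart",
         "description": "Calculator window should exist"},
        {"state": "stopped", "expected": False, "failState": "restart",
         "description": "Calculator window should not exist"},
    ]

    # calc_window_exists would wrap a real GUI query; stubbed here.
    print(check_state_oracles(oracles, "running", lambda: True))  # ok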

My model currently contains the following files:

I then ran a test with this model. The MBTE executed a test that hit each of my test set actions once without me needing to give it a sequence of test steps. The MBTE automatically generated the test steps based on the model.
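Under the hood, this is a graph problem: states are nodes, actions are edges, and a test that hits each action once is a walk that covers every edge. Here is one way such a walk could be generated, sketched in Python; the MBTE's actual algorithm may differ:

    from collections import deque

    def cover_all_actions(model, start):
        """Generate a walk that takes every (state, action) edge in
        the model at least once."""
        unvisited, state, steps = set(model), start, []
        while unvisited:
            for action in path_to_unvisited(model, unvisited, state):
                steps.append(action)
                unvisited.discard((state, action))
                state = model[(state, action)]
        return steps

    def path_to_unvisited(model, unvisited, start):
        """BFS for the shortest action sequence ending in an unvisited edge."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            for (s, action), nxt in model.items():
                if s != state:
                    continue
                if (s, action) in unvisited:
                    return path + [action]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [action]))
        raise ValueError("some actions are unreachable")

    MODEL = {
        ("stopped", "start"): "standard",
        ("standard", "stop"): "stopped",
        ("scientific", "stop"): "stopped",
        ("standard", "switch_to_scientific"): "scientific",
        ("scientific", "switch_to_standard"): "standard",
    }
    print(cover_all_actions(MODEL, "stopped"))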

The results from this test execution may be viewed here. Some features in the results require Internet Explorer and may not function in other browsers. These results are usually placed on a file server, so there may be issues I have not yet noticed when accessing them from a web server.

There are some failures reported in the results. These appear to be tool issues rather than bugs in the Windows Calculator. I will look into these failures later. Do you have any ideas about the failures?

The color-coded HTML results make it easy to see what happened. Each row shows the action or validation performed, where it was defined, the code executed, and other pertinent information. Please explore the results and send me any feedback.

What would you like to add to this test next? More validations? Additional actions?

Do you have any observations or questions about this automation approach? Please add them to the comments.




2 Comments:

June 18, 2007  
PlugNPlay wrote:

Hi Ben - The initial postings are very helpful, and I'm especially happy to have a copy of the Excel sheets that contain the modeling info.

Your daughter seems to have outwitted me, though; I don't see a bug in the calculator app from what the model shows me. Arg!

I am confused about the "if" statements guarding the detect transitions. They both seem able to be true at the same time: one asks if the value is greater than 0 and the other asks if the value is less than 1, and those conditions are not mutually exclusive.

Geordie

June 18, 2007  
Ben Simo wrote:

Hi Geordie,

The guards for the detect transitions are exclusive because the function should only return an empty string or a zero for false, and a one for true. It probably would have been better for me to code this as equals one ( == 1 ) and does not equal one ( != 1 ). It works as coded, but it's not very user-friendly code. Thanks for pointing this out.
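To make that concrete, here is the logic sketched in Python. This is illustrative only; in Python, unlike the scripting language the guards are written in, comparing an empty string to a number is an error, so the sketch shows the false case as 0:

    def detect():
        # Stub: the real function inspects the GUI and returns an
        # empty string or 0 for false, and 1 for true.
        return 1

    # Truth table for the guards as written (false case shown as 0):
    #   detect()   detect() > 0   detect() < 1
    #      0          False           True
    #      1          True            False
    # The guards are exclusive only because detect() never returns a
    # value strictly between 0 and 1.  The intent reads better as:
    if detect() == 1:
        print("calculator window detected")
    else:  # detect() returned an empty string or 0
        print("calculator window not detected")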

The potential bug that my daughter noticed does not show up in these results. You'd have to watch the test execute, or enable screen shot captures for every step, to notice it. So maybe it's not really a bug; or maybe it is a symptom of a bigger problem. Bug or not, it is an unexpected difference.

My daughter has not outwitted you. Instead she has demonstrated a sapient power of the human mind: she noticed unexpected behavior that was not part of the explicit model. She didn't even have any written requirements to use for her "testing". She noticed something she did not expect and asked me why it was doing what she saw.

Automation will only do what we tell it to do and will only report what we tell it to report. Model-based automation recovers from errors better than traditional scripted automation does, but it is still limited by the explicit model. Automation can easily miss something that even a child notices.

Now that I've seen this potential bug, I can add a validation to my model to look for it.

Ben