July 12, 2007

Woodpeckers, Pinatas, and Dead Horses

Posted by Ben Simo

Here are some short blurbs on a few things I took away from CAST sessions.

From Lee Copeland's keynote address:

  • "It's nonsensical to talk about automated tests as if they were automated human testing."
  • Write or speak about something you're knowledgeable and passionate about.
  • Combine things from multiple disciplines.

From Harry Robinson's keynote address:
  • Weinberg's Second Law: If Builders Built Buildings The Way Programmers Write Programs, Then The First Woodpecker That Came Along Would Destroy Civilization.

From Esther Derby's keynote:
  • To successfully coach someone, they must want to be coached and want to be coached by you.

From James Bach's tutorial:
  • Pinata Heuristic: Keep beating at it until the candy comes out. ... and stop once the candy drops.
unless ...
  • Dead Horse Heuristic: You may be beating a dead horse.
yet beware ...
  • If it is a pinata, don't stop beating at it until the candy drops; but if it is a dead horse, your beating is bringing no value. It can be a challenge to determine if it's a pinata or a dead horse.

From Antti Kervinen's presentation:
  • Separate automation models into high level (behavior) and low level (behavior implementation) components to reuse test models on a variety of platforms and configurations.
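The separation Antti described can be sketched in a few lines of code. Everything below is an illustrative assumption of mine, not Antti's actual framework: a high-level behavior written once, run against interchangeable low-level platform implementations.

```python
# Hypothetical sketch of splitting test automation into a high-level
# behavior model and low-level, platform-specific implementations.
# All class and method names here are illustrative assumptions.

class DesktopActions:
    """Low-level component: how each action is performed on one platform."""
    def start_chat(self):
        return "desktop: chat window opened"
    def send_message(self, text):
        return f"desktop: sent {text!r}"

class MobileActions:
    """The same behaviors, implemented for a different platform."""
    def start_chat(self):
        return "mobile: chat screen opened"
    def send_message(self, text):
        return f"mobile: sent {text!r}"

def chat_behavior(actions):
    """High-level component: the behavior under test, written once.
    It runs unchanged against any low-level implementation."""
    return [actions.start_chat(), actions.send_message("hello")]

# The one behavior model reused on two "platforms":
print(chat_behavior(DesktopActions()))
print(chat_behavior(MobileActions()))
```

Swapping in a new configuration then means writing only a new low-level class; the behavior models are reused as-is.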

More from James Bach's tutorial:
  • Testing does not break software. Testing dispels illusions.
  • Rational Unified Process is none of the three. (attributed to Jerry Weinberg)

From the tester exhibition:

  • Testing what can't be fixed or controlled may be of little value. Some things may not be worth testing.
  • There is great value in the diversity of approaches and skills on a test team.
  • It may be possible to beat a dead horse and test (and analyze) too much. Sometimes we should just stop testing and act on the information we have.

From Doug Hoffman's tutorial:
  • Record and playback automation can be very useful for testing for the same behavior with many configurations. And, once the script stops finding errors: throw it out.

From Keith Stobie's keynote:
  • Reduce the paths through your system to improve quality. Fewer features may be better.
  • Free web sites often have higher quality than subscription sites. This is because it is easy to measure the cost of downtime on ad-supported systems.

From David Gilbert's session:
  • People expect hurricanes to blow around and change path. We should expect the same with software development projects. (David has some interesting ideas about forecasting in software development.)
  • Numbers tell a story only in context. You must understand the story behind the numbers.

One more from James:
  • Keep Notes!


What did you take away from CAST?


4 Comments:

July 19, 2007  
Pradeep Soundararajan wrote:

What did you take away from CAST?

Before reading this post:
"I didn't take anything from CAST because I couldn't attend it."

After reading this post:
"There is valuable information and ideas I took from CAST although I couldn't make it there"

Pinata Heuristic usage: I keep pushing hard enough each year to make it to CAST. Hopefully the candy comes out next year.

Thank you!

July 22, 2007  
Rahul Verma wrote:

Hi Ben,

I used to wonder what some people keep noting down during such conferences. I don't have this habit so far, but after reading this post, I strongly feel that I should start cultivating the habit of making notes.

I am sure that these were some of the best points from the presentations at CAST. Thanks for sharing them.

Regards,
Rahul Verma

July 25, 2007  
Roberta wrote:
This comment has been removed by a blog administrator.
July 25, 2007  
Ben Simo wrote:

@Pradeep: You were missed. I heard your name a few times during the conference. Keep smacking that pinata. :)

@Rahul: I'm not a great note taker. I used to think that I had to record every bit of information to have good notes. I have since learned that jotting down a few highlights that I find interesting helps me recall the rest of the presentation.

Now that I think about it, I recall that Harry Robinson referred to Jerry Weinberg's second law in the tester expo. I don't remember him mentioning it in his keynote. So here's something from Harry's keynote:

Harry did a great job of demonstrating the difficulty of creating and managing test cases -- especially for automation -- prior to test execution. After creating just a few test cases for the Google Talk functionality for creating and ending chat rooms, it was easy to lose track of what was and was not covered, and the cases designed so far covered very little of the behavior. Harry then showed how model-based automation could be used to generate and execute test cases based on a few simple rules.
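To make the "few simple rules" idea concrete, here is a toy sketch of the technique: a state machine for creating and ending a chat room, random-walked to generate test sequences. The states, rules, and walk are my own illustrative assumptions, not Harry's actual Google Talk model.

```python
import random

# Toy model of create/end chat-room behavior as a state machine.
# RULES maps each state to the actions legal in it and the state
# each action leads to. These rules are illustrative assumptions.
RULES = {
    "no_room": {"create_room": "in_room"},
    "in_room": {"end_room": "no_room", "invite": "in_room"},
}

def generate_test(steps, seed=0):
    """Random-walk the model to generate one test sequence.
    Each step picks any action that is legal in the current state."""
    rng = random.Random(seed)
    state, actions = "no_room", []
    for _ in range(steps):
        action = rng.choice(sorted(RULES[state]))
        actions.append(action)
        state = RULES[state][action]
    return actions

print(generate_test(5))
```

Instead of hand-maintaining a pile of scripted cases, the rules are maintained once and fresh sequences are generated on demand, which is what makes coverage easier to reason about.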

Harry also showed a great use for bitmap comparisons in GUI test automation. I've never been a fan of bitmap comparison features in tools because I have found them to be more trouble than they are worth. Now I know of a good use for fuzzed bitmap comparisons. (And it is not comparing bitmaps to an expected result that is defined before test execution.)

Harry's demonstration reinforced my reasoning for describing behavior for automation instead of scripting test steps. He also gave me some new ideas for describing and selling model-based automation to those who are accustomed to scripted tests.