May 30, 2007

When testers create bugs

Posted by Ben Simo

How come dumb stuff seems so smart while you're doing it?
- Dennis the Menace


Debasis Pradhan's blog entry Testers don't make Bugs. Oh Really? got me thinking about a time when I, as a tester, actually introduced a bug into a system. Debasis' post is about bugs that slip by testers and escape into the wild. That is not the case in my story: I asked developers to put a bug in the software, and they followed my instructions.

I was testing a data mastering system that assembled and converted data from a data repository's format to a variety of other formats for distribution to customers and inclusion in a variety of software products. I created a data validation tool that inspected the huge volume of transformed data by comparing the actual output of the mastering system to the expected format and presentation. The validation tool also performed some heuristic-based tests that alerted testers and developers to data that might require manual inspection.
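That validation tool is long gone, but a rough sketch of the idea might look something like this (the format rule, field names, and checks are made up for illustration):

    import re

    # Hypothetical expected output format: part numbers like "AB-1234"
    PART_NUMBER_FORMAT = re.compile(r"^[A-Z]{2}-\d{4}$")

    def validate_record(record):
        """Return a list of (status, message) issues found in one transformed record."""
        issues = []

        # Hard check: does the transformed output match the expected format?
        if not PART_NUMBER_FORMAT.match(record.get("part_number", "")):
            issues.append(("FAIL", "part_number does not match the expected format"))

        # Heuristic check: flag data a human should look at rather than failing outright
        description = record.get("description", "")
        if any(ord(ch) > 126 for ch in description):
            issues.append(("REVIEW", "description contains non-ASCII characters, "
                                     "possibly pasted in from another application"))
        return issues

    # Example usage
    for status, message in validate_record({"part_number": "AB-12345",
                                            "description": "Widget \u2013 deluxe"}):
        print(status, message)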

Over the course of many months, I reported numerous data transformation defects. Most of these were due to input data that the developers did not expect. (Interesting things happen when data authors copy and paste text from a variety of other applications.) Some of my reported defects were fixed, but many were rejected as "data issues". I eventually figured out that this was the label the development group gave to problems that could be resolved by changing the data. Instead of implementing a lasting fix in the code, they insisted that the users change the input data.

In some cases, another development group created post-processing scripts to fix the data created by the first system -- defeating one of the goals of that system's development: consolidating a multitude of mastering processes into a single system.

I continued reporting these problems and worked with development to fix the most important of the "data issues". Then one run of the data validation tool reported thousands of instances of a new error. I reported the problem and was told that the data was formatted as I had requested.

Sure enough, I had previously listed the badly formatted data I was seeing as the "expected result" in a bug report. One of the few times that one of these "data issues" was fixed in the code, it was fixed wrong, and it was my fault. Normally, the project team would have reviewed the report and verified my expected transformation before coding any changes. This time, however, my improving credibility with development hurt: I was trusted, and my mistyped expected result was implemented. I had worked hard to gain the respect of some of the developers and feared that this mistake could set back some of that goodwill.

I worked with a developer to undo my mistake. The developers got a good laugh out of it. I was humbled. The bug was removed in a subsequent build.

Be careful what you ask for. You just may get it. Double-check those bug reports. And if you make a mistake, admit it and help fix it.

All men make mistakes, but only wise men learn from their mistakes.
- Winston Churchill



May 29, 2007

Where No Confabulation Goes Untested

Posted by Ben Simo

confer

verb
  • have a conference in order to talk something over


The Conference of the Association for Software Testing (CAST) is coming this July.

I missed last year's conference, but based on all the wonderful things I've heard from those who were there, I am looking forward to this year's.

CAST is different from most conferences, where people sit and listen to a presenter without publicly questioning what is presented. AST encourages testers to test the presentations: time is set aside for discussion at every presentation, and challenging ideas is encouraged. I could go on, but I don't think I can promote this conference any better than David Gilbert. Please take a look at David's blog post: CAST in stone.

CAST early bird registration ends this week. Register at
http://associationforsoftwaretesting.org/conference/registration.html

I hope to see you there.

Ben


May 24, 2007

Don't Ignore The Little Bugs

Posted by Ben Simo

And that's how it happened.
Believe me. It's true.
Because . . .
just because . . .
a small bug
went KA-CHOO!




One of my favorite children's books is also one of my favorite testing books. (Thanks go to Rob Sabourin* for alerting me to the testing connection.) Because a Little Bug went Ka-CHOO! tells the story of a multitude of problems that cascade from a little bug's sneeze. A worm gets mad. A turtle gets bopped. A bucket gets stuck. A policeman takes flight -- in a motorcycle sidecar. A boat nearly sinks. Pandemonium ensues.

This book illustrates how things that seem to be insignificant can have substantial lasting impact on a larger system.

Software bugs that appear to be trivial can be a sign of a larger problem. When we testers encounter bugs, we are usually looking at a symptom of a problem, not the underlying error that produced it. This requires some investigation to determine whether a bug is more serious, or more widespread, than it first appears.

After a bug is encountered, it is likely that the system under test is in an unexpected state -- and that unexpected state may lead to a bigger problem. Don't stop testing after you reproduce the bug. Look for bigger problems that may exist only after the bug is encountered.

Even if you can't find a bigger problem related to a bug, report it. Someone else may have knowledge about how it may impact other things in the system. At the very least, MIP it.


*Rob Sabourin wrote a great book titled I Am A Bug about software testing for children and others who may be technically challenged. The book is illustrated by his lovely daughter Catherine. An online version of the book may be viewed here.



May 22, 2007

Driving for quality

Posted by Ben Simo

Years ago, I taught defensive driving classes to people cited for violating traffic laws in Arizona. The central theme of my classes was that our attitudes behind the wheel often have a bigger impact on safe driving than our skill as drivers. I would start each class by having each student describe what they did to end up in my class and what other drivers do that annoys them. We then reviewed the two lists (which usually ended up being identical) and discussed whether each item was mostly due to driver skill or attitude. Nearly everything on the lists could be traced to an attitude problem.

We are likely guilty of the same faults that we find in others. I believe the attitudes of those involved in software projects can impact the quality of software more than the skills of the team. And when skill is lacking, the right attitude fosters learning. A little patience and a good attitude can go a long way.

I used to teach the SIPDE mnemonic to my driving students to help improve safety on the road. This mnemonic can also be applied to software testing.

  • Scan all that's happening around you -- be aware
  • Identify potential hazards
  • Predict what hazards are most likely to impact you
  • Decide on a safe action to deal with those hazards
  • Execute the action

Another method I taught in the driving classes was the "Smith" system. This too can be applied to software testing.
  1. Aim high - look ahead
  2. Keep your eyes moving - don't get too focused on any single thing
  3. Get the big picture - watch all around
  4. Make sure others see you - communicate
  5. Always leave yourself an out - don't put yourself in a situation that you can't escape

Here's to better driving and better software.


May 21, 2007

Don't forget to think

Posted by Ben Simo

This past week at STAR East, James Bach presented a number of questions, magic tricks, games, and riddles to testers who volunteered to be tested. I feel like I did fairly well on some of them and failed miserably on others. James uses these tests to teach testers to think. I thought I had learned some valuable lessons until I was presented with two riddles from children today. If only I could learn to think more like a child. :) I think that I sometimes let my search for hidden meanings keep me from seeing the obvious.


A riddle from my children:

You are blindfolded, placed at the start of a maze, and told to get to the other end. How do you navigate the maze?





You can feel your way through.










or a better answer is












Take off the blindfold.



It can be easy to blindly feel our way through a challenge -- one obstacle at a time. At times it can be good to isolate problems, but problems taken out of their context can be misleading. We need to look at problems in the context of their environment. Sometimes we just need to stop, take off the blindfold, and look around.

With this in mind, consider the following riddle passed on by a colleague's grandchild:



How do you put a giraffe into a refrigerator?
























You open the door, insert the giraffe, and close the door.



Were you trying to make something simple more complex than it needs to be?





How do you put an elephant into the refrigerator?
























Open the door, remove the giraffe, put in the elephant, and close the door.



Did you forget about the giraffe? You need to consider the consequences of your past actions.




The Lion King is hosting a party. All the animals attend except one. Which animal does not attend?























The elephant that is in the refrigerator.



How's your memory? You just put the elephant in there.




There is a river inhabited by crocodiles. How do you cross it?























You jump in and swim across. All the crocodiles are at the lion's party.




Did you learn from your mistake with the elephant?





Let's not forget to think.


Model-Based Test Engine Benefit #2: Simplified test result analysis

Posted by Ben Simo


Automation is of little value if it does not report useful information that can be quickly reviewed by testers.

Reported results should contain enough information to answer the following questions:

  • What happened?
  • What is the state of the application?
  • How did the application get in that state?
  • What automation code was executed?
  • What automation data/parameters were used?

Some failures reported by automated tests will be errors in the system under test, and others will be errors in the automation model or code. It is important that results point the reader toward the likely source in either case.

I have found logging of the following information to be useful:

Test (test configuration information)
  • Title
  • Start Time
  • Script File(s)
  • Model Files
  • Test Set
  • Severity
  • Environment
  • Object Map
  • Action Table(s)
  • Oracle Table(s)
  • Computer Name
  • Operating System
  • Tester

Actions (controlling the application)
  • Source (where is the action defined?)
  • Title
  • Start Time
  • Action Details
  • Duration
  • State Transition
  • Automation Code
  • Result Details
  • Snapshot (screen capture, saved files, etc)
  • Status (Pass, Fail, Inconclusive)

Oracles (validating the results)
  • Source (where is the oracle defined?)
  • Title
  • State
  • Automation Code
  • Error Code / Description
  • Validation Details
  • Snapshot (screen capture, saved files, etc)
  • Status (Pass, Fail, Inconclusive)

Messages (report useful information not directly connected to an action or oracle)
  • Message
  • Link
  • Snapshot
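
To make these lists a little more concrete, here is one way the action and oracle entries might be represented. The field names below mirror the lists above, but this is only an illustration, not the actual engine:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ActionResult:
        """One executed action, logged with enough context to retrace it."""
        source: str                    # where the action is defined (model or table file)
        title: str
        start_time: datetime
        state_transition: str          # e.g. "LoginPage -> HomePage"
        automation_code: str           # the code (or code reference) that was executed
        duration_seconds: float = 0.0
        result_details: str = ""
        snapshot_path: Optional[str] = None   # screen capture, saved files, etc.
        status: str = "Inconclusive"          # Pass, Fail, or Inconclusive

    @dataclass
    class OracleResult:
        """One validation, logged alongside the state in which it was checked."""
        source: str                    # where the oracle is defined
        title: str
        state: str
        automation_code: str
        error_description: str = ""
        validation_details: str = ""
        snapshot_path: Optional[str] = None
        status: str = "Inconclusive"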

Once you have decided what data to report, it is important to present that data in a manner that is conducive to efficient analysis. Results need to be both comprehensive and summarized (or linked) in ways that aid human testers and toolsmiths in quickly answering the questions listed above. A 10-hour automated test execution may be of little value if it takes another 10 hours to interpret the results.

Standardizing reporting and presentation is the first step to improving results analysis. Do not rely on your tool's built-in reporting. An expensive test automation tool should not be required to view results -- especially the incomplete results reported by many tools. Create a common reporting library that can be used by all your tests, and use it consistently. Users of the reported results will then not need to learn new formats for every project or test. Some suggested output formats are:

  • HTML: Human users like color-coded well-formed results presented in HTML. A little JavaScript can be added to customize the experience.
  • XML: Extensible Markup Language (XML) files can be processed by machines and can be displayed to human users when style sheets are applied.
  • Tab-Delimited / Excel: Simple tab-delimited, CSV, or Excel tables are useful reporting formats that are easily processed by both people and machines.
  • Database: Results written directly to a database can be easily compared to results from previous test executions.

Determine your needs and select the output formats that best meet those needs. If you standardize your reporting through a single small set of reporting functions, you can easily adapt reporting as your needs change.
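
Here is a rough sketch of what such a small reporting library could look like. The class and method names are made up; the point is that every test logs through the same few calls and the output formats live behind them, so a format can be added or changed without touching test code:

    import csv
    import html

    class TestReporter:
        """A single, small reporting interface shared by all tests."""

        def __init__(self, csv_path, html_path):
            self.results = []
            self.csv_path = csv_path
            self.html_path = html_path

        def log(self, kind, title, status, details=""):
            # kind: "Action", "Oracle", or "Message"
            self.results.append({"kind": kind, "title": title,
                                 "status": status, "details": details})

        def write_csv(self):
            with open(self.csv_path, "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=["kind", "title", "status", "details"])
                writer.writeheader()
                writer.writerows(self.results)

        def write_html(self):
            colors = {"Pass": "#cfc", "Fail": "#fcc", "Inconclusive": "#ffc"}
            rows = "".join(
                '<tr style="background:{}"><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>'.format(
                    colors.get(r["status"], "#fff"),
                    html.escape(r["kind"]), html.escape(r["title"]),
                    html.escape(r["status"]), html.escape(r["details"]))
                for r in self.results)
            with open(self.html_path, "w") as f:
                f.write("<table border='1'><tr><th>Kind</th><th>Title</th>"
                        "<th>Status</th><th>Details</th></tr>{}</table>".format(rows))

    # Example usage
    reporter = TestReporter("results.csv", "results.html")
    reporter.log("Action", "Open login page", "Pass")
    reporter.log("Oracle", "Title bar text", "Fail", "expected 'Home', got 'Error'")
    reporter.write_csv()
    reporter.write_html()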


May 19, 2007

STAR East 2007 Conferred

Posted by Ben Simo

I am sitting in the airport waiting to fly home from STAR-East. The conference was great. It was not great due to the many wonderful presentations. It was great because of what happened outside the scheduled activities. I got to confer with colleagues from around the world.

The best part of conferences such as STAR-East is the opportunity to confer with peers and thought leaders in our industry. It is an opportunity to discover that we are experiencing common problems and share possible solutions. It is an opportunity to learn from the best. I often learn more over dinner and in the hallways than I learn in the presentations.

I was amazed at how quickly the conference attendees disappeared once the scheduled activities were completed. I know that many of us computer geeks are introverts. We may not be the most social bunch of people, but I believe a conference without conferring is a wasted opportunity.

See y'all at CAST.


Model-Based Test Engine Benefit #1: Simplified automation creation and maintenance

Posted by Ben Simo

Model-Oriented Design

Procedural automated test scripts may be easy to record or write. However, they are difficult to maintain when applications change, and difficult to adapt to new test ideas. Maintenance is simplified by automating procedure generation in addition to execution. New actions, validations, and data can be added to existing tests. This allows testers to spend more time thinking up new test ideas instead of maintaining procedural scripts.

Simplified GUI Interaction Coding

Most GUI automation tools contain complex vocabularies for controlling objects and retrieving information from those objects. There are usually different methods for interacting with different classes of objects. This requires that toolsmiths learn a class-sensitive vocabulary and be aware of the class as they code tests. There is an easier way: create functions that automatically detect an object's class and apply the appropriate method. This allows for the same command to be used whether you are selecting from a list box or entering text into an edit box. The parameters for the functions can then be specified in tables that are processed by the test generation and execution engine.
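
Here is a rough sketch of what one of those class-detecting functions might look like. The class names and object methods are illustrative, not any particular tool's API; a real implementation would wrap whatever your tool exposes:

    class FakeEditBox:
        """Stand-in for a GUI object as an automation tool might expose it (hypothetical)."""
        class_name = "Edit"
        def set_text(self, value):
            print("typing:", value)

    def set_value(obj, value):
        """Set a value on a GUI object, choosing the right method for its class."""
        cls = obj.class_name.lower()
        if cls in ("edit", "text", "textbox"):
            obj.set_text(value)            # type into an edit box
        elif cls in ("list", "listbox", "combobox"):
            obj.select(value)              # select an item from a list
        elif cls in ("checkbox", "radiobutton"):
            obj.set_checked(bool(value))   # check or uncheck
        elif cls == "button":
            obj.click()                    # buttons ignore the value
        else:
            raise ValueError("No handler for object class: " + obj.class_name)

    # Example usage: the same call works regardless of the control type
    set_value(FakeEditBox(), "ben@example.com")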

Common framework functions for interacting with the applications under test also allows for common solutions to tool bugs and limitations. Workarounds and enhancements can be put in the common framework code instead of being reimplemented for each test script.

Separate Result Validation From Actions

Separating expected results definition from the test action execution simplifies maintenance and supports easy reuse of test oracle code. Validations can be specified at whatever level in the model hierarchy they apply and the test engine automatically applies them to all sub-states.
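
A rough sketch of how that hierarchical application might work, assuming states are named with a dotted hierarchy (the state names and checks below are made up for illustration):

    # Oracles keyed by the level of the model they apply to. An oracle registered
    # for "App" runs in every state; one registered for "App.Checkout" runs in all
    # checkout sub-states.
    oracles = {
        "App":          [lambda ui: ("no error dialog", not ui.get("error_dialog", False))],
        "App.Checkout": [lambda ui: ("total is non-negative", ui.get("total", 0) >= 0)],
    }

    def applicable_oracles(state):
        """Collect oracles from every ancestor level of a dotted state name."""
        parts = state.split(".")
        found = []
        for i in range(1, len(parts) + 1):
            found.extend(oracles.get(".".join(parts[:i]), []))
        return found

    def check_state(state, ui):
        for oracle in applicable_oracles(state):
            title, passed = oracle(ui)
            print(state, "-", title, "-", "Pass" if passed else "Fail")

    # Example usage: both the "App" and "App.Checkout" oracles apply here
    check_state("App.Checkout.Payment", {"error_dialog": False, "total": 42.50})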


May 16, 2007

Faking It

Posted by Ben Simo

Pradeep Soundararajan recently posted a podcast about fake experience on resumes. [listen] This reminded me of an experience I had with a fake resume.

A colleague came to me, dropped a resume in my hand, and asked if I had worked for a company listed on the resume. I quickly scanned the resume and noticed a former employer of mine listed in the experience section. I checked the dates and discovered that they included a period during which I had worked for that company. I then read details that listed projects in which I had been intimately involved. However, I did not recognize the name at the top of the resume. I called several people at that company and could not find anyone who knew this person.

The experience listed on the resume was fake. It was a lie.

Lying on your resume can come back to haunt you -- sometimes even many years down the road. Don't fall into that trap.


This blatant lie was easily caught. Even if I had not worked for the company listed on the resume, whether or not someone worked for a company is usually easy to check. Former employers may be unlikely to give details about what a person did and why they left, but they will generally confirm whether or not someone was an employee.

Faking it may get your foot in the door, but once you are in you still have to perform. The person who submitted this fake resume was interviewed. I was told that it quickly became clear the person did not have the experience they claimed.

Job hunting can be tough. Faking it does not help. It only makes it tougher. Tell the truth.

The truth may hurt for a little while but a lie hurts forever.


May 15, 2007

Automating outside the box

Posted by Ben Simo

test
  1. any standardized procedure for measuring sensitivity or memory or intelligence or aptitude or personality etc • the test was standardized on a large sample of students
  2. the act of testing something
  3. the act of undergoing testing • he survived the great test of battle
  4. trying something to find out about it

automation
  1. the act of implementing the control of equipment with advanced technology; usually involving electronic hardware • automation replaces human workers by machines
  2. the condition of being automatically operated or controlled • automation increases productivity
  3. equipment used to achieve automatic control or operation • this factory floor is a showcase for automation and robotic equipment

What is test automation?


I just read some marketing literature from some leading test automation tool vendors. According to one of the vendors, their tool lets novice testers create robust and easily maintainable tests that mimic real-life use of an application with a few mouse clicks; and it will troubleshoot errors without human intervention. (I'd give the actual text if it didn't give away the vendor. You may be guessing correctly as you read this.) If this is test automation, I want some. This is what the tool vendors are telling the executives who authorize spending large sums of money. And when such claims are believed, a very high standard is set for test automation. Most testers who implement these tools quickly learn that the automation nirvana promised by tool traffickers is not available at any price.

Wikipedia currently has a decent definition for test automation -- if you read the whole thing. It starts out with a classic definition of test automation...

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, ... Commonly, test automation involves automating a manual process ...

There are many processes in testing that are good candidates for some form of automation if we do not try to remove the cognitive aspects of testing. Automation that retraces steps that have already been executed manually and reports "pass" or "fail" is unlikely to find bugs or help testers improve their understanding of a software system under test. The Wikipedia definition for test automation includes the following important aspect.

Another important aspect of test automation is the idea of partial test automation, or automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.

I believe that partial test automation is not just an important aspect. It is essential. It is not possible to replace all aspects of a thinking human being with a machine. Test automation that helps automate testing tasks is likely to provide greater benefit than attempts at complete automation.

Instead of trying to create end-to-end test execution automation, think of how a doctor uses medical tests to help diagnose a patient's problems. No blood test or x-ray can diagnose or heal a patient. Doctors use the information reported by these tests in making a cognitive diagnosis. Look for ways that automated execution can help gather data that is useful in diagnosing software.

Test execution automation can be very useful -- but it may not be the best place to start.

Test automation can also be useful in generating test data and test cases. I believe the potential for automated test generation is often overlooked. Wikipedia mentions test generation automation yet implies that it is more academic than practical.

One way to generate test cases automatically is model-based testing where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so.

Pairwise testing has become a fairly common implementation of test generation automation. Combinations generated by a pairwise or other orthogonal-array data generation tool can even be used to create tests for manual execution.
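
Generating a pairwise set is a job for a dedicated tool (PICT, AllPairs, and the like), but checking whether a candidate test set covers every pair is simple enough to sketch. The parameters and tests below are made up for illustration:

    from itertools import combinations, product

    # Three parameters, each with a few values (hypothetical)
    parameters = {
        "browser": ["IE", "Firefox"],
        "os":      ["Windows", "Linux"],
        "locale":  ["en", "de", "fr"],
    }
    names = list(parameters)

    # A hand-picked candidate test set: one value per parameter in each test
    tests = [
        ("IE",      "Windows", "en"),
        ("IE",      "Linux",   "de"),
        ("IE",      "Windows", "fr"),
        ("Firefox", "Windows", "de"),
        ("Firefox", "Linux",   "en"),
        ("Firefox", "Linux",   "fr"),
    ]

    # Every pair of parameter values that must appear together in at least one test
    required = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for value_a, value_b in product(parameters[a], parameters[b]):
            required.add((i, value_a, j, value_b))

    # The pairs actually covered by the candidate tests
    covered = set()
    for test in tests:
        for i, j in combinations(range(len(names)), 2):
            covered.add((i, test[i], j, test[j]))

    print(len(tests), "tests cover", len(required & covered), "of", len(required), "pairs")
    for i, value_a, j, value_b in sorted(required - covered):
        print("missing pair:", names[i], "=", value_a, "and", names[j], "=", value_b)

Six tests cover all sixteen required pairs here, compared to twelve tests for the full set of combinations.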

I ask you to challenge your assumptions about test automation. Think beyond regression testing. Think beyond test execution. Look for ways that automation can help make you more efficient and put your automation efforts there first.

If you are a toolsmith, talk to and watch manual testers work. Look for ways that tools can help them do their work. You may find that the most beneficial automation has nothing to do with your initial assumptions about test automation.


May 9, 2007

Distracted by the machinery

Posted by Ben Simo

Let's not allow the machinery of testing to distract us from the craft of testing.


Over ten years ago, James Bach published the first version of Test Automation Snake Oil.

In this article, James identified eight "reckless assumptions" of the classic arguments for test automation. If we aren't careful, it can be easy to start believing statements based on these assumptions.

    1. Testing is a "sequence of actions."
    2. Testing means repeating the same actions over and over.
    3. We can automate testing actions.
    4. An automated test is faster, because it needs no human intervention.
    5. Automation reduces human error.
    6. We can quantify the costs and benefits of manual vs. automated testing.
    7. Automation will lead to "significant labor cost savings."
    8. Automation will not harm the test project.

Have you made any of these assumptions? Read the article for details. Then take a look at Sam Burgiss' great review.


May 7, 2007

Falsifiability Testing

Posted by Ben Simo

The best experiments deduce an effect from the hypothesis and then isolate it in the very context where it may be disproved.
- Michael Kaplan & Ellen Kaplan,

Chances Are: Adventures In Probability

In Focusing on Falsifiability, Stuart Thompson writes the following about a tester-friend's statement that testers can add value from the start of a project if they understand the project and its direction.

"With a clear understanding of what the software was actually trying to do, his team was able to provide useful feedback to the developers even within the first couple of release cycles."

When testers know the problem that software is trying to solve, they can focus first on the things that matter. They can test the assumptions. They can identify contexts in which the assumptions and the implementation can be disproved. Project managers and developers are usually focused on the solution. Testers can help identify the new problems created by proposed solutions and provide information to help the team determine whether the new problems are less troublesome than the ones being solved.

EACH SOLUTION IS THE SOURCE OF THE NEXT PROBLEM
We never get rid of problems. Problems, solutions, and new problems weave an endless chain. The best we can hope for is that the problems we substitute are less troublesome than the ones we "solve".


As Stuart points out, testers often have difficulty proving their worth when they prevent bad things from happening. It is usually in hindsight that we see where bypassed testing could have helped.

An example came to my mind as I read Stuart's post.

A change was made to an application. The testers knew about the problem that forced an update to the software. However, the testers were not told the details of the proposed solution. Shortly before the planned release, the testers discovered that the solution created new problems that were worse than the problem it solved. In addition to creating new problems, the solution only worked in one of many likely contexts.

The developers told the testers that the solution was good because the "business" people had approved it. The "business" people trusted that the developers knew how to implement the business need. It appears that no one specifically tested the assumptions. No one tried to put the solution in contexts where it might not work. Early information sharing with testers likely would have exposed the new problems created by the solution before time was spent coding, testing, and then re-engineering it.

It is ironic -- although common -- that a decision made in haste due to time pressure delayed the update to the software. Time spent on QA and testing early is usually going to save time and money.

Had someone focused on the falsifiability of the solution, the problems caused by the solution would have been prevented before a single line of code was written.

I have heard testers complain that no one invites them to be involved early in the process. I too have joined in that chorus. A colleague recently reminded a group of testers that sometimes we aren't included early because we don't ask. Yes, sometimes getting involved early is as easy as asking.

Ask to be involved early. Seek out ways to focus on falsifiability instead of nit-picking incomplete implementations, and developers are likely to invite you back.


May 5, 2007

Coffee Break Machine Testing

Posted by Ben Simo

It's good to have some idea as to what something does before you start testing it -- or eating it.

Cookie Monster does a little testing in a 1967 IBM training film.


Thanks to UtterlyGeek.

What do you call this kind of testing?


Not Gonna Bow

Posted by Ben Simo

Individuals and interactions
over processes and tools

Working software
over comprehensive documentation

Customer collaboration
over contract negotiation

Responding to change
over following a plan

- Agile Manifesto


Nearly 400 years ago, Francis Bacon challenged the status quo in scientific thought in “The New Organon”. James Bach recently pointed out some interesting quotes from this work that apply to software testers. I agree.

Bacon argued that placing our preconceived beliefs over what we observe causes great harm. He went so far as to describe these harmful preconceived notions as “idols”. Bacon put these idols into four categories:

  • Idols of the Tribe: Errors common to mankind.

  • Idols of the Cave: Errors specific to each individual’s education and experience.

  • Idols of the Market Place: Errors formed through association with others — often due to misunderstanding others.

  • Idols of the Theater: Errors formed from dogma (institutionalized doctrine) and flawed demonstrations.


All of these exist in software testing. As testers, we should be questioning these “idols”, not worshiping them. Sometimes questioning them may prove them right.

Bacon did not ask anyone to abandon their beliefs without cause. Instead, he asked that we not make them idols capable of leading us to ignore what would be obvious if we weren't looking through their distorted mirror.

A modern day simplification of Bacon’s arguments may be the Agile Manifesto. We should not let our idols of process, documentation, contracts, and plans prevent us from accomplishing the desired goal. Process, documentation, contracts, and plans are only good in as much as they help. They should not prevent us from seeking improvement.

In some ways, I believe that the promotion of testing folklore is the result of an industry-wide desire to show that we are mature -- as mature as the engineering of physical products. I believe that eagerness to demonstrate maturity helps lead to the implementation of bad processes and certifications. Ironically, enforced process (see the bottom of the FSOP cycle) works best for the immature and gives the impression that anyone who can follow the process can test software.

Don't get me wrong. Process and documentation are good things that help even the smartest people when appropriately applied.

"The only thing that interferes with my learning is my
education." - Albert Einstein


We need to seek continual improvement. It is sad that process and certification often become idols that overshadow the real goals.


May 4, 2007

Heuristics in Test Automation

Posted by Ben Simo

"A Requirement is a quality or condition that matters to someone who matters."
- Cem Kaner, James Bach, and Bret Pettichord
Lessons Learned In Software Testing

Automated tests are usually coded to perform validations against written requirements. Computers are deterministic in that they need specific instructions regarding what to test, what counts as a passed test, and what counts as a failed test. This is one of the weaknesses of test automation. Many requirements exist beyond the hard written requirements. Test automation can be a great tool for measuring hard requirements. For example, automation can be great for validating mathematical calculations.

Test automation can also be a great tool for testing the fuzzy requirements through the use of heuristics. In addition to coding validations (oracles) for hard facts, automation can be used to report things that require human attention. Automation can report information to help direct the attention of human testers.

"Only weak bugs have logic to them ... Subtle bugs have no definable pattern -- they are wild cards."
- Boris Beizer
Software Testing Techniques

A home-grown (by someone else -- later enhanced by me) test automation tool I used many years ago was built to report "pass", "fail", or "inconclusive" for each test it performed. Its builders had recognized that in many cases human judgment and/or investigation was required to determine whether something really passed or failed. In some cases, it was just not economical to code a validation for something that humans process better than machines. Therefore, instead of trying to make automation do it all, create automation that does what computers do best and let thinking human testers pick up where the computers stop.

I once automated data validation for a large pricing database. It was suspected that there were numerous errors in this database, and finding and fixing possible errors in millions of records was a daunting task. Instead of creating complex calculations to try to completely automate the validation, I coded simple heuristic rules. When these rules failed, the "failure" was reported to human testers and data editors for investigation. These rules were things like:

  • Suggested retail price is greater than wholesale price
  • Current price is within 10% of the previous price
  • Generic equivalent price is less than the name-brand price

Were these test oracles always true? No. In some cases it was correct for the data to fail the above tests. However, the automation helped direct the attention of testers. These heuristic validations also helped expose unexpected patterns and led to finding bugs in the software that processed and formatted the data.
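
A rough sketch of rules like these, using made-up field names; a triggered rule is reported for investigation rather than as a hard failure:

    def check_pricing_record(record):
        """Apply simple heuristic rules to one pricing record. A triggered rule is not
        necessarily a bug -- it is something a human should look at."""
        findings = []

        if record["suggested_retail"] <= record["wholesale"]:
            findings.append("suggested retail price is not greater than wholesale price")

        if record["previous_price"]:
            change = abs(record["current_price"] - record["previous_price"]) / record["previous_price"]
            if change > 0.10:
                findings.append("current price is not within 10% of the previous price")

        if record["generic_price"] is not None and record["generic_price"] >= record["brand_price"]:
            findings.append("generic equivalent price is not less than the name-brand price")

        return findings

    # Example usage
    record = {"suggested_retail": 9.99, "wholesale": 12.50,
              "current_price": 9.99, "previous_price": 7.25,
              "generic_price": None, "brand_price": 9.99}
    for finding in check_pricing_record(record):
        print("investigate:", finding)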

Instead of creating complex test automation tools, Harry Robinson suggests that we build "The Simplest Thing That Could Possibly Find A Bug". Sometimes this means that we code heuristic validations instead of complex validations that report results with absolute certainty. Let the computers report things that a human tester should investigate. Instead of "inconclusive", Harry uses the term "suspicious". I like that.

The next time you automate testing, in addition to thinking of things that computers can report as "passed" or "failed", think of things that they might be able to report as "suspicious".


May 2, 2007

Hey Dad, when I grow up, I also want to be square.

Posted by Ben Simo

Erkan Yilmaz, a fellow tester and blogger from Germany, recently pointed out that the slogans in my post "Slogans are models" may not transmit their message across languages and cultures. He attempted to guess at what some of the slogans meant without the context of American culture and advertising. These slogans, which most Americans will instantly understand, didn't work very well out of their context.

We both saw this as an example of how recipients of information do not always have the full context in which the information originated. As testers, we need to admit when we don't understand and seek the answers (and context) from those that know. Sometimes we may need to bring subject matter experts into the conversation to fully understand what we are testing. If you don't know, ask questions. If you think you know, ask questions. You are bound to learn something.

I recently sat in some presentations by people from the "business" (as in not IT) side of some projects in which I am involved. I learned a great deal about how customers use our products and the business' vision for the future of the products. This information will help me better test the products. It is good to know more about the context in which the products I test are used.

Know thy user, for he is not thee.
- David S. Platt


After our exchange about the American slogans, Erkan provided the following list of slogans from German-speaking countries for interpretation by those of us that live outside that context.

1. “Hey Dad, when I grow up, I also want to be square.”
2. “We wake up earlier.”
3. “We can do everything but speak Standard German.”
4. “With the second eye you see better.”
5. “Firm as a rock in the surge.”
6. “We demand and bring forward personalities.”
7. “Try it in a gentle way.”
8. “It was never so valuable as in these times.”
9. “If it makes you beautiful…”
10. “Well, is today already Christmas?”



What do you think these mean?

See Erkan's original post of this list here.
And after you have tried to interpret the list, look here for their real meanings.

Viel Spaß!

Ben
