vijay_testing
Newbie


Reged: 08/10/07
Posts: 5
Explorative Testing
      #406151 - 08/10/07 09:01 PM

Hi all,

I'm new to this forum. Could anyone please tell me: "what is explorative testing?"


Joe Strazzere
Moderator


Reged: 05/15/00
Posts: 12344
Loc: Massachusetts, USA
Re: Explorative Testing [Re: vijay_testing]
      #406164 - 08/11/07 03:41 AM

Most likely you mean "Exploratory Testing".

Search here and on Google, and you'll find plenty.

--------------------
- Joe
Visit AllThingsQuality.com to learn more about quality, testing, and QA!

I speak only for me. I do not speak for my employer, nor for anyone else.


rsherry
Junior Member


Reged: 05/21/02
Posts: 6
Re: Explorative Testing [Re: Joe Strazzere]
      #406622 - 08/14/07 03:21 AM

www.satisfice.com is a good resource for exploratory and rapid software testing.

--------------------
Software Testing Blog
Software Testing Jobs
Software Testing Club


Steve_Bartkowski
Newbie


Reged: 08/14/07
Posts: 7
Loc: York, PA, USA
Re: Explorative Testing [Re: rsherry]
      #406792 - 08/14/07 11:25 AM

In addition to www.satisfice.com, check out www.quardev.com. Click on the 'testing' tab and on the bottom right, click on 'more whitepapers and articles' to find an article titled 'Dynamics of Exploratory Testing'.

This has some excellent information.


Raj Majoka
Junior Member


Reged: 07/26/05
Posts: 3
Loc: Noida (India)
Re: Explorative Testing [Re: Steve_Bartkowski]
      #408697 - 08/21/07 04:32 AM

It is similar to ad hoc testing. In this testing we don't have test cases or requirement docs, so we just need to explore all the functionality with our own eyes and treat the application as an unknown.

Thanks
Raj

--------------------
Raj Majoka


rajeshy
Newbie


Reged: 08/21/07
Posts: 4
Re: Explorative Testing [Re: Raj Majoka]
      #409088 - 08/22/07 02:55 AM

Explorative testing: in this type of testing we don't have any test case document or requirement specification document. We have to explore the functionality and test it.


PVB1979
Junior Member


Reged: 06/13/06
Posts: 154
Loc: mumbai,India
Re: Explorative Testing [Re: rajeshy]
      #409090 - 08/22/07 03:04 AM

Exploratory testing is simultaneous learning, test design, and test execution.

--------------------
If you win you need not explain...But if you lose you should not be there to explain.
With Regards,
Prashant


JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: PVB1979]
      #409199 - 08/22/07 06:27 AM

Quote:

Prashant: Exploratory testing is simultaneous learning, test design, and test execution.


Prashant, would you please explain this? I once thought I understood the concept of exploratory testing. I no longer do. Please help!

cemkaner
Member


Reged: 04/17/01
Posts: 47
Loc: Melbourne, FL USA
Re: Explorative Testing [Re: JakeBrake]
      #409457 - 08/22/07 11:04 PM

I use Prashant's definition as well.

Check these sources: http://www.satisfice.com/bbst/slides/ETat23.pdf
http://video.google.com/videoplay?docid=-6217339535521340225&q="cem+kaner"+exploratory

Cem Kaner
kaner@kaner.com

--------------------
Cem Kaner, Professor of Computer Sciences, Florida Tech. kaner@kaner.com


PVB1979
Junior Member


Reged: 06/13/06
Posts: 154
Loc: mumbai,India
Re: Explorative Testing [Re: cemkaner]
      #409462 - 08/22/07 11:33 PM

JakeBrake
Please check this source: http://www.satisfice.com/articles/et-article.pdf

--------------------
If you win you need not explain...But if you lose you should not be there to explain.
With Regards,
Prashant


PVB1979
Junior Member


Reged: 06/13/06
Posts: 154
Loc: mumbai,India
Re: Explorative Testing [Re: PVB1979]
      #409464 - 08/22/07 11:37 PM

Yesterday I asked my colleague about exploratory testing.
He told me it is nothing but ad hoc testing.
Is that correct...?

--------------------
If you win you need not explain...But if you lose you should not be there to explain.
With Regards,
Prashant


saju1982
Member


Reged: 05/25/06
Posts: 83
Loc: mumbai
Re: Explorative Testing [Re: PVB1979]
      #409479 - 08/23/07 12:11 AM

If exploratory testing is simultaneous learning, test design, and test execution, then my question is: when are we going to start the test design? Is it after we do the explorative testing? In that case we would be starting off the testing process quite late.
I think the reason testers are so important in the software development process is their capability to interpret most of the scenarios in the system.
Explorative testing, in my words, would be exploring your system to learn the various unknown scenarios that aren't handled in the test case design.

--------------------
Regards
Saju Thomas


bru
Super Member


Reged: 05/08/03
Posts: 1383
Loc: Austria
Re: Explorative Testing [Re: saju1982]
      #409520 - 08/23/07 02:33 AM

saju1982, exploratory testing should not be your only way of testing a product. So you'll still have to develop your test design as early as possible (and as early as is useful).

But (as far as I understand exploratory testing) it should be an "enhanced" (or more systematic) form of ad-hoc testing, which means that the experience you gain while testing your product in an exploratory way should go back into test design (e.g. by creating some new test cases you'd never thought of before).

Regards,
Juergen


saju1982
Member


Reged: 05/25/06
Posts: 83
Loc: mumbai
Re: Explorative Testing [Re: bru]
      #409526 - 08/23/07 03:14 AM

Bru, that's exactly what I meant.

--------------------
Regards
Saju Thomas


jamesbach
Moderator


Reged: 03/01/01
Posts: 79
Loc: Front Royal, VA
Re: Explorative Testing [Re: Raj Majoka]
      #409891 - 08/23/07 09:38 PM

It is important to distinguish between exploratory testing, and SKILLED exploratory testing.

Exploratory testing is very simple in concept: all it means is that your learning about the product, your test design, and your test execution, are all part of the SAME process. They are not divided into independent activities.

Doing this well is a matter that requires skill. It is a systematic, potentially rigorous process. In fact, doing ANY testing well requires skill. But when most people talk about "ad hoc" testing, they are not talking about excellence, they are talking about sloppiness. So, to say ET is merely ad hoc testing is misleading. The way *I* do ET is probably not the way most testers do ad hoc testing.

Those of us who talk about exploratory testing, and study it, and teach it, and further the state of its art, are interested in doing ET excellently.

Quote:

If exploratory testing is simultaneous learning, test design, and test execution, then my question is: when are we going to start the test design? Is it after we do the explorative testing? In that case we would be starting off the testing process quite late.
I think the reason testers are so important in the software development process is their capability to interpret most of the scenarios in the system.
Explorative testing, in my words, would be exploring your system to learn the various unknown scenarios that aren't handled in the test case design.

Dude, you need to understand what "simultaneous" means. Test design begins right away. It doesn't WAIT, because exploratory testing IS test design.

I sit down and I start testing. That's test execution. That's learning. That's test design. It's all three. All at once. Your test ideas evolve. You let them evolve. What's the alternative to this? The alternative is that you refuse to let these three activities influence each other. If that's your attitude, it will be pretty hard for you to find more bugs than me!

The opposite of exploratory testing is scripted testing. However, it is a simple matter to blend exploratory testing and scripted testing. In fact, mostly these approaches are blended.


saju1982
Member


Reged: 05/25/06
Posts: 83
Loc: mumbai
Re: Explorative Testing [Re: jamesbach]
      #409928 - 08/24/07 12:27 AM

Thanks a lot, James.
I think I've got the difference between ad hoc and exploratory testing from your explanation.
But one more doubt: at which of the following stages do you think this approach would be more effective?
1) At the requirement stage, where I create a paper prototype or a prototype of the system,
OR
2) After the system is built and is ready for testing.

--------------------
Regards
Saju Thomas


TargetTesting
Member


Reged: 11/30/02
Posts: 285
Loc: England
Re: Explorative Testing [Re: saju1982]
      #409963 - 08/24/07 02:38 AM

I tend to blend scripted and exploratory testing. Often I use ET on new builds (smoke tests), on rushed jobs (e.g. you have one day to test a build before it is released), or on new/changed features. Generally there will always be a charter/focus for the ET sessions.

Testing also contains scripted tests to ensure a measurable level of coverage. Interestingly, I've been experimenting with using only high-level test case descriptions and checklists rather than detailed scripted tests, so we are kind of using ET on our scripted tests.
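
To make that concrete (a purely illustrative sketch - the feature and the checklist items are invented, not from a real project), one of these "scripted" tests is now little more than a charter plus a high-level checklist the tester explores against:

    # Illustrative sketch only: a one-line charter plus a high-level checklist,
    # standing in for a step-by-step scripted test. All names are invented.
    CHARTER = "Explore the new CSV import against malformed input"

    CHECKLIST = [
        "empty file / header row only",
        "wrong delimiter, mixed line endings",
        "very wide rows (1000+ columns)",
        "non-ASCII and control characters in fields",
        "duplicate column names",
    ]

    if __name__ == "__main__":
        print(CHARTER)
        for item in CHECKLIST:
            print(f"[ ] {item}")  # tester records notes/bugs as each item is explored

The tester decides the actual steps at the keyboard; the checklist just keeps coverage visible and measurable.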

I find that companies are more comfortable with the idea of having scripted tests, since there is a feeling that the testing is more measurable and controllable and that there can be less reliance on having skilled testers (not a good idea in my book). However, some areas of testing, such as application security or penetration testing, rely less on scripted testing and more on an exploratory approach.

One last point: while I find scripted tests are good for building confidence that the product has been tested and works as expected under certain conditions, I find some really interesting bugs when doing ET. Since you are designing the tests as you go, and in response to how the system reacts, you build tests that are not always apparent when sitting looking at a spec.

Hope my comments are useful.

Regards
Bill

--------------------
Many thanks
Regards
Bill Matthews
Target Testing Ltd.
http://www.TargetTesting.co.uk/


cemkaner
Member


Reged: 04/17/01
Posts: 47
Loc: Melbourne, FL USA
Re: Explorative Testing [Re: saju1982]
      #410258 - 08/25/07 08:34 AM

Exploratory testing is a comprehensive approach to testing, as is scripted testing. You can run an entire testing project, competently, in a scripted way, an exploratory way or in some mix of them.

From an exploratory viewpoint, which is how I almost always work, I start doing exploratory testing at the first moment that I start thinking about the product. I start learning about the product, its market, its environment, its risks--and I start thinking, right away, how would I test that? "How would I test that?" is a design question.

I develop a strategy--a framework in which I select my designs--to suit my broader information objectives--what information do I need, what information do my clients need--this time, for this project? Developing that strategy requires work over time and that work involves a lot of learning.

At the start of the project, I know less about the product and its test-related issues than I will know at any time in the future. So I cannot announce my testing strategy at the start (unless I want to be wrong). I cannot script my tests yet, even if there are detailed specifications, because there is too much to test and I don't yet know what is most important. Part of how I learn is to try to test the product, and see what its vulnerabilities are, see what concerns me about the environment, etc.

James and I often use the phrase, "parallel, interacting activities" to describe learning, design and execution rather than simultaneous. Some people find this clearer. Again, the idea is that each activity is assisted by the other, from the start of the project to the end.

I have never personally seen a project well tested if it was primarily tested by scripted, preplanned tests. But colleagues have told me that they have seen this and I believe them. I have personally seen projects that were well tested that relied on exploratory testing. Therefore, I reject the notion that exploratory testing is an adjunct, a secondary activity to scripted testing.

- Cem Kaner
again, if you want more, see my video for an introduction to exploratory testing (see my links above).

--------------------
Cem Kaner, Professor of Computer Sciences, Florida Tech. kaner@kaner.com


JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: cemkaner]
      #410312 - 08/26/07 08:36 AM

It appears we have definitions which are as similar as they are, in subtle ways, different. Given the various definitions for testing terms, I am not surprised. And of course the type of project, the level of formality required, and/or contractual requirements drive the concepts in this respect.

Cem, my responses to your points follow; your words are bulleted:

  • At the start of the project, I know less about the product and its test-related issues than I will know at any time in the future.

I submit that it depends upon your exposure to both 1) similar products, systems, or applications, and 2) common challenges within a specific application domain.

  • So I cannot announce my testing strategy at the start (unless I want to be wrong).

In my experience, it is best to get a strategy out for informal and formal review as soon as possible. In the projects I refer to, a strategy has been contractually required, and required relatively early. I'm a risk-taker (grin) and rely on teammates to point out erroneous thinking.

  • I cannot script my tests yet, even if there are detailed specifications, because there is too much to test and I don't yet know what is most important.

I believe a form of informal scripting can occur at this point, even if only as notes/thoughts in an engineering notebook. I have used this method previously to script scenarios for review, per my response just above. I am thinking that if one were familiar with the domain, then one would have an understanding of what is most important. Reviews and inspections would serve to adjust rankings/priorities as necessary. Additionally, if there is "too much to test", then the project manager and/or Program Office should be made aware as soon as possible.

  • Part of how I learn is to try to test the product, and see what its vulnerabilities are, see what concerns me about the environment, etc.

I refer to this as analysis and rely upon item 2) above. Should I not have "pre-knowledge" then I will test the waters as you have specified.

Given the above, I tend to think of exploratory testing as nothing more than analysis and design. I do not use the terms "analysis" and "design" to frame or drive a strategy. I use the terms to describe what I have done after the fact. I will often refer to this as the "exploratory phase."



DSquared
Moderator


Reged: 04/02/03
Posts: 4546
Loc: Wisconsin, USA
Re: Explorative Testing [Re: PVB1979]
      #410785 - 08/28/07 04:35 AM

Raj wrote:
Quote:

It is similar to ad hoc testing. In this testing we don't have test cases or requirement docs, so we just need to explore all the functionality with our own eyes and treat the application as an unknown.

Prashant wrote:
Quote:

Yesterday I asked my colleague about exploratory testing.
He told me it is nothing but ad hoc testing.
Is that correct...?

James wrote:
Quote:

But when most people talk about "ad hoc" testing, they are not talking about excellence, they are talking about sloppiness.

I'll add my opinion. Please note that this is only an opinion, and your mileage may vary.

If you look up the definition of "ad hoc" as an adjective, you will find something along the lines of "concerned or dealing with a specific subject, purpose, or end". Ad hoc does NOT mean "unplanned", "unstructured", "exploratory" or "sloppy". Therefore, trying to equate exploratory testing to ad hoc testing is incorrect.

As James says, most people equate "ad hoc" with "sloppy". I will posit that those people don't fully understand testing, or the definition of ad hoc, or the proper use of an ad hoc test. A real ad hoc test is anything but exploratory. They are really on the opposite ends of the spectrum. An ad hoc test is carried out with a very specific end goal in mind. Exploratory doesn't have to have this specific end goal in mind.

In the real world, exploratory would be along the lines of sitting down to test an application and letting the application, so to speak, tell you where to go next. You go where the thread takes you.

On the other hand, perhaps you find a specific issue but don't have time to fully explore it right now. You might sit down and plan (or not) a purpose-built test to explore that aspect of the application. You will use the test once and probably throw it away afterwards. THAT is ad hoc. Not really exploratory, but you CAN let an ad hoc test lead you into more exploratory testing.
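
As a purely hypothetical illustration (parse_amount here is a stand-in I invented, not code from any real project), such a purpose-built, single-use test might be nothing more than a few lines you run once against the suspect behavior and then delete:

    # Hypothetical single-use probe: parse_amount stands in for whatever routine
    # raised suspicion while exploring. Run it once, check the output, throw it away.
    def parse_amount(text: str) -> float:
        return float(text.replace(",", ""))

    probes = ["1,000", "1,0,0,0", ",", "-0", "1e308", " 42 ", ""]
    for p in probes:
        try:
            print(repr(p), "->", parse_amount(p))
        except Exception as e:
            print(repr(p), "->", type(e).__name__, e)

The very specific end goal (does this one routine mishandle odd inputs?) is what makes it ad hoc rather than exploratory.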

The difference is that there is a very specific end purpose for an ad hoc test, while the end purpose for exploratory may not be as well defined.

In my experience, most people I have met who use the terms "exploratory" or "ad hoc" testing are really using that as shorthand to say "I don't want to write a script" or "I don't want to have to plan before I start testing". Those would be incorrect applications of the terms.

I know that this is splitting hairs regarding definitions, but I come from a business analysis background, where words, definitions and understanding of these are everything.


Walen
Super Member


Reged: 05/09/01
Posts: 1254
Re: Explorative Testing [Re: DSquared]
      #410809 - 08/28/07 05:09 AM

I think you have the right of it, D^2. I've said in other threads on exploratory testing that it takes a great deal of discipline to do well - far more than many people have, I suspect. Certainly more than those who consider "ad hoc" (and really meaning sloppy) and exploratory testing to be interchangeable terms.

If a person or team does not have good testing discipline already, exploratory testing will probably not bring any notable benefit. In that case, they will see no difference at all between exploratory testing and slop.

Jake's notes are interesting to me. In my own experience, I have not really been in a position to use exploratory testing without knowing something about the purpose of the system or the project. Having said that, a "strategy" as a high-level statement of intent may be delivered to certain management types as a simple "This is the approach I intend to use, and will alter as I gain more understanding."

I don't see the differences as terribly great in the long-run.

--------------------
P. Walen

My Blog: http://rhythmoftesting.blogspot.com/


martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: Walen]
      #410831 - 08/28/07 05:56 AM

Unfortunately, whilst I do perform exploratory testing on an almost daily basis and can find value in it as a small PART of my overall testing approach, I find that people who have been "taught" exploratory testing in a course format have almost unanimously come back into the office environments I have worked in chanting the mantra that "documentation must die". Seriously, they seem to come back absolutely convinced that exploratory testing means no documentation - or at most a one-liner saying something pointless like 'Test security'. One manager actually chastised me when he got back from a course because he found me writing test cases.

The fundamental flaw in the argument for exploratory testing, as represented in recent conversations, is that it is supposedly "Agile" and that in Agile environments it is the appropriate method because there is no time to plan tests and execute to those plans. As has been mentioned elsewhere, exploratory testing done badly results in what I like to call "Secret Analysis": an analysis effort that leaves no legacy for re-use purposes. The end result is that the person who did the original testing can do it again, but anyone else who tries will need to go and learn all the same things again.

I guess what I am trying to say is that YES, exploratory testing can be a useful and possibly invaluable tool for producing a quality outcome, BUT it is TOTALLY dependent upon the mindset of the tester. I don't believe that it is a technique that can be effectively taken on by many testers.

I also believe that the more testers read stuff that has been regurgitated by people who have read texts or done courses on the subject, the worse the problem will become. It is a topic for experienced testers, not uni graduates, and this isn't explicitly pointed out (because even many of the 'teachers' don't understand that).

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


jamesbach
Moderator


Reged: 03/01/01
Posts: 79
Loc: Front Royal, VA
Re: Explorative Testing [Re: Walen]
      #411438 - 08/29/07 10:46 AM

To do any testing well requires discipline and skill. Exploratory testing may seem like it requires more skill, but that's mainly because it mixes several activities together, giving the illusion that there's more to it than scripted testing.

I think there really is something to ET that you don't find with ST, but the opposite is also the case, so I wouldn't say that ET requires more skill or discipline as a general matter.

Another thing that contributes to the confusion is that scripted testing and exploratory testing are usually found mixed together. You who continue to talk as if ET and ST are completely separate things are perpetuating the confusion. When most of the people on this forum use the term "exploratory testing", they mean what in my community (the community that coined the term, popularized it, and studies it) we would call "freestyle exploratory testing". The completely freestyle approach is relatively rare, whereas a more structured form of ET is extremely prevalent, even in companies that, in their confusion, claim that they don't do ET.

--------------------
James Bach, Satisfice, Inc., james@satisfice.com, www.satisfice.com
Author of Lessons Learned in Software Testing: A Context-Driven Approach


jamesbach
Moderator


Reged: 03/01/01
Posts: 79
Loc: Front Royal, VA
Re: Explorative Testing [Re: martinh]
      #411440 - 08/29/07 10:59 AM

Hi Martin,

You do analysis every day, all the time, that does not leave a legacy, except in your own head. Why do you do that? You do that because you have no choice but to learn from what happens. I suspect that you, like so many other technical people, have not really sat down and watched yourself test. I urge you to do that. You'll discover that you have many many more thoughts than you could possibly document, or could benefit by documenting.

Our craft, I hope, will someday get out of kindergarten and realize that human minds are wonderful and powerful. Don't fear them, but learn to use them.

Documentation can be very important. Bad documentation, however, is the norm.

I don't know which people you have experienced who have been trained in ET (there are unfortunately some bad classes out there), but if they came from my classes, and they say that documentation is bad, then they could not have been paying much attention. One of the things I emphasize is the importance of notetaking. When I coach testers, they end up taking far more notes while testing than they ever have before.

I think most documentation in most test projects is a complete waste. You could burn most of it and no one would notice. The legacy argument is generally specious, considering that you know you won't read that stuff, and you know if you do you will find it full of fluff and misinformation.

Any skilled tester should know the difference between documentation that helps, and documentation that just wastes our valuable time. I advocate a concise approach to documentation so that the focus remains on the minds of the testers (and on how much more efficient it is for them to talk to each other than to write long documents to each other) but the golden rule is this: learn what problems documents can solve, and learn how to use them to solve those problems.

This is not an anti-documentation attitude. It's an anti-waste attitude.

The next time you hear an exploratory tester say "no doc", point them to session-based test management. This is a highly structured approach to documenting ET that I created in 2001. Google it.
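
To give a rough idea of what one session's record boils down to (this is a simplified, illustrative shape, not the actual session sheet template - see the SBTM material for that):

    # Simplified, illustrative model of a session record in session-based test
    # management. Field names approximate the general shape of a session sheet;
    # they are not the canonical template.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Session:
        charter: str                # the mission for this time-boxed session
        tester: str
        start: str                  # e.g. "8/30/07 2:00pm"
        duration_min: int           # sessions typically run 60-120 minutes
        pct_design_execution: int   # % of time designing and executing tests
        pct_bug_investigation: int  # % of time investigating and reporting bugs
        pct_setup: int              # % of time setting up the session
        notes: List[str] = field(default_factory=list)
        bugs: List[str] = field(default_factory=list)
        issues: List[str] = field(default_factory=list)

    review_session = Session(
        charter="Explore report export against large data sets",
        tester="A. Tester",
        start="8/30/07 2:00pm",
        duration_min=90,
        pct_design_execution=70,
        pct_bug_investigation=20,
        pct_setup=10,
    )

The charter and the task breakdown are what make the exploration accountable without scripting the tester's every step.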

--------------------
James Bach, Satisfice, Inc., james@satisfice.com, www.satisfice.com
Author of Lessons Learned in Software Testing: A Context-Driven Approach


JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: jamesbach]
      #411726 - 08/30/07 05:23 AM

James, my responses follow; your words are quoted below with the prefix "James:".

James: You'll discover that you have many many more thoughts than you could possibly document, or could benefit by documenting.

I can agree with the first clause; I would tend to think of this as common knowledge. Is this any different from an analogue from days gone by in the computing sector, where, prior to printer buffering and spooling, computers were forced to slow down by orders of magnitude in order to match printer speed during output? The second clause is one I cannot agree with. Since the amount of evidence is too voluminous for this format, I will default to my usual position on this topic. The world of tactical and strategic combat systems (et al.) would suffer a tremendous setback if those worlds embraced this notion.

James: Our craft, I hope, will someday get out of kindergarten and...

Are you suggesting that you have graduated and others have not? Are you suggesting that you define what is kindergarten for the entire industry? Please explain. It is already understood that all technologies evolve and mature.

James: ...realize that human minds are wonderful and powerful. Don't fear them, but learn to use them.

I tend to believe that this notion is not new.

James: Documentation can be very important. Bad documentation, however, is the norm.

I think we have all seen bad documentation, but I struggle with your claim, absent any objective evidence from you, that this is the norm. I trust that you can supply such evidence?

James: One of the things I emphasize is the importance of notetaking. When I coach testers, they end up taking far more notes while testing than they ever have before.

Many NASA, FAA, and FDA-regulated projects, as well as defense systems contracts, require far more than note-taking with respect to testing. These same contracts also require detailed test procedures, all for very good reason.

James: I think most documentation in most test projects is a complete waste. You could burn most of it and no one would notice.

I understand that you do not mean burn literally. However, if you actually did burn any DoD documentation, you would find yourself subject to many criminal charges: arson, sabotage, destruction of government property, and treason, to name a few. Please allow for the fact that documentation is critical in the domains I speak of. Please appreciate all the testing documentation behind this. Be thankful that those procedures were used to detect navigational defects; without them, one of these units might have gone astray and created some issues for you. :)

James: The legacy argument is generally specious, considering that you know you won't read that stuff, and you know if you do you will find it full of fluff and misinformation.

The legacy argument may be specious in your sphere, but your sphere is not all-encompassing, correct? I personally know that I will read the "stuff". My job and career depend upon it! I trust that it is not full of the ingredients you indicate. Reviews and inspections tend to identify any instances of misinformation. Processes are invoked to correct misinformation or other documentation defects.

James: Any skilled tester should know the difference between documentation that helps, and documentation that just wastes our valuable time.

Thank you, and on behalf of the members of this community, thank you!

James: I advocate a concise approach to documentation so that the focus remains on the minds of the testers (and on how much more efficient it is for them to talk to each other than to write long documents to each other) but the golden rule is this: learn what problems documents can solve, and learn how to use them to solve those problems.

After reading your post and the post at DrivenQA, I am confused about what you advocate. It is very difficult to pick it out of your many insults and trashing of domains you clearly know little about. Would you like to recant?

James: This is not an anti-documentation attitude. It's an anti-waste attitude.

Good! Then I would guess you join in with most of us.

James: The next time you hear an exploratory tester say "no doc", point them to session-based test management. This is a highly structured approach to documenting ET that I created in 2001. Google it.

Would it be better to point them to their job requirements and any requirements of a contract they might be working on?

For the benefit of all potential readers who do not wish to join DrivenQA, I will include here our discussion from there. The following questions are excerpts from a post by Bill Matthews at DrivenQA:

"When I'm working with smaller development outfits they seem to be quite comfortable with the idea that testing can be done without reams of scripted tests (i.e. the detailed test instructions that testers should follow). Instead they rely upon the fact that the testers know what they are doing and don't need scripted tests to find the bugs...or perhaps it is the cost of producing the scripted tests that makes them willing to accept this :-)
When I work with larger development outfits or end-user companies (e.g. banks, local government), I generally find they are more reluctant to give up on the idea of scripted tests. It seems that there is a certain comfort in having a set of scripted tests to follow, particularly if they perceive that the tests will be of use in future cycles or releases.
So, in general terms...

Does anyone else find this?...or is it just me?
Do you think it is a case of the larger the project, the more there is a need for scripted testing? Can exploratory testing work on large projects?
Do you think that scripted tests are built automatically because companies think that is the right way to do it?
Do you feel that it is worth investing in scripted tests if you plan to "repeat" the tests on the next release?
Do you feel that managing the testing process is more difficult and risky if you don't have scripted tests?
Thoughts? Points of view?"

From DrivenQA, James's words are again prefixed "James:", with my responses following each:

James: Except there is a certain kind of scripting that to me is just junk food of the worst kind: step-by-step detailed procedural test scripts. Although there are situations where these actually help a test project, in general they hurt it.

I disagree with this broad, sweeping statement. Among the many examples I could cite, I will use the Naval Tactical Data System (NTDS) and the Automatic Carrier Landing System (ACLS). The keystone of the success of the NTDS and ACLS is reams of scripts. I would shudder to think that any one or more of 30 or more people would be expected to memorize and faithfully repeat tests without those.

James: Step-by-step detailed procedural test scripts, I find, are usually vacuous, insulting, and incredibly inefficient.

To you they may indeed be so. To my counterparts and me they are/were both critical and fruitful.

James: They do not result in many bugs being found, but rather tend to discourage bug finding.

I think you would have a difficult time in selling this notion to the 30 testers of NTDS who found ~10,700 defects in the course of one year.

James: They do not engage the minds of the testers who follow them, but rather tend to make those testers wish they worked elsewhere. People who are forced to write them are implicitly discouraged from doing any testing that requires, say, 200 mouse clicks or 500 key presses to perform.

Having been there, I must disagree. I loved that job. I believe I can say without equivocation that the others on this team loved their jobs as well. Example: How can I as a tester assure that a pitch/bank command is issued to a specific aircraft within the required periodicity, and that the actual command factors in the pitch/roll/yaw and wind inputs?

James: They tend to stick to boneheaded simplistic tests.

This is just not true in the case I speak of, else our air defense identification zone would be full of holes.

James: Such test scripts are popular, I believe, simply because they are EASILY WEIGHED...

In the cases I speak of, they are/were popular because they were contractually required and, most importantly, made sense!

James: ...and their sheer weight tends to discourage management from looking too closely at them.

Management should not be required to look closely at reams of scripts. Management should trust that the designers and developers of these scripts have "nailed it" so to speak.

James: What I'm saying is, I suspect people who make heavy use of test scripts are doing so either because they don't know how to test, or they are actually trying to fake the testing...

That is a rather dreadful statement. I cannot subscribe to that. One of the reasons the ACLS can put an aircraft into a 10-foot by 20-foot box on a pitching, rolling, and yawing aircraft carrier is the exhaustive testing brought to bear per the reams of scripts.

James: The arguments in favor of scripting are potentially powerful if applied to a lighter form of scripted testing, such as using a test coverage matrix to aid exploratory testing, or to provide a structure for an improvisational form of testing, such as scenario testing or usability testing.

I suppose it all depends upon the industry and types of systems and/or applications.

James, I am not acquainted with your level of exposure to the various systems and applications in existence. I can assure you that scripting exists in abundance throughout the world of military strategic/tactical systems, factory automation, medical devices, NASA, etc.



martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: jamesbach]
      #411778 - 08/30/07 06:47 AM

Quote:

Hi Martin,
I don't know which people you have experienced who have been trained in ET (there are unfortunately some bad classes out there)



Sorry for the confusion (whilst I will side-step location and dates):

The Trainer was you.

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: jamesbach]
      #411787 - 08/30/07 07:14 AM



Quote:

I suspect that you, like so many other technical people, have not really sat down and watched yourself test.




Wrong... oh so very very wrong.
I consider my career to be a line of branching continuous improvement, both at the industry level and introspectively. However, that doesn't mean that I will drop something that I have determined has the maximum benefit for my clients in favour of what tends to amount to a load of marsh gas.



Quote:


I think most documentation in most test projects is a complete waste. You could burn most of it and no one would notice. The legacy argument is generally specious, considering that you know you won't read that stuff, and you know if you do you will find it full of fluff and misinformation.





This comment, written this way (more on that later), makes two assumptions:
1) That I will remain forever as the only person responsible for testing the App/System.
2) That you have no regulatory requirements. Heard of SOX? SOD? - Google them.



Quote:


The next time you hear an exploratory tester say "no doc", point them to session-based test management. This is a highly structured approach to documenting ET that I created in 2001. Google it.





I have heard of session-based testing and have read some of the documentation. My extended impression is that it does not translate well from the classroom without a very malleable audience (but that's another pointless story).



Personally, I think you have developed a major problem in your delivery. Most professional, experienced testers agree with a lot of what you have written in THIS post on THIS thread on THIS forum. Unfortunately, it appears that most of the time you condense your thoughts too much in print and very, very clearly come off as being easily definable as "Anti-Documentation". I think you also tend to make the assumption that where you are called out on this, the people doing the calling are rigid Documenters who are pathologically opposed to exploratory testing concepts. Your reactions only go to reinforce the impression that people originally have. Just a thought.



Quote:


Any skilled tester should know the difference between documentation that helps, and documentation that just wastes our valuable time. I advocate a concise approach to documentation so that the focus remains on the minds of the testers (and on how much more efficient it is for them to talk to each other than to write long documents to each other) but the golden rule is this: learn what problems documents can solve, and learn how to use them to solve those problems.

This is not an anti-documentation attitude. It's an anti-waste attitude.




and POSSIBLY the problem with this is that you made the mistake (and I chose that word with care) of labelling it Exploratory Testing. Once again, I think that some BUT not all would agree with your point of view if you had chosen a more sensible term (i.e. one that didn't link so readily to a completely different argument). If you had selected a better term, you may have found most of your online discussion would be about the method itself rather than defending the concept.

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: JakeBrake]
      #411800 - 08/30/07 07:53 AM

Quote:


Do you think it is a case of the larger the project, the more there is a need for scripted testing?




No, it's largely irrelevant. The bigger issue is the seriousness of negative outcomes. In the example I have used earlier: a small project of 4 people building a mission-critical system - rigid documentation required. OR a large project team of 120 doing laid-back website development - rigidity not necessarily present.


Quote:

Can exploratory testing work on large projects?




It can, has, and still does. The problem is the delineation between the act of exploratory testing and the use of the term to describe a complete methodology. The technique definitely has a place in almost any project - the methodology doesn't.




Quote:

Do you think that scripted tests are built automatically because companies think that is the right way to do it?




No, very few testers, QA managers, and project managers are actually idiots. They have the capacity to review new approaches and embrace them where appropriate (and toss them on the floor where that is appropriate too).



Quote:


Do you feel that it is worth investing in scripted tests if you plan to "repeat" the tests on the next release?



Yep - but see my comments earlier, which would indicate that in a lot of circumstances I would consider scripting even if the test is only ever going to be run ONCE.



Quote:


Do you feel that managing the testing process is more difficult and risky if you don't have scripted tests?
Thoughts? Points of view?"




I think that you can quite appropriately manage the testing process at the team level or higher with very high-level testers (particularly if you trust the staff). Test management itself is a different area. Whether it can be managed or not is one question - the much bigger questions are compliance issues and actual project risk in terms of delivery and quality.

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


Steve_Bartkowski
Newbie


Reged: 08/14/07
Posts: 7
Loc: York, PA, USA
Re: Explorative Testing [Re: martinh]
      #411904 - 08/30/07 12:58 PM

This thread is an excellent example of why I am a tester (I am a recovering programmer!). It is also an example of why the phrase "Best Practice" should NEVER be used in a general, all-encompassing statement related to testing. A "Best Practice" testing method, approach, technique - whatever - must be used in the context of the application/industry/etc. that is under test.

JakeBrake - Obviously your environment necessitates the rigorous attention to scripted detail. It is an example of where scripted testing is not only mandated, but works and works well. This thread is about exploratory testing, however, and I think the great lesson learned here is that ET is an option for testers, not the Best Practice throughout our profession.

In answering Vijay's original question, however, I question your comment to Cem Kaner that "...I tend to think of exploratory testing as nothing more than analysis and design." I don't quite agree. It has helped me push past the pseudo-limitations previously imposed by our scripted testing, and it has found bugs that I have proven would not have been found otherwise. I was not analyzing nor designing, I was testing. Remember though, my system under test was a perfect fit for exploratory testing.

James' comments - although biased - hit the spirit (in my opinion) of the original question. I have recently attended a course of James' and agree that those who dismiss the notion of documentation were obviously not paying attention. It is part of the whole process!

Vijay and others - when identifying the approach that is Exploratory Testing - go to James' website. If nothing else, Jon Bach's take on 'Session-Based Exploratory Testing' at www.quardev.com is a must read (see earlier post). Remember, it may not be for you. But it is another option when determining what is best for one's own application to be tested.


jamesbach
Moderator


Reged: 03/01/01
Posts: 79
Loc: Front Royal, VA
Re: Explorative Testing [Re: JakeBrake]
      #411949 - 08/30/07 06:59 PM

Jake said: "The 2nd clause is one I cannot agree with. Since the amount of evidence is too voluminous for this format I will default to my usual position on this topic. The world of tactical and strategic combat systems (et al) would suffer a tremendous setback if those worlds embraced this notion."

I'm not sure you know what you're disagreeing with, nor do I know what you mean when you say that combat systems would be dealt a tremendous setback if they were to do.... what? I don't even know what you're concerned about. Is it that you really do think that "everything" that a person thinks should be written down? If you have studied this issue then you know that military documentation is a complex subject. You must know that there is a huge amount of real knowledge that cannot be usefully documented. Have you read any of the research? Try Ed Hutchins' book Cognition in the Wild, or Salas' Making Decisions Under Stress. Both of these were based on research done for the Navy. Please see also The Social Life of Information, by Seely-Brown and Duguid, or Things that Make Us Smart, by the great Don Norman.

What I mean when I say get out of kindergarten, is geez, start reading some of the basic research that's been done in the past 50 years on the subject of how to organize, develop, and train people in a complex cognitive task. And yes, I have graduated from kindergarten, thank you. I've been swimming in this material for nearly twenty years now.

Cem Kaner's wife, Becky, recently got her Ph.D. in Education Theory, and her research was on activity theory. She walked me through an activity theory-based task analysis of boundary testing, and as she interviewed me, I discovered that there is a lot I know about boundary testing that I didn't realize I knew, until that moment! How should I document what I don't know that I know, Jake?

Have you heard of activity theory? Do you know what ethnomethodology is? Have you heard of distributed cognition or situated action theory? Herbert Simon won the Nobel Economics prize in 1978 for his work on bounded rationality and its role in organizations (his Sciences of the Artificial is a must read). Look into any one of these subjects, and you will discover that there are important limitations to what can or should be written down. So, yes, I'm not an expert in any of the fields above, but I'm serious about learning my craft. I need to be aware of this stuff because I don't want to be one of those people who are absolutely sure they know what's right and wrong, and absolutely ignorant of research relevant to making that determination.

My aim is not to be a bully about it. I'm just a little impatient with the mythology of "document everything", which is what I perceive you to have just endorsed. I think it's a toxic philosophy and we need to get over it.

Jake said: "I tend to believe that this notion is not new."

It's not at all new. I recently read papers about testing from 1965 that discuss the importance of skilled human minds. And yet, still, people talk about exploratory testing as if it is some strange and dangerous new-fangled magic. Therefore, I feel I have to go back to some basic principles and remind folks that a sapient process operates by different dynamics than a non-sapient process does.

Jake said: "I think we have all seen bad documentation. I struggle without any objective evidence from you that this is the norm. I would trust that you could supply such evidence?"

Yeah, that is a struggle. But the burden of proof is not on me. I'm not the one claiming that it's important to document. I'm not the one presuming to tell others that good testing requires that you spend your time writing on paper instead of ACTUALLY TESTING. Your position is the counterintuitive one, not mine.

That said, I travel and see a lot of doc. I've reviewed stuff from Motorola to HP to IBM, to Microsoft. By now, in the range of 100 to 200 companies. I've seen test documentation in 10 countries, so far (albeit all in English). Nearly all of it crap. The RUP test strategy template is crap. The IEEE-829 template is crap.

I don't expect you to take my word for it that it really is crap. I expect you to note my opinion, that's all (so that you don't think that the experts in the field, of which I am often said to be one, all agree that there's no problem with over-documenting), and then consult your own experience and your own examples. You can disagree with me if you want to. The only way to proceed deeply would be for you to read up on some of the research, maybe, or for you and I to sit down and go over some examples. We would have to watch people using (or not using) the doc. We would perhaps have to try using a document and watch what happens (an ethnomethodological study!). Maybe we would discover that you and I have radically different value systems when it comes to testing, or maybe we would come to a common understanding. This is difficult to do in this forum.

In a shallow way, we could proceed by me telling you about some of the problems I see that are endemic to test documentation. Here are three:

1. Documentation written by people who don't know why they are documenting, don't know what to document, and don't use the documentation that they create, tends to be nearly useless to all involved. I once did a study at Apple Computer of some 17 test plans in our department. Of those, only 3 were claimed to be used. The rest were gathering dust. When asked, the testers claimed that they only wrote the doc because they thought they were supposed to. It takes skill and motivation to go beyond vapid, vacuous documentation.

2. Documentation that tries to simultaneously serve multiple audiences and purposes often serves none of them. I can take nearly ANY technical document at HP, tear off the first 10 pages of it, and not lose ANY technical content. I say this to people at HP and they usually laugh and nod. They know what I'm talking about. The reason for all that fluff (such as an approvals page, or a change history, or a table of contents) is that somebody somewhere is worried about something, and the author of the document is worried about getting into trouble. But most of us who might want to use the document don't care. We just want the content. And that content, especially if it's in a tutorial format with lots of text, is annoying and useless for the everyday user.

3. Test documentation is usually written by people who are nearly clueless about what is more efficiently and effectively communicated by voice, or could be learned on one's own (this is the Social Life of Information thesis). This leads to a lot of insultingly insipid statements like "press OK" or rehashes of the product documentation, or information about expected results that reads "the result should be as expected" or some other bloody obvious tidbit. I once reviewed the test plan for the Abrams Tank. It was some 75 pages long. A more helium inflated document I have rarely seen. I once reviewed a test procedure from Intel that was 50 pages long. I sent them back one that covered more ground and was only one page long (I did this by removing everything from the test procedure that would be obvious to someone who already knew the product).

I've been an expert witness on a court case where the defense trotted out their test document as evidence that the product was tested well. But the depositions of the testers showed that they never followed that plan.

Haven't you seen these pathologies? Aren't you concerned about it? Come on, man.

I guess that's all I will say on this thread, because I'm impatient and I can feel my temper slipping. I guess I think you should already know this stuff, Jake. There are many articles on my website and others. There are good books out there.

Good luck on your further research. Email me privately if you want to continue the conversation.

(Note to readers: I rarely post on this forum. I sometimes read it. The forum I regularly post upon and read is the software-testing@yahoogroups.com newsgroup.)

-- James

--------------------
James Bach, Satisfice, Inc., james@satisfice.com, www.satisfice.com
Author of Lessons Learned in Software Testing: A Context-Driven Approach


JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: jamesbach]
      #412058 - 08/31/07 03:06 AM

Thank you for responding, James. Well, we certainly have more to talk about! I will use part of this weekend to digest and respond to the remainder of your post. For now...

Quote:

James: What I mean when I say get out of kindergarten, is geez, start reading some of the basic research that's been done in the past 50 years on the subject of how to organize, develop, and train people in a complex cognitive task. And yes, I have graduated from kindergarten, thank you. I've been swimming in this material for nearly twenty years now.


I'm sure you have many good experiences. I am acquainted with some of your material. Perhaps our combined 53 years of swimming will lead to some fruitful dialog on this topic.
Quote:

James: Yeah, that is a struggle. But the burden of proof is not on me. I'm not the one claiming that it's important to document. I'm not the one presuming to tell others that good testing requires that you spend your time writing on paper instead of ACTUALLY TESTING. Your position is the counterintuitive one, not mine.


I guess one of us is shirking responsibility for supplying proof. As an "expert" in the court case you mentioned, I doubt that the statements that follow would hold any water - correct?
Quote:

Documentation can be very important. Bad documentation, however, is the norm. I think most documentation in most test projects is a complete waste. You could burn most of it and no one would notice.


Yes, there is bad documentation. There is also good documentation. I will not attempt to quantify it, or even state that there is more good than bad or vice versa.
Quote:

James: Good luck on your further research. Email me privately if you want to continue the conversation.


Thank you. I would rather continue this in public here, since it is related to the original poster's question. I do not use Yahoo groups.

martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: JakeBrake]
      #412073 - 08/31/07 04:13 AM

James,
I'm sorry that you are losing your temper on this topic here and have decided to leave it.

I want to ask one question though and I hope that you will think about it, even if you don't decide to answer:

Why do you keep going on the attack by claiming that those that publicly challenge Exploratory Testing are demanding that EVERYTHING must be written down, when I am not aware of anyone here actually suggesting that should be attempted?

Quote:

"Is it that you really do think that "everything" that a person thinks should be written down? ...... You must know that there is a huge amount of real knowledge that cannot be usefully documented."




--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


DSquared
Moderator


Reged: 04/02/03
Posts: 4546
Loc: Wisconsin, USA
Re: Explorative Testing [Re: jamesbach]
      #412088 - 08/31/07 04:45 AM

Quote:

2. Documentation that tries to simultaneously serve multiple audiences and purposes often serves none of them. I can take nearly ANY technical document at HP, tear off the first 10 pages of it, and not lose ANY technical content. I say this to people at HP and they usually laugh and nod. They know what I'm talking about. The reason for all that fluff (such as an approvals page, or a change history, or a table of contents) is that somebody somewhere is worried about something, and the author of the document is worried about getting into trouble. But most of us who might want to use the document don't care. We just want the content. And that content, especially if it's in a tutorial format with lots of text, is annoying and useless for the everyday user.





This thread is getting a bit far afield from the original question, but I had to respond. James, you certainly give a lot of food for thought. But one thing does occur to me. This entire thread is mixing and matching a lot of thoughts on ET and documentation, and I for one am getting lost in the mix. In my opinion, the arguments of documentation vs. no documentation (a continuum, not polar) and ET vs. ST are entirely separate and unrelated. (I also get hot when people say "Agile" means no documentation, but that is a separate argument.) As an example, when I do ET, I typically document at least a summary statement of what I tested (not necessarily how I tested it) and document the results. That is a minimum requirement for SOX approvals where I am.

I do have to also follow up with some info on documentation in the military. There is a reason for it. (I came out of the Navy.) When you have a lot of technical "stuff" that needs to be done, and you bring in a crew of people ranging anywhere from PhDs down to a person who can barely read and write and got a GED instead of graduating high school, you need to write documents that can be used by that entire spectrum of people. That generally ends up being the lowest level of documentation, right down to the steps that outline what switch has to be pushed when a certain meter reads a certain value, including pictures so that those who don't read well can still use the document. In the Navy, it's called making the procedures "sailor proof". Therefore, the documents ARE aimed at the monkey off the street.

I quoted the passage above from your thread to ask this question: have you ever heard of InfoMapping? It is a documentation technique that I have seen and used to get around a lot of the problems that you lay out. If you haven't seen or heard of it, you may like what you find.


Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: DSquared]
      #412144 - 08/31/07 06:21 AM

Darrel, where do you recommend we split this topic?

Post Extras: Print Post   Remind Me!   Notify Moderator  
Corey_G
Veteran


Reged: 09/14/01
Posts: 4281
Loc: Boston, MA
Re: Explorative Testing [Re: JakeBrake]
      #412185 - 08/31/07 07:40 AM

> the experts in the field, of which I am
> often said to be one

According to whom? That's a very subjective pat on the back for yourself. Personally, I find that thought laughable. While I'm sure it is pleasing to your ego, we prefer to discuss things in terms of factual evidence, not fallacious arguments.

An "expert" (in my opinion) is a credible source with integrity and a history of pushing the craft forward. Someone with a history of personal attacks is hard to take seriously as a self proclaimed "expert".

http://www.satisfice.com/blog/archives/52

--------------------
Corey Goldberg
Homepage: goldb.org
Twitter: twitter.com/cgoldberg
Google+: gplus.to/cgoldberg


Post Extras: Print Post   Remind Me!   Notify Moderator  
Steve_Bartkowski
Newbie


Reged: 08/14/07
Posts: 7
Loc: York, PA, USA
Re: Explorative Testing [Re: Corey_G]
      #412204 - 08/31/07 08:12 AM

How does this apply to the topic? At least in Jake's posts and James' posts, the topic is Exploratory Testing. They are providing insight into the question at hand - pro, con - even the attacks provide a passionate view of the TOPIC. Jake provides some great feedback and examples on when NOT to use this approach.

This post, however, is unnecessary. Where are the moderators? Oh, we're making Muppet quips!


Post Extras: Print Post   Remind Me!   Notify Moderator  
Jeff Nyman
Moderator


Reged: 12/28/99
Posts: 1875
Loc: Chicago,Illinois,USA
Re: Explorative Testing [Re: Steve_Bartkowski]
      #412304 - 08/31/07 11:35 AM

What's interesting to me about this topic thread as a whole is that it has thrown off much heat and very little light. That's something I see a lot of in the testing community lately. What's clear is that people define what they mean by "exploratory testing" differently --- and yet some people appear to feel that their way is the only way or the right way, so they pronounce what it "IS" rather than offering a general sense of what it might mean under various contexts. (I find this amusing, since this whole test industry seems to be moving toward "context-driven" ideas, which, presumably, would mean that definitions can be contextual as well.)

To me, the whole notion of being "exploratory" is open to a lot of interpretation, just by what the concept of exploration, as a term, means. I've often found it most useful to talk about "exploratory" with people as an approach, one within which I might apply various techniques as I'm exploring. To the extent that these techniques teach me something about the application or allow me to provide information about the application, they are effective. Some techniques may not readily fit within an "exploratory" approach and, in those cases, I would not use them.

Most of the "exploratory testing" I've seen can be categorized a bit by looking at the intent: is it solely to discover or is it to scrutinize closely? Both of these can be structured activities so I don't like making a distinction between "exploratory testing" and "structured testing." I find that this distinction often makes people (like managers) shy away from the concept because then they equate "exploratory" with "unstructured" and thus "something bad."

And, yes, I know: being unstructured is not automatically a "bad thing." But I've found that people, even when exploring in their "freestyle" way, do bring a structure of sorts: it's a structure based on how they best acquire and conceptualize information.

I should probably note that because of discussions like this thread, I've come across cases where I find it even more effective to simply not use a term like "exploratory testing" at all.


Post Extras: Print Post   Remind Me!   Notify Moderator  
martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: Jeff Nyman]
      #412310 - 08/31/07 11:47 AM

I agree, Jeff. To extend:

The structure is there when exploratory testing (note the lack of caps) is done properly.

It is dependent upon the tester and their organisation to what level the structure can be used to address the significant concerns that groups like military organisations have about quality.

It is a PART of a wider Test Strategy in almost all cases (the lower-case version again) but definitely NOT the only part.

Exploratory Testing (with Caps) is not the cure for cancer and cannot enable cold fusion. It has positive elements AND negative elements, and if the positives are to be discussed and accepted then the negatives MUST be acknowledged rather than deflected with attacks on existing methodologies. Contrary to recent claims, this must happen FIRST, as it is the proponents of Exploratory Testing who must prove themselves, not the existing establishment. (Why is that so confusing?)

Balance is needed in these conversations and discussions; evangelism on any one side only serves to turn the discussion into a flame war.

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


Post Extras: Print Post   Remind Me!   Notify Moderator  
supratim
Junior Member


Reged: 06/28/06
Posts: 204
Loc: India
Re: Explorative Testing [Re: martinh]
      #412392 - 08/31/07 11:21 PM

If I could have..........I would have renamed this topic as

"ET........The Clash of Titans"

--------------------
Regards,
Supratim.

"Look at all the sentences which seem true and question them."
http://supratimmodak.blogspot.com/


Post Extras: Print Post   Remind Me!   Notify Moderator  
thekid
Advanced Member


Reged: 01/07/07
Posts: 415
Loc: Castle Grayskull
Re: Explorative Testing [Re: martinh]
      #412440 - 09/01/07 11:20 PM

Quote:

I cannot script my tests yet, even if there are detailed specifications, because there is too much to test and I don't yet know what is most important.





Cem....I'm gonna have to disagree with you on this point. If one doesn't know what is important to test, and one feels there is too much to test......I'd be interested to know how one would staff a project, and staff it with people who are competent in the areas that are critical to the success of the project.

At the end of the day it comes down to revenue. If product management is not aware of which features play a pivotal role in generating revenue, and R&D is unable to give you fairly detailed information about what changes will occur in the code and the risk of those changes impacting other components.......then I think the entire project is in jeopardy of failing.

In terms of importance, I look at the following:
1. If a legacy feature is being enhanced, how critical is that feature to the customer's workflow, and what is actually changing? Will the customer's workflow actually change with the addition of this enhancement, or is it so minor that it wouldn't be captured in a workflow diagram of how the customer uses the app?
2. If it's a new feature, what is the expected adoption rate, who are the customers that will adopt it, and how much do they contribute to the revenue opportunities for the product?
3. For legacy components, what does the historical data say about the component: fragile code? poor performance? a constant source of calls to tech support?

In regards to the rest of this post....oh my goodness. Reading the documentation, testing the app, and learning the app really shouldn't warrant a unique term. This year I tilled part of my front lawn and planted flowers, shrubs, etc. I did some initial reading, and through trial and error, soil samples, and more reading.....well, I ended up with a flower bed that turned out quite nice. My plants were chosen based on their bloom duration, recommended soil and sunlight, and most importantly my sense of what is pleasing to the eye.

What is really unnecessary, though, is to define a term to describe this. The term "gardening" is plenty accurate enough to cover what I mentioned above.

Although this has turned into quite the debate there is one key topic which is missing.

How does one perform effective "exploratory" testing?

1. First off, know the areas of integration, both at a functionality level and, at a high level, at a code level. Know this data like the back of your hand, so that when you ask R&D "Hi, friendly R&D developer.....what areas of the code were modified, added to, etc. since the last build delivered to QA or the last version of the product?" you can make sense of the answer.

2. Ask a somewhat silly question to project stakeholders who are familiar with the app and the mods that are occurring based on the project scope. "I'm going to pay you $100 for each high-priority bug you find over the next week......what areas do you focus on?"

3. Understand the customer workflow and how they use the product, and if possible take a look at the formal User Acceptance Tests that will be executed prior to the customer signing off on the app. Use this information to guide testing.

4. Understand what features generate the most call volume for tech support. (This should probably be discussed at project initiation, since there may be an impact on documentation, training programs, testing, etc.)

Distill this information into a document at the beginning of the project, define priorities, and then constantly refine throughout the project as change happens.


5. Understand the key decision paths in the software, plus the amount of function coverage per component in the app. Use this knowledge to be effective during ad-hoc/exploratory testing (a rough sketch of one way to fold this kind of data into a ranking follows below).
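
As promised above, here is a rough sketch in Python of one way to combine that kind of data into a component ranking. Every number, name, and weight below is invented purely for illustration; a real version would use your own project's data and tuning:

Code:

# Rough sketch: rank components for exploratory attention by combining
# change volume, historical fragility, support-call volume, and revenue
# impact. Every number and weight here is invented for illustration.

components = {
    # name: (lines_changed, past_defects, support_calls, revenue_weight)
    "import wizard": (1200, 14, 30, 0.9),
    "report engine": (300, 3, 5, 0.4),
    "admin console": (50, 1, 2, 0.1),
}

def risk_score(lines_changed, past_defects, support_calls, revenue_weight):
    # Simple weighted sum; tune the weights to your own context.
    return (0.4 * lines_changed / 100
            + 0.3 * past_defects
            + 0.2 * support_calls
            + 10.0 * revenue_weight)

ranked = sorted(components.items(), key=lambda kv: risk_score(*kv[1]),
                reverse=True)
for name, data in ranked:
    print(f"{name}: score {risk_score(*data):.1f}")

The exact formula matters far less than the habit of writing the inputs down and revisiting them as the project changes.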


Trying to group all this under a single test type, though, is not something I would advocate.

Once I have all the above info, I then feel comfortable proposing a staffing plan and estimates. What we've been doing lately is actually changing the QA-to-R&D ratio based on component risk/priority. Some components may be 1:1 and others 4:1.

--------------------
Reserve a few months every so often and preview retirement throughout your career. You won't regret that a 35 year career was reduced to 34 years to take vacations measured in months in order to remember what a stress and care-free life is all about.

Books and hard work will get you anywhere you want to go.


Post Extras: Print Post   Remind Me!   Notify Moderator  
Shane_MacLaughlin
Super Member


Reged: 09/22/05
Posts: 1736
Loc: Dublin, Ireland
Re: Explorative Testing [Re: thekid]
      #412447 - 09/02/07 05:35 AM

One thing that I think is missing here, and the cause of some of the heat that Jeff refers to, is that the (collective) goals of the testing are implicit rather than explicit, and there is an assumption that they are broadly similar across various groups, which is quite possibly not the case. How can you identify a tool or technique to achieve a given goal when you haven't clearly stated what the goal is?

If the goal is 'to verify an application is of an acceptable quality', we have more ambiguity. What is 'acceptable', and is there anyone out there willing to put their neck on the line and define 'quality'?

If you go with the old ISO9000 goals of 'Say what you do', 'Do what you say', and 'Be able to prove it', exploratory testing possibly is weaker than scripted testing. That said, there are many out there (myself included) who feel that many ISO9000 implementations are detrimental to software quality. (That is, if I could define software quality.)

My quality goals are more modest. One of the more important ones can be broadly stated as;

Make sure that the software does everything explicitly stated in all manuals, tutorials, notes and supporting documentation issued for all the interim releases over the last 10 years.

This goal is obviously not suited to exploratory testing; it is necessarily regression testing that by definition has a script to follow. This in no way lessens the value of exploratory testing; it merely suggests that it is not a viable technique to meet one of my stated goals. Because exploratory testing is not a viable technique to meet my goals, should I be re-examining my goals? I don't believe so.

In terms of efficient resource usage, as a small company we write a lot of step-by-step tutorials that get added onto the user documentation. These are also used as test scripts for manual testing, and in turn used to develop automation scripts which get generalised to handle similar cases. They become as valuable a long-term resource to both our product and our customer base as the code itself. Both the English scripts and the automation scripts get re-used and refactored on a pretty regular basis; at this stage we rarely have to do anything 'from scratch'. As such I would argue strongly against the statement

Quote:

there is a certain kind of scripting that to me is just junk food of the worst kind: step-by-step detailed procedural test scripts.




I like this style of scripting for the reasons mentioned, but I guess it's all in the context. One man's junk food appears to be my meat and two veg.
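
To illustrate the reuse, here is a toy sketch in Python. The step format, the action names, and the stub driver are all invented for this example and are not our actual tooling; the idea is simply that one list of tutorial steps can drive both the printable English script and an automated run:

Code:

# Sketch of the reuse idea: one list of tutorial steps drives both a
# printable manual checklist and an automated run. The step format and
# the 'actions' mapping are invented for illustration.

tutorial_steps = [
    ("Open the survey file", "open_file", "demo.svy"),
    ("Select the traverse tool", "select_tool", "traverse"),
    ("Enter the first station", "enter_station", "ST01"),
]

def as_checklist(steps):
    """Render the steps as the English tutorial / manual test script."""
    return "\n".join(f"{i}. {text}" for i, (text, _, _) in enumerate(steps, 1))

def run_automated(steps, actions):
    """Replay the same steps through an automation layer."""
    for _, action, arg in steps:
        actions[action](arg)   # each action is a hook into the app driver

# Stub 'driver' so the sketch runs; a real one would drive the app itself.
actions = {name: (lambda arg, n=name: print(f"{n}({arg})"))
           for name in ("open_file", "select_tool", "enter_station")}

print(as_checklist(tutorial_steps))
run_automated(tutorial_steps, actions)

Keeping the steps as data is what lets the tutorial, the manual script, and the automation stay in sync when they get refactored.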

my rambling 2c,

Shane

--------------------
My LinkedIn profile


Post Extras: Print Post   Remind Me!   Notify Moderator  
DSquared
Moderator


Reged: 04/02/03
Posts: 4546
Loc: Wisconsin, USA
Re: Explorative Testing [Re: thekid]
      #412461 - 09/02/07 01:53 PM

Quote:

I cannot script my tests yet, even if there are detailed specifications, because there is too much to test and I don't yet know what is most important.




I would also have to disagree with this statement. Let's take that statement out of the testing world and apply it to the development world. The quote then becomes:

"I cannot develop the application yet, even if there are detailed specifications, because there is too much to develop and I don't yet know what is most important."

I don't know of ANY development shop in the world that would take that position. Why then should we take the identical position when we apply it to developing tests instead of code?

Somewhere, SOMEONE has an idea of what is important. That is the basis for risk-based development, and it should also serve as the basis for risk-based testing. If Cem's statement were true, all the risk-based testing algorithms in the world would be just gas, because they would fall apart with larger systems. On the contrary, that is precisely where risk-based testing becomes most important.

I don't buy the argument.
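
For the record, the core of most risk-based prioritisation algorithms is not mysterious. Here is a toy sketch in Python of the usual expected-loss ordering; all figures below are invented for illustration:

Code:

# The usual core of a risk-based testing algorithm, sketched: order test
# areas by expected loss = likelihood of failure x cost of failure.
# The figures are invented; a real project would estimate them per area.

areas = {
    # area: (failure_likelihood 0-1, cost_if_it_fails in $)
    "payment processing": (0.15, 500_000),
    "report formatting":  (0.40, 2_000),
    "user preferences":   (0.25, 500),
}

def expected_loss(likelihood, cost):
    return likelihood * cost

for area, (p, cost) in sorted(areas.items(),
                              key=lambda kv: expected_loss(*kv[1]),
                              reverse=True):
    print(f"{area}: expected loss ${expected_loss(p, cost):,.0f}")

Scale the list up to hundreds of areas and the ordering still holds together, which is rather the point.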


Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: jamesbach]
      #412470 - 09/02/07 06:43 PM

James Bach (his statements appear below)

JakeBrake (my responses follow each)

(Note! hyper-links will open within this window)

Jake Said: "The 2nd clause is one I cannot agree with. Since the amount of evidence is too voluminous for this format I will default to my usual position on this topic. The world of tactical and strategic combat systems (et al) would suffer a tremendous setback if those worlds embraced this notion."

I'm not sure you know what you are disagreeing with, nor do I know what you mean when you say that combat systems would be dealt a tremendous setback if they were to do.... what?

I disagree with what you stated, " You'll discover that you have many many more thoughts than you could possibly document, or could benefit by documenting." If one allows this to be an insurmountable obstacle, then one is trapped in a state that I refer to as analysis paralysis. I would never allow that position to slow or stop my progress. I would certainly look to get thoughts into a document and/or audio recorder for later transfer to a document.

More specifically, I disagree with both clauses of your statement. Let me tackle the obvious here first. If I assumed there was no benefit to documenting and then documented none of my thoughts, I would probably fall well short of the job expectations. Every job I have ever had in the computing industry required documentation, some from me and some from others on the project. At this point I would echo "thekid's" sentiments from his most recent post on this topic.

If Dan Brown documented none of his thoughts, we would not be calling him an author.

If the DoD or any other company or unit within had no documentation with respect to critical systems or any systems, then what???

I don't even know what you're concerned about. Is it that you really do think that "everything" that a person thinks should be written down?

I would never propose that all be documented. However, I will always do my best to fulfill and exceed the job requirements, company requirements, program requirements, project requirements, record-retention requirements, and any other requirements, such as standards that may be invoked for specific projects, with efficiency; efficiencies that I have learned to that point in my career. But it doesn't stop there. I believe in giving my personal best. I hold myself to a higher standard. Is there more I can do to improve the project and product? Are there lessons from previous versions or similar systems that need to be considered? Are there items that should be documented and are not?

If you have studied this issue then you know that military documentation is a complex subject.

I might know a tiny bit about the subject.

You must know that there is a huge amount of real knowledge that cannot be usefully documented.

I personally do not worry about that when I work on projects. I focus on what is both useful and needs to be documented. The size of an unknown neither intimidates me nor prevents me from making progress.

Have you read any of the research? Try Ed Hutchins' book Cognition in the Wild, or Salas' Making Decisions Under Stress. Both of these were based on research done for the Navy. Please see also The Social Life of Information, by Seely-Brown and Duguid, or Things that Make Us Smart, by the great Don Norman.

I have read neither. I would not expect that you and I could say we have a perfectly matched list of what we have read or what we intend to read. If it were a job requirement I would certainly read both. Could they benefit me? Perhaps, and perhaps not; I will not judge a book by its cover. Making Decisions Under Stress sounds like an interesting book. It came out in 1998, about twenty-one years after I left Air Traffic Control. Facilities not unlike those under this link helped me learn to deal with stress. Could I do more to this end? Certainly, but do I need to for my current job? My own internal sensors tell me I am doing just fine.

What I mean when I say get out of kindergarten, is geez, start reading some of the basic research that's been done in the past 50 years on the subject of how to organize, develop, and train people in a complex cognitive task. And yes, I have graduated from kindergarten, thank you.

This question leads me to believe you might think I have not read any of the basic research. I do not feel it productive to list what I have read. Also, I do not typically limit myself to a 50-year window. Training people is not my current job, at least not in a direct and customary setting or fashion. Forgive me please for my response to your earlier related statement. It appeared that you were suggesting that "Our craft, I hope, will someday get out of kindergarten".

I've been swimming in this material for nearly twenty years now.

I sincerely hope that your near 20-year swim has been less shark-infested than my own 33-year swim.

Cem Kaner's wife, Becky, recently got her Ph.D. in Education Theory, and her research was on activity theory.

I applaud Mr. Kaner's wife.

She walked me through an activity theory-based task analysis of boundary testing, and as she interviewed me, I discovered that there is a lot I know about boundary testing that I didn't realize I knew, until that moment!

In this respect, I have had revelations and/or experiences similar to yours.

How should I document what I don't know that I know, Jake?

Maybe the membership here can demonstrate that using some known techniques?

Have you heard of activity theory? Do you know what ethnomethodology is? Have you heard of distributed cognition or situated action theory?

I have heard of none of them. I can take an educated guess at ethnomethodology based upon its Greek root. Do I feel that I need to study it at this point in my life? No, but could it benefit me? Perhaps it could, and then perhaps not.

Herbert Simon won the Nobel Economics prize in 1978 for his work on bounded rationality and its role in organizations (his Sciences of the Artificial is a must read). Look into any one of these subjects, and you will discover that there are important limitations to what can or should be written down.

I applaud Mr. Simon. His works do not supersede or take precedence over contractual requirements and the constraints or limitations of the contractually required deliverables on projects I have worked on. Should I encounter a contract that invokes his standards, I would take it upon myself to be diligent in reading the applicable sections.

So, yes, I'm not an expert in any of the fields, above, but I'm serious about learning my craft. I need to be aware of this stuff because I don't want to be one of those people who are absolutely sure they know what's right and wrong, and absolutely ignorant of research relevant to making that determination.

I think most of us here have demonstrated that we as a community are also serious. I generally find that I need not go to the level of research you have indicated to find out whether I am right or wrong in my job. I rely on input from others via technical reviews, et al.

My aim is not to be a bully about it

Do not concern yourself with this. I would not let you bully me.

I'm just a little impatient with the mythology of "document everything", which is what I perceive you to have just endorsed.

I would never say such a thing and per my recollection I have not made such a statement. Therefore I would never endorse such a position.

I think it's a toxic philosophy and we need to get over it.

Along with my response just above, I cannot agree, as I do not have sufficient evidence in front of me. Therefore, at this point, "we" does not include me with respect to your opinions.

Jake said: "I tend to believe that this notion is not new."
(in response to James Bachs statement, "realize that human minds are wonderful and powerful. Don't fear them, but learn to use them.")
It's not at all new. I recently read papers about testing from 1965 that discuss the importance of skilled human minds.

I think the evidence shows that human thinking and usage of the mind began before 1965 and therefore long before these papers you read made any related declarations.

And yet, still, people talk about exploratory testing as if it is some strange and dangerous new-fangled magic. Therefore, I feel I have to go back to some basic principles and remind folks that a sapient process operates by different dynamics than does a non-sapient process.

I personally have not witnessed such discussion. It is clearly your right to feel that some additional action may be required of you in your personal crusade.

Jake said: "I think we have all seen bad documentation. I struggle without any objective evidence from you that this is the norm. I would trust that you could supply such evidence?"

Yeah, that is a struggle. But the burden of proof is not on me. I'm not the one claiming that it's important to document.

These are your claims: "Bad documentation, however, is the norm. I think most documentation in most test projects is a complete waste. You could burn most of it and no one would notice. The legacy argument is generally specious, considering that you know you won't read that stuff, and you know if you do you will find it full of fluff and misinformation." Since you have testified as an expert witness, I would expect that you would understand that such claims would never fly in court. In this case, having made such claims, you would be asked for what? Let me guess: documented proof or other compelling evidence.

I'm not the one presuming to tell others that good testing requires that you spend your time writing on paper instead of ACTUALLY TESTING.

Nor am I, and I certainly do not see where I stated this. I typically "write" using my keyboard. The ratio of documenting to testing varies throughout the development life cycle, and in some cases before and after.

Your position is the counterintuitive one, not mine.

Within the confines of this topic, generally speaking my position is based upon contractual requirements and the criticality of the systems I speak of.

That said, I travel and see a lot of doc. I've reviewed stuff from Motorola to HP to IBM, to Microsoft. By now, in the range of 100 to 200 companies. I've seen test documentation in 10 countries, so far (albeit all in English). Nearly all of it crap. The RUP test strategy template is crap. The IEEE-829 template is crap.

I believe these companies have provided avenues for your opinions.

I don't expect you to take my word for it that it really is crap.

Rest assured that I do not.

I expect you to note my opinion,

I have considered it to this point. Given objective and factual evidence, I would consider and give more weight to your opinion.

that's all (so that you don't think that the experts in the field, of which I am often said to be one, all agree that there's no problem with over-documenting), and then consult your own experience and your own examples.

I am not said to be of the same breed as you. I would shudder to hear someone refer to me as an expert. I know that I could be good at something, yet I consider myself a trainee in all walks of my life.

You can disagree with me if you want to.

Thank you. Amendment I to the Bill of Rights affords me the same opportunity.

The only way to proceed deeply would be for you to read up on some of the research, maybe,

I am pleased that you have made this reading assignment an optional requirement.

or for you and I to sit down and go over some examples. We would have to watch people using (or not using) the doc. We would perhaps have to try using a document and watch what happens (an ethnomethodological study!) Maybe we would discover that you and I have radically different value systems when it comes to testing, or maybe we would come to a common understanding.

I would prefer to have this whole concept clarified for the benefit of the remaining interested onlookers. I do not think that I need a specific common understanding with you relative to my work and any contracts or other projects that I may be or will be working on. Should I encounter a situation that calls for your consult and expertise, I commit to call upon you.

This is difficult to do in this forum.

In my opinion, it is difficult only if you believe so.

In a shallow way, we could proceed by me telling you about some of the problems I see that are endemic to test documentation. Here are three:

N/A.

1. Documentation written by people who don't know why they are documenting, don't know what to document, and don't use the documentation that they create, tends to be nearly useless to all involved. I once did a study at Apple Computer of some 17 test plans in our department. Of those, only 3 were claimed to be used. The rest were gathering dust. When asked, the testers claimed that they only wrote the doc because they thought they were supposed to. It takes skill and motivation to go beyond vapid, vacuous documentation.

IMO this is not necessarily a compelling argument for "burning most documentation". This could be symptomatic of many problems that may be tied to any one or more of the following:

1) Poor management, 2) The right people for the wrong job or vice-versa, 3) Weak or absent technical reviews. I am not certain, but I think that testers are not typically trained as technical writers. I was not, but I cannot speak for the rest of the world.

2. Documentation that tries to simultaneously serve multiple audiences and purposes often serves none of them. I can take nearly ANY technical document at HP, tear off the first 10 pages of it, and not lose ANY technical content. I say this to people at HP and they usually laugh and nod. They know what I'm talking about. The reason for all that fluff (such as an approvals page, or a change history, or a table of contents) is that somebody somewhere is worried about something, and the author of the document is worried about getting into trouble. But most of us who might want to use the document don't care. We just want the content. And that content, especially if it's in a tutorial format with lots of text, is annoying and useless for the everyday user.

I have two technical documents from HP in front of me. If I gave the first 10 pages of each to a passenger pigeon, I would neither be typing this document nor have the ability to print it. At any rate, I do not see anything in your statements to dissuade me from detailed test procedures. When I do encounter a document such as you described, I am usually grateful to the author(s) for complying with the standards that call for the items you listed. In so many instances I need to be sure that I am working with the correct version of an approved document. I find this information to be confidence-building as opposed to annoying.

3. Test documentation is usually written by people who are nearly clueless about what is more efficiently and effectively communicated by voice, or could be learned on one's own (this is the Social Life of Information thesis).

I haven't run into many clueless people in my work domains. If your claim above is true, it on its own does not prove that "most documentation in most test projects is a complete waste."

3. (contd) This leads to a lot of insultingly insipid statements like "press OK" or rehashes of the product documentation, or information about expected results that reads "the result should be as expected" or some other bloody obvious tidbit.

You will not find the word "OK" in the test documentation for what is depicted here, nor will you find an "OK" button. The issues you listed above can be fixed in a number of ways. Document conventions, callable tests, common test entry/exit points, and listing or pointing to the expected results all represent a few repair methods (see the sketch below for what I mean by a callable test). Still, this is not a reason not to document.
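
To show what I mean by a callable test with common entry/exit points, here is a small sketch in Python. The names and the stand-in driver are mine, invented for illustration and not from any standard. The low-level steps are documented once, in a single reviewed procedure, and every test calls it rather than restating the "press OK" minutiae:

Code:

# Sketch of a "callable test": low-level steps live in one reviewed
# procedure; individual tests call it instead of restating "press OK"
# minutiae. FakeDriver stands in for a real UI driver.

class FakeDriver:
    def __init__(self):
        self.screen = "Login"
    def type(self, field, value):
        print(f"type {value!r} into {field}")
    def click(self, button):
        print(f"click {button}")
        self.screen = {"OK": "Main Menu", "Reports": "Report List"}.get(
            button, self.screen)
    def sees(self, text):
        return self.screen == text

def login_as(driver, user, password):
    """Common entry point: documented once, called by many tests."""
    driver.type("username", user)
    driver.type("password", password)
    driver.click("OK")
    assert driver.sees("Main Menu"), "login did not reach the main menu"

def test_report_generation(driver):
    login_as(driver, "analyst1", "secret")   # one callable step, not ten
    driver.click("Reports")
    assert driver.sees("Report List")

test_report_generation(FakeDriver())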

3. (contd) I once reviewed the test plan for the Abrams Tank. It was some 75 pages long. A more helium inflated document I have rarely seen.

One document, measured per your experience, for this proven system does not indicate that all documentation for it was "a complete waste." Nor does it give me any more reason to believe "most documentation in most test projects is a complete waste." I am surprised that it was only 75 pages, given the numerous systems and components within, depending upon your definition of test plan. If you used the same criteria as you indicated in your item 2 above, I would question your methods of making such a determination. Which test plan did you inspect? Given all the components listed here, I would think the various suppliers/manufacturers would each have plans. At a minimum I would expect GD to have had several plans.

3. (contd) I once reviewed a test procedure from Intel that was 50 pages long. I sent them back one that covered more ground and was only one page long (I did this by removing everything from the test procedure that would be obvious to someone who already knew the product).

Would you have removed all this if Intel intended to outsource this plan and its subject? Did they replace theirs with your own? Either way, again, this falls short of making a strong case against documentation.

I've been an expert witness on a court case where the defense trotted out their test document as evidence that the product was tested well. But the depositions of the testers showed that they never followed that plan.

Are you going to tell us why they did not follow the plan? On the surface this appears to be a case of insubordination and/or neglect. This again gives me no more reason to believe "most documentation in most test projects is a complete waste."

Haven't you seen these pathologies? Aren't you concerned about it? Come on, man.

I am not certain which pathologies you refer to. I am concerned about many job-related things. I am also concerned that people judge a serious book by its rather dull cover.

I guess that's all I will say on this thread, because I'm impatient and I can feel my temper slipping.

I am not impatient. The only slippage on this end is me slipping out of my chair.

I guess I think you should already know this stuff, Jake.

Should I know your stuff or the stuff required for my job?

There are many articles on my website and others.

I can generally find what I need at many websites. If not, I visit a bookstore or library.

There are good books out there.

I agree and perhaps we can celebrate that agreement exists on this point?

Good luck on your further research. Email me privately if you want to continue the conversation.

We began here. I do not mind a public debate.

I would prefer that the remaining interested audience gives, or is given, some examples, real examples. I have some candidates in mind relative to showing reasons for comprehensive test procedures. I commit to that within a week of this post. Here is one example. I think anyone here is willing to bring examples to the table. In summary, James, I see no compelling arguments in support of your premise. I do see an irony here: your position is clearly anti-documentation, yet you push documentation (books, papers) by others in an attempt to solidify your position.



Post Extras: Print Post   Remind Me!   Notify Moderator  
DSquared
Moderator


Reged: 04/02/03
Posts: 4546
Loc: Wisconsin, USA
Re: Explorative Testing [Re: JakeBrake]
      #412676 - 09/03/07 07:47 AM

It strikes me that throughout James' comments, there is one underlying assumption that maybe even he doesn't realize he has made.

That assumption is that communications take place in real time.

Take that assumption away, and espouse the same positions about documentation, and what do you get? At a minimum, lost knowledge. If communications can't take place in real time, how else will they take place? Video? Perhaps. But there are limits to the information that can be conveyed by video.

There is also another unstated assumption - that documentation often serves no purpose. Learning theory tells me that this position is flawed.

James: Has your research led you to the different styles of learning? Some people learn best by listening (auditory), some by reading or watching (visual), and some by touching and doing (kinesthetic). If you get rid of documentation, or try to minimize it, you have a very real risk of losing that part of the audience who best learn by reading.

Quote:

What I mean when I say get out of kindergarten, is geez, start reading some of the basic research that's been done in the past 50 years on the subject of how to organize, develop, and train people in a complex cognitive task. And yes, I have graduated from kindergarten, thank you. I've been swimming in this material for nearly twenty years now.




I've been swimming in this material for nearly 30 years. Before I was in QA, I was a trainer, both in the US Navy and in the "real world", as well as teaching at the elementary and high school levels. I'm not sure your statement adds anything to the argument, other than being a subtle way of saying "mine is bigger than yours." Let's not descend to that. Your statements sometimes come across as arrogant and condescending. Perhaps if your delivery were different, more people would hear the content?

In my swim, I have NEVER come across any materials or any person who goes to the extreme you do on documentation. Yes, there are various methodologies to make documentation more efficient (see my previous reference to InfoMapping as an example). And I know that you have not come right out and said that you espouse dumping documentation. BUT....have you ever heard the expression "perception is reality"? Even if you haven't said it, the perception is that you have.

Perhaps you need to do more to mitigate that perception?



Post Extras: Print Post   Remind Me!   Notify Moderator  
yagsy
Active Member


Reged: 11/26/01
Posts: 917
Loc: Greater Boston Area finally
Re: Explorative Testing [Re: DSquared]
      #414355 - 09/08/07 02:29 PM

To answer the original question...

I will start with an opening statement: all of this is said using ... MY OPINION, VALUE SYSTEM AND EXPERIENCE (so take it for what it's worth) ...

Exploratory testing = having a specific result in mind: achieving a goal, which might be seeing how a system (hardware, software, or both) reacts under certain conditions, using chosen variables to implement those tests. This is a planned, methodical event whether or not the project contains documentation.

Compare exploratory testing with exploratory surgery. Would you want your surgeon to just open you up and have a look around? Physicians and surgeons have an idea of what they want to achieve; they have explicit procedures to follow when doing exploratory surgery. They do not just open a person up and have a poke around, do they? Isn't that how they make efficient use of resources while still achieving the goal of diagnosing what is wrong?

Or would you like to be the tester with the mindset of Christopher Columbus when he set out to discover the Americas? He had no real proof, other than a belief that the earth was not flat and that he wouldn't fall off. He set sail anyway, with a plan of just sailing until he came around to the other side of Europe. Maybe that works when discovering another world, but can it be applied to testing your project? Again, I ask the question: is it an efficient use of resources to take this approach to any and all projects?

The answer - It depends.
Depends on expectations from your customer base, marketing team, project management team, the scope/schedule of the project, your executive management team, the size and breadth of your company, the QA team, and the development team. And what about the technical support staff? Can they handle the volume of service based on your findings, or no findings?

Darryl spoke about risk-based development as the basis for risk-based testing. When a decision is made not to test, or to stop testing at a certain point, it involves some risk, as we know. With regard to the company's capabilities, resources, and market share, can the company accept those risks and still meet its goal of making a profit? Because in the end, that is what companies are trying to do, right, to make a profit? Profits may not necessarily be monetary, so don't allow yourselves to be caught in one mode of thought here. And we forget that goal all too often as we get caught up in our own world, don't we? Sometimes a reminder of the bigger picture helps us to see outside the box, and in the end we may be using exploratory testing in and of itself.

In my opinion, people tend to talk about what is relevant to them as the be-all and end-all. So when someone asks the question "what is exploratory testing", the answers have come out as specific methods and applications relative to one's own experience, without necessarily looking at a bigger, broader picture. Opinions are based on value systems, prejudices, and experience. There is some fabulous information, or portions of ideas, in this thread that can be used anywhere. In the "right" situation, the application might be contradictory to the theory that inspired it. But in the end, we are still exploring what is the "right" way to do things to achieve our goals.

Sounds to me like QA is in a constant exploratory mode by its nature. We place academic labels on ideas to help facilitate in our own minds what is right and wrong. This is how we learn as children, and it continues through adulthood. We conclude, in black and white or grey and purple, what is the right and wrong way to do things. Remember that what is right for one situation may not remotely work for another, but it can still be an inspiration for a whole new track, method, or label.

--------------------
Going out of your comfort zone requires failure. True genius is measured by your recovery.

...Jean Ann
www.perfectpitchmarketinginc.com
http://on.fb.me/PPM100
www.projectrealms.com/


Post Extras: Print Post   Remind Me!   Notify Moderator  
yagsy
Active Member


Reged: 11/26/01
Posts: 917
Loc: Greater Boston Area finally
Re: Explorative Testing [Re: yagsy]
      #414356 - 09/08/07 02:30 PM

The documentation argument deserves another post entirely. However, my thoughts are relevant to my first post. Documentation is another area where what works for one situation is not appropriate for another, and where differing levels of documentation are necessary. Documentation is most definitely necessary for some situations but not for others. The level of documentation depends (there's that word again) on the overall variables, with the bigger picture in mind and all the while considering the expected audience. I have specific examples of various degrees but do not want to bog this thread down any more than it is. I'll be happy to open or contribute to another thread. One example that displays this fabulously is the story of Apollo 13, which is not specific to computer hardware or software testing but very clearly is a story about the quality assurance process.

The movie gave us some great scenes which brilliantly unite James' and Jim's views on documentation. Take the scene where the astronauts follow written procedures to power down the capsule so that they would be able to power it back up again in order to get home. Those procedures were written in a "sailor proof" militaristic fashion; they are written so that, in a crisis, those following them avoid the "what do I do next" situation. The astronauts needed to focus on the task at hand and not get caught up in an emotional reaction which could cost them their lives and the entire space program.


Where is this level of documentation appropriate? It depends on the type of industry, dependencies on other industries, and the profit of the company (and remember, profit isn't always monetary). You can be in a situation like the one during Apollo 13's mission recovery, where one wrong move of "let's try this sequence" could have sent the capsule towards Jupiter. As we see here, procedures must not only be "sailor proof" (LOVE this term) but thoroughly tested. There was still an element of risk though, wasn't there?

What about the scene where the NASA engineers dump a bunch of paraphernalia out onto a table? They had to come up with a procedure that fits a square peg into a round hole...to stop the carbon dioxide infiltrating the breathable space in the capsule. Or how about the scene where, in the space capsule's simulator, Ken Mattingly had to come up with a procedure to power up the capsule's computer without going over budget on amps, using only the variables and conditions which existed on Apollo 13? The key was a new sequence for an already pre-existing written procedure to power up the computer. Isn't this the unification of the exploratory testing of a Christopher Columbus with that of an exploratory surgeon, with all his or her knowledge of procedures? And in the end, both sets of procedures had to be written down in a "sailor proof" way, because, as Jack Swigert (played by Kevin Bacon) states, "we're getting punchy"; he didn't want to make a mistake. The lives of all three astronauts and the space program depended upon it. Remember what the original mission of the Apollo program was? To explore space, land on the moon, and scientifically explore the moon's surface. They used written procedures to become Christopher Columbus.

--------------------
Going out of your comfort zone requires failure. True genius is measured by your recovery.

...Jean Ann
www.perfectpitchmarketinginc.com
http://on.fb.me/PPM100
www.projectrealms.com/


Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: yagsy]
      #414393 - 09/09/07 07:53 AM

Jean Ann, I think those are excellent assessments. I have spun off the documentation discussion:
http://www.sqaforums.com/showflat.php?Cat=0&Number=414391&an=0&page=0&vc=1


Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: cemkaner]
      #414405 - 09/09/07 10:30 AM

Synopsis of Exploratory Testing (E.T.)

Compiled from Statements Made by Messrs. Kaner and Bach

What it is or is About or What are Some Concerns

A. Exploratory testing is simultaneous learning, test design, and test execution.

B. Exploratory testing is very simple in concept: all it means is that your learning about the product, your test design, and your test execution are all part of the SAME process. They are not divided into independent activities. I start testing. That's test execution. That's learning. That's test design. It's all three. All at once. Your test ideas evolve. You let them evolve. Test design begins right away. It doesn't WAIT, because exploratory testing IS test design.

C. Exploratory testing is a comprehensive approach to testing, as is scripted testing. You can run an entire testing project, competently, in a scripted way, an exploratory way or in some mix of them.

D. At the start of the project, I know less about the product and its test-related issues than I will know at any time in the future. So I cannot announce my testing strategy at the start (unless I want to be wrong). I cannot script my tests yet, even if there are detailed specifications, because there is too much to test and I don't yet know what is most important. Part of how I learn is to try to test the product, and see what its vulnerabilities are, see what concerns me about the environment, etc.

E. James and I often use the phrase, "parallel, interacting activities" to describe learning, design and execution rather than simultaneous. Some people find this clearer. Again, the idea is that each activity is assisted by the other, from the start of the project to the end.

F. How should I document what I don't know that I know, Jake?

G. Your test ideas evolve. You let them evolve. What's the alternative to this? The alternative is that you refuse to let these three activities influence each other.

H. The opposite of exploratory testing is scripted testing. However, it is a simple matter to blend exploratory testing and scripted testing. In fact, mostly these approaches are blended.

I. I have never personally seen a project well tested if it was primarily tested by scripted, preplanned tests. But colleagues have told me that they have seen this and I believe them. I have personally seen projects that were well tested that relied on exploratory testing. Therefore, I reject the notion that exploratory testing is an adjunct, a secondary activity to scripted testing.

I have some thoughts about item D:

Knowing less at the start of a project is normal; one knows less than one will know at the end of the project or beyond. Based upon my own experiences, I do not feel that this is a progress-stopper in terms of announcing a test strategy. For DoD contracts involving critical systems, one must announce, among other things, a test strategy when responding to a Request-For-Proposal (RFP). Granted, the amount of detail required in a strategy at the RFP phase is certainly a challenge, and it is generally understood that strategy refinement will occur when more is known about the architecture, the design, design constraints, and other technical challenges that might be yet unknown. To omit a strategy is to remove one's chance of being awarded a contract. I might add that some of these contracts actually require source code before the contract is awarded: source code to demonstrate that the bidder has the capability to provide solutions to the most complex aspects of the proposed system. Why does the DoD require all this stuff? I would guess that their processes and practices, as so many others, have evolved to minimize the risks and the horror stories many of us have heard of with respect to expensive hammers, ashtrays, and other DoD contracts gone awry. More importantly, I think any of us can assume the reasons are to minimize risk and to have assurances that the bidder has the capabilities to design, develop, test, and deliver what is being requested. I think that perfectly reasonable. One would certainly not award a contract for cruise missiles to a company that makes inflatable swimming pools.

More general thoughts on item D: In the cases I am citing, much is already known in terms of the domain, and an individual is not expected to carry the testing burden alone, as is almost suggested above. So one can do quite a bit of work with the known aspects; in fact, some of those known items will lead to revealing relevant indicators of unknowns, and one indeed accepts unknowns as an ineluctable reality. So one is not an island, otherwise ignorant of relevant domain information. One is usually part of a team. That team is usually comprised of domain and technical experts. So we have the domain at RFP time. What we do not have is the total solution at this time. That is not to say there are no solutions to particular pieces/parts of the total solution. In many contract cases, Off-The-Shelf (OTS) components may be used and will certainly be investigated. In summary, one could say at this point in an RFP, we have a narrowed domain with a large solution space. I think it goes without saying (but I will state it anyway) that it is both normal and expected that the goal of the project is to narrow the solution space to a single solution that meets the contract specifications. So it seems to me that the idea here is to initially focus on what is known and begin strategizing at that point. Well, what is known? In these contracts, one has a very specific RFP for starters. In many cases these have been put through the technical wringer before being announced. Within that domain there is probably a vast library of information, much of which can be called upon as reusable information for purposes of fashioning a contract bid. The RFP usually cites guidelines and other technical resources. If the proposed system must incorporate OTS components/technologies, those are referenced, and the bidder is fully expected to become very knowledgeable about them. I think it is time for me to back up my words with a simple and small example experience, to the extent I can do so without discussing classified material. A paraphrased example with respect to Cem's strategy issue follows:

The contractor shall design, develop, test, and deliver an Air-Intercept-Control system that permits ground-based controllers to assist pilots in aerial combat maneuvers and does all it can to prevent mid-air collisions. The system shall take as input high-speed telemetry and three existing radars. All controller display information with respect to aircraft and missile tracking shall be based upon the high-speed telemetry and radars. The radars are geographically separated, and therefore all tracking of aircraft and resulting display information shall account for the geographic bias. The contractor must test this at the contractor development facilities, as it is logistically impractical to test onsite during design and development. Those in-house tests may be used as pre-qualification tests for the operational test and evaluation phase.

In this case we had the domain established. We also knew quite a bit about this system based upon existing domains. The RFP went on to list the specifics about the radars and telemetry systems and included specific references. The remainder of the RFP included very specific requirements with respect to system/program functions and features, real-time and otherwise, development requirements, quality requirements, hardware and environmental requirements, and so on. We also had some constraints at this point as related to testing: the system had to be tested at the contractor facility prior to being shipped, installed, and tested 2,000 statute miles away at the intended site.

A strategy specific to testing was required of the bidder. More specific to this example, let us talk about one piece of the larger testing strategy: the piece that zooms in on the geographical limitation and on testing aircraft and missile tracking at the contractor facilities, where there were no live radar or telemetry feeds. So how could we have gone about testing this aspect? We could have arranged for very costly feeds, and did indeed consider it. Aside from the exorbitant price tag, those feeds would have affected the performance of the existing radar and telemetry feeds. Anyway, the specific strategy that was developed for this constraint was actually quite simple: use recordings of radar data and telemetry from the existing facilities. The strategy went on to describe environmental and situational parameters for the recordings such that specific tracking requirements could be tested at the contractor facilities. This simple strategy led to other efforts requiring input from the development teams, hardware and software, e.g., "Once we have this data, how do we feed it into the development/test system?" That was solved, and is out of scope here with respect to E.T. and test strategizing.
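
A toy sketch in Python of the replay idea follows. The record format, the field layout, and the feed function are all invented here for illustration; the real recordings and feed hardware were of course nothing this simple:

Code:

# Toy sketch of the replay strategy: read time-stamped recorded samples
# and feed them to the system under test at the original intervals.
# The record format and feed function are invented for illustration.
import time

recorded = [
    # (seconds_offset, source, payload)
    (0.0, "radar-1", "track 042 bearing 270 range 35"),
    (0.5, "telemetry", "vehicle 7 alt 31000 speed 480"),
    (1.0, "radar-2", "track 042 bearing 268 range 34"),
]

def feed(source, payload):
    # Stand-in for the hardware/software interface into the test system.
    print(f"[{source}] {payload}")

start = time.monotonic()
for offset, source, payload in recorded:
    # Sleep until this sample's original time offset has elapsed.
    delay = offset - (time.monotonic() - start)
    if delay > 0:
        time.sleep(delay)
    feed(source, payload)

Preserving the original timing is what lets recorded data exercise the same real-time tracking requirements as a live feed would.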

I have some thoughts about item F:

With respect to required documentation, you document what you do know. One cannot worry about "what I don't know that I know"; that way lies what is known as analysis paralysis. The show has to go on! Within the domain and an ever-narrowing solution space, coupled with technical reviews, a team-oriented approach, and access to technical resources, this notion cannot be a showstopper. James, perhaps you could cite some examples of this in order to permit definition of a response domain - one that allows the reader to catechize those examples.

In summary, I am not dismissing the concept of E.T. I believe it describes what I have learned to refer to as analysis and design, both of which embody some of the E.T. concepts. I simply disagree with the notion that one can do little or nothing in terms of strategizing and/or documenting such.



Post Extras: Print Post   Remind Me!   Notify Moderator  
J_Brody
Member


Reged: 08/22/07
Posts: 36
Loc: RTP, NC
Re: Explorative Testing [Re: PVB1979]
      #415483 - 09/12/07 09:13 AM

Quote:

Yesterday I asked my colleague about exploratory testing.
He told me it is nothing but ad hoc testing.
Is that correct...?




Absolutely not. Ad hoc testing applies no testing skills, prior experience, or 'intuition'. It is just random testing with no purpose and generally poor results.

Exploratory testing, on the other hand, is a technique that allows an experienced tester to understand his AUT and develop a strategy for where to focus next. For example, that random crash that you get once in 70 tries is an excellent exploratory opportunity. Pull all of your intelligence tools together, begin exploring and monitoring, and wait for a tell-tale sign to pop so you can apply your test design skills to the problem and replicate it 100% of the time. Then you have a bug a developer will sink his teeth into.
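
Here is a bare-bones sketch in Python of that kind of hunt. The run_scenario() and snapshot() functions below are placeholders for whatever your application and monitoring tools actually provide:

Code:

# Bare-bones harness for chasing an intermittent crash: run the same
# scenario repeatedly, and capture state the moment it fails.
# run_scenario() and snapshot() are placeholders for your own hooks.
import random
import traceback

def run_scenario():
    # Stand-in for driving the app; fails ~1 time in 70, like the example.
    if random.randrange(70) == 0:
        raise RuntimeError("crash!")

def snapshot(attempt):
    # Stand-in for grabbing logs, memory stats, environment state, etc.
    print(f"attempt {attempt}: capturing logs and state for analysis")

for attempt in range(1, 1001):
    try:
        run_scenario()
    except Exception:
        traceback.print_exc()
        snapshot(attempt)
        break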


Brody


Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter Ruscoe
Veteran


Reged: 03/18/02
Posts: 7686
Loc: Tampa Bay
Re: Explorative Testing [Re: J_Brody]
      #415503 - 09/12/07 10:13 AM

Quote:

Ad hoc testing applies no testing skills, prior experience, or 'intuition'. It is just random testing with no purpose and generally poor results.



*sigh* Wrong! (On all points).

I'll just leave it at that, since we have been over this ground many, many times in these forums.


Post Extras: Print Post   Remind Me!   Notify Moderator  
blueinatl
Active Member


Reged: 10/20/06
Posts: 756
Loc: Atlanta, GA
Re: Explorative Testing [Re: Peter Ruscoe]
      #415534 - 09/12/07 11:20 AM

Quote:


*sigh* Wrong! (On all points).

I'll just leave it at that, since we have been over this ground many, many times in these forums.




Thank you - I didn't know how to say what you said without a six-page answer. You got it spot on.


Post Extras: Print Post   Remind Me!   Notify Moderator  
J_Brody
Member


Reged: 08/22/07
Posts: 36
Loc: RTP, NC
Re: Explorative Testing [Re: Peter Ruscoe]
      #415629 - 09/12/07 07:21 PM

Gee, sorry to make you sigh. But what part of my post do you feel is wrong?

That ad hoc testing is undisciplined?
That ad hoc testing lacks documentation?
That ad hoc testing is generally performed by people new to the testing field, and is performed randomly, without focus, missing the high-impact defects?

Or perhaps you don't feel there is a difference between ad hoc and exploratory testing.

But, your opinion that my opinion is wrong is not an argument I accept just because you believe that it has been disproved in this forum.

Now perhaps you and I don't define ad hoc and exploratory testing in the same ways. But I do utilize exploratory testing as one of my test methodologies - I actually plan for it - and it is extremely productive in terms of defect yield. I have interns and new hires who perform what I call ad hoc testing and couldn't find a severity 1 defect if their lives depended on it.

Again, just my humble opinion.

Brody


Post Extras: Print Post   Remind Me!   Notify Moderator  
martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: J_Brody]
      #415673 - 09/12/07 11:59 PM

Quote:

Gee, sorry to make you sigh. But what part of my post do you feel is wrong?




I think the consensus was - all of it.

Quote:

That ad hoc testing is undisciplined?




That bit


Quote:

That ad hoc testing lacks documentation?




That bit


Quote:

That ad hoc testing is generally performed by people new to the testing field




DEFINITELY that bit


Quote:

and is performed randomly, without focus, missing the high-impact defects?




Actually, none of that is necessarily true either


Quote:

Or perhaps you don't feel there is a difference between ad hoc and exploratory testing.





One way to ensure that you get attacked for your comments is to add a post to the end of an intense discussion when you plainly haven't bothered to read and understand the posts that came before yours.


Quote:

But, your opinion that my opinion is wrong is not an argument I accept just because you believe that it has been disproved in this forum.




Valid point - HOWEVER, when your post is an extension of an existing thread, and the 'disproved in this forum' you refer to partly happened right above your post in that same thread, it IS a valid argument against your opinion.


Quote:

Now perhaps you and I don't define ad hoc and exploratory testing in the same ways.... ... what I call ad hoc testing




Redefining industry-standard concepts for your own purposes is also one of the major topics that has been discussed in this very thread. Just because you chose your own definition of ad hoc doesn't mean you can then use it to defend yourself when people respond to your comments.


Post Extras: Print Post   Remind Me!   Notify Moderator  
AOQA
Active Member


Reged: 04/12/07
Posts: 1044
Re: Explorative Testing [Re: martinh]
      #415788 - 09/13/07 05:39 AM

You'd have to be crazy not to read this whole thread; it's like a free book about QA best practices, written as a debate between real practitioners. Extremely informative and interesting.

Post Extras: Print Post   Remind Me!   Notify Moderator  
DSquared
Moderator


Reged: 04/02/03
Posts: 4546
Loc: Wisconsin, USA
Re: Explorative Testing [Re: J_Brody]
      #415801 - 09/13/07 05:56 AM

Quote:

That ad hoc testing is generally performed by people new to the testing field, and is performed randomly, without focus, missing the high-impact defects?





Brody - please get a dictionary out and look up the definition of ad hoc. Your characterization of ad hoc as "performed randomly, without focus" is 180 degrees out from the definition.


Post Extras: Print Post   Remind Me!   Notify Moderator  
martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: AOQA]
      #415806 - 09/13/07 06:10 AM

Quote:

You'd have to be crazy not to read this whole thread; it's like a free book about QA best practices, written as a debate between real practitioners. Extremely informative and interesting.




True

I would point out, though, that many of us are backing away from using the term 'Best Practice' in these forum conversations because, as this and many other threads indicate, it causes other arguments to spring up around THAT definition.

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


Post Extras: Print Post   Remind Me!   Notify Moderator  
AOQA
Active Member


Reged: 04/12/07
Posts: 1044
Re: Explorative Testing [Re: martinh]
      #415840 - 09/13/07 06:54 AM

Ok, how about something less loaded, like "effective testing methods"?

Post Extras: Print Post   Remind Me!   Notify Moderator  
J_Brody
Member


Reged: 08/22/07
Posts: 36
Loc: RTP, NC
Re: Explorative Testing [Re: AOQA]
      #415854 - 09/13/07 07:29 AM

I will confess that I had not read the entire thread before I fired off my first response. I spent my lunch hour going through it.

I still stand by what I said. Come on down to my lab some time and I will show you the different results that ad hoc testing produces versus exploratory testing. I am pretty sure we will quickly acknowledge the difference, even if we don't agree on the niceties of the debate here.

And the trip out this way is quite nice.

Brody


Post Extras: Print Post   Remind Me!   Notify Moderator  
AOQA
Active Member


Reged: 04/12/07
Posts: 1044
Re: Explorative Testing [Re: J_Brody]
      #415858 - 09/13/07 07:34 AM

His point is that your arbitrary definition of ad hoc testing doesn't match up with actual ad hoc testing, which is testing that is not planned in advance but is meant to exercise a specific and particular area or function.

That doesn't fit your definition of it as testing that is random and unfocused.


Post Extras: Print Post   Remind Me!   Notify Moderator  
martinh
Moderator


Reged: 02/14/01
Posts: 1087
Loc: Melbourne
Re: Explorative Testing [Re: AOQA]
      #415876 - 09/13/07 08:04 AM

To be clear - what YOU describe as ad hoc testing, I call <please insert your locally favoured swearword here>.

I don't hire people who work that way, and I don't retain the ones I accidentally hired before realising they were incompetent.

The accepted description of ad hoc testing is generally targeted testing that occurs 'off plan' (based upon hunches, previous experience, etc.) and, when done well, includes as an output traceable descriptions of the steps performed and the results obtained.

In MY case (and in all places I have worked where the term ad hoc was used), I take these outputs from ad hoc testing and use them to promote the useful tests from one run into the planned scripted testing for the next run, either as a functional test of the fixes to defects found or as an addition to the regression pack.
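
As a hedged illustration of that promotion step (the fixture, application client, and defect details below are all invented - this is a sketch of the practice, not code from my shop):

Code:

import pytest

# A test promoted from an ad hoc session: the traceable notes recorded
# the steps performed and the result observed, and those notes become a
# scripted regression test for the next run. The `app` fixture and all
# names here are hypothetical.

def test_discount_survives_cart_merge(app):
    """Ad hoc session note: merging a guest cart into a logged-in
    account silently dropped an applied discount code."""
    app.as_guest().add_to_cart("SKU-1001").apply_discount("SAVE10")
    app.log_in("tester@example.com")
    cart = app.merged_cart()
    assert cart.discount_code == "SAVE10"
    assert cart.total == pytest.approx(cart.subtotal * 0.9)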

On that basis, I don't really find it useful to book flights to the US to watch your team fail to test an application properly.

--------------------
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)


Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: J_Brody]
      #416779 - 09/17/07 05:10 PM

Hi Brody,

Quote:

Absolutely not. Ad hoc testing applies no testing skills, prior experience, or 'intuition'. It is just random testing with no purpose and generally poor results.


As we continuously see at this forum, definitions vary. In my opinion ad hoc testing requires intuition and, again in my experience, gray-box knowledge. Under this link is an example.
.
.
Quote:

Exploratory testing, on the other hand, is a technique that allows an experienced tester to understand his AUT and develop a strategy for where to attend to next.


It is my understanding, per Cem's earlier statements, that exploratory testing spans the upstream project processes as well. I refer to E.T. as analysis and design in those earlier phases. I would like to think that much of "where to attend to next" is understood through proactive processes, whereas the case you cite here seems rather reactive. Granted, some testing situations require a shift to a reactive model similar to the one you appear to be suggesting. Do you have another example?
.
.
Quote:

For example, that random crash that you get once in 70 tries is an excellent exploratory opportunity. Pull all of your intelligence tools together, begin exploring and monitoring, waiting for a telltale sign to pop so you can apply your test design skills to the problem and replicate it 100% of the time.


What happens if each attempt costs many hours or days? It seems to me that a project manager or bean counter would raise eyebrows. What about risk management in this case? Do you have an example that you could share - one where you could walk us through a mini-version of exploratory thinking?
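
To put rough numbers on the cost concern - a back-of-envelope sketch only, assuming independent attempts (a big assumption for timing-related bugs) and an invented four-hour cost per attempt:

Code:

import math

p = 1 / 70             # chance a single attempt reproduces the crash
hours_per_attempt = 4  # hypothetical cost of one attempt

expected_attempts = 1 / p  # mean of a geometric distribution
attempts_for_95pct = math.ceil(math.log(0.05) / math.log(1 - p))

print(f"Expected attempts to see one crash: {expected_attempts:.0f}"
      f" (~{expected_attempts * hours_per_attempt:.0f} hours)")
print(f"Attempts for a 95% chance of at least one crash: {attempts_for_95pct}"
      f" (~{attempts_for_95pct * hours_per_attempt} hours)")

At four hours per attempt, the expected 70 attempts cost roughly 280 hours, and a 95% chance of even one reproduction costs over 800. That is the kind of arithmetic a bean counter will do.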

Thank you


Post Extras: Print Post   Remind Me!   Notify Moderator  
TestingMentor
Member


Reged: 12/28/06
Posts: 235
Loc: Seattle, Washington
Re: Explorative Testing [Re: yagsy]
      #426329 - 10/22/07 09:07 AM

Quote:

Compare exploratory testing with exploratory surgery. Would you want your surgeon to just open you up and have a look around? Physicians and surgeons have an idea of what they want to achieve; they have explicit procedures to follow when doing exploratory surgery. They do not just open a person up and have a poke around, do they? Is this considered an efficient use of resources that still achieves the goal of diagnosing what is wrong?




Actually, exploratory surgery is performed very little today, with the exception of suspected cancer, and as medical imagery technology improves, the practice of exploratory surgery on humans will decrease even further. Also, a biopsy and a detailed examination of the removed tissue are performed after exploratory surgery. How many 'exploratory testers' perform an in-depth analysis of a defect to root cause? Exploratory testing has its place, just as exploratory surgery does; however, we must realize it is generally not the only, or the best, option in all circumstances, and as technology improves its use may become less desirable.

Quote:

Or would you like to be the tester with the mindset of Christopher Columbus when he set out to discover the Americas? He had no real proof other than a belief that the earth was not flat and he wouldn't fall off. He still set sail anyway and had a plan of just sailing until he came back around the other side of Europe.




In fact, no, I don't approach testing at all with the mindset of Christopher Columbus. This is really an absurd analogy when you think about it. Columbus didn't set out to discover America; he set out to find a sea route to the east (rather than having to travel overland). So this analogy really means that exploratory testing starts off to prove one thing and then, simply by happenstance, discovers something else - and never completes what it set out to prove in the first place, because the 'thing' it discovers becomes distracting.

--------------------
- Bj -
I.M. Testy blog
Testing Mentor


Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: TestingMentor]
      #427056 - 10/24/07 07:49 AM

Here is a summary of this multi-topic topic to date. Please feel free to correct it and/or continue the exploration.

E.T.

  • In concept, E.T. works well as a description of what many of us were doing before the term was coined.
  • E.T. in practice is not a global solution with respect to postponing test strategy development. In other words, as with all practices it has limited applicability - the exception being the conceptual aspects above.

Documentation - Specifically, Test Scripts/Procedural Documents

  • Clearly, substandard documentation exists throughout the world, just as substandard code, applications, products, and services do.
  • Without metrics or otherwise comprehensive factual evidence, one cannot state exactly how much exists.
  • Apple growers do not dismiss an entire orchard because the first apple they pick up is rotten. Encountering some poorly written test procedures/scripts does not automatically indicate that all are poorly written. Nor does such an encounter mandate that the world should stop preparing test procedures/scripts.
  • Scripts are necessary in some instances and, apparently, unnecessary in others.

Is that a fairly accurate representation at this point sans the discussion about Ad Hoc?

I propose yet another new term for all of this: EAT, or Early Attack Testing. Again, that may or may not be useful, since the term suggests what many of us already do.

Your thoughts?



Post Extras: Print Post   Remind Me!   Notify Moderator  
JakeBrake
Moderator


Reged: 12/19/00
Posts: 15290
Loc: St. Louis - Year 2025
Re: Explorative Testing [Re: JakeBrake]
      #432160 - 11/10/07 05:55 AM

For both the Exploratory Testing (E.T.) and scripted test documentation detail (test procedure) discussions, here is some additional information. I think it only fair to offer objective evidence to support the counterpoints I made in the discussion above.

E.T.

Here is a document that clearly calls out the need for a test strategy in Table 2, Pre-Systems Acquisition activities: https://buildsecurityin.us-cert.gov/daisy/bsi/892.html In other words, the Dept. of Homeland Security requires that Requests for Proposal (RFPs) ask the bidders for a test strategy before a contract is awarded. This is a derivation of the standard for DoD contracts. In these cases, Cem's position would need to adapt in order to have any chance of a successful bid: "At the start of the project, I know less about the product and its test-related issues than I will know at any time in the future. So I cannot announce my testing strategy at the start (unless I want to be wrong)." Cem, or anyone - how would you adjust to such a situation?

Are there vanilla test strategies, along with some domain-specific detail, that could be provided here? There are two E.T. exercises offered here at SQAForums, both attempting to illustrate the concepts required without actually providing a full-blown and otherwise voluminous strategy. In both of these cases, some readily available and easily accessible vanilla templates could be plugged in. Those templates would of course speak to the whole gamut of the testing life cycle across the whole program and/or the projects spawned by those programs. Then one would simply add the domain-specific items that address the specific testing complexities of said domains.

Again, my own position with respect to E.T. amounts to fundamental agreement with Cem, except for the test strategy business. The computing worlds I "grew up" in simply called it test analysis and design. Cem coined a new term, as he stated at CAST 2006: "Twenty-three years ago, I coined the phrase "exploratory testing." I didn't invent the practice but, as far as I can tell, I was the first public advocate of it. The idea was disparaged widely, sometimes thoughtfully. Many of the discussions have been and still are whether it is a good / bad idea in principle, rather than how to do it more effectively and how to assess the quality of the work done. Twenty-three years later, some of the same old attacks on the idea are still widely repeated. Why? What legitimate problems and concerns are they addressing? To what extent has our progress as explorers addressed those concerns? What problems have we actually solved and what should we be telling test groups about how to develop effective exploratory skills and practices in their work? Several well-known explorers are at this conference. This talk opens a discussion that I hope to see as a theme throughout the meeting. It's time to publicly take stock, to identify areas of agreement and areas of controversy, areas of progress and areas of ongoing concern." [Kaner, http://www.associationforsoftwaretesting.org/conference/cast2006/CAST2006Program.pdf ]

In the spirit of supporting and promoting Cem's desires stated above, SQAForums provides exercises accessible at this link: http://www.sqaforums.com/showflat.php?Cat=0&Number=430135&page=0&vc=1#Post430135

Script Documentation

I think the contributors to this portion of the topic have adequately demonstrated that there are domains (medical, etc.) where the need for detailed test procedures is clear, and that need is understandable. Per James Bach's invitation to discuss this further, I have fleshed out some additional evidence backing my position in support of detailed scripts. While the invitation suggested email, given the importance of this issue to test engineers worldwide I feel it important to keep the discussion in the public eye. The reference material supplied below responds to this specific segment of the E.T./script documentation discussion: http://www.sqaforums.com/showthreaded.php?Cat=0&Number=412470&page=0&vc=1

All of the DoD contracts that I was a part of required procedural documentation to follow this standard or its predecessors: http://nas.cl.uh.edu/helm/MS_2167/std_80015A.doc The following NASA standard understandably specifies procedural detail: http://satc.gsfc.nasa.gov/assure/docstdae.html#A200 With respect to detail, the following standard states, "It shall be written in enough detail that if an anomaly occurs, the test can be exactly repeated to show the anomaly." http://qa.jpl.nasa.gov/PQA/QCMatrix/QC90S.html Here is another supporting document: http://meweb.larc.nasa.gov/amsd/refs/tst_pln_sample.html#xtocid509112 While I would like to post some examples here from my DoD experience, I cannot, as I would be compromising confidential or secret material still under the jurisdiction of the General Declassification Schedule (GDS).
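
To illustrate the level of detail those standards call for, here is a minimal sketch of a procedure runner whose log is sufficient to repeat an anomalous run exactly. The step contents and function names are hypothetical, not drawn from any of the referenced programs.

Code:

import json
import time

# A hypothetical three-step procedure: every step carries its exact
# inputs so an anomalous run can be repeated step for step.
PROCEDURE = [
    {"step": 1, "action": "set_power", "params": {"bus": "A", "volts": 28.0}},
    {"step": 2, "action": "load_track", "params": {"file": "track_017.rec"}},
    {"step": 3, "action": "verify_lock", "params": {"timeout_s": 30}},
]

def run_procedure(procedure, execute, log_path="test_run.jsonl"):
    """Execute each step via `execute(action, params)` and write a
    replayable record (step, inputs, result, timestamp) per line."""
    with open(log_path, "w") as log:
        for step in procedure:
            result = execute(step["action"], step["params"])
            log.write(json.dumps({**step, "result": result,
                                  "utc": time.time()}) + "\n")
            assert result == "pass", f"Anomaly at step {step['step']}"

The JSON-lines log is the point: per the JPL wording above, the record must be detailed enough that the exact run showing the anomaly can be repeated.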

Anyway James, I just may have exposed some additional opportunities for you in terms of providing more domains that may want to hear your message about improving their test procedures!



Post Extras: Print Post   Remind Me!   Notify Moderator  