Well, I am not sure I really understand what you are asking here. Are you asking whether you should test over a range of data to validate the application's responses to both good and bad input? Are you asking about driving load and performance tests with different data sets? The topic just seems too big to condense into a few lines here.

There are several reasons to test over a data range. In my case, for example, I need to validate the lookup of a batch ID against some known results. I expect that when I query certain batch IDs in my test data, the application should respond with pending states, while other IDs should result in completed states. If the application's response does not match my expected value, then I know I have a problem.

My application, like many others, relies on the basics of data entry to operate correctly. It needs to be able to add new batches, update existing batches, and retire/delete completed or cancelled batches. To validate this in a timely way, I need my test scripts to enter a value and check the result. I do this with e-Tester and its databank utility, which allows one test script to validate over a range of different data (including bad data).
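To make that concrete, here is a minimal sketch of the idea in plain Python, standing in for e-Tester's databank utility. The batch IDs, states, and lookup function are all hypothetical; the point is that one script iterates a databank of inputs and expected results.

```python
import csv
import io

# Hypothetical databank: each row is one test case with an input
# batch ID and the state we expect the application to return.
DATABANK = """batch_id,expected_state
B-1001,pending
B-1002,completed
B-9999,not_found
"""

def look_up_batch(batch_id):
    """Stand-in for the real application query (hypothetical)."""
    known = {"B-1001": "pending", "B-1002": "completed"}
    return known.get(batch_id, "not_found")

def run_databank(databank_csv):
    """One script, many cases: iterate the databank and compare
    the application's actual response with the expected state."""
    results = []
    for row in csv.DictReader(io.StringIO(databank_csv)):
        actual = look_up_batch(row["batch_id"])
        results.append((row["batch_id"], actual == row["expected_state"]))
    return results

for batch_id, passed in run_databank(DATABANK):
    print(batch_id, "PASS" if passed else "FAIL")
```

Adding a new test case is then just adding a row to the databank; the script itself never changes.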
I guess I am kind of rambling here ... but like many things in this line of work, it is hard to come up with answers when you don't have a complete grasp of the question.
I think I know what you are asking. You want to write scripts which are controlled by the data rather than having a huge number of difficult to maintain scripts which cover all of your tests.
Data-driven testing is in my view the best way to approach the automation effort.
Rather than go into detail here, I will save you some time. Go to the "Downloads" link to the left of this discussion window. When the new page loads, have a look in the "White Papers" section for some excellent articles on data-driven testing. On the same page there is also a link to the discussions; at the moment the top item in that list is "Data Driven Testing".
Regards and Good Luck,
"Not every solution was derived to address an obvious problem" - Me (quite recently indeed)
Following are the tools that you can use to generate the test data.
Later, integrate this data with your automated test tool. Most automated test tools support selecting test data from external databases.
Well, in a data-driven automated testing methodology, only the input and expected results are updated, so your tests stay "generic": you concentrate on creating the test data, feeding that data to the application, obtaining the results, and comparing them with the expected results. Based on that comparison, a Pass or Fail verdict is determined.
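The steps above can be sketched as a generic runner in Python (the function under test and the data rows here are hypothetical; in practice the rows would come from a databank or external database):

```python
# Generic data-driven runner: the script never changes,
# only the data rows do.

def run_data_driven(func, cases):
    """Feed each input to the function under test, compare the
    actual result with the expected one, and record a verdict."""
    verdicts = []
    for inputs, expected in cases:
        actual = func(*inputs)
        verdicts.append("PASS" if actual == expected else "FAIL")
    return verdicts

# Hypothetical function under test.
def add(a, b):
    return a + b

# Data rows: (inputs, expected result). The last row is
# deliberately wrong to show a Fail verdict.
cases = [((2, 3), 5), ((0, 0), 0), ((1, 1), 3)]
print(run_data_driven(add, cases))
```

Notice that extending coverage means editing `cases`, not the runner, which is the maintenance win the earlier replies describe.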