I have not been able to find a lot of information on this topic. I am currently implementing keyword-based automation and it works just fine; however, I am trying to come up with the most sensible and simple way to gather and analyze the results.
Currently I store all the results in XML files. Each test case run has its own XML file and supporting image files.
If anyone has had success with a logging/analysis scheme, or can point me toward more information, it would be appreciated.
For mature automation, data-driven testing and automated results processing are both very important. For the data-driven side there are lots of articles and different techniques, but results processing is common to test automation and general applications alike, so you can follow any standard log file format. You can maintain two or three log files per script: one for all the activities of your suite, and another that just records a Pass or Fail result for each test case. It is better if all the log files are HTML files. I previously read one article on this (Mango.. Inc) but I can't remember the name. If I find the link again, I'll post it here.
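As a rough sketch of the "one Pass/Fail line per test case" HTML log described above, here is a minimal Python example. The function name, the `(name, passed)` result tuples, and the file name are my own illustrative assumptions, not anything from the original post:

```python
import html

def write_result_log(results, path="results.html"):
    """Write a minimal HTML table with one Pass/Fail row per test case.

    `results` is assumed to be a list of (test_case_name, passed) pairs
    collected by the automation suite.
    """
    rows = []
    for name, passed in results:
        status = "PASS" if passed else "FAIL"
        color = "green" if passed else "red"
        rows.append(
            f"<tr><td>{html.escape(name)}</td>"
            f"<td style='color:{color}'>{status}</td></tr>"
        )
    doc = (
        "<html><body><table border='1'>"
        "<tr><th>Test Case</th><th>Result</th></tr>"
        + "".join(rows)
        + "</table></body></html>"
    )
    with open(path, "w", encoding="utf-8") as f:
        f.write(doc)
    return doc

# Example: two test cases, one passing and one failing.
log = write_result_log([("login_valid_user", True),
                        ("login_bad_password", False)])
```

The suite-activity log mentioned above would be a second, more verbose file written alongside this one; the same table approach works, with a row per logged action instead of per test case.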
We're using XML for input and output. In fact, the input and output are the same exact format. When a test is run, I create a copy of the XML input (actually I use the XML DOM object inline), and as the test runs, I update a "status" attribute at the testset, testcase, and step levels with PASS or FAIL. When the run is completed, we apply an XSL stylesheet to the resultant XML file (with a web browser, as opposed to a server-side XSL transformation) for easy viewing. You should be able to do that pretty easily, and you can point to your image files in an <img> tag in the XSL stylesheet. Hope this helps.
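A minimal sketch of this "same format in, same format out" idea, using Python's standard-library `xml.etree`. The element and attribute names (`testset`, `testcase`, `step`, `status`) and the stub `run_step` are assumptions for illustration; the real keyword runner would decide pass/fail. For browser viewing, an `<?xml-stylesheet?>` processing instruction at the top of the output file can point at the XSL:

```python
import xml.etree.ElementTree as ET

# Hypothetical input in the shared input/output format.
INPUT_XML = """
<testset name="smoke">
  <testcase name="login">
    <step name="open_page"/>
    <step name="submit_form"/>
  </testcase>
</testset>
"""

def run_step(step):
    # Placeholder for real keyword execution; always "passes" here.
    return True

root = ET.fromstring(INPUT_XML)
for testcase in root.iter("testcase"):
    case_ok = True
    for step in testcase.iter("step"):
        ok = run_step(step)
        # Stamp status at the step level as the test runs.
        step.set("status", "PASS" if ok else "FAIL")
        case_ok = case_ok and ok
    # Roll the step results up to the testcase level.
    testcase.set("status", "PASS" if case_ok else "FAIL")
# And up to the testset level.
root.set("status", "PASS" if all(
    tc.get("status") == "PASS" for tc in root.iter("testcase")) else "FAIL")

result_xml = ET.tostring(root, encoding="unicode")
```

Writing `result_xml` out with an `<?xml-stylesheet type="text/xsl" href="results.xsl"?>` line prepended lets a browser apply the stylesheet on open, as described above.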
Quality Assurance Engineer
The Weather Channel