First of all, sorry if the subject line is unclear to native English speakers. I will try to explain my problem.
- At the end of my TestExecute run, I get the log results and can see all my projects with their logs. Everything looks correct.
- At the end, I also send an email to some people to notify them that a log is available.
I would like to include in this email a 'quick report' that shows the essentials without having to read all the details in the log (imagine a project suite containing 25 projects, each with several sub-routines).
For example I would like to know:
* Total number of tests executed by TestExecute
* Total number of tests with Error status
* Total number of tests with Warning status
* Total number of tests with OK status
* Test Start time
* Test end time
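The 'quick report' described above amounts to a small record of counts and timestamps. As an illustrative sketch only (TestExecute does not expose such an object directly; every name here is hypothetical), it could look like this in Python:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestRunSummary:
    """Hypothetical container for the quick-report fields listed above."""
    total: int
    errors: int
    warnings: int
    ok: int
    start_time: datetime
    end_time: datetime

    def as_text(self) -> str:
        # Render the short summary for the notification email body.
        return (
            f"Tests executed: {self.total}\n"
            f"Errors: {self.errors}\n"
            f"Warnings: {self.warnings}\n"
            f"OK: {self.ok}\n"
            f"Start: {self.start_time:%Y-%m-%d %H:%M}\n"
            f"End: {self.end_time:%Y-%m-%d %H:%M}"
        )
```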
I'm pretty sure there is an easy property/method on the log object for this, but I cannot find it in the Help section.
I don't know if it keeps a running tab of errors. Also, I don't think there is a way to mark a test as failed. For instance, if there were multiple errors in a single test, would it be reported as multiple errors?
One way you can handle this is by holding global variables and using some logic to track it manually. You should be able to use Events to keep a running tally.
So you should have a variable for each thing you're tracking. You also need a flag that tells you whether an error has already been reported for the current test. In the OnLogError event handler, check this flag: if it is zero, set it to 1 and also increment the number of tests in error. Then, at the end of the test case, reset the flag to zero and move on to the next test case. This yields the true number of test cases that experienced an error, rather than the raw error count.
You should be able to do similar things with each of the other items you're tracking. For OK, just check the other flags at the end of the test case: if they are all still zero (nothing was triggered), the test was OK, and you can increment the OK count.
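The flag-and-tally logic above can be sketched as a small class. This is language-agnostic pseudologic expressed in Python, not TestComplete script: in a real project you would wire the `on_*` methods to the corresponding TestComplete events (OnLogError, OnLogWarning) and call the end-of-test hook yourself.

```python
class RunTally:
    """Roll per-test status flags up into run totals (global-variable approach)."""

    def __init__(self):
        self.total = self.errors = self.warnings = self.ok = 0
        self._error_flag = 0    # 1 once an error is logged for the current test
        self._warning_flag = 0

    def on_log_error(self, message=""):
        # Count the test as failed only once, no matter how many
        # errors it logs.
        if self._error_flag == 0:
            self._error_flag = 1
            self.errors += 1

    def on_log_warning(self, message=""):
        if self._warning_flag == 0:
            self._warning_flag = 1
            self.warnings += 1

    def end_of_test_case(self):
        # No flag raised during the test case means it was OK.
        self.total += 1
        if self._error_flag == 0 and self._warning_flag == 0:
            self.ok += 1
        # Reset the flags for the next test case.
        self._error_flag = self._warning_flag = 0
```

A test that logs three errors still increments `errors` only once, which is the "true number of failing test cases" behavior described above.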
I do very similar things in my automated regression suite. But I've found the problem to be MUCH easier to solve with the use of a "helper" console application that I wrote in C#. I have a logging function in my C# application that I can invoke from within my test script. I arbitrarily decide which events are "significant" events, and then send them to my logger. This allows me to carefully construct the messages that are actually being logged so that they are much more helpful than some of the default messages (e.g. "The object does not exist." Yeah...I got that....WHICH object??).
At the end of the test run, my helper application generates an email message based on all of the "significant events" that I passed to it throughout my test run. I serialize the results to XML, and then I run an XSLT transformation on them to produce the actual HTML report that is being included in my email messages.
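The serialize-then-transform pipeline can be illustrated in miniature. The event schema below is invented for the example (the poster's actual C# schema is not shown), and since Python's standard library has no XSLT processor, the XML-to-HTML step is done with a plain function standing in for the XSLT transformation:

```python
import xml.etree.ElementTree as ET

# "Significant events" collected during the run; field names are illustrative.
events = [
    {"status": "error", "test": "LoginTest", "message": "Button 'OK' not found"},
    {"status": "ok", "test": "SearchTest", "message": "Completed"},
]

# Serialize the results to XML, as the helper application does.
root = ET.Element("testrun")
for e in events:
    ev = ET.SubElement(root, "event", status=e["status"], test=e["test"])
    ev.text = e["message"]
xml_report = ET.tostring(root, encoding="unicode")

def to_html(xml_text: str) -> str:
    # Stand-in for the XSLT step: turn each <event> into a table row.
    rows = []
    for ev in ET.fromstring(xml_text):
        rows.append(
            f"<tr><td>{ev.get('test')}</td>"
            f"<td>{ev.get('status')}</td>"
            f"<td>{ev.text}</td></tr>"
        )
    return "<table>" + "".join(rows) + "</table>"

html_report = to_html(xml_report)  # body of the email message
```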
My philosophy on this is that my test script should only do one thing: test the application that needs testing. Anything I want to do as part of a test run that is only tangentially related to the actual test, I push off onto my C# application. This approach has greatly simplified my test scripts, because many tasks are much more simply handled by managed code. It has also greatly expanded the flexibility of my testing environment. Using the managed-code application as a "test controller" allows me to fine-tune the automation environment (as well as do some other neat things, like integrating the automated testing into our CI/SCM build process without relying on @-jobs to schedule test runs).