There are plenty of possible units of measurement: test cases executed, defects found, test coverage, time spent testing, etc. You could also combine any number of these into a single score, I suppose (a rough sketch of that idea is below).
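Just to make the "combination" idea concrete, here's a minimal Python sketch. The metric names, weights, and numbers are all invented for illustration; they aren't from any real standard, and the very next point explains why relying on a score like this is risky.

```python
# Hypothetical weighted combination of common test metrics (illustration only).
def productivity_score(cases_executed: int, defects_found: int,
                       coverage_pct: float, hours_spent: float) -> float:
    """Naive tester 'productivity' score: weighted output per hour, scaled by coverage."""
    if hours_spent <= 0:
        raise ValueError("hours_spent must be positive")
    output_per_hour = (cases_executed + 2 * defects_found) / hours_spent
    return output_per_hour * (coverage_pct / 100.0)

# Example: 50 cases run, 8 defects found, 70% coverage, over a 40-hour week.
print(productivity_score(cases_executed=50, defects_found=8,
                         coverage_pct=70, hours_spent=40))
```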
Now, with that said, I will go back to my unchanging stance on this subject: you can't! There is no way to accurately judge the productivity of your testers based on any sort of calculation. You might be able to tell whether a test project is on or off schedule. However, if you've set specific standards that MUST be met in a given position, then I suppose you could enforce some sort of productivity standard. The difficulty with that is that people will concentrate more on the standard than on the actual testing itself. So if the standard is test cases executed, then maybe little Johnny TesterPants will just mark his last 10 test cases as Passed to make the grade, or skip the really simple ones that "should be fine." I'm all for risk assessment, but I'll take care of that, thanks. :)
Anyway, just my 2 cents.
9 out of 10 people I prove wrong agree that I'm right. The other person is my wife.
Software productivity can be measured using multiple size measures. If the size of the project is measured in function points, then productivity is measured as the number of FPs per person-month, per person-day, or per person-hour.
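To show the arithmetic behind that, here's a small sketch. The project size, effort, and the working-days/hours assumptions are made up purely to illustrate the FP-per-effort calculation.

```python
# Function-point productivity = delivered function points / effort.
def fp_productivity(function_points: float, effort_person_months: float) -> float:
    """Return productivity as function points per person-month."""
    if effort_person_months <= 0:
        raise ValueError("effort must be positive")
    return function_points / effort_person_months

# Example: a 400-FP project delivered with 25 person-months of effort.
fp_per_month = fp_productivity(400, 25)   # 16 FP per person-month
fp_per_day = fp_per_month / 21            # assuming ~21 working days per month
fp_per_hour = fp_per_day / 8              # assuming an 8-hour working day

print(f"{fp_per_month:.1f} FP/person-month, "
      f"{fp_per_day:.2f} FP/person-day, "
      f"{fp_per_hour:.2f} FP/person-hour")
```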