I recently had an interviewer ask me how I would ensure that an offshore team was reporting valid execution status: for example, what if they report 80% complete when they are really only 8% complete? Aside from tracking execution status and results in test management tools, the only other thing I can think of is to spot-check major functionality myself. I am looking for the real answer on how you ensure that reported results are accurate. There are tools instrumented into the code to track testing against the code base, but I don't think that was the answer the interviewer was looking for.
I would start with the total count of test cases to be executed for the entire cycle, then drill down to a daily expected execution count.
I would also have access to review the test cases and their results.
So if the expected daily execution is 100 test cases and the team executes 80, that is 80% execution (irrespective of result, i.e. pass, fail, or blocked); the actual daily execution was 80 test cases.
You can also cross-check the test results by picking a few test cases and executing them yourself to make sure the offshore team is doing it correctly.
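As a minimal sketch of the daily-execution math above (the status labels and counts here are made-up illustrations, not real data), the execution percentage counts whether a case was run at all, regardless of outcome:

```python
# Daily execution %: a case counts as executed whether it passed,
# failed, or was blocked. Only "not run" cases are excluded.
planned_today = 100  # hypothetical expected daily execution

# Hypothetical statuses as reported by the offshore team for one day.
results = ["pass"] * 55 + ["fail"] * 15 + ["blocked"] * 10 + ["not run"] * 20

executed = sum(1 for status in results if status != "not run")
execution_pct = 100 * executed / planned_today
print(f"Executed {executed}/{planned_today} = {execution_pct:.0f}%")
# prints "Executed 80/100 = 80%"
```

Note that this number deliberately says nothing about quality; pass/fail breakdown is tracked separately.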
Often with outsourcing arrangements there is a question of trust: whether the agreed work is being done, and whether it is being reported correctly. The most effective way to get the results you want is to manage the relationship, because all the verifications of performance you can do tend to be achievable only after the fact.
For myself, I want visibility of their test materials, results, and defect logs so I can do a detailed review of a few items and gain confidence in their approach. To make this effective you really need to be reviewing material before they start testing. If I'm happy with their approach and some of the detailed materials and results, I can be more confident in the numbers reported.
A bi-directional review of test cases can be really beneficial for both parties in eliminating misunderstandings. I've worked with one vendor where we reviewed each other's test cases; this improved the quality of tests on both sides and also identified a couple of misunderstandings about how certain functions worked.
Lastly, there is no harm in visiting the offshore test team early in the relationship. Spend some time talking to the actual testers about their work, not just to management. Sit down with a tester and ask them to walk you through one of their recent tests. If visiting is too expensive, consider WebEx, but it is not nearly as powerful.
This is a good question. I worked as a QA manager from the US for some time and had a big offshore team. It was quite embarrassing when I reported part of the work done to the client and later found it was not actually done. Here are the corrective actions for this:
1. Make one person accountable for providing you the report, and it must come from the tool. Export the tool's data into one sheet of an Excel workbook and, in the other tabs, use pivot formulas to auto-calculate the values. The tool does not lie, and this kind of report cannot be manipulated.
2. Ask testers to attach evidence of the test results along with each test execution, both pass and fail.
3. Before you report any numbers to the client, ask one of your onshore testers to spot-check the results.
4. Never, and I mean never, allow testers to fail multiple test cases against the same defect. That is another source of wrong information: 80% executed, but 70% of those failed on the same defect. I was practically beaten up by the dev lead for producing such a result, so be very careful here. Make sure they mark those cases as blocked by that defect and highlight it in the status. It takes only a few minutes to fail 100 test cases using the same defect.
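Points 1 and 4 above can be sketched as a quick check on the tool's raw export. The row layout, defect IDs, and the 50% threshold here are assumptions for illustration; the idea is to aggregate the numbers from the data itself, the pivot-table way, and to flag any defect that a suspiciously large share of failures hang off:

```python
from collections import Counter

# Hypothetical export rows from the test management tool:
# (test_case_id, status, linked_defect_id or None).
rows = [
    ("TC-001", "fail", "DEF-42"),
    ("TC-002", "fail", "DEF-42"),
    ("TC-003", "fail", "DEF-42"),
    ("TC-004", "fail", "DEF-07"),
    ("TC-005", "pass", None),
    ("TC-006", "blocked", "DEF-42"),
]

# Point 1: compute the status breakdown from the raw data itself,
# rather than trusting a hand-typed summary that can be manipulated.
status_counts = Counter(status for _, status, _ in rows)
print(dict(status_counts))

# Point 4: flag any defect responsible for more than half of all
# failures; those cases should probably be marked blocked instead.
failures = [defect for _, status, defect in rows if status == "fail"]
for defect, count in Counter(failures).items():
    if count / len(failures) > 0.5:
        print(f"Suspicious: {defect} accounts for {count}/{len(failures)} failures")
```

In this made-up data the check would flag DEF-42, since three of the four failures point at it.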
Yes, I agree. Sometimes the team is not bad, but they feel it is not good to deliver bad news to the manager, which leads to even worse situations.
Regarding my 4th point: unless we have a real sev-1 situation, we cannot say that a high percentage of test cases are blocked by a single defect; if we can, then the test cases are not unique enough. Testers should check the objective of each test case carefully. If the objective says that on the search page we should see options A, B, and C, then they should pass that test case on that basis, even if the search function itself is not working at all. In many cases I have seen a tendency not to read the objective carefully and to conclude that the case is blocked. We should execute as many of the steps as possible until we actually hit a roadblock.