Microsoft's testing solution. You will run into philosophical differences when you compare Microsoft's offering with HP's. Microsoft does not really target independent testers; it targets developers who test, and its tools follow that philosophical bent.
Can you define "...not working for us..." in more objective terms? The reason I ask is complex, but lately I have seen a lot of people throw away their incumbent tool when the issue was not the tool's capability, but that the people using it lacked the skills to be successful with it. This happens across vendors: HP, Borland (now Micro Focus), Microsoft, and open source. I am not suggesting that this is the case here, but it is a growing and disturbing trend industry-wide, and it points to the paucity of skills in the market.
Knowing in objective terms what is "not working" could lead to a better recommendation that addresses the specific gap you have.