I think the term you're looking for is 'exit criteria' -- something you would typically document in a test plan.
Good term to use the Search functionality for; the subject has come up here more than a few times. Beyond that, there are plenty of resources, both online and in print, that discuss it in more depth.
"The single biggest problem in communication is the illusion that it has taken place."
-George Bernard Shaw, Irish playwright and Nobel Prize winner, 1856-1950
It's also going to depend on which type of development lifecycle your team is working in.
In the waterfall approach, you can't start testing until the product is delivered to you. In XP, you only stop testing when you're busy working with the customer on new requirements.
If you're asking "how much testing is enough?", that can only be answered by you and your company.
It varies. Some places are content with minimal testing effort before release (they've either never been burnt or were never reprimanded for releasing software too soon). Other places want 100% coverage (never gonna happen, but that's a different topic).
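For what it's worth, if you want to see how far from 100% you really are, a statement-coverage tool at least gives you a number to argue about. A minimal sketch using Python's coverage.py; the module and entry point below are just placeholders for your own code under test:

```python
import coverage

# Measure statement coverage while exercising the code under test.
cov = coverage.Coverage()
cov.start()

import mymodule               # hypothetical module under test
mymodule.run_all_features()   # hypothetical entry point that exercises it

cov.stop()
cov.save()

# report() prints a per-file table and returns the overall percentage.
total = cov.report()
print(f"Overall statement coverage: {total:.1f}%")
```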
It's going to take some trial and error work on your part.
I think this is a good question, and I also think the easiest way to determine when to stop is to set up a quality goal in advance of the actual testing. I like to have exercised each test at least once, and then follow the 80/20 rule: 80% of all reported errors have been fixed, and the remaining 20% are non-critical.
If everyone agrees to this, it makes the testing process considerably easier and reduces what can be a tense phase of the project to simple, unemotional metrics. Either you're at 80% or you aren't.
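To make that metric concrete, here is a rough sketch of the kind of check I mean; the 'fixed' and 'critical' fields are just assumptions about whatever your defect tracker exports:

```python
def met_80_20_goal(defects):
    """Return True if at least 80% of logged defects are fixed and
    every unfixed defect is non-critical. `defects` is a list of dicts
    with 'fixed' (bool) and 'critical' (bool) keys -- placeholder
    fields standing in for your tracker's export format."""
    if not defects:
        return True
    fixed = sum(1 for d in defects if d["fixed"])
    open_criticals = [d for d in defects if not d["fixed"] and d["critical"]]
    return fixed / len(defects) >= 0.80 and not open_criticals


# Example: 4 of 5 defects fixed (80%), the one left open is non-critical.
defects = [
    {"fixed": True,  "critical": True},
    {"fixed": True,  "critical": False},
    {"fixed": True,  "critical": False},
    {"fixed": True,  "critical": True},
    {"fixed": False, "critical": False},
]
print(met_80_20_goal(defects))  # True -- you're at 80%, so you can stop
```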
I'll add my post as a "me too" to Jason and Linda. You do set up the test exit criteria in your test plan before you even start testing, right? Simplistically, something along the lines of "no critical defects" and/or "all tests completed satisfactorily".
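If it helps, those two criteria are simple enough to evaluate mechanically at the end of a test cycle. A sketch, assuming your test-management tool can dump results and open defects as simple records (the field names here are placeholders):

```python
def exit_criteria_met(test_results, open_defects):
    """Simplistic exit-criteria check: every planned test was executed
    and passed, and no critical defects remain open. Field names are
    placeholders for whatever your test-management tool produces."""
    all_tests_passed = all(r["status"] == "passed" for r in test_results)
    no_critical_defects = all(d["severity"] != "critical" for d in open_defects)
    return all_tests_passed and no_critical_defects


test_results = [{"id": "TC-1", "status": "passed"},
                {"id": "TC-2", "status": "passed"}]
open_defects = [{"id": "BUG-7", "severity": "minor"}]
print(exit_criteria_met(test_results, open_defects))  # True -- criteria met
```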