A classic interview question. There are many answers to it, but the trendy one is:
"when the cost of testing outweighs the risk of failure of the untested requirements." If you interpret the question as how much testing you should do.
I think you need to be pragmatic depending on the information that you have at your disposal. If you work in an environment with excellent requirements to test traceability you can look at the above approach. If you don't (as most of us don't) work in such an environment then it becomes more a judgement call as to when you stop testing.
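As a rough illustration of the cost-versus-risk criterion, here is a minimal sketch in Python; the expected-loss model and all the figures are invented for the example, not taken from any real project.

# Toy model of "stop when the cost of testing outweighs the risk of failure".
# Every number here is hypothetical; a real project would estimate them per requirement.
untested_requirements = [
    # (name, probability the requirement is broken, business cost if it fails in production)
    ("password reset", 0.10, 20_000),
    ("report export", 0.05, 5_000),
    ("audit logging", 0.02, 2_000),
]
cost_of_further_testing = 3_000  # e.g. another week of test effort

# Expected loss from shipping the remaining requirements untested.
risk_of_failure = sum(p * impact for _, p, impact in untested_requirements)

print(f"Risk of failure: {risk_of_failure:.0f}, cost of testing: {cost_of_further_testing}")
if cost_of_further_testing > risk_of_failure:
    print("Stop testing: further testing costs more than the risk it removes.")
else:
    print("Keep testing: the remaining risk still justifies the effort.")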
The other angle to this question is when you stop testing because the quality of the code is unacceptable. In this scenario you should have defined, in your test plans, your acceptance (exit) criteria for each test phase. These should be based on the effort involved in testing, the timeline for your testing, and the levels of defects (of varying classifications) you are willing to accept.
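To make those exit criteria concrete, here is a minimal sketch, assuming the criteria boil down to a maximum number of open defects per severity; the thresholds and defect counts are made up for the example.

# Hypothetical exit criteria for one test phase: maximum open defects allowed per severity.
exit_criteria = {"critical": 0, "major": 2, "minor": 10}

# Open defect counts at the end of the phase (invented figures).
open_defects = {"critical": 0, "major": 1, "minor": 14}

failed = {severity: count for severity, count in open_defects.items()
          if count > exit_criteria.get(severity, 0)}

if failed:
    print("Exit criteria not met for this phase:", failed)   # e.g. {'minor': 14}
else:
    print("Exit criteria met; testing for this phase can stop.")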
A vague answer, I know, but it is a vague question. If you want more, ask some more specific questions.
The trendy answer was good indeed. The practical answer goes something like this:
Testing stops when all critical and major bugs are closed (or deferred to the next release), or good workarounds have been agreed upon, and the application is ready to deliver the business functionality with a reasonable level of confidence. Practical wisdom has also seen defect severities downgraded so that no major or critical defect remains open.
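A minimal sketch of that stopping rule, assuming each defect is a simple record with a severity and a status (the data is invented for the example):

# Stop when every critical or major defect is closed, deferred, or has an agreed workaround.
defects = [
    {"id": 101, "severity": "critical", "status": "closed"},
    {"id": 102, "severity": "major", "status": "deferred"},
    {"id": 103, "severity": "major", "status": "open"},    # blocks release
    {"id": 104, "severity": "minor", "status": "open"},    # does not block release
]

RESOLVED = {"closed", "deferred", "workaround agreed"}

blocking = [d for d in defects
            if d["severity"] in {"critical", "major"} and d["status"] not in RESOLVED]

if blocking:
    print("Keep testing and fixing; blocking defects:", [d["id"] for d in blocking])
else:
    print("Stop testing: all critical and major defects are closed or deferred.")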
When we run out of testing time, we usually fall back on the 80/20 rule: if the majority of the application is working, the client is notified of the outstanding issues, and we indicate that they will be resolved in the next iteration of the application.
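A rough sketch of that rule of thumb, reading "majority working" as at least 80% of the planned tests passing; the counts are invented for the example.

# Hypothetical test results at the time-box deadline.
tests_planned = 250
tests_passed = 212

pass_rate = tests_passed / tests_planned
print(f"Pass rate: {pass_rate:.0%}")   # 85%

if pass_rate >= 0.80:
    print("Ship with known issues; notify the client and fix them in the next iteration.")
else:
    print("Too much is still failing; negotiate more testing time before release.")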