The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.
The process of finding and removing the causes of failures in software.
Supratim covered bebugging pretty well, actually. If you're looking for an example, then:
I have a software application, "A", which I feel is extremely stable. I have a developer intentionally add 100 bugs to the software. I then have QA perform their regular test cycle against the application. As an end result, QA finds 90% of the issues that were inserted into the program.
Based on this controlled experiment, I can assume that when a stable software application is released, we are releasing it with 10% of its bugs still remaining. So if I had an application where we had found and resolved 1000 bugs, I can assume that there were actually about 1111 bugs in the system (1000 / 0.9) and we uncovered 90% of those, or 1000, meaning the software is being released with approximately 111 bugs still remaining.
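The arithmetic above can be sketched as a small function. This is just an illustration of the estimate (the function name and signature are my own, not from any library), and it assumes QA finds seeded and real bugs at the same rate:

```python
def estimate_remaining_bugs(seeded, seeded_found, real_found):
    """Estimate total and remaining real bugs via defect seeding.

    Assumes the detection rate for seeded bugs matches the
    detection rate for real bugs.
    """
    detection_rate = seeded_found / seeded        # e.g. 90 / 100 = 0.9
    total_estimate = real_found / detection_rate  # estimated real bugs ever present
    remaining = total_estimate - real_found       # estimated bugs still shipping
    return round(total_estimate), round(remaining)

# The scenario above: 100 seeded bugs, 90 of them found, 1000 real bugs found.
print(estimate_remaining_bugs(100, 90, 1000))  # → (1111, 111)
```

Note the whole estimate hinges on the seeded bugs being representative of the real ones; seed only easy-to-find bugs and you'll overestimate your detection rate.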
Some companies will also employ additional testing measures to increase the find rate, including cycles of exploratory testing and beta testing.
Bebugging is only a way of estimating how many bugs actually remain in the software at release. Once you have collected a good deal of historical data like this, you can make fairly accurate assumptions about how stable your release candidates actually are.
9 out of 10 people I prove wrong agree that I'm right. The other person is my wife.