Defects/KLOC is a good measure, but I think it's even more effective when you can combine it with historical data. Generally speaking, each product release follows a defect-discovery curve. In the first version of the product the curve will be much more pronounced, but even as the product matures you'll see a similar curve, just smaller.
Speaking of historical information: anything worth monitoring becomes less meaningful the less experience and historical data you have with it, so any metric needs time to mature, much like a product.
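For concreteness, here's a minimal Python sketch of tracking defects/KLOC across releases so each new release can be compared against the historical curve. The release names and counts are made-up illustration data, not numbers from this thread.

```python
def defect_density(defects, kloc):
    """Defects per thousand lines of code (defects/KLOC)."""
    return defects / kloc

# Hypothetical history: (release, defects found, size in KLOC).
history = [
    ("1.0", 240, 80),   # first release: density is typically highest
    ("1.1", 150, 90),
    ("2.0", 120, 110),  # the curve repeats, but smaller, as the product matures
]

densities = {rel: defect_density(d, k) for rel, d, k in history}
for rel, dens in densities.items():
    print(f"{rel}: {dens:.2f} defects/KLOC")
```

With a few releases' worth of these numbers you have a baseline curve, and a new release whose density sits well above it is a signal worth investigating.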
Brent reminded me of a couple of practices I've started using over the years.
The first one refers to measurements within the project and is reflected in the release criteria.
The second one is related to reviewing and benchmarking the development and testing processes, between versions and between teams, based on rejected and reopened defects, something we usually tend to disregard.
Thank you, Inder, for the link.
Brent, I can use defects/KLOC and check it against that curve to see whether I'm doing OK, right?
I want to build a scorecard based on those indicators.
For a general indicator, or a small group of 2-3 indicators, that reflects the whole testing process, based on the indicators from the link, what would you suggest?