I would guess they are looking for defects per thousand lines of code. That's the usual one. But I've never found it to be very useful. Especially with the tools available for development now. For instance, our web app has maybe 2000 lines of code total. If I used defects/kloc, the quality index would be incredibly low (and it's not that bad).
I don't like the per-KLOC metrics either. I think they mislead people into believing that lines of code somehow relate to program functionality.
Having programmed for something like 14 years now, I've learned otherwise. Even looking back just 2 years, I can see code I wrote that I could now write in half the LOC, or even less. Going back 5 years, I did some quick scanning of old code and realized one of my 400 KLOC projects could have been done in about 90 KLOC.
I agree, then: should simply reducing the code size lower the quality score?
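To make that concrete, here's a toy sketch (all numbers invented) of how a defect-density metric punishes a refactor that removes lines without changing functionality or fixing any bugs:

```python
def defects_per_kloc(defects, loc):
    """Defects per thousand lines of code."""
    return defects / (loc / 1000)

# Same program, same 12 known defects, before and after a
# refactor that halves the line count.
before = defects_per_kloc(defects=12, loc=4000)  # 3.0
after = defects_per_kloc(defects=12, loc=2000)   # 6.0

# The "quality index" doubles in the wrong direction, even though
# the program does exactly the same thing with the same defects.
print(before, after)
```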
Could the metrics you refer to relate to time spent testing and retesting bugs? As part of our quality (ISO 9001) procedures, we are required to estimate the time spent pinpointing, testing, and retesting particular bug fixes. Typically, low-priority bugs/fixes take about 10-15 minutes to test, while higher-priority ones approach 45-60 minutes. Yet another effort by management to make sure we are all working hard!
I have to agree with all that has been said about the defects-per-lines-of-code measure.
Maybe that measure was appropriate for older programs, where most features had to be programmed from scratch. But now that a single line can implicitly pull in thousands of lines of library code, I think the measure is no longer appropriate.
I also believe that defects per function point could be a better measure, since the complexity of a piece of software seems to depend directly on its number of function points.
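A quick sketch of why that normalization behaves better (the function-point and defect counts here are invented, not from any real project): two implementations of the same spec, one verbose and one concise, score very differently per KLOC but identically per function point.

```python
def defects_per_kloc(defects, loc):
    return defects / (loc / 1000)

def defects_per_fp(defects, function_points):
    return defects / function_points

# Same functionality (say, 100 function points) and the same
# 10 defects, implemented at very different verbosity levels.
verbose = {"defects": 10, "loc": 10_000, "fp": 100}
concise = {"defects": 10, "loc": 2_000, "fp": 100}

for impl in (verbose, concise):
    print(defects_per_kloc(impl["defects"], impl["loc"]),  # 1.0 vs 5.0
          defects_per_fp(impl["defects"], impl["fp"]))     # 0.1 both times
```

Because function points are counted from delivered functionality rather than from text volume, the concise implementation is no longer penalized for being concise.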