I'm not quite sure why, but I've been hearing a lot of code coverage horror stories recently. It's great that people are discovering that automated testing is good, but you can't measure quality with coverage. Yes, coverage does give you a measure of effort. It does, after all, take effort to get coverage numbers up, but sadly it's possible to get a lot of coverage that doesn't do you much good at all.
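To make that concrete, here's a minimal sketch in Python (the function and test are hypothetical) of how a test can push coverage to 100% without verifying anything:

```python
# A hypothetical function under test.
def apply_discount(price, rate):
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

# This "test" executes every line and branch above -- full coverage --
# but it asserts nothing, so it passes even if the math is wrong.
def test_apply_discount():
    apply_discount(100, 0.2)
    try:
        apply_discount(100, 2)
    except ValueError:
        pass
```

A coverage tool reports this module as fully covered, and yet the test could never catch a bug.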
Brian Marick wrote a wonderful paper about code coverage misuse (pdf). I won't repeat his points here, but the one thing that I'll add is that coverage is a particularly lousy measure of a quality effort because ultimately you want the team to pay attention to quality, not numbers. The numbers might tell you something about quality, but it's dangerous when they start to become a goal rather than the means to a goal. It's hard to hit one target when, in reality, you're aiming for another one.
I used to call this sort of thing a surrogate goal until Tim Ottinger came up with a better name: prosthetic goal. A prosthetic goal is something that you hold up as the goal when you want people to achieve some other goal that you can't measure easily. Want employees to produce more value? Add an incentive program. Want people to work with quality? Adopt ISO9000.
Sometimes prosthetic goals and your ultimate goals dovetail nicely, but at other times they don't. There's a gap. You can spend so much time working on the prosthetic goal that you lose on other fronts.
Want to improve quality? Then measure quality. Measure bug count. Or better yet, measure delays that are due to bugs and try to reduce them. Time is money after all, and that's money down the drain.
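If you want a sketch of what measuring that could look like, here's one way, assuming a hypothetical export of reported and fixed dates from your bug tracker:

```python
from datetime import date

# Hypothetical export from a bug tracker: (id, reported, fixed).
bugs = [
    ("BUG-101", date(2024, 3, 1), date(2024, 3, 4)),
    ("BUG-102", date(2024, 3, 2), date(2024, 3, 12)),
    ("BUG-103", date(2024, 3, 5), date(2024, 3, 6)),
]

# Days lost between each bug being reported and being fixed.
delays = [(fixed - reported).days for _, reported, fixed in bugs]
print(f"total days lost to bugs: {sum(delays)}")
print(f"average delay per bug:   {sum(delays) / len(delays):.1f} days")
```

Track that number over time and try to push it down; unlike coverage, it points directly at the thing you actually care about.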
Speaking of wasted time... I'm hoping that I run into a team someday that measures average build time on developers' machines. How long does it take for each developer to get feedback whenever they hit the compile button? There are many teams that could work a lot faster if it didn't take them so long to compile.
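For a team that wants to try it, here's a minimal sketch that wraps the build and records wall-clock time for later averaging (the build command "make" and the log file name are assumptions; substitute whatever your team uses):

```python
import subprocess
import time

# Run the team's build command and append the elapsed time to a log.
# "make" and "build_times.log" are assumptions -- substitute your own.
def timed_build(command=("make",), log_path="build_times.log"):
    start = time.monotonic()
    result = subprocess.run(command)
    elapsed = time.monotonic() - start
    with open(log_path, "a") as log:
        log.write(f"{elapsed:.1f}\n")
    print(f"build took {elapsed:.1f}s (exit code {result.returncode})")
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(timed_build())
```

Collect those logs from a few machines for a week and you'll know exactly how much time the compile loop is eating.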