Dead code is my nemesis. Okay, that may overstate it a bit, but not by much. I spend a lot of time with teams looking at ways to refactor complicated code. Often they have the sense that a lot of the code isn't necessary, that it is special-case code that was put in for one customer or another years ago. The business has moved on, yet the code remains, stymying us with its inscrutability.
Recently, I've been suggesting that teams put probes into their code to discover whether particular bits of code are ever executed. The probes are simple logging objects that register themselves on startup and are then called when control flows to particular points in the program. At the end of a run, you know whether code was executed at those points. At the end of a month's worth of production runs, you know whether that code was executed at all.
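Here's a minimal sketch of what such a probe might look like in Python. The names (Probe, REGISTRY, report) are mine, not from any library, and a real implementation would log to a file or a metrics service rather than print.

```python
# A minimal sketch of an execution probe. The names here are illustrative,
# not from any library; a real version would write to a log or metrics
# service instead of printing.
import atexit
import datetime

REGISTRY = {}  # probe name -> number of times control reached it

class Probe:
    def __init__(self, name):
        self.name = name
        REGISTRY[name] = 0  # register on startup

    def hit(self):
        REGISTRY[self.name] += 1  # called when control flows past this point

def report():
    # At the end of the run, record which probes were ever reached.
    stamp = datetime.datetime.now().isoformat()
    for name, count in sorted(REGISTRY.items()):
        status = "executed" if count else "never executed"
        print(f"[{stamp}] {name}: {status} ({count} hits)")

atexit.register(report)

# Usage: drop a probe into a suspect special case.
legacy_probe = Probe("billing.legacy_discount_path")

def apply_discount(order):
    if order.get("legacy_customer"):
        legacy_probe.hit()
        # ...the special-case code under suspicion...
    return order
```

Aggregate those end-of-run reports over a month of production, and the probes that stay at zero point you at candidates for removal.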
Of course, this doesn't tell you whether the code is truly dead, but at least you have a good indication of where to look. If you combine this data with tracing that tells you which features are being exercised, you may be able to retire particular features.
One thing I've been wondering about, though, is what it would be like to take this process further. What if we started running coverage reports in production? On its face it seems like an odd idea. Typically, teams run coverage on their tests to get a (loose) sense of how well they are testing. Running coverage in production is rarely considered because it is often expensive. Imagine, though, what we could learn if it became a common practice. For every line of code, you'd have a sense of how often it runs and whether it ever runs. You'd get a new sense of the risk of modifying a particular bit of code, and of the value that code delivers in the application.
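As a rough illustration, here's what a sampled production run could look like using Python's coverage.py. The one-percent sample rate and the data-file path are assumptions on my part, and note that a line-coverage tool like this records whether a line ran, not how often; per-line frequencies would take heavier instrumentation, more like the probes above.

```python
# A sketch of sampled coverage in production, using the coverage.py library.
# The sample rate and data path are assumptions; adjust to taste.
import random
import coverage

SAMPLE_RATE = 0.01  # measure roughly one process in a hundred to limit overhead

cov = None
if random.random() < SAMPLE_RATE:
    # data_suffix=True gives each process its own data file, so files from
    # many runs can be merged later with `coverage combine`.
    cov = coverage.Coverage(data_file="/var/lib/app/.coverage",
                            data_suffix=True)
    cov.start()

def main():
    ...  # the application's normal work

try:
    main()
finally:
    if cov is not None:
        cov.stop()
        cov.save()
```

After a month, running `coverage combine` and then `coverage report` over the accumulated data files would tell you which lines were ever reached in production.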
In the industry, we've incurred runtime overhead at various times for far less useful reasons. It's something to think about.