The biggest indicators all seem in some way to link back to high Kolmogorov complexity. The basic idea is that the code is more complicated than it needs to be for the task at hand. Sometimes tasks require complicated code; sometimes your given task is to optimize in a way that requires complicated constructs. But in general, one should not make code more complicated than it needs to be.
Some of the indicators I've been able to come up with off the top of my head:
- 1) Lack of data hiding
Any halfway complicated software module is going to have some internal state that it manages, and that nobody else has any business touching. Bad code often has cases where unrelated modules directly mess with data belonging to some other module.
Data hiding takes many forms, and can be enforced by tools or by simple willpower. Whether through classes, opaque structure types, local variables, file-scope data, or the intentional exclusion of header files, there are many ways to hide and protect data. Bad code typically ignores those protections.
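To make that concrete, here's a minimal C sketch (the names are mine, purely illustrative) of the opaque-structure-type flavour of data hiding: the public header only forward-declares the type, so callers can hold a pointer to it but can't reach inside.

```c
/* counter.h -- public view: an opaque type and a few functions.
 * Callers can hold a counter_t pointer, but cannot touch its fields. */
typedef struct counter counter_t;

counter_t *counter_create(void);
void       counter_add(counter_t *c, int amount);
int        counter_value(const counter_t *c);
void       counter_destroy(counter_t *c);

/* counter.c -- the only place where the structure layout is visible. */
#include <stdlib.h>

struct counter {
    int value;   /* private state: nobody else has any business touching it */
};

counter_t *counter_create(void)            { return calloc(1, sizeof(counter_t)); }
void counter_add(counter_t *c, int amount) { c->value += amount; }
int  counter_value(const counter_t *c)     { return c->value; }
void counter_destroy(counter_t *c)         { free(c); }
```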
- 2) Lack of chokepoint interfaces
Chokepointing is a crucial part of managing complexity. It's very similar to an object-oriented interface, but I think that calling it a chokepoint better conveys what's happening: at the chokepoint, the complexity that crosses the boundary is reduced to the bare minimum required for communication between the two sides.
In short, by chokepointing complexity at well-defined locations, you can more easily manage global complexity. Bad code typically doesn't have good chokepointing.
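As a rough illustration (hypothetical subsystem and function names), a chokepoint might be nothing more than one small header that is the only way the rest of the system talks to a subsystem; everything behind it can change freely without touching callers.

```c
/* storage.h -- the chokepoint: the ONLY interface the rest of the
 * system is allowed to use to talk to the storage subsystem.
 * Caching, file formats, and locking all live behind this boundary. */
#include <stddef.h>

int storage_put(const char *key, const void *data, size_t len);
int storage_get(const char *key, void *buf, size_t buflen, size_t *outlen);
int storage_delete(const char *key);
```

The point isn't these particular functions; it's that the complexity crossing the boundary is kept to the minimum the two sides need in order to communicate.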
- 3) Logic holes and unhandled cases
Few things increase complexity faster than unhandled or fall-through logic. Consider a well-chokepointed module with a single commonly used interface that has a fall-through condition: every external module that uses that interface can potentially get a different output for exactly the same input.
While this would seem an obvious bug to be fixed, keep in mind that computers are deterministic: each module on the system may always get the same, consistent result for the inputs it happens to pass. It may even be that all of them get a usable result, except for an obscure usage in some minor library.
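Here's a contrived C sketch of the kind of hole I mean: a dispatch function whose default case silently falls through. Every caller that hits the unhandled case deterministically gets the same wrong-but-plausible answer, which is exactly why it can hide for so long.

```c
#include <stdio.h>

enum op { OP_ADD, OP_SUB, OP_MUL };

/* Buggy version: an unhandled op silently "succeeds" with result 0.
 * Each caller always gets the same usable-looking value, so the hole
 * can survive for a long time before anyone notices. */
int eval_buggy(enum op op, int a, int b)
{
    switch (op) {
    case OP_ADD: return a + b;
    case OP_SUB: return a - b;
    default:     return 0;          /* logic hole: OP_MUL falls through here */
    }
}

/* Safer version: the unhandled case is made loud and explicit. */
int eval(enum op op, int a, int b, int *result)
{
    switch (op) {
    case OP_ADD: *result = a + b; return 0;
    case OP_SUB: *result = a - b; return 0;
    case OP_MUL: *result = a * b; return 0;
    }
    fprintf(stderr, "eval: unhandled op %d\n", (int)op);
    return -1;                      /* the caller is forced to notice */
}
```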
- 4) Presence of any unexpected behaviour in the output
If I've learned anything over the years, it's that any unexpected output needs to be examined and understood. Far more often than not, unexpected output is an indicator of #3, logic holes and unhandled cases.
- 5) Bad customer interfaces
If the publicly exported interface to a paying customer is crap, odds are good the code inside is crap. Customer interfaces should be simple, powerful, and obvious. They require attention to detail, and if that attention isn't present, it's probably not present in the code behind the interface either.
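As a purely hypothetical contrast (neither of these is taken from a real product), compare the kind of interface a customer has to reverse-engineer with one they can use on sight:

```c
/* Hard to use correctly: magic modes and flags, unclear ownership,
 * unclear units, a reserved parameter nobody can explain. */
int xfer(void *h, int mode, int flags2, char *p, long l, int tmo, void *rsvd);

/* Simple, powerful, obvious: a caller can guess what this does and
 * what the parameters mean without reading the implementation. */
int file_send(const char *path, const char *destination_url,
              unsigned timeout_ms);
```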
- 6) Too little, or too much unfilterable debug logging
The purpose of debug logging is to give visibility into the system for debugging and analysis. If there is too little debug output, odds are good that failures are simply being ignored. If there is too much and no good way to filter it for the important messages, odds are good the programmer missed something.
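One common way to keep debug output both present and filterable (a sketch, not any particular project's logging API) is to tag every message with a severity and a subsystem, and apply a runtime threshold:

```c
#include <stdarg.h>
#include <stdio.h>

enum log_level { LOG_ERROR, LOG_WARN, LOG_INFO, LOG_DEBUG };

/* Runtime-adjustable threshold: crank it up when debugging,
 * keep it low in production so important messages stand out. */
static enum log_level g_log_threshold = LOG_WARN;

void log_msg(enum log_level level, const char *subsystem, const char *fmt, ...)
{
    if (level > g_log_threshold)
        return;                     /* filtered out, costs almost nothing */

    static const char *names[] = { "ERROR", "WARN", "INFO", "DEBUG" };
    va_list ap;

    fprintf(stderr, "[%s] %s: ", names[level], subsystem);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fputc('\n', stderr);
}

/* Example: log_msg(LOG_DEBUG, "storage", "wrote %zu bytes", n); */
```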
- 7) Lack of sanity checks
Sanity checks directly increase complexity, because they generally change the actual behaviour of the program when they trigger (unlike debug logging, which simply reports state without modifying it). However, they are cheap, fast, and very, very effective. The complexity they add typically buys the system an extra layer of reliability.
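A small sketch of the difference (hypothetical queue code): unlike a log line, the check below changes the program's behaviour when it fires, refusing to continue with state it knows is corrupt.

```c
#include <stdio.h>
#include <stdlib.h>

struct queue {
    size_t head, tail, capacity;
    int *items;
};

/* Sanity check: cheap and fast, but when it triggers it changes the
 * program's behaviour -- it refuses to proceed rather than quietly
 * reading or writing through corrupt indices. */
static void queue_check(const struct queue *q)
{
    if (q->items == NULL || q->head >= q->capacity || q->tail >= q->capacity) {
        fprintf(stderr, "queue_check: corrupt queue (head=%zu tail=%zu cap=%zu)\n",
                q->head, q->tail, q->capacity);
        abort();                    /* fail loudly rather than limp along */
    }
}

int queue_pop(struct queue *q)
{
    queue_check(q);                 /* validate invariants before touching state */
    int value = q->items[q->head];
    q->head = (q->head + 1) % q->capacity;
    return value;
}
```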