When is a program too complex?

Everyone seems to agree that programs shouldn’t be too complex, but for technical debt purposes a specific complexity limit needs to be set. Sonar sets the limit at 60 McCabe Cyclomatic Complexity points. The Software Engineering Institute sets it at 50, and considers programs more complex than that to be virtually untestable.
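To make those numbers concrete: McCabe complexity is essentially a count of the independent paths through a method. Here’s a toy Java illustration of how the counting works; the method and its score are mine for illustration, not output from Sonar or the SEI, and tools differ slightly in exactly what they count.

```java
class Fare {
    // One common counting rule: complexity = 1 + one point per
    // decision (if, loop, case, boolean operator).
    // This method scores 4: base 1, plus the if, the &&, and the for.
    String classify(int age, boolean member) {
        String label = "regular";
        if (age >= 65 && member) {       // if +1, && +1
            label = "senior";
        }
        for (int i = 0; i < 3; i++) {    // for +1
            System.out.println(label);
        }
        return label;
    }
}
```

A method at the SEI’s limit of 50 has roughly fifty such decision points tangled together, which is why testing every path becomes impractical.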

In my experience, complexity correlates with defects. One logistic regression model I built with actual program failure data shows a near-100% chance of failure once complexity climbs high enough. Troster (1992) and Craddock (1987) report the same relationship.
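For readers who haven’t seen one, here’s a minimal sketch of what such a logistic model looks like in code. The coefficients below are made-up placeholders chosen to show the shape of the curve, not the fitted values from my model:

```java
class FailureModel {
    // Logistic model: p(failure) = 1 / (1 + e^-(b0 + b1*cc)).
    // Coefficients are illustrative placeholders, not fitted values.
    static final double B0 = -4.0;   // intercept (hypothetical)
    static final double B1 = 0.08;   // weight per complexity point (hypothetical)

    static double failureProbability(int cyclomaticComplexity) {
        double z = B0 + B1 * cyclomaticComplexity;
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        // Probability rises steeply with complexity and saturates near 1.
        for (int cc : new int[] {10, 50, 100, 150}) {
            System.out.printf("cc=%3d  p(failure)=%.2f%n",
                    cc, failureProbability(cc));
        }
    }
}
```

The important feature is the shape: past some threshold the curve flattens out near certainty, which matches what the failure data showed.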

Complexity is a coding metric, but it can also serve as a proxy for design issues. In fact, it’s really the only design metric available in the C# Sonar plugin, so tracking complexity in that environment serves both to reduce defects and to improve design quality. In a Sonar Java environment there are additional design metrics, like cohesion and coupling, that help target which complex programs should be re-factored first.

Unfortunately, complex programs can easily become a large part of the code base if they’re not monitored from the beginning. A large program attracts other code like flies as the inevitable one-line changes pile up. The maintainer doesn’t really know what the program does; they just find a safe place to insert the method or line, run a happy-path test, and declare victory, tip-toeing away from the class before it blows up.

In one new system, about a year old and a million lines of code, only 3% of the classes exceed the 50-cc limit. That doesn’t seem so bad until you realize that 3% represents 34% of the code base by LOC.

It’s hard to remediate that kind of debt. Programmers resist re-factoring a class they don’t understand, especially for a one-line change. But you have to start somewhere. I usually run the cohesion metric, LCOM4, and see if there’s a natural break in the program structure. Eclipse’s extract-method re-factoring helps here. Train developers in safe re-factoring; there are good resources out there on the subject.
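To illustrate what a “natural break” means: LCOM4 counts the connected components in the graph linking methods to the fields and methods they touch. A hypothetical class with LCOM4 = 2 looks like this, and splits cleanly into two smaller classes:

```java
// LCOM4 = 2: the billing methods and the shipping methods share no
// state, so this (made-up) class divides naturally into a Billing
// class and a Shipping class.
class Order {
    private double amount;     // touched only by the billing methods
    private String address;    // touched only by the shipping methods

    double applyTax()      { return amount * 1.07; }
    double applyDiscount() { return amount * 0.95; }

    String normalizeAddress() { return address.trim().toUpperCase(); }
    boolean isLocal()         { return address.endsWith("IL"); }
}
```

When the metric shows a break like that, extract-method and extract-class re-factorings have an obvious seam to work along.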

Moral of the story for class complexity? Like voting in Chicago, track it early and often. Re-factor and fix it as you go.

How much does zero technical debt cost?

A question I’m often asked is how much extra it costs to operate with no debt. What’s usually meant is: how much extra effort is required to have 100% unit test branch and line coverage? There’s the effort of managing the extra code the unit tests add; on one 2 million LOC project with 37,000 unit tests, about 35% of the commit activity is for unit tests. There’s effort in maintaining the tests and re-factoring or changing them when an API changes. Sometimes the tests are brittle, or one test changes global static state and causes an unrelated test to mysteriously fail. So there’s real work involved.
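Here’s a made-up JUnit example of that global-static trap. The first test mutates shared static state and never restores it, so the second test’s result depends on execution order:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical shared configuration with mutable static state.
class PriceConfig {
    static double taxRate = 0.05;
}

class BrittleTests {
    @Test
    void seniorRateAppliesHigherTax() {
        PriceConfig.taxRate = 0.10;   // mutated, never reset
        assertEquals(110.0, 100 * (1 + PriceConfig.taxRate), 0.001);
    }

    @Test
    void defaultPriceIncludesTax() {
        // Fails mysteriously whenever the test above runs first.
        assertEquals(105.0, 100 * (1 + PriceConfig.taxRate), 0.001);
    }
}
```

Tracking down that kind of order-dependent failure is part of the maintenance cost being asked about.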

Studies at three major companies show that the extra effort is around 20%. All three arrived at that figure independently of each other, so that’s the number I use. It jibes with my own experience as well.

Of course, one can always say the extra testing costs nothing, because you recover the cost through defect reduction. True enough, but most project managers are responsible for estimating specific tasks and projects, which don’t usually include (or track) defect remediation. They need to estimate code and test effort, so it’s fair to count the testing burden.