
Complexity Hiding and Transfer in "Maintainability Optimization"

When complexity is simply moved from large functions to the class hierarchy, configuration, and call chains, the system is usually not more maintainable.

For many teams, the first step of “maintainability optimization” is to take the code apart.

A 300-line function is split into 12 classes; a heavily branched process is rewritten as “strategy + factory + configuration”; logic that could be understood by following the call chain is turned into events, subscribers, rule tables, and several clean-looking directories.

The code is indeed less crowded, and a single file is shorter. When reviewing, it even gives people a feeling of “this is advanced”.

But my judgment is: **many so-called maintainability optimizations do not reduce complexity; they only turn it from partially visible into scattered, jumpy, and hidden.** The most common result of this kind of change is that problems are harder to locate, changes are harder to assess, and newcomers find the system harder to understand.

The core of maintainability has always been: **when requirements change, production errors occur, or boundary conditions surface, can the team quickly see the real constraints and make safe modifications within a limited scope?**

Complexity does not disappear when you split things; it just settles somewhere else.

A common failure mode is that the intuition about “maintainability” is too visual.

Engineers feel uncomfortable seeing big functions, feel backward seeing long runs of if/else, and instinctively want to tear apart a class stuffed with multiple business judgments. So complex logic gets split into many thin files, conditional branches get translated into object hierarchies, business rules get moved into configuration, a few interfaces and better names get added, and the code surface immediately looks cleaner.

The problem is that although the original 300 lines are ugly, the complexity is at least laid out in the open. Reading top to bottom, you can see how branch conditions, shared state, exception handling, and final results connect.

Once the complexity is broken down, the situation changes:

  • You need to jump through 7 files along the call chain to find where a field was finally changed;
  • You need to understand the interface, the implementation class, the registration logic, and the runtime assembly at the same time just to confirm which branch you are on;
  • The business rules appear to be in the code, but half of them actually live in YAML, half in the database, and half in a mapping table generated at startup.

The complexity has not decreased; it has merely changed from “a little tiring to read” to “much slower to locate problems in.”

The maintenance cost usually comes due three months later, when someone changes the wrong logic, breaks production, and has to troubleshoot the whole call chain.

The most common team misjudgment is mistaking “local cleanliness” for “overall maintainability”

This type of misjudgment is common because many refactoring benefits appear to be real in the short term.

For example, a function that contains multiple order processing branches can be changed to the following structure:

```java
Handler h = handlerFactory.get(order.type());
h.validate(order);
h.price(order);
h.persist(order);
h.notify(order);
```

This code certainly looks cleaner than a long list of branches.

But the real question is:

  1. How does handlerFactory decide which implementation to use?
  2. Are there shared preconditions among validate/price/persist/notify?
  3. Is behavioral drift allowed between different implementations?
  4. When a requirement changes, does the change land in one place, four places, or a dozen places?

If these problems are not constrained, then this kind of “elegant structure” often just rewrites the business differences that were originally explicitly written in if/else into implicit differences scattered in the class hierarchy.
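One hedged way to make such constraints explicit is a template method that pins the ordering and the shared precondition in one place. This is a minimal sketch with hypothetical names (`OrderHandler`, `notifyParty`), not the only possible design:

```java
import java.util.ArrayList;
import java.util.List;

public class HandlerSketch {
    record Order(String type) {}

    // The backbone is final: implementations vary the steps, not the contract.
    static abstract class OrderHandler {
        final void handle(Order order) {
            if (order == null || order.type() == null)
                throw new IllegalArgumentException("order must carry a type"); // shared precondition, stated once
            validate(order);    // may throw; nothing below runs on failure
            price(order);       // must not write state
            persist(order);     // the only step allowed to write state
            notifyParty(order); // side effects last, after a successful write
        }
        abstract void validate(Order order);
        abstract void price(Order order);
        abstract void persist(Order order);
        abstract void notifyParty(Order order);
    }

    // A recording implementation, used here only to demonstrate the fixed ordering.
    static class RecordingHandler extends OrderHandler {
        final List<String> steps = new ArrayList<>();
        void validate(Order o)    { steps.add("validate"); }
        void price(Order o)       { steps.add("price"); }
        void persist(Order o)     { steps.add("persist"); }
        void notifyParty(Order o) { steps.add("notify"); }
    }
}
```

The point is not the pattern itself but that the inter-step contract, which the factory version leaves implicit, is written down exactly once.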

From a review perspective, it becomes cleaner; from a maintenance perspective, it becomes more context-dependent.

**Maintainability is about whether the whole system makes it easier to answer: “where will this change have effects?”**

What really determines maintenance cost usually comes down to four things

I prefer to use the following four questions to judge whether a refactoring makes the system more maintainable.

1. When the problem occurs, is the location path shorter?

When production reports that “certain types of orders occasionally issue duplicate coupons,” what matters most is whether an engineer can quickly find where the judgment condition lives, where the idempotency guard lives, and where the side effect is triggered.

If, after splitting, the troubleshooting path changes from “read one function” to “read the interface definition, find the implementation class, check the assembly, trace the events, and dig through the configuration”, then the maintenance cost has actually gone up.
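As a contrast, here is a hedged sketch (hypothetical names and eligibility rule) of what a short locating path looks like: the condition, the idempotency guard, and the side effect sit in one function, so the on-call engineer has one place to read:

```java
import java.util.HashSet;
import java.util.Set;

public class CouponIssuer {
    private final Set<String> issuedOrders = new HashSet<>(); // idempotency guard, visible right here

    // Returns true only when a coupon is actually issued.
    public boolean issue(String orderId, String orderType) {
        if (!"vip".equals(orderType)) return false;   // eligibility condition
        if (!issuedOrders.add(orderId)) return false; // already issued: duplicate suppressed
        sendCoupon(orderId);                          // the side effect, triggered in one place
        return true;
    }

    private void sendCoupon(String orderId) {
        // placeholder for the real side effect (notification, ledger write, etc.)
    }
}
```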

2. When requirements change, is the modification scope more convergent?

Good abstraction keeps changes focused. Bad abstraction lets changes spread.

The worst kind of refactoring splits the logic into multiple “responsibilities” on the surface, yet every requirement change still has to touch the rule definition, factory registration, default configuration, test fixtures, and monitoring points at the same time. The files have become smaller, but the blast radius of a change has become larger.

This kind of system looks modular, but is actually more brittle, because every time you make a change, you have to bet that you haven’t missed any corner.
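One hedged way to keep the change area convergent (hypothetical names and rules): put everything that varies per order type on one row of one table, so adding a type is a single edit rather than five scattered ones:

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class PricingTable {
    // One row per order type: the entire business difference lives here.
    static final Map<String, UnaryOperator<Integer>> PRICE_RULES = Map.of(
        "normal", base -> base,            // full price
        "vip",    base -> base * 80 / 100, // 20% off
        "gift",   base -> 0                // free
    );

    static int price(String type, int basePrice) {
        UnaryOperator<Integer> rule = PRICE_RULES.get(type);
        if (rule == null) throw new IllegalArgumentException("unknown order type: " + type);
        return rule.apply(basePrice);
    }
}
```

A real system would likely need more than a lambda per type, but the property worth preserving is the same: one declaration site per axis of variation.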

3. Are constraints becoming more visible rather than more hidden?

The reason a lot of business logic is hard is not that the code looks ugly on the surface, but that it is dense with constraints and preconditions:

  • This state can only go from A to B, not directly to C;
  • This field can only be modified by certain types of customers;
  • This action must succeed together with another side effect.

If, after refactoring, these constraints no longer appear in one place but are scattered across multiple classes, annotations, configurations, or listeners, then there is a real risk that they are simply forgotten.
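A constraint like “A can go to B but never straight to C” stays visible when every legal transition lives in one table. A minimal sketch with hypothetical order states:

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class TransitionTable {
    enum State { CREATED, PAID, SHIPPED, CANCELLED }

    // Every legal transition, in one place; nothing elsewhere may move state.
    static final Map<State, Set<State>> ALLOWED = Map.<State, Set<State>>of(
        State.CREATED,   EnumSet.of(State.PAID, State.CANCELLED),
        State.PAID,      EnumSet.of(State.SHIPPED),
        State.SHIPPED,   EnumSet.noneOf(State.class),
        State.CANCELLED, EnumSet.noneOf(State.class)
    );

    static boolean allowed(State from, State to) {
        return ALLOWED.getOrDefault(from, Set.of()).contains(to);
    }
}
```

Anyone asking “can a created order ship directly?” reads one table instead of grepping listeners.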

4. Is the test feedback closer to real behavior?

Many “maintainability optimizations” conveniently produce a pile of unit tests that are easy to write, because each class is smaller and its dependencies are mocked away.

However, a larger number of unit tests does not mean the system is easier to change safely.

If the tests can only prove “this class returns the expected value in a mocked world” but cannot cover the assembly relationships, shared state, and timing constraints of the real process, then they protect the structure more than the behavior.
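One hedged alternative to mock-heavy unit tests (hypothetical names throughout): a check that asserts a property of the real wiring, for instance that every declared order type actually has a handler registered:

```java
import java.util.Map;
import java.util.Set;
import java.util.function.UnaryOperator;

public class WiringCheck {
    // The types the business declares it supports.
    static final Set<String> DECLARED_TYPES = Set.of("normal", "vip", "gift");

    // The real handler registry the process would run with, not a mock.
    static final Map<String, UnaryOperator<String>> HANDLERS = Map.of(
        "normal", o -> o + ":priced",
        "vip",    o -> o + ":discounted",
        "gift",   o -> o + ":free"
    );

    // An assertion about the assembly itself: nothing declared is left unwired.
    static boolean fullyWired() {
        return HANDLERS.keySet().containsAll(DECLARED_TYPES);
    }
}
```

A test like this fails exactly in the scenario the mocked tests miss: the class works, but the assembly forgot it.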

A common misunderstanding: rewriting business differences into the type system just to eliminate if/else

Of course if/else can be poorly written, but “eliminating if/else” is not a goal in itself.

I have seen many systems that originally had only two or three clear branches with very stable business semantics, yet in pursuit of the “correct” design pattern were split into strategy interfaces, abstract base classes, registries, and extension points. Half a year later the number of types had grown from 3 to 9, and it became increasingly hard for callers to tell which differences were real business differences and which were just structural residue from historical evolution.

In many cases, having many branches does not mean an object model is required; it only means there is business judgment here. The first task is to distinguish which judgments are stable axes of change and which are just conditional forks within the same process.

If it is just a few conditional judgments in a process, then forcing them to be “object-oriented” will probably just rewrite the conditions that can be seen at a glance into several layers of method dispatch.
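A sketch of the point, with hypothetical fee rules: three forks in one process, kept as conditions, are visible on one screen; the polymorphic version would spread the same three lines across three classes:

```java
public class ShippingFee {
    // All the business differences sit on three adjacent lines.
    static int fee(String region, int weightKg) {
        if (weightKg > 20) return 50;             // heavy freight: flat rate
        if ("domestic".equals(region)) return 10; // domestic standard
        return 30;                                // international default
    }
}
```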

**Hiding conditions behind polymorphism does not make them disappear; it only delays the moment the reader discovers they exist.**

Another common misunderstanding: treating configuration as a complexity recycle bin

Another approach that is especially easy to mistake for “more maintainable” is pushing business rules into configuration wherever possible.

The stated reasons usually sound good: no code changes will be needed later, operations can configure things themselves, extension becomes more flexible.

But configuration is not inherently cheaper; it just moves complexity from compile time to runtime.

Once a rule configuration starts taking on too much responsibility, these problems can quickly arise:

  • Configurations have priority and override relationships among them, but no single place in the system shows the full picture;
  • Which scenarios a change affects can only be verified in production;
  • A legal configuration value does not guarantee correct semantics; errors only surface at runtime;
  • Code review degenerates into “I can’t tell what this JSON means.”

If a rule changes frequently, but the change still requires engineering judgment, linkage testing, and rollback plans, then it is essentially a code problem and will not suddenly become a low-cost maintenance item just because it is written into the configuration.
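If rules must live in configuration, one mitigation is to fail fast at load time rather than letting a “legal but wrong” value surface in production. A minimal sketch with a hypothetical rule shape:

```java
import java.util.List;

public class ConfigCheck {
    // Hypothetical shape of a loaded discount rule.
    record Rule(String name, int priority, int percentOff) {}

    // Reject semantically wrong configurations at startup, not at runtime.
    static void validate(List<Rule> rules) {
        for (Rule r : rules) {
            if (r.percentOff() < 0 || r.percentOff() > 100)
                throw new IllegalStateException(r.name() + ": discount out of range");
        }
        long distinctPriorities = rules.stream().map(Rule::priority).distinct().count();
        if (distinctPriorities != rules.size())
            throw new IllegalStateException("two rules share a priority; override order is ambiguous");
    }
}
```

This does not make configuration cheap; it only moves part of the runtime risk back to a moment when someone is still watching.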

A common cost of over-configuration is that “no one dares to touch the system anymore.”

Counterexample: some abstractions really do make the system more maintainable

This is not to say “never abstract, never split.”

There are situations where abstraction is not only worthwhile, but necessary.

For example:

  • It faces stable, clear axes of change, such as different storage backends, different payment channels, or different serialization protocols;
  • The variants genuinely need to be swapped at runtime, rather than being imagined “possible future extensions”;
  • Every implementation can honor the same set of strong constraints, rather than sharing an interface in appearance while differing in semantics;
  • Team boundaries follow the abstraction boundaries, so different modules can evolve and be tested independently.

The value of abstraction at this point is that it really reduces the friction of future changes.
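A hedged sketch of an abstraction over a genuinely stable axis, with hypothetical names: storage backends behind one interface, where every implementation must honor the same strong contract (get returns exactly what put stored, or null):

```java
import java.util.HashMap;
import java.util.Map;

public class BackendSketch {
    interface Backend {
        void put(String key, String value);
        String get(String key); // returns the stored value, or null if absent
    }

    // One concrete point on the axis; a RedisBackend or S3Backend would sit
    // beside it, bound by the same contract rather than by a shared appearance.
    static class InMemoryBackend implements Backend {
        private final Map<String, String> data = new HashMap<>();
        public void put(String k, String v) { data.put(k, v); }
        public String get(String k) { return data.get(k); }
    }
}
```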

Similarly, if one long function handles parameter validation, business decisions, side-effect orchestration, and exception compensation, splitting it into a few clearly bounded steps is usually right, provided the backbone of the process stays visible and the key constraints are not hidden after the split.

So the question has always been: after the split, is the complexity contained, or merely moved to some other cognitive corner?

A more practical test: look first at the most likely future modifications, not at today’s structural cleanliness

If I suspect that a “maintainability refactor” is just structural polish, I usually start by asking three very practical questions:

  1. Next time the product changes this requirement, what are the most likely changes for engineers?
  2. If something goes wrong online next time, which link should the person on duty look at first?
  3. If a new person takes over, do they need to understand the business rules first, or the framework’s structure first?

If the answers to these three questions become more complicated, then there is a high probability that this refactoring will not improve maintainability.

Maintainability is for future modification costs, not for today’s code screenshots.

Summary

The reason many “maintainability optimizations” are dangerous is not that they are obviously useless, but that they are too easy to make look partially correct.

There are more classes, the functions are shorter, the directories are neater, and reviews go more smoothly. But real maintenance cost comes from understanding, locating, modifying, and verifying, not from visual tidiness.

So my suggestion is very simple: **don’t split complex logic until it becomes invisible; split it until it becomes changeable.**

If a refactoring simply moves complexity out of the current file and into the call chain, configuration layer, and abstraction layer, it’s usually not improving maintainability, but just delaying the pain of the next troubleshooting.
