What is the SOLID pattern and why you should avoid it
What is SOLID
SOLID is an acronym coined in 2004 for principles that Uncle Bob (Robert C. Martin) introduced in the early 2000s to capture the best practices of software design.
Although the SOLID principles may apply to any object-oriented design, they usually bring with them the Clean Code and Clean Architecture philosophies.
Single Responsibility Principle
It dictates that there should be only one reason for a piece (class, function) to change. As a consequence, it is usually stated as “a class/method should do only one thing.”
Open / Closed Principle
This principle aims to help with maintainability. It says that a piece (class / function) should be open for extension, but closed for modification. In that sense, it is indirectly saying that software should be built in a way that adding a new feature should not, or should not try to, modify existing code.
Liskov Substitution Principle
This one is rooted in polymorphism. It was introduced by Barbara Liskov and is also called strong behavioral subtyping. It says that a subclass (subtype) can replace its parent without breaking the program.
Interface Segregation Principle
The Interface Segregation Principle is the simplest one, in my opinion. It states that clients should not be forced to depend on methods they do not use; therefore, interfaces should be small.
Dependency Inversion
The final principle is the base of the concepts of IoC (Inversion of Control) and DI (Dependency Injection) that we constantly see in frameworks like Spring, Hilt, and Symfony. In practice, it indicates that a piece (class/function) should depend on abstractions rather than concrete implementations, i.e., the class should not actively instantiate a concrete implementation but receive the instance/pointer as a parameter during creation.
The issue with SOLID
1. The froggy code navigation of Single Responsibility
My main issue with the Single Responsibility principle is that it forces the developer to spread functionality across a lot of different files and methods. This principle is misleading, tricky, and tempting, because when you read the statement (“your code should do only one thing”) it makes sense. It seems fair and more maintainable, and the code looks “neat” after you refactor it. Small methods and classes seem “prettier” compared to long “legacy” code.
However, in my opinion, the beauty of the “clean” code exists only in the eyes of the author, because they already have the mental model of the code. If a person knows everything about an algorithm or the business rules, having a lot of method calls seems simpler; for a newcomer, however, having the code spread across many different methods and classes only adds complexity and cognitive load. Instead of reading everything from top to bottom, the new reader has to keep jumping from one place to another, trying to collect the pieces and hold the sequence in their head. This burdens short-term memory.
Let’s see a real example. I’m going to extrapolate for the sake of the exercise.
Here’s a piece of code from Kubernetes:
// SINGLE FUNCTION
// converts `go mod graph` output modStr into a map of from->[]to references and the main module
func convertToMap(modStr string) ([]module, map[module][]module) {
	var (
		mainModulesList = []module{}
		mainModules     = map[module]bool{}
	)
	modMap := make(map[module][]module)
	for _, line := range strings.Split(modStr, "\n") {
		if len(line) == 0 {
			continue
		}
		deps := strings.Split(line, " ")
		if len(deps) == 2 {
			first := parseModule(deps[0])
			second := parseModule(deps[1])
			if first.version == "" || first.version == "v0.0.0" {
				if !mainModules[first] {
					mainModules[first] = true
					mainModulesList = append(mainModulesList, first)
				}
			}
			modMap[first] = append(modMap[first], second)
		} else {
			// skip invalid line
			log.Printf("!!!invalid line in mod.graph: %s", line)
			continue
		}
	}
	return mainModulesList, modMap
}
Original here
As you can see, it’s a single function doing a lot of different steps (iterating, splitting, checking, adding, even verifying with an else... you see, clean coders, you can use else). It’s relatively long, about 30 lines.
If I had to rewrite this using the purest SOLID (ps: it doesn’t follow the rules to the letter, but you’ll get the idea):
// SOLIDIFIED VERSION
// converts `go mod graph` output modStr into a map of from->[]to references and the main module
func convertToMap(modStr string) ([]module, map[module][]module) {
	mainModulesList, mainModules := createVariables()
	modMap := make(map[module][]module)
	for _, line := range splitLines(modStr) {
		populateModules(line, &mainModulesList, mainModules, modMap)
	}
	return mainModulesList, modMap
}
func populateModules(line string, mainModulesList *[]module, mainModules map[module]bool, modMap map[module][]module) {
	if len(line) == 0 {
		return
	}
	deps := splitSpaces(line)
	if len(deps) == 2 {
		parseMainModule(deps, mainModulesList, mainModules, modMap)
	} else {
		// skip invalid line
		log.Printf("!!!invalid line in mod.graph: %s", line)
	}
}
func parseMainModule(deps []string, mainModulesList *[]module, mainModules map[module]bool, modMap map[module][]module) {
	first := parseModule(deps[0])
	second := parseModule(deps[1])
	if isValidVersion(first) {
		if !mainModules[first] {
			mainModules[first] = true
			*mainModulesList = append(*mainModulesList, first)
		}
	}
	modMap[first] = append(modMap[first], second)
}
func createVariables() ([]module, map[module]bool) {
	return []module{}, map[module]bool{}
}
func splitLines(text string) []string {
	return strings.Split(text, "\n")
}
func splitSpaces(line string) []string {
	return strings.Split(line, " ")
}
func isValidVersion(mod module) bool {
	return mod.version == "" || mod.version == "v0.0.0"
}
As you can see, the second block (the SOLIDified version) looks “prettier,” “cleaner,” and more SonarQube-friendly — and I agree with that. What was originally a single function has been split into seven smaller ones. While these smaller functions are easier to read individually, I find that understanding the full algorithm and its business logic has actually become harder (at least for me). The original function was larger, but it kept all the context in one place, making it easier to grasp as a whole. In contrast, the SOLIDified version forces me to constantly jump around in my IDE, piecing together small fragments of logic in my head to reconstruct the bigger picture.
2. Open to premature abstraction, Closed to delivering value
My next discomfort with SOLID revolves around the combination of the Open/Closed principle with Dependency Inversion. The “Closed” in Open/Closed subtly states that introducing new functionality should not touch existing code. A developer trying to follow this proposition can fall into premature abstraction. It’s really common in the Java and .NET world to see dependency injection applied to things that will never change and will never be tested. Building software while trying to predict future features, changes, issues, and behaviors greatly increases the probability of rework or useless pieces. The most maintainable software is the software that was never written.
To be fair, I’ve seen some modern languages like Rust, Go, Odin avoiding this incentive to premature abstraction. It’s still common in enterprise environments, though.
3. When everything is a dependency, your entire software is inverted
As I mentioned earlier when talking about premature abstraction, this topic is closely tied to the concept of “Open to Extension.” The issue I want to address here is the overuse of Dependency Inversion. Specifically, when every single thing a class needs is injected into it instead of being created within it. I understand the reasoning behind this practice and some of its benefits, but I also see it being taken too far. Supporters usually lean on two main arguments to justify this approach:
- Framework dictatorship: Many frameworks push developers toward this style by forcing everything to be injected, whether as a Spring Java “Bean” or a .NET service. (If you’ve ever tried building a Spring web server in Java without Managed Bean annotations, you know exactly what I mean.)
- The unit-testing obsession: At some point, the industry developed an obsession with the test pyramid (particularly with unit testing). The narrative became that achieving high code coverage is more important than actually building bug-free, reliable software. As a result, teams often end up writing more mocks and injecting more dependencies than necessary, all in service of hitting that coverage target.
I’m not saying that we shouldn’t write unit tests or use Dependency Inversion. I’m just saying that we should do it because the software design needs it, because adding a new inversion will unlock the software’s ability to evolve. Adding a new interface just to hit a coverage target or to preemptively prepare for a change is a waste of time and brainpower.
The alternative
As you may have noticed, I like the Liskov Substitution and Interface Segregation principles.
My alternative suggestion for the other three is the LELIS acronym.
Locality of Behavior (instead of SRP)
Instead of trying to create short classes, methods, and functions, we should try to make the software more cohesive, i.e., pieces that work together should be close. Having everything that defines a behavior together makes the software easier to understand and maintain. Remember that our main objective is bug-free, reliable, and maintainable software.
Evolve your Abstractions (instead of Open/Closed)
Here I’m supporting the idea that you should not try to predict the future. Create software today and refactor it tomorrow. Extract the abstractions (and the interfaces) when they are needed. Your software shouldn’t need to be clever. Adding a new feature is fully expected to touch existing code, and that’s the main reason why we should rely more on integration tests than unit tests. Integration tests need to change less frequently (sometimes they don’t change at all) when refactoring the code. Unit tests are brittle, and they constantly break when you change anything in a piece (class/function).
Some Dependency Inversion
This is the cherry on top. And as Uncle Bob himself said: “[About Dependency Injection] they understand the pros, but not the cons. Many developers, especially juniors, tend to apply DI everywhere, without fully grasping the trade-offs”
Dependency Inversion is a great way of decoupling modules and creating composable software, but it should not be overused. TBH, I don’t know exactly how much is too much, but I don’t create everything as an interface anymore. Small classes/structs, small components, and pieces that work together can be created together; there’s no need to inject everything. After all, why is it OK to create a list (with its own encapsulated business rules) inline when needed
List<String> myList = new ArrayList<>();
but if I have to use a Formatter that was created exclusively to format that class, I have to inject it?
(I’m leaning more towards inner and static inner classes today.)
Bring integration tests into the game
One final suggestion that is not related to SOLID but can help build (or counter) the arguments that support it: bug-free, maintainable software is the goal. Maintaining software means touching existing code and refactoring it, and integration tests are the best tool to make that safe.
Good luck!