Against the grain: over-complicating the development architecture

Simpler solutions can be good

Photo by Michael Matlon on Unsplash

In my last article I offered some ideas on how to solve problems using the simple tools offered by the operating system. We saw how to easily solve tasks usually associated with complicated platforms and libraries. The point of that article was to make us think of all the times we could have solved a problem with simpler tools, maybe even ones offered by the operating system, but chose the default architectural solutions just because they are the norm.

This time we will talk about other examples where architecture gets in the way of development instead of helping it. Again, the goal is not to demolish existing and well-tried paradigms, but to question the defaults and maybe see things from a fresh perspective. The first example again features a situation from one of my previous jobs.

Photo by Muhammad Zaqy Al Fattah on Unsplash

We were discussing the re-implementation of the login for our web application and were brainstorming ideas. First of all, the existing login worked fine; there was no issue with it. It was a simple, secure login with username and password, and the encryption followed current best practices: all good. But the team’s architect was not satisfied. He had just read about bearer token and refresh token authentication and wanted to migrate every login in the company to it, because reasons.

That in itself was not an issue. Replacing a simple login system with a better scheme can be worth it. But the discussion quickly degenerated into finding more and more secure ways of dealing with the situation where a token is stolen. Meetings were held, testers came up with more edge cases, each edgier than the last, Redis caching was involved, Redis exploits were discussed; it was really getting out of hand. In the end, after many meetings spent on ever more ridiculous ways of protecting user logins, I asked the question: what are we trying to protect here?
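For context, here is a minimal sketch of what a bearer plus refresh token flow generally looks like. The names and in-memory storage are hypothetical and purely illustrative, not our actual implementation; in a real system the access token would be a signed JWT and the refresh tokens would live in a proper store.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of a bearer + refresh token flow. All names are hypothetical;
// the in-memory dictionary stands in for whatever store a real system would use.
public class TokenService
{
    // refresh token -> (user, expiry)
    private readonly Dictionary<string, (string User, DateTime Expiry)> _refreshTokens = new();

    public (string AccessToken, string RefreshToken) Login(string user, string password)
    {
        // Password verification omitted; assume it succeeded.
        var accessToken = IssueAccessToken(user);         // short-lived, sent as "Authorization: Bearer ..."
        var refreshToken = Guid.NewGuid().ToString("N");  // long-lived, kept server-side so it can be revoked
        _refreshTokens[refreshToken] = (user, DateTime.UtcNow.AddDays(7));
        return (accessToken, refreshToken);
    }

    public string Refresh(string refreshToken)
    {
        // Exchange a valid refresh token for a fresh access token.
        if (!_refreshTokens.TryGetValue(refreshToken, out var entry) || entry.Expiry < DateTime.UtcNow)
            throw new UnauthorizedAccessException("Refresh token invalid or expired.");
        return IssueAccessToken(entry.User);
    }

    private static string IssueAccessToken(string user) =>
        // In a real system this would be a signed JWT; a random string stands in here.
        $"{user}:{Guid.NewGuid():N}";
}
```

Nothing wrong with the scheme itself; the question is whether the data behind the login justified the weeks we spent hardening it.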

When dealing with complex solutions, I always like to take a step back from time to time and review the problem statement. Are we still implementing what we wanted initially? Are we still implementing what we need? I get doing something on the caprice of an architect, sure. But when the solutions being discussed get ridiculous, it’s time to revisit the initial story. What are we trying to do here? What data are we trying to protect? Bank data? Medical data? Because if not, investing in government-level data security is a waste of development time. A nice exercise, but nothing more. And this didn’t happen just once, which leads me to the next point.

Photo by Artem Labunsky on Unsplash

Many times, while thinking about an application, I would catch myself thinking about the far future: the time when the app would have a million users, the servers would be flooded, and I would have to jump into Azure dashboards to contain the crisis. Many companies think about load balancing from the get-go, even though they have a few hundred thousand users in total, and only hundreds of them are online at any given moment.

Even if the user count grows, scaling up the infrastructure should be a temporary fix until you figure out why the overall performance is slow. And “why is the performance slow?” is becoming a forgotten question, for some even a forbidden one. We don’t ask that anymore; we just increase the server count. Servers are cattle in the cloud, so we don’t care. That is very wasteful thinking. It’s good not to have to think about your servers, but it’s not good to waste resources just because you can.

But that’s just about resources; what about architecture? What about when you deliberately make software design choices your small project doesn’t need, just because in your dreams you are preparing for a team of 30 developers all working on the same part of the code? How does that translate into practice? I have another example for that.

Photo by Thomas Couillard on Unsplash

Again, this is just an example, and I don’t want to make it seem like we don’t need good architecture in place. All the projects I work on benefit heavily from dependency injection. I am just using it as an example because, apparently, I like to provoke you in the worst possible way. But extrapolating from here, you can think of many times you applied a pattern just because you use it everywhere, not because it added something to the project.

It was a small project some time ago; I don’t even remember the company. It had a small UI where you could upload something, and there was a processor handling several types of files. Not a big deal, but it qualifies perfectly as an example. We knew exactly how many file types there were, and we knew they would never change under normal circumstances. But that “normal circumstances” part made us uneasy. We wanted a solution that would work for all files. And that started the downfall.

We split the processors into their own projects, implemented dependency injection, configured containers, services were born, and we even added a database layer just in case, even though we were dealing with files. We were preparing the architecture for growth, for change, for everything. What we forgot about were the developers and the development effort. The application grew so big that it became comical once you learned what it actually did.

Dependency injection was the cherry on top. It was absolutely useless. It did nothing and helped with nothing: we always knew exactly which instances we needed, and we weren’t even writing unit tests. So here’s the trap: yes, but maybe one day we would write unit tests. And it’s such a crazy trap because, as a developer, you know it’s true. Things change; they always do. How could you not prepare for every possible change? But the tempting solution may also be the project’s downfall. Think before you build flexibility into your project. Maybe you don’t really need it now, and maybe in five years, when you finally do need it, the application will require a rewrite anyway.
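To make that concrete, here is roughly the contrast, with hypothetical processor names and assuming Microsoft.Extensions.DependencyInjection: the container setup we built versus the direct instantiation that would have done the same job.

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IFileProcessor { void Process(string path); }
public class CsvProcessor : IFileProcessor { public void Process(string path) { /* ... */ } }
public class XmlProcessor : IFileProcessor { public void Process(string path) { /* ... */ } }

public static class Composition
{
    // What we built: a container, registrations, a provider...
    public static IFileProcessor ResolveViaContainer()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IFileProcessor, CsvProcessor>();
        var provider = services.BuildServiceProvider();
        return provider.GetRequiredService<IFileProcessor>();
    }

    // ...when the set of processors was fixed and this did exactly the same job.
    public static IFileProcessor ResolveDirectly() => new CsvProcessor();
}
```

The container earns its keep when implementations actually vary at runtime or in tests; here, they never did.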

Photo by Chase McBride on Unsplash

I will end with one last trend: microservices. But instead of microservices you could name any trending architecture. You could even name any trending technology: React, Rust, anything at all. As an architect, you don’t jump on a solution because it’s new. You don’t do it because it’s cool and trendy. You weigh its usefulness for your project.

As an example, I can reuse the one from the previous section, where each processor became a service. Our web server looked like a Christmas tree with all the processors hanging off it as REST services. The number of projects, deployment targets, deployment steps, lines of code, everything exploded, just because instead of deciding initially to simply have a single .NET solution with a console project and a few services in a folder, we went with a full-blown, corporate-level business architecture. Essentially for a file upload tool. Just because we wanted to cover the latest trends and impossible growth and change.
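For comparison, the “console project with a few services in a folder” version could have been as small as this sketch; the processor names are again hypothetical.

```csharp
using System;
using System.IO;

// Sketch of the single-console-project alternative: one entry point and a couple
// of processor classes in a folder. No containers, no REST services, no deployments to juggle.
public static class Program
{
    public static void Main(string[] args)
    {
        var path = args[0];
        switch (Path.GetExtension(path).ToLowerInvariant())
        {
            case ".csv": new CsvProcessor().Process(path); break;
            case ".xml": new XmlProcessor().Process(path); break;
            default: throw new NotSupportedException($"Unsupported file type: {path}");
        }
    }
}

public class CsvProcessor { public void Process(string path) => Console.WriteLine($"Processing CSV {path}"); }
public class XmlProcessor { public void Process(string path) => Console.WriteLine($"Processing XML {path}"); }
```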

Microservices, or any kind of architecture, should come as a helping hand; it should relieve developers of complexity. If it creates complexity, the architecture is wrong. And there’s more. Today a lot of software is written in .NET, for example. Well, we could add Node.js too. But if, as you develop your software, you find yourself fighting platforms, programming languages, architectures and libraries, that’s a sure sign you picked the wrong one. If all your software is written in a single programming language, it’s a sign you are using one tool for all purposes. Sooner or later you will struggle with it.
