Monolithic Hell And Why You Don't Want To Be There
The monolithic architecture style is not inherently evil – its disadvantages only surface once the code base grows too large. But because code bases have a habit of doing precisely that, developers often find themselves in monolithic hell, and once there, it’s very difficult to get out.
To gain a better understanding of how a microservice architecture can help us deliver better software faster, the following sections introduce the older, but still very common, monolithic architecture and its problems.
- Monolithic Hell
- Monoliths Make Sense…
- … But Only For Small Applications
- Wrap-Up: Microservices To The Rescue!
Monolithic Hell
Maybe you’ve been in monolithic hell before, or maybe you know someone who was – either way, you’re probably aware that monolithic hell can be incredibly painful. Projects end up in monolithic hell if the code base for a single artifact – for example, a Java WAR file deployed to an application server like JBoss or Tomcat – outgrows its monolithic architecture. By carefully designing the application and employing sensible programming patterns when coding it, the ride on the highway to (monolithic) hell can be slowed down, but never completely halted. So, as long as a team is confined to a monolithic architecture, all they can do is delay the inevitable, unless the application is strictly limited in its functionality and the code base therefore never really grows.
Monoliths Make Sense…
One might think the root of all evil is the monolithic architecture itself, and that this architecture style must therefore be banned from ever being applied again in any software development project. In fact, though, the monolithic style of architecture is not inherently bad – it even has a bunch of important benefits that shouldn’t be overlooked:
- Because IDEs and other development tools are geared towards building a single application, monoliths tend to be very simple to develop (… at first).
- Since there’s only one artifact, the deployment process is rather straightforward.
- Scaling is similarly trivial – if you need multiple instances of a monolithic application, just spin up multiple machines and put a load balancer in front of them (see the sketch after this list).
- It’s easy to make radical changes to the application since all functionality is contained in the same code base.
- Another benefit that follows from the point above: it’s straightforward to implement tests for the application, even at the integration or end-to-end level.
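To make the scaling point above a little more concrete, here is a minimal sketch in Docker Compose notation – assuming the monolith has already been packaged as a single container image; all image names, service names, and ports are made up for illustration:

```yaml
# docker-compose.yml – hypothetical setup: the entire monolith, packaged as one
# container image, is replicated behind a load balancer.
services:
  monolith:
    image: example/shop-monolith:1.0   # assumed image containing the whole application
    deploy:
      replicas: 3                      # every instance runs ALL of the application's code
                                       # (alternatively: docker compose up --scale monolith=3)
  loadbalancer:
    image: nginx:1.25
    ports:
      - "80:80"                        # the single public entry point
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # simply proxies requests to the monolith service
```

The thing to notice is the unit of scaling: every replica carries the complete application, whether or not its particular features are the ones under load.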
So, if your goal is to write only a small application – maybe you want to prototype something, or you’re certain the application is so limited in its functional scope that its code base will never grow significantly – then building it as a monolith is absolutely fine.
… But Only For Small Applications
It’s only once the code base grows beyond a certain point that its monolithic architecture becomes a problem and the team ends up in monolithic hell (and, unfortunately, monolithic applications do tend to outgrow their architecture). Monolithic hell is painful for the following reasons:
- Large monoliths are too big for any single developer to fully understand, which makes it difficult to fix bugs or implement new features. Code that is added or modified is likely not to fit very well into the existing abstraction layers, resulting in a lot of broken windows in the long run (and, as we’ve seen, those should absolutely be avoided).
- A consequence of the above is that the time from story written to feature implemented (or, similarly, bug discovered to bug fixed) increases – development becomes slow.
- When an application grows, the team typically grows, too, and because all developers commit to the same code base, communication and organizational overhead increases, further slowing down the pace of development.
- Parts of the application may have conflicting requirements – one part may rely on a lot of CPU power, while another may need a lot of RAM. The result is compromises in the configuration of the servers the application is rolled out to.
- Because it’s increasingly difficult to test a large monolith, it becomes harder and harder to find bugs before they make it into production. Therefore, large monoliths tend to have reliability problems.
- The application’s dependencies – frameworks, operating system libraries, etc. – often cannot be updated individually, forcing developers to work with an increasingly out-of-date technology stack.
With the monolithic architecture style having so many drawbacks for large applications, what’s the solution?
Wrap-Up: Microservices To The Rescue!
The fact that the microservice architecture has emerged along with concepts and technologies that support it – like containerization and container orchestration frameworks – is an incredible gift for software developers and DevOps engineers alike. To get out of monolithic hell, we need to decompose large applications into microservices, and then implement, test, deploy, and scale those microservices individually.
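As a first, deliberately rough sketch of what that can look like – again in Docker Compose notation, with all service and image names made up for illustration – the same application might be described as a set of independently deployable and scalable services:

```yaml
# docker-compose.yml – hypothetical decomposition of the application into
# microservices that are deployed and scaled individually.
services:
  orders:
    image: example/orders-service:2.3    # owns only the ordering functionality
    deploy:
      replicas: 4                        # scale just the part that is under load
  catalog:
    image: example/catalog-service:1.7   # released and updated on its own schedule
    deploy:
      replicas: 2
  gateway:
    image: nginx:1.25                    # routes incoming requests to the individual services
    ports:
      - "80:80"
```

Each service has its own code base, its own release cadence, and its own resource profile – exactly the properties a large monolith can no longer offer.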
In this blog post, we’ll explore how the microservice architecture addresses the many problems that large applications built using the monolithic architecture suffer from.