I don't understand splitting an API into a bunch of "microservices" for scaling purposes. If all of the services are engaged for every request, they're not really scaled independently. You're just geographically isolating your code. It's still tightly coupled, but now it has to communicate over HTTP. Applications designed this way are flaming piles of garbage.
The idea is that you can scale different parts of the system at different rates to deal with bottlenecks. With a monolith, you have to deploy more instances of the entire monolith to scale it, and that’s if the monolith even allows for that approach. If you take the high load parts and factor them out into a scalable microservice, you can leave the rest of the system alone while scaling only the bottlenecks.
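As a back-of-the-envelope illustration of that argument (every number here is made up for the sketch, not taken from any real system):

```python
# Hypothetical scenario: one hot endpoint needs 20 instances' worth of
# capacity. Compare scaling the whole monolith vs. scaling only a
# factored-out hot-path service. All footprints are invented.

MONOLITH_FOOTPRINT_GB = 4.0   # assumed memory per monolith instance
HOT_SERVICE_GB = 0.5          # assumed memory per hot-path instance
COLD_SERVICE_GB = 3.5         # assumed memory for the rest of the system

instances_needed_for_hot_path = 20

# Monolith: every extra copy carries the entire footprint.
monolith_total = instances_needed_for_hot_path * MONOLITH_FOOTPRINT_GB

# Split: scale only the hot path, keep the rest at two instances.
split_total = (instances_needed_for_hot_path * HOT_SERVICE_GB
               + 2 * COLD_SERVICE_GB)

print(f"monolith: {monolith_total} GB, split: {split_total} GB")
```

Under these assumed numbers the split deploys a fraction of the memory, but the whole comparison hinges on the hot path actually being separable and much smaller than the rest.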
All of this is under the assumption that you need to scale horizontally. With modern hardware most systems don’t need that scalability. But it’s one of those “but what if we strike gold” things, where systems get designed for a fantasy workload instead of a realistic one, because it’s assumed to be hard to go from a monolith to microservices if that fantasy workload ever presents itself (imho not that hard if you have good abstractions inside the monolith).
I understand how microservices work, but I'm referring to a specific kind of antipattern where an application is arbitrarily atomized into many small services in such a manner that there's zero scaling advantage. Imagine making every function in your application a separate service, as an extreme example.
This seems to be an example of a more general antipattern in software development, where a relatively large entity is broken down into multiple smaller entities for dogmatic reasons. The usual justification given is how much simpler each individual entity now is, glossing over the extra complexity introduced by integrating all of those separate entities.
Microservice architectures seem to be a recurring example of this phenomenon. Separating medium to long functions into shorter ones based on arbitrary metrics like line count or nesting depth is another.
Assuming every function is called the same number of times and carries the same cost, it would indeed be silly to cut up a system like that. But in the real world some parts of the system are called more often, or carry a higher execution cost. If you can scale those independently of the rest of the system, that is a definite advantage.
For me the antipattern presents itself when the cutting up into microservices is done as a general practice, without a clearly defined reason for each service to be separate.
(And by the way, I’ve seen a talk about an application where the entire backend was functions in a function store, exactly as you described. The developer was enthusiastic about that architecture.)
> you have to deploy more instances of the entire monolith to scale it,
That's a common argument for microservices and one that I always thought was bunk.
What does that even mean? You have a piece of software that provides ten functions, and running 100 instances of it is infeasible, but running 100 of one, 50 of three and 10 of six is somehow not a problem?
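Taking those counts literally, with purely hypothetical per-instance footprints (nothing here is measured; it's a sketch of where the trade-off lives):

```python
# Ten functions, split as in the comment: one service at 100 instances,
# three at 50 each, six at 10 each. Whether the split beats 100 copies
# of the monolith depends entirely on the assumed per-process footprint.

monolith_instances = 100
service_instances = 100 * 1 + 50 * 3 + 10 * 6   # 310 processes in total

MONOLITH_GB = 2.0              # assumed: monolith carries all ten functions
RUNTIME_OVERHEAD_GB = 0.15     # assumed fixed baseline per process
PER_FUNCTION_GB = (MONOLITH_GB - RUNTIME_OVERHEAD_GB) / 10

monolith_total = monolith_instances * MONOLITH_GB
split_total = service_instances * (RUNTIME_OVERHEAD_GB + PER_FUNCTION_GB)

print(f"monolith: {monolith_total} GB, split: {split_total} GB")
```

With these invented numbers the split wins, but notice it runs three times as many processes; crank the per-process overhead up and the advantage evaporates, which is roughly the objection above.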
That would have to be a really perfect margin call for some memory-hungry (VSZ) monstrosity. While not an impossible situation in theory, surely it can't be very common.
There are plenty of reasons to split an application but that seems unlikely at best.
I have seen multiple production systems, in multiple orgs, where "the monolith" provides somewhere in the region of 50-100 different things, has a pretty hefty footprint, and the only way to scale is to deploy more instances, then have systems in front of the array of monoliths sectioning off input to the monolith-for-this-data (sharding, but on the input side, if that makes sense).
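That input-side sharding can be sketched in a few lines. The instance names and the choice of request key here are hypothetical, just to show the shape of the router that sits in front of the monolith array:

```python
# A thin router in front of N identical monolith instances: hash a key
# from the request and always send that key to the same instance, so
# each monolith only ever sees its own slice of the data.

import hashlib

MONOLITH_INSTANCES = ["monolith-0:8080", "monolith-1:8080", "monolith-2:8080"]

def route(customer_id: str) -> str:
    """Map a request key to the monolith instance that owns that data."""
    h = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    return MONOLITH_INSTANCES[h % len(MONOLITH_INSTANCES)]

# Deterministic: the same key lands on the same instance every time.
assert route("customer-42") == route("customer-42")
```

A real deployment would also need to handle resizing the instance list (e.g. with consistent hashing), which plain modulo routing does badly.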
In at least SOME of these cases, the monolith could have been broken up into a smaller number of front-end microservices, with a graph of microservices behind "the thing you talk to", for a smaller total deployed footprint.
But, I suspect that it requires that "the monolith" has been growing for 10+ years, as a monolith.
> imho not that hard if you have good abstractions inside the monolith
And that is the big if! The big advantage of microservices is that they force developers to think hard about the abstractions, and they can’t just reach across the boundary, breaking the abstractions, when they are in a hurry. With good engineers in a well-functioning organisation, that is of course superfluous, but those preconditions are unfortunately much rarer than they should be.
Especially true when the services are all stateless. If there isn’t a Conway-esque or scaling advantage to decoupling the deployment... don’t.
I had a fevered dream the other night where it turned out that the bulk of AWS’s electricity consumption was just marshaling and unmarshalling JSON, for no benefit.
I recently decided to benchmark some Azure services for... reasons.
Anyway, along this journey I discovered that it's surprisingly difficult to get an HTTPS JSON RPC call below 3ms latency, even on localhost! It's mindboggling how inefficient it actually is to encode every call through a bunch of layers, stuff it into a network stream, undo that on the other end, and then repeat on the way back.
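Those milliseconds aren't all TLS and TCP; the serialization layer alone costs something on every hop. A rough sketch (not the Azure measurement, just JSON vs. a fixed binary layout for the same toy payload) of how to compare the two:

```python
# Round-trip the same payload through JSON text encoding and through a
# fixed binary struct layout, timing both. The payload and field layout
# are invented for the benchmark.

import json
import struct
import timeit

payload = {"id": 12345, "value": 3.14159, "flags": 7}

def json_roundtrip():
    return json.loads(json.dumps(payload))

def binary_roundtrip():
    # "<qdq": little-endian int64, float64, int64 — a schema both ends
    # would have to agree on in advance, unlike self-describing JSON.
    packed = struct.pack("<qdq", payload["id"], payload["value"], payload["flags"])
    i, v, f = struct.unpack("<qdq", packed)
    return {"id": i, "value": v, "flags": f}

n = 100_000
print("json  :", timeit.timeit(json_roundtrip, number=n))
print("binary:", timeit.timeit(binary_roundtrip, number=n))
```

The exact ratio varies by runtime and payload, which is why it's worth measuring rather than quoting; the binary path also skips the text parsing that JSON forces on every call.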
Meanwhile, if you tick the right checkboxes on the infrastructure configuration, then a binary protocol between two Azure VMs can easily achieve a latency as low as 50 microseconds.
The first thing that comes to my mind is that there are different axes that you may need to scale along. Microservices are a common way to scale when you’re trying to increase the number of teams working on a project. Dividing across a service API allows different teams to use different technologies and release on different schedules.
I don't necessarily disagree, but I believe that you have to be very careful about the boundaries between your services. In my experience, it's pretty difficult to separate an API into services arbitrarily before you've built a working system - at least for anything that has more than a trivial amount of complexity. If there's a good formula or rule of thumb for this problem, I'd like to know what it is.
I agree. From my perspective, microservices shouldn’t be a starting point. They should be something you carve out of a larger application as the need arises.