I don't think the trust issue really justifies microservices. Assuming everyone is using interoperable languages, you can still have a monolith with clear API contracts and separate ownership by using traditional libraries.
That is a good point about reliability and cost though. I hadn't heard that before.
What if some team makes their part 10 times slower? This is not a theoretical scenario, but one I have seen happen many times. Technically you could partition and monitor each part of the monolith separately, but then you are just reinventing the microservice architecture.
You just run a profiler. It's better, and arguably easier, than microservice profiling since it doesn't only measure at the interface, can measure fine-grained memory usage, etc.
Google "continuous profiling".
I'm not sure why you would think that that reinvents microservices.
Microservices is just taking a monolith and moving the components into separate processes that communicate via RPC.
In my experience, profiling is hard and often doesn't show the issue. E.g. for unnamed goroutines it is hard to tell which goroutine the profiler is referring to. Or if some code change increases CPU usage without an increase in time/memory, it degrades the performance of the entire monolith. Yes, a good maintainer could pinpoint the issue, but remember my premise was a low-trust environment, and saying "I think your code change increased CPU usage" involves talking to managers and sitting through two meetings. In a microservice architecture, that team would have to deal with their own alerts to avoid missing their SLA.
> Microservices is just taking a monolith and moving the components into separate processes that communicate via RPC.
Microservice architecture divides the responsibility much more than that. Each service has its own Redis cache, local cache, and tests, and likely even its own DB, etc.