
Microservices make a lot of sense organizationally, where each feature team can own its own feature service.


I would argue that in such cases those are not "micro" services anymore, they are services. In that case it makes sense to develop and deploy them separately, then find a way to make them talk to each other. Microservices is a different architectural decision.


My opinion is that there is a point where that is true... but it's at a really high scale. Each team owning a separate service introduces a lot of complexity in managing all the services. There is a point where the communication complexity of *not* doing microservices overwhelms the complexity implicit in microservices, but I think it is at a really high scale.

You also have to consider that it is now harder for people to see the system as a holistic whole (the tricky bugs are often in the composition of components), and a lot of subtle effects that brings. Even just increasing the friction for people to move between teams, or the friction for security people to apply consistent standards across all groups.


OR each team can write their features as a Python package, Rust crate, or Go module to be included in the main app. Libraries, versioned.
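A minimal sketch of that layout for a Python main app. The package names and versions here are made up for illustration; the point is that each team publishes its feature as its own versioned library:

```toml
# pyproject.toml of the main app (hypothetical package names)
[project]
name = "main-app"
version = "1.0.0"
dependencies = [
    "team1-feature==3.2.0",  # each team owns and versions its own library
    "team2-feature==1.4.1",
    "team3-feature==0.9.3",
]
```

The main app then pins which version of each team's work it ships, and teams release independently.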


And then team 1 needs to upgrade pandas to 2.0, but team 2 is still on pandas 1, so when the main app pulls them in nothing works, so you need to start a cross-team committee to schedule the work to upgrade a single library...
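The conflict described above can be sketched with a toy check (not a real dependency resolver; the team and constraint names are illustrative). A single Python environment has one flat `site-packages`, so only one version of a package can be importable at a time:

```python
# Toy illustration: two teams' libraries place unsatisfiable constraints
# on the same dependency, so the main app cannot install both.

requirements = {
    "team1_feature": {"pandas": ">=2.0"},  # team 1 has upgraded
    "team2_feature": {"pandas": "<2.0"},   # team 2 is still on 1.x
}

def conflicting(reqs, package):
    """Collect every constraint on `package` across all teams and flag
    the obviously unsatisfiable pair used in this example."""
    specs = {spec for team in reqs.values()
             for pkg, spec in team.items() if pkg == package}
    return ">=2.0" in specs and "<2.0" in specs

print(conflicting(requirements, "pandas"))  # True: no single version satisfies both
```

A real resolver (pip, poetry) does the same in general form and simply refuses to install the combination.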

Separate services aren't a silver bullet, but as an FYI to the younger software developers: we tried "just have all teams work on the same code base and deployable artifact" for a long while, and it didn't work very well either.


All the teams will need to migrate sooner or later, so figuring out all of the potential problems in migrating and having everyone do it at once is more efficient than each team needing to figure it out separately.


That's not how it plays out in reality. Usually nothing gets done because "upgrade this package" is never on anyone's priority list. Or teams end up doing shit like JAR shading, or forking and renaming a package with some _v2 suffix to be able to support both the old and new versions simultaneously in the main code base. And then of course nobody ever updates the runtime (hello, enterprise monoliths still running on Java 6/7!). It ends up being a complete mess.


I'm just grateful a language like Java has any sort of namespace solution to dependency nightmares.

I have lost track of how many times I did a git pull on a Python-based solution only to find I broke all the things when I tried to upgrade one package.


Imagine a solo developer, writing an app that is composed of packages/libraries/crates from the get go.

Now in one place such an engineer uses pandas 2 and in another place pandas 1, but it is just one single app. What does it say about the quality of engineering and mental focus of a solo developer who cannot accomplish the same thing with the same API, or cannot refactor the code already written for pandas 1 to pandas 2?

Sounds to me like more of an engineering discipline and engineering mindfulness problem.

The fix is a simple rule: everyone has to use the latest major version, always.

Microservices do not make any sort of people's communication go away; they move it to different boundaries: from dependencies to the business layer/interfaces, which is a lot harder to navigate and negotiate.

Imagine needing a field in your downstream service. They refuse because they don't see it as their domain, so you cram it in on your side, and what not. Ask anyone working in a microservices environment and they'll tell you it is a recurring issue every quarter, if not more often.


That's easy: we'll make the ultimate build system! It will scale, and maintain packages, and compile all the things transitively. Just give me $xx million and a few years, and I'll give you the perfect solution.

Just press this button to start the upgrade build and... boom! 10,000 services and their dependencies being built on a ton of hardware; we can practically guarantee your change in dependency will be checked... Whoops, turns out your one dependency change cascaded into about 1.5% breakage... No, I don't know who owns those packages; why do you ask? That's not my job!

/s


Yeah, I see a lot of things that could be libraries packaged as services, so now each invocation incurs network latency and every transaction needs a two-phase commit. And because each service needs its own replicas, deployment pipeline, and versioned internal API, production and deployment costs skyrocket.
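A rough sketch of the per-invocation cost the comment describes, comparing an in-process library call against the same logic behind a "service". The 5 ms round trip is a stand-in assumption (a `sleep`, not a real network call), and the `discount` function is purely illustrative:

```python
import time

def discount(price):
    """The logic as a library: a plain in-process function call."""
    return price * 0.9

def discount_service(price):
    """The same logic as a 'service': every invocation pays a simulated
    network round trip (~5 ms here, an assumption for illustration)."""
    time.sleep(0.005)
    return price * 0.9

start = time.perf_counter()
for p in range(1000):
    discount(p)
per_lib_call = (time.perf_counter() - start) / 1000

start = time.perf_counter()
for p in range(10):
    discount_service(p)
per_svc_call = (time.perf_counter() - start) / 10

print(f"library: {per_lib_call * 1e6:.1f} us/call, "
      f"service: {per_svc_call * 1e3:.1f} ms/call")
```

Even before counting serialization, retries, and partial-failure handling, the network hop dominates by orders of magnitude, which is why chatty library-shaped interfaces translate poorly into service boundaries.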


Because, you see, if you surround shit with other shit, that original shit doesn't look quite so bad in comparison. So take your shit monolith, surround it with shit services that distribute it across a shitty network, and now your original self-inflicted shit design is just 1/3rd of the shit you gotta deal with. Totally not as bad as it used to be!


Which brings along all kinds of headaches that greater segregation solves. You can go in circles all day about this stuff.


Until one team needs a feature in another team's service, which makes their development grind to a halt while the other team is not prioritizing it.

I have only seen this from the business side (I'm not a developer), but I have seen teams start coding in another team's service just to be able to proceed.

It's not always good to create silos like this either.


Sounds like someone has been in the trenches of a certain online retail company.

As a developer, I have certainly seen the same. Pretty sure this very scenario is where I heard the term "away team" used in the industry: send your folks over to change things, and under our guidance they can check in the code.


I mean even then, it's still easier just to share one code-base and then shard service aspects if you have to.

You need truly gargantuan scale before things become logically separate code-bases.


Not in my experience.



