
But Heroku was a closed-source, cloud-hosted alternative to self-hosting... can't you just, I dunno, run your app? :(


I doubt someone who just wants to run an app is the target audience. A lot of companies would love to have something like Heroku that could be hosted internally: a platform team hosts it and development teams consume it. As it stands, they are stuck hand-rolling their own poor implementations. Lots of person-hours are being wasted in this space due to a lack of good, stable solutions that won't disappear (I don't know if this one qualifies).


> As it stands, they are stuck hand rolling their own poor implementations

It's either that or it's the monster that is Kubernetes.


Multiple projects like OpenFaaS and Knative try to bridge that gap.

And then there's also Nomad, which is drastically simpler than k8s. Not Heroku-easy, but closer to docker-compose than to Kubernetes.

Self-plugging my article on the Nomad vs Kubernetes subject: https://atodorov.me/2021/02/27/why-you-should-take-a-look-at...
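To give a flavour of that simplicity, here's roughly what a minimal Nomad job file looks like (the image name and port are made up for illustration):

    job "webapp" {
      datacenters = ["dc1"]

      group "web" {
        count = 2

        network {
          # Maps a dynamic host port to the container's port 8080
          port "http" { to = 8080 }
        }

        task "app" {
          driver = "docker"

          config {
            image = "myorg/webapp:latest"   # hypothetical image
            ports = ["http"]
          }
        }
      }
    }

That's the whole deployable unit; "nomad job run webapp.nomad" ships it.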


I have not had hands-on experience with Nomad, but as I understand it, it still requires a lot of plumbing. Something as simple as service discovery isn't built in, and requires a completely separate Consul cluster. Similarly, there is no secrets management, which requires a Vault cluster, which in turn requires its own Consul cluster. I'm not sure what config management options there are.

So now you're stuck with 3 Consul clusters, a Vault cluster, whatever you choose for config management, and a Nomad cluster. It feels like you didn't gain much from the simplicity of Nomad.

In addition to that, knowing HashiCorp's pricing, I bet that setup would run you north of a million a year for an enterprise deployment.


Nope.

Nomad relies on Consul for service discovery and K/V storage, and Vault for secrets, indeed (Vault can use a variety of backends, including an integrated Raft-based one, Consul, object storage, etc.). One tool that does one thing well, and integrates with other tools that do their thing well.

I vastly prefer having three simple Raft-based clusters to manage over the "everything and the kitchen sink" approach Kubernetes takes, with results like base64-encoded "secrets".

And as someone doing both, Nomad+Consul+Vault are drastically easier on day one and day two. They're also usable outside of Nomad (you can have bare-metal machines outside of Nomad using Vault secrets and Consul for SD and K/V), and you can link multiple regions together.
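As a sketch of how that composition looks from inside a job spec (the service name, policy, and secret path below are made up), a task can register itself in Consul and pull secrets from Vault declaratively:

    task "app" {
      driver = "docker"

      config {
        image = "myorg/webapp:latest"   # hypothetical image
      }

      # Registers the task in Consul for service discovery
      service {
        name = "webapp"
        port = "http"
      }

      # Gives the task a Vault token scoped to this policy
      vault {
        policies = ["webapp-read"]
      }

      # Renders a Vault secret into an env file at startup
      template {
        data        = "DB_PASS={{ with secret \"secret/data/webapp\" }}{{ .Data.data.password }}{{ end }}"
        destination = "secrets/app.env"
        env         = true
      }
    }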

You do indeed need some basic config management to configure the clusters. Ansible seems to have won that race, sadly, and there are playbooks available.

> In addition to that, knowing Hashicorp's pricing, I bet that set up would run you north of a million a year for enterprise setup

They've changed up their pricing structure and there are more tiers and add-ons, so I doubt it (a basic Vault cluster was in the five-figure range per year), if you need support. It's not like enterprise support for Kubernetes comes cheap either, especially if you do it the recommended way with multiple clusters and all that.


> Nomad relies on Consul for service discovery and K/V storage, and Vault for secrets, indeed (Vault can use a variety of backends, including an integrated Raft-based one, Consul, object storage, etc.). One tool that does one thing well, and integrates with other tools that do their thing well.

You say nope, but then you confirm that Nomad relies on Consul, as I had mentioned, meaning you need a Consul instance behind the scenes. If the Nomad setup recommendation is anything like the Vault recommendation, it will be to run two clusters: one for service discovery, and one for K/V storage for Nomad. I've set up an enterprise Vault instance, and their enterprise architect recommended separate instances. Which is totally fine, but it does mean two Consul clusters + a Nomad cluster.

From my experience with Consul and Vault, it is not as simple as you say. A team of 3 engineers took 3 months to set up an enterprise-grade cluster. There was a little bureaucracy at the time, so I can't really blame it all on that. If I recall correctly, the integrated Raft-based clustering was being worked on, and we were made aware of it because there was some pushback from management on 2 separate Consul clusters for K/V and SD, but I never got to see it to fruition and utilise it, so for us it was Consul. Other backends were discouraged at the enterprise level; they never really made it clear if they'd fully support us if we went with a different backend, leading me to believe that, at best, they'd prefer you use Consul over something else. I mean, why wouldn't they? They'd rather you pay them extra for a Consul cluster.

If Nomad is anything like my experience with Vault/Consul, then unfortunately you are still stuck with the setup I mentioned earlier, that is 1 Vault cluster, 1 Nomad cluster, and 3 Consul clusters (1 K/V for Vault, 1 K/V for Nomad, and 1 for service discovery). For sure, having separate individual tools that each do one thing has its advantages, but I fail to see how this is "much simpler" than Kubernetes. At best it is marginally simpler.


Your information is very outdated.

Vault has had integrated storage for multiple versions now, and Nomad can very well use a single Consul cluster for both SD and K/V. (And honestly I can't recall two Consul clusters ever being recommended, and the proposal we had from HashiCorp included a Consul Enterprise cluster for Vault as part of Vault's pricing.)
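For what it's worth, the difference is a single stanza in the Vault server config (the addresses and paths below are just examples):

    # Consul-backed storage (needs a Consul cluster):
    storage "consul" {
      address = "127.0.0.1:8500"
      path    = "vault/"
    }

    # Integrated Raft storage (no Consul needed):
    storage "raft" {
      path    = "/opt/vault/data"
      node_id = "vault-1"
    }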

So you need three clusters: Vault, Nomad, and Consul (handling both SD and K/V for Nomad). Two of them, Consul and Nomad, can run on the same machines (it'd be suboptimal security-wise to have Vault there too).


You don't have to deal with the operational aspects of Consul, Nomad, and Vault for that matter if you choose the managed Kubernetes offering of a cloud provider. If you are talking about container orchestration on-premise, the experience I had with Kubernetes was terrible: there are 1000 things that can go wrong. On-premise, I would recommend using k3s, which is super simple to set up, or microk8s, which comes with Ubuntu 20.04. I am not sure what benefits you would get with Nomad as compared to the lightweight Kubernetes distros.

One of the benefits of these lightweight solutions is that you will basically find a Helm chart for any serious application out there, whereas with Nomad you might have to figure out how to deploy it yourself. For instance, to deploy Cassandra you will find a Helm chart, but with Nomad you might find a blog post that does it, or have to figure it out yourself. To the best of my knowledge that's how it is, but maybe I am wrong?
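To make the Cassandra example concrete (using Bitnami's public chart; the release and file names are arbitrary):

    # Kubernetes: a packaged chart does the heavy lifting
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-cassandra bitnami/cassandra

    # Nomad: you write and run the job spec yourself
    nomad job run cassandra.nomad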


microk8s and k3s aren't fit for or designed for production; they're mostly for testing/experimenting.

And even with managed Kubernetes, there's still a lot of complexity remaining (GCP had to come out with a more managed service, GKE Autopilot, to address some of that). You still have the evolution of the APIs to keep track of with every update you make, and there are still dozens of services that change with each update, each of which can go wrong (even if it rarely does).

> One of the benefits of these lightweight solutions is that you will basically find a Helm chart for any serious application out there, whereas with Nomad you might have to figure out how to deploy it yourself. For instance, to deploy Cassandra you will find a Helm chart, but with Nomad you might find a blog post that does it, or have to figure it out yourself. To the best of my knowledge that's how it is, but maybe I am wrong?

Indeed, and that's the main disadvantage of Nomad IMHO: the ecosystem is much smaller, so there aren't that many ready-made equivalents to Helm charts and operators. Depending on how many of those you need, k8s can save you a lot of time.


I don't agree with you. What aspects of microk8s or k3s make them experimental? That used to be the case, but k3s is now one of the core offerings from Rancher. The same goes for microk8s. The purpose of Autopilot is something else; if you are talking about pure orchestration, bare-minimum Kubernetes is actually not a bad option. The API will keep evolving, but basic objects like Deployments and StatefulSets, which you need for a PaaS-like experience, are quite stable.


> K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

> [microk8s] Low-ops, minimal production Kubernetes, for devs, cloud, clusters, workstations, Edge and IoT.

Microk8s started as an easy alternative to minikube for local dev.

k3s started as a simplified version of k8s for testing/experimenting on RPis, etc.

Today both seem to focus on IoT/"edge". Both do clustering and HA of the control plane though, so are in theory usable in production.

However, why would you use either of them in production? Yes, it's easier than vanilla k8s, but it still has a lot of moving parts, and to top it off, it's Rancher's or Canonical's specific flavour of those moving parts (e.g. microk8s uses dqlite for storage instead of etcd). So you might stumble on platform-specific edge cases, and you still have a big part of the k8s complexity to deal with (microk8s tries to abstract some of the complexity with wrappers, but when they fail, you're screwed).

> API will keep evolving but basic objects like deployments and statefulsets that you need for paas like experience are quite stable

Stable now, but Ingress was in beta for quite some time, and when the beta API gets deprecated, you have to adapt.
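The Ingress migration is a concrete example: the beta API was removed in Kubernetes 1.22, so every manifest written against it had to be updated:

    # Before (removed in Kubernetes 1.22):
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress

    # After (stable since 1.19):
    apiVersion: networking.k8s.io/v1
    kind: Ingress

And since v1 also restructured the backend fields, it wasn't just a one-line edit.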


> And then there's also Nomad, which is drastically simpler than k8s

That's normal, since Nomad is only about deployment, whereas k8s is about full cluster management.


Is it though? I thought Nomad was also an orchestrator. Let me look it up.


https://www.nomadproject.io/docs/nomad-vs-kubernetes

> Kubernetes aims to provide all the features needed to run Docker-based applications including cluster management, scheduling, service discovery, monitoring, secrets management and more.

> Nomad only aims to focus on cluster management and scheduling and is designed with the Unix philosophy of having a small scope while composing with tools like Consul for service discovery/service mesh and Vault for secret management.


Yep, it's an orchestrator that can orchestrate pretty much anything (Docker, QEMU, Firecracker, LXC, etc.)


Yup that's my understanding.


I'm still relatively green, so there are likely a bunch of Kubernetes nightmare scenarios I haven't encountered, but I recently stood up microk8s to provide workers for Jenkins and GitLab CI, and I thought the ergonomics of it were great: easy to get going, easy to deploy stuff with the integrated helm3, easy to access the dashboard and get metrics out of it.
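For anyone curious, the setup is only a handful of commands along these lines (the Jenkins chart here is the public one from charts.jenkins.io; the release name is arbitrary):

    sudo snap install microk8s --classic
    microk8s enable dns helm3 dashboard
    microk8s helm3 repo add jenkins https://charts.jenkins.io
    microk8s helm3 install ci jenkins/jenkins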

I'm sure there's still a gap to be bridged between that and a PaaS which you literally just add as a git remote. But I don't think it's huge.


Microk8s is great for local development, but its operational complexity doesn't come close to that of running k8s in a production environment.

Having been on both sides of the aisle, in my opinion k8s has great UX for consumers, but is a nightmare for the ops teams who maintain it. For a self-hosted version, anyway.


That's good to know. I've heard others say that it's fine if you just have a cluster of a few nodes, a private cloud, that kind of thing, particularly if it's "throwaway" compute like CI workers, as opposed to something genuinely high-availability.

Now, all that said, Canonical certainly advertises microk8s as being production-ready, production-grade, and suitable for use in production environments, for example in [1]. It definitely seems like it's meant to be far more serious than, say, minikube, which explicitly is just for local development.

Can you speak to specific limitations with microk8s, or point to resources which go into more depth on this?

[1]: https://microk8s.io/high-availability


Great question. I have been researching the same topic for the past 6-8 months. The problem is that they have only started advertising it as production-grade very recently, so I am not sure what limitations you will hit. Having said that, k3s running in production has the same issues that you will see in a managed Kubernetes cluster at a cloud provider.


MicroK8s and K3s are actually touted as orchestrators for production too. I know at least one organization that uses k3s in production. Of course it is a nightmare at scale, but running services without Kubernetes at that scale is even more nightmarish.


I'm aware of k3s being production-ready. I was not aware of microk8s being production-ready. TIL!


I've learned this too, and that's why at my company (https://primcloud.com) we're obviously building a PaaS for those who want the Heroku/Netlify experience, but we're also building it in a way that we plan to package up and offer as an enterprise solution you can install on your own infrastructure, like GitHub Enterprise. This allows you to have the same experience but be in full control.


I love the way you are shamelessly plugging your product in the discussion. Is it container orchestration underneath? How does the app get deployed?


Haha, not trying to shamelessly plug, but it's hard to talk about the subject, including features we're building, without actually mentioning it.

Yes, it's container orchestration. We're built on top of Kubernetes.

Our idea for the enterprise version is that you just deploy your own Kubernetes cluster, then install our Helm chart or whatever, and it bootstraps and sets up the platform on your cluster.


I disagree; most cloud services give it away for free, like Azure DevOps.


Turns out there's an awful lot more to 'running your app' than finishing it and pushing to GitHub, which was the workflow Heroku promised.

I absolutely want to be able to write small personal projects and have them deploy on my cheap server in a sensible way by simply pushing to my git repository.

At the moment I'm using CapRover to do this, and it's so much better than doing it myself, but I think there's plenty of space to make this experience better.
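For reference, once the server is set up, the CapRover workflow really is just a couple of commands:

    npm install -g caprover
    caprover login     # point the CLI at your CapRover instance
    caprover deploy    # build and ship the current directory as an app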



