
It makes life so much easier. Time is non renewable, and if you want to pull a project apart for whatever reason, you still can.

“docker pull”, deploy, and one can move on to the next whatever. You can deploy this to a Synology NAS, a Raspberry Pi, or Heroku with a few clicks (or even an appropriately configured router that supports containers if you’re not running something providing this functionality natively).
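The happy path really can be that short. A sketch of the pull-and-run workflow, where the image name, tag, port, and volume are all placeholders rather than a real project:

```shell
# Fetch a pinned image and run it detached, publishing its port
# and giving it a named volume for persistent state.
docker pull ghcr.io/example/app:1.4.2
docker run -d --name app \
  -p 8080:8080 \
  -v app-data:/var/lib/app \
  ghcr.io/example/app:1.4.2
```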

(DevOps/infra monkey before moving to infosec, embrace the container concept)



Let's not overstate things here. It may well look like "docker pull", deploy, nothing, ok, how do I configure this thing, oh goodie here's the uncommented yaml, deploy again, strange error, headscratch, oh it's dependent on using the .68.x network which I've already used elsewhere, let's rename those docker networks, deploy again, what?, oh it must have initialized a temporary password to the database when it didn't come up, let's wipe it all clean and pull again because I have no idea what kind of state is in those persistent volumes, deploy, rats! forgot the network renumbering, wipe clean, configure again, deploy again, yay!
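For what it's worth, the subnet clash above is avoidable without renaming anything: Compose lets you pin a project network to a range you know is free. A sketch, with made-up service and subnet values:

```yaml
# docker-compose.yml (fragment)
services:
  app:
    image: example/app:latest
    networks: [appnet]

networks:
  appnet:
    ipam:
      config:
        - subnet: 172.30.0.0/24   # pick a range not already used locally
```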

Provided you already turned off everything that can interfere with this stuff, including IPv6, any security like SELinux, grsecurity and friends, and you let it administer your netfilter firewall for you. Don't forget to check if you accidentally exposed some redis instance to the public Internet.
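The accidental-redis-exposure failure mode is real: a published port binds to 0.0.0.0 by default, and Docker manages its own netfilter chains, so host-level INPUT rules may not protect you. One hedged mitigation sketch is to bind published ports to loopback explicitly:

```yaml
# docker-compose.yml (fragment)
services:
  redis:
    image: redis:7
    ports:
      - "127.0.0.1:6379:6379"   # loopback only, not reachable from outside
```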

(And yes, I have embraced the concept and work daily with similar things, albeit at a larger scale. Let's just not kid ourselves that it's easier than it is, though. Just because an out-of-the-box deploy goes sideways doesn't mean you are dumb.)


Almost none of what you just mentioned has anything to do with Docker, and you can easily have that much trouble just running a binary. (In fact, I've found that many projects have better documentation for their Docker image than for running it natively.) Yes, there are some Docker-specific things you sometimes have to debug (especially with networking), but I've had far more trouble getting software running natively on my machine due to mismatches in local configuration, installed library versions, directory conventions, etc vs what's expected. It's also far easier to blow away all the containers and volumes and start over with Docker; no need to hunt down that config file in an obscure place that's still messing with the deployment.


This is a strange argument to me. It’s essentially that the additional complexity of docker compose is acceptable because other things are unnecessarily complex. The problem is complexity. There are many great projects that are just “build the binary, edit config file, and run it,” and why should things be more complex than that? It’s wild to me what people will put up with.


> It’s essentially that the additional complexity of docker compose is acceptable because other things are unnecessarily complex.

Not quite. My point was that the complexity of Docker is, in many cases, worth it because it hides a lot of the complexity of running software. Yes, you trade one problem for another, but the nice thing about Docker is, if you really go all in on it, the knowledge of how to use it transfers to pretty much any software you want to run.

For example, I wanted to run a JVM-based service the other day. Having never done this before, spinning it up with Docker took two minutes—I didn't have to figure out JDK vs runtime, which version I needed to install, etc. And yet, if I want to configure it past the defaults in the future, the image exposes several environment variables that make it easy.
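The shape of that two-minute spin-up, sketched with a hypothetical image and variable names (many JVM images expose something like `JAVA_OPTS` for tuning past the defaults, but the exact names vary per image):

```shell
# No JDK install, no version matching -- the image carries its runtime.
docker run -d --name jvm-service \
  -e JAVA_OPTS="-Xms256m -Xmx512m" \
  -p 8080:8080 \
  example/jvm-service:2.1
```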


> none of what you just mentioned has anything to do with Docker [...]

> there are some Docker-specific things you sometimes have to debug

Not sure what to make of this. Networking was specifically called out as an example.

But there are stories to share about the storage layer too. Lots of driver-specific things leak through that abstraction.

One may use Docker for a lot of things, but ease of operations is not one of them. There's a reason both Red Hat and Ubuntu had to come up with their own formats, neither of which is trivial to use, when apparently there was no way they could have done it with Docker instead. They're unlikely to both be wrong here.


To be fair, none of those operations require a re-pull; not a single one.


That's the spirit!


Not sure of the intention, but I still don't see how debugging config in Docker is inherently different from doing it natively.


I upgraded my PiHole running on an Allwinner H3 SBC last year. It wouldn't start; it turned out some indirect dependency wasn't compiled for the ARMv7 platform.

No worries, just specify the previous version in my launch script, literally changing a couple of digits, and I'm back up and running in seconds.

I'm sure I could get it done using apt, but it was literally changing some numbers in a script and rerunning it.
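The rollback really is just editing digits. A sketch of what that launch-script change looks like; the tags here are illustrative, not the actual versions involved:

```shell
# Before: the tag whose dependency broke on ARMv7
#   docker run -d --name pihole pihole/pihole:2024.07.0
# After: roll back by changing a couple of digits and rerunning
docker run -d --name pihole pihole/pihole:2024.06.0
```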

As someone who just wants things to work, Docker has made things significantly better.


To add to this, for me it's not specifically about the ease of setup, which isn't that much easier (although it's nice that it's standardized). It's more about the teardown if it's not something for you. Services can leave a lot of residue in the system: files in different places, unwanted dependencies, changes in system configuration. Removing a Docker container is very clean, with the remaining stuff easily identifiable.

Makes trying new stuff way less troublesome.
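The clean-teardown claim maps onto a few standard commands; a sketch, assuming the service was started with Compose:

```shell
# Remove the project's containers, networks, and named volumes in one go.
docker compose down --volumes

# Anything left behind is easy to enumerate and reclaim.
docker volume ls
docker image prune
```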


Can't deploy to a BSD server :(

Give me raw source code or binaries and a configuration file in /etc or $HOME any day of the week.


> It makes life so much easier.

If running an OS that supports docker...


If you're running an OS that doesn't support Docker, you have a very esoteric use case.



