
I mean using a remote Docker context over SSH: I run `docker` commands on my laptop, but the containers run on a remote Docker server. My laptop does not run the Docker daemon, so it's just a client.

If you tell me to run

    docker run -v ${HOME}/stuff:/stuff alpine
It will mount /home/stuff on the server, not my own home directory on my laptop. I would have to run another process that rsyncs my local /home/stuff to the server.
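A minimal sketch of that workaround, assuming SSH access to the same host the Docker context points at (the server name and paths are placeholders):

```shell
# First sync the local directory to the remote Docker host
# (myserver and /home/me/stuff are illustrative names).
rsync -az --delete "${HOME}/stuff/" myserver:/home/me/stuff/

# The -v bind mount path is then resolved on the *server's* filesystem.
docker run --rm -v /home/me/stuff:/stuff alpine ls /stuff
```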


Thanks for the reply. I didn't know that was a thing.

Though, I'm still confused by your example of having to rsync /home/stuff to the server. If you use a named Docker volume, is the remote Docker container somehow using a volume you have located on your laptop? Wouldn't you still have to transfer the volume from laptop to server?


In my README I explain how to set up the Docker context over SSH.
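The short version, using only the standard docker CLI (the context name and `user@myserver` are placeholders):

```shell
# Create a context that points the local docker CLI at a remote
# daemon over SSH.
docker context create myserver --docker "host=ssh://user@myserver"

# Make it the active context; subsequent docker commands run remotely.
docker context use myserver
```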

In my system all of the files get written to the volume from only three places:

    * From the docker image through VOLUME (fresh volumes copy the data from the image on start)

    * From a template container that writes config files.

    * From the container itself, writing files as it runs.
What I don't do is create a directory someplace and manually edit files and mount them.

When I run `docker build` on my laptop, this does copy the build context files to the server (Docker's build was designed this way, and you have to set up a .dockerignore file to exclude files you don't want copied).
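For example, a minimal .dockerignore that keeps the usual heavy directories out of the build context (the entries are illustrative):

```
.git
node_modules
*.log
```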


What if the docker daemon is on a storage server and the host volume of /stuff contains, say, 10 terabytes of photo album content?

    > If you tell me to run
    >  
    >     docker run -v ${HOME}/stuff:/stuff alpine
    > 
    > It will mount /home/stuff on the server, not my own  
    > home directory on my laptop.
That seems about right?

And then you note:

> What I don't do is create a directory someplace and manually edit files and mount them.

But if /stuff is photos, and another container, say, runs ingestion tools, or some other photo collection processing, you don't let it touch the same data volume?

Looking at your repo, I see your docker-compose volumes map e.g. data to data …

    volumes:
      - data:/data
… which is what I do, so I guess I'm not following what you're saying to do differently.

For instance, mounting a volume that can be edited by other containers lets me instantly move large files, or sets of files, between container processing steps: container A does a move (not a copy) from its work path to its destination path, which container B watches as an incoming path.


> Looking at your repo, I see your docker-compose volumes map e.g. data to data … volumes: - data:/data

There are two ways to mount a volume: one where the source starts with a /, and one where it doesn't:

    /some/directory:/data

    some-volume:/data

The first is a bind mount of a host directory.

The second mounts a named volume.

I suggest doing the second so that the volume can be managed directly by Docker through `docker volume create|rm` or `docker compose up|down [-v]`.
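A quick sketch of the named-volume lifecycle (the volume name and alpine image are illustrative):

```shell
# Create a named volume, managed entirely by Docker.
docker volume create notes-data

# Any container can mount the volume by name; data persists
# across container runs.
docker run --rm -v notes-data:/data alpine sh -c 'echo hello > /data/greeting'
docker run --rm -v notes-data:/data alpine cat /data/greeting

# Remove it when the data is no longer needed.
docker volume rm notes-data
```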


> What if the docker daemon is on a storage server and the host volume of /stuff contains, say, 10 terabytes of photo album content?

In this extreme example I think a bind mount probably makes sense, especially if the files are already there. But a named volume is just stored under /var/lib/docker/volumes/some-volume-name, so as long as /var/lib has 10TB free I don't see the problem.

I can use my sftp container [1] to be able to sftp directly into a volume, but I've not yet transferred 10TB with it :)

[1] https://github.com/EnigmaCurry/d.rymcg.tech/tree/master/sftp


docker etc is on a flash partition, data is on a few 16 drive arrays…

… but this made me discover this:

    docker volume create \
     --driver local \
     --opt type=cifs \
     --opt device=//uxxxxx.your-server.de/backup \
     --opt o=addr=uxxxxx.your-server.de,username=uxxxxxxx,password=*****,file_mode=0777,dir_mode=0777 \
     --name cif-volume
So that simplifies a lot for my use cases.

Thank you for all the replies!


That's really cool, I've only ever used the local driver with default settings, which stores to /var/lib/docker, but I guess with those --opt flags even the local driver can mount different sorts of storage backends. Nice find!


Replying in series as I see more of your edits:

> For instance, mounting a volume that can be edited by other containers lets me insta-move large files or sets of files between steps of containers,

As long as the containers are on the same Docker host, many containers can mount the same volume.
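A sketch of the move-between-steps pattern from earlier in the thread, using one shared named volume (the volume name and paths are placeholders):

```shell
# One named volume shared by both pipeline steps.
docker volume create pipeline-data

# "Container A" writes a large file into its work path, then moves it
# into the incoming path. Within one volume, mv is a rename: no copy.
docker run --rm -v pipeline-data:/data alpine sh -c '
  mkdir -p /data/work /data/incoming
  dd if=/dev/zero of=/data/work/big.bin bs=1M count=10 2>/dev/null
  mv /data/work/big.bin /data/incoming/big.bin
'

# "Container B" sees the file in its watched path immediately.
docker run --rm -v pipeline-data:/data alpine ls /data/incoming
```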


Just run docker compose on the remote machine (and copy the compose file over with scp). I don't see much benefit in running docker compose on a different machine than the Docker daemon. You can even automate it with Ansible, Terraform, or similar, if you like it neat.
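That workflow is just two commands (the server name and remote path are placeholders):

```shell
# Copy the compose file to the server, then run compose there over SSH.
scp docker-compose.yml user@myserver:/srv/app/docker-compose.yml
ssh user@myserver 'cd /srv/app && docker compose up -d'
```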

Or just go with k8s if you need a more complex setup.


That is an entirely valid approach, but that way the server feels like it requires more maintenance. I can destroy a Docker named volume through the lifecycle of the container, but I can't delete a system directory. (edit: I mean that `docker compose down -v` deletes a named volume, but not a system directory)

Also, it's really nice to be able to use `docker context use SERVER_NAME` to switch contexts (servers) very easily.
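For example, with two remote hosts registered as contexts (the names and addresses are placeholders):

```shell
# Register each remote Docker host as a named context.
docker context create staging --docker "host=ssh://me@staging.example.com"
docker context create prod --docker "host=ssh://me@prod.example.com"

docker context use staging   # docker/compose commands now hit staging
docker context ls            # the active context is marked with *

# Run a one-off command against another context without switching:
docker --context prod ps
```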



