If you wonder why that is, it's because Docker for Mac actually runs the Docker engine inside a Linux virtual machine.
That means that whenever you call Docker, it has to copy your "context" to the virtual machine, then actually run the Docker invocation inside the VM. This gets slow and annoying very fast for even 10MB "contexts".
"Context" is anything in your working directory, more or less.
Another common slow down is bind mounts. Each file system operation is an RPC request between Docker for Mac and the Linux VM.
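Docker for Mac exposes consistency flags on bind mounts (`cached`, `delegated`) that relax the host/VM sync guarantees in exchange for fewer round trips. A compose sketch — service name, image, and paths are placeholders:

```yaml
services:
  app:
    image: node:20            # illustrative image
    volumes:
      # "delegated": the VM's view may lag the host's, trading
      # strict consistency for fewer cross-VM filesystem RPCs
      - ./src:/app/src:delegated
```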
I’ve never been super happy with Docker for Mac. I did a pretty deep dive into figuring out why it’s so slow with no satisfying conclusion: https://stackoverflow.com/q/58277794/30900
This is also true for Windows. WSL2 made containers faster, but FS is painfully slow. Sounds like the best thing (apart from switching) is running Linux in a VM (Hyper-V does OK) and keeping the IDE, Docker, and all the data in there.
> WSL2 made containers faster, but FS is painfully slow.
If you keep your code base in WSL 2's file system it's really really fast. Even thousands of tiny asset files will get picked up and compiled through multiple Webpack loaders in ~100ms on 5+ year old hardware.
What I really had an issue with was the time span between modifying a file in an editor (running in Windows) and the change actually being present in the container. I had to restart tests way too often because the old code was executing, which was really annoying, as sometimes you can't tell whether it failed because of the old code or because the implementation was wrong.
The problem here is the delay between Windows and WSL2; once the files are in WSL2, it's fine.
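The distinction above can be sketched as a quick path check. This is a heuristic only, assuming a default WSL 2 setup where Windows drives appear under `/mnt/*` via the 9P protocol and everything else lives on the distro's native ext4 root:

```shell
# Rough guide to where a path lives in a default WSL 2 install:
# /mnt/* paths are Windows drives mounted over 9P (slow per-file RPCs);
# anything else is the native Linux filesystem (fast).
where_is() {
  case "$1" in
    /mnt/*) echo "windows-drive-via-9p" ;;
    *)      echo "native-linux-fs" ;;
  esac
}

where_is /mnt/c/Users/me/project   # windows-drive-via-9p
where_is "$HOME/project"           # native-linux-fs
```

Keeping the repo under `$HOME` (and editing it through the editor's remote/WSL mode) sidesteps the 9P hop entirely.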
FreeBSD once had Docker working through the Linux compatibility layer. No virtualisation. I always thought it would have been really cool to see that ported to macOS.
That depends a lot on whether the FreeBSD compatibility layer is kernel- or user-space, as macOS, contrary to common belief, does not use a FreeBSD-derived kernel but one derived from the CMU Mach project. And yes, I know Jobs said otherwise, but he was never a trustworthy source for anything.
I think macOS actually uses a different binary executable format (Mach-O) than the ELF one shared between Linux and FreeBSD, so the kind of binary-level compatibility that Linux and FreeBSD share might not extend to macOS.
This continues to baffle me. I have an Apple laptop, I run macOS on my laptop, but I do all my development on a cloud VM.
I'm surprised to learn that there are developers out there who have the cash for Apple hardware, but don't have the cash or connectivity to not run more than an IDE locally, with everything else happening remotely.
My devices (laptops, tablets) are all glorified thin clients as far as development work goes. The meat never happens locally.
Am I a rare case? Are there reasons why this isn't palatable to most people that I'm missing, besides cost and connectivity? Do most people genuinely still not have the option of decent connectivity (either fixed or wireless or a combination of the two)?
> don't have the cash or connectivity to not run more than an IDE locally
It's not about lack of money. I prefer developing everything locally because it feels snappier to me, even with a good internet connection, or even a local server. It might not make a difference to you but that's what I prefer.
Thanks for the responses, it's why I'm asking, as I genuinely don't get it.
I think my view stems from the days of having to re-install my Windows workstation every 6-12 months in order to regain decent performance, so I moved as much as I could to a 'different host' (usually a local Linux server) to minimise the pain of backups/restores when rebuilding the workstation.
You definitely do not have to re-install Windows every 6-12 months for decent performance. Just don't install every doodad and hopefully don't have corporate IT pushing 10 management applications running in the background.
Developing in a cloud VM is painful in other regards, specifically when it comes to IDEs. Basically your options become using a local IDE with slow access to your files (not fun when PHPStorm needs to re-index your vendor directory), or using a cloud IDE (none of which I know of are particularly good for PHP, nor as snappy as running your IDE locally).
Of course, you can just use a text editor instead of an IDE, but once you get used to being able to jump to definitions, get method signature autocompletion, refactoring, syntax checks etc, it's kind of hard to go back to just a text editor.
I’ve found the best middle ground is to use a Mac and then mostly develop in a local VM. Snapshots/etc are wonderful, and they can be transferred from machine to machine, so “setting up my development environment” is as simple as “install Parallels.”
> I prefer developing everything locally because it feels snappier to me, even with a good internet connection…
oarsinsync’s IDE is sending each keystroke from his local computer to a cloud machine, where the source code lives. That source code compiles, executes, is tested in the cloud. Is this the setup you’re comparing with?
The theory behind this is sound: “When the size of the program is smaller than the data, move the program to the data.” In this particular instance, the code edit keystrokes are smaller than the total amount of source code. If the complete source code, packaged or compiled program has to be moved to the cloud anyway, it saves a lot of data transfer to just move the edits.
This assumes you’re running your application in the cloud, and the trade-off is that you need a reliable network connection, otherwise you might find yourself unable to edit when the network is down.
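The arithmetic behind this can be sketched with purely illustrative numbers (none of these are measurements):

```python
# Back-of-envelope comparison: stream edits to the cloud vs. re-upload
# the whole source tree. All figures below are illustrative assumptions.
repo_size_bytes = 200 * 1024 * 1024   # a 200 MB source tree
bytes_per_edit = 16                   # assumed size of one edit event on the wire
edits_per_day = 20_000                # a busy day of typing

edit_stream = bytes_per_edit * edits_per_day   # keystrokes shipped to the data
full_sync = repo_size_bytes                    # data shipped to the keystrokes

print(edit_stream)   # 320000 -- ~0.3 MB/day of edits
print(full_sync)     # 209715200 -- 200 MB per full sync
```

Even with generous assumptions for the edit stream, it is orders of magnitude smaller than moving the tree, which is the whole "move the program to the data" point.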
I believe when latency is a concern, the calculus skews much more in the direction of providing feedback locally based on the data. For instance, a reasonable strategy could be to make updates to the file locally in the IDE to provide immediate feedback as the user types, but then send those updates to the server where other higher-latency features such as code completion or diagnostics are run. Sadly I have yet to see such a setup.
I use "cloud" as a catch-all that covers local and remote VMs, that are all built using standard templates. My local VMs are LAN-local, not host-local. My remote VMs are all <10ms away.
My particular workflow involves running my IDE locally, and having files hosted remotely. My IDE is plenty snappy, running my code is plenty snappy, but I'm slowed down by a need to commit + push changes to a repository.
I have it on my stack to do something like syncthing to keep a local + remote cache without needing to explicitly go through version control, but I suspect that'll just shift the latency out of my workflow, and trip me up in different ways.
I do my dev locally. It's so much faster, and I have a dynamic IP, so the work of setting up a private VPN or resetting the firewall every day would drive me mad. I've been thinking about setting something up so I can do dev work outside during the nice weather on a highly portable but underpowered laptop, but so far my unwillingness to go through the effort of setting it up exceeds my desire to have it set up. (In the past I've handled this quite fine with a powerful laptop. But right now my powerful laptop has zero nanoseconds of battery power, and the idea of discarding an otherwise working laptop bothers me on environmental grounds. It's approaching its fourth year of life, but I can't find anything that obviously exceeds its specs.)
It also means I just don't have to worry about things when the internet goes down. Back in the olden days of working in an office (at a company where most people took the work-at-home option), I can remember how often the other staff would ask me "is the internet down?" and my answer would be "I don't know, let me check". My home internet connection only seems to go down for the moment the IP changes, but office internet connections seem to be subject to IT staff that need to constantly change something, upgrades, who knows what the excuse is today.
However, I do my "local" work in a virtual machine or a Docker container. I use GNU/Linux as the dev OS and as the test/production OS, but that's because I'm using what I'm comfortable with - there's no technical reason I should do it. My co-workers have been quite productive using macOS and Windows. This probably depends on your language environment: if you're using a JetBrains IDE for your inspections, I think it's not hard to be OS-agnostic. But when I've used LSP servers, they've typically expected to run locally, and similarity helps.
I don't know why doing development in Linux is an unpopular opinion.
I've been using Linux on my workstation and laptop for the past 20 odd years and I very much prefer it to the MacOS environment on the company issued Macbook.
Doing development in Linux is not an unpopular opinion.
People who do develop in Linux telling everyone else to develop in Linux because not developing in Linux is Wrong and Bad, however, tends to be unpopular.