I have been exploring nix for the past few months, and my experience has been both exhilarating and frustrating. On one hand, I find it hard to imagine not using nix now; on the other, I hesitate to recommend it to colleagues due to its steep learning curve, UX issues and potential for footguns.
I sincerely hope the nix community improves the UX to make it more accessible to new users. For those willing to invest the time to learn it, though, nix is extremely useful and highly recommended.
There are two sides to this problem: the first is to improve the UX, but the second is to clearly describe a compelling reason for people to adopt. It is very tempting to blame only the first, but I think we also need to tell a better story and highlight the values more effectively. That would give people a reason to push past the UX issues in the hopes of achieving those values.
For example, people seem to have accepted the benefits of using Terraform in spite of various difficulties, like learning its language or needing to hire specialists. The mantra of "infrastructure as code" is enough to drive adoption. What is our mantra? We need to accept that "reproducibility" isn't quite working, and that we need either a clearer message or a better explanation of the current one.
> but the second is to clearly describe a compelling reason for people to adopt.
Build source straight from Git:
nix run github:someuser/someproject
Need a different version?
nix run github:someuser/someproject?ref=v1.0.0
Wanna replace some dependency?
nix run \
--override-input somelib github:someotheruser/somelib \
github:someuser/someproject
Wanna fork?
git clone https://github.com/someuser/someproject
# do your changes
nix run someproject/
And the best part is, it's conceptually very simple, it's mostly just a bunch of symlinks and environment variables behind the scenes. If you wanna inspect what's in a package, just `cd /nix/store/yourpackage-HASH` and look around.
NixOS just feels like a distribution build from the ground up for Free Software. The "reproducibility" in every day use just means that stuff won't randomly break for no reason. And if you don't wanna go the full NixOS route, you can just install the Nix package manager itself on any other distribution.
That said, the part where Nix gets painful is when it has to interact with the rest of the software world. Things like software that wants to auto-update itself really does not fit into the Nix ecosystem at all and can be rather annoying to get to work.
There are of course numerous other pain points, missing features and all that. But being able to flip between versions, fork, compile and so on feels just so much better than anything else.
I disagree, I think the value proposition for reproducibility is clear, it's just that the learning curve "is too damn high!" I'm highly motivated to learn and use Nix (or Guix for that matter) but I've bounced off of it three or four times now, and I'm the kind of weirdo who learns new PLs for fun.
Someone once said that you don't learn Nix, you reverse engineer it.
I think the reason Nix is hard isn't the language (although the lazily evaluated functional language is definitely part of it), but the paradigm shift; and that shift is necessary for reproducibility.
Imagine if you had a distro where you couldn't depend on existing state (so you couldn't just compile something into /usr/local), and where you had to create a package definition with all dependencies explicitly defined.
It's kind of like using FreeBSD ports (or Gentoo) without precompiled packages, where you were forced to add every package manually before using it. You would complain that it's hard even if all you had to write was a Makefile and a shell script.
I don't think this will get easier until Nix gets embraced by other package managers, making it easy to specify those components as dependencies. Now that flakes can be composed, this is possible.
Two of the major differences between terraform and Nix that I see are 1. it’s possible to muddle through in terraform and 2. Hashicorp has put a non-trivial amount of effort into documentation for all levels of users. I’ve taken a stab at using Nix for Rust projects and could not even get to a point where I had something that functioned. I found plenty of material online, but was it out of date? Idiosyncratic? Did it use flakes or not? I suppose I could have contorted my existing project to meet the examples I found in various GitHub repos, but my stuff is bog-standard Rust so I don’t know that I’d be willing to. As for documentation, what should I, as a new and invested user, be looking for? Are flakes the future? Are they a distraction? Why are all the suggested docs I could find several-year-old guides on blogs? There’s 20 years of information floating around and the official project documentation, well, I don’t know who the audience is but it’s not learners.
It’s a shame. The promise of Nix/NixOS is really interesting — being able to deterministically create VMs with a custom user land is desirable to me — but in practice I can’t even get a simplistic project to compile, let alone something elaborate. Terraform is jank but it’s not a whole language that needs to be learned, seemingly, before the official docs start to become coherent in their underlying context.
I don't think it's very valid to compare the two. It is fair to compare the experiences of using them, but they aren't meant to solve the same set of issues. In fact, they are better together in my experience: I use nix to manage my terraform configurations with a lot of success. It reduces my boilerplate and helps me build abstractions on top of HCL.
If you ever decide to take a stab at nix again, consider looking at https://github.com/ipetkov/crane and using flakes. I've got it down to the point that I can get a new Rust project set up with nix in about 30 seconds, with linting, package building, and test running all in the checks.
Crane is one of the libraries(?) I came across. Couldn’t get it to work on an existing multi-crate workspace project. The crane documentation as-is didn’t provide enough context to debug the errors I saw in the process of trying to muddle through. And then looking further afield ran into all the documentation, bootstrapping issues I alluded to above. Although I don’t doubt I could start a new project the point was to add new capability to existing work, for me.
> We need to accept that "reproducibility" isn't quite working...
I guess it doesn't sell Nix as strongly as it could..
But it's hardly for a lack of enthusiasm on Nix users' part. -- Rather, I've seen a few "what's nix good for anyway" comments, and they result in many lengthy replies extolling nix.
Agreed, reproducibility is only one aspect of Nix and doesn't quite capture the whole picture. That's why so many newcomers see Nix as nothing more than a Docker replacement. There are also too many misconceptions about Nix the language that scare people off.
I'd like to see more being discussed about:
* Its unique ability to treat packages as programmable data (i.e., derivations)
* Its use case as a building block for deployment systems that know about and integrate with packages
* Its JSON-like simplicity
They're all central to the Nix experience, and yet they're often overlooked in Nix discussions.
We find it immensely useful to program our packages (technically, "derivations"):
It's trivial to combine multiple packages into one, e.g. via the 'nixpkgs.buildEnv' function. We use this to define a package called 'tools', containing shells, linters, differs, VCS, converters, data processors, etc. Our devs only need to install one package; and if they find something useful enough to share with the team, they can add it to that 'tools' package.
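A rough sketch of that 'tools' pattern (the tool list here is illustrative, not the actual one described above):

```nix
# tools.nix -- one installable package bundling a team's CLI tools
{ pkgs ? import <nixpkgs> {} }:

pkgs.buildEnv {
  name = "tools";
  paths = with pkgs; [
    bashInteractive  # shell
    coreutils
    git              # VCS
    ripgrep          # searcher
    jq               # data processor
    shellcheck       # linter
  ];
}
```

Devs can then install it as a single package (e.g. `nix-env -if tools.nix`), and adding a tool for the whole team is a one-line change to `paths`.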
This approach is also modular/compositional: we define a 'minimal-tools' package containing bash, coreutils, git, diffutils, sed, grep, archivers, compressors, etc. which is enough for most shell scripts. Our 'tools' package is defined using that 'minimal-tools' package, plus a bunch of more-interactive tools like process monitors, download managers, etc. The reason we made this modular is so each of our projects can include that 'minimal-tools' package in their development environments, alongside project-specific tooling like interpreters (NodeJS, Python, JVM, etc.), compilers, code formatters, etc. (depending on the whole 'tools' package felt like bloat, and was easy to avoid)
(Outside of work, my personal NixOS config takes this even further; defining different packages for e.g. media tools, development tools, document tools, audio tools, etc. and splitting those into separate packages for gui/cli tools. That's not particularly "useful in practice"; I just like to keep my system config organised!)
Another very common way of programming with packages is to override their definitions. For example, we override the whole of Nixpkgs to use the 'jre11_headless' JVM. This is done by the following 'overlay' function (all of the dependency-propagation happens automatically, since Nixpkgs uses laziness):
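A minimal sketch of such an overlay (the exact attribute names to override depend on which JRE attributes your packages consume, so treat these as assumptions):

```nix
# overlay.nix -- make the whole package set build against the headless JRE 11
self: super: {
  jre = super.jre11_headless;
  jdk = super.jdk11_headless;
}
```

Because Nixpkgs is lazily evaluated, every package that refers to `jre`/`jdk` picks up the override automatically; no manual dependency propagation is needed.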
Overriding is also useful for individual packages, e.g. if we want to alter something deep down a dependency chain. It's also useful for applying patches or running a "fixup" script to the source, without having to e.g. fork a git repo.
It's super easy to extend packages. For example, I was playing with Kafka and wanted the standard package to coexist with a separate install of Kafka with some jar files I needed. It was super easy to create a new package of Kafka with extra install steps that downloaded and placed those jar files where I needed.
I'm surprised that nix never ended up using augeas for package configuration, because last I checked every upstream build option has to be reproduced in the Nix expression by the packager.
It's easy enough for users to extend existing packages in Nix that it's never necessary for packagers to explicitly add support for every conceivable upstream build option out there. For example, the following three lines will create a new package based on an existing one with custom configure flags added.
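Something along these lines (package and flag names are made up for illustration):

```nix
# Build a variant of an existing package with an extra configure flag
myFoo = pkgs.foo.overrideAttrs (old: {
  configureFlags = (old.configureFlags or []) ++ [ "--enable-extra-feature" ];
});
```

The `overrideAttrs` function takes the original derivation's attributes and lets you return modified ones; everything not mentioned is inherited unchanged.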
Package options that tweak the build flags are there for convenience and not a requirement. Many of them are there for internal use in the Nixpkgs repo to provide variants of the same package.
I wonder if the corporate backing behind Docker has anything to do with Nix being adopted less. There’s a lot of overlap between Nix and Docker, and Docker had major corporate guns behind it from early on. Reproducibility as you mentioned, is easily achieved with Docker, and there is no need to learn the Nix ecosystem.
Personally, I want to learn Nix, but I've never forced myself to do it because it’s so much easier to make a Docker image do what I want. Nix is the pinnacle of a reproducible environment that doesn’t randomly break, but Docker is 80% of the way there and much easier.
It’s like comparing trucks (Docker) with planes (Nix) for logistics. I can pay out the wazoo for a plane to get my package there over night, or I can pay a small fraction to wait a few days.
> Reproducibility as you mentioned, is easily achieved with Docker, and there is no need to learn the Nix ecosystem.
Docker isn't reproducible; I have no idea where this myth came from. The core feature of Docker is the ability to snapshot ("image") the result of a script ("Dockerfile"). That's no more "reproducible" than a statically-linked binary.
> Personally, I want to learn Nix, but Ive never forced myself to do it because it’s so much easier to make a Docker image do what I want. Nix is the pinnacle of a reproducible environment that doesn’t randomly break, but Docker is 80% of the way there and much easier.
Personally, I find Docker incredibly difficult. I finally bit the bullet when I took over maintenance of some systems at work, which had been written with Docker. The documentation was awful, the management systems keep falling over, and we have no idea what's actually running (it's tagged ":latest", but so is everything).
I avoid using anything Docker-related when I'm building new systems. It's easier to build container images with jq, tar and sha256sum anyway.
Docker serves the basic functionality of a container as well as any other. The workflows that forced you to migrate away from it are more complicated, I’m sure, than a procedurally defined dev environment.
What you’re calling a snapshot is what I meant when I said reproducible. Images allow me to have reproducible starting points to add other parts to my environment. Over time they get out of date depending on exactly what happens, but it’s much more controllable than a basic Ubuntu installation.
Your comment further pushes me away from Nix, really, even without mentioning it. It makes it seem even more like a tool that’s not for me, but rather for much more complicated things.
> Docker serves the basic functionality of a container as well as any other
I disagree. We've had multiple production outages caused by the Docker daemon misbehaving (usually causing us to run out of disk space). It also makes operations far more difficult than necessary; e.g. want to copy a .tar.gz file to AWS? Sorry, Docker's gonna insert itself in the workflow, and over-complicate the authentication[1]. Instead, I have a script which runs [2] in a loop; much easier!
> The workflows that forced you to migrate away from it are more complicated, I’m sure, than a procedurally defined dev environment.
The container I mentioned literally just runs `java -jar` with a pre-built .jar file. Nothing fancy. Still, I have no idea what it's running, since Docker allows mutable tags, and there's no reference to a version number, let alone a git revision.
> Your comment further pushes me away from Nix, really, even without mentioning it. It makes it seem even more like a tool that’s not for me
I wasn't talking about Nix, I was talking about Docker. They're very different tools (Docker manages running processes/containers; Nix describes the contents of files). I just really hate Docker, and am baffled when people say it's "easy".
> but rather for much more complicated things.
You're giving me too much credit. It took me days to even get Docker installed. It turned out that despite a mountain of documentation telling me to install "Docker Machine", that's actually been discontinued for years and I should have been installing "Docker Desktop" instead.
What I see as a barrier to entry for Nix/NixOS is not the UX, but the available documentation, or lack thereof. One may consider the docs part of the UX, though. I am in the process of writing a book about NixOS; you can track the progress here:
https://drakerossman.com/blog/practical-nixos-the-book
The emphasis is on how to make it as practical as possible, plus cover the topics which may apply to Linux in general, and in great detail.
Thank you for pointing that out! I have a twitter with the same handle, if you would like to subscribe to that. If not, would you mind a follow-up email when I fix the subscription form?
My main concern is that it puts another layer of abstraction atop an already complex (and at times leaky) abstraction.
I’d love to see more clear docs about what devenv is actually doing under the covers, and how to escape-hatch into Nix land when I inevitably need to tweak something.
Also, similarly, how do I map Nix docs (often just a set of example expressions) into equivalent devenv incantations?
(It’s been a few months since I last looked so maybe things have come along since then.)
I think that's a valid concern. You read the nix pills and think you know what you're doing and then it turns out that the community has wrapped the things you've learned about in things you've never heard of, so you still can't learn from other people's repos.
This. I've been actively trying nix-based tooling on and off for my projects because it legitimately solves the problems around sandboxing, versioning, reproducibility, consistency, etc. Recently, I've been trying asdf (and its faster alternative, rtx), and I keep telling myself "huh, nix could solve this problem better". But, damn, it is infuriating to learn. Like another commenter said, it's the escape hatch that I'm missing so much. I really, really want to convince myself to learn nix The Right Way. But it feels like you face another learning curve when using nix-wrapper tooling.
I still have high hopes for the future of nix. And I believe nix will rise in popularity once the UX-related issues are sorted out.
One thing that has helped somewhat is being more thorough in my exploration of repos that are doing things similar to what I'm trying to do.
I want to use nix with a nim project, so I wrote a script that walks through all of the repos in nimble (nim's package manager) and then filtered for ones that had a flake.nix in the repo root.
Going through them was a helpful dose of context. It's like you gotta approach it from theory to practice and from practice to theory and eventually your efforts will meet in the middle. I think. I have glimmers of meeting in the middle happening.
I expect my main hurdle will be stuff that isn’t in Nixpkgs. Especially for dev stuff, I’m going to want to pull in low-profile GitHub stuff or Python packages. What’s the escape hatch to bring those into the dev environment?
The main difference compared to running commands in a normal terminal is that builds are sandboxed, with no network access by default:
- If your commands need to download some particular files, you can have Nix fetch them separately (e.g. using `fetchurl`, `fetchGit`, etc.) and provide them to your commands via env vars. See https://nixos.org/manual/nixpkgs/stable/#chap-pkgs-fetchers
- If you don't know what will be downloaded, or there's no way to run in an 'offline' mode, then you can specify a hash for the result (making it a "fixed-output derivation"). That will give it network access, and Nix will check that the output matches the given hash, for reproducibility. (You can just make up a random hash to start with; Nix will reject the result and tell you the real hash, which you can copy/paste into the definition :) )
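A sketch of that second approach (the URL is a placeholder; `lib.fakeSha256` is just an all-zeroes dummy hash, and the TLS setup may need tweaking for your case):

```nix
pkgs.stdenv.mkDerivation {
  name = "some-download";
  nativeBuildInputs = [ pkgs.curl ];
  SSL_CERT_FILE = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";  # TLS certs inside the sandbox
  buildCommand = ''
    curl -L https://example.com/data.tar.gz -o "$out"
  '';
  # Declaring the output hash makes this a fixed-output derivation,
  # which is granted network access:
  outputHashMode = "flat";
  outputHashAlgo = "sha256";
  outputHash = pkgs.lib.fakeSha256;  # build once; copy the real hash from the error message
}
```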
The reasons I like this approach include:
- Bash is familiar/traditional and mostly-compatible with e.g. official install instructions provided by many projects, Stack Overflow answers, blog posts and tutorials, etc.
- Powerful/unrestricted, in case we need to do some fiddling between some steps
- Nix often reveals problems with those familiar/traditional instructions; e.g. if some deeply-nested part of an installer happens to run Python, it will fail if Python wasn't explicitly listed in its dependencies (AKA `buildInputs`). Revealing and fixing such things up-front avoids the "works on my machine" problem.
- Bash commands are often tedious and inflexible; so after writing a few of these we may find ourselves wanting more structure, more reusable parts, etc., which is exactly what the helper functions in Nixpkgs provide (like `pkgs.stdenv.mkDerivation`, `pkgs.pythonPackages.buildPythonApplication`, etc.). In contrast, starting off with those helper functions can seem overwhelming, and the benefits may be hard to appreciate immediately.
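As a rough illustration of where that progression ends up, a simple `mkDerivation` package might look like this (all names here are illustrative):

```nix
pkgs.stdenv.mkDerivation {
  pname = "my-tool";
  version = "0.1";
  src = ./.;
  # Everything the build touches must be listed, or the sandbox won't provide it:
  buildInputs = [ pkgs.python3 ];
  installPhase = ''
    mkdir -p $out/bin
    cp my-tool.py $out/bin/my-tool
  '';
}
```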
> What’s the escape hatch to bring those into the dev environment?
One thing you can do is try the Nix package manager on your Linux or macOS system. -- If something's not in nix, you can just do it the way you did things before.
If you want to try NixOS:
1. Writing a package is only necessary for sharing code; you can still just have Python, set up a virtual env, and run the program as you would.
Currently working on a graphical UX where you can create and share flakes without writing Nix code at https://mynixos.com
The site makes it easy to browse indexed flakes and to configure flakes via options and packages. Hopefully the structure provided by the UI can make it easier to get started with Nix flakes :)
A few antipatterns/annoyances I've come across over the years:
Importing paths based on environment variables:
There is built-in support for this, e.g. setting the env var `NIX_PATH` to `a=/foo:b=/bar`, then the Nix expressions `<a>` and `<b>` will evaluate to the paths `/foo` and `/bar`, respectively. By default, the Nix installer sets `NIX_PATH` to contain a copy of the Nixpkgs repo, so expressions can do `import <nixpkgs>` to access definitions from Nixpkgs.
The reason this is bad is that env vars vary between machines, and over time, so we don't actually know what will be imported.
These days I completely avoid this by explicitly un-setting the `NIX_PATH` env var. I only reference relative paths within a project, or else reference other projects via explicit git revisions (e.g. I import Nixpkgs by pointing the `fetchTarball` function at a github archive URL)
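A pinned import looks roughly like this (revision and hash are placeholders to fill in):

```nix
# pinned.nix -- no NIX_PATH, no channels; the exact Nixpkgs revision lives in git
import (fetchTarball {
  url    = "https://github.com/NixOS/nixpkgs/archive/<some-revision>.tar.gz";
  sha256 = "<hash reported by Nix on first use>";
}) {}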
Channels:
These always confused me. They're used to update the copy of Nixpkgs that the default `NIX_PATH` points to, and can also be used to manage other "updatable" things. It's all very imperative, so I don't bother (I just alter the specific git revision I'm fetching, e.g. https://hackage.haskell.org/package/update-nix-fetchgit helps to automate such updating).
Nixpkgs depends on $HOME:
The top-level API exposed by the Nixpkgs repository is a function, which can be called with various arguments to set/override things; e.g. when I'm on macOS, it will default to providing macOS packages; I can override that by calling it with `system = "x86_64-linux"`. All well and good.
The problem is that some of its default values will check for files like ~/.nixpkgs/config.nix, ~/.config/nixpkgs/overlays.nix, etc. This causes the same sort of "works on my machine" headaches that Nix was meant to solve. See https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...
I avoid this by importing Nixpkgs via a wrapper, which defaults to calling Nixpkgs with empty values to avoid its impure defaults; but still allows me to pass along my own explicit overrides if needed.
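Such a wrapper can be as small as this (a sketch; the tarball URL and hash are placeholders):

```nix
# nixpkgs.nix -- import Nixpkgs with impure user-level defaults disabled
args:
  import (fetchTarball { url = "..."; sha256 = "..."; })
         ({ config = {}; overlays = []; } // args)
```

Passing empty `config` and `overlays` stops Nixpkgs from reading ~/.config/nixpkgs/*, while `// args` still allows explicit per-call overrides.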
The imperative nix-env command:
Nix provides a command called 'nix-env' which manages a symlink called ~/.nix-profile. We can run commands to "install packages", "update packages", "remove packages", etc., which work by building different "profiles" (Nix store paths containing symlinks to a bunch of other Nix store paths).
This is bad, since it's imperative and hard to reproduce (e.g. depending on what channels were pointing to when those commands were run, etc.). A much better approach is to write down such a "profile" explicitly, in a git-controlled text file, e.g. using the `pkgs.buildEnv` function; then use nix-env to just manage that single 'meta-package'.
Tools which treat Nix like Apt/Yum/etc.
This isn't something I've personally done, but I've seen it happen in a few tools that try to integrate with Nix, and it just cripples their usefulness.
Package managers like Apt have a global database, which maps manually-written "names" to a bunch of metadata (versions, installed or not, names of dependencies, names of conflicting packages, etc.). In that world names are unique and global: if two packages have the name "foo", they are the same package; clashes must be resolved by inventing new names. Such names are also fetchable/realisable: we just plug the name and "version number" (another manually-written name) into a certain pattern, and do a HTTP GET on one of our mirrors.
In Nix, all the above features apply to "store paths", which are not manually written: they contain hashes, like /nix/store/wbkgl57gvwm1qbfjx0ah6kgs4fzz571x-python3-3.9.6, which can be verified against their contents and/or build script (AKA 'derivation'). Store paths are not designed to be managed manually. Instead, the Nix language gives us a rich, composable way to describe the desired file/directory; and those descriptions are evaluated to find their associated store paths.
Nixpkgs provides an attribute set (AKA JSON object) containing tens of thousands of derivations; and often the thing we want can be described as 'the "foo" attribute of Nixpkgs', e.g. '(import <nixpkgs> {}).foo'
Some tooling that builds-on/interacts-with Nix has unfortunately limited itself to only such descriptions; e.g. accepting a list of strings, and looking each one up in the system's default Nixpkgs attribute set (this misunderstanding may come from using the 'nix-env' tool, like 'nix-env -iA firefox'; but nix-env also allows arbitrary Nix expressions too!). That's incredibly limiting, since (a) it doesn't let us dig into the structure inside those attributes (e.g. 'nixpkgs.python3Packages.pylint'); (b) it doesn't let us use the override functions that Nixpkgs provides (e.g. 'nixpkgs.maven.override { jre = nixpkgs.jdk11_headless; }'); (c) it doesn't let us specify anything outside of the 'import <nixpkgs> {}' set (e.g. in my case, I want to avoid NIX_PATH and <nixpkgs> altogether!)
Referencing non-store paths:
The Nix language treats paths and strings in different ways: strings are always passed around verbatim, but certain operations will replace paths by a 'snapshot' copied into the Nix store. For example, say we had this file saved to /home/chriswarbo/default.nix:
# Define some constants
with {
# Import some particular revision of Nixpkgs
nixpkgs = import (fetchTarball {...}) {};
# A path value, pointing to /home/chriswarbo/defs.sh
defs = ./defs.sh;
# A path value, pointing to /home/chriswarbo/cmd.sh
cmd = ./cmd.sh;
};
# Return a derivation which builds a text file
nixpkgs.writeScript "my-super-duper-script" ''
#!${nixpkgs.bash}/bin/bash
source ${nixpkgs.lib.escapeShellArg defs}
${cmd} foo bar baz
''
Notice that the resulting script has three values spliced into it via ${...}:
- The script interpreter `nixpkgs.bash`. This is a Nix derivation, so its "output path" will be spliced into the script (e.g. /nix/store/gpbk3inlgs24a7hsgap395yvfb4l37wf-bash-5.1-p16 ). This is fine.
- The path `cmd`. Nix spots that we're splicing a path, so it copies that file into the Nix store, and that store path will be spliced into the script (e.g. /nix/store/2h3airm07gp55rn9qlax4ak35s94rpim-cmd.sh ). This is fine.
- The string `nixpkgs.lib.escapeShellArg defs`, which evaluates to the string `'/home/chriswarbo/defs.sh'`, and that will be spliced into the script. That's bad, since the result contains a reference to my home folder! The reason this happens is that paths can often be used as strings, getting implicitly converted. In this case, the function `nixpkgs.lib.escapeShellArg` transforms strings (see https://nixos.org/manual/nixpkgs/stable/#function-library-li... ), so:
- The path `./defs.sh` is implicitly converted to the string `/home/chriswarbo/defs.sh`, for input to `nixpkgs.lib.escapeShellArg` (NOTE: you can use the function `builtins.toString` to do the same thing explicitly)
- The function `nixpkgs.lib.escapeShellArg` returns the same string, but wrapped in apostrophes (it also adds escaping with backslashes, but our path doesn't need any)
- That return value is spliced as-is into the resulting script
To avoid this, we should instead splice the path into a string before escaping; giving us nested splices like this:
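In the example above, that means the `source` line becomes:

```nix
source ${nixpkgs.lib.escapeShellArg "${defs}"}
```

The inner `"${defs}"` splices the path into a string first, which triggers the copy into the Nix store; the outer splice then inserts the escaped store path, with no reference to my home folder.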
The problem with Nix is that I still have to start with a Linux system--so I still need Docker, Terraform, something to give me a stable base for Nix to work against.
At that point--why should I add Nix to the mess since I still need those other things anyway?
With Linux, the only stable base required for Nix to function is the kernel. Nix packages all the required dependencies right down to glibc. Since the Linux kernel famously "doesn't break userspace," any sufficiently new kernel will suffice. Until recently, I was able to get the latest Nix packages working on an ancient Linux 2.6 kernel. And even the kernel can be managed with Nix if you use NixOS. But Docker can't, so it's no use here.
As for Terraform, I don't see how it's relevant to this discussion. Nix-based SSH deployment tools can replace some of its functionality, so perhaps that's what you're talking about?
It feels so painful to go back to ‘regular’ Linux now. I'm so concerned about the config file entropy and version incompatibility that Nix has solved for me. I'm happy I took the Nix Pill though and completely skipped over Docker and its often-unnecessary overhead. The Nix store is a better solution to reproducible builds, and the syntax is a lot better than LISP for Guix or whatever Skylark is trying to be.
Currently I'm setting up a second machine to distribute builds and share a cache on my local network. Overriding C flags Gentoo-style for better optimization is supported, but it can take a while to build--especially with LTO--and since Hydra only builds for generic x86_64, sharing optimized kernels and other software is great. I successfully got a shared znver3 LTO-optimized Linux 6.1.19 kernel with ZFS support yesterday! I just wish I could have built the kernel in parallel on the faster PC and the ZFS stuff on the slower one, then resynced the build input derivations when it was finished after running `nixos-rebuild switch --flake ..`.
For the future, I hope distributed Nix caches become the norm like BitTorrent and we can all share optimized builds.
I’d like to learn more about the project history, who started it (early contributions), and the context under which the early work was initiated. Wikipedia has exactly two sentences on the history. Is there a better version of the Nix story available?
I'm definitely going to give that a try. I had issues with the Bash script on macOS and was annoyed that it left me with a system that I had to manually clean up.
They get a multi-user Nix install with a direnv integration that boots up project-local daemons. Think Postgres, Redis, and so on; those are all available, along with the Nix packages the env needs, while their current working directory is that of a project folder.
Very few things live outside of the project, but the things that do are unfortunately stateful. We'll be solving that soon, too.
Is the direnv integration custom or do you base it off of something? I use direnv extensively but I don't have it start any project daemons currently and I've been shipping scripts via devShell for managing them. I'd be very interested in this
We use nix-direnv mostly for the caching support and we use s6 as basically an init system but per project. Besides that it's all custom. We're looking to replace s6 with our own supervisor soon.
Edited to add, direnv just gets installed via `nix-env` but I wish it were using nix-darwin.
I'll probably get to write a blog post about this in a couple months.
People who are writing software in Nix-based dev envs would be interested in what we're working on, but the featureset is tightly bound to resource limitations and needs, so I can't really say any more without making promises I might not be able to keep, and I would feel bad badmouthing s6 if I couldn't do any better myself.
It absolutely works and is the best thing since sliced bread among package managers — many UNIX tools are simply broken on Mac (not always on the packaging side), and in the case of more niche tools it's simply not a priority.
Hard to answer your question without info on what the problem is, but lots of people use it on MacOS. Maybe ask about your problem on /r/nixos or https://discourse.nixos.org/.
I’ve never had an issue with Nix itself on macOS but there are occasionally packages that are broken on macOS but work on Linux - despite upstream supporting macOS.
What I would like is to replace both Docker and docker compose for prod images and local dev, respectively for my team. Is this possible with nix today? It’s mostly a macOS box team.
I use nix flakes to manage my own configuration, but last time I played with building docker images on macOS I had to stand up a builder image on qemu or inside docker.
Further, I’ve historically run into friction between other package managers and nix. The poetry2nix and pnpm2nix kinds of tools have a lot of friction [for example, private registry support for poetry is poor]. My current project has a dependency on xmlsec, and it’s a bit cumbersome to handle building non-wheel packages on M1.
(The compile time of Nix itself is unpleasant, but not exactly exceptional among programs written in the modern C++ style. The eval time for Nixpkgs, even on a ten-year-old i5, is annoying but not a terrible problem the way it’s used now, though on a recent Android device it’s admittedly measured in minutes.)
With the Hydra binary cache I can quite comfortably update my Avoton C2550 based router or Raspberry Pi 4 AirPlay receiver within a few minutes. It’s only if I need to build a non-trivial package that I have to make sure I build it on my desktop or in a VM on my MacBook.
I doubt your setup needed recompiling the Nix binary itself at any point. Even I did that more out of a love of adventure than for any practical reason; it’s just that the scars are still there. (Still fewer than those from the time when the LibreOffice build was broken on Hydra and a routine system update jumped right into trying to perform it locally, even if that was indeed what I’d technically asked for...)
And you know what, the long evaluation thing might just be a bug. Or at least I don’t see any other reason why (e.g.) `nix-shell -p yt-dlp` works reasonably fast on my Nix-on-Droid[1] installation but `nix shell nixpkgs#yt-dlp` takes minutes.
Oh that takes me back to trying to explain to some Gentoo users back in the day that yes, you can optimise stuff better for your slow-ass computer. But now you have to spend most of your time compiling code on that same slow-ass computer.
I always found Arch to be a much more reasonable system for this use case, because compiling everything from scratch was just pointless, at least back then. But it still makes it easy to rebuild packages with custom patches etc.
> Oh that takes me back to trying to explain to some Gentoo users back in the day that yes, you can optimise stuff better for your slow-ass computer. But now you have to spend most of your time compiling code on that same slow-ass computer.
Honestly it depends what your goal is with Gentoo. My whole sell with Gentoo is that custom packages are just so easy. I can take some garbage toolchain from an embedded manufacturer and wire up an ebuild for it that "just works". Nix is similar once you get it working, but the nixpkgs and NixOS DSLs just kinda suck in their own ways. So it may take me 10-15 min to write and install an ebuild while the Nix package may take me an hour or two.
Same goes for situations where you need weird kernel configs. It's so easy to just recompile the kernel on gentoo once you have stuff set up. The OS is built expecting a majority of users will want to tweak their kernels so the OS's tooling doesn't fight you when you do. I find this often less well handled in other distros.
As for arch, it is pretty close IMHO but (while it may have changed since I used it last) I found that the "bleeding-edge" focus makes dealing with old janky dependencies to be pretty painful.
Gentoo certainly isn't the first choice for the majority of people but it IMHO is a really good choice if you are going to be working with funky proprietary toolchains (o/ hi embedded engs).
I like the principle of Nix that one can simultaneously install different versions of the same software and make layered choices of what version to use with what or depending on the use case. Nix has spearheaded that principle and that's great.
That being said, that fine-grained layering selection is done via symlinks in Nix afaik, whereas a couple of newer packaging systems (e.g. OCI containers or flatpak) can do such layering with newer mechanisms like bind mounts and namespaces plus sandboxing (and I don't just mean a sandbox at build time but at run time), and thus increase security by selectively choosing what a package is supposed to have access to. I wonder how fast Nix will adapt to such new possibilities. I think it should do so quickly (e.g. switch to OCI as the underlying layering system; I hear the Tvix project is experimenting with that?), as that could establish Nix as the dominant system/distribution in that field. Otherwise it risks being overtaken and left behind by whatever OCI-container-based distribution manages to come out as the dominant one.
There is currently (temporarily) a unique window of opportunity in that:
* Docker is totally ruining their position in the OCI world, and has never really put effort into building a comprehensive, quality, curated distribution. That is: their registry may be "comprehensive" as in a large choice, but apart from a small set of base images, it's mostly a hotchpotch of low-quality, uncurated images, often found to be severely lacking on the security front.
* Red Hat has a much too closed policy for their OCI registries and has made the mistake of restricting their OCI stuff to the server side, while Fedora pushes flatpak/flathub, which is too restricted to the desktop. That artificial chasm between a server-only and a desktop-only system sucks.
* Ubuntu has completely borked their attempts at new sandboxed/layered package formats; snap sucks. And Debian and the other remaining big distros have nothing in that category.
Nix has the advantage of already having a large, comprehensive and curated set of packages. All it needs is to adopt OCI as its underlying layering system (instead of symlinks), make its large package base trivially accessible to OCI, and make an effort on UX (a little more accessible and easier) and it could come out as the dominant distribution.
Treating packaging boundaries and runtime isolation as the same thing is exactly the problem with Docker and similar solutions. Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime. Yet Docker conflates the two, introducing all sorts of unnecessary friction all over the place.
This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess. The runtime isolation boundary set by Docker doesn't represent any sort of logical component or security boundary in your system. It merely reflects how the underlying image was built.
This is a classic anti-pattern of mixing up policy with implementation. Runtime isolation policy should be independent of build time implementation. Nix gets this right with better design and composable packages. It's trivial to create a container that includes only the packages you want, with dependencies handled automatically by Nix. Docker, on the other hand, leaves you with a binary blob (i.e., Docker image) that's neither composable nor customizable.
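To make that concrete, here is a minimal sketch using nixpkgs' `dockerTools` (the image name and package list are illustrative, not from the thread):

```nix
# Build an OCI image containing exactly the packages listed; their
# transitive runtime closure is resolved by Nix, and nothing else gets in.
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.buildLayeredImage {
  name = "my-tools";                    # hypothetical image name
  tag = "latest";
  contents = [ pkgs.curl pkgs.jq ];     # compose any store paths you like
  config.Cmd = [ "${pkgs.bashInteractive}/bin/bash" ];
}
```

Something like `nix-build image.nix && docker load < result` then gives you a normal Docker image, composed declaratively rather than inherited from an opaque base layer.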
> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime. Yet Docker conflates the two, introducing all sorts of unnecessary friction all over the place.
But Docker in fact doesn't have compile-time dependencies; it needs you to specify runtime deps only. If you want to build something in Docker, you use two-stage builds, and the runtime deps of the first stage become your compile-time deps.
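For reference, the two-stage pattern looks roughly like this (base images and paths are illustrative):

```dockerfile
# Stage 1: build environment; its runtime deps act as compile-time deps
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: the runtime image carries only what the binary needs
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
```

The build toolchain never appears in the final image; only the artifacts explicitly copied across do.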
> This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess
I don't get this, why is it considered an overcomplicated mess? If you want to run several processes in one container, you just launch it with a lightweight process manager, if you want to run it in separate containers -- well, that's even easier, just launch separate containers and configure the communication between them with a network.
> Nix gets this right with better design and composable packages
Nix actually implements it worse than Docker in some sense. Particularly, the exact problem that you described:
> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime
is not solved in Nix, runtime deps must be a subset of compiletime deps.
> that's neither composable nor customizable
Compositionality is a completely different issue on which I do have problems with Docker. DAG-oriented environment building is strictly better than inheritance-oriented, but that's all orthogonal with compiletime-runtime separation.
OCI is based on Linux namespaces, which provide a way to create isolated filesystem trees for processes. You probably want that, and I agree with you, but probably forcing this into OCI itself is a lost cause, since its tooling is too based on layers.
The choice of topology of the package and layering system - be it a tree as in OCI or a more general graph topology as in Nix - is only a very small part of either of these systems. I agree that the general graph topology is superior in some points.
My point though was that at the implementational level, the old symlink-based way of implementing it in Nix is severely lacking the isolation and more general security capabilities of the bind-mount, namespaces and cgroup-based approach of OCI and other newer packaging systems. And Nix needs to implement that.
My impression is that the OCI package spec itself isn't what stands in the way of implementing a system that combines the isolation and security of bind mounts, namespaces and cgroups with a graph topology like Nix's. There is thus an opportunity to combine the two, which would help Nix take the dominant position in that space. If OCI turned out to be impossible to use with Nix's more graph-based approach, that would mean much more implementation work; not that it couldn't be done, but it would still need doing. Either way, Nix cannot stay with its old symlink-based layering: failing to implement the security features now expected from modern packaging systems (isolation, bind mounts, cgroups, namespaces etc.) is a surefire way to progressively manoeuvre itself into irrelevance.
Nix has a window of opportunity here due to the current weakness of the big players in the field. But it can't afford to let that slip.
Bind-mount, namespaces and cgroups are runtime properties, completely orthogonal to the problem Nix (the build system/package manager) is meant to solve. NixOS is the configuration layer on top that encodes those properties via systemd units, and that is where you'll find the concept of a "service" and where you can isolate the runtime parts of the system.
It's not orthogonal, Nix goes far out of its way to fix compositionality issues with hacks and dubious conventions, when instead namespaces (bind-mounts) could be used to deliver expected environments.
> I tried to install NixOS using the live cd last week in a Hyper-V VM, but it failed to get anywhere due to SquashFS errors.
Heh, that reminds me of installing NixOS back around 2014. I didn't have any way to physically boot off the install CD; so I ran it in qemu, using my real /dev/sda as the "virtual" hard drive (which I'd already partitioned). Thankfully there was no interference with the host system (Trisquel).
> I tried to install NixOS using the live cd last week in a Hyper-V VM, but it failed to get anywhere due to SquashFS errors.
I'm curious what happened here because just a few weeks ago I was using a NixOS live-cd to rescue a botched gentoo VM on my windows machine (managed with Hyper-V manager).
If you give it another shot, run into the issue again, and document it, the Nix community would almost certainly help you debug it and figure out what went wrong.
Realistically speaking, Nix will never become widespread. Instead, I foresee immutable OSes like Steam OS, Chrome OS and Fedora Silverblue combined with something like flatpak for installing applications.
The evolutionary strategy of reproducible snapshots of state has historically been far more successful than "pure" functional approaches; see Docker for example, or reproduction of DNA based lifeforms.
As far as I know, it’s still about [0]. I’ve had a better experience with deploy-rs though [1] - or even just using nixos-rebuild to target the remote machine.
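The nixos-rebuild route looks roughly like this (hostnames and the flake attribute name are hypothetical):

```shell
# Build the system closure locally, copy it over SSH, and activate it
# on the remote machine in one step
nixos-rebuild switch \
  --flake .#router \
  --target-host root@router.local \
  --build-host localhost
```

This is handy for small targets like a router or a Pi, since the heavy lifting happens on the machine you run the command from.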
I personally would prefer a build and environment tool which uses the modern Linux namespaces to prepare isolated filesystems without needing to reinvent a whole bunch of wheels.
A number of problems that I would like to be fixed:
- Post-build patches to ensure correct shared library paths in artifacts, due to the requirement to use absolute /nix/store paths instead of the traditional Linux filesystem layout.
- Numerous wrappers around existing build tools that move dependencies into Nix packages, generating stuff like Cargo.nix.
- Not so good handling of content-addressable packages due to historic cruft.
- A horrible, horrible way to compute the runtime dependencies of packages [1]. This may be okay as a heuristic to initialize a project, but in the end it must be manually tweakable.
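For context, the heuristic in question: Nix scans build outputs for the hashes of other store paths, and whatever it finds becomes the runtime closure. You can inspect the result:

```shell
# Direct runtime references Nix detected by hash-scanning the output
nix-store --query --references "$(nix-build '<nixpkgs>' -A hello)"
# The full transitive runtime closure
nix-store --query --requisites "$(nix-build '<nixpkgs>' -A hello)"
```

There is no declared runtime-dependency list to edit; what the scanner finds is what you get.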
I see a future where a package manager builds a fully isolated environment with only the required dependencies for my app, using CAS but not forcing its style of filesystem on me. Nix is a good step in that direction, but it's not a usable product in my view, just a research project.
Edit:
Forgot to add: Nix the language is also not good, but I'd rather not discuss that matter. This would be solved by a more modular architecture, where we can manage packages with one tool and build packages with a different tool. Until then, I'm forced to code in Nix, and that's problematic no matter how good the package management is.
Absolute, immutable store paths are the main reason why Nix is so good in the first place. The key assertion is that FHS sucks. Using complicated and brittle namespace tricks to construct virtual FHS's everywhere is nothing more than shoving the problem under the carpet. It severely limits the places where Nix can be useful. You usually can't create nested namespaces inside a Docker container, so you couldn't use "Namespace-Nix" programs there. You're also destroying the ability to compose packages and environments. Using multiple versions of the same package within one environment - another goal of Nix - becomes impossible. Implementing NixOS in such a paradigm would be a nightmare, and the result would be very limited compared to what NixOS can do now.
Yes, having to clean up RPATH after compiling a program sucks. Yes, having to implement workarounds to make build tools that desperately cling to their FHS traditions work sucks. These are effectively bugs and/or design errors in those tools. Packages are supposed to be installable into various different prefixes, Nix or not. That's why ./configure --prefix= exists.
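The RPATH cleanup in question is typically done with patchelf; a sketch of what an equivalent manual fixup looks like (binary name hypothetical):

```shell
# Inspect the RPATH baked into a freshly built binary...
patchelf --print-rpath ./myprog
# ...and shrink it so only the library paths the binary actually uses
# remain (nixpkgs runs an equivalent fixup phase automatically)
patchelf --shrink-rpath ./myprog
```

The point being: this is a mechanical post-build step, not something each package author has to hand-roll.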
The wheel needed to be reinvented because the old one was square.
> uses the modern Linux namespaces to prepare isolated filesystems without needing to reinvent a whole bunch of wheels.
I'm confused; Nix mostly works by passing the '--prefix' argument to standard, off-the-shelf configure scripts. Some build/packaging tools don't seem to work with a user-chosen prefix, and hence need some sort of workarounds; but that's surely a fault with those tools.
> Horrible-horrible way to compute runtime-dependencies of packages. This may be okay as a heuristic to initialize a project, but in the end it must be tweakable manually.
I completely agree with this one. I've never experienced a problem with it; but that surprises me ;)
> This would be solved by a more modular architecture, where we can manage packages with one tool and build packages with a different tool. Until then, I'm forced to code in Nix, and that's problematic no matter how good the package management is.
The Nix store provides quite a nice separation between the "definition" side (where the Nix language lives), and the "building" side. In principle you can avoid the Nix language, e.g. the way Guix uses Guile Scheme. The only reason I use the Nix language is due to all of the definitions provided by Nixpkgs ;)
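That store-level interface is visible from the CLI: any frontend can produce `.drv` files and have the daemon realise them, no Nix language required at that layer (the store path below is hypothetical; `nix derivation show` needs a reasonably recent Nix):

```shell
# Inspect the low-level, language-agnostic derivation behind a package
nix derivation show nixpkgs#hello
# Realise a derivation file directly, bypassing the Nix language entirely
nix-store --realise /nix/store/<hash>-hello-2.12.drv
```

Guix's use of Guile on top of essentially the same store model shows the frontend really is swappable.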
Honestly I don't think so; with modern Linux namespaces it's perfectly possible to run programs in isolated filesystems and to provide whatever they want at whatever paths they want, without patches and workarounds. Instead Nix forces its way in at the build level, so whatever artifact you get out of a Nix derivation is suitable for running on a Nix system alone. I'd rather build a generic artifact that could run on a traditional Linux system and let the user decide how they want to store it, instead of forcing the Nix way.
> I've never experienced a problem with it
There are problems with gcc-compiled binaries for example, and they are solved with a post-compilation patch. This is a workaround which does not scale and shouldn't be there in the first place, but that's mainly not gcc's problem in my eyes (although it could've done a better job too).
> The Nix store provides quite a nice separation between the "definition" side and the "building" side.
Somewhere deep inside it indeed does, there is no Nix the lang in final derivations. Nevertheless, I had too much trouble working at that level.
The main problem is that there are decades of software developed for the classical filesystem, and Nix proposes to just patch that software so that it becomes compatible with the "new" way. This is a radical and unneeded change. It would be better to just run software in separate namespaces and provide whatever filesystem it wants, instead of forcing your own one true way.
I don't think it will ever gain widespread industry adoption. All signs point to it remaining in niche, even if growing currently, user communities while being a curiosity to the wider IT world.
It's not even in the conversation in most of the professional world. A large portion of those who use it, like it, and write about it say they wouldn't recommend it to anyone.