I like NixOS a lot, but one thing I really hate about it is its massively inefficient use of bandwidth. Because derivations are addressed by their inputs, every time a package changes, all of the packages depending on it have to be rebuilt by Hydra and re-downloaded by everyone, even if the only thing that changed is the cryptographic hash of one of its dependencies, as happens regularly for Electron apps (eww). This might be totally acceptable for people with decent internet, but I live in a student flat with a 1 MB/s downlink, and every time one of the core packages in Nixpkgs changes, I basically have to redownload my entire system, which takes hours.
After much trial and error, the solution I came up with is the following: a server with a fast connection periodically builds config.system.build.toplevel for every one of my machines and pushes the results to a Raspberry Pi in my home network, which my machines then use as a binary cache in addition to cache.nixos.org. In my case, this speeds up the upgrade by a factor of ~50.
While this sounds simple in theory, there were quite a lot of caveats along the way. In this blog post, I will explain in detail how I put this scheme into practice.
I am aware of Hydra, but it seemed too complex for my use case. What I settled for is a systemd timer running a script that runs
TMPDIR=$(mktemp -d)
cd "$TMPDIR"
git clone ${repo} .
# bump all flake inputs and commit the new lock file
nix flake update --commit-lock-file
git push
cd /
rm -rf "$TMPDIR"
to upgrade the flake.lock
in my repository and then
nix build my-config-flake#nixosConfigurations.fnord.config.system.build.toplevel --out-link path/to/out-link
to build the system. This is a pretty minimalist solution, but it works perfectly well for me. If you want to, you can even write a small NixOS module for automatically setting up multiple systemd timers for all of your configs like I did; a sketch of such a module follows below.
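For illustration, here is a rough sketch of what such a module could look like. The option name buildFarm.configurations, the out-link directory /var/lib/build-farm, and the schedule are made up for this example; adapt them to your own repository:

{ config, lib, pkgs, ... }:

{
  options.buildFarm.configurations = lib.mkOption {
    type = lib.types.listOf lib.types.str;
    default = [ ];
    description = "Hostnames whose system closures should be built and cached.";
  };

  # one oneshot service (plus timer) per configured host
  config.systemd.services = lib.listToAttrs (map (host:
    lib.nameValuePair "build-farm-${host}" {
      description = "Build the system closure of ${host}";
      serviceConfig.Type = "oneshot";
      path = with pkgs; [ git nix ];
      script = ''
        mkdir -p /var/lib/build-farm
        nix build \
          my-config-flake#nixosConfigurations.${host}.config.system.build.toplevel \
          --out-link /var/lib/build-farm/${host}
      '';
      # startAt makes NixOS generate a matching systemd timer
      startAt = "daily";
    }) config.buildFarm.configurations);
}

The flake.lock update script from above could either be prepended to this script or run as its own timer unit.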
In order to sign the builds, I generated a binary cache key with
nix-store --generate-binary-cache-key etheria.local /var/lib/secrets/nix/cache-priv-key.pem /var/lib/secrets/nix/cache-pub-key.pem
and then added it to my NixOS configuration with
nix.settings = {
secret-key-files = "/var/lib/secrets/nix/cache-priv-key.pem";
};
This will cause Nix to automatically sign all the builds on your
server with the specified key. To use the binary cache we are going to
set up later, we will need to add the corresponding public key to all
machines via nix.settings.trusted-public-keys.
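To check that signing actually works, you can inspect the signatures of a freshly built path on the server; with the out-link from above, something like this should show a signature by your key:

nix path-info --sigs path/to/out-link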
If you’re a NixOS fanatic like me, the chances are that you already have a Raspberry Pi running NixOS sitting around somewhere. If the server is supposed to push build artifacts to the Raspberry Pi, the two should be in some sort of VPN so the server can connect to the Raspberry Pi by itself too. This is very simple with NixOS, see for instance the NixOS wiki article on WireGuard for instructions.
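Just as a rough sketch of the shape of such a setup (the interface name, key paths, addresses and the endpoint below are placeholders, not my actual config):

# On the server: a wg-quick tunnel through which it can reach the Pi.
networking.wg-quick.interfaces.wg0 = {
  address = [ "10.100.0.1/24" ];
  privateKeyFile = "/var/lib/secrets/wireguard/server.key";
  peers = [{
    publicKey = "<the Pi's WireGuard public key>";
    allowedIPs = [ "10.100.0.2/32" ];
    endpoint = "home.example.org:51820";
    persistentKeepalive = 25;
  }];
};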
Now, the most straightforward solution would be to install nix-serve on the
Raspberry Pi and have the server push the build artifacts with
nix copy
. While this would work, it has several problems in practice. Most notably, binary caches usually serve their NAR files compressed with xz or similar, but nix-serve can’t do that, because it serves the files on the fly, fresh from the Nix store.
What I settled for instead was buying a 1TB hard drive, setting up an unencrypted Nix cache partition on it, and connecting it to the Pi with a SATA-to-USB bridge. The partition being unencrypted should not be a security problem because we are signing our builds, but this is a good opportunity to check that you did not include any secrets in your system config.
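If you give the cache partition a filesystem label (nix-cache is just an example here), mounting it declaratively on the Pi could look like this:

# On the Pi: mount the external cache partition at boot.
fileSystems."/var/lib/static-nix-cache" = {
  device = "/dev/disk/by-label/nix-cache";
  fsType = "ext4";
  options = [ "nofail" ]; # don't hang the boot if the drive is unplugged
};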
To be precise, I set up a static binary cache on the
external hard drive. This means that there is no server like
nix-serve
that dynamically generates the NAR files and
provides the HTTP Nix cache API, but simply a directory structure that
looks like this:
/var/lib/static-nix-cache
├── nix-cache-info
├── 0003v03ig6x84rz2byjb2lp3my11a4c7.narinfo
├── 000qf73rdzndncvmyriilz7idcna6kxa.narinfo
├── [...]
├── zzy1clxl8j7fayxjzx14kbk1pbr97p3i.narinfo
├── zzy4svjm00h3s7jaxf9zvhsvgslmqd6w.narinfo
└── nar
    ├── 0008wx67x34khxh06spm4zxyslb3fklj070ydbgh8jh5whs33grc.nar.xz
    ├── 000a257k85pzl39370iylfz8nrk7dr3xgw2h51ash0mcw7bibkb3.nar.xz
    ├── [...]
    ├── 1zzkqdg6d6x60wmzhf4rl63q04xqyhpkpwn8xi4xmvj9czpmvq2h.nar.xz
    └── 1zzl4x1n14l7xzcbic46mgy47pfvikmm3dxcklprb73s8zx11hn1.nar.xz
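The nix-cache-info file at the top level is what tells Nix clients that this is a binary cache; nix copy creates it automatically, and its contents are just a few lines along these lines (the Priority value here is an example; a value lower than cache.nixos.org's 40 means the local cache is tried first):

StoreDir: /nix/store
WantMassQuery: 1
Priority: 30

Each .narinfo file maps one store path to its compressed NAR file under nar/, together with its hashes, references and the signature we set up earlier.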
We can then set up an nginx
server on the Pi pointing to
this directory with:
"192.168.0.42" = {
services.nginx.virtualHosts.locations."/".root = "/var/lib/static-nix-cache";
};
where you should replace 192.168.0.42
with your Pi’s IP
address. This will set up an unencrypted HTTP server, but again, since
we’re in a LAN and signing all the builds, this should not be an issue.
If you are as paranoid as I am, you can set up your own SSL CA to
encrypt the traffic between the Pi and your other machines too.
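A quick way to check that everything is wired up correctly is to request the cache metadata from another machine in the LAN:

curl http://192.168.0.42/nix-cache-info

If nginx is serving the right directory, this prints the nix-cache-info file shown above.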
Since we are probably going to be interested in the build artifacts for less than ~2 weeks in total, it is sensible to also set up a systemd timer periodically garbage-collecting all files that are older than a certain threshold, for instance 14 days:
systemd.services.static-nix-cache-gc = {
  description = "Static Nix cache garbage collection";
  serviceConfig.Type = "oneshot";
  # find lives in findutils, rm in coreutils
  path = with pkgs; [ coreutils findutils ];
  script = ''
    cd /var/lib/static-nix-cache
    find -atime +14 -exec rm {} \;
  '';
  startAt = "09:00";
};
Note that -atime only does what we want if the cache partition records access times, so don’t mount it with noatime (or filter on -mtime instead).
The question remains how we are to copy the build artifacts to the Pi. It turns out that nix copy does not seem to support writing to a file-based binary cache over SSH/SFTP (an ssh:// target refers to the remote machine’s Nix store, not to a directory of NAR files like ours). What I ended up doing instead was FUSE-mounting the SFTP directory on my server with
FUSE-mounting the SFTP directory on my server with
systemd.services.sshfs-etheria-static-nix-cache = {
  wantedBy = [ "multi-user.target" ];
  after = [ "network.target" "wg-quick-c5h10-main.service" ];
  wants = [ "network.target" ];
  script = ''
    mkdir -p /mnt/etheria-static-nix-cache
    # -f keeps sshfs in the foreground so that systemd can supervise it
    ${pkgs.sshfs}/bin/sshfs -f -v -o allow_other root@etheria_deploy:/var/lib/static-nix-cache /mnt/etheria-static-nix-cache
  '';
};
and then copying the artifacts into it at the end of the build farm timer script with
nix copy path/to/out-link --to file:///mnt/etheria-static-nix-cache
(If you know a better way to do this, feel free to hit me up :) )
Configuring the cache on my machines at home was pretty straightforward:
{
  nix.settings = {
    substituters = [
      "http://192.168.0.42" # the IP address of your Pi
      "https://cache.nixos.org" # setting substituters overrides the default, so re-add it
    ];
    trusted-public-keys = [
      "etheria.local:Wi/1tMJgOE+lZr4aJ2fSO8lS6EAuSxJCWZLcyD2sV/c=" # the public key we generated earlier
    ];
  };
}
The only caveat is that we can’t do this on mobile devices (laptop, tablet, etc.), because Nix builds will fail whenever one of the configured binary caches isn’t reachable. But apart from that, this setup has been working perfectly fine for me for half a year now.
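A possible workaround (sketched here, not something I have battle-tested) is to keep the trusted-public-keys entry in the mobile machine’s permanent configuration, since an extra trusted key is harmless, and to pass the substituters ad hoc only when the machine is actually at home:

nixos-rebuild switch --option substituters "http://192.168.0.42 https://cache.nixos.org"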