deploy.buildHost tells Clan to run nix build on a different machine than the one it is deploying to. The target host stays out of the build entirely: it only receives the finished system closure and activates it. This guide covers when you want that split, how to configure it, and the second SSH hop it introduces.
Set buildHost only if the machine you're deploying to is a poor fit for building. Low RAM, a slow CPU, a flaky network, or no access to a substituter the builder has are all good reasons. If building on the target works fine, leave buildHost unset.
By default, clan machines update <machine> evaluates your flake on your workstation, then builds and activates on deploy.targetHost. Setting deploy.buildHost splits that work. Evaluation still runs locally, but the build runs on a separate host. The finished closure is then copied from the build host to the target over a second SSH connection, and activated there.
Common reasons to split them:

- The target is short on RAM or CPU, and builds there are slow or fail outright.
- The network between your workstation and the target is slow or flaky.
- The target cannot reach a substituter that the build host can.
Private flake inputs are not a reason to set buildHost. Clan evaluates your flake on your workstation, so private repositories are fetched locally and never need to be reachable from the build host. See Private Flake Inputs for the full setup.
nix build on the build host compiles natively for its own system. If the target is aarch64-linux and the build host is x86_64-linux, the build produces the wrong closure. Pick a build host that matches the target's architecture, or arrange cross-compilation yourself.
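Before committing to a build host, you can compare the two system strings by hand. The flake attribute and hostnames below are placeholders; this is a sketch assuming a flake-based configuration and SSH access to the candidate builder:

```shell
# System the target's configuration is built for (evaluated locally).
nix eval --raw .#nixosConfigurations.my-machine.config.nixpkgs.hostPlatform.system

# System the candidate build host builds natively.
ssh root@builder.example.com "nix eval --raw --impure --expr builtins.currentSystem"
```

If the two strings differ, pick a different build host.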
Add deploy.buildHost alongside deploy.targetHost in clan.nix:
```nix
inventory.machines.my-machine = {
  deploy.targetHost = "root@target.example.com";
  deploy.buildHost = "root@builder.example.com";
};
```

The value has the same format as targetHost:
```
user@host:port?SSH_OPTION=SSH_VALUE&SSH_OPTION_2=VALUE_2
```

A few examples:
- root@builder.example.com
- builder.example.com:2222
- root@builder.example.com:22?IdentityFile=/path/to/key

You can set buildHost inside the NixOS configuration of the machine instead. This is useful when the deployment topology belongs with the machine, not the clan-level inventory:
```nix
clan.core.networking.buildHost = "root@builder.example.com";
```

Prefer the inventory when you can. It keeps the topology of every machine visible in one place.
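To make the user@host:port?options format concrete, here is a small POSIX-shell sketch that splits such a value into its parts. It is purely illustrative, not Clan's actual parser:

```shell
# Split "user@host:port?opt=val" into its components.
# Missing parts (user, port, options) come back empty.
parse_host() {
  spec=$1
  opts=${spec#*\?};     [ "$opts" = "$spec" ] && opts=""
  rest=${spec%%\?*}
  user=${rest%%@*};     [ "$user" = "$rest" ] && user=""
  hostport=${rest#*@}
  port=${hostport##*:}; [ "$port" = "$hostport" ] && port=""
  host=${hostport%%:*}
  echo "user=$user host=$host port=$port opts=$opts"
}

parse_host "root@builder.example.com:22?IdentityFile=/path/to/key"
# → user=root host=builder.example.com port=22 opts=IdentityFile=/path/to/key
```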
For one-off deployments, clan machines update accepts --build-host:
```shell
clan machines update my-machine --build-host root@builder.example.com
```

Pass localhost to force a local build, even if the inventory names a remote builder:
```shell
clan machines update my-machine --build-host localhost
```

Clan resolves buildHost in this order, highest priority first:
1. --build-host on the command line
2. inventory.machines.<name>.deploy.buildHost
3. clan.core.networking.buildHost in the machine configuration
4. deploy.targetHost (build on the target itself)

With buildHost set, clan machines update runs three distinct stages:
1. Evaluation of your flake on your workstation.
2. nix build on the build host to produce the system closure.
3. nix copy over SSH to the target host, and Clan activates the new system on the target.

Stage 3 is the one that matters for authentication. The build host opens its own SSH connection to the target, with its own credentials and its own ~/.ssh/known_hosts. That connection has nothing to do with your workstation's SSH session. It needs:
- Credentials that the target accepts for the user in deploy.targetHost.
- A known_hosts entry for the target on the build host.

Neither exists by default.
The build host has to prove it is allowed to log in to the target. You have two options: give the build host its own SSH key that the target accepts, or forward your workstation's SSH agent through to the build host.
Both options, the tradeoffs, and the step-by-step setup for option 1 are covered in the SSH Agent Forwarding guide. Work through it once per build host, then come back here.
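Once that is in place, you can verify the second hop by hand before deploying. Hostnames are placeholders, and this assumes agent forwarding is already configured:

```shell
# From the workstation: hop to the build host with agent forwarding
# (-A), then open the nested connection the deploy will need.
# BatchMode makes the inner ssh fail instead of prompting.
ssh -A root@builder.example.com \
  'ssh -o BatchMode=yes root@target.example.com true' \
  && echo "build host can reach the target"
```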
On the first deploy, the target's host key is not yet in the build host's known_hosts, and the nested SSH fails with:
```
Host key verification failed.
error: failed to start SSH connection to '<target-host>'
```

The fix is to pass --host-key-check accept-new on the first run. Clan forwards it to the nested SSH that the build host opens, so the target's key is recorded on first use:
```shell
clan machines update my-machine --host-key-check accept-new
```

Subsequent deploys can drop the flag. The mechanism, and a manual alternative using ssh-keyscan, are covered in the Host Key Verification section of the SSH Agent Forwarding guide.
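The manual ssh-keyscan route looks roughly like this; hostnames are placeholders:

```shell
# Seed the target's host key into the build host's known_hosts
# before the first deploy.
ssh root@builder.example.com \
  'ssh-keyscan target.example.com >> ~/.ssh/known_hosts'
```

Verify the scanned key fingerprint out of band if you can; accept-new and ssh-keyscan both trust whatever answers on first contact.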
If you bypass clan machines update and call nixos-rebuild by hand, the equivalent flag is --build-host:
```shell
nixos-rebuild switch \
  --flake .#my-machine \
  --target-host root@target.example.com \
  --build-host root@builder.example.com
```

Run clan vars upload my-machine first if your configuration uses Clan vars. The full workflow is in NixOS Rebuild.
If the deploy fails with Permission denied (publickey) while the build host is copying the closure, the build host has no accepted key on the target. Work through the SSH Agent Forwarding guide, which walks through installing a dedicated build-host key.
If the build finishes and the deploy then aborts with Host key verification failed, the build host has no known_hosts entry for the target. Re-run with --host-key-check accept-new, or seed the entry by hand. Both are covered in the Host Key Verification section.
If Clan builds on the target even though you expected a remote build, the precedence order has silently fallen through to targetHost. Check, in order: the --build-host flag you passed, the inventory entry in clan.nix, and clan.core.networking.buildHost in the machine configuration. If none is set, Clan builds on the target by design.
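That fallthrough can be sketched as a tiny shell function; this is a simplification for illustration, not Clan's actual implementation:

```shell
# First non-empty source wins; empty string means "unset".
# Falls through to the target itself when nothing is set.
resolve_build_host() {
  cli=$1 inventory=$2 machine_conf=$3 target=$4
  for candidate in "$cli" "$inventory" "$machine_conf"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"
      return 0
    fi
  done
  echo "$target"
}

resolve_build_host "" "" "" "root@target.example.com"
# → root@target.example.com  (nothing set: build on the target)
```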
If the build aborts with a message along the lines of "a 'x86_64-linux' ... is required to build ..., but I am a 'aarch64-linux'", the build host and target have different architectures. Pick a build host that matches the target, or build locally with --build-host localhost.
See NixOS Rebuild if you want to call nixos-rebuild directly instead of clan machines update.