Managing NixOS with Git: A Declarative Infrastructure Workflow

NixOS is unlike any other Linux distribution. Your entire system — packages, services, users, firewall rules, filesystems — is described in configuration files. Change a file, rebuild, and your system matches exactly what you declared. Pair this with Git, and you get version-controlled infrastructure that you can roll back, branch, diff, and share across machines.

This post walks through how a multi-host NixOS setup fits together, how Git integrates into the workflow, and how to use both effectively.

The Core Idea

Traditional Linux administration is imperative: you run commands, install packages, edit config files scattered across /etc, and hope you remember what you changed. NixOS inverts this. You write .nix files describing what your system should look like, and NixOS figures out how to get there.
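For example, enabling a web server is a single declaration rather than a sequence of install-and-edit steps (an illustrative fragment, not from the repository below):

```nix
# Hypothetical snippet from a configuration.nix
services.nginx.enable = true;  # nixos-rebuild installs, configures, and starts nginx
```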

This means your entire system state lives in a handful of text files — perfect for Git.

/etc/nixos/
├── flake.nix                 # Entry point: pins dependencies, defines systems
├── flake.lock                # Locked dependency versions (reproducibility)
├── hosts/                    # Per-machine configurations
│   ├── anubis/
│   │   ├── configuration.nix
│   │   └── hardware-configuration.nix
│   ├── banshee/
│   │   ├── configuration.nix
│   │   └── hardware-configuration.nix
│   ├── griffin/
│   │   ├── configuration.nix
│   │   └── hardware-configuration.nix
│   └── locust/
│       ├── configuration.nix
│       └── hardware-configuration.nix
├── modules/                  # Reusable building blocks
│   ├── common.nix
│   ├── users.nix
│   ├── docker.nix
│   ├── incus.nix
│   ├── samba.nix
│   ├── avahi.nix
│   ├── wsdd.nix
│   └── zfs.nix
└── home/
    └── leed.nix              # Home Manager: user-level config

One Git repo. Four machines. Every service, package, and user account tracked.

Flakes: Pinning the World

NixOS Flakes solve a fundamental problem: reproducibility. Without flakes, two people running nixos-rebuild on the same config could get different results depending on their channel version. Flakes fix this with a lock file.

# flake.nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs?rev=f72778a5...";
    home-manager.url = "github:nix-community/home-manager";
  };

  outputs = { self, nixpkgs, home-manager, ... }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true;
      };
    in {
      nixosConfigurations = {
        anubis = nixpkgs.lib.nixosSystem { /* ... */ };
        banshee = nixpkgs.lib.nixosSystem { /* ... */ };
        griffin = nixpkgs.lib.nixosSystem { /* ... */ };
        locust = nixpkgs.lib.nixosSystem { /* ... */ };
      };
    };
}
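Each of the elided calls follows the same shape. A sketch of one, with paths matching the tree above (illustrative, not the exact code):

```nix
# Hypothetical expansion of one nixosConfigurations entry
locust = nixpkgs.lib.nixosSystem {
  inherit system;
  modules = [ ./hosts/locust/configuration.nix ];
};
```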

The flake.lock file pins exact revisions of every input. When you commit this lock file to Git, anyone cloning your repo builds against the same nixpkgs commit. No surprises.

To update your inputs:

nix flake update           # Update all inputs
nix flake update nixpkgs   # Update just nixpkgs (older Nix: nix flake lock --update-input nixpkgs)

Then commit the updated flake.lock. That commit is your upgrade record. You can diff it, revert it, or bisect it if something breaks.

Modules: Composable Building Blocks

The real power of a multi-host NixOS setup is modules. Instead of duplicating configuration across machines, you extract shared concerns into reusable modules and import only what each host needs.

The Common Module

Every host imports common.nix. This is your baseline — the packages, services, and settings that every machine in your fleet should have:

# modules/common.nix
{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [
    vim git curl wget htop btop
    tmux fzf direnv python3
    smartmontools lm_sensors
  ];

  services.openssh = {
    enable = true;
    settings = {
      PasswordAuthentication = false;
      PermitRootLogin = "no";
    };
  };

  programs.zsh.enable = true;
  # ... shell aliases, editor config, GPG setup
}

Change the SSH config here and it propagates to every host on the next rebuild. Add a package and every machine gets it.

Service Modules

Individual services get their own modules. A host opts in by importing it:

# modules/docker.nix
{ config, pkgs, ... }:
{
  virtualisation.docker = {
    enable = true;
  };
}
# modules/samba.nix
{ config, pkgs, ... }:
{
  services.samba = {
    enable = true;
    settings = {
      global = {
        workgroup = "WORKGROUP";
        security = "user";
        "hosts allow" = "192.168. 127.0.0.1 localhost";
      };
      public = {
        path = "/srv/samba/public";
        "read only" = "yes";
        "guest ok" = "yes";
      };
      private = {
        path = "/srv/samba/private";
        "read only" = "no";
        "valid users" = "leed";
      };
    };
  };
}

Each module is self-contained. You can read samba.nix and understand the entire Samba configuration without looking at anything else.

Host Configurations

Each host’s configuration.nix imports the modules it needs and adds host-specific settings:

# hosts/locust/configuration.nix
{ config, pkgs, ... }:
{
  imports = [
    ./hardware-configuration.nix
    ../../modules/common.nix
    ../../modules/users.nix
    ../../modules/samba.nix
    ../../modules/avahi.nix
    ../../modules/wsdd.nix
    ../../modules/incus.nix
    ../../modules/zfs.nix
  ];

  networking.hostName = "locust";
  boot.kernelPackages = pkgs.linuxPackages_6_6;  # LTS for ZFS

  # Host-specific networking, NFS mounts, etc.
}

The import list tells you at a glance what this machine does. Locust runs Incus containers on ZFS with Samba file sharing. Anubis runs Docker instead. Griffin is the NFS server. Each host is a composition of shared modules plus its own specifics.

Home Manager: User-Level Configuration

Home Manager extends the declarative model to user environments. Instead of manually configuring your editor, shell, and tools, you declare them:

# home/leed.nix
{ config, pkgs, ... }:
{
  home.packages = with pkgs; [
    gleam uv zig erlang odin
  ];

  programs.neovim = {
    enable = true;
    plugins = with pkgs.vimPlugins; [
      nvim-lspconfig
      nvim-treesitter.withAllGrammars
    ];
    extraLuaConfig = ''
      -- LSP, keybindings, indentation...
    '';
  };

  programs.ssh = {
    enable = true;
    addKeysToAgent = "yes";
  };
}

Home Manager integrates into the flake alongside the system configuration. The user’s development environment is versioned right alongside the system infrastructure.
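One common way to do that wiring is Home Manager's NixOS module, added to a host's module list in the flake (a sketch, assuming the module-based integration rather than standalone home-manager):

```nix
# Hypothetical nixosConfigurations entry with Home Manager attached
anubis = nixpkgs.lib.nixosSystem {
  inherit system;
  modules = [
    ./hosts/anubis/configuration.nix
    home-manager.nixosModules.home-manager
    {
      home-manager.useGlobalPkgs = true;      # reuse the system nixpkgs
      home-manager.useUserPackages = true;    # install to /etc/profiles
      home-manager.users.leed = import ./home/leed.nix;
    }
  ];
};
```

With this approach, `nixos-rebuild switch` rebuilds the user environment and the system in one step.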

The Git Workflow

Here’s where it all comes together. Your daily workflow looks like this:

Making Changes

# Edit the configuration
vim /etc/nixos/modules/common.nix    # Add a package, tweak a service

# Test the build without switching (a plain build doesn't need root)
nixos-rebuild build --flake .

# If it builds, apply it
sudo nixos-rebuild switch --flake .#$(hostname)

Committing

cd /etc/nixos
git diff                              # Review what changed
git add modules/common.nix
git commit -m "Add smartmontools to common packages"

Every commit is a snapshot of your entire system state. The commit message is your changelog.

Rolling Back

Something broke after a rebuild? You have two options:

NixOS generations — every nixos-rebuild switch creates a new generation. Boot into a previous one from GRUB or roll back immediately:

sudo nixos-rebuild switch --rollback

Git revert — if you want to undo a specific configuration change:

git revert HEAD        # Undo the last commit
sudo nixos-rebuild switch --flake .#$(hostname)

The NixOS generation rollback is instant (it’s already built). The Git revert requires a rebuild but gives you more precise control over what to undo.
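The Git side of that revert can be rehearsed in a throwaway repo. The file and its contents below are stand-ins for a real NixOS config, and the rebuild step is omitted:

```shell
# Demo: revert a config change as a new commit, in a scratch repo
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
# Helper so commits work without global Git identity configured
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

echo 'boot.kernelPackages = pkgs.linuxPackages_6_6;' > configuration.nix
git add configuration.nix
g commit -qm "Pin LTS kernel for ZFS"

echo 'boot.kernelPackages = pkgs.linuxPackages_latest;' > configuration.nix
g commit -qam "Try the latest kernel"

# Undo just that change; history keeps both the attempt and the undo
g revert --no-edit HEAD
grep -o 'linuxPackages_6_6' configuration.nix   # → linuxPackages_6_6
```

On a real system, the `nixos-rebuild switch` after the revert is what actually restores the running configuration.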

Deploying to Multiple Hosts

Since all hosts share one repo, you can rebuild any machine from any machine (with SSH access):

# Rebuild locust from your current machine (remote activation needs root,
# hence --use-remote-sudo)
nixos-rebuild switch --flake .#locust --target-host locust --use-remote-sudo

# Or SSH in and pull
ssh locust
cd /etc/nixos && git pull && sudo nixos-rebuild switch --flake .#locust

A common pattern is to define shell aliases that make this seamless:

shellAliases = {
  rebuild = "sudo nixos-rebuild switch --flake /etc/nixos#$(hostname)";
  update = "cd /etc/nixos && nix flake update";
};

Then deploying a change is: edit, rebuild, verify, git commit, push, SSH to next host, git pull && rebuild.

Branching for Experiments

Git branches are perfect for testing risky changes:

git checkout -b try-new-kernel
# Edit configuration to use linux_latest
sudo nixos-rebuild switch --flake .#$(hostname)
# Test for a few days...
# If stable:
git checkout master && git merge try-new-kernel
# If broken:
git checkout master
sudo nixos-rebuild switch --flake .#$(hostname)

You get the safety of NixOS rollbacks and the history of Git branches.

Patterns That Work Well

Separate What Changes From What Doesn’t

hardware-configuration.nix is auto-generated and rarely changes. Host-specific networking is relatively stable. Modules like common.nix change frequently as you add tools. Structure your commits accordingly — a package addition shouldn’t be tangled with a network change.

One Module Per Concern

Don’t put Docker, Samba, and ZFS in the same file. Separate modules mean separate Git history, cleaner diffs, and the ability to compose hosts differently without touching shared code.

Pin Your Inputs

Always commit flake.lock. Updating nixpkgs should be an intentional, reviewable action — not something that happens implicitly. When an update breaks something, git log flake.lock tells you exactly when it changed and git revert fixes it.
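That lock-file history can be isolated with a path filter. A scratch-repo demonstration (file contents are placeholders):

```shell
# Demo: git log scoped to flake.lock shows only upgrade commits
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

printf '{ "nodes": {} }\n' > flake.lock
git add flake.lock
g commit -qm "Initial lock"

printf '{ "nodes": { "nixpkgs": "rev-b" } }\n' > flake.lock
g commit -qam "flake.lock: bump nixpkgs"

echo vim > packages.txt
git add packages.txt
g commit -qm "Add vim"

# Only the commits that touched the lock file appear:
git log --format=%s -- flake.lock
# → flake.lock: bump nixpkgs
# → Initial lock
```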

Use Aliases for Common Operations

Define rebuild, update, and similar aliases in your NixOS config itself. This way the workflow tools are part of the versioned configuration:

shellAliases = {
  rebuild = "sudo nixos-rebuild switch --flake /etc/nixos";
  update = "cd /etc/nixos && nix flake update";
  nrs = "sudo nixos-rebuild switch --flake /etc/nixos";
};

Commit Messages as Documentation

Your Git log becomes the authoritative history of your infrastructure:

3852715 Update Configurations
c601a0a Updated NFS
ef4a882 Changed permissions on modules files
fb5c11f Adding separate ZFS Common module

Over time, this is more useful than any documentation you could write. You know exactly what changed, when, and (with good messages) why.

Why This Works

The combination of NixOS and Git gives you properties that are hard to achieve any other way:

  • Reproducibility: flake.lock + Git commit = exact system state, rebuildable from scratch.

  • Auditability: git log shows every change ever made to your infrastructure.

  • Rollback: Both NixOS generations (instant, boot-level) and Git revert (precise, change-level).

  • Multi-host consistency: Shared modules keep all machines in sync. Configuration drift all but disappears, because the configuration is the system.

  • Experimentation: Branches let you try things without fear. The worst case is git checkout master && rebuild.

Traditional configuration management tools like Ansible or Puppet layer declarative intent on top of an imperative system. NixOS is declarative all the way down. Git tracks the declarations. Together, they form a workflow where your infrastructure is code — not metaphorically, but literally.

Getting Started

If you’re new to this workflow:

  1. Start with a single configuration.nix in /etc/nixos. Don’t over-engineer the structure upfront.

  2. Run git init in /etc/nixos and make your first commit.

  3. Commit after every successful rebuild. Build the habit.

  4. Extract modules when you add a second host or when a section of your config grows large enough to warrant isolation.

  5. Adopt flakes when you’re comfortable with basic NixOS. Flakes add reproducibility but also complexity.

  6. Add Home Manager when you want your user environment (editor, shell, tools) versioned alongside the system.

The key insight is that NixOS already thinks in snapshots — every rebuild is a generation. Git adds the why to each snapshot. Together, they turn system administration from an art into engineering.
