Nix, Docker - or both?

"It worked on my machine!" Most of us who've worked in IT for any length of time have heard this complaint. You develop your software, you test it exhaustively. It looks great. Then you deploy it into production and it crashes. And the blame game starts.
- 8 min read
Nix, Docker - or both?

"It worked on my machine!"

Most of us who've worked in IT for any length of time have heard this complaint. You develop your software, you test it exhaustively. It looks great. Then you deploy it into production and it crashes. And the blame game starts.

Why does this happen? And how can you avoid it?

The Problem

The main cause of the "it worked on my machine" syndrome is that almost no software stands alone - it depends not only on the host's operating system, but on other software that may be installed on it. These dependency-related problems can be a nightmare to track down and fix.

It's probably worth mentioning another common syndrome: "It worked before I made changes - and it wasn't my changes that broke it!" This usually happens because something in the development environment has changed, and just rebuilding the software caused it to break. Your system may have updated some of the software your package depends on without you even being aware of it - and your package isn't compatible with the new versions.

Suggested Solutions

How can you avoid this? There are many proposed solutions, but you'll often be told: "Use Docker". Others say, "Use Nix." So which should you use?

Docker or Nix?

That's a bit like asking whether you should have a washing machine or a vacuum cleaner. Although they both save time in the house, they don't do the same job. If you can afford them, you have both.

Likewise, Nix and Docker are designed for different purposes:

  • Docker is a deployment tool, whereas Nix is a package management tool.
  • Docker offers a reproducible run-time environment, whereas Nix offers a reproducible build.

You'd use the right tool for the right job. And since they are both free and work well together, there's no reason why you shouldn't use both.

In this article, I'll look at both tools: what they do best, how they make your workflow easier and more reliable, and how they complement each other to get the best of both technologies.

What is Docker?

Let's look at Docker first.

Docker allows you to package your software as images containing all the dependencies it needs to run successfully. At run time, your image is loaded into a container, which is isolated from the host machine. If the host machine is running a completely different operating system, it doesn't matter - the software in the container runs exactly the same as it did on your development machine.

What's the difference between a container and a virtual machine?

A virtual machine emulates an entire computer, including the hardware and the operating system. Virtual machines are resource-hungry - they require large amounts of RAM and hard disk space, since they must run an entire operating system.

A container is a more lightweight solution. Containers ideally contain a software application complete with the libraries it needs to perform its task. However, it takes skill and discipline to build a container that simple, and many containers are built on the entire root filesystem of the underlying operating system, as well as including build-time dependencies. This can make them quite large.

Docker containers are generally at least 30% more resource-efficient than virtual machines, and one benchmark found a container to be 26 times more efficient in a particular case. If you're interested in actual numbers, published benchmark comparisons go into the performance differences in some detail.

This performance gain only applies when running on Linux: on macOS, the container has to run inside Docker's environment, which is effectively a virtual machine. On Windows, Docker must run on WSL (Windows Subsystem for Linux), which comes with its own overheads and issues. WSL 2 uses Windows Hyper-V, so again, effectively, it's using a virtual machine.

How does Docker work?

Docker uses various features of Linux that make containerisation possible. These include:

  • Namespaces. These allow various resources, such as networking and file systems, to be isolated.
  • Control groups. These allow a group of processes to be managed and limited in their access to resources.

So under Linux, a container can run in a sandbox, where it’s isolated from the host machine and any other containers. Under other operating systems, containers run in a sandbox within Docker's virtual machine.
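To make this concrete, here is a hedged sketch of how those kernel features surface in everyday commands. The flags shown are standard `docker run` options that map onto control groups and namespaces; the image name `myapp` is a placeholder.

```shell
# Limit the container's cgroup resources: 256 MB of RAM, one CPU.
# --network none places the container in an isolated network namespace.
docker run --memory=256m --cpus=1 --network none myapp

# Namespaces can also be explored without Docker, e.g. with unshare(1):
# this starts a shell in fresh PID and mount namespaces.
sudo unshare --pid --mount --fork /bin/sh
```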

When is Docker useful?

Docker is useful wherever you need to quickly and reliably distribute software on a variety of different platforms. Here are some of the scenarios where Docker has been used to advantage.

  • Microservices. This is a popular trend in system architecture, which gained momentum with the advent of Docker. Instead of developing monolithic applications to cater to the entire needs of an organization, the software is designed as a collection of microservices that can be developed independently and can interact with each other. This means updates can be developed and deployed quickly and efficiently. Docker is perfect for this, as each service works in its own container and is not affected by dependencies that may have changed when other services are rebuilt.
  • Developing for the cloud. With Docker, your applications can be run anywhere, and you're no longer subject to vendor lock-in.
  • Creating reproducible environments for testing, staging, and production. Applications may need various services to be running to test them, and this can easily be achieved with Docker containers. You can be sure that the test and production environments are identical.

The bottom line is: Docker is great for any application that changes often and is a pain to deploy!

Developing for Docker

Let's have a quick look at how an application is packaged and deployed using Docker. You build an image, which can be used anywhere to create a running Docker container:

  • List all your dependencies, starting with the operating system and version your application needs to run
  • Docker Hub, the Docker registry, contains useful base images that you can use when building your own image. Examples include various flavours and versions of Linux, and language-related images such as Python. You can also build your own base images for future use, or use images offered by third parties. These images can be stored in any registry, such as GitHub's, or on Docker Hub.
  • You can layer these images to create the exact environment you need
  • Create a configuration file, known as a Dockerfile, that defines exactly what images and local files you'll use to build your image
  • The image can now be built, tested, and pushed to Docker Hub or your own registry. From there, it can be pulled to any Linux, macOS, or Windows machine, and used to bring up a container running your software.
  • Since an application may consist of more than one container, each running a different service, Docker provides the Docker Compose utility to let you manage a group of containers with a single command.
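As a concrete sketch of the steps above, here is a minimal Dockerfile; the base image tag and the application file `app.py` are assumptions for illustration, not taken from the article.

```dockerfile
# Start from an official base image pulled from Docker Hub.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies first, so this layer is cached
# and only rebuilt when requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself into its own layer.
COPY app.py .

CMD ["python", "app.py"]
```

You would then build and push the image with `docker build -t myorg/myapp .` followed by `docker push myorg/myapp`, and bring up a container anywhere with `docker run myorg/myapp`.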

As you can see, Docker eliminates the 'it worked on my machine' syndrome easily and efficiently. What it doesn't address is the 'it worked until I rebuilt it' syndrome.

This is because the Docker build process doesn’t have a mechanism to restrict the build to a specific version of packages the application depends on. Typically, Docker builds use commands such as apt-get to load the relevant software, and this simply loads the latest version. Sometimes you’ll find your application doesn’t work with the latest version, and occasionally the software may not even be available anymore.
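For example, a build step like the following installs whatever version the repository offers on the day you build, so two builds of the same Dockerfile can produce different images. The package and version number shown are placeholders.

```dockerfile
# Unpinned: installs whatever version the repository offers today.
RUN apt-get update && apt-get install -y curl

# Apt does allow pinning a specific version, but the pinned version can
# later disappear from the repository, breaking the build entirely:
# RUN apt-get update && apt-get install -y curl=8.5.0-2ubuntu10
```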

And that’s where Nix comes in.

What is Nix?

Nix is a package management system with a difference. Nix aims to solve both the 'it worked on my machine' and the 'it worked until I rebuilt it' syndromes. With Nix, every dependency is locked into a specific version. It has a unique approach, where instead of software being installed into a common bin directory, all software resides in the Nix store, and is identified uniquely by a hash of its inputs. When your software is deployed, Nix will ensure that the exact versions of libraries and other dependencies are available on the host machine, and will install them if they're not.

If you need to have different versions of the same software, or the libraries it depends on - no problem. They can co-exist, and you're able to specify the version you need for a given task.

How does Nix work?

Nix includes its own functional language, which is used to define exact instructions for building a package. When the package is built, Nix establishes a dependency graph, and calculates a hash from these dependencies. This hash is used to create a unique name for the build, and the package is then built and placed in the Nix store.
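As a hedged sketch, a package definition written in the Nix language looks something like this; the package name, version, and source layout are assumptions for illustration.

```nix
# default.nix - a minimal package definition.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "hello-app";
  version = "1.0";
  src = ./.;

  # Build inputs become part of the dependency graph that Nix hashes.
  buildInputs = [ pkgs.curl ];

  installPhase = ''
    mkdir -p $out/bin
    cp hello-app $out/bin/
  '';
}
```

Running `nix-build` places the result in the store under a path like `/nix/store/<hash>-hello-app-1.0`, where the hash is derived from every input to the build.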

Except in very rare cases, builds are reproducible for all practical purposes, although there may be slight bit-level differences due to compiler idiosyncrasies. When the application is deployed, it will always pick up the correct versions of all libraries and other dependencies. If the application works on the test machine, it will work in production. And if you rebuild the software, it will refer to the correct versions of any dependencies, and will not break.

It also has the advantage that it's very easy to roll back to a previous version of the software without disturbing other applications. You can do this with a single command.
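For instance, with Nix's profile generations a rollback really is a one-liner; on NixOS the system-wide equivalent is `nixos-rebuild switch --rollback`.

```shell
# Switch the current profile back to the previous generation.
nix-env --rollback
```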

When is Nix useful?

Nix can be used for building and deploying any type of software, but it is particularly useful for:

  • Software that changes often, especially when continuous integration is in place
  • Critical applications where you can't afford downtime while a new version is installed. There’s no need to bring down the system during installation since the old and new versions can co-exist, and switching versions after the installation takes only a few moments.
  • Where a development environment and a production environment are held on the same machine
  • When you need to be able to set up the exact development environment quickly for new team members
  • When software dependencies need to be auditable. Nix can generate a full and accurate dependency graph, making it simple to audit the SBOM (Software Bill of Materials).

Developing with Nix

Setting up the development environment

Nix allows you to easily create a reproducible development and testing environment. The Nixpkgs repository contains a huge collection of useful software, including language compilers, IDEs, libraries, and utilities for all the major programming languages, and lots of useful development tools.

You can also include your own repositories and files.

You would define the software you need in a Nix configuration, and then install everything with a single command.
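A minimal development environment might be declared like this; the particular tools listed are just an example.

```nix
# shell.nix - a reproducible development environment.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # Everything a new team member needs, supplied by Nix.
  buildInputs = [
    pkgs.python3
    pkgs.git
    pkgs.gnumake
  ];
}
```

Running `nix-shell` in the project directory then drops you into a shell with exactly these tools available, regardless of what is installed on the host.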

Building and deploying your software

You specify all the dependencies and steps needed to build the software in a Nix configuration. Your software is then built with a single command, and placed in the Nix store.

Nix allows various deployment methods, and there are useful utilities such as cachix, deploy-rs and Colmena, which simplify distribution.

Why would you use Nix and Docker together?

It's simple to use Nix to build your Docker images, and there are several reasons why you might want to do this.

  • To ensure reproducible builds, and eliminate the 'it worked until I rebuilt it' syndrome. As we've seen, Docker doesn’t address this problem. If you use Nix to deliver packages required by the build, Nix ensures that every build uses exactly the same versions of all software as the original.
  • Nixpkgs includes some really useful tools for building images.
    • In a complex system, much thought and effort needs to go into how your image should be layered for best results. With Nix, you don't need to do this: you just specify what you want, and Nix works out the best way of achieving it.
    • When built with Nix, the image is generally very much smaller and more efficient because:
      • Nix includes only the dependencies that you need.
      • Nix makes it easy to start with an empty image, instead of starting with the root file system of an operating system distribution, and thus including a lot of unneeded files.
      • Nix includes only the runtime environment, not the build environment
    • The build process is usually much faster.
    • Since Nix builds software inside a sandbox, you will only be able to include the dependencies you've specified. You can't accidentally include untrusted sources or software that may infringe license agreements. This also prevents accidental dependencies being downloaded from the Internet.
    • You don't have to clean up files in your image that were only needed for the build.
  • You can use Nix tools to deploy your system easily.
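As a sketch of what this looks like in practice, Nixpkgs provides `dockerTools`, which can build an image directly from a package. The package and tag here are placeholders.

```nix
# image.nix - build a Docker image with Nix.
{ pkgs ? import <nixpkgs> {} }:

# buildLayeredImage works out an efficient layering automatically and
# includes only the runtime closure of its contents - no base OS layer.
pkgs.dockerTools.buildLayeredImage {
  name = "hello-app";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

`nix-build image.nix` produces a tarball that can be loaded with `docker load`, and rebuilding it with the same inputs yields the same image.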

Final thoughts

Should you use Nix or Docker? The answer is: use each of them for what they excel at. But the benefits of using both together are huge. My bottom line is: use both!
