Speeding up Docker on MacOS

Roughly a 6 minute read by Matthew


If you’ve moved your development environment to Docker, you might have noticed that your web application stack is slower than the native environment you were used to. There are things we can do to bring your response times back down to where they were (or thereabouts).

Overview

1. Volume optimisations
Modify your volume bind-mount consistency. Docker leaves cache consistency tuning to the user, and we prefer delegated for most use cases.

2. Use shared caches
Make sure that common resources are shared between projects, reducing unnecessary downloads and compilation.

3. Increase system resources
The default RAM limit is 2GB; raising it to 4GB won’t affect overall system performance. Consider increasing the CPU limit too.

4. Further considerations
A few final tips and tricks!

Introduction

Most of our web projects revolve around a common Linux, Nginx, MySQL, PHP (LEMP) stack. Historically, these components were installed on our machines using Homebrew, a virtual machine, or some other application like MAMP.

At Engage, all our developers use Docker for their local environments. We’ve also moved most of our pre-existing projects to a Dockerised setup, meaning a developer can begin working on a project without having to install any prerequisites.

When we first started using Docker, it was incredibly slow in comparison to what we were used to: sharp, snappy response times similar to those of our production environments. The development quality of life wasn’t the best.

Why is it slower on Mac?

In Docker, we can bind-mount a volume on the host (your Mac) into a Docker container. This gives the container a view of the host’s file system: in literal terms, a particular directory in the container points to a directory on your Mac. Any writes on either the host or the container are then reflected on the other side.
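
For example, the effect of a bind-mount is easy to see with a throwaway container (the image and paths here are purely illustrative):

docker run --rm -v "$PWD":/var/www php:7.3-fpm ls /var/www
docker run --rm -v "$PWD":/var/www php:7.3-fpm touch /var/www/hello.txt
ls hello.txt   # created inside the container, now visible on the host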

On Linux, keeping a consistent, guaranteed view between the host and container has very little overhead. In contrast, there is a much bigger overhead on MacOS and other platforms in keeping the file system consistent, which leads to performance degradation.

Docker containers run on top of a Linux kernel, meaning Docker on Linux can utilise the native kernel, and the underlying virtual file system is shared between the host and container.

On Mac, we’re using Docker Desktop. This is a native MacOS application, which is bundled with an embedded hypervisor (HyperKit). HyperKit provides the kernel capabilities of Linux. However, unlike Docker on Linux, any file system changes need to be passed between the host and container via Docker for Mac, which can soon add a lot of additional computational overhead.

1. Volume optimisations

We’ve identified bind-mounts can be slow on Mac (see above).

One of the biggest performance optimisations you can make is relaxing the guarantee that file system data is perfectly replicated between the host and container. Docker defaults to a consistent guarantee that the host’s and container’s file systems reflect each other.

For the majority of our use cases at Engage, we don’t actually need perfect consistency between container and host. We can accept slight delays and temporary discrepancies in exchange for greatly increased performance.

The options Docker provides are:

Consistent

The host and container are perfectly consistent. Every time a write happens, the data is flushed to all participants of the mount’s view.

Cached

The host is authoritative in this case. There may be delays before writes on a host are available to the container.

Delegated

The container is authoritative. There may be delays until updates within the container appear on the host.

The file system delays between the host and the container aren’t perceptible to humans. However, certain workloads may require stronger consistency. I personally default to delegated, as our bind-mounted volumes generally contain source code: data only changes when I hit save, and it has already been replicated by the time I’ve had a chance to react.

Some other processes, such as our shared Composer and Yarn caches, could benefit from Docker’s cached option: these programs are persisting data, so in this case it might be more important that writes are perfectly replicated to the host.
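
The same suffixes can be used outside of Compose too; with plain docker run they’re appended to the -v flag. A minimal sketch (the image and paths are illustrative, mirroring the Compose example below):

docker run --rm \
  -v "$PWD":/var/www:delegated \
  -v ~/.composer/docker-cache:/root/.composer:cached \
  php:7.3-fpm php -v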


See an example of a docker-compose.yml configuration below:

volumes:
    - ../:/var/www:delegated
    - ./local.app.php.ini:/usr/local/etc/php/php.ini:delegated
    - ~/.composer/docker-cache/:/root/.composer:cached

Docker doesn’t do this by default, and with good reason: a system that was not consistent by default would behave in ways that were unpredictable and surprising. Full, perfect consistency is sometimes essential.

Further reading: https://docs.docker.com/docker-for-mac/osxfs-caching/

2. Using shared caches

Most of our projects use Composer for PHP and Yarn for frontend builds. Every time we start a Docker container, it’s a fresh instance of itself. HTTP requests and downloading payloads over the web add a lot of latency, bringing the initial builds of projects to a snail’s pace, as Composer and Yarn have to re-download all of their packages each time.

Another great optimisation is to bind-mount a ‘docker cache’ volume into the container, and share it across similar projects. Composer then pulls packages from this local cache instead of over the web.


See below for an example of bind-mounting a Docker cache into the container; we do this in the Docker Compose configuration:

app:
  image: php:7.3-fpm
  volumes:
    - ../:/var/www:delegated
    - ~/.composer/docker-cache/:/root/.composer:cached
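
The same idea works for Yarn. Treat this as a sketch: the host path is arbitrary, and the cache location inside the container depends on the Yarn version and the user it runs as (Yarn 1 defaults to ~/.cache/yarn; confirm with yarn cache dir):

node:
  image: node:12
  volumes:
    - ../:/var/www:delegated
    - ~/.yarn/docker-cache/:/root/.cache/yarn:cached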

3. Increasing system resources

If you’re using a Mac, chances are you have a decent amount of RAM available to you. Docker allocates 2GB of RAM to its VM by default, so quite a simple performance tweak is to increase the RAM limit available to Docker. It won’t hurt anything to give Docker Desktop an extra 2GB of RAM, and it will greatly improve those memory-intensive operations.

You can also tweak the number of CPUs available, particularly for times of increased I/O load, e.g. running yarn install. Docker will be synchronising a lot of file system events and actions between the host and container, which is particularly CPU intensive. By default, Docker Desktop for Mac is set to use half the number of processors available on the host machine; increasing this limit can help alleviate I/O load.
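
Both limits live in Docker Desktop’s preferences. Once you’ve changed them, you can sanity-check what the Linux VM actually sees from inside a container, for example with a throwaway Alpine image:

docker run --rm alpine sh -c "nproc; free -m"

nproc prints the CPU count and free -m the memory (in MB) available to Docker’s VM, rather than your Mac’s totals.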

4. Further considerations

This post isn’t exhaustive, as I’m sure there are other optimisations that can be made based on the context of each kind of setup. In our use cases though, we’ve found these tweaks can greatly improve performance.

Some final things to consider are:

  • Ensure the Docker app is running the latest version of Docker for Mac.
  • Ensure your primary drive, Macintosh HD, is formatted as APFS. This is Apple’s latest proprietary file system format and comes with a few performance optimisations over historical formats (see the check below).
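
If you’re not sure which format your drive is using, diskutil will tell you without modifying anything:

diskutil info / | grep "File System Personality"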

Final notes

Docker are always working on improving the performance of Docker for Mac, so it’s a good idea to keep your Docker app up to date in order to benefit from these optimisations. Much of the file system I/O performance can be improved within the hypervisor/VM layers. Reducing I/O latency requires shortening the data path from a Linux system call to MacOS and back again. Each component in the data path requires tuning, and in some cases, a significant amount of development effort from the Docker team.