flock(2) behaviour on macOS and Linux

Note to self: flock is simple, but not as simple as you think

I’ve been using flock(2) — a system call for advisory locking — in a library I’ve been porting to Rust. The original library ran only on Linux but I wanted to support macOS too, so I set about checking my assumptions.

Ubuntu 17.04’s man page for flock(2) says:

A process may hold only one type of lock (shared or exclusive) on a file. Subsequent flock() calls on an already locked file will convert an existing lock to the new lock mode.

macOS Sierra’s man page for flock(2) says:

A shared lock may be upgraded to an exclusive lock, and vice versa, simply by specifying the appropriate lock type; this results in the previous lock being released and the new lock applied (possibly after other processes have gained and released the lock).

To me this reads as saying that, on Linux, we can switch between shared and exclusive locks without losing out to another process that’s trying to grab an exclusive lock.

But on macOS it reads like other processes will be able to wriggle in.

This rang little alarm bells, so I investigated some more. It turned out to make sense in the end, and to be consistent between the platforms, but the behaviour is more nuanced than the man pages hint at.

I also had the feeling that I’ve investigated this before, so this time I’m writing it down.

The importance of running tests quickly

Here, MAAS, and everywhere

MAAS’s inception date was 16th January 2012 and it has been continually developed ever since, including the development of many, many unit tests. Since almost the beginning we’ve had a landing robot that runs those unit tests before merging a new branch into trunk.

At the time of writing it runs 14337 tests, and that number grows daily. Until recently the landing robot would take over an hour to test and merge each branch.

This is too slow — I’ll explain why I think that — and this is how my journey to fix it began.

Bazaar repositories for fun/profit/shenanigans

Save time and disk with Bazaar's shared repositories

Bazaar can support you whether you like the Git model of a single working tree for each clone of your repository, or prefer to have multiple working trees, one per branch.

When dealing with large projects the latter can get slow and disk-hungry. This is because, by default, each new working tree created by bzr branch a-branch new-branch also holds a complete copy of the repository history.

I use a mix of both development models when I’m using Bazaar. Fortunately there’s an easy and out-of-the-box way to prevent those slowdowns and get back your disk space: shared repositories. Read on to find out how.
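As a quick taste of what’s coming, the core of the workflow is bzr init-repo, which creates a shared repository; branches created beneath it store their history there instead of each keeping a full copy. The project name and branch names below are placeholders.

```shell
# Create a shared repository; branches made under this directory
# will share its history store instead of copying it.
bzr init-repo ~/src/myproject
cd ~/src/myproject

# New branches now reuse the shared history, so branching is
# faster and uses far less disk than a standalone branch.
bzr branch a-branch new-branch
```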