System-wide mutex in Python on Linux

Try the ilock library:

from ilock import ILock

with ILock('Unique lock name'):
    # This block runs as a system-wide single instance
    ...

The "traditional" Unix answer is to use file locks. You can use lockf(3) to lock sections of a file so that other processes can't edit it; a very common abuse is to use this as a mutex between processes. The Python equivalent is fcntl.lockf.

Traditionally you write the PID of the locking process into the lock file, so that deadlocks due to processes dying while holding the lock are identifiable and fixable.

This gets you what you want, since your lock is in a global namespace (the filesystem) and accessible to all processes. This approach also has the perk that non-Python programs can participate in your locking. The downside is that you need a place for this lock file to live; also, some filesystems don't actually lock correctly, so there's a risk that it will silently fail to achieve exclusion. You win some, you lose some.
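As a sketch of that traditional pattern (the path and helper names here are my own, not a standard): open or create the lock file, take an exclusive lockf lock, and record the holder's PID in the file:

```python
import fcntl
import os

def acquire_pidfile_lock(path):
    """Open (or create) the lock file, take an exclusive lock on it,
    and record our PID so a stale lock can be diagnosed."""
    fp = open(path, "a+")
    fcntl.lockf(fp, fcntl.LOCK_EX)   # blocks until the lock is free
    fp.seek(0)
    fp.truncate()
    fp.write(str(os.getpid()))
    fp.flush()
    return fp  # keep this open: closing the file releases the lock

def release_pidfile_lock(fp):
    fcntl.lockf(fp, fcntl.LOCK_UN)
    fp.close()
```

Another process that finds the file locked can read the PID out of it to see who holds the lock.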


My answer overlaps with the other answers, but to add something people can copy-paste, I often do something like this:

import fcntl

class Locker:
    def __enter__(self):
        # The lock file must already exist (create it with `touch lockfile.lck`)
        self.fp = open("./lockfile.lck")
        fcntl.flock(self.fp.fileno(), fcntl.LOCK_EX)
        return self

    def __exit__(self, _type, value, tb):
        fcntl.flock(self.fp.fileno(), fcntl.LOCK_UN)
        self.fp.close()

And then use it as:

import time

print("waiting for lock")
with Locker():
    print("obtained lock")
    time.sleep(5.0)

To test, run touch lockfile.lck, then run the above code in two or more terminals (from the same directory).
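A related variant (my own addition, not part of the answer above): passing LOCK_NB makes flock() raise BlockingIOError immediately instead of waiting, which is handy for "exit if another instance is already running" behaviour:

```python
import fcntl

def try_lock(path="./lockfile.lck"):
    """Try to take the lock without blocking.

    Returns the open file (hold it to keep the lock) or None if
    another process already holds the lock."""
    fp = open(path, "a")  # "a" creates the file if it doesn't exist
    try:
        fcntl.flock(fp.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fp          # caller holds the lock until fp is closed
    except BlockingIOError:
        fp.close()
        return None        # someone else holds the lock
```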

UPDATE: smwikipedia pointed out that my solution is Unix-specific. I needed a portable version recently and came up with the following, based on an idea from a random GitHub project. I'm not sure the seek() calls are needed, but they're there because the Windows API locks a specific region of the file. If you're not using the file for anything other than locking, you can probably remove them.

import os

if os.name == "nt":
    import msvcrt

    def portable_lock(fp):
        fp.seek(0)
        msvcrt.locking(fp.fileno(), msvcrt.LK_LOCK, 1)

    def portable_unlock(fp):
        fp.seek(0)
        msvcrt.locking(fp.fileno(), msvcrt.LK_UNLCK, 1)
else:
    import fcntl

    def portable_lock(fp):
        fcntl.flock(fp.fileno(), fcntl.LOCK_EX)

    def portable_unlock(fp):
        fcntl.flock(fp.fileno(), fcntl.LOCK_UN)


class Locker:
    def __enter__(self):
        # Open for appending: this creates the file if needed, and the
        # Windows locking call requires the file to be open for writing.
        self.fp = open("./lockfile.lck", "a")
        portable_lock(self.fp)
        return self

    def __exit__(self, _type, value, tb):
        portable_unlock(self.fp)
        self.fp.close()

The POSIX standard specifies inter-process semaphores, which can be used for this purpose: http://linux.die.net/man/7/sem_overview

The multiprocessing module in Python is built on this API and others. In particular, multiprocessing.Lock provides a cross-process "mutex". http://docs.python.org/library/multiprocessing.html#synchronization-between-processes

EDIT to respond to the edited question:

In your proof of concept, each process constructs its own Lock(), so you have two separate locks; that is why neither process waits. You need to share the same Lock object between the processes. The section I linked to in the multiprocessing documentation explains how to do that.
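As a sketch of sharing one lock (the worker/demo names are my own invention): create the Lock once in the parent and pass it to each child, so both children contend on the same lock. Each child records when it entered the critical section, which shows the sections did not overlap:

```python
import multiprocessing
import time

def worker(lock, results):
    # All children receive the SAME Lock object, so this blocks
    # until no other process is inside the critical section.
    with lock:
        results.append(time.monotonic())
        time.sleep(0.2)

def demo():
    lock = multiprocessing.Lock()  # one lock, created in the parent
    with multiprocessing.Manager() as manager:
        results = manager.list()
        procs = [multiprocessing.Process(target=worker, args=(lock, results))
                 for _ in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return list(results)

if __name__ == "__main__":
    first, second = sorted(demo())
    print("critical sections started %.2fs apart" % (second - first))
```

Had each child built its own Lock() (as in the proof of concept), both timestamps would be nearly identical, because neither lock would ever be contended.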