Prevent duplicate cron jobs from running

Solution 1:

There are a couple of programs that automate this, taking away the annoyance and potential bugs of doing it yourself, and avoiding the stale-lock problem (a risk if you're just using touch) by using flock behind the scenes. I've used lockrun and lckdo in the past, but these days there's flock(1) (in newer versions of util-linux), which is great. It's really easy to use:

* * * * * /usr/bin/flock -n /tmp/fcj.lockfile /usr/local/bin/frequent_cron_job
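The -n flag tells flock to give up immediately, without running the command, if another instance still holds the lock, which is exactly the behaviour you want when you don't want runs to stack up.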

Solution 2:

The best way to do this in shell is to use flock(1):

(
  # Take an exclusive lock on file descriptor 99, waiting up to 5 seconds; bail out if it cannot be acquired
  flock -x -w 5 99 || exit 1
  ## Do your stuff here
) 99>/path/to/my.lock
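The redirection on the subshell opens /path/to/my.lock on file descriptor 99; flock then takes an exclusive lock (-x) on that descriptor, waiting at most 5 seconds (-w 5) before giving up. The descriptor number 99 is arbitrary; it just has to match on both sides.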

Solution 3:

Actually, flock -n may be used instead of lckdo, so you will be using code from the kernel developers.

Building on womble's example, you would write something like:

* * * * * flock -n /some/lockfile command_to_run_every_minute

BTW, looking at the code, all of flock, lockrun, and lckdo do the exact same thing, so it's just a matter of which is most readily available to you.


Solution 4:

You haven't specified whether you want the script to wait for the previous run to complete or not. From "I don't want the jobs to start 'stacking up' over each other", I take it that you want the script to exit if it is already running.

So, if you don't want to depend on lckdo or similar, you can do this:


#!/bin/bash

PIDFILE="/tmp/$(basename "$0").pid"

# If a PID file exists and that process is still alive, don't start another run
if [ -f "$PIDFILE" ]; then
  if ps -p "$(cat "$PIDFILE")" > /dev/null 2>&1; then
      echo "$0 already running!"
      exit 1
  fi
fi
echo $$ > "$PIDFILE"

# Clean up the PID file on exit or on common signals (KILL cannot be trapped)
trap 'rm -f "$PIDFILE" >/dev/null 2>&1' EXIT HUP INT QUIT TERM

# do the work
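
Note that this check-then-write approach has a small race window between testing the PID file and writing the new PID, and a stale file whose PID has since been reused by an unrelated process will block the job; the flock-based solutions above avoid both problems.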


Solution 5:

You can use a lock file: create it when the script starts and delete it when it finishes. Before running its main routine, the script should check whether the lock file exists and proceed accordingly.

Lockfiles are used by initscripts and by many other applications and utilities in Unix systems.
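
A minimal sketch of this approach in a shell script might look like the following (the lock file path is just an example); keep in mind that a plain check-then-create is not atomic and the lock file can go stale if the script is killed before it cleans up, which is why the flock-based solutions above are generally preferred:

#!/bin/sh

LOCKFILE=/tmp/my_cron_job.lock    # example path

# Refuse to run if a previous instance's lock file is still present
if [ -e "$LOCKFILE" ]; then
    echo "$LOCKFILE exists, previous run still in progress?" >&2
    exit 1
fi

# Create the lock file and make sure it is removed when the script exits
touch "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT

# ... main routine goes here ...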