Run a bash command when a process is done

Here's an approach that doesn't involve looping to check whether the other process is still alive, and doesn't require launching train_v1.py any differently from normal:

$ python train_v1.py
^Z
[1]+  Stopped                 python train_v1.py
$ % && python train_v2.py

The ^Z is me pressing Ctrl+Z while the process is running, which suspends train_v1.py by sending it a SIGTSTP signal. Then I tell the shell to resume it with %, which works as a command, so I can append && python train_v2.py to it. This makes everything behave just as if you'd run python train_v1.py && python train_v2.py from the very beginning.

Instead of %, you can also use fg; it does the same thing. If you want to learn more about these shell features, you can read about them in the "JOB CONTROL" section of bash's manpage.

EDIT: How to keep adding to the queue

As pointed out by jamesdlin in a comment, if you try to continue the pattern to add, say, train_v3.py before v2 has started, you'll find that you can't:

$ % && python train_v2.py
^Z
[1]+  Stopped                 python train_v1.py

Only train_v1.py gets stopped because train_v2.py hasn't started, and you can't stop/suspend/sleep something that hasn't even started.

$ % && python train_v3.py

would therefore be equivalent to

python train_v1.py && python train_v3.py

because % refers to the most recently suspended job. Instead of trying to add v3 like that, use shell history:

$ !! && python train_v3.py
% && python train_v2.py && python train_v3.py

You can use history expansion as above, or recall the last command with a keybinding (like Up) and append v3 at the end.

$ % && python train_v2.py && python train_v3.py

This can be repeated to keep adding to the queue.

$ !! && python train_v3.py
% && python train_v2.py && python train_v3.py
^Z
[1]+  Stopped                 python train_v1.py
$ !! && python train_v4.py
% && python train_v2.py && python train_v3.py && python train_v4.py

If you have already started python train_v1.py, you could possibly use pgrep to poll that process until it disappears, and then run your second Python script:

while pgrep -u "$USER" -fx 'python train_v1.py' >/dev/null
do
    # sleep for a minute
    sleep 60
done
python train_v2.py

By using -f and -x you match against the exact command line that was used to launch the first Python script. On some systems, pgrep implements a -q option, which makes it quiet (just like grep -q), so the redirection to /dev/null wouldn't be needed.

The -u option restricts the match to commands that you are running (and not a friend or other person on the same system).
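A quick way to see what -f and -x change, again with a sleep as a stand-in for the training script (flag behavior as in the common procps pgrep):

```shell
sleep 300 &                       # stand-in for python train_v1.py
pid=$!
pgrep -u "$USER" -x 'sleep'       # matches: -x compares the process name exactly
pgrep -u "$USER" -fx 'sleep 300'  # matches: -f compares the full command line
pgrep -u "$USER" -fx 'sleep' \
    || echo "no match: the full command line is 'sleep 300'"
kill "$pid"                       # clean up the stand-in
```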

If you haven't started the first script yet:

As mentioned in comments, you could just launch the second script straight after the first script. The fact that the second script does not exist, or isn't quite ready to run yet, does not matter (as long as it is ready to run when the first script finishes):

python train_v1.py; python train_v2.py

Doing it this way will launch the second script regardless of the exit status of the first script. Using && instead of ;, as you show in the question, will also work, but will require the first script to finish successfully for the second script to start.
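The difference is easy to see with false standing in for a first script that fails:

```shell
false; echo "after ;"       # prints: ';' ignores the exit status
false && echo "after &&"    # prints nothing: '&&' requires success
true  && echo "after &&"    # prints: the first command succeeded
```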


You can launch the first script with

python train_v1.py; touch finished

Then simply make a loop that regularly checks whether finished exists:

while [ ! -f finished ]; do
    sleep 5
done
python train_v2.py
rm finished
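Put together, the whole pattern looks like this; sleep 2 stands in for train_v1.py, and the sentinel file is kept in a temporary directory to avoid colliding with an unrelated finished file:

```shell
dir=$(mktemp -d)
( sleep 2; touch "$dir/finished" ) &  # stand-in for: python train_v1.py; touch finished

while [ ! -f "$dir/finished" ]; do
    sleep 1                           # poll once a second
done
echo "first job done"                 # here you would run: python train_v2.py
rm -r "$dir"                          # removes the sentinel along with the directory
```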