How do I terminate all the subshell processes?

Here's a simpler solution -- just add the following line at the top of your script:

trap "kill 0" SIGINT

kill 0 sends the signal to every process in the current process group, including the script itself.
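
For instance, a minimal sketch (the busy-wait workers are just stand-ins) where one Ctrl-C tears down the script and all of its subshells at once:

#!/bin/bash
trap "kill 0" SIGINT

for i in 1 2 3; do
  ( while true; do sleep 1; done ) &   # placeholder background workers
done

wait   # Ctrl-C fires the trap; kill 0 takes out workers and script alike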


One way to kill the subshells, but not the script itself:

kill $(jobs -p)
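
A minimal sketch of using it from an EXIT trap, so the children are cleaned up however the script ends (the sleep commands are placeholders):

#!/bin/bash
trap 'kill $(jobs -p) 2>/dev/null' EXIT   # signals the children, not the script itself

sleep 100 &
sleep 100 &

# ... real work here; on exit, only the background jobs get SIGTERM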

Bit of a late answer, but for me, solutions like kill 0 or kill $(jobs -p) go too far: they kill all child processes, not just the one you care about.

If you just want to make sure that one specific child process (and its own children) is tidied up, a better solution is to kill by process group ID (PGID), using the sub-process's PID, like so:

set -m
./some_child_script.sh &
some_pid=$!

kill -- -${some_pid}

Firstly, set -m enables job control (if it isn't already enabled). This is important: without it, all commands, sub-shells etc. are assigned to the same process group as your parent script (unlike when you run the commands manually in a terminal), and kill will just give a "no such process" error. It needs to be called before you run the background command you wish to manage as a group (or just call it at the start of the script if you have several).
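
You can see the effect for yourself; this sketch prints the script's and the child's process group IDs, and with set -m the child's PGID equals its own PID:

#!/bin/bash
set -m
sleep 60 &   # with set -m, this child leads its own process group
echo "script PGID: $(ps -o pgid= -p $$)"
echo "child  PGID: $(ps -o pgid= -p $!)"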

Secondly, note that the argument to kill is negative; this indicates that you want to kill an entire process group. By default the process group ID is the PID of the first command in the group, so we can get it by simply prefixing a minus sign to the PID we fetched with $!. If you need the process group ID in a more complex case, use ps -o pgid= -p ${some_pid}, then prefix the minus sign to the result.
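
For example, a small sketch of that more complex case (ps -o pgid= is POSIX, but note its output is padded with spaces):

pgid=$(ps -o pgid= -p "${some_pid}" | tr -d ' ')   # look up the group, strip padding
kill -- "-${pgid}"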

Lastly, note the explicit end-of-options marker --. This is important, as otherwise the negative process group argument would be parsed as an option (a signal specification), and kill would complain that it doesn't have enough arguments. Strictly speaking you only need it when the process group argument is the first operand, since anything after a first non-option operand is no longer parsed as an option.
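
If you want to send a specific signal, it goes before the --, for example:

kill -TERM -- "-${some_pid}"   # SIGTERM to the whole group

# or, as a last resort (cannot be caught or ignored):
kill -KILL -- "-${some_pid}"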

Here is a simplified example of a background timeout process, and how to clean up after it as much as possible:

#!/bin/bash
# Use the overkill method in case we're terminated ourselves
trap 'kill $(jobs -p) 2>/dev/null' SIGINT SIGHUP SIGTERM EXIT

# Setup a simple timeout command (an echo)
set -m
{ sleep 3600; echo "Operation took longer than an hour"; } &
timeout_pid=$!

# Run our actual operation here
do_something

# Cancel our timeout
kill -- -${timeout_pid} >/dev/null 2>&1
wait ${timeout_pid} 2>/dev/null   # reap the killed job (bash's wait takes a PID, not a PGID)
printf ''                         # no-op; see below

This should cleanly handle cancelling this simplistic timeout in all reasonable cases; the only case that can't be handled is the script being killed outright (kill -9, i.e. SIGKILL), as it never gets a chance to clean up.

I've also added a wait, followed by a no-op (printf ''); these suppress the "Terminated" messages that the kill command can trigger. It's a bit of a hack, but it has been reliable enough in my experience.


You need to use job control, which, unfortunately, is a bit complicated. If these are the only background jobs that you expect will be running, you can run a command like this one:

jobs \
  | perl -ne 'print "$1\n" if m/^\[(\d+)\][+-]? +Running/;' \
  | while read -r ; do kill %"$REPLY" ; done

jobs prints a list of all active jobs (running jobs, plus recently finished or terminated jobs), in a format like this:

[1]   Running                 sleep 10 &
[2]   Running                 sleep 10 &
[3]   Running                 sleep 10 &
[4]   Running                 sleep 10 &
[5]   Running                 sleep 10 &
[6]   Running                 sleep 10 &
[7]   Running                 sleep 10 &
[8]   Running                 sleep 10 &
[9]-  Running                 sleep 10 &
[10]+  Running                 sleep 10 &

(Those are jobs that I launched by running for i in {1..10} ; do sleep 10 & done.)

perl -ne ... is me using Perl to extract the job numbers of the running jobs; you can obviously use a different tool if you prefer. You may need to modify this script if your jobs has a different output format; but the above output is from Cygwin as well, so it's very likely identical to yours.

read -r reads a "raw" line from standard input and saves it in the variable $REPLY. kill %"$REPLY" expands to something like kill %1, which kills (sends SIGTERM, by default, to) job number 1. (Not to be confused with kill 1, which would signal process number 1.) Together, while read -r ; do kill %"$REPLY" ; done goes through each job number printed by the Perl one-liner and kills it.
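
To make the jobspec-versus-PID distinction concrete:

sleep 100 &   # suppose this becomes job [1]
kill %1       # SIGTERM to job 1 -- safe
# kill 1      # would target PID 1 (init) instead; don't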

By the way, your for i in {1 .. $num} won't do what you expect, since brace expansion is performed before parameter expansion, so what you have is equivalent to for i in "{1" .. "$num}". (And brace expansion can't contain white-space anyway.) Unfortunately, I don't know of a clean one-liner; I think you have to do something like for i in $(bash -c "echo {1..$num}"), or else switch to an arithmetic for-loop or whatnot, as sketched below.
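
A couple of working alternatives, for what it's worth (the arithmetic loop is probably the cleanest):

num=10

# Arithmetic for-loop -- no brace expansion involved:
for (( i = 1; i <= num; i++ )); do
  sleep 10 &
done

# Or seq, where available:
for i in $(seq 1 "$num"); do
  sleep 10 &
done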

Also by the way, you don't need to wrap your while-loop in parentheses; & already causes the job to be run in a subshell.
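
That is, the trailing & on its own is enough (do_work is a hypothetical command):

while true; do
  do_work   # hypothetical
  sleep 1
done &   # the & runs the whole loop in a subshell; no ( ... ) needed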
