Long-running ssh commands in python paramiko module (and how to end them)

1) You can just close the client if you wish. The server on the other end will kill the tail process.
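
For example, a minimal sketch (the host name and log path are placeholders; adjust them for your environment): read whatever output you need, then close the client and the remote tail goes away with the session:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')

channel = client.get_transport().open_session()
channel.exec_command("tail -f /var/log/everything/current")

print(channel.recv(1024).decode())  # read as much output as you need...

client.close()  # ...then drop the connection; the server ends the tail process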

2) If you need to do this in a non-blocking way, you will have to use the channel object directly. You can then watch for both stdout and stderr with channel.recv_ready() and channel.recv_stderr_ready(), or use select.select.
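
A rough sketch of that polling approach, with the same placeholder host and command as the other examples (the sleep interval is arbitrary):

import time
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')

channel = client.get_transport().open_session()
channel.exec_command("tail -f /var/log/everything/current")

while not channel.exit_status_ready():
    # Drain stdout and stderr without ever blocking.
    if channel.recv_ready():
        print(channel.recv(1024).decode())
    if channel.recv_stderr_ready():
        print(channel.recv_stderr(1024).decode())
    time.sleep(0.1)  # brief pause so the loop doesn't spin at full CPU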


Just a small update to the solution by Andrew Aylett. The following code actually breaks the loop and quits when the external process finishes:

import paramiko
import select

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
channel = client.get_transport().open_session()
channel.exec_command("tail -f /var/log/everything/current")
while True:
    if channel.exit_status_ready():
        break
    rl, wl, xl = select.select([channel], [], [], 0.0)
    if len(rl) > 0:
        print(channel.recv(1024).decode())

Instead of calling exec_command on the client, get hold of the transport and open your own channel. The channel can be used to execute a command, and you can pass it to select.select() to find out when data is ready to be read:

#!/usr/bin/env python
import paramiko
import select
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
transport = client.get_transport()
channel = transport.open_session()
channel.exec_command("tail -f /var/log/everything/current")
while True:
    rl, wl, xl = select.select([channel], [], [], 0.0)
    if len(rl) > 0:
        # Must be stdout
        print(channel.recv(1024).decode())

The channel object can be read from and written to, connecting with stdout and stdin of the remote command. You can get at stderr by calling channel.makefile_stderr(...).
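
As an illustration of the stderr side, here is a sketch; the command is chosen only because it reliably writes to stderr, and the host is again a placeholder:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')

channel = client.get_transport().open_session()
channel.exec_command("ls /nonexistent")  # illustrative command that produces stderr output

stdout = channel.makefile('r')          # file-like view of the remote command's stdout
stderr = channel.makefile_stderr('r')   # file-like view of the remote command's stderr

print("stdout:", stdout.read().decode())
print("stderr:", stderr.read().decode())

client.close()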

I've set the select timeout to 0.0 seconds in the loop above because a non-blocking solution was requested. Depending on your needs, you might want to block with a non-zero timeout.
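
For instance, reusing the channel and the select import from the tail -f example above, blocking for up to a second per iteration (the exact value is arbitrary) only changes the timeout argument:

while True:
    # Same loop as before, but wait up to one second for data instead of returning immediately.
    rl, wl, xl = select.select([channel], [], [], 1.0)
    if len(rl) > 0:
        print(channel.recv(1024).decode())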