How to run a Python 3 function even after the user has closed the web browser/tab?

Normally you'd create a task and return to the user an ID they can use to poll the status of said task.

Then you'd process the task in another container/process/thread.

Celery is a Python library that can help you set this up.

Another common solution is to use a publisher/subscriber design with a distributed queue such as Kafka, RabbitMQ, or even Redis.

In fact, Celery can use RabbitMQ or Redis as its message broker.
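
To make the pattern concrete, here is a minimal Celery sketch (the module name, the task name, and the local Redis broker/backend are all assumptions for illustration):

```python
# tasks.py -- minimal sketch; assumes a Redis server on localhost:6379
from celery import Celery

app = Celery("tasks",
             broker="redis://localhost:6379/0",    # where tasks are queued
             backend="redis://localhost:6379/0")   # where results/states are stored

@app.task
def long_running_job(payload):
    # ... the heavy work goes here ...
    return {"processed": payload}
```

Run a worker with `celery -A tasks worker`; then, from your request handler, you enqueue the job and hand its ID back to the user:

```python
from tasks import app, long_running_job

result = long_running_job.delay("some payload")  # returns immediately
task_id = result.id                              # give this ID to the user

# Later (even from another process), poll the status with that ID:
from celery.result import AsyncResult
state = AsyncResult(task_id, app=app).state      # e.g. PENDING, STARTED, SUCCESS
```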


You need to handle this task asynchronously because it's a long-running job that would dramatically slow down the HTTP response if you waited until it finished.

Also, note that you need to run this task in a process separate from the one serving your HTTP request. Web servers (Gunicorn, uWSGI, etc.) recycle the worker processes they spawn and free system resources whenever they need to, so an async job launched via Ajax can easily be interrupted and killed by the web server once the browser is closed (request closed). That's why threading and coroutines are not the best tools for this job.

This is why there are some cool task queue projects that solve your problem. We may note:

  • Celery: a production-ready task queue with a focus on real-time processing that also supports task scheduling. It works well with Redis and RabbitMQ as message brokers.
  • RQ (Redis Queue): a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis, designed to have a low barrier to entry, and can be integrated into your web stack easily (see the short sketch after this list).
  • Taskmaster: a simple distributed queue designed for handling large numbers of one-off tasks.
  • Huey: a Redis-based task queue that aims to provide a simple, yet flexible framework for executing tasks. Huey supports task scheduling, crontab-like repeating tasks, result storage, and automatic retry in the event of failure.
  • Dramatiq: a fast and reliable alternative to Celery. It supports RabbitMQ and Redis as message brokers.
  • APScheduler: the Advanced Python Scheduler is a Python library that lets you schedule your Python code to be executed later, either just once or periodically.

And many more!
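
For example, an RQ version of the same idea could look like this (a sketch only; `count_words` and the module layout are made up, and a local Redis instance is assumed):

```python
# jobs.py -- the job function must live in a module the worker can import
def count_words(text):
    return len(text.split())
```

```python
# In your web code: enqueue the job and keep its ID for status polling.
from redis import Redis
from rq import Queue
from jobs import count_words

q = Queue(connection=Redis())                   # default local Redis
job = q.enqueue(count_words, "some long document ...")
print(job.id)                                   # hand this ID back to the user

# A worker started separately with `rq worker` picks the job up,
# and job.get_status() reports queued/started/finished/failed.
```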

And with the rise of microservices, you can combine the power of task queues and containers: you can build a separate container (or several) that handles your long-running tasks and updates your database(s), as in your current case. And if you can't use a microservices architecture yet, you can build a separate server that handles those tasks and keep the web server that handles user requests free from long-running work.

Finally, you can combine these solutions in your current website, for example with this scenario (a rough code sketch follows the list):

  • The user clicks a button.
  • An Ajax request triggers your backend (via an API or whatever).
  • You schedule a task on your message broker to run now or later (in a separate container, VPS, ...).
  • Your backend retrieves the task's ID.
  • You return the task ID (via the API or whatever) and store it in the session cookies or in a separate table tied to the user who launched the process.
  • With some JS you keep requesting the status of the task from your backend, using the task ID you kept (in the user's session cookies or in your database).
  • Even if the user closes their browser, the task keeps running until it finishes or raises an exception. And with the task ID you already have, you can easily check the status of this task and send that information to the user (in a view when they log in again, by email, etc.).
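
As a rough sketch of that flow (Flask, the endpoint names, and the `long_running_job` task from the earlier Celery example are all assumptions, not the only way to do it):

```python
# app.py -- hypothetical glue between the Ajax calls and the task queue
from flask import Flask, jsonify
from celery.result import AsyncResult
from tasks import app as celery_app, long_running_job

flask_app = Flask(__name__)

@flask_app.route("/jobs", methods=["POST"])
def start_job():
    result = long_running_job.delay("payload")   # schedule on the message broker
    # Persist result.id in the session or in a table keyed by the user here.
    return jsonify({"task_id": result.id})

@flask_app.route("/jobs/<task_id>")
def job_status(task_id):
    state = AsyncResult(task_id, app=celery_app).state
    return jsonify({"state": state})             # the JS keeps polling this endpoint
```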

And sure, you can improve this scenario!