Execute a function after Flask returns the response

Flask now supports (via Werkzeug) a call_on_close callback decorator on response objects. Here is how you use it:

@app.after_request
def response_processor(response):
    # Prepare all the local variables you need since the request context
    # will be gone in the callback function

    @response.call_on_close
    def process_after_request():
        # Do whatever is necessary here
        pass

    return response

Advantages:

  1. call_on_close registers functions to be called after the response is returned, using the close method defined by the WSGI spec.

  2. No threads, no background jobs, no complicated setup. It runs in the same thread without blocking the request from returning.

Disadvantages:

  1. No request context or app context. You have to save the variables you need and pass them into the closure.
  2. No context locals, since they have all been torn down by that point. If you need an app context, you have to push your own (a sketch follows this list).
  3. Writing to the database with Flask-SQLAlchemy does work, but the original session is gone by the time the callback runs; an existing object must be attached to a fresh session with session.add or session.merge.
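
For example, here is a minimal sketch of that workaround. It assumes a Flask-SQLAlchemy db, a hypothetical User model with a last_seen column, and a g.user loaded earlier in the request; it captures a plain value and re-loads the row in a fresh session (you could also session.merge a detached object):

from datetime import datetime
from flask import g

@app.after_request
def response_processor(response):
    # Capture plain values now; the request context is gone inside the callback.
    user_id = g.user.id

    @response.call_on_close
    def process_after_request():
        # Push a fresh app context, since the original one is torn down by this point.
        with app.app_context():
            user = db.session.get(User, user_id)  # re-load in the new session
            user.last_seen = datetime.utcnow()
            db.session.commit()

    return response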

Flask is a WSGI app, and as a result it fundamentally cannot handle anything after the response. This is why no such handler exists: the WSGI app itself is responsible only for constructing the response iterator object and handing it to the WSGI server.

A WSGI server however (like gunicorn) can very easily provide this functionality, but tying the application to the server is a very bad idea for a number of reasons.

For this exact reason, WSGI provides a spec for middleware, and Werkzeug provides a number of helpers to simplify common middleware functionality. Among them is the ClosingIterator class, which lets you hook callables up to the close method of the response iterator, executed after the response is closed.

Here's an example of a naive after_response implementation done as a Flask extension:

import traceback
from werkzeug.wsgi import ClosingIterator

class AfterResponse:
    def __init__(self, app=None):
        self.callbacks = []
        if app:
            self.init_app(app)

    def __call__(self, callback):
        self.callbacks.append(callback)
        return callback

    def init_app(self, app):
        # install extension
        app.after_response = self

        # install middleware
        app.wsgi_app = AfterResponseMiddleware(app.wsgi_app, self)

    def flush(self):
        for fn in self.callbacks:
            try:
                fn()
            except Exception:
                traceback.print_exc()

class AfterResponseMiddleware:
    def __init__(self, application, after_response_ext):
        self.application = application
        self.after_response_ext = after_response_ext

    def __call__(self, environ, start_response):
        iterator = self.application(environ, start_response)
        try:
            # Run the registered callbacks when the WSGI server closes the iterator.
            return ClosingIterator(iterator, [self.after_response_ext.flush])
        except Exception:
            traceback.print_exc()
            return iterator

You can use this extension like this:

import flask
app = flask.Flask("after_response")
AfterResponse(app)

@app.after_response
def say_hi():
    print("hi")

@app.route("/")
def home():
    return "Success!\n"

When you curl "/" you'll see the following in your logs:

127.0.0.1 - - [24/Jun/2018 19:30:48] "GET / HTTP/1.1" 200 -
hi

This solves the issue simply, without introducing threads (and the GIL contention that comes with them) or having to install and manage a task queue and client software.


Long story short: Flask does not provide any special capabilities to accomplish this. For simple one-off tasks, consider Python's multithreading as shown below. For more complex configurations, use a task queue like RQ or Celery.

Why?

It's important to understand the functions Flask provides and why they do not accomplish the intended goal. All of these are useful in other cases and are good reading, but don't help with background tasks.

Flask's after_request handler

Flask's after_request handler, as detailed in this pattern for deferred request callbacks and this snippet on attaching different functions per request, will pass the response to the callback function. The intended use case is to modify the response, such as to attach a cookie.

Thus the request will wait around for these handlers to finish executing, because the expectation is that the response itself will change as a result.
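
For example, a typical after_request handler (the cookie name here is just illustrative) receives the response and must return it before Flask can send anything to the client:

@app.after_request
def attach_cookie(response):
    # Flask waits for this to finish and sends the returned response.
    response.set_cookie('seen', '1')
    return response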

Flask's teardown_request handler

This is similar to after_request, but teardown_request doesn't receive the response object. So that means the request won't have to wait for it, right?

This seems like the solution, as this answer to a similar Stack Overflow question suggests. And since Flask's documentation explains that teardown callbacks are independent of the actual request and do not receive the request context, you'd have good reason to believe this.

Unfortunately, teardown_request is still synchronous; it just happens at a later point in Flask's request handling, when the response is no longer modifiable. Flask will still wait for teardown functions to complete before returning the response, as this list of Flask callbacks and errors shows.
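
As a quick illustration (the logging call is just a placeholder), a teardown handler receives only the exception, if any, and the worker still runs it before the response is handed back to the WSGI server:

@app.teardown_request
def log_teardown(error=None):
    # Runs when the request context is popped; the response can no longer
    # be changed here, and the worker is still occupied until this returns.
    app.logger.info("teardown ran, error=%r", error)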

Flask's streaming responses

Flask can stream responses by passing a generator to Response(), as this Stack Overflow answer to a similar question suggests.

With streaming, the client does begin receiving the response before the request concludes. However, the request still runs synchronously, so the worker handling the request is busy until the stream is finished.

This Flask pattern for streaming includes some documentation on using stream_with_context(), which is necessary if the generator needs access to the request context.
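
As a rough sketch (the generator and its chunks are made up), a streaming view looks like this; the client starts receiving output immediately, but the worker stays busy until the generator is exhausted:

from flask import Response, stream_with_context

@app.route('/stream')
def stream():
    def generate():
        yield 'begin\n'
        yield 'middle\n'
        yield 'end\n'

    # stream_with_context keeps the request context available inside the generator
    return Response(stream_with_context(generate()))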

So what's the solution?

Flask doesn't offer a solution to run functions in the background because this isn't Flask's responsibility.

In most cases, the best way to solve this problem is to use a task queue such as RQ or Celery. These manage tricky things like configuration, scheduling, and distributing workers for you. This is the most common answer to this type of question because it is the most correct, and it forces you to set things up in a way where you handle things like context correctly.
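
As a rough illustration of the task-queue approach (assuming redis-py and rq are installed, a Redis server is running locally, and send_welcome_email is a made-up job function), a minimal RQ setup looks something like this:

from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())  # connects to localhost:6379 by default

def send_welcome_email(user_id):
    # Runs later in a separate process started with `rq worker`, so it must
    # set up its own app/db access rather than relying on Flask's request context.
    ...

@app.route('/signup')
def signup():
    # Enqueue the job and return immediately; a worker picks it up.
    queue.enqueue(send_welcome_email, 42)
    return 'signed up'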

If you need to run a function in the background and don't want to set up a queue to manage this, you can use Python's built in threading or multiprocessing to spawn a background worker.

You can't access request or Flask's other context locals from background tasks, since the request will not be active there. Instead, pass the data you need from the view to the background thread when you create it.

from threading import Thread

from flask import request

@app.route('/start_task')
def start_task():
    def do_work(value):
        # do something that takes a long time
        import time
        time.sleep(value)

    # read request data here, while the request is still active
    thread = Thread(target=do_work, kwargs={'value': request.args.get('value', 20, type=int)})
    thread.start()
    return 'started'