Better approach to handling SQLAlchemy disconnects

Using exponential backoff (https://github.com/litl/backoff):

import logging

import backoff
import sqlalchemy

@backoff.on_exception(
    backoff.expo,
    sqlalchemy.exc.DBAPIError,
    factor=7,
    max_tries=3,
    on_backoff=lambda details: LocalSession.get_main_sql_session().rollback(),
    on_giveup=lambda details: LocalSession.get_main_sql_session().flush(),
    logger=logging.getLogger(__name__),
)
def pessimistic_insertion(document_metadata):
    LocalSession.get_main_sql_session().add(document_metadata)
    LocalSession.get_main_sql_session().commit()

This assumes that LocalSession.get_main_sql_session() returns a singleton session.
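Note that `LocalSession` is not part of SQLAlchemy; it stands for whatever wrapper you use to hand out one shared session. A minimal sketch of such a wrapper, assuming a `scoped_session` behind it (the class name, method name, and the `sqlite://` URL are all illustrative):

```python
# Hypothetical LocalSession wrapper; only scoped_session/sessionmaker
# are real SQLAlchemy APIs, the rest is illustrative.
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

class LocalSession:
    _session_factory = None

    @classmethod
    def get_main_sql_session(cls):
        # Build the scoped_session registry once; every later call
        # returns the same thread-local Session object.
        if cls._session_factory is None:
            engine = create_engine("sqlite://")  # stand-in connection URL
            cls._session_factory = scoped_session(sessionmaker(bind=engine))
        return cls._session_factory()
```

Because `scoped_session` hands back the same session per thread, the `on_backoff` and `on_giveup` callbacks above operate on the exact session the decorated function used.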


The way I like to approach this is to place all my database code in a lambda or closure, and pass it into a helper function that handles catching the disconnect exception and retrying.

So with your example:

import sqlalchemy as SA

def main():
    def query():
        for name in session.query(Names):
            print(name)

    run_query(query)

def run_query(f, attempts=2):
    # Assumes `session` is available at module level, as in the example above.
    while attempts > 0:
        attempts -= 1
        try:
            return f()  # "break" if the query was successful and return any results
        except SA.exc.DBAPIError as exc:
            if attempts > 0 and exc.connection_invalidated:
                session.rollback()
            else:
                raise

You can make this fancier by passing a boolean into run_query for the case where you are only doing a read, and therefore want to retry without rolling back.
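One way that flag could look (a sketch, not the original helper: the `read_only` parameter is the addition, and the session is passed in explicitly here just to keep the example self-contained):

```python
import sqlalchemy as SA

def run_query(session, f, attempts=2, read_only=False):
    """Run f(), retrying once if the connection was invalidated.

    read_only is the hypothetical extension: a pure read leaves no
    pending state behind, so the retry can skip the rollback.
    """
    while attempts > 0:
        attempts -= 1
        try:
            return f()
        except SA.exc.DBAPIError as exc:
            if attempts > 0 and exc.connection_invalidated:
                if not read_only:
                    # Discard the failed write before retrying.
                    session.rollback()
            else:
                raise
```

Writes still roll back before the retry; reads simply loop around and run `f()` again on the freshly re-established connection.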

This helps you satisfy the DRY principle, since all the ugly boilerplate for managing retries and rollbacks lives in one place.