Avoiding "MySQL server has gone away" on infrequently used Python / Flask server with SQLAlchemy

I've had trouble with this before, and found that the way to handle it is by not keeping sessions around. The problem is that you are trying to keep a connection open for far too long. Instead, use a thread-local scoped session, either in __init__.py or in a utility package that you import everywhere:

from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session(sessionmaker())

Then set up your engine and metadata once, so you can skip the configuration mechanics every time you connect and disconnect.
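A minimal sketch of that one-time setup, assuming a placeholder connection URL, might look like this:

from sqlalchemy import create_engine

# One-time setup: create the engine and bind the scoped_session factory to it.
engine = create_engine("mysql+pymysql://user:pw@host/db")  # placeholder URL
Session.configure(bind=engine)  # Session is the scoped_session created above

After that, you can do your db work like this: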

session = Session()
someObject = session.query(someMappedClass).get(someId)
# use session like normal ...
session.close()

If you want to hold on to old objects and you don't want to leave your session open, then you can use the above pattern and reuse old objects like this:

session = Session()
someObject = session.merge(someObject)
# more db stuff
session.close()

The point is to open your session, do your work, then close the session; this avoids timeouts very well. There are several options for .merge and .add that let you either carry over changes you've made to detached objects or load fresh data from the db. The docs are verbose, but once you know what you're looking for, the relevant options are easier to find.
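For instance, merge accepts a load flag; a small sketch, assuming someObject is a clean instance loaded in an earlier session:

session = Session()
# load=True (the default) reconciles the detached object with the current db row;
# load=False skips the SELECT and trusts the detached state as-is, which only
# works if the object is unmodified since it was last flushed.
someObject = session.merge(someObject, load=False)
session.close()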

To actually get all the way there and prevent MySQL from "going away", you also need to stop the connection pool from keeping connections open too long and handing you a stale one.

To get a fresh connection, you can set the pool_recycle option in your create_engine call. Set pool_recycle to the maximum number of seconds a connection may sit in the pool; at checkout, any connection older than that is replaced with a new one instead of being reused.
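For example, a sketch with a placeholder connection URL and an illustrative recycle threshold:

from sqlalchemy import create_engine

# Recycle any pooled connection older than 280 seconds instead of reusing it;
# pick a value shorter than your server's idle-connection timeout.
engine = create_engine("mysql+pymysql://user:pw@host/db", pool_recycle=280)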


I had a similar issue, but for me the 'MySQL has gone away' error appeared somewhere between 5 minutes and 2 hours into each session.

I'm using Flask-SQLAlchemy, which is supposed to close idle connections, but it didn't seem to be doing that unless the connection had been idle for over a couple of hours.

Eventually I narrowed it down to the following Flask-SQLAlchemy settings:

app.config['SQLALCHEMY_POOL_SIZE'] = 100
app.config['SQLALCHEMY_POOL_RECYCLE'] = 280

The default settings for these are 10 and 7200 (2 hours) respectively.

It's a matter of playing around with these settings to fit your environment.

For example, I'd read in many places that SQLALCHEMY_POOL_RECYCLE should be set to 3600, but that didn't work for me. I'm hosting with PythonAnywhere and they kill idle MySQL connections after 5 minutes (300 seconds). So setting my value to less than 300 solved the problem.
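Putting it together, a minimal sketch of where these settings live (the database URI is a placeholder):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://user:pw@host/db'  # placeholder
app.config['SQLALCHEMY_POOL_SIZE'] = 100
app.config['SQLALCHEMY_POOL_RECYCLE'] = 280  # below the host's 300-second idle timeout

db = SQLAlchemy(app)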

I hope this helps others, because I wasted WAY too much time on this issue.

http://flask-sqlalchemy.pocoo.org/2.1/config/#configuration-keys

UPDATE: 2019-OCT-08

The configuration keys 'SQLALCHEMY_POOL_SIZE' and 'SQLALCHEMY_POOL_RECYCLE' are deprecated as of v2.4 and will be removed in v3.0 of Flask-SQLAlchemy. Use 'SQLALCHEMY_ENGINE_OPTIONS' to set the corresponding values.

app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_size': 100, 'pool_recycle': 280}

2018 answer: In SQLAlchemy v1.2.0+, you have the connection pool pre-ping feature available to address this issue of "MySQL server has gone away".

Connection pool pre-ping - The connection pool now includes an optional "pre ping" feature that will test the "liveness" of a pooled connection for every connection checkout, transparently recycling the DBAPI connection if the database is disconnected. This feature eliminates the need for the "pool recycle" flag as well as the issue of errors raised when a pooled connection is used after a database restart.

Pessimistic testing of connections upon checkout is possible with the new argument:

engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)