How to open and convert sqlite database to pandas dataframe

Create a SQLAlchemy engine using the connection string for your database (SQLite in this case) and pass it to pandas:

import pandas as pd
import sqlalchemy

db_name = "data.db"
table_name = "LITTLE_BOBBY_TABLES"

# sqlite_raw_colnames=True keeps column names exactly as SQLite returns them
engine = sqlalchemy.create_engine(f"sqlite:///{db_name}", execution_options={"sqlite_raw_colnames": True})
df = pd.read_sql_table(table_name, engine)
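
If you do not know the table name up front, SQLAlchemy's inspector can list the tables first, without leaving the engine-based approach (a minimal sketch reusing the engine above):

inspector = sqlalchemy.inspect(engine)
print(inspector.get_table_names())  # e.g. ['LITTLE_BOBBY_TABLES']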

To parse a SQLite .db file into a dictionary of DataFrames without knowing the table names in advance:

def read_sqlite(dbfile):
    import sqlite3
    from pandas import read_sql_query

    with sqlite3.connect(dbfile) as dbcon:
        # List every table name, then read each table into its own DataFrame
        tables = list(read_sql_query("SELECT name FROM sqlite_master WHERE type='table';", dbcon)['name'])
        out = {tbl: read_sql_query(f"SELECT * FROM {tbl}", dbcon) for tbl in tables}

    return out
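
Usage, assuming the database file is called data.db (the filename here is just a placeholder):

dfs = read_sqlite("data.db")
for name, df in dfs.items():
    print(name, df.shape)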

The line

data = sqlite3.connect('data.db')

opens a connection to the database. No records have been queried at this point, so you have to execute a query afterwards and pass the result to the pandas DataFrame constructor.

It should look similar to this:

import sqlite3
import pandas as pd

dat = sqlite3.connect('data.db')
query = dat.execute("SELECT * FROM <TABLENAME>")
# The cursor's description attribute holds the column names
cols = [column[0] for column in query.description]
results = pd.DataFrame.from_records(data=query.fetchall(), columns=cols)

I am not very familiar with SQL commands, so you should check the correctness of the query. <TABLENAME> should be the name of the table in your database.
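
The snippet above also leaves the connection open; if you want it closed automatically, one option is to wrap it in contextlib.closing (a minimal sketch reusing the names above, with <TABLENAME> still a placeholder):

import sqlite3
from contextlib import closing
import pandas as pd

# closing() guarantees dat.close() runs when the block exits
with closing(sqlite3.connect('data.db')) as dat:
    query = dat.execute("SELECT * FROM <TABLENAME>")
    cols = [column[0] for column in query.description]
    results = pd.DataFrame.from_records(data=query.fetchall(), columns=cols)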


Although sqlite3 is part of the Python standard library and provides a nice, easy interface to SQLite databases, the pandas tutorial states:

Note: In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed.

But pandas still supports querying through a plain sqlite3 connection if you want to avoid installing SQLAlchemy:

import sqlite3
import pandas as pd
# Create your connection.
cnx = sqlite3.connect('file.db')

df = pd.read_sql_query("SELECT * FROM table_name", cnx)

Note that with this approach you need to know the name of the table in advance.
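
If the query also needs values from Python, it is safer to pass them through the params argument than to format them into the SQL string yourself (a minimal sketch; the column name and value are assumptions):

df = pd.read_sql_query("SELECT * FROM table_name WHERE some_column > ?", cnx, params=(10,))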