PostgreSQL - dump each table into a different file

If you are happy to hard-code the list of tables, but just want each to be in a different file, you could use a shell script loop to run the pg_dump command multiple times, substituting in the table name each time round the loop:

for table in table1 table2 table3 etc;
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;

EDIT: This approach can be extended to get the list of tables dynamically by running a query through psql and feeding the results into the loop instead of a hard-coded list:

for table in $(psql -U userName -d dbName -t -c "SELECT table_name FROM information_schema.tables WHERE table_type='BASE TABLE' AND table_name LIKE 'thr_%'");
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;

Here psql -t -c "SQL" runs SQL and outputs the results with no header or footer; since there is only one column selected, there will be a table name on each line of the output captured by $(command), and your shell will loop through them one at a time.
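For example, the raw output of such a query looks like this (the table names here are hypothetical), one bare name per line, which $(command) then word-splits for the loop:

$ psql -U userName -d dbName -t -c "SELECT table_name FROM information_schema.tables WHERE table_name LIKE 'thr_%'"
 thr_users
 thr_orders
 thr_payments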


(Not enough reputation to comment on the right post.) I used your script with some corrections and some modifications for my own use; it may be useful for others:

#!/bin/bash

# Config:
DB=rezopilotdatabase
U=postgres
# table name search pattern; leave empty ("") to dump all tables:
P=""
# directory to dump files into, without trailing slash:
DIR=~/psql_db_dump_dir

mkdir -p $DIR
TABLES="$(psql -d $DB -U $U -t -c "SELECT table_name FROM 
information_schema.tables WHERE table_type='BASE TABLE' AND table_name 
LIKE '%$P%' ORDER BY table_name")"
for table in $TABLES; do
  echo backup $table ...
  pg_dump $DB -U $U -w -t $table > $DIR/$table.sql;
done;
echo done

(I think you forgot to add $DB to the pg_dump command, and I added a -w: for an automated script it is better not to have a password prompt. For that, I created a ~/.pgpass file with my password in it; I also give the user on the command line so pg_dump knows which password to fetch from .pgpass.) Hope this helps someone someday.
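For reference, ~/.pgpass takes one hostname:port:database:username:password entry per line (the values below are placeholders) and must only be readable by its owner:

localhost:5432:rezopilotdatabase:postgres:yourPasswordHere

chmod 0600 ~/.pgpass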


This bash script will do a backup with one file per table:

#!/bin/bash

# Config:
DB=dbName
U=userName
# table name search pattern; leave empty ("") to dump all tables:
P=""
# directory to dump files into, without trailing slash:
DIR=~/psql_db_dump_dir

mkdir -p $DIR
AUTH="-d $DB -U $U"
TABLES="$(psql $AUTH -t -c "SELECT table_name FROM information_schema.tables WHERE table_type='BASE TABLE' AND table_name LIKE '%$P%' ORDER BY table_name")"
for table in $TABLES; do
  echo backup $table ...
  pg_dump $AUTH -t $table > $DIR/$table.sql;
done;
echo done
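Since each output file is plain SQL, restoring a single table later is just a matter of feeding its file back through psql (the table name below is a placeholder):

psql -U userName -d dbName -f ~/psql_db_dump_dir/tableName.sql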

Since version 9.1 of PostgreSQL (Sept. 2011), one can use the directory format output when doing backups,

and two versions/two years later (PostgreSQL 9.3), the --jobs/-j option makes it even more efficient by backing up every single object in parallel.

But what I don't understand in your original question is that you use the -s option, which dumps only the object definitions (schema), not the data.

If you want the data, you should not use -s; use -a (data only) instead, or no option at all to get schema plus data.
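For example, here is the same dump taken three ways (same database and user as above; the output file names are just illustrative):

pg_dump -s -U userName dbName > dbName_schema.sql
pg_dump -a -U userName dbName > dbName_data.sql
pg_dump -U userName dbName > dbName_full.sql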

So, to back up all objects (tables, ...) whose names begin with 'thr_' in the database dbName, into the directory dbName_objects/, with 10 concurrent jobs/processes (this increases the load on the server):

pg_dump -Fd -f dbName_objects -j 10 -t 'thr_*' -U userName dbName

(you can also add -a/-s if you want only the data or only the schema of the objects)

As a result, the directory will be populated with a toc.dat file (the table of contents of all the objects) and one compressed file per object (.dat.gz).
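For illustration, listing the directory might look like this (the object numbers here are hypothetical):

$ ls dbName_objects/
3011.dat.gz  3012.dat.gz  3013.dat.gz  toc.dat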

Each file is named after its object number, and you can retrieve the list with the following pg_restore command:

pg_restore --list -Fd dbName_objects/ | grep 'TABLE DATA'
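From the same directory dump you can also restore a single table by name with pg_restore's -t option (the table name below is a placeholder):

pg_restore -Fd -t thr_sometable -d dbName -U userName dbName_objects/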

In order to have each file uncompressed (in raw SQL):

pg_dump --data-only --compress=0 --format=directory --file=dbName_objects --jobs=10 --table='thr_*' --username=userName --dbname=dbName
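A directory-format dump can likewise be restored in parallel, since pg_restore also accepts a --jobs option:

pg_restore --format=directory --jobs=10 --dbname=dbName --username=userName dbName_objects/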