Save PL/pgSQL output from PostgreSQL to a CSV file

There are several solutions:

1. psql command

psql -d dbname -t -A -F"," -c "select * from users" > output.csv

This has the big advantage that you can use it via SSH, like ssh postgres@host command, enabling you to retrieve the CSV from a remote server in a single step.
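For example (the host and database names are placeholders):

ssh postgres@db.example.com 'psql -d dbname -t -A -F"," -c "select * from users"' > output.csv

The query runs on the remote server, but the CSV lands on the machine where you typed the ssh command.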

2. PostgreSQL COPY command

COPY (SELECT * FROM users) TO '/tmp/output.csv' WITH CSV;

3. psql interactive (or not)

>psql dbname
psql>\f ','
psql>\a
psql>\o '/tmp/output.csv'
psql>SELECT * from users;
psql>\q

All of them can be used in scripts, but I prefer #1.
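For instance, #1 drops straight into a script. A minimal sketch (the database name is a placeholder, and the table name is assumed trusted, since it's interpolated into the SQL):

#!/bin/sh
# Usage: ./dump_table.sh tablename
# Only pass trusted table names: $1 is spliced directly into the query.
TABLE="$1"
psql -d dbname -t -A -F"," -c "select * from ${TABLE}" > "${TABLE}.csv"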

4. pgAdmin, but that's not scriptable.


CSV Export Unification

This information isn't well represented elsewhere. As this is the second time I've needed to derive it, I'll put it here to remind myself if nothing else.

Really, the best way to do this (get CSV out of Postgres) is to use the COPY ... TO STDOUT command - though not quite the way shown in some of the answers here. The correct way to use the command is:

COPY (select id, name from groups) TO STDOUT WITH CSV HEADER

Remember just one command!

It's great for use over ssh:

$ ssh psqlserver.example.com 'psql -d mydb -c "COPY (select id, name from groups) TO STDOUT WITH CSV HEADER"' > groups.csv

It's great for use inside docker over ssh:

$ ssh pgserver.example.com 'docker exec -tu postgres postgres psql -d mydb -c "COPY groups TO STDOUT WITH CSV HEADER"' > groups.csv

It's even great on the local machine:

$ psql -d mydb -c 'COPY groups TO STDOUT WITH CSV HEADER' > groups.csv

Or inside docker on the local machine?:

docker exec -tu postgres postgres psql -d mydb -c 'COPY groups TO STDOUT WITH CSV HEADER' > groups.csv

Or on a kubernetes cluster, in docker, over HTTPS??:

kubectl exec -t postgres-2592991581-ws2td -- psql -d mydb -c "COPY groups TO STDOUT WITH CSV HEADER" > groups.csv

So versatile, much commas!

Do you even?

Yes I did, here are my notes:

The COPYses

Using \copy effectively executes file operations on whatever system the psql command is running on, as the user executing it. If you connect to a remote server, it's simple to copy data files on the system executing psql to/from the remote server.
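For example (host name is a placeholder): even while connected to a remote server, \copy writes to the disk of the machine running psql:

$ psql -h pgserver.example.com -d mydb -c "\copy groups TO 'groups.csv' WITH CSV HEADER"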

COPY executes file operations on the server as the backend process user account (default postgres); file paths and permissions are checked and applied accordingly. If using TO STDOUT, file permission checks are bypassed.
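Contrast with \copy above: psql can run anywhere, but the file appears on the server (host name is a placeholder, and you need sufficient privileges for server-side COPY):

$ psql -h pgserver.example.com -d mydb -c "COPY groups TO '/tmp/groups.csv' WITH CSV HEADER"

Here /tmp/groups.csv is created on pgserver.example.com by the backend process, not on the machine running psql.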

Both of these options require subsequent file movement if psql is not executing on the system where you want the resultant CSV to ultimately reside. This is the most likely case, in my experience, when you mostly work with remote servers.

It is more complex to configure something like a TCP/IP tunnel over ssh to a remote system just for simple CSV output, but for other output formats (binary) it may be better to \copy over a tunneled connection, executing a local psql. In a similar vein, for large imports, moving the source file to the server and using COPY is probably the highest-performance option.
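A sketch of that tunnel setup (host, ports, and user are placeholders):

# Forward local port 5433 to Postgres on the remote box.
$ ssh -N -L 5433:localhost:5432 user@pgserver.example.com &

# psql runs locally, so \copy writes the (binary) dump to the local disk.
$ psql -h localhost -p 5433 -d mydb -c "\copy groups TO 'groups.bin' WITH (FORMAT binary)"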

PSQL Parameters

With psql parameters you can format the output like CSV, but there are downsides, like having to remember to disable the pager and not getting headers:

$ psql -P pager=off -d mydb -t -A -F',' -c 'select * from groups;'
2,Technician,Test 2,,,t,,0,,
3,Truck,1,2017-10-02,,t,,0,,
4,Truck,2,2017-10-02,,t,,0,,
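If memory serves, psql 12 and later also has a --csv switch that takes care of the formatting (headers included) in one go:

$ psql --csv -d mydb -c 'select * from groups;' > groups.csv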

Other Tools

No, I just want to get CSV out of my server without compiling and/or installing a tool.


Do you want the resulting file on the server, or on the client?

Server side

If you want something easy to re-use or automate, you can use PostgreSQL's built-in COPY command. e.g.

COPY (SELECT * FROM foo) TO '/tmp/test.csv' WITH CSV DELIMITER ',' HEADER;

This approach runs entirely on the remote server - it can't write to your local PC. It also needs to be run as a Postgres "superuser" (normally called "postgres") because Postgres can't stop it doing nasty things with that machine's local filesystem.

That doesn't actually mean you have to be connected as a superuser (automating that would be a security risk of a different kind), because you can use the SECURITY DEFINER option to CREATE FUNCTION to make a function which runs as though you were a superuser.

The crucial part is that your function is there to perform additional checks, not just bypass the security - so you could write a function which exports the exact data you need, or you could write something which accepts various options as long as they meet a strict whitelist (there's a sketch after this list). You need to check two things:

  1. Which files should the user be allowed to read/write on disk? This might be a particular directory, for instance, and the filename might have to have a suitable prefix or extension.
  2. Which tables should the user be able to read/write in the database? This would normally be defined by GRANTs in the database, but the function is now running as a superuser, so tables which would normally be "out of bounds" will be fully accessible. You probably don’t want to let someone invoke your function and add rows on the end of your “users” table…
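Here's a minimal sketch of that SECURITY DEFINER idea; the function name, output path, and role are placeholders, and a real version would add the checks described above:

CREATE OR REPLACE FUNCTION export_groups_csv()
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = public  -- pin name resolution so callers can't hijack it
AS $$
BEGIN
  -- Fixed query and fixed path: the caller controls neither, which is
  -- what makes this safe to expose to non-superusers.
  COPY (SELECT id, name FROM groups)
    TO '/var/lib/postgresql/exports/groups.csv'
    WITH (FORMAT csv, HEADER);
END;
$$;

-- EXECUTE is granted to PUBLIC by default on new functions, so lock it down:
REVOKE EXECUTE ON FUNCTION export_groups_csv() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION export_groups_csv() TO reporting_role;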

I've written a blog post expanding on this approach, including some examples of functions that export (or import) files and tables meeting strict conditions.


Client side

The other approach is to do the file handling on the client side, i.e. in your application or script. The Postgres server doesn't need to know what file you're copying to, it just spits out the data and the client puts it somewhere.

The underlying syntax for this is the COPY TO STDOUT command, and graphical tools like pgAdmin will wrap it for you in a nice dialog.

The psql command-line client has a special "meta-command" called \copy, which takes all the same options as the "real" COPY, but is run inside the client:

\copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER

Note that there is no terminating ;, because meta-commands are terminated by newline, unlike SQL commands.

From the docs:

Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.

Your application programming language may also have support for pushing or fetching the data, but you cannot generally use COPY FROM STDIN/TO STDOUT within a standard SQL statement, because there is no way of connecting the input/output stream. PHP's PostgreSQL handler (not PDO) includes very basic pg_copy_from and pg_copy_to functions which copy to/from a PHP array, which may not be efficient for large data sets.


In the terminal (while connected to the db), set the output to the CSV file:

1) Set the field separator to ',':

\f ','

2) Set output format unaligned:

\a

3) Show only tuples:

\t

4) Set the output file:

\o '/tmp/yourOutputFile.csv'

5) Execute your query:

select * from YOUR_TABLE;

6) Reset the output back to the terminal:

\o

You will then be able to find your CSV file at this location:

cd /tmp

Copy it using the scp command, or edit it using nano:

nano /tmp/yourOutputFile.csv
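For reference, the whole interactive sequence above collapses into one non-interactive call (database and table names are placeholders):

psql -d yourdb -t -A -F',' -o /tmp/yourOutputFile.csv -c 'select * from YOUR_TABLE;'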