How can I tune Postgres to avoid this error?

Postgres is running out of space in its temporary dumping ground while trying to complete your request. The way to fix this is either to run a simpler query (which probably isn't helpful) or to free up more space on the drive that holds PGDATA/base/pgsql_tmp/. (If you haven't done a VACUUM FULL in a while, now may be a good time :-)
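If it helps to see where that temp space is going before you start freeing things up, pg_stat_database keeps cumulative counters for temporary-file usage. A rough sketch (the table name my_bloated_table in the VACUUM FULL line is just a placeholder for whatever relation is actually bloated):

-- Cumulative temporary-file usage per database
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_space_used
FROM pg_stat_database
ORDER BY temp_bytes DESC;

-- Reclaim dead-tuple space from a bloated table. This takes an ACCESS EXCLUSIVE
-- lock and needs enough free disk to rewrite the table, so plan for downtime.
VACUUM FULL VERBOSE my_bloated_table;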

You can also put pgsql_tmp on its own partition (mind the permissions, as Postgres tends to get snippy about those things).

Note that I believe pgsql_tmp is per-tablespace these days, so if this isn't the main (base) tablespace substitute appropriately :-)
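If you do go the separate-partition route, one way to do it (assuming a directory such as /mnt/big_disk/pg_temp that already exists, is empty, and is owned by the postgres OS user; the tablespace name temp_space is made up) is to create a tablespace there and point temp_tablespaces at it:

-- Create a tablespace on the roomier disk
CREATE TABLESPACE temp_space LOCATION '/mnt/big_disk/pg_temp';

-- Send temporary files there for the whole cluster...
ALTER SYSTEM SET temp_tablespaces = 'temp_space';
SELECT pg_reload_conf();

-- ...or only for the current session
SET temp_tablespaces = 'temp_space';

Temporary files for sorts and hashes will then land under that tablespace instead of PGDATA/base/pgsql_tmp/.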


Add space on the disk it's writing to?


I know this thread is a bit old, but I thought this could help someone, as I was having a similar issue and had no option of extending the storage on the remote host.

Another way I managed to resolve it was to run a script that selects records in chunks and keeps iterating until a run returns no more records. For example, something like the below should work for you.

Note the ORDER BY; this is very important, or else you may select records that have already been selected and end up with duplicates (I have had that happen).

INSERT INTO summary
SELECT t1.a, t1.b, SUM(t1.p) AS p, COUNT(t1.*) AS c, t1.d, t1.r,
       DATE_TRUNC('month', t1.start) AS month, t2.t AS t, t2.h, t2.x
FROM raw1 t1, raw2 t2
WHERE t1.t2_id = t2.id AND (t2.t <> 'a' OR t2.y)
GROUP BY month, t, a, b, d, r, h, x
ORDER BY a, b
LIMIT 100 OFFSET 0;

On the next execution, you adjust the offset to 100, then 200, and so on.
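If you would rather not adjust the offset by hand, the same idea can be wrapped in a PL/pgSQL loop. This is only a sketch using the table and column names from the query above; note that the whole DO block runs in a single transaction, and each pass still builds and sorts the full grouped result before skipping ahead to the offset:

DO $$
DECLARE
    batch_size     integer := 100;
    current_offset integer := 0;
    rows_inserted  integer;
BEGIN
    LOOP
        -- Copy the next chunk of grouped rows into the summary table
        INSERT INTO summary
        SELECT t1.a, t1.b, SUM(t1.p) AS p, COUNT(t1.*) AS c, t1.d, t1.r,
               DATE_TRUNC('month', t1.start) AS month, t2.t AS t, t2.h, t2.x
        FROM raw1 t1, raw2 t2
        WHERE t1.t2_id = t2.id AND (t2.t <> 'a' OR t2.y)
        GROUP BY month, t, a, b, d, r, h, x
        ORDER BY a, b
        LIMIT batch_size OFFSET current_offset;

        -- Stop once a chunk comes back empty
        GET DIAGNOSTICS rows_inserted = ROW_COUNT;
        EXIT WHEN rows_inserted = 0;

        current_offset := current_offset + batch_size;
        RAISE NOTICE '% rows inserted, moving offset to %', rows_inserted, current_offset;
    END LOOP;
END $$;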

Hope it helps someone. :)
