How to reclaim disk space after delete without rebuilding table?

If you use anything but CLUSTER / VACUUM FULL / pg_repack (which all manage locks automatically), you need to make sure there are no concurrent writes to the table. Take an exclusive lock on the table and do everything in a single transaction or, better yet, shut out all connections to avoid concurrent changes.
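A minimal sketch of that pattern (the work in the middle is whichever approach below you choose):

BEGIN;
LOCK TABLE tbl IN ACCESS EXCLUSIVE MODE;  -- blocks all concurrent writes (and reads)
-- ... reorganize the table here with one of the approaches below ...
COMMIT;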

TABLESPACE

Yes, your last idea could work. Create a new tablespace on the other disk.

-- the directory must already exist, be empty, and be owned by the postgres OS user
CREATE TABLESPACE ts1 LOCATION '/data/disk2';

Then create an optimized copy of the table in the new tablespace:

CREATE TABLE new_tbl 
TABLESPACE ts1 AS
SELECT * FROM tbl
ORDER BY ....  -- ORDER BY is optional 

The copy will be packed tightly without dead rows.

Then you might just drop the old table and rename the new one to keep using the new disk.
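For example, a sketch (assuming nothing else still depends on the old table):

BEGIN;
DROP TABLE tbl;
ALTER TABLE new_tbl RENAME TO tbl;
COMMIT;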

Or, if you have to keep it on the old disk for some reason, drop the old table, rename the new one, and move it back to the default tablespace. That last step is much faster now that the table is packed tightly:

ALTER TABLE tbl SET TABLESPACE pg_default;

Per documentation:

This form changes the table's tablespace to the specified tablespace and moves the data file(s) associated with the table to the new tablespace. Indexes on the table, if any, are not moved; but they can be moved separately with additional SET TABLESPACE commands. See also CREATE TABLESPACE.
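For example, to move an index separately (hypothetical index name):

ALTER INDEX tbl_some_col_idx SET TABLESPACE pg_default;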

Either way, don't forget to (re)create all dependent objects: indexes, foreign keys, views, ...
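For example, with hypothetical names (your actual indexes, constraints and views will differ):

CREATE INDEX tbl_some_col_idx ON tbl (some_col);
ALTER TABLE referencing_tbl
  ADD CONSTRAINT referencing_tbl_tbl_id_fkey FOREIGN KEY (tbl_id) REFERENCES tbl (id);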

Well, you could use ALTER TABLE tbl SET TABLESPACE ... to begin with, but then the table would not be optimized as requested, just moved as is. But you would have enough wiggle room then to run pg_repack etc.

COPY

A complete backup / restore might take too long, but you could do that for only the table in question.

-- Run the whole sequence in a single transaction while holding the lock (see above).
-- Server-side COPY to/from a file requires superuser or the pg_write_server_files / pg_read_server_files roles.
COPY tbl TO '/path/to/other/disk/tbl.pgsql';
TRUNCATE tbl;   -- empties the table and releases its disk space
COPY tbl FROM '/path/to/other/disk/tbl.pgsql';

The table is now packed tightly.

Temporary table

If you should happen to have enough RAM, you could do something similar with a temporary table held in RAM, which would be much faster. Detailed instructions (and a short sketch after these links):

  • Best way to delete millions of rows by ID
  • How to delete duplicate entries?
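A minimal sketch of that approach (assumptions: the table fits in RAM, temp_buffers is raised before any temporary table is used in the session, and a plain SELECT * round trip is acceptable for tbl's column definitions):

SET temp_buffers = '1GB';  -- size to fit tbl; must be set before temp tables are first used in the session
BEGIN;
LOCK TABLE tbl IN ACCESS EXCLUSIVE MODE;
CREATE TEMP TABLE tbl_tmp AS SELECT * FROM tbl;
TRUNCATE tbl;                            -- the old, bloated data file is removed at COMMIT
INSERT INTO tbl SELECT * FROM tbl_tmp;   -- rows are written back tightly packed
COMMIT;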