Measure the size of a PostgreSQL table row

Q2: way to measure page size

PostgreSQL provides a number of Database Object Size Functions. I packed the most interesting ones in this query and added some Statistics Access Functions at the bottom. (The additional module pgstattuple provides yet more useful functions.)

This is going to show that different methods to measure the "size of a row" lead to very different results. It all depends on what you want to measure, exactly.

This query requires Postgres 9.3 or later. For older versions see below.

It uses a VALUES expression in a LATERAL subquery to avoid spelling out calculations for every row.

Replace public.tbl with your (optionally schema-qualified) table name to get a compact view of collected row size statistics. You could wrap this into a plpgsql function for repeated use, handing in the table name as a parameter and using EXECUTE; a sketch of that follows the result table below.

SELECT l.metric, l.nr AS "bytes/ct"
     , CASE WHEN is_size THEN pg_size_pretty(nr) END AS bytes_pretty
     , CASE WHEN is_size THEN nr / NULLIF(x.ct, 0) END AS bytes_per_row
FROM  (
   SELECT min(tableoid)        AS tbl      -- = 'public.tbl'::regclass::oid
        , count(*)             AS ct
        , sum(length(t::text)) AS txt_len  -- length in characters
   FROM   public.tbl t                     -- provide table name *once*
   ) x
 , LATERAL (
   VALUES
      (true , 'core_relation_size'               , pg_relation_size(tbl))
    , (true , 'visibility_map'                   , pg_relation_size(tbl, 'vm'))
    , (true , 'free_space_map'                   , pg_relation_size(tbl, 'fsm'))
    , (true , 'table_size_incl_toast'            , pg_table_size(tbl))
    , (true , 'indexes_size'                     , pg_indexes_size(tbl))
    , (true , 'total_size_incl_toast_and_indexes', pg_total_relation_size(tbl))
    , (true , 'live_rows_in_text_representation' , txt_len)
    , (false, '------------------------------'   , NULL)
    , (false, 'row_count'                        , ct)
    , (false, 'live_tuples'                      , pg_stat_get_live_tuples(tbl))
    , (false, 'dead_tuples'                      , pg_stat_get_dead_tuples(tbl))
   ) l(is_size, metric, nr);

Result:

              metric               | bytes/ct | bytes_pretty | bytes_per_row
-----------------------------------+----------+--------------+---------------
 core_relation_size                | 44138496 | 42 MB        |            91
 visibility_map                    |        0 | 0 bytes      |             0
 free_space_map                    |    32768 | 32 kB        |             0
 table_size_incl_toast             | 44179456 | 42 MB        |            91
 indexes_size                      | 33128448 | 32 MB        |            68
 total_size_incl_toast_and_indexes | 77307904 | 74 MB        |           159
 live_rows_in_text_representation  | 29987360 | 29 MB        |            62
 ------------------------------    |          |              |
 row_count                         |   483424 |              |
 live_tuples                       |   483424 |              |
 dead_tuples                       |     2677 |              |
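
A minimal sketch of the plpgsql wrapper mentioned above, for repeated use. The function name pg_row_size_stats is my own choice; it takes the table name as a regclass parameter and runs the same query with EXECUTE:

CREATE OR REPLACE FUNCTION pg_row_size_stats(_tbl regclass)
  RETURNS TABLE (metric text, "bytes/ct" bigint, bytes_pretty text, bytes_per_row bigint)
  LANGUAGE plpgsql AS
$func$
BEGIN
   RETURN QUERY EXECUTE format(
   $q$
   SELECT l.metric, l.nr AS "bytes/ct"
        , CASE WHEN is_size THEN pg_size_pretty(nr) END AS bytes_pretty
        , CASE WHEN is_size THEN nr / NULLIF(x.ct, 0) END AS bytes_per_row
   FROM  (
      SELECT min(tableoid)        AS tbl
           , count(*)             AS ct
           , sum(length(t::text)) AS txt_len
      FROM   %s t
      ) x
    , LATERAL (
      VALUES
         (true , 'core_relation_size'               , pg_relation_size(tbl))
       , (true , 'visibility_map'                   , pg_relation_size(tbl, 'vm'))
       , (true , 'free_space_map'                   , pg_relation_size(tbl, 'fsm'))
       , (true , 'table_size_incl_toast'            , pg_table_size(tbl))
       , (true , 'indexes_size'                     , pg_indexes_size(tbl))
       , (true , 'total_size_incl_toast_and_indexes', pg_total_relation_size(tbl))
       , (true , 'live_rows_in_text_representation' , txt_len)
       , (false, '------------------------------'   , NULL)
       , (false, 'row_count'                        , ct)
       , (false, 'live_tuples'                      , pg_stat_get_live_tuples(tbl))
       , (false, 'dead_tuples'                      , pg_stat_get_dead_tuples(tbl))
      ) l(is_size, metric, nr)
   $q$, _tbl);
END
$func$;

Call:

SELECT * FROM pg_row_size_stats('public.tbl');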

For older versions (Postgres 9.2 or older):

WITH x AS (
   SELECT count(*)               AS ct
        , sum(length(t::text))   AS txt_len  -- length in characters
        , 'public.tbl'::regclass AS tbl      -- provide table name as string
   FROM   public.tbl t                       -- provide table name as name
   ), y AS (
   SELECT ARRAY [pg_relation_size(tbl)
               , pg_relation_size(tbl, 'vm')
               , pg_relation_size(tbl, 'fsm')
               , pg_table_size(tbl)
               , pg_indexes_size(tbl)
               , pg_total_relation_size(tbl)
               , txt_len
             ] AS val
        , ARRAY ['core_relation_size'
               , 'visibility_map'
               , 'free_space_map'
               , 'table_size_incl_toast'
               , 'indexes_size'
               , 'total_size_incl_toast_and_indexes'
               , 'live_rows_in_text_representation'
             ] AS name
   FROM   x
   )
SELECT unnest(name)                AS metric
     , unnest(val)                 AS "bytes/ct"
     , pg_size_pretty(unnest(val)) AS bytes_pretty
     , unnest(val) / NULLIF(ct, 0) AS bytes_per_row
FROM   x, y

UNION ALL SELECT '------------------------------', NULL, NULL, NULL
UNION ALL SELECT 'row_count', ct, NULL, NULL FROM x
UNION ALL SELECT 'live_tuples', pg_stat_get_live_tuples(tbl), NULL, NULL FROM x
UNION ALL SELECT 'dead_tuples', pg_stat_get_dead_tuples(tbl), NULL, NULL FROM x;

Same result.
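
For a physical per-tuple breakdown (tuple counts and lengths, dead tuple percentage, free space), the additional module pgstattuple mentioned at the top goes further. A minimal sketch, assuming the same table:

CREATE EXTENSION IF NOT EXISTS pgstattuple;  -- ships with the standard contrib package

-- Scans the whole table, so it is exact but more expensive than the
-- statistics-collector estimates used above.
SELECT * FROM pgstattuple('public.tbl');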

Q1: anything inefficient?

You could optimize column order to save some bytes per row, currently wasted to alignment padding:

integer                  | not null default nextval('core_page_id_seq'::regclass)
integer                  | not null default 0
character varying(255)   | not null
character varying(64)    | not null
text                     | default '{}'::text
character varying(255)   | 
text                     | default '{}'::text
text                     |
timestamp with time zone |
timestamp with time zone |
integer                  |
integer                  |

This saves between 8 and 18 bytes per row. I call it "column tetris". Details:

  • Configuring PostgreSQL for read performance
  • Calculating and saving space in PostgreSQL
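
To see the effect of alignment padding directly, you can compare the size of two row values that contain the same columns in a different order. A minimal sketch (the values are arbitrary; exact numbers depend on the platform, typically 56 vs. 48 bytes on a 64-bit build):

-- An int (4 bytes) placed before a timestamptz (8 bytes, 8-byte aligned)
-- forces 4 bytes of padding; grouping the 8-byte columns first avoids it.
SELECT pg_column_size(ROW(1::int, now(), 2::int, now())) AS with_padding
     , pg_column_size(ROW(now(), now(), 1::int, 2::int)) AS without_padding;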

Also consider:

  • Would index lookup be noticeably faster with char vs varchar when all values are 36 chars

An approximation of the size of a row, including the TOAST'ed contents, is easy to get by querying the length of the TEXT representation of the entire row:

SELECT octet_length(t.*::text) FROM tablename AS t WHERE primary_key=:value;

This is a close approximation to the number of bytes that will be retrieved client-side when executing:

SELECT * FROM tablename WHERE primary_key=:value;

...assuming that the caller of the query is requesting results in text format, which is what most programs do (binary format is possible, but it's not worth the trouble in most cases).

The same technique could be applied to locate the N "biggest-in-text" rows of tablename:

SELECT primary_key, octet_length(t.*::text) FROM tablename AS t
   ORDER BY 2 DESC LIMIT :N;
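
The same idea works per column, which helps to locate the column that drives the size. pg_column_size() reports roughly the bytes a value occupies in storage (i.e. after any compression for TOASTable types), while octet_length() reports the uncompressed byte length; big_col is a placeholder column name:

SELECT primary_key
     , pg_column_size(big_col) AS stored_bytes   -- after compression / TOAST
     , octet_length(big_col)   AS plain_bytes    -- uncompressed length in bytes
FROM   tablename AS t
ORDER  BY 2 DESC
LIMIT  :N;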

There are a few things that could be happening here. In general, I doubt that length itself is the proximate problem; I suspect instead that you have a length-related problem.

You say the text fields can get up to a few k. A row cannot go over 8k in main storage, so it is likely that your larger text fields have been TOASTed, i.e. moved out of the main storage into extended storage in separate files. This makes your main storage faster (so SELECT id really is faster, because there are fewer disk pages to access), but SELECT * becomes slower because there is more random I/O.
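
To check whether that has happened, and how much has gone out of line, you can look at the size of the table's TOAST relation. A small sketch, assuming a table named tablename:

SELECT pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size
FROM   pg_class
WHERE  oid = 'tablename'::regclass
AND    reltoastrelid <> 0;   -- returns no row if the table has no TOAST table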

If your total row sizes are still well under 8k, you could try altering the storage settings. Be warned, though: bad things can happen when an oversized attribute is inserted into main storage, so it is best not to touch this unless you have to, and if you do, set appropriate limits via check constraints.

Transport is not likely the only cost, either. Collating many, many fields can require random reads; large numbers of random reads can cause cache misses; large memory requirements can force things to be materialized on disk; and large numbers of wide rows, if a join is present (and there is one when TOAST is involved), may require costlier join patterns, and so on.
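
If you do go down the storage-settings road, the relevant statements look roughly like this (table name, column name and limit are placeholders):

-- Prefer keeping values inline (compressed) rather than moving them out of line:
ALTER TABLE tablename ALTER COLUMN long_text SET STORAGE MAIN;

-- Guard against oversized values with a check constraint:
ALTER TABLE tablename
  ADD CONSTRAINT long_text_max_len CHECK (octet_length(long_text) <= 2000);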

The first thing I would look at is selecting fewer rows and seeing whether that helps. If it does, you could also try adding more RAM to the server, but I would start by finding out where the performance falls off due to plan changes and cache misses.
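
A quick way to watch for that, assuming the problem query is a plain SELECT * (names are placeholders): compare the plan and the shared hit/read buffer counts at increasing row counts.

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tablename LIMIT 1000;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tablename LIMIT 100000;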