Should I add an arbitrary length limit to VARCHAR columns?

The answer is no.

Related advice in the Postgres Wiki.

Don't add a length modifier to varchar if you don't need it. (Most of the time, you don't.) Just use text for all character data. Make that varchar (the standard SQL type) without a length modifier if you need to stay compatible with RDBMS that don't have text as a generic character string type.
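A minimal sketch of that advice (the tbl / col names are just placeholders, matching the example further down):

CREATE TABLE tbl (
    tbl_id serial PRIMARY KEY,
    col    text NOT NULL  -- plain text, no arbitrary length limit
);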

Performance is almost the same: text is a bit faster in rare situations, and you save the cycles spent on the length check. Related:

  • Any downsides of using data type “text” for storing strings?

If you actually need to enforce a maximum length, varchar(n) is a valid choice, but I would still consider text with a CHECK constraint like:

ALTER TABLE tbl ADD CONSTRAINT tbl_col_len CHECK (length(col) < 51);

You can modify or drop such a constraint at any time without having to mess with the table definition and dependent objects (views, functions, foreign keys, ...). And you can enforce other requirements in the (same) constraint.
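A sketch of what that might look like, reusing the names from the example above (the new limit and the extra requirement are made up for illustration):

ALTER TABLE tbl DROP CONSTRAINT tbl_col_len;
ALTER TABLE tbl ADD CONSTRAINT tbl_col_len CHECK (length(col) < 101 AND col <> '');  -- raise the limit, forbid empty strings

Adding the constraint validates existing rows with a table scan, but never rewrites them.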

Length modifiers used to cause problems like this or this or this ...

PostgreSQL 9.1 introduced a new feature to alleviate the pain somewhat. The release notes:

Allow ALTER TABLE ... SET DATA TYPE to avoid table rewrites in appropriate cases (Noah Misch, Robert Haas)

For example, converting a varchar column to text no longer requires a rewrite of the table. However, increasing the length constraint on a varchar column still requires a table rewrite.
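For illustration, a minimal sketch reusing the tbl / col names from above (on 9.1 or later this is a catalog-only change):

ALTER TABLE tbl ALTER COLUMN col TYPE text;  -- varchar(n) -> text: no table rewrite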

More issues with varchar(n) have been fixed in later releases. For example, since PostgreSQL 9.2, increasing the length limit of a varchar column no longer forces a table rewrite either.


If you see the length limit as a kind of check constraint, to make sure you validate the data, then yes, add one. Actually, you might not want to use a length definition but a real check constraint instead, to make changing the limit faster.

To change (increase) a length limit, you need to run an ALTER TABLE, which might take a long time to finish (due to a possible re-write of the table), during which an exclusive table lock is held.

Changing (i.e. dropping and re-creating) a check constraint is a very brief operation: it only requires reading the table's data and will not change any rows. So it is a lot quicker, which in turn means the exclusive table lock is held for a much shorter time.
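To make the difference concrete, a sketch of both approaches, reusing tbl / col / tbl_col_len from the first answer (the new limit of 100 is made up; the rewrite behavior of the first statement depends on the PostgreSQL version, as noted above):

-- Changing the length modifier: may rewrite the whole table under an exclusive lock
ALTER TABLE tbl ALTER COLUMN col TYPE varchar(100);

-- Swapping the check constraint: only scans existing rows, never rewrites them
ALTER TABLE tbl DROP CONSTRAINT tbl_col_len;
ALTER TABLE tbl ADD CONSTRAINT tbl_col_len CHECK (length(col) < 101);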

During normal operation, there is no difference whatsoever between a text, a varchar, or a varchar(5000) column.


The question is specifically about adding an arbitrary length limit to VARCHAR columns.

To that, the answer is simply "no". There is nothing that can justify adding an arbitrary limit like you would in inferior databases that support varchar(max) or use conventions like varchar(255). However, if the spec dictates a limit, I think the answer becomes much more complex, especially on modern versions of PostgreSQL. And, for that, I would lean towards YES.

In my opinion, the limit is a wise choice if the spec requires it, especially for more reasonable workloads, if for no other reason than to preserve meta-data.

From my answer to Index performance for CHAR vs VARCHAR (Postgres), where I address the value of meta-data.

If I found a spec that had variable-length text keys that were meaningful, and that I trusted to have a constant max length, I would use varchar too. However, I can't think of anything that fits those criteria.