Why, historically, do people use 255 rather than 256 for database field sizes?

With a maximum length of 255 characters, the DBMS can choose to use a single byte to indicate the length of the data in the field, since one byte can represent any value from 0 to 255. If the limit were 256 or greater, two bytes would be needed.
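Here is a minimal sketch of that trade-off in Python (a hypothetical on-disk layout, not any real DBMS's format; the function names are made up for illustration):

```python
import struct

def encode_varchar(s: str) -> bytes:
    """Length-prefixed encoding with a one-byte prefix (hypothetical layout)."""
    data = s.encode("utf-8")
    if len(data) > 255:
        raise ValueError("a one-byte length prefix can only describe 0-255 bytes")
    return struct.pack("B", len(data)) + data

def decode_varchar(buf: bytes) -> str:
    """Read the one-byte length, then that many bytes of data."""
    (length,) = struct.unpack_from("B", buf)
    return buf[1:1 + length].decode("utf-8")

# A full 255-char value occupies 1 + 255 = 256 bytes; allowing 256 chars
# would force a two-byte ("H") prefix on every value, long or short.
row = encode_varchar("x" * 255)
assert len(row) == 256 and decode_varchar(row) == "x" * 255
```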

A value of length zero is certainly valid for varchar data (unless constrained otherwise). Most systems treat such an empty string as distinct from NULL, but some systems (notably Oracle) treat an empty string identically to NULL. For systems where an empty string is not NULL, an additional bit somewhere in the row would be needed to indicate whether the value should be considered NULL or not.
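A hedged sketch of that extra flag, extending the hypothetical layout above (a real system would pack the flag into a per-row null bitmap rather than spend a whole byte on it):

```python
from typing import Optional

def encode_nullable_varchar(s: Optional[str]) -> bytes:
    # Hypothetical layout: one flag byte (really a single bit in a per-row
    # null bitmap), then a one-byte length prefix and the data for non-NULL
    # values. An empty string is a valid length-0 value, not NULL.
    if s is None:
        return b"\x01"                      # NULL: flag set, no payload
    data = s.encode("utf-8")
    if len(data) > 255:
        raise ValueError("length must fit in one byte")
    return b"\x00" + bytes([len(data)]) + data

# The empty string and NULL get distinct encodings.
assert encode_nullable_varchar("") != encode_nullable_varchar(None)
```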

As you note, this is a historical optimisation and is probably not relevant to most systems today.


255 was the VARCHAR limit in MySQL 4 and earlier.

Also: 255 chars + a NUL terminator = 256 bytes.

Or: a 1-byte length descriptor gives a possible range of 0-255 chars.
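For the arithmetic behind that last point:

```python
# One unsigned byte has 2**8 = 256 distinct values, so as a length
# descriptor it covers every string length from 0 through 255.
assert 2 ** 8 == 256
assert max(range(2 ** 8)) == 255
```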