Are there downsides for creating a large VARCHAR value in Redshift?

What do you mean by "downside"? There is one really big downside if you don't make the column big enough -- you can't store the values you need to store.

As for additional overhead, you don't need to worry about that. A VARCHAR value basically takes up only the storage needed for the actual value, plus a small per-value overhead for the length; the declared maximum doesn't change what gets written to disk. (Note that VARCHAR(n) in Redshift is measured in bytes, not characters.) Also, 400 is not such a big number, especially when compared to 200.
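If you want to convince yourself, a rough sanity check is to load the same data into two tables that differ only in the declared width and compare their on-disk sizes. A sketch (the table and column names here are made up):

```sql
-- Two tables that differ only in declared VARCHAR width.
-- "t_narrow" and "t_wide" are placeholder names.
CREATE TABLE t_narrow (msg VARCHAR(200));
CREATE TABLE t_wide   (msg VARCHAR(400));

-- ... load identical data into both ...

-- Compare on-disk size (reported in 1 MB blocks); with a meaningful
-- amount of data the two should come out essentially the same.
SELECT "table", size
FROM svv_table_info
WHERE "table" IN ('t_narrow', 't_wide');
```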

So, if you need 400 bytes to store the value, change the column to allow it. There may be a one-time cost for the ALTER itself; I'm not sure whether Redshift copies the data when the type changes. Either way, the ongoing effect on performance should be negligible.
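For what it's worth, Redshift can increase a VARCHAR column's length with a plain ALTER TABLE, something like this (table and column names are hypothetical):

```sql
-- Widen the column; Redshift allows increasing (but not decreasing)
-- the length of a VARCHAR column this way.
ALTER TABLE events ALTER COLUMN message TYPE VARCHAR(400);
```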


Don’t make it a practice to use the maximum column size for convenience.

Instead, consider the largest values you are likely to store in a VARCHAR column, for example, and size your columns accordingly. Because Amazon Redshift compresses column data very effectively, creating columns much larger than necessary has minimal impact on the size of data tables. During processing for complex queries, however, intermediate query results might need to be stored in temporary tables. Because temporary tables are not compressed, unnecessarily large columns consume excessive memory and temporary disk space, which can affect query performance.

http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-smallest-column-size.html
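One way to follow that advice is to size the column from the data you actually have. Something like the query below (assuming a hypothetical events.message column) reports the longest stored value in bytes, which is what VARCHAR(n) counts in Redshift:

```sql
-- Find the longest stored value in bytes, then pick a width with
-- some headroom above it. "events" / "message" are placeholders.
SELECT MAX(OCTET_LENGTH(message)) AS max_bytes
FROM events;
```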