Why do salts for hashing passwords need to be globally unique, not just system/site-unique?

The goal of salting is resistance to precomputation.

While you can't guarantee that a salt is globally unique, you can (just as you said) generate salts of sufficient size as to make it infeasible to generate a rainbow table in advance (making them "unique enough").
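To make "unique enough" concrete, here is a minimal Python sketch using the standard library's `secrets` module (a CSPRNG); 16 random bytes gives a 128-bit salt:

```python
import secrets

# 16 random bytes = 128 bits of salt. Collisions are astronomically unlikely,
# and an attacker cannot precompute a rainbow table for a salt they can't predict.
salt = secrets.token_bytes(16)
assert len(salt) == 16
```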

More generally, if you're asking about how to create a salt, it sounds like you're rolling your own hashing method, which may be educational but is not generally recommended.

Instead, use an existing hash (such as PBKDF2 or bcrypt) (or even better, as CBHacking rightly suggests in the comments, scrypt or Argon2). Not only will you automatically inherit their existing salting methods (which are quite sufficient), but your experimentation will be aligned with password storage best practices.
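As an illustration only (a sketch, not a substitute for a maintained library), here is PBKDF2 via Python's standard-library `hashlib.pbkdf2_hmac`; the iteration count is an assumption you should tune to your own hardware and latency budget:

```python
import hashlib
import secrets

ITERATIONS = 600_000  # assumed figure; benchmark and tune for your environment

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a hash with PBKDF2-HMAC-SHA256 and a fresh random 16-byte salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(digest, expected)  # constant-time comparison
```

Note that the salt is generated for you and simply stored alongside the hash - exactly the "existing salting method" you inherit for free.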

Update 2018-04-03: Steve Thomas (sc00bz) makes a good point here that Argon2 and scrypt are actually worse for the defender when used at speeds generally considered to be compatible with authentication at scale (<0.1 seconds), while being better for the defender for speeds compatible with encryption of individual files (1 to 5 seconds). (This means that the gap from 0.1 seconds to 1 second is probably interesting, and bears testing for your specific environment). In other words, his assertion is that if you try to tune Argon2 or scrypt to <0.1s speeds, the results are less resistant to cracking than bcrypt is at those speeds.

In the same thread, Aaron Toponce also makes a great suggestion for working around bcrypt's length limit (72 bytes max) and pre-hashing pitfalls, by using bcrypt(base64(sha-256(password))). (See also Thomas Pornin's similar answer here.) The reason that the base64 step is needed is also interesting - don't leave it out: a raw SHA-256 digest can contain NUL bytes, which many bcrypt implementations treat as a string terminator, silently truncating the input.
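To see why the pre-hash stays within bcrypt's limits, here is a small standard-library-only sketch (the actual bcrypt call is omitted; the `prehash` name is mine):

```python
import base64
import hashlib

def prehash(password: str) -> bytes:
    """Reduce an arbitrary-length password to a short, NUL-free input for bcrypt."""
    digest = hashlib.sha256(password.encode()).digest()  # 32 raw bytes, may contain 0x00
    return base64.b64encode(digest)                      # 44 ASCII bytes, never 0x00

p = prehash("a" * 1000)    # even very long passwords...
assert len(p) == 44        # ...fit well under bcrypt's 72-byte limit
assert b"\x00" not in p    # and contain no NUL bytes for bcrypt to truncate at
```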

So if your use case is scalable, cracking-resistant authentication with a fast authentication experience, bcrypt(base64(sha-256(password))) is probably a good approach. (And you'll need to adjust bcrypt's work factor to the highest value that fits within your authentication speed window). If your users can tolerate waiting a second or more to authenticate, Argon2i or scrypt may be better. And note that relative performance will shift over time as hardware capabilities improve - so increasing the bcrypt work factor every X months for new hashes (as Dropbox does) would allow the approach to adapt to future capabilities.

Update 2019-09-01: It turns out that wrapping a faster hash in a slower hash without salting/peppering the faster hash first is dangerous, because fast uncracked unsalted hashes from another source (such as another leak) can be tested in bulk against the wrapped hashes without having to crack those faster hashes first. The faster hashes can then be attacked in a separate job at a much higher rate. So my advice above should be updated to bcrypt(base64(sha256(password.salt))).
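Putting the updated advice together, a sketch of the salted pre-hash step might look like this (standard library only; `prehash` is my name for the helper, and the bcrypt call appears only as a comment since it requires the third-party `bcrypt` package):

```python
import base64
import hashlib
import secrets

def prehash(password: str, salt: bytes) -> bytes:
    """Salted pre-hash: base64(sha256(password.salt)), safe to feed to bcrypt."""
    digest = hashlib.sha256(password.encode() + salt).digest()
    return base64.b64encode(digest)

salt = secrets.token_bytes(16)  # store this alongside the final bcrypt hash
wrapped_input = prehash("correct horse battery staple", salt)
# bcrypt.hashpw(wrapped_input, bcrypt.gensalt(rounds=12))  # third-party bcrypt package
```

Because the inner hash is now salted, a leaked set of plain SHA-256 hashes from elsewhere can no longer be tested in bulk against the wrapped hashes.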


There is no advantage in using the e-mail address instead of generating a random value. You have to store it anyway, just like a random salt, because otherwise the user could never change their e-mail address.

Global uniqueness is not a requirement - you don't need to guarantee uniqueness - but the more globally unique the salts are, the better. A 128-bit salt (as used by bcrypt) has about 3E38 possible values. Even generating 1000 salts per second, one would expect to wait about 6E8 years for a 50% chance of a single duplicate.
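You can sanity-check that figure with the birthday bound (a 50% collision chance after roughly 1.177·√N draws from N possible values):

```python
import math

N = 2 ** 128                          # ~3.4e38 possible 128-bit salt values
draws_for_50pct = 1.1774 * math.sqrt(N)  # birthday bound for one expected duplicate
seconds = draws_for_50pct / 1000      # at 1000 salts generated per second
years = seconds / (3600 * 24 * 365)
assert 6e8 < years < 7e8              # ~6.9e8 years, matching the 6E8 figure above
```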

As Royce already mentioned, today's algorithms already generate the salt for you and store it in plaintext in the resulting hash string, so there is no need for special handling in the database (just one field for the hash).

When using email addresses as salts, an attacker could precompute rainbow tables for certain emails of interest.

While the disadvantages are not fatal in practice, there are simply no advantages, so why choose a more complicated and less safe method of generating the salt? Use a modern password hash function and you are good.