How should an application store its credentials?

Never hardcode passwords or crypto keys in your program.

The general rule of thumb is: the only credentials you should store on a user's machine are credentials associated with that user, e.g., credentials that enable that user to log into his/her account.

You should not store your developer credentials on the user's machine. That's not safe.

You have to assume that anything stored on the user's machine is known by the user, or can easily be learned by the user. (This is the right assumption: it is not hard to reverse-engineer an application binary to learn any keys or secrets that may be embedded in it.)

Once you understand this general principle, everything becomes easy. You then need to design the rest of your system and your authentication protocol so that the client software can authenticate itself using only those credentials that are safe to store on the client.

Example 1. Suppose you have a Facebook app ID and key, associated with your app (i.e., associated with your developer account). Do you embed the app ID and key in the desktop software you ship to users? No! Absolutely not. You definitely don't do that, because that would allow any of your users to learn your app ID and key and submit their own requests, possibly damaging your reputation.

Instead, you find another way. For instance, maybe you set up your own server that has the app ID and key and is responsible for making the requests to the Facebook platform (subject to appropriate limitations and rate-limiting). Then, your client connects to your server. Maybe you authenticate each client by having each user set up his/her own user account on your server, storing the account credentials on the client, and having the client authenticate itself using these credentials.

You can make this totally invisible to the user, by having the client app generate a new user account on first execution (generating its own login credentials, storing them locally, and sending them to the server). The client app can use these stored credentials to connect in the future (say, over SSL) and automatically log in every subsequent time the app is executed.
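Here is a minimal sketch of that first-run flow in Python, assuming a hypothetical backend at api.example.com with /register and /login endpoints and the requests library; the specific names and paths are illustrative, and the only point is the pattern: random per-user credentials are generated locally, registered once over TLS, and reused automatically thereafter.

    import json
    import os
    import secrets
    import requests  # assumed HTTPS client; any TLS-capable HTTP library works

    SERVER = "https://api.example.com"                            # hypothetical backend holding the real app keys
    CRED_PATH = os.path.expanduser("~/.myapp/credentials.json")   # hypothetical app-local location

    def load_or_create_account() -> tuple[str, str]:
        """Return (username, password), registering a fresh random account on first run."""
        if os.path.exists(CRED_PATH):
            with open(CRED_PATH) as f:
                creds = json.load(f)
            return creds["username"], creds["password"]

        # First run: invent a random identity and register it with the server over TLS.
        username = secrets.token_hex(16)
        password = secrets.token_urlsafe(32)
        requests.post(f"{SERVER}/register",
                      json={"username": username, "password": password},
                      timeout=10).raise_for_status()

        os.makedirs(os.path.dirname(CRED_PATH), exist_ok=True)
        with open(CRED_PATH, "w") as f:   # see the note further below about restricting file permissions
            json.dump({"username": username, "password": password}, f)
        return username, password

    def login(session: requests.Session) -> None:
        """Every later run: log in automatically with the stored per-user credentials."""
        username, password = load_or_create_account()
        session.post(f"{SERVER}/login",
                     json={"username": username, "password": password},
                     timeout=10).raise_for_status()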

Notice how the only things stored on a user's machine are credentials that allow logging into that user's account -- but nothing that would allow logging into other people's accounts, and nothing that would expose your developer app keys.

Example 2. Suppose you write an app that needs to access the user's data in their Google account. Do you prompt them for their Google username and password and store them in the app's local storage? You could: that would be OK, because the user's credentials are stored on the user's own machine. The user has no incentive to hack their own machine, because they already know their own credentials.

Better yet: use OAuth to authorize your app. This way your app stores an OAuth token in its app-local storage, which allows your app to access the user's Google account. It also avoids the need to store the user's Google password (which is particularly sensitive) in the app's local storage, so it reduces the risk of compromise.
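As a rough illustration (not the full OAuth consent flow, which libraries such as google-auth-oauthlib can handle for you), this is what storing and using a token instead of a password might look like; the paths are hypothetical:

    import json
    import os
    import requests  # assumed HTTP client

    TOKEN_PATH = os.path.expanduser("~/.myapp/google_token.json")  # hypothetical app-local location

    def save_token(token: dict) -> None:
        """Persist the OAuth token obtained from the consent flow -- never the Google password."""
        os.makedirs(os.path.dirname(TOKEN_PATH), exist_ok=True)
        with open(TOKEN_PATH, "w") as f:
            json.dump(token, f)

    def call_user_api(url: str) -> dict:
        """Call an API on the user's behalf using the stored bearer token."""
        with open(TOKEN_PATH) as f:
            token = json.load(f)
        resp = requests.get(url,
                            headers={"Authorization": f"Bearer {token['access_token']}"},
                            timeout=10)
        resp.raise_for_status()
        return resp.json()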

Example 3. Suppose you're writing an app that has a MySQL database backend that is shared among all users. Do you take the MySQL password and embed it in the app binary? No! Any of your users could extract the password and then gain direct access to your MySQL database.

Instead, you set up a service that provides the necessary functionality. The client app connects to the service, authenticates itself, and sends the request to the service. The service can then execute this request on the MySQL database. The MySQL password stays safely stored on the server's machine, and is never accessible on any user's machine. The server can impose any restrictions or access control that you desire.
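A sketch of what that server-side service might look like, assuming Flask and mysql-connector-python; the route, table, and account names are made up, and the client authentication discussed next is left out:

    # Server-side sketch (runs on your server, never shipped to users).
    import os
    from flask import Flask, jsonify, request
    import mysql.connector  # assumes mysql-connector-python

    app = Flask(__name__)

    def db():
        # The MySQL password lives only in the server's environment/configuration,
        # never inside the client binary.
        return mysql.connector.connect(
            host="localhost",
            user="app_service",
            password=os.environ["MYSQL_PASSWORD"],
            database="appdb",
        )

    @app.route("/items")
    def list_items():
        # Expose only the narrow, parameterized operations clients need,
        # instead of handing out raw database access.
        owner = request.args.get("owner", "")
        conn = db()
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM items WHERE owner = %s", (owner,))
        rows = [{"id": r[0], "name": r[1]} for r in cur.fetchall()]
        cur.close()
        conn.close()
        return jsonify(rows)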

This requires your client app to be able to authenticate to the service. One way to do that is to have the client app create a new account on the service on first run, generate a random authentication credential, and automatically log in to the service every subsequent time. You could use SSL with a random password, or, better yet, something like SSL with a unique client certificate for each client.
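For example, with the requests library a client can present a per-install certificate during the TLS handshake; the file names and URL below are placeholders:

    import requests  # assumed HTTP client with TLS client-certificate support

    SERVICE = "https://service.example.com/api/query"   # hypothetical service endpoint

    def query_service(payload: dict) -> dict:
        resp = requests.post(
            SERVICE,
            json=payload,
            cert=("client.crt", "client.key"),  # unique certificate/key issued to this install
            verify="service-ca.pem",            # trust only the service's own CA
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()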


The other rule is: you don't hardcode credentials into the program. If you are storing credentials on the user's machine, store them in some private location: maybe a configuration file or a directory, preferably one that is only readable by this particular app or this particular user (not a world-readable file).
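On a POSIX system that might look like the following sketch, with a hypothetical per-user config directory:

    import json
    import os
    import stat

    CONF_DIR = os.path.expanduser("~/.myapp")               # hypothetical per-user config directory
    CONF_FILE = os.path.join(CONF_DIR, "credentials.json")

    def store_credentials(creds: dict) -> None:
        # Directory and file are readable/writable only by this user (0700 / 0600).
        os.makedirs(CONF_DIR, mode=0o700, exist_ok=True)
        fd = os.open(CONF_FILE, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o600)
        with os.fdopen(fd, "w") as f:
            json.dump(creds, f)
        os.chmod(CONF_FILE, stat.S_IRUSR | stat.S_IWUSR)  # in case the file already existed with looser permissions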


It's a classic security problem with no perfect solution, only imperfect ones, and it boils down to the more general problem of protecting software against tampering and reverse-engineering.

  1. Use an external authentication method which the user must actively provide to reach the credentials: a manually entered password (whose hash digest, for example, is used to decrypt the credentials), a secure authentication dongle containing a certificate and matching private key which must be inserted into a USB port, a fingerprint reader providing the correct fingerprint, etc. Ideally, the result of this will not be a simple yes/no answer to your program, as this can be overridden/patched/spoofed, but a real value (a cryptographic key) required to decrypt your credentials (or whatever else you're trying to protect), derived directly from the authenticator. A multi-source approach, in which the decryption key is calculated on the fly from several sources (which sources really depends on your system), could be even better. (A sketch of the password-derived-key approach appears after this list.)

  2. Heavily (automatically and massively) obfuscate your program to thwart reverse-engineering. True enough, static analysis tools have become quite capable, but there are [proprietary, expensive] obfuscation tools out there (obfuscating compilers, packers etc.) that make reverse-engineering very time-consuming, challenging and laborious, enough to send attackers looking for easier targets. Adding anti-debugging and tamper-resistance mechanisms can further strengthen the security of your program. That said, Java as a bytecode language is especially vulnerable in this regard, as decompiling it (compared to decompiling/disassembling native code) is rather straightforward.
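To make option 1 above concrete, here is a rough sketch of deriving the decryption key from a manually entered password, assuming the third-party cryptography package for the actual encryption; the salt handling, iteration count, and storage format are all illustrative choices:

    import base64
    import getpass
    import hashlib
    from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

    def key_from_password(password: str, salt: bytes) -> bytes:
        # The decryption key is derived from the password on the fly and never stored;
        # the program keeps only the salt and the encrypted credentials.
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return base64.urlsafe_b64encode(raw)  # Fernet expects a urlsafe-base64-encoded 32-byte key

    def encrypt_credentials(plaintext: bytes, password: str, salt: bytes) -> bytes:
        return Fernet(key_from_password(password, salt)).encrypt(plaintext)

    def decrypt_credentials(ciphertext: bytes, salt: bytes) -> bytes:
        password = getpass.getpass("Password to unlock stored credentials: ")
        return Fernet(key_from_password(password, salt)).decrypt(ciphertext)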