What’s wrong with in-browser cryptography in 2017?

The main issue with cryptography in web pages is that, because the code you're executing is loaded from a web server, that server has full control over what that code is and can change it every time you refresh the page. Unless you manually inspect the code you're running every time you load a new page on that site (preferably before that code is actually executed), you have no way of knowing what that code will actually do.

The Web Cryptography API can mitigate this somewhat by storing cryptographic keys in a way that scripts running on the page cannot extract, but all the operations that can be performed with those keys (decrypting, signing, etc.) are still available to those (potentially malicious) scripts.
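
For example, here is a minimal sketch (assuming an async context) of how that separation works: the raw key material can be made non-extractable, but any script running on the page can still use the key.

```js
// Generate a signing key pair whose private key can never be exported by script.
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false,                                   // extractable: false
  ["sign", "verify"]
);

// crypto.subtle.exportKey("pkcs8", keyPair.privateKey) would now be rejected,
// but any script on the page -- including an injected malicious one -- can still do:
const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keyPair.privateKey,
  new TextEncoder().encode("whatever the attacker wants signed")
);
```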

As long as you trust the server not to behave maliciously, cryptography in the browser can be quite useful; but in many of the applications where cryptography is used, that level of trust in a remote server you do not control is unacceptable.

For your scheme in particular:

  1. Of course, we will be using SSL

This is good. Without SSL, all later security measures would be pointless because an attacker could simply replace your code with their own and do whatever they want with the user's data.

  2. The first time a user logs in to our notes app, we send them our public key. This key will be used to verify the authenticity of our "crypto.js" script. The public key will be stored in the user's browser.

This seems pointless. TLS already sends the client your server's public key and uses it to verify the authenticity of all scripts you load over that connection. There's no reason to do the same thing all over again in JavaScript.

  1. A "checker.js" script is downloaded and stored as well. This script will never change and it will be in charge of checking the integrity of "crypto.js" and (2).

This is also pointless, because there's no way to enforce your requirement that "This script will never change". You could send a Cache-Control header with a long max-age, but there's no guarantee the user agent will always respect that value; caching is not intended to be relied upon for security.
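
For what it's worth, the strongest hint you can give looks something like the response header below, and it is still only a hint; the browser (or the user, by clearing site data) is free to discard it at any time:

```
Cache-Control: max-age=31536000, immutable
```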

  4. In (2) and (3) we establish a Trust On First Use (TOFU) relationship between the user and our site. Both the public key and "checker.js" are cached using a service worker or similar.

Just to be clear: caching those files with service workers has no impact on the security of the overall system. When the user later comes back to your site the browser will check with the server to see whether the service worker has updated and install the new version if it has. So the server still has full control of the code running in the user's browser. There's no "Trust On First Use (TOFU) relationship" here.

  5. Even though we are using SSL, a MITM attack could happen while downloading (2) and (3), so we could offer a way to check that the public key and "checker.js" are not compromised.

That's a nice gesture, but as I previously stated, even if those files are not currently compromised, the server or a MITM (who somehow managed to compromise your TLS connection) can easily update those files at any time to compromise them without the user noticing, so I don't really see the point of this feature.

  6. On first login, we also send the user their private key. This private key will be used to encrypt and sign the notes. This private key will be encrypted.

  7. The key required to decrypt (6) is sent via email to the user. In this way we establish a two-channel authentication.

  8. Using Web Crypto ( https://www.w3.org/TR/WebCryptoAPI/ ) we decrypt (6) with (7). In this way (6) is never stored in the browser decrypted and it is not accessible by JavaScript thanks to the Web Crypto API.

Implementing this would require that the server have access to a plaintext version of the user's private key. Depending on exactly what you're using these keys for, that could be problematic if the server is ever compromised. Instead, you should consider using the Web Crypto API to generate a private-public key pair on the user's device, and have the browser send the public portion of that key to the server. That way the server never has access to the user's private key.
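
A rough sketch of that alternative, again assuming an async context (the /api/public-key endpoint is just an illustration, not part of your scheme):

```js
// Generate the key pair on the user's device; the private key never leaves the browser.
const keyPair = await crypto.subtle.generateKey(
  {
    name: "RSA-OAEP",
    modulusLength: 2048,
    publicExponent: new Uint8Array([1, 0, 1]),
    hash: "SHA-256"
  },
  false,                                   // private key is non-extractable
  ["encrypt", "decrypt"]
);

// The public key is always exportable; send only that half to the server.
const publicJwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);
await fetch("/api/public-key", {           // hypothetical endpoint
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(publicJwk)
});
```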

  9. Now we can start with the functionality of our web app: create encrypted notes. To do so, the user writes a note and clicks the save button. The server sends "crypto.js" signed with the server's private key (see 2).

  10. The signature is verified using the public key downloaded in (2) with (3) and if correct, the note is encrypted. If "checker.js" was modified, SRI should stop this process.

Unless you're loading checker.js from an untrusted third-party server, Subresource Integrity is unnecessary in this scenario. Anyone who can compromise your server or its connection to the client to modify checker.js can also modify the values of the subresource integrity hashes so that the browser will accept the modified script without complaint. Or they could just modify the page to not load checker.js at all, and use a completely different script of their own making instead. Either way, subresource integrity doesn't help.
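
For context, an SRI-protected include is just an attribute in the same HTML the attacker already controls, so nothing stops them from rewriting the hash along with the script (the hash below is a placeholder):

```html
<!-- Anyone who can modify checker.js on the server (or in transit) can also
     modify or remove this integrity value in the page that loads it. -->
<script src="/checker.js"
        integrity="sha384-PLACEHOLDER_HASH"></script>
```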

  11. The note is sent back to the server and stored.

That's fine as long as you fix the issue I already mentioned with 6, 7, and 8 so the server doesn't have the keys needed to decrypt the user's files. If you're fine with the server having the keys to access the user's files, there's no need for client-side crypto at all; just let the server handle the encryption.

  12. Depending on the functionality required, the server should delete the user's private key and keep only the public one, or not.

Or, as I suggested, just don't give the server the user's key in the first place. Other than that though, this part is fine security-wise, in that it prevents the server from accessing the user's files while the user is not using the site.

Once the user visits the site though, the user's browser will load code from that server which will have the ability to use the user's keys to decrypt the user's notes. So for the average user, accessing their notes without giving your server the ability to read them is impossible.

There are also some usability issues with this implementation, as it means users will not be able to sign into their account from a new browser and still have access to their notes. A better implementation would be to derive users' crypto keys from their passwords using a key derivation algorithm like PBKDF2 (available via the Web Cryptography API) with a high work factor. This would allow them to access their notes from any browser. (But would still have all the same security downsides mentioned in my comments above.)
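
A rough sketch of that derivation with the Web Cryptography API (the iteration count and the salt handling here are illustrative, not recommendations):

```js
// Derive an AES-GCM key for the user's notes from their password using PBKDF2.
async function deriveNoteKey(password, salt) {
  const baseKey = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(password),
    "PBKDF2",
    false,
    ["deriveKey"]
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 310000, hash: "SHA-256" }, // high work factor
    baseKey,
    { name: "AES-GCM", length: 256 },
    false,                                 // derived key is non-extractable
    ["encrypt", "decrypt"]
  );
}
```

The per-user salt would have to be stored somewhere the user can fetch it from any device (your server is fine for that; a salt is not a secret), and the same derivation then yields the same key in every browser.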


The things that really stand out to me are 6 and 7. These are by far the parts of this description that make me cringe the most.

The entire point of setting up TOFU is that there is two-way trust. Of course, trust on first use has its own issues, and I believe you already outlined most of those cases, which, though less likely to occur, are still possible.

But you are telling me the site will generate a private key for me, hand me that key encrypted, and then give me the means to decrypt that private key via email? That is basically emailing me the way to decrypt my way to decrypt.

I mean, when I use a service I generally look for equal exposure. I don't want the website to be a single point of failure for anything that I do. It also creates a situation where messages meant for me can be decrypted by anyone with sysadmin access to the users' generated private keys, which means I can't trust it.

It completely undercuts the whole point of asymmetric cryptography, especially since creating my own private key and sending the server the public key is a simple matter. Even for users who aren't technically inclined, this could be handled by the client. There is absolutely no reason, IMO, for another party to create a private key for me or for that key to ever touch the internet.

I'll let others answer the other points; I think 6 and 7 are the most dangerous, barring the MITM scenario you already mentioned in the OP.