Cannot turn off SSLCompression in Apache?

Solution 1:

The CRIME attack (CVE-2012-4929) exploits the fact that compressed data is encrypted without properly obscuring the length of the underlying plaintext, which makes it possible to recover plaintext headers by guessing.

In your situation, the contents are compressed, the size (length) of the compressed data is added as another header, and then all of this is encrypted. This is not vulnerable to the CRIME attack, as the length of the unencrypted data is never revealed.

Solution 2:

To answer your question it helps to know a bit about the background:

Background - why is using compression potentially a security risk?

There are a few so-called “compression side-channel attacks” which use compression outcomes to try to guess the original text. Each works by adding attacker-controlled input to the data being compressed and then observing the output. This works because many compression algorithms recognise repeated text and replace it with back-references rather than repeating the text in full multiple times. That leads to smaller messages, but it also opens an attack opportunity.

How do these attacks work?

Basically, you guess some or all of the secret part, add your guess to the message alongside the unknown secret, and then observe the size of the encrypted outcome. If the output becomes smaller with certain guesses, you must have repeated part of the message and so benefited from better compression.

With a few guesses it’s possible to figure out the secret part. Doing this depends on being able to add to the message, but there are various methods for that. For example, say you wanted to learn a token cookie set for example.com. Send a message (perhaps a hidden XHR request fired when people visit your totally unrelated site?) to example.com?token=1 and measure the resulting message size (the browser will automatically attach the cookie to the request as well). Then try example.com?token=2 and see if it’s bigger, smaller or the same. Repeat this for all possible values until the message gets smaller, which tells you the first character of the cookie. Let’s say in this example it’s token=5. Then try the second character (e.g. example.com?token=51, example.com?token=52... etc.). Repeat until you have the full cookie.
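If you want to see those mechanics, here is a toy sketch in Python that plays both sides: it compresses an attacker-controlled guess together with a made-up secret cookie using zlib (DEFLATE, the algorithm behind gzip) and looks only at the compressed sizes. Everything here (the SECRET value, the alphabet, the message layout) is invented for the demo; a real CRIME/BREACH attack observes ciphertext sizes on the wire and has to deal with ties and noise rather than calling the compressor directly.

```python
import zlib

# Toy model of a compression side channel (CRIME/BREACH style):
# the attacker controls part of a message that also carries a secret,
# and can only observe the compressed size.
SECRET = "token=5f2a"            # hypothetical cookie value to recover
ALPHABET = "0123456789abcdef"    # assume we know it's lowercase hex

def compressed_len(attacker_part: str) -> int:
    """Size of the compressed request -- all the attacker gets to see."""
    msg = f"GET /?{attacker_part} HTTP/1.1\r\nCookie: {SECRET}\r\n"
    # Z_FIXED (static Huffman codes) keeps this toy deterministic;
    # a real attack must average over noise and break ties instead.
    c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                         zlib.DEF_MEM_LEVEL, zlib.Z_FIXED)
    return len(c.compress(msg.encode()) + c.flush())

recovered = "token="             # start from the known prefix
while len(recovered) < len(SECRET):
    # The guess that extends the true prefix compresses best, because
    # DEFLATE can reuse one more character of the repeated substring.
    recovered += min(ALPHABET, key=lambda ch: compressed_len(recovered + ch))
    print("recovered so far:", recovered)
```

Each correct character makes DEFLATE’s back-reference one character longer and saves one literal, so the compressed request comes out one byte shorter for the right guess.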

You can either measure the length of the message directly (e.g. by observing the encrypted messages if you can man-in-the-middle the network) or time how long the message takes to send, which gives a good guess at the length.

HTTP messages can be compressed in multiple ways

Compression can happen at different levels in an HTTP message: 1) at the SSL/TLS level, 2) at the HTTP body level and 3) at the HTTP header level.

SSL Compression

SSL/TLS compression happens regardless of the fact that it’s an HTTP message underneath; it’s done at the SSL/TLS level. Attacks like CRIME basically stopped us being able to use SSL/TLS compression, because it introduced too many ways to guess at hidden parts of the message using the algorithm above. To be honest, the gains from SSL/TLS compression were never that great anyway: if the response body has already been compressed at the underlying HTTP level using gzip or similar, compressing it again at the TLS level doesn’t save much more data. So there was no real reason to use it, and CRIME gave a real reason NOT to use it. SSL/TLS compression should always be turned off; use a tool like SSL Labs to confirm this. It’s off by default in most servers and has been for some time, so it would be very surprising if it were on.
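If you’d rather check from a script than via SSL Labs, here is a minimal sketch using Python’s ssl module. It deliberately offers compression (normal clients set OP_NO_COMPRESSION) and reports what the server negotiates. One caveat: many OpenSSL builds ship with compression compiled out entirely, in which case the client can’t offer it and this check proves nothing. The hostname is a placeholder.

```python
import socket
import ssl

# A rough client-side check that a server refuses TLS-level compression.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE               # we only care about compression
ctx.options &= ~ssl.OP_NO_COMPRESSION         # offer compression, if our
                                              # OpenSSL build supports it at all
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.3 removed compression

host = "example.com"  # placeholder: the server you want to test
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # compression() returns the negotiated method (e.g. "zlib") or None
        print("TLS compression:", tls.compression() or "off (good)")
```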

HTTP Body compression

Compression at the HTTP body level is more interesting. This typically uses gzip or the newer Brotli algorithm, and it IS recommended in most cases, as the gains in web performance are significant. HTTP bodies are often large (particularly response bodies) and networks are typically relatively slow, so there are real gains in sending smaller sizes across the network. Now yes, in theory this is vulnerable to similar attacks (the so-called BREACH attack and its TIME variant), but only if the secret data is in the body (so a matching guess can be seen to make the compressed response smaller). The risk is much smaller here, as most responses don’t include secret data (when was the last time you saw your cookie printed to screen on a page?), whereas cookies in headers are nearly always included and make up a larger proportion of the message.

Of course, if you do return secret info in a page (your name, social security number, DoB, bank details... etc.) then it could be vulnerable, and you should maybe consider not compressing those responses; but those are fairly atypical, so disabling HTTP compression for every response is rarely the right answer. Even when you are presenting secret info on screen there are usually better options: not presenting that data at all, or at least not in full (e.g. star out all but the last 4 digits), not reflecting user-supplied data in the same response, masking or padding the data with random characters, or adding rate limiting.
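Masking is worth a quick illustration, because it is roughly what several web frameworks do for CSRF tokens in response bodies: XOR the secret with a fresh random pad on every response, so the bytes on the wire never repeat and an attacker’s guess can never compress against them. A minimal sketch; the function names here are mine, not any particular framework’s API.

```python
import base64
import os

def mask(secret: bytes) -> str:
    """Encode a secret differently on every response: XOR it with a
    fresh one-time pad and send pad + masked value together."""
    pad = os.urandom(len(secret))
    masked = bytes(p ^ s for p, s in zip(pad, secret))
    return base64.b64encode(pad + masked).decode()

def unmask(token: str) -> bytes:
    raw = base64.b64decode(token)
    pad, masked = raw[:len(raw) // 2], raw[len(raw) // 2:]
    return bytes(p ^ m for p, m in zip(pad, masked))

secret = b"csrf-secret-value"
t1, t2 = mask(secret), mask(secret)
assert t1 != t2                            # different wire bytes every time...
assert unmask(t1) == unmask(t2) == secret  # ...but the same secret underneath
```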

Back to your question

So, to answer your question: SSL compression and HTTP body compression are two different things, and the former should be off and the latter on (except in really security-sensitive applications that don’t want to take even this risk, despite the gains; but even then, there are usually better ways to handle it).

To finish off, some bonus info on HTTP Header compression

To round out the story, let’s talk about HTTP header compression because, as per the above, headers often contain cookie secrets that attackers would find valuable.

HTTP/1.1, which until very recently was the predominant version in use, didn’t allow header compression, so there wasn’t much to talk about here. Headers were sent in full, uncompressed form (though encrypted by SSL/TLS if HTTPS was used) and so were not vulnerable to compression side-channel risks (assuming SSL compression was not used).

Headers were also typically very small compared to HTTP bodies, so no one really worried about compressing them. However, with the increase in the number of resources used to make up a web page (over 100 is not unusual nowadays), there is a lot of redundancy in sending pretty much the same HTTP headers back and forth all the time (have you seen the size of the User-Agent header, for example, which is sent with every single request but never changes across those requests?).

So the newer HTTP/2 and soon-to-be-released HTTP/3 protocols do allow HTTP header compression, but they specifically chose compression algorithms (HPACK for HTTP/2 and the similar QPACK for HTTP/3) which are not vulnerable to these attacks. This was an explicit choice, by the way: the earlier SPDY protocol that HTTP/2 was based on used gzip and so was vulnerable. When that was flagged, it had to change as part of standardising SPDY into HTTP/2.

Why not use “safe compression” always?

So why can’t we use safe compression techniques (like HPACK or QPACK) for HTTP response bodies as well and avoid all this? Well, they are very specific compression techniques that rely on dictionaries of known and frequently repeated values. This works well for HTTP headers, where there are relatively few values and they are repeated a lot, but it’s not really an option for general-purpose HTTP response bodies, which are likely to be completely different in each response.
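To make that concrete, here is a toy sketch of the indexing idea behind HPACK (not the real wire format, which also has a static table and Huffman coding): whole name/value pairs go into a table, and a repeat costs only a small index reference. Because only exact, complete pairs are reused, an attacker’s near-miss guess never partially matches a secret the way a DEFLATE substring can.

```python
# Toy sketch of HPACK-style header indexing (not the real wire format).
class HeaderTable:
    def __init__(self):
        self.table = []  # dynamic table of complete (name, value) pairs

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                # An exact repeat costs one small index reference...
                out.append(("index", self.table.index(pair)))
            else:
                # ...but anything new is sent as a full literal: a guess
                # like ("cookie", "token=5...") that isn't an exact match
                # gains nothing from the secret already in the table.
                self.table.append(pair)
                out.append(("literal", pair))
        return out

enc = HeaderTable()
print(enc.encode([("user-agent", "Mozilla/5.0"), ("cookie", "token=5f2a")]))
# [('literal', ...), ('literal', ...)]  -- first request: all literals
print(enc.encode([("user-agent", "Mozilla/5.0"), ("cookie", "token=5f2a")]))
# [('index', 0), ('index', 1)]          -- repeats become tiny references
```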

Hope that explains a few things and so answers your question.