The valid role of obscurity

Interesting question. My view is that obscuring information helps security in many cases, because it can force an attacker to generate more "noise", which can be detected.

Obscurity becomes a "bad thing" when the defender relies on it as a critical control, so that without the obscurity the control fails.

So in addition to the one you gave above, an effective use of obscurity could be removing software name and version information from Internet-facing services. The advantages of this are:

  • If an attacker wants to find out whether a vulnerable version of the service is in use, they have to make multiple queries (e.g. looking for default files, or testing timing responses to certain queries). This traffic is more likely to show up in IDS logs than a single request that returned the version. Additionally, fingerprinting techniques aren't well developed for all protocols, so it can actually slow the attacker down considerably (see the sketch after this list).
  • The other benefit is that the version number will not be indexed by services like Shodan. This matters when an automated attack is launched against all instances of a particular version of a service (e.g. when a 0-day has been discovered for that version). Hiding the version from the banner may prevent a given instance of the service from falling prey to that attack.
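
To make the first point concrete, here is a minimal sketch of the difference between the two situations: with default settings, a single request reveals the exact version, while with banner hiding (e.g. Apache's ServerTokens Prod or nginx's server_tokens off) the attacker is forced into noisier active probing. The hostname is a placeholder, not a real target.

    import http.client

    # Placeholder host for illustration only.
    conn = http.client.HTTPConnection("www.example.com", 80, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()

    # With default settings, many servers answer with something like
    # "Apache/2.4.57 (Unix)"; with banner hiding, just "Apache" or nothing.
    print(resp.getheader("Server"))
    conn.close()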

That said, it shouldn't ever be the only line of defense. In the above example, the service should still be hardened and patched to help maintain its security.

Where obscurity fails is where it's relied on: things like hard-coded passwords that are never changed, secrets obfuscated with "home grown encryption", or a decision not to patch a service based on the idea that no one will attack it. The assumption that no one will find/know/attack this generally fails, often because the defenders are limiting their concept of who a valid attacker might be. It's all very well saying that an unmotivated external attacker may not take the time to unravel an obscure control, but if the attacker turns out to be a disgruntled ex-employee, that hard-coded password could cause some serious problems.
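
As an illustration of the "home grown encryption" failure mode, here is a sketch (the password and key are invented for the example) of how a repeating-XOR obfuscation scheme gives up its key to anyone who knows, or can guess, a fragment of the plaintext:

    def xor_obfuscate(data: bytes, key: bytes) -> bytes:
        """Toy repeating-XOR obfuscation (not real encryption)."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # What a developer might ship inside a binary or config file.
    blob = xor_obfuscate(b"db_password=hunter2", key=b"k3y")

    # An attacker who guesses the common prefix recovers the key,
    # because key_byte = cipher_byte XOR plaintext_byte.
    known = b"db_password="
    print(bytes(c ^ p for c, p in zip(blob, known)))  # b'k3yk3yk3yk3y'

Once the key falls out, every other secret obfuscated with it falls too.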


You've mischaracterized the conventional wisdom. The conventional wisdom doesn't say that obscurity is bad. It says that relying upon security through obscurity is bad: it usually leads to fragile or insecure systems. Do note the difference. Obscurity might add some additional security, but you should not rely upon it, and it shouldn't be your primary defense. You should be prepared that the obscurity might be pierced, and be confident that you have adequate defenses to handle that case.

An important concept here is Kerckhoffs's principle. Back in the 1800s, Kerckhoffs already articulated the reasons why we should be skeptical about security through obscurity, and how to draw a line between appropriate and inappropriate uses of secrecy in cryptography. The Wikipedia article on Kerckhoffs's principle is very good and an excellent starting point.

Here are some points to ponder:

  • As the Wikipedia article says, "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security." Therefore, all else being equal, the fewer things we have to keep secret, the easier it may be to secure the system.

  • Generally speaking, there is little hope of keeping the design or algorithms used in the system secret from dedicated attackers. Therefore, any system whose security relies upon the secrecy of its design is, in the long run, doomed -- and in the short run, it is taking an unnecessary risk.

  • The worst kind of secret is one that cannot be changed if it is compromised or leaked to unauthorized parties. The best kind of secret is one that can be easily changed if it is suspected to be leaked. Building a system where security relies upon keeping the system's design secret is one of the worst possible uses of secrecy, because once the system is deployed, if its secret leaks, it is very hard to change (you have to replace all deployed copies of the system with a totally new implementation, which is usually extremely expensive). Building a system where security relies upon each user to select a random passphrase is better, because if the password is leaked (e.g., the user types their password into a phishing site and then says "Oops!"), it is relatively easy to change the user's password without inconveniencing others.

  • Or, as Kerckhoffs wrote in the 1800s, the design of a cryptosystem must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience. This is basically a restatement of my previous point, in a particular domain.

For these reasons, well-designed systems generally try to minimize the extent to which they rely upon secrets; and when secrets are necessary, one usually designs them to concentrate all the required secrecy into a cryptographic key or passphrase that can be easily changed if compromised.
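
To sketch what that looks like in practice, here is an example using the third-party Python cryptography package (my choice for illustration; the point above is library-agnostic). The algorithm is completely public, all the required secrecy is concentrated in a key, and MultiFernet makes rotating to a fresh key cheap the moment a leak is suspected:

    from cryptography.fernet import Fernet, MultiFernet

    # The algorithm (Fernet: AES-CBC plus an HMAC) is public knowledge;
    # the only secret is this key, and the key is cheap to replace.
    old_key = Fernet.generate_key()
    token = Fernet(old_key).encrypt(b"attack at dawn")

    # Suspect a leak? Generate a new key and re-encrypt existing data.
    # The system's design never changes, only the secret itself does.
    new_key = Fernet.generate_key()
    rotated = MultiFernet([Fernet(new_key), Fernet(old_key)]).rotate(token)
    print(Fernet(new_key).decrypt(rotated))  # b'attack at dawn'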


It's security through obscurity that is the bad part. Obscurity can increase security, but you can't depend on obscurity alone to provide security.

Absolute mantras are always harmful. ;) It's essential to understand the reasoning behind the mantra and the tradeoffs involved.

For example, hiding a key outside your house when you go for a run is security through obscurity, but it might be an acceptable risk if you'll be back in 30 minutes (and aren't a high-risk target).

The same can be said for "never use goto." Sometimes goto is the clearest way to write a piece of code. As an experienced professional, you need to understand the reasons behind the guidelines, so you can understand the tradeoffs.