Popular Security "Cargo Cults"

  • Closed source is more secure than open source, because with open source attackers can read the code and find vulnerabilities to exploit. I'm not claiming this is always false, but with open-source software it's at least possible for outside experts to review the code, spot gaping vulnerabilities or backdoors, and patch them publicly. With closed-source software that simply isn't possible without painstakingly disassembling the binary. And while you and most attackers may not have access to the source code, there likely exist powerful attackers (e.g., the US gov't) who can obtain the source code or inject secret vulnerabilities into it.

  • Sending data over a network is secret if you encrypt the data. Encryption needs to be authenticated, or an attacker can alter your data in transit. You also need to verify the identity of the party you are sending information to, or a man-in-the-middle can intercept and modify your traffic. Even with authentication and identification, encryption often leaks information. You talk to a server over HTTPS? Network eavesdroppers (anyone at your ISP) know exactly how much traffic you sent, to which IP address, and the size of each response (e.g., many webpages can be fingerprinted from the sizes of the resources transferred). Furthermore, especially with AJAX web sites, what you type often triggers server responses that are identifiable by their traffic patterns. See Side-Channel Leaks in Web Applications.

  • Weak Password Reset Questions - How was Sarah Palin's email hacked? Someone went through the password reset procedure and answered every question correctly from publicly available information. How many of your password reset questions could a Facebook acquaintance figure out?

  • System X is unbreakable -- it uses 256-bit AES encryption and would take a billion ordinary computers a million billion billion billion billion billion years to crack. Yes, it can't be brute-forced, as that would require ~2^256 operations. But the password could be reused, or sit in a dictionary of common passwords. Or someone stuck a keylogger on the computer. Or you threaten someone with a $5 wrench and they tell you the password. Side-channel attacks exist. Maybe the random number generator was flawed. Timing attacks exist. Social engineering attacks exist. These are generally the weakest links.

  • This weak practice is good enough for us; we don't have time to do things securely. The US government doesn't need to worry about encrypting the video feeds from its drones - who would even know which carrier frequencies to listen to? Besides, encryption boxes would be heavy and costly - why bother?

  • Quantum computers can quickly solve exponential-time problems and will break all encryption methods. People read popular science articles on quantum computers and hear they are mystical, super-powerful machines that will harness the computing power of a near-infinite number of parallel universes to do anything. It's only partly true. Quantum computers will allow factoring and discrete logarithms to be computed in polynomial time (O(n^3)) via Shor's algorithm, rendering RSA, DSA, and other encryption based on those trapdoor functions easily breakable. Symmetric ciphers fare better: quantum computers can use Grover's algorithm to brute-force a key that should take O(2^N) time in only O(2^(N/2)) time, effectively halving the bit-strength of a symmetric key -- and Grover's algorithm is known to be asymptotically optimal for quantum computers, so don't expect further improvement.


Some examples:

  • Bigger keys. 4096-bit RSA, 256-bit AES... more bits are always better. (See the comments: there is no point in keys bigger than the size that already ensures "cannot break it at all" status, but bigger keys do imply network and CPU overhead, sometimes in large amounts.)

  • Automatic enforcement of "safe functions" like snprintf() instead of sprintf() (it won't do much good unless the programmer checks for possible truncation, and it won't prevent a user-provided string from being used as the format string). Extra points for strncpy(), which does not do what most people seem to assume (in particular, it does not guarantee a terminating '\0').

  • "Purity of the Security Manager". As an application of the separation of duties and roles, all "security-related" decisions should be taken by a specialist in security, who is distinct from the project designers and developers. Often taken to the misguided extreme, where the guy who decides what network ports should be left open on any firewall has no knowledge whatsoever about the project, and deliberately refuses to learn anything in that respect, because independent decision is more important than informed decision.


I'll add my own appsec examples that I have seen while consulting:

  • "I'll email you an encrypted zip and include the password in the same email..." This has happened to me more than once. A locked door won't stay locked if you leave the key in the door.
  • "But you couldn't have gotten SQL Injection and SMTP injection, we called sanitize() on everything!". There is no way to make a variable safe for every use, you need to use the sanitation routine for the job.
  • "We cannot be hacked because we only use XXX platform/language/OS". Every platform has security problems, period.
  • "We have a yearly security assessment, you won't be able to find anything." Frequency != Quality. Having frequent assessments is a good thing, but this does not guarantee anything!
  • "We have a WAF, which means we don't have to actually patch anything." Yeah, so this happens... I had a client that didn't patch known CSRF vulnerabilities, because they assumed the WAF would be able to stop these attacks. (No WAF can do this. I once found a WAF that claimed it could "prevent all of the owasp top 10", and the WAF's HTTP management interface was vulnerable to CSRF.)