Automated tools vs. Manual reviews

Some good answers here, but I think some points were missing:

  • Automated tools finish orders of magnitude faster than manual testing.
  • Automated tools cover breadth, but you need manual testing for depth (breadth both in the range of attacks/tests, and in probing every interface and line of code).
  • Autotools are great for the common low-hanging fruit, but if you need to step up the level of security you'll have to go deeper manually.
  • Manual testing cannot possibly cover every bit of the system (whether it's lines of code, decompiled assembly, web pages and parameters, web services, etc.), whereas autotools are great for that.
  • As @Andreas said, sometimes there is a complex vector that autotools cannot possibly imagine, but that will be obvious to an expert.
  • Autotools cannot test for business-logic flaws; they hunt technical flaws only - and the more common ones, at that.
  • Manual testing is not consistent.
  • Manual testing depends on the skill of the individual tester (oh, the horror!!) - you really need to know what you're looking for.
  • Likewise, manual testing cannot give you repeatable regression testing.
  • Autotools are automatically updated with the newest exploits, but a human usually won't remember every vector they read about two years ago...
  • On the other hand, autotools only get updated every once in a while, whereas a human can learn about a spanking-new technique and apply it the very next day.
  • Autotools usually produce a very high percentage of false positives (from 30% to over 90%, depending on the methodology and choice of product).
  • Autotools usually come with a decent reporting suite.

Bottom line? They both have a place, and both should be used in the correct context. For low-quality apps, start by fixing everything the autotool can find, and don't bother investing in a proper manual review just yet. Once you've raised the security level and gotten rid of the low-hanging fruit, go the distance and perform an in-depth manual review. And when you're doing manual testing, the first step is running the autotool and filtering the results - THEN begin the real testing.
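The "filter the results" step can be sketched roughly like this - a minimal triage pass over a scanner's exported findings before the human takes over. The findings structure, field names, and severity/confidence values are all illustrative, not any specific tool's format:

```python
import json

# Hypothetical findings exported by an automated scanner; the field
# names and values are illustrative, not a real tool's output format.
findings = [
    {"id": 1, "rule": "sql-injection", "severity": "high", "confidence": 0.9},
    {"id": 2, "rule": "missing-header", "severity": "low", "confidence": 0.4},
    {"id": 3, "rule": "xss-reflected", "severity": "high", "confidence": 0.2},
]

def triage(findings, min_confidence=0.5):
    """Split raw scanner output into 'fix now' and 'verify manually' piles.

    Low-confidence hits are likely false positives (30% to over 90% of
    raw output, per the list above), so they go to the manual-review
    queue instead of being reported as confirmed issues.
    """
    confirmed = [f for f in findings if f["confidence"] >= min_confidence]
    needs_review = [f for f in findings if f["confidence"] < min_confidence]
    return confirmed, needs_review

confirmed, needs_review = triage(findings)
print(json.dumps([f["rule"] for f in confirmed]))     # confirmed findings
print(json.dumps([f["rule"] for f in needs_review]))  # hand these to the tester
```

The point isn't the code itself but the workflow: the tool produces volume, a cheap filter cuts the noise, and the expensive human hours are spent only on what's left.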


Automated pros:

  • Fast - many checks per unit of time;
  • Does not need attention (mostly);
  • Can be scheduled and reported;
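"Can be scheduled and reported" in practice usually means wrapping the scanner invocation so cron or a CI job can run it unattended. A minimal sketch, where `scanner` is a placeholder command name, not a real tool:

```python
import datetime
import pathlib
import subprocess
import sys

def run_scheduled_scan(target, report_dir="reports", cmd=None):
    """Run a scanner unattended and save a timestamped report.

    `cmd` defaults to a placeholder command named "scanner" - substitute
    whatever tool you actually use. Nothing here needs interaction, so
    the function is suitable for a cron entry or a CI job.
    """
    # Placeholder invocation; "scanner" is an assumed, hypothetical CLI.
    cmd = cmd or ["scanner", "--target", target]
    pathlib.Path(report_dir).mkdir(exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    report = pathlib.Path(report_dir) / f"scan-{stamp}.txt"
    result = subprocess.run(cmd, capture_output=True, text=True)
    report.write_text(result.stdout)
    # A non-zero exit makes the cron/CI job fail loudly, not silently.
    if result.returncode != 0:
        sys.exit(result.returncode)
    return report
```

Timestamped reports also give you the regression history that manual testing can't: diff this week's report against last week's to see what regressed.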

Automated cons:

  • Does not cover smart attack vectors;
  • Does not always ensure full control of the process;

The manual approach basically flips those pros and cons: its strengths are the autotools' weaknesses, and vice versa. But the manual approach requires much deeper knowledge of the subject.


Semi-automation is the answer. Human intelligence piloting automated tools is the best bet for maximizing both test coverage and depth - not either one alone.

What works: Smart people driving the tools.

What fails: Everything else.