Code Analysis: Binary vs Source

It depends on the situation: the type of application, the deployment model, and especially your threat model.

For example, certain compilers can substantially change delicate code, introducing subtle flaws - such as removing security checks that do appear in the source (satisfying your code review) but not in the binary (failing the reality test).
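To make that concrete, here is a minimal sketch (my own illustration, not code from the original answer) of two well-known cases: an overflow guard written in terms of signed-integer overflow, which is undefined behaviour and may therefore be deleted by an optimizing compiler, and a memset() used to scrub a secret, which dead-store elimination is allowed to drop. In both cases the check/wipe is visible in the source you reviewed but may be absent from the binary you ship.

    #include <cstring>
    #include <cstddef>

    // Hypothetical guard: "len + 16" overflowing a signed int is undefined
    // behaviour, so an optimizing compiler may assume it never happens and
    // remove the whole branch - the check exists in the source but possibly
    // not in the binary.
    bool safe_to_copy(int len) {
        if (len + 16 < len)
            return false;
        return true;
    }

    // Hypothetical scrubbing routine: the buffer is never read after the
    // memset, so dead-store elimination may silently drop the wipe.
    void wipe_password(char *pw, std::size_t n) {
        // ... pw was used above ...
        std::memset(pw, 0, n);   // may not survive optimization
    }

Disassembling the optimized build (or diffing -O0 against -O2 output) is one of the few ways to notice this kind of divergence between source and binary.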
Also, there are certain code-level rootkits - you mentioned C++, but there are also managed code rootkits, e.g. for .NET and Java - that would completely evade your code review but show up in the deployed binaries.
Additionally, the compiler itself may contain a rootkit that inserts backdoors into your app. (See the history of the original attack of this kind, Ken Thompson's "Reflections on Trusting Trust": the compiler inserted a backdoor password into the login program, and also re-inserted the backdoor into the compiler itself whenever it was recompiled from "clean" source.) Again, missing from the source code but present in the binary.


That said, reverse engineering the binary is of course more difficult and time-consuming, and would be pointless in most scenarios if you already have the source code.
I want to emphasize this point: if you have the source code, don't even bother with RE until you've cleaned up all the other vulnerabilities you've found via code review, pentesting, fuzzing, threat modeling, etc. And even then, only bother if it's a highly sensitive or extremely visible app.
The edge cases are hard enough to find, and rare enough, that your efforts can be better spent elsewhere.


On the other hand, note that there are some static analysis products that specifically scan binaries (e.g. Veracode), so if you're using one of those it doesn't really matter...


@AviD solid points, totally agree on the rootkits in binaries/compilers point.

Setting aside the valid points AviD makes, if you're a knowledgeable security professional, most vulnerabilities will most likely be in your source code. A strong grasp of secure programming and of how reverse engineering is done gives you the best means of fixing and preventing the majority of holes in your source code. Plus, if there is an exploit in the compiler or the binary it produces, often there is nothing you as a developer can do to prevent it except switch to another compiler or language (which usually is not a viable option).


There are plenty of reasons aside from security-related ones to look into the final binary, whether with a debugger, a disassembler, or a profiler/emulator like Valgrind (which can verify various aspects of a compiled program).

Security and correctness of the program usually go hand in hand.

For me the process is: first lint the code (using PCLINT), then build the binaries and verify them with a fuzzer and Memcheck (from Valgrind). That has given me very good results in terms of robustness and reliability. Only PCLINT in this setup has access to the source code.
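As a concrete illustration of what the binary-level pass catches (my own example, not code from the answer above): the following off-by-one heap overflow compiles cleanly and is easy to miss in a casual source review, but Memcheck flags the invalid write as soon as the built program is run under valgrind.

    #include <cstring>
    #include <cstdlib>

    int main() {
        const char *name = "example";
        // Off-by-one: strlen() does not count the terminating '\0',
        // so the strcpy below writes one byte past the allocation.
        char *copy = static_cast<char *>(std::malloc(std::strlen(name)));
        std::strcpy(copy, name);   // invalid write of 1 byte at run time
        std::free(copy);
        return 0;
    }

Build it with debug info (g++ -g) and run it as valgrind --tool=memcheck ./a.out; Memcheck reports the invalid write with a stack trace, which is exactly the kind of run-time defect that complements what a source-only review finds.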