Is exploit-free software possible?

Software is too complex

This is by far the most important factor. Even if you look at something as ordinary as a web application, the amount of work put into the codebase is immense. The code works with technologies whose standards are pages upon pages long, were written decades ago, and offer features that most developers have never even heard of.

Combine that with the fact that modern software is built on libraries, which are built on libraries, which abstract away some low-level library based on some OS functionality, which again is just a wrapper for some other OS function written in the 1990s.

The modern tech stack is just too big for one person to fully grok, even if you exclude the OS side of things, which leads to the next point:

Knowledge gets lost over time

SQL injections are now 20 years old, and they are still around. How so? One factor to consider is that knowledge inside a company gets lost over time. You may have one or two senior developers who know and care about security and make sure their code isn't vulnerable to SQL injection, but those seniors will eventually take on different positions, change companies or retire. New people will take their place, and they may be just as good developers, but if they don't know or care about security, they won't look for these problems.

People are taught the wrong way

Another point is that security isn't really something that schools care about. I recall my first lesson about using SQL in Java, where the teacher used string concatenation to insert parameters into a query. I told him that was insecure, and got yelled at for disturbing the lesson. All the students in that class saw that string concatenation is the way to go - after all, that's how the teacher did it, and the teacher would never teach anything wrong, right?
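
For readers who have only ever seen the concatenation style, here is roughly what the difference looks like. The lesson in question was in Java, where the fix is a PreparedStatement; the sketch below uses C# to match the code style later in this answer, and the connection, table and column names are invented for illustration.

using System.Data.SqlClient;

static void FindUser(SqlConnection connection, string userInput)
{
    // Vulnerable: the user's input becomes part of the SQL text itself.
    // Input such as   ' OR '1'='1   changes the meaning of the query.
    var bad = new SqlCommand(
        "SELECT * FROM Users WHERE Name = '" + userInput + "'", connection);

    // Safer: the query text stays fixed and the input is passed as a parameter,
    // so the database never interprets it as SQL code.
    var good = new SqlCommand(
        "SELECT * FROM Users WHERE Name = @name", connection);
    good.Parameters.AddWithValue("@name", userInput);
}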

All those students would now go into the world of development and happily write SQL code that is easily injectable, just because nobody cares. Why does nobody care? Because

Companies are not interested in "perfect code"

That's a bold statement, but it's true. Companies care about investment and return: they "invest" their developers' time (which costs the company a certain amount of money) and expect features in return, which they can sell to customers. Features to sell include:

  • Software can now work with more file formats
  • Software now includes in-app purchases
  • Software looks better
  • Software makes you look better
  • Software works faster
  • Software seamlessly integrates into your workflow

What companies can't sell you is the absence of bugs. "Software is not vulnerable to XSS" is not something you can sell, and thus not something companies want to invest money in. Fixing security issues is much like doing your laundry - nobody pays you to do it, nobody praises you for doing it, and you probably don't feel like doing it anyway, but you still have to.

And one final point:

You can't test for the absence of bugs

What this means is that you can never be certain your code contains no bugs. You can't prove that a piece of software is secure, because you can't see how many bugs are left. Let me demonstrate this:

static int Compare(string a, string b)
{
    if (a.Length != b.Length)
    {
        // If the lengths are not equal, we know the strings cannot be equal
        return -1;
    }
    else
    {
        for(int i = 0; i < a.Length; i++)
        {
            if(a[i] != b[i])
            {
                // If any character mismatches, the strings are not equal
                return -1;
            }
        }

        // Since no characters mismatched, the strings are equal
        return 0;
    }
}

Does this code look secure to you? You might think so. It returns 0 if the strings are equal and -1 if they're not. So what's the problem? The problem is that if one argument is a constant secret and the other is attacker-controlled input, an attacker can measure how long the function takes to complete. If the first three characters match, it will take longer than if no characters match.

This means that an attacker can try various inputs and measure how long each call takes to complete. The longer it takes, the more leading characters of the input match the secret. With enough time, an attacker can eventually recover the entire secret string. This is called a side-channel attack.

Could this bug be fixed? Yes, of course. Any bug can be fixed. But the point of this demonstration is to show that bugs are not necessarily clearly visible, and fixing them requires that you are aware of them, know how to fix them, and have the incentive to do so.
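
To make "know how to fix them" concrete, here is one common fix, sketched in the same style as the code above: instead of returning as soon as a character mismatches, accumulate the differences so the loop always runs over the full length. (Modern .NET also provides CryptographicOperations.FixedTimeEquals for comparing byte sequences; the manual version below is only meant to illustrate the idea.)

static int ConstantTimeCompare(string a, string b)
{
    // Still reveals whether the lengths differ, but not where the first
    // mismatching character is.
    if (a.Length != b.Length)
    {
        return -1;
    }

    int difference = 0;
    for (int i = 0; i < a.Length; i++)
    {
        // Accumulate mismatches instead of returning early, so the loop
        // takes the same time regardless of where the strings differ.
        difference |= a[i] ^ b[i];
    }

    return difference == 0 ? 0 : -1;
}

Whether leaking the length is acceptable depends on the use case - for fixed-length secrets such as hashes or tokens it usually is.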

In Summary...

I know this is a long post, so I don't blame you for skipping right to the end. The quick version is: writing exploit-free code is really, really hard, and it gets exponentially harder as your software becomes more complex. Every technology your software uses, be it the web, XML or something else, gives your codebase thousands of additional attack vectors. In addition, your employer might not even care about producing exploit-free code - they care about features they can sell. And finally, can you ever really be sure your code is exploit-free? Or are you just waiting for the next big exploit to hit the public?


The existing answers, at the time of writing, focused on the difficulties of writing bug-free code and why it is not possible.

But imagine if it were possible - and how tricky that might be. There is one piece of software out there which has earned the title "bug-free": a member of the L4 family of microkernels called seL4. We can use it to see just how far the rabbit hole goes.

seL4 is a microkernel. It is unique because, in 2009, it was proven to have no bugs. What is meant by that is that they used an automated proof system to mathematically prove that if the code is compiled by a standards-compliant compiler, the resulting binary will do precisely what the documentation of the language says it will do. This was later strengthened to make similar assertions about the ARM binary of the microkernel:

The binary code of the ARM version of the seL4 microkernel correctly implements the behaviour described in its abstract specification and nothing more. Furthermore, the specification and the seL4 binary satisfy the classic security properties called integrity and confidentiality.

Awesome! We have a non-trivial piece of software that is proven to be correct. What's next?

Well, the seL4 people aren't lying to us. They then immediately point out that this proof has limits, and they enumerate some of those limits:

Assembly: the seL4 kernel, like all operating system kernels, contains some assembly code, about 340 lines of ARM assembly in our case. For seL4, this concerns mainly entry to and exit from the kernel, as well as direct hardware accesses. For the proof, we assume this code is correct.
Hardware: we assume the hardware works correctly. In practice, this means the hardware is assumed not to be tampered with, and working according to specification. It also means, it must be run within its operating conditions.
Hardware management: the proof makes only the most minimal assumptions on the underlying hardware. It abstracts from cache consistency, cache colouring and TLB (translation lookaside buffer) management. The proof assumes these functions are implemented correctly in the assembly layer mentioned above and that the hardware works as advertised. The proof also assumes that especially these three hardware management functions do not have any effect on the behaviour of the kernel. This is true if they are used correctly.
Boot code: the proof currently is about the operation of the kernel after it has been loaded correctly into memory and brought into a consistent, minimal initial state. This leaves out about 1,200 lines of the code base that a kernel programmer would usually consider to be part of the kernel.
Virtual memory: under the standard of 'normal' formal verification projects, virtual memory does not need to be considered an assumption of this proof. However, the degree of assurance is lower than for other parts of our proof where we reason from first principle. In more detail, virtual memory is the hardware mechanism that the kernel uses to protect itself from user programs and user programs from each other. This part is fully verified. However, virtual memory introduces a complication, because it can affect how the kernel itself accesses memory. Our execution model assumes a certain standard behaviour of memory while the kernel executes, and we justify this assumption by proving the necessary conditions on kernel behaviour. The thing is: you have to trust us that we got all necessary conditions and that we got them right. Our machine-checked proof doesn't force us to be complete at this point. In short, in this part of the proof, unlike the other parts, there is potential for human error.
...

The list continues. All of these caveats have to be carefully accounted for when claiming proof of correctness.

Now we have to give the seL4 team credit. Such a proof is an incredible confidence-building statement. But it shows where the rabbit hole goes when you start to approach the idea of "bug-free". You never really get "bug-free"; you just have to start seriously considering larger classes of bugs.

Eventually you will run into the most interesting and human issue of all: are you using the right software for the job? seL4 offers several great guarantees. Are they the ones you actually needed? MechMK1's answer points out a timing attack on some code. seL4's proof explicitly does not include defense against those. If you are worried about such timing attacks, seL4 does not guarantee anything about them. You used the wrong tool.

And, if you look at the history of exploits, it's full of teams that used the wrong tool and got burned for it.

†. In response to the comments: The answers here actually speak to exploit-free code. However, I would argue that a proof that code is bug-free is necessary for a proof that it is exploit-free.


You can have high-quality code, but it becomes massively more expensive to develop. The Space Shuttle software was developed with great care and rigorous testing, resulting in very reliable software - but it was far more expensive than a PHP script.

Some more day-to-day software is also very well coded. For example, the Linux TCP/IP stack is pretty solid and has had few security problems (although, unfortunately, one recently). Other software at high risk of attack includes OpenSSH, Remote Desktop and VPN endpoints. Their developers are typically aware that their software often forms a security boundary, especially against pre-authentication attacks, and in general they do better and have fewer security problems.

Unfortunately, some key software is not so well developed. A notable example is OpenSSL, which is very widely used yet has messy internals where it is easy to introduce security flaws like Heartbleed. Steps have been taken to address this, e.g. the LibreSSL fork.

A similar effect happens in CMS software. For example, the WordPress core is generally well engineered and has few issues. But plugins are much more variable, and outdated plugins are often how such sites get hacked.

Web browsers are on the front line here. Billions of desktop users rely on their web browser to be secure and to keep malware off their systems. But browsers also need to be fast, support all the latest features, and still handle millions of legacy sites. So while we all really want web browsers to be trustworthy security boundaries, they currently are not.

When it comes to bespoke software - which is often web applications - the developers working on it are typically less experienced and less security-aware than core infrastructure developers, and commercial timescales prevent them from taking a very detailed and careful approach. But this can be helped with architectures that concentrate security-critical code in a small area, which is carefully coded and tested; the non-security-critical code can then be developed more quickly.
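
As a rough illustration of what "security-critical code in a small area" can mean, here is a hypothetical sketch (class and method names invented, assuming .NET Core 2.1 or later for CryptographicOperations.FixedTimeEquals): all password verification lives in one small class, so careful review and testing can concentrate there while the rest of the application simply calls it.

using System.Security.Cryptography;

// Hypothetical example: the only place in the application that touches
// password hashes. Review, testing and auditing effort concentrates here.
public static class PasswordVerifier
{
    private const int Iterations = 100_000;
    private const int HashSizeBytes = 32;

    // Compares a login attempt against a stored PBKDF2 hash and its salt.
    public static bool Verify(string attempt, byte[] salt, byte[] expectedHash)
    {
        using (var kdf = new Rfc2898DeriveBytes(
            attempt, salt, Iterations, HashAlgorithmName.SHA256))
        {
            byte[] actualHash = kdf.GetBytes(HashSizeBytes);

            // Constant-time comparison, for the reasons shown in the
            // timing-attack example earlier on this page.
            return CryptographicOperations.FixedTimeEquals(actualHash, expectedHash);
        }
    }
}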

All development can be helped with security tools and testing, including static analyzers, fuzzers and pen tests. Some can be embedded in an automated CI pipeline, and more mature security departments do this already.

So we've got a long way to go, but there is definitely hope that in the future there will be far fewer security bugs - and many opportunities for innovative tech to get us there.