What are the potential vulnerabilities of allowing a large HTTP body size?
I'm going to go for an answer but I'm also not an expert on this particular topic, so I'll be curious to read any other answers that might come in.
I believe that the short answer is this: allowing large HTTP POST bodies certainly provides an avenue for DOS attacks, but it isn't necessarily the most attractive avenue for DOS attacks. As a result, I certainly wouldn't set a max body size substantially larger than what I need, but if I needed a large POST body I would allow it and not worry too much about it. In detail:
Preferred DDOS attack vectors
There are lots of ways to DOS a website. These days the largest DDOS attacks are amplified DDOS attacks. These (typically) abuse UDP services that receive a small amount of traffic and respond with a much larger amount, and whose responses can be directed at arbitrary IP addresses on the internet via source-address spoofing. For instance, the current (March 2018) record for the largest DDOS attack hit GitHub using such a vector (https://githubengineering.com/ddos-incident-report/). These sorts of attacks seek to completely overwhelm the network capacity of the target servers, taking them down for as long as the attack lasts.
The vector used in the GitHub incident had an amplification factor of ~51,000, which means that for each byte of network traffic the originating botnet created, 51,000 bytes hit GitHub. At the peak of the attack, GitHub's network (or their DDOS mitigation provider's network) saw roughly 1.35 Tb/s of traffic. Assuming an amplification factor of 51,000x, this means the botnet that triggered the attack only had to push out roughly 26 Mb/s of traffic. If that same botnet had instead simply uploaded large amounts of data to an endpoint that accepted large uploads, there would have been no amplification, and GitHub would have been hit with a DDOS attack consuming roughly 26 Mb/s of its bandwidth. I doubt they would have noticed.
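As a sanity check, the back-of-envelope arithmetic above works out like this:

```python
# Back-of-envelope check of the amplification arithmetic above.
peak_bps = 1.35e12        # ~1.35 Tb/s observed at the peak of the attack
amplification = 51_000    # amplification factor cited in the incident report

origin_bps = peak_bps / amplification  # traffic the botnet itself had to send
print(f"originating traffic: {origin_bps / 1e6:.1f} Mb/s")  # roughly 26 Mb/s
```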
Above I focused on amplification-based DDOS attacks, but they obviously aren't the only option. The amplification is what made this one so successful (in terms of network traffic - it didn't actually take down GitHub), but it requires finding a suitable amplification vector to exploit. Otherwise, there are many other ways to DDOS a service: you can exhaust its CPU (imagine repeatedly hitting a poorly performing endpoint that consumes substantial CPU or database resources per request), you can saturate its network by simply throwing lots of data at it, you can hold open TCP/IP connections until the server empties its connection pool, etc.
Disadvantages of attacking a file upload
All this is to say: overrunning a file upload endpoint is certainly one way to try to DOS a system, but probably not the best. The trouble is that it leaves the attacker and defender on equal footing: you can only overwhelm the target with as much data as you are able to send. Exhausting connection pools, hitting high-resource endpoints, amplification attacks, etc., are all ways to attack a service that give your resources extra leverage and make it easier to DDOS people. As a result, you certainly can DOS someone by overwhelming a file upload endpoint, but there are probably more effective ways to do it. Moreover, as long as an attacker has enough network traffic at their disposal, hitting a file upload endpoint offers no particular advantage: they can send the data regardless of whether or not the server on the other end feels like receiving it. The only possible advantage I can see to hitting a file upload endpoint (although again, I'm not an expert in this area) is that the file actually ends up stored on a server somewhere, so you may be able to fill up a hard drive with large temporary files. Even this, though, is probably not the most effective DOS strategy.
All this brings us back to what I originally said: be smart and don't configure your server to allow substantially larger uploads than you need, but in terms of security concerns there are likely larger ones than this.
Securing a file upload endpoint
While a file upload endpoint may not be the first choice of places to hit with a DDOS attack, that doesn't mean that security concerns should go out the window. As you mention in comments, requiring user access doesn't help much: PHP will accept the upload regardless and store it in a temporary location. It will get immediately deleted when you don't do anything with the upload, but it will still get written to disk temporarily.
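For concreteness, in a typical nginx + PHP stack the upload size is capped in two places: the web server's body-size limit and PHP's own upload limits. The 10 MB values below are arbitrary examples, not recommendations:

```nginx
# nginx: reject request bodies over 10 MB before PHP ever sees them
client_max_body_size 10m;
```

```ini
; php.ini: cap individual uploaded files and the overall POST body
upload_max_filesize = 10M
post_max_size = 12M
```

Rejecting oversized bodies at the web server is cheaper, since the request is refused before anything is spooled to disk.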
Your other suggestion, making the /tmp directory large enough that you can't run out of free space (as determined by your available bandwidth), will certainly ensure your hard drive won't fill up (although overflowing a /tmp directory wouldn't necessarily kill a server, especially if it is on its own partition). That exposes the next weak point in the chain: if all your bandwidth is being used up in an attempt to overwhelm a file upload endpoint, then your service is going to be down regardless of whether or not your machine has space on its hard drive, because your network bandwidth is all used up. This pretty much answers your question, because at that point a DDOS via file upload is no longer the real problem. If your file upload endpoint can't be DDOS'd except by completely overwhelming your network bandwidth, then no one will bother trying to DDOS your file upload endpoint: they'll just aim enough traffic at you to overwhelm your network connection. At that point, I'd say your file upload is secure!
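One common way to give /tmp its own size-capped partition is a tmpfs mount; a sketch as an /etc/fstab entry (the 2G cap is an arbitrary example):

```
tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0
```

With this in place, an attacker filling /tmp can at worst exhaust that one mount (and the RAM/swap backing it), not the root filesystem.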
I'm not sure how much it would cost to purchase that much hard-drive space. It would depend on your hosting provider (although hard-drive space is usually cheap) and your available bandwidth. That said, there is also a pretty simple way to protect such a system: put your file upload system on a separate server on a separate network (e.g. uploads.example.com) and make sure all your systems have some DDOS protection (e.g. Cloudflare). Then a DDOS against your file upload endpoint might take it down, but the rest of your systems would continue on their merry way. Of course, in that case I can't imagine anyone would bother to DDOS your file upload; they would just look for a more appealing attack vector that impacts your main systems.
There are two big risks when allowing large uploads.
The first is simply that the uploaded files may fill the disk space on the server. Even with clever use of quotas or mounts, this is likely to lead to other uploads failing. In a less strictly configured system, it may lead to other applications failing, or even to failures in the operating system. A particular problem is ensuring that uploaded files are eventually deleted in all situations.
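One cheap mitigation for this first risk is to refuse uploads when free space runs low. A minimal sketch (the 5 GiB margin and the helper name are my own, not from any particular framework):

```python
import shutil

MIN_FREE_BYTES = 5 * 1024**3  # hypothetical safety margin: keep 5 GiB free

def upload_allowed(upload_dir: str, expected_size: int) -> bool:
    """Return False if accepting `expected_size` bytes would eat the margin."""
    free = shutil.disk_usage(upload_dir).free
    return free - expected_size >= MIN_FREE_BYTES
```

This only bounds honest clients that declare a Content-Length; it should complement, not replace, a hard size limit enforced while reading the body.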
The second risk, and one that might be more attractive to an attacker, is that a large upload may create a buffer overflow in the code that processes the file. At its simplest, such an overflow might break the code that checks that the file is "legitimate", allowing an illegitimate upload to end up as an apparently legitimate download. However, buffer overflows are also a potential source of privilege escalation, allowing attackers to escape the web server and perform operations directly in the operating system.
So, at a minimum, you should consider:
- what happens if file uploads fill the disk space?
- does your file processing (e.g. validation) read the whole file into memory, or does it process it in smaller chunks? Be especially vigilant if you use a third-party parser (e.g. for XML)
- how does your code handle an unexpected failure in the processing of the file? e.g. you expect validation to return INVALID, but it segfaults instead. Can this lead to an upload being interpreted as a valid file?
- will the large file be removed from everywhere it exists on disk, even if an error occurs?
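The checklist above can be sketched in code. This is one possible shape, not a complete implementation: `validate_chunk` is a hypothetical callback, the size cap is arbitrary, and any exception is treated as a failed upload rather than a valid file:

```python
import os
import tempfile

CHUNK_SIZE = 64 * 1024          # process in small chunks, never whole-file
MAX_UPLOAD = 10 * 1024 * 1024   # hypothetical 10 MB cap

def handle_upload(stream, validate_chunk):
    """Spool an upload to a temp file, validating chunk by chunk.

    `validate_chunk` is a hypothetical per-chunk check; any exception it
    raises counts as INVALID, never as success.
    """
    fd, path = tempfile.mkstemp()
    total = 0
    try:
        with os.fdopen(fd, "wb") as tmp:
            while chunk := stream.read(CHUNK_SIZE):
                total += len(chunk)
                if total > MAX_UPLOAD:
                    raise ValueError("upload exceeds size limit")
                if not validate_chunk(chunk):
                    raise ValueError("invalid content")
                tmp.write(chunk)
        return path  # caller takes ownership of the validated file
    except Exception:
        os.unlink(path)  # remove the partial file on every failure path
        raise
```

The key properties are that memory use is bounded by the chunk size regardless of upload size, and that the temp file is unlinked on every error path, so a crash in validation cannot leave an unvalidated file behind.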