Pen test results for a web application include a file from a forbidden directory that is not even used or referenced
Brute force scanners
Many automated scanners get around disabled directory listings by "brute-force" searching for files. This means they will check for additional files with names similar to files that do exist (e.g. `filename.js1`), as well as for files that aren't referenced at all (e.g. `secret.txt`). If you happen to have a file whose name is on the brute-forced list and it sits in an accessible directory, it will be found regardless of whether or not directory listing is enabled.
It's worth pointing out that hackers do this same thing, so this is a real issue. In general, if something is in a publicly accessible directory, then you should assume it will be found. So if you don't want it to be public then you need to keep it out of public directories - disabling directory listing provides very little security.
tl;dr: If something is hosted by your website but doesn't have a reason to be there, then it is a liability. Kill it with prejudice.
There are many tools available which brute-force filenames. Some of these are more intelligent than others.
For instance, a "dumb" tool may just have a word list containing probable names for files and directories, and request each one in turn.
A more intelligent tool may look at the files it already knows about (e.g. by crawling the application) and try to find similarly-named files. In your case, there was a file named `filename.js`, so the tool likely tried to mangle that name, as TripeHound pointed out in a comment.
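That name-mangling step can be sketched roughly as follows. The suffix list here is an illustrative guess at what such a tool might try (real scanners ship far larger lists), and `mangled_candidates` is a hypothetical name, not any particular tool's API:

```python
def mangled_candidates(known_name: str) -> list[str]:
    """Guess likely leftover/backup variants of a filename a crawler found.

    The suffixes below are illustrative examples of common backup and
    editor-artifact naming patterns, not an exhaustive list.
    """
    suffixes = [".bak", ".old", ".orig", ".save", "~", "1", ".swp", ".zip"]
    candidates = [known_name + s for s in suffixes]

    # Also try replacing the extension entirely (filename.js -> filename.bak).
    stem, dot, _ext = known_name.rpartition(".")
    if dot:
        candidates += [stem + s for s in (".bak", ".old", ".txt")]
    return candidates

print(mangled_candidates("filename.js"))
```

A scanner would then request each candidate URL and report any that return a 200 response, which is how an unreferenced `filename.js1` gets discovered.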
Why are these files a problem?
One might be tempted to think that an unreferenced file is "safe", because it's not a part of the application. However, the file is still accessible, and depending on the contents of the file, this may allow an attacker to do various things:
- Unreferenced files may be archives left over from deployment that still contain source code, thus allowing an attacker to gain access to that source code
- Unreferenced files may contain credentials or other relevant configuration data
In general, it's best to avoid having unreferenced files in your webroot. As the name implies, they are not used by the application and thus are only a source of problems.
The real problem here is that you have a deployment / production environment that is not controlled (and thus replicable) through an automated source control and deployment system.
This means that if you find some new file in your system, you don't know whether it's a backdoor dropped by a rootkit or some innocuous renamed file your colleague left behind.
In general, a best security practice is to only ever have files on a server that are put there by an automated script that clones some kind of build artifacts, and to have that automated process also delete files that should no longer be there. Then you can run audits for "are the files in production what the build system says they should be?"
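Such an audit can be sketched in a few lines, assuming the build system emits a manifest of relative paths it deployed (`audit` and the manifest format are assumptions for illustration, not a standard tool):

```python
import os


def audit(webroot: str, manifest: set[str]) -> tuple[set[str], set[str]]:
    """Compare files on disk against the build manifest.

    Returns (unexpected, missing): paths present under webroot but not
    in the manifest, and manifest entries not found on disk.
    """
    deployed = set()
    for dirpath, _dirs, files in os.walk(webroot):
        for name in files:
            rel = os.path.relpath(os.path.join(dirpath, name), webroot)
            deployed.add(rel.replace(os.sep, "/"))
    return deployed - manifest, manifest - deployed
```

Anything in the "unexpected" set is exactly the kind of unreferenced file the pen test flagged, and is a candidate for deletion or investigation.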
And if you think that "bad deployment practices can't possibly be a life threatening problem for my business," then I invite you to google "Knight Capital Group."