Secure big, old ecommerce website from XSS?

I propose the following four-step program, where you first pick the low-hanging fruit to get some minimum of protection while you work on the bigger problems.

1. Activate client side filtering

1.1 Set the X-XSS-Protection header

Setting the following HTTP response header will turn on the browser's built-in XSS protection:

X-XSS-Protection: 1; mode=block

This is by no means waterproof, and it only helps against reflected XSS, but it's something. Some old versions of IE (surprise, surprise) have a buggy filter that might actually make things worse, so you might want to send a different value for those user agents.
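As a sketch of that per-user-agent handling (in Python for illustration, since the exact set of buggy IE versions is an assumption here, not something from this answer):

```python
# Illustrative sketch: choose the X-XSS-Protection header value per request,
# disabling the filter for old IE versions whose implementation was buggy.
# Which versions to exclude is an assumption; verify against your own traffic.
import re

def xss_protection_header(user_agent: str) -> dict:
    # Old IE (here: IE 8 and earlier, matched loosely) had a filter with
    # known problems; send "0" to turn it off for those browsers.
    if re.search(r"MSIE [1-8]\.", user_agent):
        return {"X-XSS-Protection": "0"}
    return {"X-XSS-Protection": "1; mode=block"}
```

You would call this from whatever layer builds your response headers and merge the result in.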

1.2 Set a content security policy

If you do not use inline JavaScript in your app, a CSP can help a lot. Setting script-src 'self' will (a) limit script tags to only include scripts from your own domain, and (b) disable inline scripts. So even if an attacker could inject <img src=x onerror="alert('XSS')"> the browser will not execute the script. You will have to tailor the header's value to your own needs, but the linked MDN resource should help you with that.
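As a hedged illustration, a policy for a site that serves all of its own scripts might look like the following (the exact directives are assumptions — tailor them to what your pages actually load):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
```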

But again, this is not waterproof. It does nothing to help users with a browser that doesn't implement CSP (see here). And if your source is littered with inline scripts you will have to choose between cleaning that up or abstaining from using CSP.

2. Activate server side filtering

John Wu has a good suggestion in comments:

Also, since this is .NET, a very quick and easy change can turn on ASP.NET Request Validation which can catch a variety of XSS attacks (but not 100% of them).

If you are working in another language, you might instead consider using a web application firewall (as suggested by xehpuk). How easy a WAF is to configure for you depends on what application you are protecting. If you are doing things that make filtering inherently hard (e.g. passing HTML in GET or POST parameters) it might not be worth the effort to configure one.
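To make the idea concrete, here is a rough Python analogue of the kind of check ASP.NET Request Validation performs (the real implementation has more cases; this is a sketch of the idea, not a port):

```python
# Sketch: flag request values that look like markup, in the spirit of
# ASP.NET Request Validation — "<" followed by a letter, "/", "!" or "?",
# or an HTML entity prefix "&#". A matching request would be rejected.
import re

DANGEROUS = re.compile(r"<[a-zA-Z/!?]|&#")

def is_potentially_dangerous(value: str) -> bool:
    return DANGEROUS.search(value) is not None
```

As the answer notes, this catches a variety of XSS attacks but nowhere near 100% of them.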

But again, while a WAF might help, it is still not waterproof.

3. Scan and fix

Use an automated XSS scanner to find existing vulnerabilities and fix these. As a complement you can run your own manual tests. This will help you focus your precious time on fixing easy-to-find vulnerabilities, giving you the most bang for the buck in the early phase.

But for the third time, this is not waterproof. No matter how much you scan and test, you will miss something. So, unfortunately, there is a point #4 to this list...

4. Clean up your source code

Yes, you will "have to read every single page". Go through the source and rewrite all code that outputs data to use some kind of framework or template library that handles XSS issues in a sane way. (You should probably pick a framework and start using it already for the fixes you do under #3.)
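The core of that rewrite looks like this (Python's stdlib html.escape used here as a stand-in for whatever escaping your chosen framework provides):

```python
# The fix applied page by page in step 4: route every piece of dynamic
# output through an escaping function instead of concatenating it raw.
from html import escape

def render_comment(author: str, body: str) -> str:
    # Both values are user-controlled, so both are escaped on output.
    return "<p><b>{}</b>: {}</p>".format(escape(author), escape(body))
```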

This will take a lot of time, and it will be a pain in the a**, but it needs to be done. Look at it from the bright side - you have the opportunity to do some additional refactoring while you are at it. In the end you will not only have solved your security problem - you will have a better code base as well.


The short of it is that there is no easy solution. I have a suggestion for an "easy" solution at the bottom, but bear in mind that it has many caveats, which I will discuss here. First though, let's start from the big picture and work our way down.

In my experience (having worked with many legacy systems) "security hasn't been a priority for a long time" means that you likely have any number of security issues hiding in your system. XSS is just one issue, I'm sure. So unless you know someone is already on top of these I would concern myself with:

  1. Password security. I doubt you are hashing according to modern security standards, and this is a critical problem which is otherwise easily overlooked.
  2. Credit card security. I hope you are PCI compliant and aren't storing credit cards on site. I've seen plenty of legacy systems that store credit cards even though you aren't supposed to.
  3. SQLi is probably a real problem, and is especially dangerous if you store passwords insecurely or credit cards in your database.
  4. XSS vulnerabilities!

The numbers aren't meant to imply priority: they are all top priorities.
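For point 1, "hashing according to modern security standards" means a slow, salted key-derivation function rather than a plain hash. A minimal sketch using the Python stdlib (the scrypt parameters shown are common baseline values, not a recommendation from this answer — tune them, or use a dedicated argon2/bcrypt library, for production):

```python
# Sketch: salted password hashing with a memory-hard KDF (scrypt).
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed rainbow tables.
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    # Constant-time comparison avoids timing side channels.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```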

The starting point

The most important thing is to fix this "institutionally". This is going to be the hardest to do, but is also the most critical. If you spend a few weeks fixing up all your XSS vulnerabilities, but security continues to be a bottom-tier priority, the problem is just going to come back the next time a developer outputs data unfiltered to the browser.

The best protection against XSS vulnerabilities is having developers who take security seriously and using a templating engine that properly handles XSS escaping for you. The key thing to remember is that with XSS you have to filter on output, not input. It's easy to see this as a one-way problem: "clean the user data when it comes in, and then you're good." But that doesn't protect against all attack vectors, especially XSS added via SQLi. In general, if XSS protection is something your developers have to remember to do every time, it will end up being forgotten. That's why your best bet is to have XSS protection built into your system. This is where a templating engine comes in. Any competent templating system automatically applies XSS filtering by default, and must specifically be told when it should not filter.
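The "escape by default, opt out explicitly" behaviour described above can be sketched in a few lines (the Safe marker class is a made-up illustration of the pattern real engines like Jinja2 implement, not any particular library's API):

```python
# Sketch of escape-by-default templating: values are escaped on output
# unless explicitly wrapped in a "trusted markup" marker type.
from html import escape

class Safe(str):
    """Marker: the value is already trusted markup, skip escaping."""

def render(template: str, **values) -> str:
    return template.format(**{
        key: val if isinstance(val, Safe) else escape(str(val))
        for key, val in values.items()
    })
```

The point is the default direction: a developer who forgets to think about XSS gets the safe behaviour, and has to do extra work to get the unsafe one.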

I'm sure that refactoring your system to include a templating engine specifically to take care of XSS vulnerabilities is probably not going to happen. But it is important to understand that if you don't do something to fix the institutional problem that allowed this to happen in the first place, the problem will just come back, and the weeks it takes you to fix it now will be wasted.

First practical steps

@Anders has some great starting points in his answer. A CSP and the XSS header both work the same way: by telling the browser to enable XSS protection client side. Keep in mind (as @Anders mentioned) that these are browser-dependent and, especially for older browsers, may not be supported at all. In particular, IE's support for CSP is very minimal, even all the way up to IE11 (https://stackoverflow.com/questions/42937146/content-security-policy-does-not-work-in-internet-explorer-11).

The result is that while these steps are good starting points, you definitely cannot rely on them as your primary security: you still have to fix the problem on your end. Getting a good automated scanning tool is definitely the best way to get started. It will get you some immediate action items.

A partial solution

Another option you may have is to put XSS filtering across the board on your application. I don't normally recommend this, but I think the best bet for you is a multi-tiered response. The idea here is that you add some code to your application's bootstrapping process that checks all data coming in from the client (URL data, POST data, cookies, request headers, etc.). You then perform some filtering to detect common XSS payloads and, if any are found, reject the request altogether.

The problem with blacklist filtering is that it can be very unreliable. If you read up on the OWASP XSS filter evasion cheat sheet you'll get a good idea of how difficult it can be to reliably filter out XSS payloads. However, it is a quick way to get some protection up on every request, so it may be worthwhile in your case. One important issue to keep in mind though is that this will generally stop WYSIWYG editors from working. That may or may not be a problem for you.
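A minimal version of that bootstrap-level check might look like this (the pattern list is deliberately small and, as the cheat sheet shows, easy to evade — this is a stopgap sketch, not a fix):

```python
# Sketch: scan every inbound value (query string, form fields, cookies,
# headers) for a small blacklist of common XSS payload patterns, and
# reject the whole request if any source matches.
import re

XSS_PATTERN = re.compile(
    r"<\s*script|javascript\s*:|on\w+\s*=|<\s*iframe",
    re.IGNORECASE,
)

def request_is_clean(*sources: dict) -> bool:
    # Each source is a dict of name -> value taken from the request.
    return not any(
        XSS_PATTERN.search(value)
        for source in sources
        for value in source.values()
    )
```

Note the false-positive risk inherent to blacklists: an innocent value like "monday=x" trips the on\w+= pattern, which is exactly the unreliability discussed above.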