Whitelisting DOM elements to defeat XSS

JavaScript is a client-side language.

In my professional opinion, trusting ANY client-side implementation for added security is a waste of time, and the implementation itself would be a gross waste of resources.

Your time and money would be better spent implementing proper input sanitization and validation in the server-side environment.
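To make that concrete, here is a minimal server-side sketch in Node.js. The function names (escapeHtml, renderComment) are just placeholders, not a specific library; the point is simply that untrusted input is validated and encoded before it ever reaches the browser:

    // Minimal sketch: validate, then HTML-encode untrusted input on the
    // server so it is inert by the time it reaches the browser.
    function escapeHtml(value) {
      return String(value)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
    }

    function renderComment(untrustedComment) {
      // Validate first: reject anything outside the expected shape.
      if (typeof untrustedComment !== 'string' || untrustedComment.length > 1000) {
        throw new Error('invalid comment');
      }
      // Encode on output so markup in the comment is displayed, not executed.
      return '<p>' + escapeHtml(untrustedComment) + '</p>';
    }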


Have you seen the work of Mozilla Security's Content Security Policy (summary)?

This is the specification.

Content Security Policy is intended to help web designers or server administrators specify how content interacts on their web sites. It helps mitigate and detect types of attacks such as XSS and data injection. CSP is not intended to be a main line of defense, but rather one of the many layers of security that can be employed to help secure a web site.

I added emphasis to the clause talking about not relying on CSP too much.

Here are some policy examples:

Sample Policy Definitions

Example 1: Site wants all content to come from its own domain:

X-Content-Security-Policy: allow 'self'

Example 2: Auction site wants to allow images from anywhere, plugin content from a list of trusted media providers (including a content distribution network), and scripts only from its server hosting sanitized JavaScript:

X-Content-Security-Policy: allow 'self'; img-src *; object-src media1.com media2.com *.cdn.com; script-src trustedscripts.example.com

Example 3: Server administrators want to deny all third-party scripts for the site, and a given project group also wants to disallow media from other sites (header provided by sysadmins and header provided by project group are both present):

X-Content-Security-Policy: allow *; script-src 'self'
X-Content-Security-Policy: allow *; script-src 'self'; media-src 'self';
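For reference, a policy like Example 1 is just a response header, so attaching it server-side is a one-liner. A minimal sketch using Node's built-in http module is below; note that current browsers use the standardized Content-Security-Policy header name and the default-src directive rather than the experimental X-Content-Security-Policy: allow syntax quoted above:

    const http = require('http');

    http.createServer((req, res) => {
      // Modern equivalent of Example 1 ("allow 'self'"): only load
      // content from this site's own origin.
      res.setHeader('Content-Security-Policy', "default-src 'self'");
      res.setHeader('Content-Type', 'text/html');
      res.end('<html><body>Hello</body></html>');
    }).listen(8080);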

It seems like setting the security policy in the headers, while not impregnable, makes an attack a lot harder than declaring the policy in-line in the DOM, where it could be mutated.
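For comparison, CSP can also be declared in-page with a meta tag. That is the "in-line in the DOM" variant: it only takes effect once the parser reaches it, and it lives inside content an attacker may be able to influence, which is why the header is preferred:

    <!-- In-page delivery: weaker than the HTTP header, because the policy
         sits inside the document itself. -->
    <meta http-equiv="Content-Security-Policy" content="default-src 'self'">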

I think your idea/question is a good one to think about, but as you start making it more and more comprehensive, there comes a point where it is too expensive to do right.

I had a similar idea when I realized how annoying it is that all resources must be served over SSL if the page is supposed to be secured by SSL. My first thought was to have a checksum for each resource that is allowed to load on the SSL page (loading them in the clear would make them easier to cache). Of course there are a host of security issues with doing that, and eventually I came to the conclusion that any performance gain is quickly lost to the security issues and complexity. I can see how a DOM whitelist could quickly suffer in the same way.
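For what it's worth, the per-resource checksum idea corresponds fairly closely to what the integrity attribute (Subresource Integrity) does in current browsers, although unlike the original idea it does not let you serve the files in the clear on an SSL page, since mixed-content rules still apply. The hash below is only a placeholder, not a real digest:

    <!-- The browser fetches the script and refuses to execute it if its
         digest does not match the declared value. The hash shown here is
         a placeholder, not a real digest. -->
    <script src="https://cdn.example.com/library.js"
            integrity="sha384-BASE64-DIGEST-OF-THE-EXPECTED-FILE"
            crossorigin="anonymous"></script>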

One final point: if you have enough knowledge to whitelist on the browser side, why can't you run the whitelist on the data you are serving before sending to the browser?
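To make that concrete, a toy version of a server-side whitelist might look like the following. In practice you would reach for a proper HTML sanitizer library rather than a regular expression, so treat this purely as an illustration of the idea:

    // Illustrative only: keep a few harmless tags, strip their attributes,
    // and drop every other tag before the markup is sent to the browser.
    const ALLOWED_TAGS = new Set(['b', 'i', 'em', 'strong', 'p', 'br']);

    function whitelistHtml(untrusted) {
      return untrusted.replace(/<\s*(\/?)\s*([a-z0-9]+)[^>]*>/gi,
        (match, slash, name) =>
          ALLOWED_TAGS.has(name.toLowerCase())
            ? '<' + slash + name.toLowerCase() + '>'
            : '');
    }

    // whitelistHtml('<b onmouseover="alert(1)">hi</b>')  ->  '<b>hi</b>'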


It seems like an interesting idea to me, but since you're asking about problems with it, I'll respond with some of the obvious ones. I don't see how this could be applied globally to most sites, as most seem to need many of the dangerous JavaScript features available for core functionality, e.g. onmouseover, href="javascript:alert(1)", etc. For this to be useful as an alternative to Caja, JSReg, or JSandbox, it may need to be applicable at any level in the DOM: for example, if I just wanted to protect my <div name="untrusted"> content.
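For what it's worth, the closest widely available mechanism today for protecting a single region of untrusted content is probably an iframe with the sandbox attribute rather than a bare <div>. A rough sketch, purely to illustrate the scoped-protection idea:

    <!-- With an empty sandbox attribute, script execution inside the
         frame is disabled, so inline handlers and javascript: URLs in
         the untrusted fragment never run. -->
    <iframe sandbox
            srcdoc="<b onmouseover='alert(1)'>handler never runs</b>">
    </iframe>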