This already shows a conceptual limitation of browser-based XSS filters: they cannot detect stored XSS vulnerabilities, that is, cases where the XSS payload does not appear in any HTTP parameter but is stored in the web application itself. A browser of course cannot differentiate between legitimate <script> tags and malicious ones, because they look exactly the same. For reflected XSS vulnerabilities, however, which are probably more common, this approach works just fine.
How XSS filters handle detected XSS payloads differs from browser to browser. For example, the IE8 filter tries to sanitize the payload, so the site can still be viewed in case the detection was a false positive. A good example of a false positive was the Google search for “what’s<script>”, which in its simplest form looks like “http://www.google.com/search?hl=en&q=what’s%3Cscript%3E”. Of course the search query is echoed back to the user and looks like an XSS attempt to a naive filter, and removing the script tags might break the web site.
Trying to sanitize the detected XSS payload modifies the site in a way the developer did not expect and might break even more things. The IE8 XSS filter has also become famous because in some cases its rewriting created XSS vulnerabilities on pages that previously had none. *
But the Chrome filter is not perfect either. It has been demonstrated * that the filter is easy to bypass if two consecutive HTTP parameters are used to deliver the XSS payload. For example, the first parameter could be
and the second one contains
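The idea behind such split-payload bypasses can be sketched as follows. The payload below is a hypothetical illustration, not the exact one from the cited demonstration, and it assumes a naive filter that inspects each reflected parameter in isolation: neither half looks like a complete script element on its own, but concatenated in the response they form one.

```python
import re

# Hypothetical split payload: the script element is divided across two
# parameters, glued together by a JavaScript comment.
param1 = "<script>/*"            # opens the script element, starts a comment
param2 = "*/alert(1)</script>"   # ends the comment, runs code, closes the tag

# A vulnerable page that echoes both parameters back to back:
page = "<html><body>" + param1 + param2 + "</body></html>"

def looks_like_complete_script(value):
    """Naive per-parameter check: does the value contain a whole script block?"""
    return re.search(r"<script>.*</script>", value, re.S) is not None

print(looks_like_complete_script(param1))  # False - only an opening tag
print(looks_like_complete_script(param2))  # False - only a closing tag
print(looks_like_complete_script(page))    # True  - combined, a full script element
```

The real Chrome filter is of course more sophisticated than this toy check, but the sketch captures why matching request data against the response fragment by fragment can miss a payload assembled from several parameters.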
Firefox currently contains no XSS filter, but the add-on NoScript provides one. It works similarly to the filters of the other browsers. Because it is an add-on, it is not as tightly integrated and does not have all the possibilities the other filters have. * *
The XSS filters can be controlled by web applications through an HTTP header: “X-XSS-Protection” (or, depending on the browser, with the “X-” prefix omitted). There are three modes which can be requested by the web application:
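As commonly documented for the browser implementations, the three header values look like this (the parenthetical notes summarize the behavior):

```
X-XSS-Protection: 0               (disable the filter for this response)
X-XSS-Protection: 1               (enable the filter; sanitize detected payloads)
X-XSS-Protection: 1; mode=block   (enable the filter; refuse to render the page on detection)
```

The “mode=block” variant avoids the sanitization problems described above: instead of rewriting the page, the browser simply does not render it at all.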