w3c / ServiceWorker

Service Workers

Home Page: https://w3c.github.io/ServiceWorker/

Is it possible to serve service workers from CDN/remote origin?

samertm opened this issue

Hey all,

I want to serve a service worker from a CDN, but I can't figure out how to get that to work.

I expected this to work:

// The header `Service-Worker-Allowed: https://www.example.com/` is set on the response for sw.js
var swURL = 'https://cdn.example.com/sw.js';
var options = {scope: 'https://www.example.com/'};
navigator.serviceWorker.register(swURL, options);

But it throws the following error in Chrome:

Uncaught (in promise) DOMException: Failed to register a ServiceWorker:
The origin of the provided scriptURL ('https://cdn.example.com/sw.js')
does not match the current origin ('https://www.example.com').

With the current wording, it seems like Service-Worker-Allowed allows loading the service worker from a remote origin.

Service-Worker-Allowed
Indicates the user agent will override the path restriction, which limits the maximum allowed scope url that the script can control, to the given value.
The value is a URL. If a relative URL is given, it is parsed against the script’s URL.

https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#service-worker-allowed

However, all of the examples and discussion only use Service-Worker-Allowed with relative URLs.
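
For reference, the pattern those examples show is a same-origin script whose allowed scope is widened, roughly like this (the /assets/ path is just an illustration):

// Response headers for GET https://www.example.com/assets/sw.js (illustrative):
//   Service-Worker-Allowed: /
// The header lets a script served from under /assets/ claim a scope above its own path:
navigator.serviceWorker.register('/assets/sw.js', {scope: '/'});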

Digging through the Chromium code, I'm pretty sure the error is thrown before the request is made to the URL, which means it happens before we can check the header on the service worker response:

void ServiceWorkerContainer::registerServiceWorkerImpl(/* ... */)
{
    // ...
    if (!documentOrigin->canRequest(scriptURL)) {
        RefPtr<SecurityOrigin> scriptOrigin = SecurityOrigin::create(scriptURL);
        callbacks->onError(WebServiceWorkerError(WebServiceWorkerError::ErrorTypeSecurity, String("Failed to register a ServiceWorker: The origin of the provided scriptURL ('" + scriptOrigin->toString() + "') does not match the current origin ('" + documentOrigin->toString() + "').")));
        return;
    }
    // ...
    m_provider->registerServiceWorker(patternURL, scriptURL, callbacks.release());
}

https://cs.chromium.org/chromium/src/third_party/WebKit/Source/modules/serviceworkers/ServiceWorkerContainer.cpp?q=%22does+not+match+the+current+origin%22&sq=package:chromium&dr=C&l=224

You can follow the rabbit hole down canRequest, but I'm pretty sure nothing in that function lets you allow remote scripts dynamically (e.g. via a header).

Questions:

  • Is it possible to serve a service worker from a remote origin through some other means?
  • The spec is currently ambiguous about remote origins -- it doesn't say they aren't allowed, but it doesn't say they are either. Should Service-Worker-Allowed let you serve from a remote origin?

Thanks for your time! BTW, you all have done excellent work with the spec.

I don't think this is actually ambiguous in the spec. In step 2 of the Register algorithm we check to make sure that the origin of the (resolved) script URL is the same as the origin of the job's referrer (which more or less is the document that called register). And since we also compare the scope to be same origin with that, that means that the document registering the service worker, the scope of said service worker, and the main script of the service worker all have to be same origin.
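
Concretely, from a page on https://www.example.com (the paths here are just for illustration):

// Accepted: the page, the script URL, and the scope all resolve to
// the https://www.example.com origin.
navigator.serviceWorker.register('/sw.js', {scope: '/'});

// Rejected in step 2 of Register: the script's origin (https://cdn.example.com)
// differs from the origin of the job's referrer (https://www.example.com).
navigator.serviceWorker.register('https://cdn.example.com/sw.js', {scope: '/'});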

You could have your main script (on the same origin as your website) be nothing but an importScripts('https://cdn.example.com/...') though. Currently that has the downside that changes in imported scripts are not taken into account when deciding if a new version of a service worker should be downloaded, but we're changing/fixing that to treat imported scripts the same as the main script for update checks (#839).
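
A rough sketch of that workaround, with a hypothetical CDN path:

// sw.js, served from https://www.example.com/sw.js, same origin as the pages it controls.
// The stub contains no logic of its own; it just pulls the real implementation in from the CDN.
importScripts('https://cdn.example.com/sw-impl.js');

// On the page, register the same-origin stub as usual:
navigator.serviceWorker.register('/sw.js');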

Are there security reasons for not being able to load a service worker from a remote origin?

Yes, it would give a remote origin control over your origin.

Yeah, we're not doing this as it turns a small XSS into a huge long-term issue.

If we allowed something like this, the controlled origin would need to opt into it somehow, maybe via something like CSP. It seems like importScripts(crossOriginURL) already provides this opt-in in a much simpler way.

Just to play devil's advocate, what is the material difference between a cross-origin top level script and a top level script with a single cross-origin importScripts()?

You can change the latter but not the former.

Is there a way to bypass this error locally by using some sort of "unsafely-treat-insecure-origin*" flag?

No, the main service worker resource must be same-origin.

Are there any plans to allow this in the future? The security could be retained by requiring a hash-code for cross-origin serviceworker registrations.

I don't see how that solves the security issues.

If the browser only accepts a serviceworker with the exact same hash-code I provided, there is no xss-risk imo?
To clarify, I mean something like this:
navigator.serviceWorker.register('https://remote.domain/serviceworker.js','sha256:364135...d')

In my case I have a domain where I store my (almost) static resources, which are shared/used by several other domains. It would be very beneficial to be able to also store the service worker on that domain.
(I do realize this puts some limitations on what the service worker can do, but for me that is still outweighed by avoiding the hassle of propagating and updating identical resources on each individual domain.)

The XSS attacker can compute the hash of the resource hosted off-domain, so the hash gives no extra defense.

He could indeed compute the hash, but I don't see how that could be an attack vector. It's not that the hash is supposed to be hidden.

Can you explain what he could do with the hash and how that could possibly tamper with the requested resource?

Sorry, my comment was rather unhelpful.

An attacker using an XSS exploit on example.com to inject script that registers a service worker hosted at evil.com can just as easily include a hash in that registration call. Therefore the hash adds no mitigation.

Perhaps you're not understanding the core XSS problem. It presumes that example.com has an exploit which allows someone to alter the content served by example.com in such a way that arbitrary script is executed. Combined with the ability to register a service worker, this would allow a hacker to cause a service worker to be registered that then intercepts all future loads of example.com content. If the service worker is loaded from evil.com then evil.com now has full control of all content loaded by pages from example.com.

The same-origin restriction mitigates this by only allowing scripts hosted at example.com to be registered as a service worker; the worst the XSS exploiter can do is register content under the control of example.com - and badness is possible by abusing existing content within the limits of other mitigations - but in the worst case example.com can replace the content to get back control.

Once this threat (XSS leading to persistent intercept) and mitigation (same-origin requirement) is understood, it should be clearer why the hash is an insufficient replacement mitigation.

Thank you for the clear explanation, I now see the real threat.

This seems to be in violation of the purpose of the worker-src directive of Content-Security-Policy.

Aren't the two incompatible as it is right now? (Incompatible in that if the same-origin policy takes precedence over worker-src, then worker-src serves no purpose.)

That feedback is better addressed at https://github.com/w3c/webappsec-csp, but I'll note that worker-src 'none' (or however you specify it) is a valid use of that directive as well, or specifying explicit hashes, etc.
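
For example, a page could still ship a restrictive policy along these lines (either one; the exact values are illustrative):

Content-Security-Policy: worker-src 'none'
Content-Security-Policy: worker-src 'self' blob: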

@annevk I might be misunderstanding, but I believe the trouble really lies with the ServiceWorker spec, in that it mostly makes worker-src superfluous (it has no effect). The better solution would be to have worker-src default to 'self', which matches the current behavior of the ServiceWorker spec, but also make it overridable, similar to how CORS headers can allow certain behaviors that are disabled by default.

Currently, the worker-src can't be used to allow behavior – it can only restrict – because of the way the ServiceWorker spec is written. This goes against expected behavior when comparing to other CSP directives and CORS.

See: w3c/webappsec-csp#130 (comment)

There's various reasons why cross-origin (service) workers aren't workable as already explained earlier.

And no, CSP isn't meant to enable things, it only allows for adding further restrictions.

Sorry for my misunderstanding of CSP – I saw some behavior in Chrome that I interpreted as CSP being able to extend what was allowed.

Then I have just one last pushback. I can agree that service workers need heavy restrictions, but what is the rationale for a standard "background" web worker to be same-origin?

Similar to documents, we use the URL of the worker to determine the service worker to use (especially important for shared workers as you might imagine). That wouldn't work cross-origin. They also create their own global object and such, and it would be slightly weird if the origin of that wasn't obtained from the request URL, but from the entity creating the worker.

Thank you! – also found the rationale over here: whatwg/html#3109 (comment)

I still think that non-shared, non-service workers should have different behavior. The same-origin restriction will prevent adoption of workers for reasonable use-cases e.g. PDF.js hosted on a CDN 🙁

You can use a data: URL if I remember correctly, which perhaps suggests we could take another look at this at some point. I'd like for all the existing things to be implemented a little better before digging into that again though.
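
For the PDF.js-style case mentioned above, the closely related blob: URL workaround for a plain dedicated worker looks roughly like this (the CDN URL is hypothetical, and none of this applies to service worker registration):

// Wrap the cross-origin script in a same-origin blob: URL and start a dedicated
// worker from it; the worker then pulls in the CDN-hosted code via importScripts.
var blob = new Blob(
    ["importScripts('https://cdn.example.com/pdf.worker.js');"],
    {type: 'application/javascript'});
var worker = new Worker(URL.createObjectURL(blob));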

I also believe the implementation has to be changed to allow service workers to be hosted on CDNs.
It's true there are some serious security issues with the current spec if remote service workers are allowed, so those have to be solved first.

(Quoting the earlier explanation of the XSS threat and the same-origin mitigation.)

So, outside the presumption of an already existing vulnerability, are you saying that the hash could provide some meaningful leverage for loading service worker files from different domains? @inexorabletash

The hash is not safe either.
For example: I use some third-party service to send push notifications to my users. If after some time I'm tired of this service and want to stop it, I can remove the service worker and they can no longer send push notifications from my domain.

If the file is hosted on their domain, there is nothing I can do to prevent them sending notifications to my users (the already subscribed ones) using my own domain.


Maybe you can put a service worker script on your domain, and let it fetch the CDN scripts and eval() them?

@jakearchibald: is it possible that Google caches my service worker if I am using AMP and amp-signed-exchange?

Hi @jakearchibald, I have a question. I am able to serve this sw.js file through an nginx server, but when I try to use importScripts() it throws the error "self.importScripts() is not a function". Any idea what the workaround for this is?

I'm happy to debug a running version of this.