Service Workers: an Introduction

Matt Gaunt

Rich offline experiences, periodic background syncs, push notifications—functionality that would normally require a native application—are coming to the web. Service workers provide the technical foundation that all these features rely on.

What is a service worker?

A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don't need a web page or user interaction. Today, they already include features like push notifications and background sync. In the future, service workers might support other things like periodic sync or geofencing. The core feature discussed in this tutorial is the ability to intercept and handle network requests, including programmatically managing a cache of responses.

The reason this is such an exciting API is that it allows you to support offline experiences, giving developers complete control over the experience.

Before service workers, there was another API that gave users an offline experience on the web, called AppCache. There are a number of issues with the AppCache API that service workers were designed to avoid.

Things to note about a service worker:

  • It's a JavaScript Worker, so it can't access the DOM directly. Instead, a service worker can communicate with the pages it controls by responding to messages sent via the postMessage interface, and those pages can manipulate the DOM if needed (see the sketch after this list).
  • A service worker is a programmable network proxy, allowing you to control how network requests from your page are handled.
  • It's terminated when not in use, and restarted when it's next needed, so you cannot rely on global state within a service worker's onfetch and onmessage handlers. If there is information that you need to persist and reuse across restarts, service workers do have access to the IndexedDB API.
  • Service workers make extensive use of promises, so if you're new to promises, then you should stop reading this and check out Promises, an introduction.
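
As a rough sketch of that postMessage channel (the message shapes here are invented for illustration, not part of any spec):

// In the page: message the controlling service worker and listen for replies.
if (navigator.serviceWorker.controller) {
  navigator.serviceWorker.controller.postMessage({ type: 'PING' });
}
navigator.serviceWorker.addEventListener('message', function(event) {
  console.log('Page received:', event.data);
});

// In the service worker: reply to messages from the pages it controls.
self.addEventListener('message', function(event) {
  if (event.data && event.data.type === 'PING') {
    // event.source is the client (page) that sent the message
    event.source.postMessage({ type: 'PONG' });
  }
});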

The service worker life cycle

A service worker has a lifecycle that is completely separate from your web page.

To install a service worker for your site, you need to register it, which you do in your page's JavaScript. Registering a service worker will cause the browser to start the service worker install step in the background.

Typically during the install step, you'll want to cache some static assets. If all the files are cached successfully, then the service worker becomes installed. If any of the files fail to download and cache, then the install step will fail and the service worker won't activate (i.e. won't be installed). If that happens, don't worry, it'll try again next time. But that means if it does install, you know you've got those static assets in the cache.

When installed, the activation step will follow and this is a great opportunity for handling any management of old caches, which we'll cover during the service worker update section.

After the activation step, the service worker will control all pages that fall under its scope, though the page that registered the service worker for the first time won't be controlled until it's loaded again. Once a service worker is in control, it will be in one of two states: either the service worker will be terminated to save memory, or it will handle fetch and message events that occur when a network request or message is made from your page.

Below is an overly simplified version of the service worker lifecycle on its first installation.

[Image: service worker lifecycle]

Prerequisites

Browser support

Browser options are growing. Service workers are supported by Chrome, Firefox and Opera. Microsoft Edge is now showing public support. Even Safari has dropped hints of future development. You can follow the progress of all the browsers at Jake Archibald's Is Serviceworker Ready? site.

You need HTTPS

During development you'll be able to use service worker through localhost, but to deploy it on a site you'll need to have HTTPS set up on your server.

Using service worker you can hijack connections, fabricate, and filter responses. Powerful stuff. While you would use these powers for good, a man-in-the-middle might not. To avoid this, you can only register service workers on pages served over HTTPS, so we know the service worker the browser receives hasn't been tampered with during its journey through the network.

GitHub Pages are served over HTTPS, so they're a great place to host demos.

If you want to add HTTPS to your server then you'll need to get a TLS certificate and set it up for your server. This varies depending on your setup, so check your server's documentation and be sure to check out Mozilla's SSL config generator for best practices.

Register a service worker

To install a service worker you need to kick start the process by registering it in your page. This tells the browser where your service worker JavaScript file lives.

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('/sw.js').then(function(registration) {
      // Registration was successful
      console.log('ServiceWorker registration successful with scope: ', registration.scope);
    }, function(err) {
      // registration failed :(
      console.log('ServiceWorker registration failed: ', err);
    });
  });
}

This code checks to see if the service worker API is available, and if it is, the service worker at /sw.js is registered once the page is loaded.

You can call register() every time a page loads without concern; the browser will figure out if the service worker is already registered or not and handle it accordingly.

One subtlety with the register() method is the location of the service worker file. You'll notice in this case that the service worker file is at the root of the domain. This means that the service worker's scope will be the entire origin. In other words, this service worker will receive fetch events for everything on this domain. If we register the service worker file at /example/sw.js, then the service worker would only see fetch events for pages whose URL starts with /example/ (i.e. /example/page1/, /example/page2/).

Now you can check that a service worker is enabled by going to chrome://inspect/#service-workers and looking for your site.

[Image: Inspect service workers]

When service worker was first being implemented, you could also view your service worker details through chrome://serviceworker-internals. This may still be useful, if for nothing more than learning about the life cycle of service workers, but don't be surprised if it gets replaced completely by chrome://inspect/#service-workers at a later date.

You may find it useful to test your service worker in an Incognito window so that you can close and reopen knowing that the previous service worker won't affect the new window. Any registrations and caches created from within an Incognito window will be cleared out once that window is closed.

Install a service worker

After a controlled page kicks off the registration process, let's shift to the point of view of the service worker script, which handles the install event.

For the most basic example, you need to define a callback for the install event and decide which files you want to cache.

self.addEventListener('install', function(event) {
  // Perform install steps
});

Inside of our install callback, we need to take the following steps:

  1. Open a cache.
  2. Cache our files.
  3. Confirm whether all the required assets are cached or not.

var CACHE_NAME = 'my-site-cache-v1';
var urlsToCache = [
  '/',
  '/styles/main.css',
  '/script/main.js'
];

self.addEventListener('install', function(event) {
  // Perform install steps
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});

Here you can see we call caches.open() with our desired cache name, after which we call cache.addAll() and pass in our array of files. This is a chain of promises (caches.open() and cache.addAll()). The event.waitUntil() method takes a promise and uses it to know how long installation takes, and whether it succeeded or not.

If all the files are successfully cached, then the service worker will be installed. If any of the files fail to download, then the install step will fail. This allows you to rely on having all the assets that you defined, but does mean you need to be careful with the list of files you decide to cache in the install step. Defining a long list of files will increase the chance that one file may fail to cache, leading to your service worker not getting installed.

This is just one example; you can perform other tasks in the install event or avoid setting an install event listener altogether.

Cache and return requests

Now that you've installed a service worker, you probably want to return one of your cached responses, right?

After a service worker is installed and the user navigates to a different page or refreshes, the service worker will begin to receive fetch events, an example of which is below.

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return response
        if (response) {
          return response;
        }
        return fetch(event.request);
      })
  );
});

Here we've defined our fetch event and within event.respondWith(), we pass in a promise from caches.match(). This method looks at the request and finds any cached results from any of the caches your service worker created.

If we have a matching response, we return the cached value, otherwise we return the result of a call to fetch, which will make a network request and return the data if anything can be retrieved from the network. This is a simple example and uses any cached assets we cached during the install step.

If we want to cache new requests cumulatively, we can do so by handling the response of the fetch request and then adding it to the cache, like below.

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return response
        if (response) {
          return response;
        }

        return fetch(event.request).then(
          function(response) {
            // Check if we received a valid response
            if (!response || response.status !== 200 || response.type !== 'basic') {
              return response;
            }

            // IMPORTANT: Clone the response. A response is a stream
            // and because we want the browser to consume the response
            // as well as the cache consuming the response, we need
            // to clone it so we have two streams.
            var responseToCache = response.clone();

            caches.open(CACHE_NAME)
              .then(function(cache) {
                cache.put(event.request, responseToCache);
              });

            return response;
          }
        );
      })
  );
});

What we are doing is this:

  1. Add a callback to .then() on the fetch request.
  2. Once we get a response, we perform the following checks:
    1. Ensure the response is valid.
    2. Check the status is 200 on the response.
    3. Make sure the response type is basic, which indicates that it's a request from our origin. This means that requests to third party assets aren't cached as well.
  3. If we pass the checks, we clone the response. The reason for this is that because the response is a Stream, the body can only be consumed once. Since we want to return the response for the browser to use, as well as pass it to the cache to use, we need to clone it so we can send one to the browser and one to the cache.

Update a service worker

There will be a point in time where your service worker will need updating. When that time comes, you'll need to follow these steps:

  1. Update your service worker JavaScript file. When the user navigates to your site, the browser tries to redownload the script file that defined the service worker in the background. If there is even a byte's difference in the service worker file compared to what it currently has, it considers it new.
  2. Your new service worker will be started and the install event will be fired.
  3. At this point the old service worker is still controlling the current pages so the new service worker will enter a waiting state.
  4. When the currently open pages of your site are closed, the old service worker will be killed and the new service worker will take control.
  5. Once your new service worker takes control, its activate event will be fired.

One common task that will occur in the activate callback is cache management. The reason you'll want to do this in the activate callback is that if you were to wipe out any old caches in the install step, the old service worker, which is still controlling all the current pages, would suddenly stop being able to serve files from those caches.

Let's say we have one cache called 'my-site-cache-v1', and we find that we want to split this out into one cache for pages and one cache for blog posts. This means in the install step we'd create two caches, 'pages-cache-v1' and 'blog-posts-cache-v1' and in the activate step we'd want to delete our older 'my-site-cache-v1'.

The following code would do this by looping through all of the caches in the service worker and deleting any caches that aren't defined in the cache whitelist.

self.addEventListener('activate', function(event) {

  var cacheWhitelist = ['pages-cache-v1', 'blog-posts-cache-v1'];

  event.waitUntil(
    caches.keys().then(function(cacheNames) {
      return Promise.all(
        cacheNames.map(function(cacheName) {
          if (cacheWhitelist.indexOf(cacheName) === -1) {
            return caches.delete(cacheName);
          }
        })
      );
    })
  );
});

Rough edges and gotchas

This stuff is really new. Here's a collection of issues that get in the way. Hopefully this section can be deleted soon, but for now these are worth being mindful of.

If installation fails, we're not so good at telling you about it

If a worker registers, but then doesn't appear in chrome://inspect/#service-workers or chrome://serviceworker-internals, it's likely failed to install due to an error being thrown, or a rejected promise being passed to event.waitUntil().

To work around this, go to chrome://serviceworker-internals and check "Open DevTools window and pause JavaScript execution on service worker startup for debugging", and put a debugger statement at the start of your install event. This, along with Pause on uncaught exceptions, should reveal the issue.
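
For example, a minimal install handler with a debugger statement might look like this (the cache name and URL list are placeholders):

self.addEventListener('install', function(event) {
  debugger; // with the DevTools option above checked, execution pauses here
  event.waitUntil(
    caches.open('my-site-cache-v1').then(function(cache) {
      return cache.addAll(['/']);
    })
  );
});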

The defaults of fetch()

No credentials by default

When you use fetch, by default, requests won't contain credentials such as cookies. If you want credentials, instead call:

fetch(url, {
  credentials: 'include'
})

This behaviour is on purpose, and is arguably better than XHR's more complex default of sending credentials if the URL is same-origin, but omitting them otherwise. Fetch's behaviour is more like other CORS requests, such as <img crossorigin>, which never sends cookies unless you opt-in with <img crossorigin="use-credentials">.

Non-CORS fail by default

By default, fetching a resource from a third party URL will fail if it doesn't support CORS. You can add a no-CORS option to the Request to overcome this, although this will cause an 'opaque' response, which means you won't be able to tell if the response was successful or not.

cache.addAll(urlsToPrefetch.map(function(urlToPrefetch) {
  return new Request(urlToPrefetch, { mode: 'no-cors' });
})).then(function() {
  console.log('All resources have been fetched and cached.');
});

Handling responsive images

The srcset attribute or the <picture> element will select the most appropriate image asset at run time and make a network request.

For service worker, if you wanted to cache an image during the install step, you have a few options:

  1. Install all the images that the <picture> element and the srcset attribute will request.
  2. Install a single low-res version of the image.
  3. Install a single high-res version of the image.

Realistically you should be picking option 2 or 3 since downloading all of the images would be a waste of storage space.

Let's assume you go for the low res version at install time and you want to try and retrieve higher res images from the network when the page is loaded, but if the high res images fail, fall back to the low res version. This is fine and dandy to do but there is one problem.

If we have the following two images:

Screen Density    Width    Height
1x                400      400
2x                800      800

In a srcset image, we'd have some markup like this:

<img src="image-src.png" srcset="image-src.png 1x, image-2x.png 2x" />

If we are on a 2x display, the browser will opt to download image-2x.png. If we are offline, you could .catch() this request and return image-src.png instead, if it's cached. However, the browser will expect an image that takes into account the extra pixels on a 2x screen, so the image will appear as 200x200 CSS pixels instead of 400x400 CSS pixels. The only way around this is to set a fixed height and width on the image.

<img src="image-src.png" srcset="image-src.png 1x, image-2x.png 2x"
 
style="width:400px; height: 400px;" />

For <picture> elements being used for art direction, this becomes considerably more difficult and will depend heavily on how your images are created and used, but you may be able to use a similar approach to srcset.

Learn more

There is a list of documentation on service worker being maintained at https://jakearchibald.github.io/isserviceworkerready/resources that you may find useful.
The Service Worker Lifecycle

Jake Archibald

The lifecycle of the service worker is its most complicated part. If you don't know what it's trying to do and what the benefits are, it can feel like it's fighting you. But once you know how it works, you can deliver seamless, unobtrusive updates to users, mixing the best of web and native patterns.

This is a deep dive, but the bullets at the start of each section cover most of what you need to know.

The intent

The intent of the lifecycle is to:

  • Make offline-first possible.
  • Allow a new service worker to get itself ready without disrupting the current one.
  • Ensure an in-scope page is controlled by the same service worker (or no service worker) throughout.
  • Ensure there's only one version of your site running at once.

That last one is pretty important. Without service workers, users can load one tab to your site, then later open another. This can result in two versions of your site running at the same time. Sometimes this is ok, but if you're dealing with storage you can easily end up with two tabs having very different opinions on how their shared storage should be managed. This can result in errors, or worse, data loss.

Caution: Users actively dislike data loss. It causes them great sadness.

The first service worker

In brief:

  • The install event is the first event a service worker gets, and it only happens once.
  • A promise passed to installEvent.waitUntil() signals the duration and success or failure of your install.
  • A service worker won't receive events like fetch and push until it successfully finishes installing and becomes "active".
  • By default, a page's fetches won't go through a service worker unless the page request itself went through a service worker. So you'll need to refresh the page to see the effects of the service worker.
  • clients.claim() can override this default, and take control of non-controlled pages.

Take this HTML:

<!DOCTYPE html>
An image will appear here in 3 seconds:
<script>
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('SW registered!', reg))
    .catch(err => console.log('Boo!', err));

  setTimeout(() => {
    const img = new Image();
    img.src = '/dog.svg';
    document.body.appendChild(img);
  }, 3000);
</script>

It registers a service worker, and adds an image of a dog after 3 seconds.

Here's its service worker, sw.js:

self.addEventListener('install', event => {
  console.log('V1 installing…');

  // cache a cat SVG
  event.waitUntil(
    caches.open('static-v1').then(cache => cache.add('/cat.svg'))
  );
});

self.addEventListener('activate', event => {
  console.log('V1 now ready to handle fetches!');
});

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);

  // serve the cat SVG from the cache if the request is
  // same-origin and the path is '/dog.svg'
  if (url.origin == location.origin && url.pathname == '/dog.svg') {
    event.respondWith(caches.match('/cat.svg'));
  }
});

It caches an image of a cat, and serves it whenever there's a request for /dog.svg. However, if you run the above example, you'll see a dog the first time you load the page. Hit refresh, and you'll see the cat.

Note: Cats are better than dogs. They just are.

Scope and control

The default scope of a service worker registration is ./ relative to the script URL. This means if you register a service worker at //example.com/foo/bar.js it has a default scope of //example.com/foo/.

We call pages, workers, and shared workers clients. Your service worker can only control clients that are in-scope. Once a client is "controlled", its fetches go through the in-scope service worker. You can detect if a client is controlled via navigator.serviceWorker.controller which will be null or a service worker instance.
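
For example, a quick check from the page:

if (navigator.serviceWorker.controller) {
  console.log('Controlled by', navigator.serviceWorker.controller.scriptURL);
} else {
  console.log('This page is not controlled by a service worker');
}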

Download, parse, and execute

Your very first service worker downloads when you call .register(). If your script fails to download, parse, or throws an error in its initial execution, the register promise rejects, and the service worker is discarded.

Chrome's DevTools shows the error in the console, and in the service worker section of the application tab:

[Image: Error displayed in service worker DevTools tab]

Install

The first event a service worker gets is install. It's triggered as soon as the worker executes, and it's only called once per service worker. If you alter your service worker script the browser considers it a different service worker, and it'll get its own install event. I'll cover updates in detail later.

The install event is your chance to cache everything you need before being able to control clients. The promise you pass to event.waitUntil() lets the browser know when your install completes, and if it was successful.

If your promise rejects, this signals the install failed, and the browser throws the service worker away. It'll never control clients. This means we can't rely on "cat.svg" being present in the cache in our fetch events. It's a dependency.

Activate

Once your service worker is ready to control clients and handle functional events like push and sync, you'll get an activate event. But that doesn't mean the page that called .register() will be controlled.

The first time you load the demo, even though dog.svg is requested long after the service worker activates, it doesn't handle the request, and you still see the image of the dog. The default is consistency: if your page loads without a service worker, neither will its subresources. If you load the demo a second time (in other words, refresh the page), it'll be controlled. Both the page and the image will go through fetch events, and you'll see a cat instead.

clients.claim

You can take control of uncontrolled clients by calling clients.claim() within your service worker once it's activated.

Here's a variation of the demo above which calls clients.claim() in its activate event. You should see a cat the first time. I say "should", because this is timing sensitive. You'll only see a cat if the service worker activates and clients.claim() takes effect before the image tries to load.
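
The relevant part of that variation is small; a sketch of the activate handler:

self.addEventListener('activate', event => {
  // Take control of any uncontrolled, in-scope clients straight away.
  event.waitUntil(clients.claim());
});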

If you use your service worker to load pages differently than they'd load via the network, clients.claim() can be troublesome, as your service worker ends up controlling some clients that loaded without it.

Note: I see a lot of people including clients.claim() as boilerplate, but I rarely do so myself. It only really matters on the very first load, and due to progressive enhancement the page is usually working happily without service worker anyway.

Updating the service worker

In brief:

  • An update is triggered if any of the following happens:
    • A navigation to an in-scope page.
    • A functional event such as push and sync, unless there's been an update check within the previous 24 hours.
    • Calling .register() only if the service worker URL has changed. However, you should avoid changing the worker URL.
  • Most browsers, including Chrome 68 and later, default to ignoring caching headers when checking for updates of the registered service worker script. They still respect caching headers when fetching resources loaded inside a service worker via importScripts(). You can override this default behavior by setting the updateViaCache option when registering your service worker.
  • Your service worker is considered updated if it's byte-different to the one the browser already has. (We're extending this to include imported scripts/modules too.)
  • The updated service worker is launched alongside the existing one, and gets its own install event.
  • If your new worker has a non-ok status code (for example, 404), fails to parse, throws an error during execution, or rejects during install, the new worker is thrown away, but the current one remains active.
  • Once successfully installed, the updated worker will wait until the existing worker is controlling zero clients. (Note that clients overlap during a refresh.)
  • self.skipWaiting() prevents the waiting, meaning the service worker activates as soon as it's finished installing.

Let's say we changed our service worker script to respond with a picture of a horse rather than a cat:

const expectedCaches = ['static-v2'];

self.addEventListener('install', event => {
  console.log('V2 installing…');

  // cache a horse SVG into a new cache, static-v2
  event.waitUntil(
    caches.open('static-v2').then(cache => cache.add('/horse.svg'))
  );
});

self.addEventListener('activate', event => {
  // delete any caches that aren't in expectedCaches
  // which will get rid of static-v1
  event.waitUntil(
    caches.keys().then(keys => Promise.all(
      keys.map(key => {
        if (!expectedCaches.includes(key)) {
          return caches.delete(key);
        }
      })
    )).then(() => {
      console.log('V2 now ready to handle fetches!');
    })
  );
});

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);

  // serve the horse SVG from the cache if the request is
  // same-origin and the path is '/dog.svg'
  if (url.origin == location.origin && url.pathname == '/dog.svg') {
    event.respondWith(caches.match('/horse.svg'));
  }
});

Note: I have no strong opinions on horses.

Check out a demo of the above. You should still see an image of a cat. Here's why…

Install

Note that I've changed the cache name from static-v1 to static-v2. This means I can set up the new cache without overwriting things in the current one, which the old service worker is still using.

This pattern creates version-specific caches, akin to assets a native app would bundle with its executable. You may also have caches that aren't version specific, such as avatars.

Waiting

After it's successfully installed, the updated service worker delays activating until the existing service worker is no longer controlling clients. This state is called "waiting", and it's how the browser ensures that only one version of your service worker is running at a time.

If you ran the updated demo, you should still see a picture of a cat, because the V2 worker hasn't yet activated. You can see the new service worker waiting in the "Application" tab of DevTools:

[Image: DevTools showing new service worker waiting]

Even if you only have one tab open to the demo, refreshing the page isn't enough to let the new version take over. This is due to how browser navigations work. When you navigate, the current page doesn't go away until the response headers have been received, and even then the current page may stay if the response has a Content-Disposition header. Because of this overlap, the current service worker is always controlling a client during a refresh.

To get the update, close or navigate away from all tabs using the current service worker. Then, when you navigate to the demo again, you should see the horse.

This pattern is similar to how Chrome updates. Updates to Chrome download in the background, but don't apply until Chrome restarts. In the meantime, you can continue to use the current version without disruption. This is a pain during development, though, and DevTools has ways to make it easier, which I'll cover later in this article.

Activate

This fires once the old service worker is gone, and your new service worker is able to control clients. This is the ideal time to do stuff that you couldn't do while the old worker was still in use, such as migrating databases and clearing caches.

In the demo above, I maintain a list of caches that I expect to be there, and in the activate event I get rid of any others, which removes the old static-v1 cache.

Caution: You may not be updating from the previous version. It may be a service worker many versions old.

If you pass a promise to event.waitUntil() it'll buffer functional events (fetch, push, sync, etc.) until the promise resolves. So when your fetch event fires, the activation is fully complete.

Caution: The cache storage API is "origin storage" (like localStorage, and IndexedDB). If you run many sites on the same origin (for example, yourname.github.io/myapp), be careful that you don't delete caches for your other sites. To avoid this, give your cache names a prefix unique to the current site, eg myapp-static-v1, and don't touch caches unless they begin with myapp-.
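
One way to apply that advice, sketched with an assumed myapp- prefix:

const expectedCaches = ['myapp-static-v1'];

self.addEventListener('activate', event => {
  event.waitUntil(
    caches.keys().then(keys => Promise.all(
      keys
        // Only consider caches that belong to this site…
        .filter(key => key.startsWith('myapp-'))
        // …and delete the ones we no longer expect.
        .filter(key => !expectedCaches.includes(key))
        .map(key => caches.delete(key))
    ))
  );
});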

Skip the waiting phase

The waiting phase means you're only running one version of your site at once, but if you don't need that feature, you can make your new service worker activate sooner by calling self.skipWaiting().

This causes your service worker to kick out the current active worker and activate itself as soon as it enters the waiting phase (or immediately if it's already in the waiting phase). It doesn't cause your worker to skip installing, just waiting.

It doesn't really matter when you call skipWaiting(), as long as it's during or before waiting. It's pretty common to call it in the install event:

self.addEventListener('install', event => {
  self.skipWaiting();

  event.waitUntil(
    // caching etc
  );
});

But you may want to call it as a result of a postMessage() to the service worker; as in, you want to skipWaiting() following a user interaction. A sketch of that follows.
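
A sketch of that pattern (the message type is invented):

// In the service worker:
self.addEventListener('message', event => {
  if (event.data && event.data.type === 'SKIP_WAITING') {
    self.skipWaiting();
  }
});

// In the page, e.g. after the user clicks a "refresh to update" prompt,
// where reg is the registration and reg.waiting is the waiting worker:
// reg.waiting.postMessage({type: 'SKIP_WAITING'});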

Here's a demo that uses skipWaiting(). You should see a picture of a cow without having to navigate away. Like clients.claim() it's a race, so you'll only see the cow if the new service worker fetches, installs and activates before the page tries to load the image.

Caution: skipWaiting() means that your new service worker is likely controlling pages that were loaded with an older version. This means some of your page's fetches will have been handled by your old service worker, but your new service worker will be handling subsequent fetches. If this might break things, don't use skipWaiting().

Manual updates

As I mentioned earlier, the browser checks for updates automatically after navigations and functional events, but you can also trigger them manually:

navigator.serviceWorker.register('/sw.js').then(reg => {
  // sometime later…
  reg.update();
});

If you expect the user to be using your site for a long time without reloading, you may want to call update() on an interval (such as hourly).
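
For example (the hourly interval is just an illustration):

navigator.serviceWorker.register('/sw.js').then(reg => {
  // Check for an updated service worker once an hour.
  setInterval(() => reg.update(), 60 * 60 * 1000);
});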

Avoid changing the URL of your service worker script

If you've read my post on caching best practices, you may consider giving each version of your service worker a unique URL. Don't do this! This is usually bad practice for service workers; just update the script at its current location.

It can land you with a problem like this:

  1. index.html registers sw-v1.js as a service worker.
  2. sw-v1.js caches and serves index.html so it works offline-first.
  3. You update index.html so it registers your new and shiny sw-v2.js.

If you do the above, the user never gets sw-v2.js, because sw-v1.js is serving the old version of index.html from its cache. You've put yourself in a position where you need to update your service worker in order to update your service worker. Ew.

However, for the demo above, I have changed the URL of the service worker. This is so, for the sake of the demo, you can switch between the versions. It isn't something I'd do in production.

Making development easy

The service worker lifecycle is built with the user in mind, but during development it's a bit of a pain. Thankfully there are a few tools to help out:

Update on reload

This one's my favourite.

[Image: DevTools showing 'update on reload']

This changes the lifecycle to be developer-friendly. Each navigation will:

  1. Refetch the service worker.
  2. Install it as a new version even if it's byte-identical, meaning your install event runs and your caches update.
  3. Skip the waiting phase so the new service worker activates.
  4. Navigate the page.

This means you'll get your updates on each navigation (including refresh) without having to reload twice or close the tab.

Skip waiting

[Image: DevTools showing 'skip waiting']

If you have a worker waiting, you can hit "skip waiting" in DevTools to immediately promote it to "active".

Shift-reload

If you force-reload the page (shift-reload) it bypasses the service worker entirely. It'll be uncontrolled. This feature is in the spec, so it works in other service-worker-supporting browsers.

Handling updates

The service worker was designed as part of the extensible web. The idea is that we, as browser developers, acknowledge that we are not better at web development than web developers. And as such, we shouldn't provide narrow high-level APIs that solve a particular problem using patterns we like, and instead give you access to the guts of the browser and let you do it how you want, in a way that works best for your users.

So, to enable as many patterns as we can, the whole update cycle is observable:

navigator.serviceWorker.register('/sw.js').then(reg => {
  reg.installing; // the installing worker, or undefined
  reg.waiting; // the waiting worker, or undefined
  reg.active; // the active worker, or undefined

  reg.addEventListener('updatefound', () => {
    // A wild service worker has appeared in reg.installing!
    const newWorker = reg.installing;

    newWorker.state;
    // "installing" - the install event has fired, but not yet complete
    // "installed"  - install complete
    // "activating" - the activate event has fired, but not yet complete
    // "activated"  - fully active
    // "redundant"  - discarded. Either failed install, or it's been
    //                replaced by a newer version

    newWorker.addEventListener('statechange', () => {
      // newWorker.state has changed
    });
  });
});

navigator.serviceWorker.addEventListener('controllerchange', () => {
  // This fires when the service worker controlling this page
  // changes, eg a new worker has skipped waiting and become
  // the new active worker.
});

You survived!

Phew! That was a lot of technical theory. Stay tuned in the coming weeks where we'll dive into some practical applications of the above.
Service Worker Registration

Jeff Posnick
 

Service workers can meaningfully speed up repeat visits to your web app, but you should take steps to ensure that a service worker's initial installation doesn't degrade a user's first-visit experience.

Generally, deferring service worker registration until after the initial page has loaded will provide the best experience for users, especially those on mobile devices with slower network connections.

Common registration boilerplate

If you've ever read about service workers, you've probably come across boilerplate substantially similar to the following:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js');
}

This might sometimes be accompanied by a few console.log() statements, or code that detects an update to a previous service worker registration, as a way of letting users know to refresh the page. But those are just minor variations on the standard few lines of code.
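
For example, one such variation might look roughly like this (a sketch, not part of the boilerplate itself):

navigator.serviceWorker.register('/service-worker.js').then(function(reg) {
  reg.addEventListener('updatefound', function() {
    var newWorker = reg.installing;
    newWorker.addEventListener('statechange', function() {
      // An updated worker has installed while an older one still controls
      // the page, so new content is waiting. Tell the user to refresh.
      if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
        console.log('New content is available; please refresh.');
      }
    });
  });
});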

So, is there any nuance to navigator.serviceWorker.register? Are there any best practices to follow? Not surprisingly (given that this article doesn't end right here), the answer to both is "yes!"

A user's first visit

Let's consider a user's first visit to a web app. There's no service worker yet, and the browser has no way of knowing in advance whether there will be a service worker that is eventually installed.

As a developer, your priority should be to make sure that the browser quickly gets the minimal set of critical resources needed to display an interactive page. Anything that slows down retrieving those responses is the enemy of a speedy time-to-interactive experience.

Now imagine that in the process of downloading the JavaScript or images that your page needs to render, your browser decides to start a background thread or process (for the sake of brevity, we'll assume it's a thread). Assume that you're not on a beefy desktop machine, but rather the type of underpowered mobile phone that much of the world considers their primary device. Spinning up this extra thread adds contention for CPU time and memory that your browser might otherwise spend on rendering an interactive web page.

An idle background thread is unlikely to make a significant difference. But what if that thread isn't idle, but instead decides that it's also going to start downloading resources from the network? Any concern about CPU or memory contention should take a backseat to worries about the limited bandwidth available to many mobile devices. Bandwidth is precious, so don't undermine critical resources by downloading secondary resources at the same time.

All of this is to say that spinning up a new service worker thread to download and cache resources in the background can work against your goal of providing the shortest time-to-interactive experience the first time a user visits your site.

Improving the boilerplate

The solution is to control the start of the service worker by choosing when to call navigator.serviceWorker.register(). A simple rule of thumb would be to delay registration until after the load event fires on window, like so:

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('/service-worker.js');
  });
}

But the right time to kick off the service worker registration can also depend on what your web app is doing right after it loads. For example, the Google I/O 2016 web app features a short animation before transitioning to the main screen. Our team found that kicking off the service worker registration during the animation could lead to jankiness on low-end mobile devices. Rather than giving users a poor experience, we delayed service worker registration until after the animation, when the browser was most likely to have a few idle seconds.

Similarly, if your web app uses a framework that performs additional setup after the page has loaded, look for a framework-specific event that signals when that work is done.

Subsequent visits

We've been focusing on the first visit experience up until now, but what impact does delayed service worker registration have on repeat visits to your site? While it might surprise some folks, there shouldn't be any impact at all.

When a service worker is registered, it goes through the install and activate lifecycle events. Once a service worker is activated, it can handle fetch events for any subsequent visits to your web app. The service worker starts before the request for any pages under its scope is made, which makes sense when you think about it. If the existing service worker weren't already running prior to visiting a page, it wouldn't have a chance to fulfill fetch events for navigation requests.

So once there's an active service worker, it doesn't matter when you call navigator.serviceWorker.register(), or in fact, whether you call it at all. Unless you change the URL of the service worker script, navigator.serviceWorker.register() is effectively a no-op during subsequent visits. When it's called is irrelevant.

Reasons to register early

Are there any scenarios in which registering your service worker as early as possible makes sense? One that comes to mind is when your service worker uses clients.claim() to take control of the page during the first visit, and the service worker aggressively performs runtime caching inside of its fetch handler. In that situation, there's an advantage to getting the service worker active as quickly as possible, to try to populate its runtime caches with resources that might come in handy later. If your web app falls into this category, it's worth taking a step back to make sure that your service worker's install handler doesn't request resources that fight for bandwidth with the main page's requests.

Testing things out

A great way to simulate a first visit is to open your web app in a Chrome Incognito window, and look at the network traffic in Chrome's DevTools. As a web developer, you probably reload a local instance of your web app dozens and dozens of times a day. But by revisiting your site when there's already a service worker and fully populated caches, you don't get the same experience that a new user would get, and it's easy to ignore a potential problem.

Here's an example illustrating the difference that registration timing could make. Both screenshots are taken while visiting a sample app in Incognito mode using network throttling to simulate a slow connection.

[Image: Network traffic with early registration]

The screenshot above reflects the network traffic when the sample was modified to perform service worker registration as soon as possible. You can see precaching requests (the entries with the gear icon next to them, originating from the service worker's install handler) interspersed with requests for the other resources needed to display the page.

[Image: Network traffic with late registration]

In the screenshot above, service worker registration was delayed until after the page had loaded. You can see that the precaching requests don't start until all the resources have been fetched from the network, eliminating any contention for bandwidth. Moreover, because some of the items we're precaching are already in the browser's HTTP cache—the items with (from disk cache) in the Size column—we can populate the service worker's cache without having to go to the network again.

Bonus points if you run this sort of test from an actual, low-end device on a real mobile network. You can take advantage of Chrome's remote debugging capabilities to attach an Android phone to your desktop machine via USB, and ensure that the tests you're running actually reflect the real-world experience of many of your users.

Conclusion

To summarize, making sure that your users have the best first-visit experience should be a top priority. Delaying service worker registration until after the page has loaded during the initial visit can help ensure that. You'll still get all the benefits of having a service worker for your repeat visits.

A straightforward way to delay your service worker's initial registration until after the first page has loaded is to use the following:

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('/service-worker.js');
  });
}


High-performance service worker loading

Jeff Posnick

Adding a service worker to your web app can offer significant performance benefits, going beyond what's possible even when following all the traditional browser caching best practices. But there are a few best practices to follow in order to optimize your load times. The following tips will ensure you're getting the best performance out of your service worker implementation.

First, what are navigation requests?

Navigation requests are (tersely) defined in the Fetch specification as: A navigation request is a request whose destination is "document". While technically correct, that definition lacks nuance, and it undersells the importance of navigations on your web app's performance. Colloquially, a navigation request takes place whenever you enter a URL in your browser's location bar, interact with window.location, or visit a link from one web page to another. Putting an <iframe> on a page will also lead to a navigation request for the <iframe>'s src.

Note: Single page applications, relying on the History API and in-place DOM modifications, tend to avoid navigation requests when switching from view to view. But the initial request in a browser's session for a single page app is still a navigation.

While your web app might make many other subresource requests in order to display all its contents—for elements like scripts, images, or styles—it's the HTML in the navigation response that's responsible for kicking off all the other requests. Any delays in the response for the initial navigation request will be painfully obvious to your users, as they're left staring at a blank screen for an indeterminate period of time.

Note: HTTP/2 server push adds a wrinkle here, as it allows subresource responses to be returned without additional latency, alongside the navigation response. But any delays in establishing the connection to the remote server will also delay the data being pushed down to the client.

Traditional caching best practices, the kind that rely on HTTP Cache-Control headers and not a service worker, require going to the network for each navigation to ensure that all of the subresource URLs are fresh. The holy grail for web performance is to get all the benefits of aggressively cached subresources, without requiring a navigation request that's dependent on the network. With a properly configured service worker tailored to your site's specific architecture, that's now possible.

For best performance, bypass the network for navigations

The biggest impact of adding a service worker to your web application comes from responding to navigation requests without waiting on the network. The best-case-scenario for connecting to a web server is likely to take orders of magnitude longer than it would take to read locally cached data. In scenarios where a client's connection is less than ideal—basically, anything on a mobile network—the amount of time it takes to get back the first byte of data from the network can easily outweigh the total time it would take to render the full HTML.

Choosing the right cache-first service worker implementation largely depends on your site's architecture.

Streaming composite responses

If your HTML can naturally be split into smaller pieces, with a static header and footer along with a middle portion that varies depending on the request URL, then handling navigations using a streamed response is ideal. You can compose the response out of individual pieces that are each cached separately. Using streams ensures that the initial portion of the response is exposed to the client as soon as possible, giving it a head start on parsing the HTML and making any additional subresource requests.

The "Stream Your Way to Immediate Responses" article provides a basic overview of this approach, but for real-world examples and demos, Jake Archibald's "2016 - the year of web streams" is the definitive guide.

Note: For some web apps, there's no avoiding the network when responding to a navigation request. Maybe the HTML for each URL on your site depends on data from a content management system, or maybe your site uses varying layouts and doesn't fit into a generic, application shell structure. Service workers still open the door for improvements over the status quo for loading your HTML. Using streams, you can respond to navigation requests immediately with a common, cached chunk of HTML—perhaps your site's full <head> and some initial <body> elements—while still loading the rest of the HTML, specific to a given URL, from the network.
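
As a very rough sketch of the idea (not the code from those articles; the partial URL scheme and cache entries here are assumptions), a streamed navigation handler might look something like this:

self.addEventListener('fetch', event => {
  if (event.request.mode !== 'navigate') {
    return;
  }

  event.respondWith((async () => {
    const stream = new ReadableStream({
      async start(controller) {
        // Helper: pipe one response's body into the combined stream.
        const push = async responsePromise => {
          const response = await responsePromise;
          const reader = response.body.getReader();
          while (true) {
            const {done, value} = await reader.read();
            if (done) {
              break;
            }
            controller.enqueue(value);
          }
        };

        await push(caches.match('/shell-header.html'));
        await push(fetch('/partials' + new URL(event.request.url).pathname));
        await push(caches.match('/shell-footer.html'));
        controller.close();
      },
    });

    return new Response(stream, {
      headers: {'Content-Type': 'text/html; charset=utf-8'},
    });
  })());
});

The header bytes can be sent to the page as soon as they're read from the cache, while the network-dependent middle portion streams in behind them.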

Caching static HTML

If you've got a simple web app that relies entirely on a set of static HTML documents, then you're in luck: your path to avoiding the network is straightforward. You need a service worker that responds to navigations with previously cached HTML, and that also includes non-blocking logic for keeping that HTML up-to-date as your site evolves.

One approach is to use a service worker fetch handler that implements a stale-while-revalidate policy for navigation requests, like so:

self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    // See /web/fundamentals/getting-started/primers/async-functions
    // for an async/await primer.
    event.respondWith(async function() {
      // Optional: Normalize the incoming URL by removing query parameters.
      // Instead of https://example.com/page?key=value,
      // use https://example.com/page when reading and writing to the cache.
      // For static HTML documents, it's unlikely your query parameters will
      // affect the HTML returned. But if you do use query parameters that
      // uniquely determine your HTML, modify this code to retain them.
      const normalizedUrl = new URL(event.request.url);
      normalizedUrl.search = '';

      // Create promises for both the network response,
      // and a copy of the response that can be used in the cache.
      const fetchResponseP = fetch(normalizedUrl);
      const fetchResponseCloneP = fetchResponseP.then(r => r.clone());

      // event.waitUntil() ensures that the service worker is kept alive
      // long enough to complete the cache update.
      event.waitUntil(async function() {
        const cache = await caches.open('my-cache-name');
        await cache.put(normalizedUrl, await fetchResponseCloneP);
      }());

      // Prefer the cached response, falling back to the fetch response.
      return (await caches.match(normalizedUrl)) || fetchResponseP;
    }());
  }
});

Another approach is to use a tool like Workbox, which hooks into your web app's build process to generate a service worker that handles caching all of your static resources (not just HTML documents), serving them cache-first, and keeping them up to date.

Using an Application Shell

If you have an existing single page application, then the application shell architecture is straightforward to implement. There's a clear-cut strategy for handling navigation requests without relying on the network: each navigation request, regardless of the specific URL, is fulfilled with a cached copy of a generic "shell" of an HTML document. The shell includes everything needed to bootstrap the single page application, and client-side routing logic can then render the content specific to the request's URL.

Written by hand, the corresponding service worker fetch handler would look something like:

// Not shown: install and activate handlers to keep app-shell.html
// cached and up to date.
self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    // Always respond to navigations with the cached app-shell.html,
    // regardless of the underlying event.request.url value.
    event.respondWith(caches.match('app-shell.html'));
  }
});

Workbox can also help here, both by ensuring your app-shell.html is cached and kept up to date, as well as providing helpers for responding to navigation requests with the cached shell.

⚠️ Performance gotchas

If you can't respond to navigations using cached data, but you need a service worker for other functionality—like providing offline fallback content, or handling push notifications—then you're in an awkward situation. If you don't take specific precautions, you could end up taking a performance hit when you add in your service worker. But by steering clear of these gotchas, you'll be on solid ground.

Never use a "passthrough" fetch handler

If you're using a service worker just for push notifications, you might mistakenly think that the following is either required, or will just be treated as a no-op:

// Don't do this!
self.addEventListener('fetch', event => {
  event.respondWith(fetch(event.request));
});

This type of "passthrough" fetch handler is insidious, since everything will continue to work in your web application, but you'll end up introducing a small latency hit whenever a network request is made. There's overhead involved in starting up a service worker if it's not already running, and there's also overhead in passing the response from the service worker to the client that made the request.

If your service worker doesn't contain a fetch handler at all, some browsers will make note of that and not bother starting up the service worker whenever there's a network request.

Use navigation preload when appropriate

There are scenarios in which you need a fetch handler to use a caching strategy for certain subresources, but your architecture makes it impossible to respond to navigation requests. Alternatively, you might be okay with using cached data in your navigation response, but you still want to make a network request for fresh data to swap in after the page has loaded.

A feature known as Navigation Preload is relevant for both of those use cases. It can mitigate the delays that a service worker that didn't respond to navigations might otherwise introduce. It can also be used for "out of band" requests for fresh data that could then be used by client-side code after the page has loaded. The "Speed up Service Worker with Navigation Preloads" article has all the details you'd need to configure your service worker accordingly.
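
In broad strokes, the shape of that approach is (a sketch; see that article for the full details):

self.addEventListener('activate', event => {
  event.waitUntil((async () => {
    if (self.registration.navigationPreload) {
      // Start the preload request in parallel with service worker startup.
      await self.registration.navigationPreload.enable();
    }
  })());
});

self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      // event.preloadResponse resolves with the preloaded response,
      // or undefined if navigation preload isn't enabled or supported.
      const preloaded = await event.preloadResponse;
      return preloaded || fetch(event.request);
    })());
  }
});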
