Speed at Scale: Web Performance Tips and Tricks from the Trenches (Google I/O ’19)



Hi everyone, my name is Katie Hempenius, and I'm Addy Osmani. We work on the Chrome team, trying to keep the web fast, and today we're going to talk about a few web performance tips and tricks from real production sites. But first, let's talk about buttons. You've probably had to cross the street at some point and may have had to press a pedestrian crossing button. There are three types of people in the world: people who press the button once, people who don't press the button, and people who press it a hundred times, because of course that makes it go faster. The frequency of pushing these buttons increases proportionally to the user's level of frustration. Want to know a secret? Listen: in New York, most of these buttons aren't even hooked up. So your new goal is to have a better time to interactive than this.

This experience of feeling frustrated when buttons just aren't working is something that applies to the web as well. According to a UX study done by Akamai in 2018, users expect experiences to be interactive at about 1.3 times the point when they're visually ready, and if they're not, people end up rage clicking. So it's important for sites to be visually ready and interactive, and it's an area where we still have a lot of work to do. Here we can see page weight percentiles on the web, both overall and by resource type. If one of these categories is particularly high for a site, it typically indicates that there's room for optimization, and in case you're wondering what this looks like visually, it looks a little bit like this: you're sending just way too many resources down to the browser. Delightful user experiences can be found across the world, so today we're going to do a deep dive into performance learnings from some of the world's largest brands.

Let's start by talking about how sites approach performance. This probably looks familiar: for many sites, maintaining performance is just as difficult, if not more difficult, than getting fast in the first place. In fact, an internal study done by Google found that 40% of large brands regress on performance after six months. One of the best ways to prevent this from happening is through performance budgets. Performance budgets set standards for the performance of your site. Just like how you might commit to delivering a certain level of uptime to your users, you commit to delivering a certain level of performance.

There are a couple of different ways that performance budgets can be defined. They can be based on time, for example a budget of a less-than-two-second time to interactive on 4G. They can be based on page resources, for example less than 150 kilobytes of JavaScript on a page. Or they can be based on computed metrics such as Lighthouse scores, for example a budget of a 90-or-greater Lighthouse performance score. While there are many ways to set a performance budget, the motivation and benefits of doing so remain the same. When we talk to companies who use performance budgets, we hear the same thing over and over: they use performance budgets because it makes it easy to identify and fix performance issues before they ship. Just as tests catch code issues, performance budgets can catch performance issues.

Walmart Grocery does this by running a custom job that checks the size of the builds corresponding to all PRs. If a PR increases a key bundle by more than one percent, the PR automatically fails and the issue is escalated to a performance engineer. Twitter does this by running a custom build tracker that they built against all PRs. This build tracker comments on the PR with a detailed breakdown of how that PR will affect the various parts of the app. Engineers then use this information to determine whether a PR should be approved, and in addition, they're working on incorporating this information into automatic checks that could potentially fail a PR. Both Walmart and Twitter use custom infrastructure that they built themselves to implement performance budgets. We realize that not everybody has the resources and time to devote to doing that, so today we're really excited to announce LightWallet. LightWallet adds support for performance budgets to Lighthouse, and it is available today in the command-line version of Lighthouse.

The first and only step required to set up LightWallet is to add a budget.json file. In this file you'll define the budgets for your site. Once that's set up, run the newest version of Lighthouse from the command line, and make sure to use the budget-path flag to indicate the path to your budget file. If you've done this correctly, you'll now see a Budgets section within the Lighthouse report. This section will give you a breakdown of the resources on your page and, where applicable, the amount that your budgets were exceeded by.

LightWallet was officially released yesterday, but some companies have already been using it in production. Jabong is an online retailer based in India who recently went through a refactor that dropped the size of their app by 80%. They didn't want to lose these performance wins, so they decided to put performance budgets into place. Up on the screen you can see the exact budget.json file that Jabong is using. Jabong's budgeting is based on resource sizes, but in addition to that, LightWallet also supports resource-count-based budgets. Jabong used the current size of their app as the basis for determining what their budgets would be. This worked well for them because their app is already in a good place, but what if your app isn't in a good place? How should you set your budgets? One way to approach this problem would be to look at HTTP Archive data to see what breakdown of resources corresponds with your performance goals. But speaking from personal experience, that's a lot of SQL to write, so to save you the effort we're making that information directly available today in what we're calling the performance budget calculator. Simply put, the performance budget calculator allows you to forecast time to interactive based on the breakdown of resources on your page, and in addition, it can also generate a budget.json file for you.
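As an illustration, a minimal budget.json might look something like this. The category names follow the Lighthouse budget format; the limits here are made-up examples, not Jabong's actual numbers:

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Size budgets are expressed in kilobytes and count budgets in numbers of requests; you would then point Lighthouse at the file from the command line with the budget-path flag.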

To give an example of the calculator's estimates: a site with 100 kilobytes of JavaScript and 300 kilobytes of other resources typically has a four-second time to interactive, and for every additional hundred kilobytes of JavaScript, that time to interactive increases by one second. No two sites are alike, so in addition to providing an estimate, the calculator also provides a time-to-interactive range. This range represents the 25th-to-75th-percentile TTI for similar sites.

One of the things that can end up impacting your budgets is images, so let's talk about images, starting off with lazy loading. We currently send down a lot of images with our pages, and that isn't the best for limited data plans or particularly slow network connections. At the 90th percentile, HTTP Archive says that we're shipping almost five megabytes' worth of images on mobile and desktop, and that's perhaps not the best. Lazy loading is a strategy of loading resources as they're needed, and it applies really well to things like off-screen images. There's a really big opportunity here: once again looking at HTTP Archive, we can see that at the 90th percentile folks are currently shipping anywhere up to three megabytes of images that could be lazy loaded, and at the median, 416 kilobytes. Luckily, there are plenty of JavaScript libraries available for adding lazy loading to your pages today, things like lazysizes or react-lazyload. The way these usually work is that you specify a data-src instead of a src, as well as a class, and then the library will upgrade your data-src to a src as soon as the image comes into view. You can build on this with patterns like optimizing perceived performance and minimizing reflow, just to let your users know that something's happening as these images are being fetched.
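A minimal sketch of that data-src upgrade pattern, using IntersectionObserver. The class names and the 200px margin are illustrative; libraries like lazysizes handle many more edge cases than this:

```javascript
// Upgrade an image's data-src to a real src so the browser fetches it.
// Written against a plain object shape so it also works on real <img> elements.
function upgradeImage(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
    if (img.classList) img.classList.add('loaded'); // hook for a fade-in style
  }
  return img;
}

// Browser-only wiring: observe all images marked .lazy and upgrade each one
// shortly before it scrolls into view.
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  const io = new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        upgradeImage(entry.target);
        observer.unobserve(entry.target); // each image only needs one upgrade
      }
    }
  }, { rootMargin: '200px' }); // start fetching 200px before the viewport

  document.querySelectorAll('img.lazy').forEach((img) => io.observe(img));
}
```

The corresponding markup would be something like `<img class="lazy" data-src="hero.jpg" width="400" height="300">`, with the dimensions specified so the placeholder reserves space and avoids reflow.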
We're going to walk through some case studies of people who've been able to use lazy loading effectively. Chrome.com is our browser's consumer site, and recently we've been very focused on optimizing its performance. We'll cover some of those techniques in more depth soon, but they resulted in a 20% improvement in page load times on mobile and a 26% improvement on desktop. Lazy loading was one of the techniques the team used to get to this place. They use an SVG placeholder with image dimensions specified to avoid reflow, intersection observers to tell when images are in or near the viewport, and a small custom JavaScript lazy loading implementation. The win here was 46% fewer image bytes on the initial page load, which is a nice win.

We can also look at more advanced uses of image lazy loading. Shopee is a large e-commerce player in Southeast Asia. Recently they adopted image lazy loading and were able to serve one megabyte fewer images on initial load. The way Shopee works is that they display a placeholder by default, and once an image is inside the viewport, once again using IntersectionObserver, they trigger a network call to download it in the background. Once the image is either decoded, if the browser supports the image decode API, or downloaded if it doesn't, the image tag is rendered, and they're able to do things like a nice fade-in animation when the image appears, which overall looks quite pleasant.

We can also take a look at Netflix. As Netflix's catalog of films grows, it can become challenging for them to present their members with enough information to decide what to watch, so they had this goal of creating a rich, enjoyable video preview experience, so that members could have a deeper idea of what was on offer.
As part of this, Netflix wanted to optimize their home page to reduce CPU load and network traffic while keeping the UX intuitive. The technical goal was to enable fast vertical scrolling through 30-plus rows of titles. The old version of their home page would render all of the titles at the highest priority, and that included fetching data from the server, creating all the DOM, and fetching all of their images. They wanted the new version to load much faster, minimize memory overhead, and allow smoother playback. Here's where they ended up: when the page loads, they first render the billboard image and the top three rows on the server. Once they're on the client, they make a call for the rest of the page, render the rest of the rows, and then load the images in. So they're effectively rendering just the first three rows of DOM and lazy loading the rest as needed. The impact of this was decreased load time for members who don't scroll quite as far.

This is effectively a summary of where they ended up: overall faster startup times for video previews and full-screen playback. Before, there was CPU load required to generate all their DOM nodes and get all the images loading. Now they don't saturate quite as much member bandwidth, and they pull in four times fewer images on initial load. So their video previews have faster load times, less bandwidth consumption, and lower memory use overall.

From our tests, image lazy loading has helped many brands shave an average of 74.6% off of their image bytes on initial load; these include the likes of Spotify and Target. So it looks like there could be something here we could bring into the platform. Today, we're happy to announce that native image lazy loading is coming to Chrome this summer. The idea here is that with just one line of code, using the brand new loading attribute, you'll be able to add lazy loading to your pages. This is a big deal, and we're very excited about it. The attribute will support three values: lazy; eager, if an image should not be lazy loaded; and auto, if you want to defer the decision to the browser.

Thank you. We're also happy to announce that this capability is coming to iframes as well: the exact same loading attribute is going to be possible to use on iframes, and I think this introduces a huge opportunity for us to optimize how we address loading third-party content. Here is an example of the brand new loading attribute working in practice.
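As a sketch, the new attribute can be paired with a simple feature check, falling back to a JavaScript library in browsers that don't support it yet. The markup assumed here is lazysizes-style, `<img class="lazy" data-src="…" loading="lazy">`, and the fallback script path is a placeholder:

```javascript
// Decide whether the browser supports the native loading attribute.
// Takes the prototype to check as a parameter so the logic is testable without a DOM.
function supportsNativeLazyLoading(imgProto) {
  return 'loading' in imgProto;
}

// Browser-only wiring.
if (typeof document !== 'undefined' && typeof HTMLImageElement !== 'undefined') {
  const images = document.querySelectorAll('img.lazy');
  if (supportsNativeLazyLoading(HTMLImageElement.prototype)) {
    // Native path: just upgrade data-src to src; the browser defers the fetch itself.
    images.forEach((img) => { img.src = img.dataset.src; });
  } else {
    // Fallback path: pull in a lazy loading library such as lazysizes,
    // which watches for the .lazyload class and data-src attributes.
    const script = document.createElement('script');
    script.src = 'lazysizes.min.js'; // illustrative path
    script.async = true;
    document.body.appendChild(script);
    images.forEach((img) => img.classList.add('lazyload'));
  }
}
```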
The way this works is that on initial load we only fetch the images that are in or near the viewport. We also fetch the first two kilobytes of all of our images, as that gives us dimension information and helps us avoid reflow; it gives us the placeholders that we need. Then we start loading the remaining images on demand. What this leads to is quite a nice saving: we're only loading 548 kilobytes of images rather than those 2.2 megabytes. Chrome's implementation of lazy loading is doing a few other things under the hood: we actually factor in the user's effective connection type when we decide which distance-from-viewport thresholds to use, and those can be different on 4G versus 2G.

The loading attribute can either be treated as a progressive enhancement, only using it in browsers that support it, or you can load a JavaScript lazy loading library as a fallback. Here we're checking for the presence of the loading attribute on HTMLImageElement: if it's present, we just use the native attribute and upgrade our image data-srcs, and if it's not, we can fetch in something like lazysizes and apply it to get the same behavior. Here it is working in Firefox, where we've applied this exact same pattern, so we're able to get to a place where we have cross-browser image lazy loading with a relatively hybrid technique that works quite well.

Users expect images to look good and be performant across a wide variety of devices, which is why responsive images are an important technique. Responsive images are the practice of serving multiple versions of an image so that the browser can choose the version that works best for the user's device. Responsive images can be based either on serving different widths of an image or on different densities of an image. Density refers to the device pixel ratio, or pixel density, of the device that the image is intended for. For example, traditional CRT monitors have a pixel density of 1, whereas Retina displays have a pixel density of 2; however, these are only two of the many pixel densities in use on devices today. What Twitter realized was that it was unnecessary to serve images beyond a Retina density, because the human eye cannot distinguish between images beyond that density. This was an important realization because it decreased image size by 33%. The one exception is that they do continue to serve higher-density images in situations where the image is displayed fullscreen and the user can pinch-zoom on the image.

Responsive images are just one of the many techniques that go into a fully optimized image. When we're talking with large brands, those optimizations not only include the usual suspects like compression or resizing, but also more advanced techniques like using machine learning for automated art direction, or using A/B testing to evaluate the effectiveness of an image. And this is where image CDNs come in. You can think of image CDNs as image optimization as a service, and they provide a level of sophistication and functionality that can often be difficult to replicate on your own with local, script-based image optimization. At a high level, image CDNs work by providing you with an API for accessing and, more importantly, manipulating your images. An image CDN can be something that you manage yourself or leave to a third party. Many companies do decide to go with a third party because they find it a better use of resources to have their engineers focus on their core business rather than the building and maintenance of another piece of software. Chewbacca is a travel site based in Europe who switched to Cloudinary, and this was exactly their experience. When Chewbacca switched to an image CDN, they found that overall image size decreased by 80%. Those results are very good, but they're not necessarily unusual: when talking with brands who switched to image CDNs, we found that they experienced a drop in image size of anywhere from 40 to 80 percent. I personally think part of the reason for this is that image CDNs can often provide a level of optimization and specialization that can be difficult to replicate on your own, if only due to lack of time and resources. Images are often the single largest component of most websites, so this translates into significant savings in overall page size.

Next, let's talk about JavaScript, starting off with deferring third-party scripts and embeds: things like ads, analytics, and widgets. Third-party code is responsible for 57% of JavaScript execution time on the web. That's a huge number; it's based on HTTP Archive data and represents the majority of script execution time across the top four million websites. This includes everything across ads, analytics, and embeds, and a lot of these CPU-intensive scripts can cause issues with script execution and can delay user interaction, so we need to exercise a lot of care when we're including third parties in our pages. When I ask folks how their JavaScript diet is going, it usually isn't very great.

Tag managers, ads, libraries: maybe there's an opportunity for us to defer some of this work to a smarter point in time. Let's talk about a site that actually did this for real: The Telegraph. The Telegraph knew that improving the performance of third-party scripts would take time, and that it benefits from instilling a performance culture in your organization. They say that everybody wants that tag on their page that's going to make the organization money, and it's very important to get the individuals in a room to educate, challenge, and work together on this problem. So what they did was set up a web perf working group across their ads, marketing, and technology teams to review tags, so that non-technical stakeholders could actually understand what the opportunity here was. What they discovered led to a change: the single biggest improvement at The Telegraph was deferring all JavaScript, including their own, using the defer attribute. Based on their tests, this hasn't skewed analytics or advertising. This is a really huge deal, especially for a publisher, because usually you see a lot of hesitation from marketing folks, from advertising, from analytics, because there's this fear that you're going to end up losing revenue or not quite tracking as many users as you want to be able to track. But through collaboration, through building that performance culture, they were able to get to a place where the org kept building on top of this, including changes such as a 60-second improvement in their time to interactive. They still have work to do, but this is a really solid start.

We can also talk about TUI, a travel operator in Europe. They were looking at how to be more customer-centric and realized that just listing prices wasn't going to cut it if visitors were leaving their site because of slow speed.
For speed projects at their organization to get off the ground, they had to get organizational buy-in, from management all the way up to their CEO, and through a test-and-learn mindset they were able to discover that when load times decreased by 71%, bounce rates decreased by 31%. Two optimizations in particular helped them get to a place where they could improve performance. They were using Google Tag Manager in the document head, in their case to inject tracking scripts and things like that, so they moved the execution of Google Tag Manager to after the load event. They didn't see any meaningful drop in tracking as a result, and the result was great from their perspective: a 50% reduction in DOM complete. They also had a third-party A/B testing library that weighed a hundred kilobytes of gzipped and minified script. They realized that even if they were to push this to after the onload event, it could potentially have some issues; they noticed some flickering as it would switch between one A/B test and the other. So they completely threw that dependency out and rewrote their A/B testing as something custom, part of their CMS, in under a hundred lines of JavaScript. The impact was being able to throw away that dependency and a 15% reduction in homepage JavaScript.

Let's also talk about embeds. We noticed Lighthouse flagged chrome.com as having a high JavaScript execution time, despite it looking like a mostly static site. This would delay how soon users could interact with the experience. What we saw was that chrome.com had this watch-video button where they show a YouTube promo if you click on the button. Unfortunately, they'd dropped YouTube's default embed into their HTML, and this was pulling in the entire YouTube video player, all of its scripts and resources, on initial page load, bumping up their time to interactive to 13.6 seconds. The solution here was, instead of loading those YouTube embeds and their scripts eagerly on page load, switching to doing it on interaction. Now, when a user clicks to watch the video, that's the point when we load in all those resources on demand, because the user has signaled an intent that they're interested in watching. This led to a 69-point improvement in their Lighthouse performance score as well as a ten-second-faster time to interactive, so a really big change.

Now, no performance talk is complete without a discussion of the cost of libraries and how you should just remove all of them. But since that topic has been done so many times, I wanted to take a little bit of a different angle and instead talk about some alternatives to removing expensive libraries. In other words, if removal isn't an option for you, what are some other things you can look into?

First is deferring or deprecating expensive libraries, so taking steps to eventually remove that library. Then there's replacing the library with something less expensive, deferring the use of an expensive library until after the initial page load, and updating a library to a newer version. When replacing libraries, there are generally two things you want to look for: one, that the library is smaller, but also, maybe more importantly, that it's tree-shakeable. By only using tree-shakeable dependencies, you're ensuring that you only pay the cost for the parts of the library that you actually use. You can also defer the loading and use of expensive dependencies until after the initial page load. Tokopedia is an online retailer based in Indonesia, and they're using this technique on their landing page. They really wanted their initial landing page experience to be as fast as possible, so they rewrote it in Svelte. The new version only takes 37 kilobytes of JavaScript to render above-the-fold content; by comparison, their existing React app is 320 kilobytes. I think this is a really interesting technique because they did not rewrite their entire app. Instead, they're still using the React app; they just lazy load it in the background using service workers. This can be a really nice alternative to rewriting an entire application. As I mentioned, Tokopedia used Svelte for their landing page; in addition to Svelte, Preact and lit-html are two other very lightweight frameworks to look into. And last, consider updating your dependencies. As a result of using newer technologies, newer versions of libraries are often much more performant than their predecessors. For example, Zalando is a European fashion retailer, and they noticed that their particular version of React was impacting page load performance. They A/B tested this and found that by updating from React 15.6.1 to 16.2, they were able to improve load time by 100 milliseconds.
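One of the techniques above, deferring a third-party tag until after the load event, as TUI did with Google Tag Manager, might be sketched like this. The tag URL is a placeholder, and the helper takes a document-like object so the logic is testable:

```javascript
// Create a <script> element for a third-party tag on the given document.
// Accepting `doc` as a parameter keeps this usable with a stub in tests.
function createDeferredScript(doc, src) {
  const script = doc.createElement('script');
  script.src = src;
  script.async = true;
  return script;
}

// Browser-only wiring: wait for the load event, then give the main thread
// a beat before injecting the tag (here, a hypothetical analytics script).
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  window.addEventListener('load', () => {
    setTimeout(() => {
      document.body.appendChild(
        createDeferredScript(document, 'https://example.com/analytics.js')
      );
    }, 0);
  });
}
```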
That 100-millisecond improvement led to an 8.7% uplift in revenue per session.

Another useful optimization to consider is code splitting. When we're thinking about loading routes and components, we ideally want to do three things: let the user know that something is happening, load the minimal code and data really fast, and render as quickly as possible. Code splitting enables us to do this more easily by breaking our larger bundles into smaller ones that we can load on demand. This enables all sorts of interesting loading patterns, including progressive bootstrapping. When it comes to JavaScript, it has a real cost, and that cost has two parts: download and execution. Download times are critical on really slow networks, things like 2G and 3G, and JavaScript execution time ends up being critical on devices with slow CPUs, because JavaScript is CPU-bound. This is one of those places where small JavaScript bundles can be useful for improving your download speeds, lowering memory usage, and reducing your overall CPU costs. When it comes to JavaScript, our team has a motto: if JavaScript doesn't bring users joy, thank it and throw it away. I believe this was in an extended special of Marie Kondo's show.

One site that breaks up its JavaScript pretty well is Google Shopping. They're interactive in under five seconds over 3G, and they have this goal of loading very quickly, including on their product details page.
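The dynamic import() behind route- and interaction-based code splitting can be wrapped in a small memoized loader, sketched below. The importFn parameter stands in for a call like () => import('./video-player.js'), and the button id and player.mount API are illustrative assumptions, not a real site's code:

```javascript
// Wrap a dynamic-import thunk so the chunk is fetched at most once,
// no matter how many times the user triggers it (click, hover, route change).
function lazyLoader(importFn) {
  let modulePromise = null;
  return function load() {
    if (!modulePromise) {
      modulePromise = importFn(); // kicks off the chunk download on first call
    }
    return modulePromise;
  };
}

// Browser-only wiring: fetch the chunk on first interaction, similar in spirit
// to the chrome.com "watch video" example above.
if (typeof document !== 'undefined') {
  const button = document.querySelector('#watch-video');
  if (button) {
    const loadPlayer = lazyLoader(() => import('./video-player.js'));
    button.addEventListener('click', async () => {
      const player = await loadPlayer();
      player.mount(button); // hypothetical API of the lazily loaded module
    });
  }
}
```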

Google Shopping has at least three JavaScript chunks: one for above-the-fold rendering, one for code to respond to user interactions, and one for other features that are supported by search. Their work to get to this place involved writing a new template compiler, producing smaller code through it, and also looking at things like a lighter experience for folks on the slowest of connections; they actually ship a version that's under 15 kilobytes of code for users in those types of markets.

Another good example is Walmart Grocery. Walmart Grocery is a single-page application that loads as much as possible up front, and they've been focused on cleaning up their code: removing old duplicate dependencies and anything unnecessary, and splitting up their core JavaScript bundles using code splitting. They've also been doing things Katie suggested earlier, like moving to smaller builds of libraries like Moment.js, and the impact of this iterative work has been great: a 69% smaller JavaScript bundle and a 28% faster TTI. They continue to work on shaving JavaScript off their experience to improve it as much as possible.

We can also talk about Twitter. Twitter is a popular social networking site; eighty percent of their customers are using mobile every day, and they've been focused on unlocking a user experience for the web that lets users access content pretty quickly regardless of their device or their connection. When Twitter Lite first launched a few years ago, the team invested in many optimizations to how they load JavaScript. They used route-based code splitting and forty on-demand chunks for breaking up those large JavaScript bundles, so that users could be interactive in just a few seconds over 4G. Between this and smart usage of resource hints, they're able to prioritize loading their bundles pretty early.
So what did the team focus on next after that? Well, Twitter is a global site that supports 34 languages, and supporting this required a toolchain of libraries and plugins for handling things like locale strings. After choosing a set of open-source tools, they discovered that on every build they were including internationalization strings in a way that invalidated file hashes across the entire app. Each deploy would end up with an invalidated cache for their users, and this meant that their service worker had to go and redownload everything.

This is a really hard problem to solve, and they ended up rewriting and revamping their internationalization pipeline. The impact was that it enabled code to be dropped from all of their bundles and translation strings to be lazy loaded: a 30-kilobyte reduction in overall bundle size. It also unlocked other optimizations, such as loading the emoji picker in Twitter on demand, which saves their core bundles from having to include another 50 kilobytes. The changes to the internationalization pipeline also led to an 80% improvement in JavaScript execution, so some nice wins all around.

We can also take a look at JavaScript for your first-time users, the people who are coming to your experience for the first time, by looking at Spotify. Spotify started serving their web player to users without an account, showing an option to sign up as soon as they click on a song. First-time users don't need the playback library or the core logic, so Spotify keeps first-time page loads very lightweight: just 60 kilobytes of JavaScript, to get interactive really quickly. Once users actually authenticate and log in, they then lazy load the web player and the vendor chunk, meaning that you as a first-time user get a really quick experience, and then an okay experience for the rest of your navigations. Spotify recently also rewrote their web player in React and Redux, and one decision they made was to improve the performance of navigations in the player. Previously they would load an iframe for every view, which was bad for performance. They discovered that Redux was pretty good for storing data from REST APIs in a fairly normalized shape and making use of it to start rendering as soon as the user clicks on a link. This enabled quick navigations between pages, even on really slow connections, because they were reducing overall API calls.

And finally, we can take a look at Jabong. As Katie mentioned earlier, Jabong is a popular fashion destination in India. They decided to rewrite one of their experiences as a PWA, and to keep that experience fast they used the PRPL pattern: push, render, pre-cache, lazy load. This allowed them to get interactive in just 18 kilobytes of JavaScript. They're using HTTP/2 server push, they trimmed their vendor bundle to eight kilobytes, and they pre-cache scripts for future routes using service workers, which overall led to a TTI improvement of 82%, with some good business wins off the back of it.

Performant sites display text as soon as possible; in other words, they don't hide text while waiting for a web font to load. By default, browsers hide text if the corresponding font hasn't loaded, and the length of time they will do this for depends on the browser. It's simple to see why this is not ideal. The good news is that the fix is also simple: wherever you declare a font face, just add the line font-display: swap. This tells the browser to use a default system font initially and then swap it out for the custom web font once it arrives. Although, you do currently have to self-host web fonts to add font-display to your pages, right? Yes, but we have a special announcement today. Developers have been asking us to do something about this with Google Fonts for about a year and a half, and today we're happy to announce that Google Fonts is going to support font-display. You'll be able to set values like font-display: swap and optional, the full set of values. We're very excited about this change.
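The font-display fix mentioned above is a one-line addition to each @font-face rule; the font name and URL here are placeholders:

```css
@font-face {
  font-family: 'BrandFont';
  src: url('/fonts/brand-font.woff2') format('woff2');
  /* Show fallback text immediately; swap in the web font when it arrives */
  font-display: swap;
}
```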
This actually just came in at the last minute (last night, last night!), so we've got some docs to update. Let's also talk about resource hints. Browsers do their best to prioritize fetching the resources they think are important, but you as an author know more about your experience than anybody else, and thankfully you can use resource hints to get ahead of that. Here are some examples. Barefoot is an award-winning wine business. They recently used a library called Quicklink, which is under a kilobyte in size, to prefetch links that are in the viewport using Intersection Observer.

What they saw off the back of this was a 2.7-second faster TTI for future pages. Jabong are a site that is very heavily dependent on JavaScript for their experience, so they used link rel=preload to preload their critical bundles and saw a 1.5-second faster Time to Interactive off the back of it. And Chrome.com was originally connecting to nine different origins for its resources; they used link rel=preconnect and saw a 0.7-second decrease in latency.

What are other folks doing with prefetching? eBay is one of the world's most popular e-commerce sites, and to help speed up how soon users can view content, they've started to prefetch search results. eBay now prefetches the top five items on a search results page for faster subsequent loads. This led to an improvement of 759 milliseconds for a custom metric called "above the fold time"; it's a lot like First Meaningful Paint. eBay shared that they're already seeing a positive impact on conversions through prefetching. The way that this works is they're effectively doing their prefetching after requestIdleCallback, so once the page kind of settles. This is rolling out to a few different regions right now: it's shipped to eBay Australia, and it's coming soon to the US and UK. Now, as part of eBay's site speed initiative, they're also doing predictive prefetching of static assets: if you're on the homepage, it'll fetch the assets for the search page; if you're on the search page, it'll do it for the item page; and so on. Right now the way that they're doing predictive prefetching is a little bit static, but they're excited to experiment with how to use machine learning and analytics in order to do this a little bit more smartly.
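The hints mentioned above are plain markup in the document head; here's a sketch with hypothetical URLs, for illustration only:

```html
<!-- Warm up a connection to an origin we'll fetch from soon. -->
<link rel="preconnect" href="https://cdn.example.com">

<!-- Fetch a critical bundle early, at high priority. -->
<link rel="preload" as="script" href="/js/critical-bundle.js">

<!-- Fetch a likely next page at low priority, for a faster future navigation. -->
<link rel="prefetch" href="/search">
```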
Now, a site that's using a very similar technique to this is Virgilio Sport. They're an Italian sports news website, and they've been improving the performance of their core journeys. They track impressions and clicks from users who are navigating around the experience, and they're able to use link rel=prefetch and service workers to prefetch the most-clicked article URLs. Every 7 minutes their service worker will go and fetch the top articles picked by their algorithms, except if the user is on a 2G or slow-2G connection.

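That connection check uses the Network Information API; a minimal sketch of the idea (the function name is ours, not Virgilio's code):

```javascript
// Skip prefetching on constrained connections, as Virgilio Sport does.
// `connection` is navigator.connection (undefined where unsupported).
function shouldPrefetch(connection) {
  if (!connection) return true;           // API unsupported: allow prefetching
  if (connection.saveData) return false;  // respect Data Saver
  return !['slow-2g', '2g'].includes(connection.effectiveType);
}

// In a service worker or page, roughly every 7 minutes:
// if (shouldPrefetch(navigator.connection)) {
//   const cache = await caches.open('top-articles');
//   await cache.addAll(topArticleUrls); // hypothetical list of URLs
// }
```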
The impact of this was a 78 percent faster article fetch time, and they've also seen that article impressions have been on the rise too: after just three weeks of using this optimization, they saw a 45 percent increase in article impressions.

Critical CSS is the CSS necessary to render above-the-fold content. It should be inlined, and the initial document it is inlined into should be delivered in under 14 kilobytes. This allows the browser to render content to the user as soon as the first packet arrives. In particular, critical CSS can have a large impact on First Contentful Paint. For example, TUI is a European travel site, and they were able to improve their First Contentful Paint by 1.4 seconds, down to one second, by inlining their CSS.

Nikkei is another site using critical CSS. They are a large Japanese newspaper publisher, and one of the issues they ran into when implementing this was that they had a lot of critical CSS, 300 kilobytes to be specific. Part of the reason for that was that there were a lot of differences in styles between pages, but it was also due to factors like whether the user was logged in, whether a paywall was on, whether the user had a paid or free subscription, and so on. Once they realized this, they decided to create a critical CSS server that took all these variables as inputs and returned the correct critical CSS for a given situation. The application server then inlines this information, and it's returned to the user.

They're now taking this optimization a step further and trying out a technique known as Edge Side Includes. Edge Side Includes (ESI) is a markup language that allows you to dynamically assemble documents at the CDN level. Why this is exciting is that it allows Nikkei to get the benefits of critical CSS while also being able to cache the CSS, granted, they're caching it at the CDN level and not the browser level.
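An ESI fragment is a placeholder the CDN expands before the HTML ever reaches the browser. A hypothetical sketch of how that could look for critical CSS (the endpoint and parameters here are ours, not Nikkei's):

```html
<!-- The CDN replaces the esi:include tag below with the critical CSS
     for this page variant; browsers never see the tag itself. -->
<head>
  <style>
    <esi:include src="/critical-css?template=article&loggedIn=true&paywall=off" />
  </style>
</head>
```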
In the event that the CSS isn't already cached on the CDN, it simply falls back to serving the default CSS, and the requested CSS is cached for future use. Nikkei is still testing out the use of Edge Side Includes, but through dynamic critical CSS alone they were able to decrease the amount of inline CSS in their application by 80 percent and improve their First Contentful Paint by a full second.

Brotli is a newer compression algorithm that can provide better text compression than gzip. OYO is an Indian hospitality company, and they used Brotli to compress CSS and JavaScript. This decreased the transfer size of JavaScript by 15 percent, which translated into a 37 percent improvement in latency. Most companies are only using Brotli on static assets at the moment. The reason for this is that, particularly at high compression ratios, Brotli can take longer, and sometimes much, much longer, than gzip to compress.

That isn't to say that Brotli can't be used on dynamic content, and used effectively. Twitter is currently using Brotli to compress their API responses, and on P75 payloads, so some of their larger payloads, they found that using Brotli decreased the size by 90 percent. This is really large, but it makes sense when viewed in the context of the fact that compression algorithms are going to be more effective on larger payloads.

And our last topic is adaptive serving. Loading pages can be a different experience depending on whether you're on a slow network or a slow device, or whether you're on a high-end device. Now, the Network Information API is one of those web platform features that gives you a number of signals, such as the effective type of the user's connection and Save-Data, so you can adapt. But really, loading is a spectrum, and we can take a look at how some sites handle this challenge.

For all users on low-end mobile devices, Facebook actually offers a very basic version of their site that loads very fast: it has no JavaScript, it has very limited images, and it uses minimal CSS, with tables mostly for layout. What's great about this experience is that users can view and interact with it in under two seconds over 3G.

What about Twitter? Cross-platform, Twitter is designed to minimize the amount of data that you use, and you can further reduce data usage by enabling Data Saver mode. This allows you to control what media you want to download; in this mode, images are only presented to users when they tap on them. On iOS and on Android this led to a 50 percent reduction in data usage from images, and on web anywhere up to 80 percent. These savings add up, and you still get an experience that is pretty fast with Twitter on limited data plans. As part of looking into how Twitter are handling their usage of effectiveType, we discovered they're doing something really fascinating.
They're handling image uploads in an interesting way. On the server, Twitter compresses images to 85 percent quality JPEG with a max edge of 4096 pixels. But what about when you've got your phone out, you're taking a picture, but you're on a slow connection and may not be able to upload it? On the client, what they now do is check if images appear to be above a certain size threshold; if so, they draw the image to a canvas, output it at 85 percent quality JPEG, and see if that improved the size. Often this can decrease the size of phone-captured images from four megabytes down to 500 kilobytes. The impact of this was a 9.5 percent reduction in cancelled photo uploads, and they're also doing all sorts of other interesting things depending on the effective type of the user's connection.

And finally, we've got eBay. eBay are experimenting with adaptive serving using effectiveType. If a user is on a fast connection, they lazy load features like product image zooming; if you're on a slow connection, it isn't loaded at all. They're also looking at limiting the number of search results that are presented to users on really slow connections. These strategies allow them to focus on small payloads and really give users the best experience based on their situation.

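Decisions like these can hang off a single signal. A minimal sketch of the idea (the feature names and thresholds here are ours, not eBay's actual code):

```javascript
// Map the Network Information API's effectiveType to a loading strategy.
function loadingStrategy(connection) {
  const type = (connection && connection.effectiveType) || '4g'; // default when unsupported
  if (type === 'slow-2g' || type === '2g') {
    return { imageZoom: false, maxSearchResults: 10 };
  }
  return { imageZoom: true, maxSearchResults: 50 };
}

// In the page:
// const strategy = loadingStrategy(navigator.connection);
// if (strategy.imageZoom) import('./image-zoom.js'); // hypothetical module
```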
So those are just a few things that people are doing with adaptive serving. It's almost time for us to go. It is! We hope you found our talk helpful. Remember, you can get fast with many of the optimizations that we talked about today, and you can stay fast using performance budgets and LightWallet. That's it from us. Thank you. Thank you.

2019-05-20 12:41

