buttons / github-buttons

:octocat: Unofficial github:buttons

Home Page: https://buttons.github.io

Using conditional requests (If-None-Match) to prevent exceeding API rate limit

optimalisatie opened this issue · comments

Hi!

When using a GitHub button with a counter on a regularly visited page such as an admin panel, the user can exceed their GitHub API rate limit, causing the button to stop working after some time. On pages with many buttons the rate limit can be used up quickly.

/**/_({
  "meta": {
    "Content-Type": "application/javascript; charset=utf-8",
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": "1511616179",
    "X-GitHub-Media-Type": "github.v3; format=json",
    "status": 403
  },
  "data": {
    "message": "API rate limit exceeded for 123.456.789.123. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
    "documentation_url": "https://developer.github.com/v3/#rate-limiting"
  }
})

GitHub advises using conditional requests as a solution to prevent exceeding the API rate limit (and to save a lot of useless traffic on GitHub's servers).

Most responses return an ETag header. Many responses also return a Last-Modified header. You can use the values of these headers to make subsequent requests to those resources using the If-None-Match and If-Modified-Since headers, respectively. If the resource has not changed, the server will return a 304 Not Modified.

https://developer.github.com/v3/#conditional-requests
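For illustration only (this is a rough sketch, not github-buttons code), a conditional request against the GitHub API could look like this with the Fetch API; the in-memory cache object is just an assumption for the example:

var cache = {} // e.g. { etag: 'W/"abc123"', data: {...} }

function fetchRepo(owner, repo) {
  var headers = {}
  if (cache.etag) headers['If-None-Match'] = cache.etag
  return fetch('https://api.github.com/repos/' + owner + '/' + repo, { headers: headers })
    .then(function (response) {
      if (response.status === 304) return cache.data // unchanged; a 304 does not count against the rate limit
      cache.etag = response.headers.get('ETag')
      return response.json().then(function (data) {
        cache.data = data
        return data
      })
    })
}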

It would be nice if the counter could make use of conditional requests to save a large share of the API traffic generated by github-buttons.

This implementation of GitHub buttons does not have a backend. Without a backend, it solely relies on browsers' caching behavior to reduce requests. As far as I know, unless you do a hard refresh, most browsers will use caching and conditional requests. If you are still hitting the limit, there is nothing I can do about it. However, the button should at least render without a count when the API limit runs out.

An alternative solution is to implement a server-side proxy for the GitHub API, but that means all users share one API limit, which would run out even faster given the number of users. Shields.io uses this method, and their workaround is to let users donate API tokens into a pool. Still, they run out of tokens from time to time, resulting in a broken service.

Hi!

The browser doesn't add the headers automatically. I tested with Chrome 62 and Firefox 57.

Accept:*/*
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Cookie:_octo=...
Host:api.github.com
Pragma:no-cache
Referer:http://domain.local/wp-admin/admin.php
User-Agent:Mozilla/5.0 ...

It means that for any button there will always be at least one request to the GitHub API.

A backend isn't needed to make it work. It would be possible to use localStorage and the Fetch API with custom conditional request headers.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Conditional_requests

Using localStorage you could set an interval at which the API is queried for an update, e.g. once per 5 minutes and/or when the counter is clicked. On a high-traffic admin page (e.g. WordPress) this would easily save millions of requests per day.
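A rough sketch of the idea (the key name, interval and fetchCount callback are illustrative, not an actual implementation):

var CACHE_KEY = 'github-button:/repos/ntkme/github-buttons'
var MAX_AGE = 5 * 60 * 1000 // refresh at most once per 5 minutes

function getCount(fetchCount, callback) {
  var cached = null
  try {
    cached = JSON.parse(localStorage.getItem(CACHE_KEY))
  } catch (e) { /* storage disabled or corrupt entry: fall through */ }

  if (cached && Date.now() - cached.time < MAX_AGE) {
    callback(cached.count) // fresh enough: no API request at all
    return
  }

  fetchCount(function (count) {
    try {
      localStorage.setItem(CACHE_KEY, JSON.stringify({ time: Date.now(), count: count }))
    } catch (e) { /* quota exceeded: serve without caching */ }
    callback(count)
  })
}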

I have an implementation using CORS instead of JSON-P working. Caching is yet to be implemented, because I'm a little bit hesitant about the caching behavior, though.

As you pointed out, we can use localStorage to cache data, but it does not offer expiration control. Browsers tend to have a 5-10MB localStorage quota. When you go over the quota, a QUOTA_EXCEEDED_ERR is thrown. So we not only need to cache the data, but also need to remove it. A naive solution would be to clear the whole localStorage when QUOTA_EXCEEDED_ERR is thrown, then keep as much data as possible afterwards. A nicer solution would be to implement an LRU (or similar) cache on top of localStorage, but obviously that adds complexity.
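As a minimal sketch of the naive variant (clearing only our own prefixed entries and retrying once; the prefix is illustrative):

var PREFIX = 'github-button:'

function cacheSet(key, value) {
  try {
    localStorage.setItem(PREFIX + key, value)
  } catch (e) {
    // quota exceeded (or storage disabled): drop our own entries and retry once
    Object.keys(localStorage)
      .filter(function (k) { return k.indexOf(PREFIX) === 0 })
      .forEach(function (k) { localStorage.removeItem(k) })
    try { localStorage.setItem(PREFIX + key, value) } catch (e2) { /* give up */ }
  }
}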

Caching + conditional requests will reduce the request count as long as the API data does not change, but this clearly depends on how active a repository is.

To really reduce the requests, the extra delay you mentioned would be required. Maybe caching + conditional requests is already good enough for a repository with average popularity, but perhaps not for super popular ones. Then, how long should that delay be? 5 minutes sounds arbitrary.

The purpose is to reduce traffic, so I believe it would be OK not to worry about QUOTA_EXCEEDED_ERR and simply consider the localStorage cache a feature that provides caching for most buttons, while the conditional request enables the API to optimize its request processing. Quota management may be something to consider later to save even more traffic if it proves to be a significant benefit.

To enable automatic cleanup of browser cache it is possible to use sessionStorage. It has about the same browser support and it doesn't add extra code.

https://developer.mozilla.org/nl/docs/Web/API/Window/sessionStorage#Browser_compatibility

In regards to the delay: I am not certain what would be most favorable for GitHub. Do they mind buttons updating once per hour? Or would they be happier with once per 12 hours? It may be an option to default to 12 hours and allow controlling the refresh interval using a data attribute.

Another problem is that the buttons require multiple (5) individual requests per button (buttons.js 2x, buttons.css, buttons.html + GitHub API).

image

It may be an option to draw the buttons dynamically in a JavaScript-controlled iframe with the CSS included in the JavaScript. When using a dynamic iframe it is simple to write a CSS string to a <style> element. From a maintenance perspective it is possible to use a build tool to replace a marker in the script with the latest CSS.

It would be possible to reduce the number of requests to 1 static JavaScript file + uncached API requests (once per 12 hours) for an unlimited number of buttons.

From MDN document on sessionStorage:

Opening a page in a new tab or window will cause a new session to be initiated, which differs from how session cookies work.

So it's almost useless.

For the multiple requests (html + js + css), there is certainly a little room to optimize. However, the GitHub Pages server sets Cache-Control: max-age=600, which means all those files will be cached for 10 minutes. Thus those requests will only be sent to the server once per page, no matter how many buttons there are on the page. Also, GitHub Pages supports HTTP/2. For browsers supporting it, multiple requests can be done over one connection.

GitHub API requests have Cache-Control: public, max-age=60, s-maxage=60, so the same API request (e.g. for a star count and a fork count) on the same page will only result in one actual request.

As you can see, many requests are served from memory in your screenshot. So, personally I'm not worried about repeated requests on the same page, since browsers handle this very well.

sessionStorage would enable caching buttons for a specific website/admin panel. Only navigating to other sites / starting new windows would cause a new API request for the same button.

The advantage of sessionStorage is that it takes less code and that there is less need for cache management. It could be added relatively easily and it would provide a sufficient solution for buttons on actively browsed pages (e.g. admin panels).

In regards to the requests being cached by the browser: that is correct, however the individual requests still cause significant overhead.

image

A dynamic iframe could be much faster, but I agree that it does not help much in reducing the number of API requests. If you would want to optimize it, it would be an option to look at Shadow DOM as an alternative to an iframe.

https://code.tutsplus.com/tutorials/intro-to-shadow-dom--net-34966
http://blog.catchpoint.com/2017/11/03/shadow-doms-encapsulation-progressive-web-applications/
https://www.polymer-project.org/2.0/start/quick-tour (cross browser solution from Google)

I won't use shadow DOM because it is not a stable standard and only Chrome based browsers currently support it by default. It still has a long way to go.

And it's entirely possible that a user opens links in your admin panel in new tabs, and that would create new sessions for sessionStorage.

Speaking of overhead caused by individual requests: those happen mostly in parallel if you look at the timeline, so it's not really a big deal.

A dynamic iframe may be the best solution for performance and compatibility. It would make it possible to include the buttons.js code inline and potentially draw the buttons with zero requests. It would allow using the buttons more efficiently in an offline app or progressive web app. It would also reduce the amount of code, leading to the best possible performance.

In regards to the timeline: I can imagine GitHub buttons being used in more advanced apps. Every CPU cycle may count there, so for developers, the less CPU is used, the better. Requests are expensive to process, especially on mobile devices, and cause significant overhead. As an example, Google uses localStorage to cache assets instead of simply relying on the browser cache.

Tests by Google and Bing have shown that there are performance benefits to caching assets in localStorage (especially on mobile) when compared to simply reading and writing from the standard browser cache.
http://www.stevesouders.com/blog/2011/03/28/storager-case-study-bing-google/
https://addyosmani.com/basket.js/

If you want to optimize for CPU availability in heavy / critical apps where the buttons may be used, you could take a look at requestIdleCallback. It will ensure that the buttons never get in the way of the app / website user. setTimeout(fn, 0) could make it work cross-browser. It could be combined with requestAnimationFrame for optimal render performance.

https://developers.google.com/web/updates/2015/08/using-requestidlecallback

In regards to sessionStorage: I suggested it from your perspective, as it would cost less time and may be a sufficient solution. If just 50% of the traffic could be reduced it would make a big difference. Together with the conditional requests it may be sufficient to solve the main issue (exceeding the API rate limit, users being blocked from the GitHub API).

localStorage + management would be a much better solution but it will cost more time to create a solution with the best efficiency and least amount of code.

In regards to a potential implementation: it may be an option to make it a non-critical feature that simply provides caching "if it works" and otherwise falls back to GitHub API requests. This would keep management simple. If there is a quota issue, it would be possible to simply ignore it and load via a regular request. It would be up to the website owner / developer to manage the localStorage cache. Button counts do not take significant space, so it may not require cleanup.

A dynamic iframe

If I understand you correctly, you mean an implementation like mdo/github-buttons, which uses a single HTML containing embedded <script> and <style> tags.

There is a reason NOT to do so. Such an <iframe> cannot be automatically sized; with the content being dynamic, you have to give it a size that's slightly larger than the actual content so it has some space to grow. For example, the content may render wider in some environments, going from 99 stars to 100 stars results in wider content, etc. But your <iframe> width has to be hard-coded, which is annoying.

To automatically size an <iframe>, it has to be a same-origin <iframe>. But a same-origin <iframe> could be accidentally altered by parent page JavaScript. That's why we have the current solution:

  1. Load buttons.js on the parent page.
  2. buttons.js creates a same-origin <iframe> and renders it with an embedded HTML string.
  3. The embedded HTML string loads the same buttons.js from cache and renders the button.
  4. Once the button is rendered, get and save its dimensions.
  5. Reload the <iframe> with buttons.html instead of the HTML string. This will be cross-origin.
  6. Set the <iframe> size to the exact dimensions saved in step 4.
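As a very rough sketch of steps 4 through 6 (measure the same-origin render, then reload cross-origin with a fixed size); this is only illustrative, not the actual buttons.js code, and configHash stands in for the real button config:

function swapToCrossOrigin(iframe, configHash) {
  var body = iframe.contentDocument.body
  var width = body.scrollWidth    // step 4: measure the rendered button
  var height = body.scrollHeight
  iframe.src = 'https://buttons.github.io/buttons.html' + configHash // step 5: cross-origin reload
  iframe.style.width = width + 'px'   // step 6: freeze the measured size
  iframe.style.height = height + 'px'
}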

Given these steps, if you try to bundle JavaScript / CSS into HTML, it actually increases traffic. Bundling CSS into JavaScript is the only place that could give a little performance benefit.

If you really care about that tiny bit of loading speed, make a fork and skip step 5, so that you can bundle HTML & CSS into JavaScript.

more advanced apps

Most advanced web apps today are single-page apps. They don't reload the page, so the buttons on the page are a one-time thing. There is not much meaning in optimizing it.

requestIdleCallback

Experimental API, so NO. Using setTimeout to make code execute asynchronously? I already use async as much as possible, except for parts where it is not necessary.

I won't use experimental technology in production unless it's widely supported and stable enough. Even if it can be backed by a fallback with more compatible code, unless the benefit is significant, I'd rather wait for the API to be widely supported.

Conditional requests with XHR/CORS

First, unlike JSON-P, which is essentially a <script>, XHR is isolated from the page load event. When the XHR load event is triggered, the callback has not run yet, but with JSON-P, the load event means the callback has already finished. The same goes for the error event.
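To illustrate the difference (schematic only, not the actual implementation):

// JSON-P: when the <script> load event fires, the callback has already run.
window.ghCallback = function (data) { /* runs before script.onload */ }
var script = document.createElement('script')
script.src = 'https://api.github.com/repos/ntkme/github-buttons?callback=ghCallback'
script.onload = function () { /* the callback is already finished at this point */ }
document.head.appendChild(script)

// XHR/CORS: the load event only delivers the raw response; our own callback
// work (parsing, rendering) still has to happen afterwards.
var xhr = new XMLHttpRequest()
xhr.open('GET', 'https://api.github.com/repos/ntkme/github-buttons')
xhr.onload = function () {
  var data = JSON.parse(xhr.responseText) // callback work happens after load
}
xhr.send()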

Those are all extra layers of complexity, but at least I can live with those, and I already have an experimental implementation of this.

localStorage / sessionStorage

This is where I get uncomfortable. We make compromises all the time in web dev, but I don't like making compromises by saying maybe that's good enough. I respect users' privacy (and care about the overhead), so I don't collect usage data for analysis; how could I know it's good enough even with experiments?


Going back to your admin panel example: as a compromise, the easiest solution would be to disable the dynamic count. And if you really care about performance, put a static button directly on the page.


I’m open to ideas. As always, Pull Requests are very welcome.

I just wanted to provide some ideas to consider. I understand that they may not be applicable to github-buttons. Besides the API limit / caching issue, the buttons load fast and look professional/authentic.

We will be using the buttons on WordPress admin pages that are actively browsed by website creators during performance optimization.

image

requestIdleCallback has proven to be exceptionally effective (especially for widgets such as social buttons). It can enable loading a page with 1000+ buttons in 200-300ms while the buttons load smoothly based on available CPU capacity. It may be an option to test it with an HTML page with 100+ buttons to see the difference. setTimeout could be used for any browser that does not support it (yet), so the complexity is low and reliability is high.

In regards to the dynamic iframe, I understand your choice.

In regards to modern apps, a progressive web app (Google PWA) does load each page individually and has no ability to persist a button on the page.

https://developers.google.com/web/progressive-web-apps/

In regards to sessionStorage: if you're interested in investing the time, localStorage may be a better option. However, to quickly solve the main issue, sessionStorage may be a safe and easy option.

I am not familiar with CoffeeScript and don't know your exact wishes for the project.

I am interested in creating a concept / pull request. What options would you be interested in?

  1. requestIdleCallback and requestAnimationFrame
  2. localStorage cache
  3. conditional request based on XHR and/or Fetch API

A PWA does not conflict with the idea of a single-page app (SPA). A web app can be both a PWA and an SPA at the same time. A PWA won't necessarily be an SPA, and vice versa.


For browsers that do not support requestIdleCallback, setTimeout changes nothing compared to the current implementation. On the parent page, buttons.js has the async and defer attributes, and the code inside the <iframe> runs in a different context, so its synchronous code does not block the parent page at all. So requestIdleCallback may only help browsers that support it.

Also, there is no magic about requestIdleCallback: parent page code and <iframe> code have to call it separately. Event callbacks have to call it individually again in order to lower their priority.

My real concern is that the current implementation heavily relies on the load, error, and readystatechange events on the <iframe> and the JSON-P <script>. I detect (the <iframe>'s load event (and the JSON-P <script>'s (load or error) event if it exists)) from the parent page to determine the first time a button gets rendered. Adding requestIdleCallback means that the current detection method no longer works.

Instead, in the step "The embedded HTML string will load the same buttons.js from cache, and render the button":

On the parent page:

  1. Wait for the <iframe>'s load event so that contentWindow will be accessible.
  2. See if loaded is true in the <iframe>. If true, we're done (unlikely to happen); otherwise, set up the hook function onload in the <iframe>, and wait for the hook to be called.
  3. Go to the next step to get the button dimensions.

In the <iframe>:

  1. The code shall run with requestIdleCallback.
  2. It should set up a flag loaded when the button is fully rendered.
  3. It should call the hook function onload, if it has been set up, when the button is fully rendered.
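Schematically, the handshake could look like this (the helper and flag wiring is illustrative, not the actual implementation):

// Parent page: wait for the <iframe> load event, then either proceed or hook onload.
function whenButtonRendered(iframe, next) {
  iframe.addEventListener('load', function () {
    var win = iframe.contentWindow
    if (win.loaded) {
      next()             // already fully rendered (unlikely to happen)
    } else {
      win.onload = next  // hook, called by the <iframe> code once rendered
    }
  })
}

// Inside the <iframe>: render under requestIdleCallback, then signal readiness.
requestIdleCallback(function () {
  /* ... render the button here ... */
  window.loaded = true
  if (typeof window.onload === 'function') window.onload()
})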

For JSON-P, the current implementation, "fully rendered" means the <script>'s native load or error event, but if we put requestIdleCallback in the callback, the current method no longer holds.

For CORS, "fully rendered" means ((a 200 response and the callback is completed), or a non-200 response, or the CORS error event). That's already a lot of code to write and test.

What I want to say is: it's really complicated.


For your worst-case 1000-button example, a workaround would be to load buttons.js as a CommonJS module, and manually call render() after all the important stuff has been loaded.


I would consider CORS XHR + conditional requests + an LRU localStorage cache with a JSON-P fallback.

I'm open to requestIdleCallback, but it will be a super low priority, because of the complexity.

requestIdleCallback will benefit Firefox 55+, Chrome 47+ and Opera 34+. Chrome has a 60% market share so it will benefit about 70-80% of all browsers.

The benefit of requestIdleCallback is significant, and in essence the functionality is very simple: it enables prioritizing JavaScript execution. It makes it possible to start loading the buttons when CPU is available. On a fast desktop PC there will be no delay, while on an overloaded mobile phone the buttons may be delayed for 10+ seconds, allowing the main page to load much faster. It will make the buttons more friendly to a page. It sets the priority of the buttons to a lower level so that they won't compete for resources with the main JavaScript of a page.

parent page code and <iframe> code has to call it separately

It would be sufficient to add it in the parent code only (before the iframe is added to the DOM).

Example:

// requestIdleCallback, run tasks in CPU idle time
var idleCallback = function(task, timeframe) {
    if (window.requestIdleCallback) {
        // schedule for idle time
        window.requestIdleCallback(task, {
            timeout: timeframe
        });
    } else {
        // not supported, run task immediately
        task(); // or setTimeout(task,0);
    }
};

// draw buttons
idleCallback(function() {
// original code to insert iframe etc.
},10000); // allow waiting for 10 seconds, then force execution of code

In regards to the CORS XHR + conditional request + localStorage + JSON-P solution: what you describe is in essence a simple format that should be written in CoffeeScript. There is nothing special to consider. It appears to be a perfect solution that enables optimal caching.

I am interested in providing some examples; however, I have no experience with CoffeeScript, so it may be best if you would add a solution that best fits the existing buttons.js.

delayed for 10+ seconds

That's way too inconsistent. Thus it should be called by users (developers) if wanted, instead of being the default behavior. On a page there are usually only one or two buttons, which does not slow down the page loading much. If someone really cares, as I mentioned in a previous comment, you can manually load the library and call the render function. You can then wrap the call with requestIdleCallback.

const { render } = require('github-buttons')

if (window.requestIdleCallback) {
  requestIdleCallback(render)
} else {
  render()
}

It would be sufficient to add it in the parent code only (before the iframe is added to the DOM).

If the end goal is just to delay the start of loading, but not to lower the priority of the whole loading process, manually calling the function would be sufficient, but it could deliver a worse user experience. At the point when it starts loading buttons the browser would be idle, but what if, at the same time the code in the <iframe> starts, the app code becomes heavily loaded and busy again? Now the code in the <iframe> would compete with app code. On a not-so-powerful device, what the user may see then is lag in the app experience itself rather than just lag in app loading, which to me is clearly worse. That's why I said you need to call it everywhere to lower the priority of everything, if that really matters.

I agree that it may be something for developers to apply manually.

In regards to your arguments, however: requestIdleCallback is not intended to delay the loading process, it is intended to enable the browser to delay the loading while it is executing other code (to avoid a race for resources during critical execution time).

For desktop browsers there may be no delay (loading starts in 2-3 ms), while on a mobile phone it may be delayed for 10 seconds (it could be set to 1 second). If the browser decides to delay the code for 10 seconds, it may have a good reason to do so. Are the buttons really essential for those devices? In general, requestIdleCallback will load the code as fast as possible, so it's not really downgrading the code; it simply makes optimal use of the 'time in between' windows that are available during code execution.

The buttons will start loading immediately when execution tasks finish (this is what is meant by 'idle'). In practice, a website's script may be active for 100-200ms up to 1 second during a page load. The buttons would start loading instantly when the main script has been processed, or when it simply has some room in between (a few ms would be sufficient).

Example:

var work = function() { var x = 0; for (var i = 0; i < 10000; i++) { x += i; }  };

// Test 1
console.time('setTimeout(fn,0)'); console.time('requestIdleCallback 0');
setTimeout(function() { work(); console.timeEnd('setTimeout(fn,0)'); },0);
requestIdleCallback(function() { console.timeEnd('requestIdleCallback 0'); },{timeout: 1000}); // exec in 1 second

// Test 2 (apply tests individually)
console.time('setTimeout(fn,2)'); console.time('requestIdleCallback 2');
setTimeout(function() { work(); console.timeEnd('setTimeout(fn,2)'); },2);
requestIdleCallback(function() { console.timeEnd('requestIdleCallback 2'); },{timeout: 1000}); // exec in 1 second

In the above tests, the first test shows that requestIdleCallback waits until the work scheduled with setTimeout is completed, while in the second test, with a few ms of setTimeout delay, the idle callback is executed instantly.

Test 1:
image

Test 2:
image

In regards to the requirement to include requestIdleCallback in the iframe and other areas: the primary function is to allow the main code of a page to execute more efficiently. When the browser indicates that there is a time window for the buttons to load, they can simply make full use of that opportunity from then on. If the main code then suddenly starts other code execution, that may cause contention, but the probability is low. In the current situation the code is loaded via JSON-P, causing the exact same situation (the render call is made as soon as the JSON-P request finishes). There is a better chance of avoiding that contention by applying a single requestIdleCallback on the render method.

In regards to the benefit of including it as a default feature: it would make the buttons more friendly to diverse real-world situations. Browsers would have the ability to hold the code off when they are overloaded, so that a website or app has a better chance of satisfying the user with the available resources. The buttons will load within a defined time frame, so if it were critical to load the buttons within 100ms that would be possible, while 100ms still offers a lot of room for critical code execution.

A time window of 1 second may not cause a noticeable delay in loading of the buttons on slow devices while it would provide the page 1 second of time to use 100% CPU if needed. On powerful devices the buttons would still load within a few ms after the render call.

Looking at Facebook button, Twitter button, and Google +1 button:

Only Twitter's code contains requestIdleCallback. However, if you set a breakpoint on it, you will find that it's only used on embedded tweets / timelines to dynamically resize when the viewport changes. The Twitter buttons do not use requestIdleCallback at all.

To align with the major button providers, a potential 10-second delay is clearly unexpected behavior. If those buttons change their behavior in the future, I will reconsider it at that time. Thus, the conversation about requestIdleCallback ends here.

Revisiting the workflow as below:

  1. Load buttons.js on the parent page.
  2. buttons.js creates a same-origin <iframe> and renders it with an embedded HTML string.
  3. The embedded HTML string loads the same buttons.js from cache and renders the button.
  4. Once the button is rendered, get and save its dimensions.
  5. Reload the <iframe> with buttons.html instead of the HTML string. This will be cross-origin.
  6. Set the <iframe> size to the exact dimensions saved in step 4.

localStorage does not carry across domains. In step 2, localStorage belongs to the parent page's domain; in step 5, localStorage belongs to buttons.github.io.

I don't think it is a good idea to pollute the parent page's localStorage, but without a cache, conditional requests won't work. So let's say we use the parent page's localStorage anyway. Now in step 5, the code in the <iframe> won't be able to read the localStorage on the parent page's domain.


  • If it is the first time accessing an API (say, /users/ntkme):
    • Parent sends non-conditional request and gets response from server.
    • buttons.github.io sends non-conditional request again, and gets response from browser's cache.

In this case it's the same as the current JSON-P.


A user would visit different parent pages, and they cache different APIs separately, so they don't run out of space. However, at some point buttons.github.io runs out of space and starts purging data.

  • Assume /users/ntkme gets purged in buttons.github.io's localStorage, but parent page still has it:
    • Parent sends conditional request, and gets response from server.
    • buttons.github.io sends non-conditional request, and gets response from the server.

Also, it's possible that the parent page application uses localStorage itself and runs out of space.

  • Assume /users/ntkme gets purged from the parent page's localStorage, but buttons.github.io still has it:
    • Parent sends non-conditional request, and gets response from server.
    • buttons.github.io sends conditional request, and gets response from server.

In either case where one of the two caches gets dropped, even though it uses the same amount of API limit compared to the current JSON-P, it doubles the actual requests that go to the server. So the only case where it does better is when both the parent and buttons.github.io have the cache.

I'm not saying it won't work. In fact, it would work most of the time in theory. However, it would not only pollute the parent localStorage, but also save lots of messy duplicated data.


To avoid duplicating at least two copies of cache on multiple domains, it's possible to save it only on buttons.github.io and share it with parent using postMessage API with an extra <iframe>:

  1. Within the <iframe>, add a message listener that accepts origin buttons.github.io.
  2. Within the <iframe>, create a second <iframe> for a page on buttons.github.io.
  3. In the second <iframe> page, load the API (with cache + conditional request), then postMessage to the parent domain.
  4. In the first <iframe>, receive the API data, and continue.

But this could mess up as well if the parent page already has a message listener that does not check the origin.
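Schematically (origins and message shape here are illustrative, not the actual implementation):

// In the first (same-origin) <iframe>: only accept messages from buttons.github.io.
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://buttons.github.io') return
  /* continue rendering with event.data (the API data) */
})

// In the second <iframe> (a page on buttons.github.io): load the API data from
// cache or via a conditional request, then hand it up to the embedding frame.
function publish(apiData) {
  parent.postMessage(apiData, 'https://parent-page.example') // the embedding frame's origin
}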


Let me know If you have any better ideas.

My apologies for the delay.

In regards to passing data to buttons.html, it would be possible to load buttons.html using a hash, e.g. buttons.html#1233. The request would use browser cache and extracting the count from the hash costs a few bytes of code.
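For example (illustrative only), inside buttons.html the count could be read back with a couple of lines:

var count = parseInt(window.location.hash.slice(1), 10)
if (!isNaN(count)) {
  /* render the count */
}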

In regards to overall performance, it seems that the buttons load OK and fast. I believe that a dynamic iframe could increase performance a lot (cached requests cost significant overhead) but the main issue is exceeding the API rate limit / getting blocked by Github so it may be sufficient to keep the existing code and simply add localStorage and conditional requests.

In regards to localStorage space, I believe that it would be OK to simply ignore QUOTA_EXCEEDED_ERR. It would be an option to use a try { ... } catch(e) {} and if it fails, simply ignore it. It would be impossible to know what data is safe to delete so it would be better to leave it up to the site. The buttons won't consume much space. For situations where localStorage fails, the buttons will depend on API requests. For most of the buttons the API rate limit issue would be solved.

We still need a localStorage, and in your proposal it will be the parent domain one.

I simply don't think polluting parent domain's localStorage is a good idea. It's not about the amount of space it uses, but it's more of the namespace - we don't own the parent domain. Although we can prefix all the keys to emulate a namespace, we still cannot assume that it won't cause any side effect to the parent domain, or parent domain code won't cause any side effect to our code.

I believe that it is OK for a social widget to make use of localStorage.

localStorage is in essence like a cookie. If you use a namespace it should not cause any problems. To prevent issues it would be possible to use sessionStorage. It would lose the cache when closing tabs, however it prevents pollution of localStorage, and I believe that it would be sufficient to solve the GitHub API rate limit issue.

https://scotch.io/@PratyushB/local-storage-vs-session-storage-vs-cookie

You could also consider a JSON cookie and a short URL hash. The conditional requests could be based on a single time stamp of the API request (If-Modified-Since). The counts would cost space per button. 10 buttons would cost about 100-150 bytes in space.

http://hashids.org/

In regards to passing data to buttons.html, it would be possible to load buttons.html using a hash, e.g. buttons.html#1233.

I already use this method to pass data around. However, I won't pass the count to the <iframe>. If you look at mdo/github-buttons, there is a use case of directly putting the final <iframe> into the parent page. Using the hash to pass the count means two things:

  1. It could be a breaking change for the use case I mentioned above, unless I maintain two versions.
  2. As it’s possible to pass any number, it may no longer represent the truth.

After putting those into consideration, it’s not really an option to store the data in parent domain and pass it into our domain. As I said before the other way around, store on our domain and pass it to parent could work with postMessage. Or, just keep duplicated copies on both domains.

I would pick postMessage method if I have to pick one, but still I don’t like that solution.

sessionStorage would leave localStorage free, and cross-tab / window caching may not be essential to solve the GitHub API rate limit issue. sessionStorage also provides automated cache management.

In regards to the postMessage solution: a combination with sessionStorage could enable a single request for buttons.html for each button. Using onload on the iframe could trigger a postMessage from the parent with the button config and count.

My suggestion of using postMessage is to avoid touching the parent domain's cache, and to keep all the cache in one place to maximize the benefit of caching with conditional requests. In my case, when the <iframe> receives a request via the hash, it will request the API if it's not cached, and post a message containing the API data. No harm can be done.

Your proposal is totally different. Even if you use onload and only accept the message once to minimize the chance of receiving a message that we're not supposed to get, since the listener has to be set before onload, there is a time frame before the onload that parent domain code can simply do this to screw it:

setInterval(function () {
  postMessage("1234", "*")
}, 10)

Of course, you can add a checker in the listener:

let onceToken = 0
window.addEventListener("message", function (event) {
  // ignore messages that are not from the parent domain
  if (event.origin !== "parent.domain")
    return
  // ignore messages that come in before our page is fully loaded
  if (!(/m/.test(document.readyState) || (!/g/.test(document.readyState) && !document.documentElement.doScroll)))
    return
  // only accept the first message
  if (onceToken === 0 && (onceToken = 1)) {
    // do work here...
  }
}, false)

However, even with that level of sanity check, it's still possible to get screwed.

To make it harder, we can generate a random token and pass it to the <iframe> via the hash, then pass the same token back in the message to validate. However, since it requires passing the token down to the <iframe> via the hash, there is no way to hide it. So, you can still get screwed.
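Schematically, the token check from the <iframe> side would look like this (illustrative); as noted, the parent page can read the <iframe> src and thus the token, so this does not actually make the scheme safe:

var token = window.location.hash.replace(/^#token=/, '')
window.addEventListener('message', function (event) {
  if (!event.data || event.data.token !== token) return
  /* accept event.data.count here */
})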

That's bad security.

You may say: it's just a button and no one will get harmed. What's the point of security?

There is indeed a point. Code security is part of code quality. In some rare cases, one might compromise security to make something work under specific conditions, but open source must maintain a strong security standard. Even if there is no harm in this particular case, it's entirely possible other coders might copy part of the code with no knowledge of how it works, and introduce a security loophole into their project.

There was a paper about this: Leveraging Flawed Tutorials for Seeding Large-Scale Web Vulnerability Discovery. Well, the paper itself is not super interesting, and their example is SQL injection done by concatenating strings to build a query. That's extreme compared to our case.

I don't want to in any way teach people to write insecure code.


Another reason for insisting on keeping the cache on our domain is trust. Since this is a JavaScript-based implementation, people basically have to trust it to not do anything malicious. Without knowing what the script is doing, accessing another domain's localStorage, sessionStorage, or cookies is just one step away from malicious behavior.

I understand your concern about security and for not wanting to access localStorage or sessionStorage on the parent domain.

A solution may be to load the iframe with the username as a hash, use postMessage to the parent to communicate the count and do the size calculation in the background to optionally resize the iframe.

For modifying the size of the iframe I would suggest to have a look at requestAnimationFrame.

https://developers.google.com/web/fundamentals/performance/rendering/optimize-javascript-execution#use_requestanimationframe_for_visual_changes

From the link that you provided (at the end of the page):

Avoid micro-optimizing your JavaScript

It may be cool to know that the browser can execute one version of a thing 100 times faster than another thing, like that requesting an element's offsetTop is faster than computing getBoundingClientRect(), but it's almost always true that you'll only be calling functions like these a small number of times per frame, so it's normally wasted effort to focus on this aspect of JavaScript's performance. You'll typically only save fractions of milliseconds.

That's indeed the point.

If you profile a page that has nothing but only 100 buttons, you will see >65% of time was spent in "Scripting", <5% of time on "Rendering" and only <0.1% on "Painting". On this particular machine I'm using, "Rendering" + "Painting" uses ~180ms, which is about 1.8ms per button. So, even without testing I would know that it will "only save fractions of milliseconds".

I do not agree with that statement. It may be good advice for the novice web developer who seeks to address web performance (in order to prioritize their time efficiently), but in my opinion it is not good advice for the more experienced developer.

Micro-optimizations cost a lot of effort, and it is true that it is essential to prioritize the major optimizations over micro-optimizations; the way Google describes it is also correct in itself (a few micro-optimizations do not provide a significant benefit). However, 1000 micro-optimizations add up. As a core practice, taking the extra effort from the start, micro-optimizations enable significantly better and more reliable results.

500μs = a fraction of a millisecond. 100 * 500μs = 50ms = a lot. 100ms = users start to feel that there is a delay.

Issues covered by micro-optimizations do not stand on their own. They also interact with and influence each other, sometimes causing an exponentially increasing performance hit.

In regards to the buttons. They will be added on almost any type of websites. Websites in general become more and more javascript heavy, demanding more and more from the devices / CPU. If the buttons would save just 1ms it will simply be one of the 100 factors that could potentially save 1 full second. In itself 1ms is negligible. Together with just 100 other factors on a website, it is a major aspect that deserves optimization.

requestAnimationFrame is listed as a major optimization by Google. Just think about the unique situation that is described by Google where 2-3 buttons start loading at the end of a frame causing a small visual jank in the rendering. It could simply have been prevented.

I understand that you may consider it a waste of energy and time. I just wanted to share my perspective in regards to the benefits of micro optimizations as a best practice.

Just think about the unique situation that is described by Google where 2-3 buttons start loading at the end of a frame causing a small visual jank in the rendering. It could simply have been prevented.

Seems you have a misunderstanding here.

JavaScript is single threaded. So, requestAnimationFrame would temporarily block other code from running at the beginning of a frame and then start its callback to give it an opportunity to not miss a repaint in this frame.

First, buttons are not animations, so buttons themselves will not have visual janks. If it starts rendering in the later part of a frame, it will simply miss the opportunity to be painted in the current frame, and will be painted in the next frame.

If you are concerned that button code may cause the other animations on the parent page to jank: as long as the parent page utilizes requestAnimationFrame, it should get higher priority and be in good shape.

If both the button and the parent page have it, nothing seems to be better than the previous case.

If we use requestAnimationFrame and the parent does not, it helps nothing, because we can block parent page code and it may cause jank in parent page animations.

requestAnimationFrame does not block other render processes. In practice, it will cause less stress during a page load and the buttons will be rendered smoothly around DOM-ready.

https://swizec.com/blog/how-to-properly-wait-for-dom-elements-to-show-up-in-modern-browsers/swizec/6663

The buttons are part of the render and paint process during a page load. My argument is that no matter how small the impact on performance is, any saving will affect a website's ability to satisfy an individual visitor in any situation. Together with hundreds of other aspects it adds up.

requestAnimationFrame optimizes the following which applies to the buttons.

The browser can optimize concurrent animations together into a single reflow and repaint cycle, leading to higher fidelity animation. For example, JS-based animations synchronized with CSS transitions or SVG SMIL.

https://github.com/vasanthk/browser-rendering-optimization/blob/master/README.md

In regards to the iframe. When the iframe is rendered within a rAF callback the browser has the opportunity to optimize rendering of the DOM change invoked paint that includes the iframe. It is sufficient to leave it up to the browser.

https://codepen.io/danieldiekmeier/pen/mWRjzb

I understand it if you prefer not to use it, I just wanted to suggest it as an option.

The main issue is the Github API rate limit / users getting blocked from Github.

It may not block “Rendering”, but any single JavaScript code executing must block other JavaScript code from running, as it’s single-threaded. Thus the parent page code that triggers rendering (e.g. JavaScript based animation) could indeed be blocked from running, which would result in a visual jank.

Say you have a callback that takes more than 16.67ms (one frame at 60fps); putting it in requestAnimationFrame will guarantee it starts at the beginning of a frame. During the whole frame, it will block other JavaScript code unless another callback gets a similar or higher priority. If another callback for an animation is supposed to run in this particular frame, with a lower priority it would miss its chance, resulting in visual jank. Thus requestAnimationFrame should primarily be used by things that are frame-sensitive (which is indeed animation).

Resizing a button only takes less than one frame of time, no matter which frame it is, and the reflow would happen anyway whether you use requestAnimationFrame or not. Each button on a page comes up for resizing at a random time. If it's lucky and two buttons get resized in the same frame, two reflows become one reflow. However, that's very unlikely to happen because each button loads asynchronously.

Multiple callbacks registered by requestAnimationFrame will be grouped into the same frame only if they are all registered with requestAnimationFrame in the same previous frame. Again, because each button loads asynchronously, even with requestAnimationFrame, we end up in the same situation: each button causes a reflow in a random frame unless it gets lucky.

Let's be clear that requestAnimationFrame on resizing only saves time when the reflow ends up being bundled with other rendering events within the same frame. With requestAnimationFrame it will land in the next frame; without requestAnimationFrame it will land in the current frame. If there is no other rendering in those two frames, there is no improvement.


The real micro-optimization is to group the reflows caused by multiple buttons into a single frame, which means no buttons will appear until all buttons are ready. This is what Google refers to as Avoid Large, Complex Layouts and Layout Thrashing according to your link.

Regarding your example:

Inserting an <iframe> into the DOM is a tricky one, because most browsers today share the same thread between the parent page and the <iframe>. The browser will first load the DOM in the <iframe>, then a reflow happens on the parent page. Thus inserting an <iframe> is costly.

On my machine, the "adding iframes" in the following example takes less than 300ms:

console.time('adding iframes')
let a = 0
while (a++ < 100) document.body.append(document.createElement('iframe'))
console.timeEnd('adding iframes')

console.time('resizing')
const ref = document.getElementsByTagName('iframe')
for (let i = 0, len = ref.length; i < len; i++) {
  const iframe = ref[i]
  iframe.style.height = '500px'
  iframe.style.width = '500px'
}
console.timeEnd('resizing')

On average inserting one <iframe> takes less than 3ms, and your requestAnimationFrame example basically spaces this out to use the first ~3ms of each 16.67ms frame to load an <iframe>. In that case it does have a little benefit. However, I shall remind you we're testing empty <iframe>s here. appendChild is called asynchronously by requestAnimationFrame in your case, but <iframe> loading is synchronous. requestAnimationFrame will insert a new iframe every 16.67ms, but if loading an <iframe> takes more than 16.67ms, using requestAnimationFrame will still cause scripts to compete in the same time frame back-to-back.

Now here is the important part: "resizing" 100 <iframe>s takes less than 0.3ms in total in the example above, as it's all done in the same 16.67ms frame, causing a single reflow. That is what I said previously: the real optimization is to group reflow and repaint events into the same frame. If you use requestAnimationFrame on resizing, what can be done in 0.3ms will be spaced out into multiple frames, causing every single frame to have a reflow, a.k.a. layout thrashing, which is much more expensive.


With that said, however, grouping all the reflows in the real world means the huge complexity of wrapping all button loading with Promise.all() or equivalent. By the way, we have no control over other button services, so we cannot group them with ours. That means things cannot add up in a straightforward manner like you would expect.

On average there are fewer than three GitHub buttons on a page. So even if we implement a complex mechanism, the average benefit will be saving less than two frames of reflowing: that's a fraction of a millisecond, which is exactly the kind of micro-optimization that's not worth doing.


Nevertheless, I thank you for giving me an opportunity to look deeply into some new technology I was not aware of.

The main advantage of requestAnimationFrame for the buttons is that the rendering of the buttons would be scheduled for the moment that the browser is ready to paint to the screen, thus providing more CPU time for expensive scripts that may be running on a page during a page load.

Inserting <iframe> is relatively lightweight and synchronous. Loading <iframe> is heavy, and it's asynchronous from the parent page perspective, but within the <iframe> the page loading itself is synchronous. So we can model inserting button <iframe> with requestAnimationFrame like:

requestAnimationFrame(function () {
  setTimeout(aHeavyComputingFunction, 0)
})

requestAnimationFrame is supposed to run a lightweight callback before the repaint, to paint within the given frame with minimal overhead. However, we're running some heavy stuff instead, and that heavy code takes a long time before it triggers what a browser would consider "Rendering" (remember, ~65% of time in "Scripting"). At that point it may simply have missed the given repaint frame.

From that perspective, we're not using requestAnimationFrame as it was intended. We're now using it to delay the beginning of the heavy execution. Because we cannot use requestIdleCallback, which could delay execution for an unknown time, we should use requestAnimationFrame instead to delay it for a random time between 0 and 16.67 ms? Why not just use setTimeout() to get a guaranteed delay then?

That is abusing features.

My argument is that "relatively lightweight" is a major factor when seen as part of 100 similar factors. If the buttons could reduce the load by just 1ms, it would be a significant advantage. From my perspective, it would be best if the buttons never got in the way of the page load / rendering of a website or app and loaded based on available CPU resources once the page is already visible to the user. On some devices the buttons may be delayed for a few seconds to provide a faster time-to-interactive, and on other devices the buttons should load instantly without an additional delay compared to the current format. The buttons are an 'addition', almost never a primary feature.

requestIdleCallback is a solution that can achieve the described result.

https://developers.google.com/web/updates/2015/08/using-requestidlecallback

requestAnimationFrame is also able to provide a similar result when specifically applied to DOM changes in the context of a page load. I agree that you can consider it a sort of abuse of the feature; however, the result is what matters.

In regards to the iframe: Google is planning to render cross-origin iframes in their own processes, which would make it possible to do heavier work in the iframe.

Subframes are currently rendered in the same process as their parent page. Although cross-site subframes do not have script access to their parents and could safely be rendered in a separate process, Chromium does not yet render them in their own processes. Similar to the first caveat, this means that pages from different sites may be rendered in the same process. This will likely change in future versions of Chromium.

https://www.chromium.org/developers/design-documents/process-models

When you abuse requestAnimationFrame for 100 similar factors you are pretty much guaranteed to have exactly 100 separate repaints / reflows, versus without it you get at max 100 repaints / reflows based on the order of execution. Say that we can finish 4 buttons per 16.67 ms, then we have 25 repaints without requestAnimationFrame, with requestAnimationFrame we have 100 repaints - that's Layout Thrashing.

The more complex a page is, the more layout thrashing costs. You cannot simply say it will reduce the load time without analyzing the possible side effects. Things do not become faster just because a Google article says so.

requestIdleCallback is the right solution to delay execution. In a previous comment, I have already made clear that it will not be the default behavior. It is up to the user to use the API exposed by this library, and wrap it when needed.

I've heard that Google is going to do multi threading for cross-origin <iframe>, but remember that in this implementation we do a two pass rendering, the first pass is in a same origin <iframe>, and that is never going to be on a different thread.

requestAnimationFrame actually prevents layout thrashing.

http://wilsonpage.co.uk/preventing-layout-thrashing/

All rAF callbacks always run in the same or next frame of work
Any rAFs queued in your event handlers will be executed in the ​same frame​.
Any rAFs queued in a rAF will be executed in the next frame​. (Same for any queued within IntersectionObserver or ResizeObserver callbacks.)
All rAFs that are queued will be executed in the intended frame.

https://medium.com/@paul_irish/requestanimationframe-scheduling-for-nerds-9c57f7438ef4

rAF callbacks are stacked, so if you draw 50 buttons at once, it will complete the work for all 50 buttons and then paint to the screen (1 repaint).

In the context of a page load, which would apply to the buttons, rAF could provide a similar advantage as requestIdleCallback.

There are several options for optimization. It would be possible to use a JavaScript-inserted rel="preload" as="document" for browsers that support it, with an onload handler that triggers a rAF that draws the buttons on the screen. It would reduce the load on the main thread, and drawing the buttons would be smooth and efficient.

https://www.smashingmagazine.com/2016/02/preload-what-is-it-good-for/
https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content
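As a sketch of the idea (illustrative only, for browsers that support rel="preload" with as="document"; the callback body is a placeholder):

var link = document.createElement('link')
link.rel = 'preload'
link.as = 'document'
link.href = 'https://buttons.github.io/buttons.html'
link.crossOrigin = 'anonymous'
link.onload = function () {
  requestAnimationFrame(function () {
    /* draw the buttons here */
  })
}
document.head.appendChild(link)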

You cannot simply say it will reduce the load time without analyze the possible side effect. Things do not become faster just because a Google article says so.

I agree. If you would be interested in trying an optimization solution, it would be best to create a test to measure its benefits.

Following are examples for loading 10 cross-origin <iframe> without and with requestAnimationFrame.

const loadIframe = function () {
  const iframe = document.createElement('iframe')
  iframe.src = 'http://buttons.github.io/buttons.html#href=http%3A%2F%2Fbuttons.github.io%2F%23&data-icon=octicon-star&data-text=Star&data-size=large'
  document.body.appendChild(iframe)
}
let i = 0
while (i++ < 10) loadIframe()

Without requestAnimationFrame

const loadIframe = function () {
  const iframe = document.createElement('iframe')
  iframe.src = 'http://buttons.github.io/buttons.html#href=http%3A%2F%2Fbuttons.github.io%2F%23&data-icon=octicon-star&data-text=Star&data-size=large'
  document.body.appendChild(iframe)
}
let i = 0
const callback = function () {
  loadIframe()
  if (++i < 10) {
    requestAnimationFrame(callback)
  }
}
requestAnimationFrame(callback)

With requestAnimationFrame

Putting requestAnimationFrame in our use case will cause layout thrashing due to the nature of this implementation. Now you have a benchmark to look at, so I don't have to repeatedly explain the same thing over and over again. Look at the extra frames / paints (14 with requestAnimationFrame versus 3 without), the ~5 ms of extra time in rendering and the ~2 ms of extra time in painting. Exactly what I have explained! So please stop pasting links and claiming them as universal truth without considering the specific use case.

You may say that when a page has 100 <iframe> buttons, even if requestAnimationFrame causes layout thrashing, it appears to be smoother. That is true, but only because it spaces out the heavy work and paints much more frequently, which eats more CPU in the end.

For the typical use case which is a tiny number of buttons, using requestAnimationFrame does not give parent page much time for execution, so the real benefit is tiny. By putting the overhead into consideration, it would actually perform worse in terms of both loading time and total CPU time.


Preloading a cross-origin document requires CORS headers. I looked at preload a long time ago, and at that time GitHub Pages did not send the CORS "Access-Control" header. I just checked again and now they set it to "*", so preload should be feasible now.

The test isn't realistic. You are comparing rendering in 100% CPU time without a page load context. In the real world the requestAnimationFrame would function like a prioritization mechanism allowing the app to load faster and the buttons to render smoothly.

Further, in your test you are writing the iframes in separate frames on purpose. requestAnimationFrame could serve as a callback stack that writes all DOM changes in one frame. Simply writing 20 iframes to the DOM is actually layout thrashing, while correct usage of requestAnimationFrame would prevent it.

// example requestAnimationFrame test
const iframe = document.createElement('iframe');
iframe.src = 'http://buttons.github.io/buttons.html#href=http%3A%2F%2Fbuttons.github.io%2F%23&data-icon=octicon-star&data-text=Star&data-size=large';
const loadIframe = function () {
  document.body.appendChild(iframe.cloneNode());
}
requestAnimationFrame(function() {
  let i = 0
  while (i++ < 10) {
    loadIframe()
  }
});

Oh My. First, please don’t say it’s on purpose. I modeled it from a link you pasted in previous comment: https://codepen.io/danieldiekmeier/pen/mWRjzb

And here is the result for your new example:

screen shot 2017-12-12 at 7 56 30 am

First, the shape of the timeline looks exactly the same as my example without requestAnimationFrame. What happens here is that you delay the start of the benchmark by a random 0-16.67 ms. Everything after that is the same as the example without requestAnimationFrame.

Again, I have said multiple times: when abusing requestAnimationFrame to lower the priority of its callback in the scheduler, it's actually done by delaying, and it does not really lower the priority of any work inside it once the callback starts. In fact the browser could even give it higher priority when executing, as the intention is to guarantee that an animation precisely meets a frame. Put in the page context, this means you will only save a single random 0-16.67 ms for the parent page. That's all, and nothing adds up in this model when you increase the number of buttons.

<link rel="preload" href="https://buttons.github.io/buttons.html" as="document" crossorigin="anonymous">

Chrome warns that <link rel=preload> must have a valid `as` value. It turns out Chrome does not support preload with as="document".

https://bugs.chromium.org/p/chromium/issues/detail?id=593267

From that Chrome bug report:

To clarify csharrison@'s point, at current prioritization scheme, there is concern that iframe preloading would end up delaying the downloading of resources for the current page, which are arguably more important in the general case.

If the end goal is to give the parent page higher priority to load, preloading is the wrong thing to do as of today. Prefetching might be OK, but since the second and subsequent requests are cached, it only benefits the loading speed of the first button. Again, it will reduce the time before the buttons start loading, so less time will be given to the parent page. The resource loader is multi-threaded, so fetching resources for an <iframe> does not impact the parent page's performance. Adding prefetch will only make the button code land even earlier, thus making the parent page slower.

Once a browser implements loading cross-origin <iframe>s in their own threads, preload could be a great improvement. However, based on the discussion on the Chrome bug tracker, that is not going to land in the near future.

In the context of a page load the callback behaves differently. I put the requestAnimationFrame test on a regular website (www.people.com) and the callback fires in 500-600 ms.

[screenshot: requestAnimationFrame timing test on www.people.com]

As the buttons basically perform just DOM mutations, requestAnimationFrame is a way to defer that work to the moment the browser is ready to paint to the screen. Drawing 2-3 buttons doesn't cost significant time, but during a page load a lot is being done: a heavily loaded mobile device could be processing 3 MB of CSS plus 1 MB of jQuery scripts that also modify the DOM. The buttons could start competing for resources during the critical phase of a page load, even though they are not a critical feature of most websites.

Simply inserting the iframes and modifying their size will cost significant effort on a mobile device during a page load. requestAnimationFrame could reduce the stress significantly.

My argument is that it may be best to optimize the loading of the buttons so that they will load as fast as possible, but never get in the way of the primary parts of a website during a page load.

In regards to the buttons, an option to investigate could be using 2+ chained requestAnimationFrame callbacks so that the buttons start rendering after the primary parts have been painted to the screen, letting the app load a fraction faster while the buttons render almost just as fast.
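A minimal sketch of that idea, using a hypothetical afterFrames helper that is not part of buttons.js: each chained callback waits for one more frame before the buttons touch the DOM.

// Hypothetical helper: run `callback` after `n` animation frames have passed,
// giving the parent page a chance to paint before the buttons mutate the DOM.
function afterFrames(n, callback) {
  if (n <= 0) {
    callback();
  } else {
    requestAnimationFrame(function () {
      afterFrames(n - 1, callback);
    });
  }
}

// e.g. start rendering the buttons only after the third frame
afterFrames(3, function () {
  // insert the button <iframe>s here
});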

From MDN:

requestAnimationFrame() calls are paused in most browsers when running in background tabs or hidden <iframe>s in order to improve performance and battery life.

Did you test it with the window not visible or not focused, so that it is treated as a background tab and the callback is delayed? I ran the following on people.com more than 20 times in both Chrome and Safari, and it consistently fired within 17 ms. Then I tested with focus on a different window, and it indeed fired much later.

console.time('fire')
requestAnimationFrame(function() {
  console.timeEnd('fire')
})

Simply inserting the iframes and modifying their size will cost significant effort on a mobile device during a page load. requestAnimationFrame could reduce the stress significantly.

This is something you misunderstood.

What requestAnimationFrame does is schedule the callback at a different time. You need luck for the buttons' reflow to be combined with the reflow caused by the parent page into a single paint, with or without requestAnimationFrame. It doesn't reduce stress.

Look at the profiler result again: "Scripting" is where most of the cost comes from, which means the logic before insertion and the code inside the <iframe> take the most CPU time. A single requestAnimationFrame only delays it by 0-17 ms; it does not reduce the stress of the heaviest parts.

load as fast as possible, but never get in the way of the primary parts of a website during a page load.

Again, requestIdleCallback is the right way.
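A minimal sketch of what that could look like, with a plain setTimeout fallback for browsers that don't implement requestIdleCallback (scheduleRender is just an illustrative name, not buttons.js API):

// Schedule non-critical work when the browser is idle; fall back to a short
// setTimeout in browsers that do not implement requestIdleCallback.
var scheduleRender = window.requestIdleCallback
  ? window.requestIdleCallback.bind(window)
  : function (callback) { setTimeout(callback, 1); };

scheduleRender(function () {
  // render the buttons here
});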

You are forgetting the context of a page load. In that regard, it has nothing to do with animations on a site. There is just one action to perform at one time (DOM mutations).

If you add the test code to the header and load the page (you can easily test the HTML by adding <base href="http://www.people.com/">), you will see that requestAnimationFrame can be used to shift the loading of the buttons to a moment at which they are less likely to get in the way during the critical phase of a page load.

For example, you could use 2 or 3 requestAnimationFrame callbacks to ensure that the main DOM changes are visible before the buttons start painting to the screen. This could improve time to first meaningful paint (TTFMP), a term coined by Google for the time it takes before a website is visible to such an extent that it can be perceived by the user.

https://developers.google.com/web/tools/lighthouse/audits/first-meaningful-paint

The buttons simply have a significant impact on this aspect (when not custom optimized), so it would be very beneficial if this were optimized in buttons.js to serve millions of website visitors.

When a user includes <script async src="buttons.js">, the script is loaded asynchronously and will land after TTFMP. After TTFMP, calling requestAnimationFrame only delays by 0-17 ms. If you want requestAnimationFrame to make a difference here, you have to load the script synchronously, which is definitely not a good idea.

From MDN:

The number of callbacks is usually 60 times per second, but will generally match the display refresh rate in most web browsers as per W3C recommendation.

I played with it more, and it turns out requestAnimationFrame does not even try to get close to the display's fps when busy. That was a misunderstanding on my part. But when the browser is really busy, the behavior is similar to requestIdleCallback in that the delay is unpredictable.

As you can see in my test on www.people.com, requestAnimationFrame fires after ~600 ms (when the browser starts to paint).

Drawing the buttons isn't an animation; it is part of the browser's render / paint process, and requestAnimationFrame can be used to optimize when that work happens within the process.

My only argument for its potential usage is that it would be best if the buttons never got in the way of the main website / app and started rendering after TTFMP. This could be achieved using requestAnimationFrame (e.g. chaining 2-3 callbacks to select a later frame).

For cached requests, buttons.js will load instantly and start rendering before TTFMP, so the buttons will start competing for resources during the critical phase of a page load.

Putting requestAnimationFrame directly on the page sometimes gives 450 ms in my tests; replacing it with requestIdleCallback does almost exactly the same.

Putting it in a <script async src="buttons.js"> on people.com mostly gives 0 ~ 30 ms, sometimes 80-90 ms, in my tests.

The reason is that requestAnimationFrame does not do best-effort scheduling, and I believe best-effort scheduling is the correct implementation for requestAnimationFrame. There has been an issue open on Chromium since 2015:

https://bugs.chromium.org/p/chromium/issues/detail?id=527123

There hasn't been any real effort on the issue in the last two years, but at least someone on the Chromium team does agree that requestAnimationFrame and other animation-related functions should use best-effort scheduling. When the implementation is going to change is unknown, and maybe it never will, since there are already tons of cases of abusing requestAnimationFrame as a requestIdleCallback polyfill.

I got distracted by the off-topic discussion, but this will be landing soon.

I verified again that browsers do cache responses and send conditional requests automatically; the reason it is not happening currently is that the JSONP request contains a query string.

I tested CORS XHR, and it did conditional requests without any extra effort.
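Roughly, such a request looks like this (the repository URL and logging are just for illustration); without a JSONP callback parameter in the query string, the browser can cache the JSON response and re-validate it with If-None-Match / If-Modified-Since once the Cache-Control max-age expires:

// Plain CORS XHR: no JSONP query string, so the browser can cache the response
// and send conditional requests on its own.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/ntkme/github-buttons');
xhr.onload = function () {
  var data = JSON.parse(xhr.responseText);
  console.log('stargazers:', data.stargazers_count);
};
xhr.send();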

Once I finish the unit test I will push the change.

Conditional requests are now in place if the browser supports CORS XHR.

Hi!

Thanks a lot for the fix. It does appear to improve the caching behaviour; however, it doesn't appear to enable conditional requests, which would fix the GitHub API block issue. In Chrome 62 the requests are cached in the browser, but it doesn't appear to send a conditional request.

Developers often use features such as "Disable cache" for testing, and they are also the users who are most impacted by being blocked from the GitHub API.

A conditional Fetch API request returns 304 Not Modified and could still use the browser cache.

var etag = 'W/"97706faba5eb0fafbc31fee9b245d0a2"';
var last_modified = 'Sat, 13 Jan 2018 01:32:16 GMT';

var headers = new Headers();
headers.append('if-none-match', etag);
headers.append('if-modified-since', last_modified);

fetch('https://api.github.com/repos/josdejong/jsoneditor', {
  method: 'GET',
  headers: headers,
  mode: 'cors',
  cache: 'default'
}).then(function (response) {
  console.log(response.status, response);
});

An option for consideration:

// Download a resource with economics in mind!  Prefer a cached
// albeit stale response to conserve as much bandwidth as possible.
fetch("some.json", {cache: "force-cache"})
	.then(function(response) { /* consume the response */ });

https://hacks.mozilla.org/2016/03/referrer-and-cache-control-apis-for-fetch/

Chrome sometimes returns "from disk" and sometimes 304. Reading from disk (which is governed by the "Cache-Control" header) actually reduces requests further by not contacting the server at all. After that cache expires, you can see Chrome send a conditional request and get a 304 back.

From the article you linked:

“default” means use the default behavior of browsers when downloading resources. The browser first looks inside the HTTP cache to see if there is a matching request. If there is, and it is fresh, it will be returned from fetch(). If it exists but is stale, a conditional request is made to the remote server and if the server indicates that the response has not changed, it will be read from the HTTP cache. Otherwise it will be downloaded from the network, and the HTTP cache will be updated with the new response.

This appears to be exactly the same as the cache behavior for XHR.

“force-cache” means that the browser will always use a cached response if a matching entry is found in the cache, ignoring the validity of the response. Thus even if a really old version of the response is found in the cache, it will always be used without validation. If a matching entry is not found in the cache, the browser will make a normal request, and will update the HTTP cache with the downloaded response.

This is the wrong thing to do, because it always "reads from disk" and never sends conditional requests.

Here is an example of a conditional request I saw in Chrome after the Cache-Control max-age expired.

General

Request URL:https://api.github.com/repos/ntkme/github-buttons
Request Method:GET
Status Code:304 Not Modified
Remote Address:192.30.255.116:443
Referrer Policy:no-referrer-when-downgrade

Response Headers

Access-Control-Allow-Origin:*
Access-Control-Expose-Headers:ETag, Link, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval
Cache-Control:public, max-age=60, s-maxage=60
Content-Security-Policy:default-src 'none'
Content-Type:application/octet-stream
Date:Sat, 13 Jan 2018 20:20:34 GMT
ETag:"f48bd4822b1e6cd8c45b476cb1c3eb6d"
Last-Modified:Thu, 11 Jan 2018 13:37:16 GMT
Server:GitHub.com
Status:304 Not Modified
Strict-Transport-Security:max-age=31536000; includeSubdomains; preload
Vary:Accept-Encoding
Vary:Accept
X-Content-Type-Options:nosniff
X-Frame-Options:deny
X-GitHub-Request-Id:9A9B:1E362:6D60D4:996C01:5A5A6A12
X-RateLimit-Limit:60
X-RateLimit-Remaining:59
X-RateLimit-Reset:1515878273
X-Runtime-rack:0.037787
X-XSS-Protection:1; mode=block

Request Headers

Accept:*/*
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.9
Connection:keep-alive
Host:api.github.com
If-Modified-Since:Thu, 11 Jan 2018 13:37:16 GMT
If-None-Match:W/"f48bd4822b1e6cd8c45b476cb1c3eb6d"
Origin:https://ntk.me
Referer:https://ntk.me/works/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36

Hi!

I've used your test page at https://ntk.me/works/ and it confirms that when the Disable cache option in Chrome is active (behavior that can also be triggered by many factors, including installed plugins, proxies, etc.), Chrome 62 will not use a conditional request.

There may be many factors that make a browser simply send a non-conditional request.

As you can see, GitHub blocked my IP after loading your test page just twice.

[screenshot: GitHub API rate limit exceeded response]

My advice is to create a custom solution in which buttons.js manages the conditional requests itself, so that it works more reliably even with the Disable cache option enabled (which is used by many developers who may also need GitHub API access).
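For what it's worth, a rough sketch of what such a custom layer could look like; fetchCount and the localStorage key are hypothetical and not part of buttons.js, and on a 304 the previously stored count is reused:

// Hypothetical cache layer: keep the ETag and the count in localStorage,
// send If-None-Match manually, and reuse the stored count on 304 Not Modified.
function fetchCount(repo) {
  var key = 'github-button:' + repo; // hypothetical cache key
  var cached = JSON.parse(localStorage.getItem(key) || 'null');
  var headers = new Headers();
  if (cached) {
    headers.append('If-None-Match', cached.etag);
  }
  return fetch('https://api.github.com/repos/' + repo, { headers: headers, mode: 'cors' })
    .then(function (response) {
      if (response.status === 304 && cached) {
        return cached.count; // not modified: reuse the stored count
      }
      return response.json().then(function (data) {
        localStorage.setItem(key, JSON.stringify({
          etag: response.headers.get('ETag'),
          count: data.stargazers_count
        }));
        return data.stargazers_count;
      });
    });
}

// usage: fetchCount('ntkme/github-buttons').then(function (count) { console.log(count); });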

That's what Disable cache does.

I don't think it's a good idea to override its behavior, because someone may eventually open an issue saying Disable cache is broken.

As a workaround, if you have separate systems for development and production, you can conditionally set data-show-count="true" only in the production version. For example, with some kind of template engine in Node.js it could be something like:

data-show-count="{{ process.env.NODE_ENV === 'production' }}"

Although this is just pseudo-code, it should be very straightforward to implement in any language with a proper template engine.
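For instance, one hypothetical way to do it in plain Node.js with a template literal (no real template engine required):

// Only show the star count in production builds; the button markup is otherwise unchanged.
const showCount = process.env.NODE_ENV === 'production';
const buttonHtml = `<a class="github-button"
  href="https://github.com/ntkme/github-buttons"
  data-icon="octicon-star"
  data-show-count="${showCount}">Star</a>`;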

After all, I don't want to overly complicate this project. If you desperately want a built-in cache management system that ignores the browser's default caching, please feel free to fork.