WICG / netinfo

Home Page: https://wicg.github.io/netinfo/

Providing Network Speed to Web Servers

bengreenstein opened this issue · comments

What is this?

This is a proposal for a new HTTP request header and an extension to the Network Information API to convey the HTTP client's network connection speed.

  • The header specifies the service level provided by the client's network connection. Levels of service are broken down into effective connection types (ECTs), which bin measured network characteristics such as round trip times and available bandwidth by the cellular connection technology with the most similar typical performance.
  • The API extension provides the same information as the header, and additionally provides an estimate of the current round trip time (RTT) measured at the transport layer.

By default, the header will be sent only on slow network connections, i.e., those that perform as slowly as or slower than typical 2G connections.

Why do we care?

Web servers and proxies benefit, because they can tailor content to network constraints. For example, on very slow networks, a simplified version of the page can be provided to improve document load and first paint times. Likewise, higher-bandwidth activities like video playback can be made available only on faster networks. Users benefit by being offered only content that they can consume given network constraints, which results in faster paints and less frustration.

Goals

The goal of the header and API is to provide network performance information, as perceived by the client, in a format that's easy to consume and act upon. The header's aim is to convey a level of performance in an intuitive format at a granularity that is coarse enough to key cache entries based on its value. The header aims to allow proxies and web servers to make performance-based decisions, even on the first request. The API extension aims to make it easy to make such decisions from within JavaScript. And the RTT API's aim is to provide more precise performance information for more sophisticated consumers.

Non-goals

We do not plan to convey the actual connection type used, because it is already available via the Network Information API's downlinkMax and its mapping to underlying connection technology, and it is not as actionable as providing the performance of the connection. E.g., a Wi-Fi or 4G connection can be slower than a typical 2G connection at times. We also do not aim to convey the flakiness (or variability) of the connection quality.

Header

Network speed is determined by mapping the estimated network performance to the most similar cellular connection type.

network-speed = "Network-Speed" ":" effective-connection-type  
effective-connection-type = "slow-2g" | "2g" | "3g" | "4g" 

The effective connection type (ECT) should be determined using a combination of transport layer round trip time and bandwidth estimates. The table below describes the initial mapping, which currently does not incorporate bandwidth.

| ECT | Minimum transport RTT (ms) | Maximum bandwidth (Kbps) | Explanation |
| --- | --- | --- | --- |
| slow-2g | 1870 | (n/a) | The network performs like a slow 2G connection, which is so slow that it can only support very small transfers, such as for text-only or highly simplified pages. 1870 ms is the 66.6th percentile of 2G RTT observations on Chrome on Android (i.e., the slowest 33.3% of 2G connections). |
| 2g | 1280 | (n/a) | The network performs like a faster 2G connection, which supports loading images, but not video. 1280 ms is the 50th percentile of 2G RTT observations on Chrome on Android. |
| 3g | 204 | (n/a) | The network performs like a 3G connection, which supports loading high-resolution images, feature-rich Web pages, audio, and SD videos. 204 ms is the 50th percentile of 3G RTT observations on Chrome on Android. |
| 4g | 0 | (n/a) | The network performs like a 4G connection or better. It should support HD video and real-time video conferencing. If a new cellular technology is introduced, e.g., 5g, the minimum RTT and maximum bandwidth of 4g will be adjusted accordingly. |

The above mapping was computed from RTT samples observed by Chrome over different connection types and network technologies (e.g., EDGE, GPRS). Thus, the mapping is independent of variations in the characteristics of network technologies across countries.

The Network-Speed header is designed to work with the Vary response header (rfc7234) to support caching keyed on network performance level.
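
As a rough illustration of that interaction, here is a minimal Node.js sketch (hypothetical, not part of the proposal; the page variants are invented) of a server that selects content by ECT and varies its cache key on the header:

import http from "node:http";

// Hypothetical server sketch: pick a page variant from the proposed
// Network-Speed header and key caches on it via Vary.
http.createServer((req, res) => {
  const ect = req.headers["network-speed"]; // "slow-2g" | "2g" | "3g" | "4g", if sent
  const slow = ect === "slow-2g" || ect === "2g";
  res.setHeader("Vary", "Network-Speed"); // cache one entry per ECT value
  res.setHeader("Content-Type", "text/html");
  res.end(slow ? "<p>Lite page</p>" : "<p>Full page</p>");
}).listen(8080);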

When is the header sent?

Browsers should by default send these headers only when the ECT is slow-2g or 2g. Via a to-be-determined mechanism, servers might request Network-Speed on faster networks, on a per-host basis. Client Hints provides a mechanism for a server to advertise interest in a client's contextual data, but because the opt-in is a response header, it cannot be used to enable providing network performance data on the first request.

Network Information API extension

The Network Information API provides read only attributes that contain the connection type (e.g., wifi, cellular, ethernet) and the maximum downlink speed of the underlying connection technology in Mbit/s, as well as a way for workers to register for connection change events. In addition, the API will be extended to provide effective connection type and RTT estimates:

partial interface NetworkInformation : EventTarget {
  readonly attribute EffectiveConnectionType effectiveType;
  readonly attribute Milliseconds rtt;
};

EffectiveConnectionType has the values: offline, slow-2g, 2g, 3g, 4g. These mirror the Network-Speed values, except for offline, which is available only via the API.

The rtt attribute provides higher-fidelity information for developers who want to finely tune their experience. RTT is rounded to the nearest 10 ms to protect against fingerprinting attacks.
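
For example, a page might consume these attributes roughly as follows (a sketch; it assumes a browser implementing the extension, and the two tuning functions are hypothetical application hooks):

// Hypothetical app hooks, declared only for the sketch.
declare function showTextOnlyUI(): void;
declare function deferNonCriticalFetches(): void;

const conn = (navigator as any).connection;
if (conn?.effectiveType === "slow-2g" || conn?.effectiveType === "2g") {
  showTextOnlyUI(); // skip images and video on slow effective connections
} else if (typeof conn?.rtt === "number" && conn.rtt > 300) {
  deferNonCriticalFetches(); // high latency: batch or postpone extra requests
}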

Browsers can compute the RTT estimate by taking the median of RTT observations across all transport-layer sockets. If the transport-layer sockets are managed by the operating system, the browser can query platform APIs that expose RTT estimates for individual sockets. Alternatively, if the browser does not have access to RTT estimates via platform APIs, it can approximate transport RTT with application-layer RTT.

Following discussion at an unconference session at BlinkOn, below is a modified proposal:

What is this?

This is a proposal for two new HTTP request headers and an extension to the Network Information API to convey the HTTP client's network connection speed.

  • The first header provides an estimate of the current round trip time (RTT) measured at the transport layer.
  • The second header provides an estimate of network bandwidth provided by the transport layer.
  • The API extension provides the same information as the headers.

The headers will be sent together, and will be sent only to hosts that opt in to receiving them.

Why do we care?

Web servers and proxies benefit, because they can tailor content to network constraints. For example, on very slow networks, a simplified version of the page can be provided to improve document load and first paint times. Likewise, higher-bandwidth activities like video playback can be made available only on faster networks. Users benefit by being offered only content that they can consume given network constraints, which results in faster paints and less frustration.

Goals

The goal of the headers and API is to provide network performance information, as perceived by the client, in a format that's easy to consume and act upon. The headers convey the bandwidth and latency constraints of the network. (Below we provide guidance as to how these map to levels of service supported by the network.) The headers aim to allow proxies and web servers to make performance-based decisions even on a main frame request. The API extension aims to make it easy to make speed-related decisions from within JavaScript.

Non-goals

We do not plan to convey the actual connection type used, because it is already available via the Network Information API's downlinkMax and its mapping to underlying connection technology, and it is not as actionable as providing the performance of the connection. E.g., a Wi-Fi or 4G connection can be slower than a typical 2G connection at times. We also do not aim to convey the flakiness (or variability) of the connection quality.

Headers

Network speed is provided as estimates of current transport RTT and network bandwidth.

network-rtt = "Network-RTT" ":" delta-milliseconds
delta-milliseconds = 1*DIGIT
network-bw = "Network-BW" ":" kbps-value
kbps-value = 1*DIGIT

As a guide, below are mappings of RTT and bandwidth to typical cellular generation performance.

| Cellular generation | Typical transport RTT (ms) | Typical bandwidth (Kbps) | Explanation |
| --- | --- | --- | --- |
| 2G | 2800 | 40 | The network is so slow that it can only support very small transfers, such as for text-only or highly simplified pages. |
| 2.5G | 1500 | 75 | The network supports loading images, but not video. |
| 3G | 200 | 400 | The network supports loading high-resolution images, feature-rich Web pages, audio, and SD videos. |
| 4G | 80 | 1600 | The network supports HD video and real-time video conferencing. |

The above table was generated from observations by Chrome over different connection types and network technologies (e.g., EDGE, GPRS). Observations are agnostic to variations in the characteristics of network technologies of different countries.

The Network-RTT and Network-BW headers have numeric, continuous values, which limits the applicability of the Vary response header (rfc7234) with them.
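
One workaround is for the server to bucket the continuous values itself before selecting a response variant. A sketch, with illustrative thresholds loosely drawn from the table above:

// Hypothetical sketch: because Network-RTT and Network-BW are continuous,
// a server that wants cacheable responses can bucket the values itself.
function chooseVariant(rttMs: number, bwKbps: number): "lite" | "full" {
  if (!Number.isFinite(rttMs) || !Number.isFinite(bwKbps)) return "full";
  return rttMs >= 1500 || bwKbps <= 75 ? "lite" : "full";
}

// e.g. chooseVariant(Number(req.headers["network-rtt"]),
//                    Number(req.headers["network-bw"]))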

When is the header sent?

The headers are sent together, and only after an explicit per-origin opt-in. The opt-in is via a response header. The browser should, but is not guaranteed to, retain the preference across browsing sessions.

allow-network-speed = "Allow-Network-Speed" ":" boolean  
boolean = "True" | "False"

Origins may also use an equivalent HTML meta element with the http-equiv attribute ([W3C.REC-html5-20141028]).
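
For concreteness, the opt-in could look like this (illustrative), either as a response header or as the equivalent meta element:

Allow-Network-Speed: True

<meta http-equiv="Allow-Network-Speed" content="True">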

In the future, origins can avoid sending the opt-in header by specifying the opt-in in their Origin Policy.

Network Information API extension

The Network Information API provides read only attributes that contain the connection type (e.g., wifi, cellular, ethernet) and the maximum downlink speed of the underlying connection technology in Mbit/s, as well as a way for workers to register for connection change events. In addition, the API will be extended to provide RTT and bandwidth estimates:

partial interface NetworkInformation : EventTarget {
  readonly attribute Milliseconds rtt;
  readonly attribute Megabit downlink;
};

The rtt and downlink attributes provide the same values that are provided by the Network-RTT and Network-BW headers, except that the downlink attribute is in Mbits to be consistent with downlinkMax, whereas the header is in Kbps to avoid use of floating point. Implementations should provide null when an rtt or downlink estimate is not available.

Browsers can compute the RTT and bandwidth estimates by computing the median of the RTT and bandwidth observations across all transport-layer sockets. When observing bandwidth, browsers should employ heuristics to determine that the network is being used to capacity.
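
A minimal sketch of that median computation (illustrative only; real implementations weight and age samples, and the 10 ms rounding follows the earlier proposal text):

// Report the median of recent per-socket RTT samples, rounded to the
// nearest 10 ms to limit fingerprinting precision.
function medianRtt(samplesMs: number[]): number | null {
  if (samplesMs.length === 0) return null; // no estimate available
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median =
    sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
  return Math.round(median / 10) * 10;
}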

So the OS / browser keeps some sort of sliding window of RTTs, and uses this to provide a single average/estimated RTT to JavaScript and/or to an HTTP server via headers?

What happens in the train-tunnel scenario, where a request is initially started under ideal network conditions, but those conditions immediately worsen for the rest of the request? In this scenario, what would the user experience if the app / proxy / server uses information that is based on past performance to deliver content that is ill-suited for current network conditions?

It occurs to me that a device could use travel speed/direction and a map of poor coverage (or something to that effect) to enhance/augment the accuracy of historical RTTs. So a device could sense that it was travelling towards an area with poor coverage, and the APIs / headers would indicate this.

It seems to me that accurate portrayal of network conditions is a very complex subject, which makes it such a fun and exciting topic to discuss. :)

This is amazing! The info provided here won't be perfect in a train-tunnel scenario, but that's okay; this is so far ahead of what we have right now. Being able to understand the user's bandwidth on the first request is going to be pretty big in terms of the improvements that we can make to performance and performance logging.

The headers are sent together, and only after an explicit per-origin opt-in. The opt-in is via a response header. The browser should, but is not guaranteed to, retain the preference across browsing sessions.

I'd like to suggest that we try to make it so that the information is sent to origins that have not provided any indication as to whether they want the information. Ideally I think you want this flow:

  1. The first time you talk to an origin you get the header. You are also told this is the first request
  2. If you want to get networking data, you say so. You don't repeat this on following requests (so you avoid the header bloat)
  3. After N days the cached state could expire (so as to allow sites to change their opt-in status)

I think this would be a nice pattern for any kind of case where we send data about the state of the browser

@jokeyrhyme: Yes, the browser maintains an estimate. For the train-tunnel scenario, browsers can be a little smarter. Chrome, for example, will decrease the weight of older RTT samples if newer ones have a very different signal strength. Using location is an interesting idea, but it would require careful consideration of privacy and of the privacy/performance tradeoffs.

@n8schloss: Awesome!

@bmaurer: I considered your protocol before writing the proposal. I think the hard part is that the browser would then need to keep a list of every origin it has ever communicated with. Not impossible, just a PITA. Also, I think having a header that explicitly turns off the network headers is more useful than a timeout. We could also do both. Wdyt?

@bmaurer

You don't repeat this on following requests (so you avoid the header bloat)

HTTP/2.0 has header compression with a compression dictionary. I feel that it would be a shame to make user agent implementations more complex due to perceived "bloat" in an old (but admittedly very popular) protocol.

HTTP/2.0 has header compression with a compression dictionary. I feel that it would be a shame to make user agent implementations more complex due to perceived "bloat" in an old (but admittedly very popular) protocol.

You'll still need to send the headers on the first request on that socket, namely the one where you actually render your main page.

I worry that there are a fair number of features like this where the browser should ideally send information about the client state to the server -- examples include pixel ratio, screen size, time zone, etc. I'd love to see us come up with a really solid idiom for "this is a piece of information that the browser can tell you, we'd like to send it only if necessary"

I worry that there are a fair number of features like this where the browser should ideally send information about the client state to the server -- examples include pixel ratio, screen size, time zone, etc. I'd love to see us come up with a really solid idiom for "this is a piece of information that the browser can tell you, we'd like to send it only if necessary"

We spent a lot of time discussing this in the context of Client Hints, and the resolution and guidance from http-wg was to use separate headers. Yes, it may consume a few more bytes on the wire, but on the upside you at least have a chance of caching and dictionary re-use. The names for the headers don't need to be large either, so the actual byte difference is very small; said differently, any packing you come up with can be replicated with separate headers at a small delta overhead. I propose we keep it simple and stick with terse, separate headers.

Yeah, totally fine with separate headers. My suggestion here is a clear, consistent api for opt-in headers. Client hints is a great example of this. The idiom there is that you pass the Accept-CH header with the list of client hint headers that you want and those headers are returned to you. If you say Accept-CH: X, Y you will get headers X and Y. This is extensible -- more examples can easily be added to this pattern.

OTOH the currently proposed api doesn't feel consistent. You say Allow-Network-Speed and you get headers Network-RTT and Network-BW. This spec also defines a caching mechanism which is valuable, but not in a way that could be reused across other parameters.

It seems like the key issue here is the Accept-CH has no means of being cached for future document requests. One of the primary use cases of this api is for the server to send different main-page content based on the network bandwidth.

What about doing the following:

  1. Make NetBW and NetRTT a Client Hint
  2. Make client hints cacheable.

Accept-CH-For-Document:

Would cache the Accept-CH list for future fetch()s with destination == document.
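
Under that (hypothetical) extension, a response might carry, for example:

Accept-CH: DPR
Accept-CH-For-Document: NetBW, NetRTT

so the network hints are remembered and sent on future document requests, while plain Accept-CH hints behave as they do today.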

Not only would this be more consistent, I actually think it'd solve a pretty concrete problem that we face, which is that many of the client hints are most useful on the main document (for example, the way FB works today we're pretty dependent on knowing the DPR during the request).

Gotcha, thanks Ben.. all that makes sense.

  • There is nothing special about "making X a client hint"... we can simply define the headers we want and specify that the opt-in can be communicated via Accept-CH, as it is an extensible mechanism.
  • Making Accept-CH opt-in cacheable: that would be nice, as it would benefit other hints as well. In theory origin policy should (indirectly) solve this.. but the timelines for that are not clear. /cc @mikewest

The updated protocol based on the feedback here, and discussion with @igrigorik and @bengreenstein :

  • Browser will include NQ headers only if the server has opted-in to receiving the NQ hints via Accept-CH mechanism.
  • Chromium will start caching Accept-CH opt-ins on the disk across browser restarts.
  • On each response, Chromium will update the opt-ins for the corresponding server. So, if the opt-ins in the response headers are different from what the browser remembers, then the browser will update its values for the opt-ins. This provides a way for the server to opt-out from receiving the hints.
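
An illustrative exchange under this protocol (token names follow the proposal later in this thread; values are invented):

First response from the server (opt-in):
Accept-CH: network-rtt, network-bw, ect

Subsequent requests from the browser, until the opt-in changes:
Network-RTT: 120
Network-BW: 800
Network-ECT: 3g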

Awesome, sounds like a great change.

Will not sending an accept-ch on a future request be an explicit opt-out? It may be hard to get all images, etc to send the header

I also still wonder if it makes sense to be able to scope Accept-CH to specific fetch destinations (e.g., only document for bandwidth). Even if hpack reduces the networking cost, it seems like sites could start getting an excessive number of headers and increase the processing cost on the server, etc. Maybe I'm just worrying about it too much, though.

Yes, in the current proposal not sending an accept-ch on a future request will be considered an explicit opt-out. What are the other possible ways of opting out? Would opting out after N days without receiving an opt-in be a better option?

I think restricting the header to certain content types is feasible, but it is not clear what content types should be whitelisted. e.g., a case can be made for images and media content to be whitelisted too.

I think it'd be better to require an explicit Accept-CH: clear

I think it'd make sense to use fetch's destination as the way to restrict to specific types. Eg you only want to send DPR to image requests.

Will not sending an accept-ch on a future request be an explicit opt-out? It may be hard to get all images, etc to send the header

How does adding a new header make it any easier, in comparison to omitting it? The benefit to "omit" = clear is that there is no extra distinction between "I never used it and don't care" and "I used it but don't care now".

I think it'd make sense to use fetch's destination as the way to restrict to specific types. Eg you only want to send DPR to image requests.

That's not true. HTML, CSS, and JS can all be optimized based on DPR.. many sites do exactly that.

How does adding a new header make it any easier, in comparison to omitting it? The benefit to "omit" = clear is that there is no extra distinction between "I never used it and don't care" and "I used it but don't care now".

Because it can be extremely hard to ensure that 100% of all requests on a given domain contain a header. If any request from facebook.com can blow away the CH setting then it could get very difficult to debug how that happened. I'd also be fine with the opt-out after N days of not seeing any accept-ch headers where N is large (say 15-30)

That's not true. HTML, CSS, and JS can all be optimized based on DPR.. many sites do exactly that.

Right, my point is that you'd say "I want DPR for fetch destinations document and image"

Another update: After some discussion, it has been decided that the network quality headers will be sent only on HTTPS connections.

A few questions:

  1. From the discussion:

Bandwidth is more important for FB, RTT more important for Salesforce

Can someone that attended the session elaborate on the actual use-cases for RTT info? (The use-case for effective bandwidth seems pretty clear to me...)

  2. I'd like to make sure we're taking into consideration the fact that adapting based on 2 hints would significantly increase cache variance, and AFAIU even if Key is adopted and implemented, it doesn't have "and" semantics that enable us to vary the cache on multiple ranges from different headers. (/cc @mnot)

If there are use-cases for adaptation based on both RTT and bandwidth separately, maybe there's room for those as well as the Effective-Connection-Type signal that was in the original proposal. Since we're talking about an opt-in anyway, servers can indicate which hint is interesting for them, and Vary (and in the future Key) based on that.

  3. @tarunban and/or @bengreenstein - is it possible to get some more details on how the network-bw value is calculated? Will an algorithm for that be part of the spec, or is it possible that it'd vary between UAs in similar conditions (to enable the algorithm to evolve and improve in the future)?

  1. RTT is generally a good enough predictor of page load performance. RTT prediction (compared to bandwidth) is also easier to implement, better defined, and generally more accurate.
  2. This is a good idea. I will add ECT back here. Thanks.
  3. I believe the algorithm should not be part of the spec, since estimating bandwidth is a pretty open-ended problem and there is a lot of scope for improvement. Here is the current algorithm used in Chromium: https://docs.google.com/document/d/1eBix6HvKSXihGhSbhd3Ld7AGTMXGfqXleu1XKSFtKWQ/edit#bookmark=id.lxtaomk8d17p

Updated proposal based on the feedback so far:

What is this?

This is a proposal for three new HTTP request headers and an extension to the Network Information API to convey the HTTP client’s network connection speed.

  • The first header provides an estimate of the current round trip time (RTT) measured at the transport layer.
  • The second header provides an estimate of network bandwidth provided by the transport layer.
  • The third header provides the effective connection type (ECT): the connection type whose typical performance is most similar to that of the network currently in use.
  • The API extension provides the same information as the headers.

The headers will be sent together, and will be sent only to hosts that opt in to receiving them via HTTP client hints.

Why do we care?

Web servers and proxies benefit, because they can tailor content to network constraints. For example, on very slow networks, a simplified version of the page can be provided to improve document load and first paint times. Likewise, higher-bandwidth activities like video playback can be made available only on faster networks. Users benefit by being offered only content that they can consume given network constraints, which results in faster paints and less frustration.

Goals

The goal of the headers and API is to provide network performance information, as perceived by the client, in a format that’s easy to consume and act upon. The headers convey the bandwidth and latency constraints of the network. (Below we provide guidance as to how these map to levels of service supported by the network.) The headers aim to allow proxies and web servers to make performance-based decisions even on a main frame request. The API extension aims to make it easy to make speed-related decisions from within JavaScript.

Non-goals

We do not plan to convey the actual connection type used, because it is already available via the Network Information API's downlinkMax and its mapping to underlying connection technology, and it is not as actionable as providing the performance of the connection. E.g., a Wi-Fi or 4G connection can be slower than a typical 2G connection at times. We also do not aim to convey the flakiness (or variability) of the connection quality.

Headers

Network speed is provided as estimates of the current transport RTT and network bandwidth, plus an effective connection type (ECT) enum, which indicates the connection type whose typical performance is most similar to that of the network currently in use.

network-rtt = "Network-RTT" ":" delta-milliseconds
delta-milliseconds = 1*DIGIT
network-bw = "Network-BW" ":" kbps-value
kbps-value = 1*DIGIT
network-ect = "Network-ECT" ":" ect-value
ect-value = "slow2g" | "2g" | "3g" | "4g"

The effective connection type (ECT) should be determined using a combination of transport-layer round trip time and bandwidth estimates. The table below describes the initial mapping from RTT and bandwidth to ECT.

| ECT | Minimum transport RTT (ms) | Maximum bandwidth (Kbps) | Explanation |
| --- | --- | --- | --- |
| slow2g | 1900 | 50 | The network is so slow that it can only support very small transfers, such as for text-only or highly simplified pages. |
| 2g | 1300 | 70 | The network supports loading images, but not video. |
| 3g | 200 | 700 | The network supports loading high-resolution images, feature-rich Web pages, audio, and SD videos. |
| 4g | 0 | (n/a) | The network supports HD video and real-time video conferencing. |

The above table was generated from observations by Chrome over different connection types and network technologies (e.g., EDGE, GPRS). Observations are agnostic to variations in the characteristics of network technologies of different countries.

The Network-RTT and Network-BW headers have numeric, continuous values, which limits the applicability of the Vary response header (rfc7234) with them. Both RTT and bandwidth will be rounded up to the nearest 25 ms (or 25 Kbps) to protect against fingerprinting attacks. Network-ECT categorizes the network quality as one of four enum values, which makes it possible for content providers to Vary based on ECT.
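
For example (illustrative values), a request and a cache-friendly response might look like:

GET /page HTTP/1.1
Network-RTT: 125
Network-BW: 750
Network-ECT: 3g

HTTP/1.1 200 OK
Vary: Network-ECT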

When are the headers sent?

The headers are sent after an explicit per-origin opt-in. The opt-in is via the Client Hints mechanism defined here. The browser should, but is not guaranteed to, retain the opt-ins across browsing sessions. In particular, the browser may clear the opt-ins based on user actions (e.g., clearing cookies or browsing history). Three new hints will be added:

Accept-CH: network-rtt

Accept-CH: network-bw

Accept-CH: ect

To opt-out from receiving the network quality hints, the origin should stop sending the Accept-CH, which would cause the browser to stop sending the hints.

Origins may also use an equivalent HTML meta element with http-equiv attribute (W3C.REC-html5-20141028).

The headers would be sent only over secure (HTTPS) connections.

Network Information API extension

The Network Information API provides read only attributes that contain the connection type (e.g., wifi, cellular, ethernet) and the maximum downlink speed of the underlying connection technology in Mbit/s, as well as a way for workers to register for connection change events. In addition, the API will be extended to provide RTT, bandwidth and effective connection type estimates:

partial interface NetworkInformation : EventTarget {
  readonly attribute Milliseconds rtt;
  readonly attribute Megabit downlink;
  readonly attribute EffectiveConnectionType effectiveType;
};

The rtt, downlink and effectiveType attributes provide the same values that are provided by the Network-RTT, Network-BW and Network-ECT headers, except that the downlink attribute is in megabits to be consistent with downlinkMax, whereas the header is in kilobits per second to avoid floats. Implementations should provide null when an rtt or downlink estimate is not available. effectiveType has the values slow2g, 2g, 3g, and 4g, mirroring the values in the header.

Browsers can compute the RTT and bandwidth estimates by computing the median of the RTT and bandwidth observations across all transport-layer sockets. When observing bandwidth, browsers should employ heuristics to determine that the network is being used to capacity.

It's unlikely FB will be able to make use of this API if the absence of an Accept-CH header clears the cache. Ensuring full coverage across all of our endpoints gets really tricky.

The headers are sent after an explicit per-origin opt-in. The opt-in is via the Client Hints mechanism defined here. The browser should, but is not guaranteed to, retain the opt-ins across browsing sessions. In particular, the browser may clear the opt-ins based on user actions (e.g., clearing cookies or browsing history). Three new hints will be added:

Should probably read that the browser must clear hints when clearing cookies.

Accept-CH: network-rtt, Accept-CH: network-bw, Accept-CH: ect

Should be consistent about the net prefix. Also, consider shortening to net-rtt, etc

I want to make sure I understand the "When are the headers sent?" and the decision to make it opt-in only.

AFAIU this means that for a totally new origin (nothing cached, no CH and no resources), the origin will get no netinfo for the first request (e.g., mydomain.com/index.html, vids1.mydomainvids.com/videoA.mp4) and thus cannot send optimized content for the "first load". Arguably, it is exactly this first load that could benefit from this information the most, since no other optimizations (e.g., resource caching, service workers) are in place yet. Working with separate origins for additional content (e.g., video, image CDNs) possibly makes this even worse.

I feel like @bmaurer was saying something similar in his comment, but the discussion then moved more to "should we cache the CH headers" instead of "should we send the info for the first request as well?". Earlier versions also had things like "Browsers should by default only send these headers when ECT is slow-2g or 2g" and talked about "Origin Policy", which have been left out of the latest version. Possibly there are good reasons for taking this approach, but from the content in this thread and the BlinkOn doc, these are not 100% clear to me.

I would be partial to sending this info for all "first loads" (new origin, nothing cached, only on HTTPS). Only if the server answers with Accept-CH do we send it for subsequent requests as well. Servers who don't support it would just ignore the extra initial headers. This might require some custom caching approach (ignoring these specific headers when caching the initial response?) but I'm not sure about how that would work/other impact?

Can we dive into the rationale for having to opt-in via headers and having an origin cache?

If these headers are only transmitted over HTTPS, then don't we automatically benefit from some level of compression?

~80% of all browsers support HTTP/2 now: http://caniuse.com/#feat=http2

So, these headers are either:

  • 0 extra bytes if the user's privacy preference disallows them

  • an extra 10 or so bytes over HTTP/1 without encryption / compression

  • potentially just a few extra bytes over HTTP/1 where HTTPS compression is enabled

  • just a few extra bytes over HTTP/2 due to header compression and session dictionary

Unless there's an enormous saving to be had, may I propose that we reduce the complexity a great deal and make it more useful for more websites by removing the opt-in round-trip and thus no longer requiring implementations to have an origin cache?

@bmaurer : See #47 which discusses changes needed in Accept-CH spec.

@rmarx and @jokeyrhyme : I think the problem is not just the overhead of network quality headers. There are many other client-hints that the browsers can send, and the list may expand in the future. Sending all the client hints is not scalable.

The origin-policy spec can hopefully solve this problem in future (We need to put that back in the spec here. I will work on that).

From #46 (comment).

The rtt, downlink and effectiveType attributes provide the same values that are provided by the Network-RTT, Network-BW and Network-ECT headers, except that the downlink attribute is in megabits to be consistent with downlinkMax, whereas the header is in kilobits per second to avoid floats. Implementations should provide null when an rtt or downlink estimate is not available. effectiveType has the values slow2g, 2g, 3g, and 4g, mirroring the values in the header.

Above sgtm and a related question..

We fire events to notify [1] the application of changes to connection.{downlinkMax, type}. Presumably, we would want to do the same for the new attributes (rtt, downlink, effectiveType), to allow developers to avoid polling manually. Except, how often are these values updated? Does the current implementation have thresholds that we should consider? This question is a close cousin of #30.

[1] http://wicg.github.io/netinfo/#handling-changes-to-the-underlying-connection
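
For instance, a page would presumably consume such notifications along these lines (a sketch: the change event follows the existing connection-change handling linked above, and the app hook is hypothetical):

// Re-evaluate quality-sensitive choices whenever the connection's
// change event fires (assumes the extended attributes ship).
declare function pauseVideoPrefetch(): void; // hypothetical app hook

const conn = (navigator as any).connection;
conn?.addEventListener("change", () => {
  if (conn.effectiveType === "slow2g" || conn.effectiveType === "2g") {
    pauseVideoPrefetch();
  }
});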

About the thresholds: One way is to update the values if the difference between the new value and the old value is at least X units, AND the percentage difference between the new and old values is at least Y%.

For example, for creating new net log entries, Chromium NQE uses X = 100 msec (for RTT) or 100 kbps (for throughput), and Y = 20% (relevant Chromium code here). We can tweak the values of X and Y and use a similar approach here too.
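
A sketch of that rule (the function and parameter names are mine; the example values are the Chromium ones above):

// Notify only when the new estimate differs from the old by at least
// X units AND by at least Y percent.
function shouldNotify(oldVal: number, newVal: number, x: number, yPct: number): boolean {
  const absDiff = Math.abs(newVal - oldVal);
  const pctDiff = oldVal === 0 ? Infinity : (absDiff / Math.abs(oldVal)) * 100;
  return absDiff >= x && pctDiff >= yPct;
}

// e.g. shouldNotify(oldRttMs, newRttMs, 100 /* X = 100 ms */, 20 /* Y = 20% */)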

IMO, allowing the thresholds to be set by the listeners will make the browser implementation and the listener implementation too complex, with probably not significant benefit.

@tarunban thanks for the pointer. Curious, how were the current thresholds determined? Do you have a sense for how often that logic triggers? My goal here is to avoid generating unnecessary noise for developers.. e.g. we don't want to be firing these events every 30s.

The thresholds were determined using local experiments. With those thresholds, for a new connection, I see a few triggers in the first 30 seconds as NQE adapts. For a connection that we have seen before or after the first page load for a new connection, there should be ~1 trigger every couple of minutes.

Landed the JS interface that exposes effective rtt, downlink, and type: 0653980. You can see it live @ https://wicg.github.io/netinfo/ - ptal and let me know if you spot any issues.

We'll tackle header advertisement + CH integration in a separate PR.

Hi!

I have been looking into the spec a bit today, and I'm not entirely on board with waving off the fingerprinting concerns with an explanation that the information can always be computed manually.

This is true when we're concerned with fingerprinting against a single origin - without this API, it will take some time to get RTT and downlink estimates, and they might not be as precise, but yes, you can do so.

However, for cross-origin fingerprinting, two origins that do their own manual measurements will likely get somewhat different results, and possibly even different from the single-origin case as they only have half the bandwidth. Yet with this API, if they call it at the same time, they would get precisely the same value.

I wonder if we should address that, e.g. by:

  • Making it harder for them to call the API at the same time (e.g., a very long refresh interval, or a randomly delayed response).
  • Proving that the real-life distribution of RTT and downlink is such that the vast majority of users would end up in a small number of 25 ms (Kbps) buckets, which would mitigate the problem for the majority of users.
  • Using global estimates early in the page lifetime, and local estimates (this page's traffic only) later.
  • Etc.

Hey all, we've been using the JS portion of this API for a bit now, so I wanted to quickly chime in with our experience. We're using the JS API, and specifically effective connection type, in our performance logging and have seen a nice correlation between effective connection type and performance. There are clear bands that seem to match nicely with effective connection type. We're excited for the headers part of this API so we can start shipping different product experiences (i.e., better-quality videos on good connections) based on connection.

In terms of effective connection type vs. raw-number usage: currently we're mostly using effective connection type, but this is because we're only doing simple logging estimates. When it comes to shipping different product experiences, I expect that we'll mostly rely on the raw rtt/downlink numbers and not effective connection type. I think that's one of the best aspects of this API: we have a simple field to help quickly get started with rough analysis, and for more detailed things we still have the raw numbers.

@n8schloss thanks for the great feedback! With Accept-CH-Lifetime landing this week in Chrome Canary, we can now tackle the second portion of this issue.. which is the RTT, Downlink, and ECT headers.