WICG / interventions

A place for browsers and web developers to collaborate on user agent interventions.

Touch listeners defaulting to passive

RByers opened this issue · comments

Now that we have an API for passive event listeners, Chromium wants to experiment with sometimes defaulting listeners into the passive state (in order to get results like this). If we can succeed in making some breaking changes for this, then we'll work with the TECG to explore changes to the TouchEvents spec.

Note that most browsers already have a form of intervention here in terms of a timeout. But this timeout does not follow most of the intervention guidelines and is in some ways the worst of both worlds. The hope here is that we can eliminate the need for such timeouts by replacing it with something that is both more rational/predictable for developers and provides a better user experience.

We only have a skeleton of a concrete proposal so far, but are collecting metrics from the wild to evaluate the tradeoffs of some possible proposals. Chrome 52 includes a flag that allows users/developers to opt in to a couple of different modes of passive-by-default touch listeners.

EDIT 2/12/17: See #35 and this post for details of a specific intervention now shipping in Chrome 56. Updated to reflect shift in thinking from "forced" to just "default" (but overridable).

/cc @tdresser @dtapuska who are working on the engineering in chromium for this.

Note that we're thinking that this will start by applying only to cases where passive isn't specified like: addEventListener("touchstart", handler). That would then behave differently from addEventListener("touchstart", handler, {passive:false}). But this would be more of a migration strategy / hint than behavior we'd expect to keep long term (i.e. if CNN made their touch listener passive:false for some reason without fixing the jank, we'd still want to intervene on the user's behalf). So I don't think that distinction would ever really belong in the DOM spec. /cc @annevk @smaug---- @jacobrossi, thoughts?
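For concreteness, a minimal sketch of the distinction being described (handler as in the examples above; this reflects the proposed intervention, not a settled API):

// Passive not specified: under this intervention the browser may treat the
// listener as passive by default, so preventDefault() inside it is ignored.
addEventListener("touchstart", handler);

// Passive explicitly disabled: the listener stays blocking and
// preventDefault() keeps working (at a scroll-latency cost).
addEventListener("touchstart", handler, {passive: false});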

If we start doing something like this, then it isn't clear to me at all anymore why we'd need 'passive'.
Especially given that there is touch-action and such.

Interventions are primarily about improving the user experience at the expense of some developer rationality. A core part of the guidelines is that if developers follow best practices, they will never be impacted by an intervention. This lets us generate warnings / metrics around intervention triggering and drive them down as a thing to avoid. We can't do any of that without an explicit API where developers can opt in to passive behavior in a rational manner.

Quick update:

  • The most promising change here seems to be to treat touch listeners on window, document, document.documentElement and document.body as passive by default (spec issue).
  • Chrome has shipped the above behavior to 50% of dev-channel users and saw a 50% reduction in scroll start time at the 95th and 99th percentiles, without any reports of substantial breakage.

Have you considered other options here, since this kind of change would make the platform less coherent internally and would add yet more special cases to the already complicated platform?
Like, a browser could have a "performance hint console" to tell web developers that whatever they are doing is probably slowing down the UX, what they could do instead, and what to do if they want to keep the existing behavior but not see the warning again.

I'm rather worried that if we end up adding more and more "interventions", we end up polluting the platform with tons of small hacks and inconsistencies, and such things tend to beat us later, when designing new APIs or spec'ing how the platform works.

Yeah I'm worried about this too. We've long had devtools features highlighting scroll performance problems, and in general we've found they're helpful for developers motivated around perf, but when our goal is to improve the 95th percentile latency for users they're nearly useless (developers looking at perf in devtools are generally well below the 95th percentile, it's the sites where nobody is even measuring that matter most to the 95th percentile).

Long term, ideally I think we'd aim to make touch events universally passive by default (basically the same way pointer events are, but with an opt-out if you really want it). That, I think, would be clean / rational from a web developer's perspective. WDYT? Of course we'd need some transition path over many years to avoid breaking the web too badly in order to get there.

it's the sites where nobody is even measuring that matter most to the 95th percentile.

And your solution is adding hacks to the browser that affect every website everywhere and which can and do (note the referenced PhotoSwipe issue) suddenly break previously perfectly valid code which is intercepting touch events at the document level for anything as basic as page-wide drag&drop.

Here's a suggestion: put these hijinks behind a switch that devs can turn off with a <meta content="back-the-hell-off"/> tag, so there is an escape hatch until third-party libraries can catch up.

The idea here is that web sites could explicitly use non-passive listeners by passing the right kind of dictionary to addEventListener.
But I agree, this kind of change is a bit web-dev hostile, which is why I was wondering if other mechanisms have been investigated to improve the ux of pages. Sounds like no.

The PhotoSwipe code is set up to use PointerEvents, but it uses a proprietary detection mechanism, so when Chrome ships pointer events (M55) and Firefox does later this year, it won't take advantage of them.

meta tags are difficult for libraries to override or set. So the escape hatch here is to provide a fully defined value. But really, in this case it should be switched to use pointer events as that would be more efficient.

And your solution is adding hacks to the browser that affect every website everywhere and which can and do (note the referenced PhotoSwipe issue) suddenly break previously perfectly valid code which is intercepting touch events at the document level for anything as basic as page-wide drag&drop.

Yep, that's the nature of interventions: make the experience substantially better for a LARGE number of users at the cost of some small compat / developer pain. There's definitely a legitimate concern here about striking a good tradeoff, but in general the user experience has gotten so bad on the mobile web (putting the entire platform at such risk of economic collapse) that I don't think anyone involved really believes the right tradeoff is to land entirely on the side of developers over users. Our (Google web platform team's) thinking on that is mostly summarized here and yes it definitely includes that developers should usually have a way to opt-out (as they do in this case).

But note that if you're already following best practices (eg. making your site work correctly with touch on Microsoft Edge) then you'll already have a touch-action: none rule in that case and your site will continue to work fine. Even if you don't, it's likely your site will still work ok (eg. if it's really a full screen drag and drop then the page won't be scrollable). Specific counter-examples are appreciated; making the ideal tradeoff is challenging.
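A minimal sketch of that best practice, assuming a hypothetical full-screen drag-and-drop surface (equivalent to a touch-action: none CSS rule on the element):

// Declare that this element handles its own panning, so the browser never
// starts a scroll from touches on it; a passive listener then breaks nothing.
var dragSurface = document.querySelector('#drag-surface'); // hypothetical element
dragSurface.style.touchAction = 'none';
dragSurface.addEventListener('touchmove', function (e) {
  // move the dragged item here; no preventDefault() needed, because
  // touch-action already suppressed scrolling on this element
});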

And just to make sure we're all clear on the benefit - we're talking about giving all users the experience on MANY major websites seen on the right of this video: https://www.youtube.com/watch?v=NPM6172J22g. Given the huge positive response that video has gotten from users, we're willing to accept a little bit of hacks / compat pain here.

meta tags are difficult for libraries to override or set.

Hence the libraries will have to fix their problems. A meta tag switch would be an escape hatch for developers depending on third-party libraries that have not been updated yet.

developers should usually have a way to opt-out (as they do in this case).

They're effectively stuck until all the libraries their project relies on update to handle what is essentially a breaking change in their years of handling touch events.

Complex touch event scenarios are a [[censored]] nightmare hellscape. And now you're saddling devs with either switching to a different library (with its own potential weaknesses) or forcing them to dive into the guts of those libraries themselves to sort it out. If you call that a viable opt-out, you're mad.


Besides: do you know what the easiest way is to patch those issues? You override whatever abstraction for addEventListener the library uses to detect passive event listener support and include a passive:false that is hardcoded for all events and which gives Chrome the finger. Road of least resistance. Road of least cost. And a road which ends in a status quo that is at least known to work correctly, even if it performs worse.

Guess what solution companies that are already not interested in performance are going to use?

In this case, developers don't need to wait for libraries to update, they can apply touch-action to the parts of their page that should prevent scrolling.

In this case, developers don't need to wait for libraries to update, they can apply touch-action to the parts of their page that should prevent scrolling.

In many complex interaction cases touch-action needs to be applied and unapplied dynamically, determined by the user's interaction state with various parts of a UI. That interaction state may well be under the control of a third-party library that is used to render part of the UI; or that implements an abstraction on top of touch events for complex user gestures that a UI requires.

So please explain again why you believe developers wouldn't need to wait for libraries to update, because there certainly are plenty of cases where they will...

And just to make sure we're all clear on the benefit - we're talking about giving all users the experience on MANY major websites seen on the right of this video: https://www.youtube.com/watch?v=NPM6172J22g. Given the huge positive response that video has gotten from users, we're willing to accept a little bit of hacks / compat pain here.

In my opinion, it should be the responsibility of those major websites to update their code to opt in to new browser features. If these websites are major, then they can certainly afford the dev resources to do so.

It's my understanding that a large part of the spirit of the web is that backwards compatibility is baked-in. Have a really old page from 1996? It should still work. Have a new webapp in 2017? That should work too.

This change can break pages and webapps that rely on touch events for the benefit of making sites with video, like CNN, faster. I don't think it's a fair trade-off to put speed over correctness --- regardless of how nice the speed is.

Please reconsider making this an opt-in feature rather than an opt-out one. The feature itself is a good idea, but I strongly disagree with changing well-established, default behaviour.

@maxkfranz that's an argument against all interventions, not just this one. There's legitimate debate to be had here, but let's keep it to #43 rather than spread across each individual intervention issue.

Why?

Why change the API like this?

It would be OK if availability of the object form of the third argument was sniffable, but it's not. We can't know when to pass {passive: false, capture: useCapture} or just useCapture.

So we can't prevent touchstart events from blurring by calling preventDefault.

(╯°□°)╯︵ ┻━┻

You can feature detect, to determine whether to pass the third argument an object (though it's a bit clumsy).

See the EventListenerOptions explainer.

Though hopefully it's exceedingly rare to need to opt out of passive touch listeners. touch-action is the simplest work-around in 99% of use cases.

Also the exact API design came from months of contentious standards debates - see here and the other links here for most of the history if you really want to know "why" the EventListenerOptions API has the design it does.

@tdresser Yes, you can feature detect. But feature detection is useful only if you can change all affected code in your app.

If you're the only one using listeners, then you're OK. If you're using a lib that uses the addEventListener() function and it doesn't have a workaround specifically for this deviation from previous, standard behaviour, then you're really stuck.

@RByers To quote a previous comment about this: The main concern regarding Cytoscape is that the lib is used in many places, in both academia and commercial organisations, not all of which can easily update their code. Having support for passive events in Chrome is great but unfortunately, changing the default behaviour breaks our lib and many apps that depend on it.

I notice the following issue by @RByers in particular: WICG/EventListenerOptions#38. That seems much more sensible to me. Disabling preventDefault() (i.e. passive: true) for touch events should be opt-in, and a separate lib can facilitate making passive: true default just as you describe --- without breaking existing apps.

If you're using a lib that uses the addEventListener() function and it doesn't have a workaround specifically for this deviation from previous, standard behaviour, then you're really stuck.

I mentioned the same thing before. But for some reason some Google engineers seem to be deaf to this genuine software compatibility problem and live in a la-la land where developers have the luxury of being able and allowed to fork and patch any such library they're stuck using.

I don't terribly mind that passive would be the default behavior, but at the very least we need an easy one-shot method of turning this intervention off when compat problems arise.

I don't terribly mind that passive would be the default behavior, but at the very least we need an easy one-shot method of turning this intervention off when compat problems arise.

I think you have good intentions here: New apps you are making or apps you have direct control over could use a method like you propose.

My opinion is that's not enough, because apps or sites that won't be updated would still be broken. I don't think that's a fair trade-off.

My opinion is that's not enough, because apps or sites that won't be updated would still be broken. I don't think that's a fair trade-off.

Then perhaps Google should themselves maintain a white-list of websites on which this intervention could be enabled and disable it for all others.

Thanks for the feedback. Most of these comments apply to interventions in general. As rbyers@ commented above, there's legitimate debate to be had here, but let's keep it to #43 rather than spread general feedback about the interventions across each individual intervention issue.

Since it's specific to the history of this issue, I'll re-iterate what I said on the Chrome bug here:

I'm deeply sorry for the frustration this has caused you. We've long tried the "opt-in" approach but have learned that on average developers don't make the effort to opt-in to better performance. In particular, in this case we started pushing passive touch listeners heavily back in June (https://developers.google.com/web/updates/2016/06/passive-event-listeners), including during the Google I/O Chrome keynote, and outreach to a large number of frameworks and other major sites that we knew could benefit. They almost all told us "can't you just detect this and do it for me automatically so I don't have to change my code?". As you can see in the graph here, we've had very little impact on real-world scroll performance via the "opt-in+outreach" approach.

So we believe that when only a tiny number of sites are negatively impacted, and a huge number are positively impacted, we should prefer a "fast by default" policy instead.

We've done our best to do this in a responsible way - in discussion with all the browser vendors and standards groups, with an easy fix (touch-action), a full opt-out (though I admit it's not exactly easy), console warnings, developer outreach, and careful roll out via trials, dev and beta channel where we heard very few complaints. We need to work harder at this - eg. see #44. But in Chrome we're fundamentally unwilling to allow the mobile web to continue to die from performance bankruptcy. Other browsers are less aggressive, and people who prefer to be more conservative (preferring maximal compatibility over being part of moving the web forward aggressively) should prefer to use a more conservative browser.

Unfortunately very few people outside the browser development community have any interest in watching keynotes etc - and something like this is very abstract until it comes into effect.

Among other things, this completely breaks the ability of Chrome to be used to test mobile sites as it is completely non-standard behaviour that destroys any possibility of testing web apps without using something like BrowserStack etc.

The way this appears to a web developer:

  1. Bad websites take too long within touchstart events (without calling .preventDefault()) because the options parameter isn't supported in IE (assuming they even know about it), and they don't want to use timer/rAF callbacks. The end result is that the initial touch scroll is delayed by a noticeable amount of time.
  2. Good websites / webapps want to prevent mobile scrolling so call event.preventDefault().

Possible solutions:

  1. Log warnings on websites that take too long (and possibly don't call preventDefault).
  2. Pop up an alert() style box for websites that do this (actually give some incentive to these popular sites such as in that video etc).
  3. Change things in an incompatible way (please explain how to use useCapture in a compatible manner without access to options) breaking webapps and websites that want to work on IE / Edge.

With this solution there's no incentive for sites to fix things. There's no pressure on MS to support options (even though I'd love to be able to use it, having to support older browsers means it's ~3 years or more away if they added support today). Older websites and apps are going to break and users will have no clue why - the log spam will help developers - but that's assuming that there's a developer around to fix things.

But in Chrome we're fundamentally unwilling to allow the mobile web to continue to die from performance bankruptcy.

Yet you are not opposed to letting the weaker underbelly of sites on the web die out by willfully breaking those sites without anyone being available (or able) to fix them? Sites that might not even have performance issues, even...

@rjgotten I think the sites they're going to break are the ones that call preventDefault() in there - and almost by definition they won't have any performance issues to begin with...

@Rycochet
Good point. Makes this decision all the more disturbing.

We know from metrics we've collected from the wild that the vast majority of touch listeners don't call preventDefault, and of those that do, the majority do so intermittently in ways we can't predict. Eg. it's common for a touch listener on the document to use jQuery event delegation such that preventDefault gets called only when the touchstart occurred over some element. In most of the cases where preventDefault would be called, the page isn't scrollable anyway so there's little visible breakage to scrolling not being disabled.
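A minimal sketch of that delegation pattern (jQuery, with a hypothetical .carousel selector); whether preventDefault runs depends on where the touch lands, which can't be known when the listener is registered:

// One listener on the document; preventDefault only fires for touches that
// start inside a matching element, so the browser can't predict it up front.
$(document).on('touchstart', '.carousel', function (e) {
  e.preventDefault(); // block scrolling only for carousel swipes
});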

Can you give us the URLs of some sites that are seriously broken by this? I assume it's the specific change in #35 that we're talking about here. Over 15% of Chrome Android users now have that change and we've received almost no complaints from users about sites being broken.

It's definitely possible that we've made the wrong choice (the tradeoff between advancing the web and maintaining full compatibility is a constant challenge for all browsers - we all expect to make some mistakes in both directions). But after years of debate on this sort of thing, we're far enough along that abstract arguments aren't particularly helpful. At this stage, the main things that could convince me that we've made the wrong decision for the web at large (rather than just for a tiny number of developers/sites who are personally impacted) are some or all of:

  1. If a large number of users filed/starred bugs providing URLs of sites that are broken. Eg. here's an example where we learned from users that (despite very promising metrics) our decision was wrong.
  2. We found some badly broken library/pattern that we can see (eg. via HTTP Archive) is highly prevalent on the web. Eg. see this issue where I've been holding back from fixing an annoying interop bug for years because 0.3% of top websites use a bad library that causes scrolling to break when the bug is fixed.
  3. An outpouring from many web developers arguing we're doing the wrong thing. I'm actually shocked that I can't find a single complaint on Twitter about this, in contrast to some other interventions which have still been judged to be worth the cost overall.
  4. Evidence from users that the benefit we're achieving isn't actually all that important to them. So far we've gotten a ton of feedback that users care about this.
  5. Examples of use cases where it's really impossible or unreasonably burdensome to fix sites to account for the new behavior. The "some sites can just never be updated at all" argument doesn't carry much weight with me because the only way to fully accommodate such sites on the web is to stop changing browsers at all (including stopping fixing browser bugs). Here's a recent example where we relaxed an intervention after feedback only from a single developer that a legitimate use case had become impossible.

I'm sure we're going to see more of the above over the next couple weeks (I'm shocked to have not seen more already). But as you can see from the other examples, we're definitely paying attention to this sort of feedback and will course-correct if necessary. But we're determined to be thoughtful and disciplined about the global tradeoffs here, not make rash strategically impactful decisions based on the vehemence of a few individuals.

The change has been made globally on desktop and mobile, only affecting touch events, so the metrics are only useful for people leaving them on (most developers I know turn them off specifically). As desktop machines get more touch-capable, that means the machines with the processing power to not have any performance issues still get this change (unfortunately I fully understand that any change has to be universal). This also means that the DevTools device emulation also has this change - which is where I found out about it while trying to test a site using FabricJS (which looks like it would need a cross-browser fix; the CSS touch-action fix is not possible as it's a library, so it would need to use browser detection for consistent behaviour).

Unfortunately I can't personally provide URLs to any sites that I've developed hitting on this as they're education sites for Collins, Hodder and a couple of other smaller publishers (fortunately Pearson didn't need anything like this) - and all the content is under NDA and/or behind school portals that need access through their own sites directly. Currently that stands at ~10k individual sites (ie, not using shared JS/CSS files that can be easily fixed in one place).

I can say that the specific uses for this are for paged displays (not unlike book page turning with swipe etc), and for delegated dragging events - especially global ones via React, where you either attach to every element, or delegate and only attach to the top (might not be perfect, but 1x delegated handler can be far easier to manage than 100x direct handlers).

As someone who only stumbled across this via a tweet and only occasionally does web development... I'd flip tables if this goes through and breaks an old page of mine (because, as someone has said, backwards compat is one of the key features of the web). However, my biggest issue with it is inconsistency between browsers. It seems like RByers is very gung-ho over this change, but I'd argue that loading a popup to the user stating "We're sorry, this site's scrolling is slow due to them not following best practices, please report the issue to the site." would do enough to shame most major sites into fixing their code... while never breaking old sites.

@mawalker Something like this pseudo-code - if (timeTaken > 100ms) { (defaultPrevented ? console.log : alert)(" ... ") }

Basically yes... However... honestly, I don't know if that would be 'good' to do or not... I know it is a bit of a heavy-handed approach that some might think goes too far in (annoyingly) alerting the user. But it would at least not break any sites, and it would push sites (devs) to follow the best practices, because they wouldn't want their site to get that annoying alert and have visitors complain to them about it.

I started to write this thinking that it might not be the best approach, but after further thinking: since there is no -need- for the intervention (a best-practices workaround already exists), I think it would be the better part of valor to not move ahead with the intervention, and instead fall back to an alert for slow-scrolling sites. And if you don't want to alert users, then put the warning message explaining the UX lag into console.log. That wouldn't notify most users, but it would let devs know the root cause (I'd put a link to a page explaining the issue in the console log to further help newbie devs, such as myself).

@RByers, ignoring whether or not it may be performant or difficult to do so, would it be possible to do some type of static analysis (or limited dynamic analysis) when a non-passive touch listener is bound to check if the Event Object is preventDefault'ed or passed to another function, and if so, assume preventDefault will happen?

Improving the performance of the web is a worthy goal. However, breaking APIs and going against spec to get there seems misguided, IMO. Chrome "interventions" (breaking changes to the most widely used implementation of the web platform) are an immensely strong tool and should be used sparingly.

In this case and with the knowledge I've got, forcing passive on scroll-related events by default seems like a misstep. I'm going to keep this comment focused 100% on the intervention at hand.

Passive event listeners were added to Chrome in June. It has been less than a year since they appeared, and the API still isn't supported by Edge or iOS Safari (caniuse). I can sympathize with the comment:

average developers don't make the effort to opt-in to better performance

source

But that alone isn't enough of a reason to break existing sites. Indeed placing the blame for the lack of passive event adoption at the feet of lazy developers is not a productive way to approach the problem. A developer might not have adopted passive events listeners for many valid reasons:

  • The developer isn't aware of the feature. Again, passive events came to Chrome in June. By October an intent to intervene was already opened. How quickly were websites and libraries supposed to take up this new API for the adoption rate to be acceptable? I say this as someone who contributed to a library that uses the API. IMO, 4 months was never going to be enough time to get lots of websites to adopt this. I expect most developers still don't know the feature exists.
  • The developer is working on something else, like feature work. Seems valid.
  • The developer has prioritized other performance work where it has a greater impact on their application. Performance is multi-faceted (initial load, JS execution speed etc), and you invest where it has greatest relevance to your users.
  • The developer doesn't know they are using active event listeners because they use them through a library or other interface. This definitely applies to "average developers", most of whom build on top of higher level tools.
  • There is no developer. Some software is written and handed off to a business, and there isn't an interested party at hand to advocate for this change.

There are a number of ways these issues could be addressed:

  • More evangelizing.
  • Warnings in the console (without breaking changes).
  • Warnings in devtools (as Chrome shows w/ forced layout)
  • Contributing to libraries, adapting them to the new APIs

None of these things is as exciting as an intervention. All of them maintain the stability of the web platform, which is one of its most important assets.

Measuring the impact of an intervention definitely seems hard. In the Intent to Intervene email there are a few stats, like the fact that 2% of dispatched events are potentially affected. There is lots of discussion about tracking breakage, but no real numbers shared.

After a small amount of investigation I can share a few things I know are broken today.

It seems likely that a lot of sites involving drag/drop and the mobile web are negatively impacted. Not all, but many.

Why isn't touch-action: none sufficient to set on the sortable-items css to fix the ember sortable issue?

source

touch-action: none isn't sufficient because this not only impacts me as a developer, it impacts me as a user. I cannot fix Gmail, that is certain.

So we believe that when only tiny number of sites are negatively impacted, and a huge number are positively impacted, we should prefer a "fast by default" policy instead.

source

"Fast by default" is a great policy, and one that should guide future API design and implemetation of web features. However, I don't think improving scroll jank should cost us breakage on drag and drop across the mobile web.

Finally, I want to question some of the "support" for this change that has been touted:

Thanks for all your work Chrome team. I do hope you reverse your decision here, and that you can find another way to balance vehemence for your performance goals with the greater goals of the platform and your project.

I lack time to grasp the full scope of this intervention in detail. In principle, I am against a global flat-out reversal of spec behavior like this, because it breaks existing sites. However, here are some quick thoughts on possible compromises or alternative approaches:

One relevant question is: In what primary context does this benefit users at large most?

With the CNN site example, my immediate assumption is that the vast improvement with forced passive touch listeners would seem to be mainly relevant to textual <main> content or <article(s)>. Things in headers, footers or any UI-related elements outside such main content seem most likely not to benefit from passive by default, and rather to be affected negatively.

If this observation has any merit, maybe there is a way in which the fast-by-default behavior could be enabled only inside such identifiable parent nodes, where it makes the most critical difference for users?

Has anyone done some profiling of the (bad site's) event code to try to figure out what's actually causing the slowdown problems and looked for consistencies there?

There are two no-change "fixes" I can see for the slowdown that site developers can use:

  • Add the passive:true option manually (which isn't well supported, and prevents you from using useCapture on all the unsupported browsers).
  • Move the slow code into requestAnimationFrame, setTimeout or setImmediate (really wish more than just IE/Edge supported that - it's very useful even with a polyfill); a sketch follows below.
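A minimal sketch of that second option, assuming a hypothetical element el and slow routine doExpensiveWork: keep the listener itself cheap and defer the heavy work, so scrolling is never blocked on it:

el.addEventListener('touchstart', function (e) {
  // Copy whatever the handler needs synchronously, then defer the rest.
  var point = { x: e.touches[0].clientX, y: e.touches[0].clientY };
  requestAnimationFrame(function () {
    doExpensiveWork(point); // runs next frame, off the scroll-critical path
  });
});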

Let's default var to let next.

@Rycochet
Add the passive:true option manually (which isn't well supported, and prevents you from using useCapture on all the unsupported browsers).

Not true. There is a quite clean way to detect support for event listener options:

var hasListenerOptions = ( function() {
  var result = false;
  // The getter only runs if the browser reads properties off the third
  // argument, i.e. if it supports the options-object form of addEventListener.
  window.addEventListener( "options-test", null, Object.defineProperty({}, "capture", {
    get: function() { result = true; }
  }));
  return result;
}());

And from there it's a matter of engineering your site's code with an abstraction over addEventListener that can switch between using a listener options object, or the plain boolean for capture.
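A minimal sketch of such an abstraction, assuming the hasListenerOptions flag from the snippet above (the helper name on is hypothetical):

function on(target, type, listener, options) {
  if (hasListenerOptions) {
    target.addEventListener(type, listener, options);
  } else {
    // Older browsers only understand the capture boolean; passive is
    // silently dropped, which matches their (always-blocking) behaviour.
    target.addEventListener(type, listener, !!(options && options.capture));
  }
}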

Actually, for browsers that support addEventListener, it should afaik be possible to write code that supplants the built-in version of the method. So in theory, it should be possible to wrap the built-in methods that do not support event listener options objects with an additional layer that, when given an event listener options object, pulls out the value of the capture property and passes only that along as a boolean parameter.

Theoretically, that even gives you a drop-in solution, though of course this comes with its own performance trade-offs again.
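A minimal sketch of that globally applied patch, again assuming the hasListenerOptions flag above (and that the engine exposes EventTarget.prototype; very old engines may instead need Element, Document and Window patched individually):

if (!hasListenerOptions) {
  var nativeAdd = EventTarget.prototype.addEventListener;
  EventTarget.prototype.addEventListener = function (type, listener, options) {
    // Reduce an options object to the one flag old engines understand.
    var capture = (typeof options === 'object' && options !== null)
      ? !!options.capture
      : !!options;
    nativeAdd.call(this, type, listener, capture);
  };
}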


Come to think of it:
I could totally see someone develop a globally applied patch like this to undo the damage this intervention would do. All it would need to do is add an explicit passive:false for all browsers that support passive listeners...

If I were a betting man, I'd put my money on that happening and being adopted as a drop-in by both large commercial sites on-the-cheap as well as by plugin-heavy CMSes for convenience, well before any of those would ever even consider integrating and rolling out the proper fixes. And once that genie is out of the bottle, it is never, ever going back in.

I myself won't stoop to the level of actually publishing the required code to sabotage this intervention a priori, but I can imagine others in the nay-camp may have fewer scruples about doing so, if only to underline the futility of this intervention.

https://gist.github.com/Rycochet/6ac0380841debbb65f78d36711a0dafa (public domain, so don't want to paste the long code including unlicense header in here).

Unfortunately this change got added so will be around for several months at least. I'd rather have safe code out there able to fix it, than everyone having to develop their own workarounds. If I get the time this week I'll wrap a cut down version of this into a Chrome extension for developers and users of un-maintained websites.

Measuring the impact of an intervention definitely seems hard. In the Intent to Intervene email there are a few stats, like the fact that 2% of dispatched events are potentially affected. There is lots of discussion about tracking breakage, but no real numbers shared.

Did you look at this document that was linked in the intent to intervene? What more details can I provide? I checked the stats and they are stable, with 2% of page visits on Android still calling preventDefault with no touch-action.

Kendo UI, at least the version on their demo pages. demo

This isn't actually broken since the event listener is on the target (a div). What you are seeing in your video is the fact that devtools sometimes causes weird scenarios when touch is enabled without refreshing the page. If you refresh the page once enabling emulation mode you should see it works fine. If you are reproducing specific touch issues I recommend using a physical touch device as there are some weird side effects due to emulation.

There are a number of ways these issues would be addressed:
More evangelizing.
Warnings in the console (without breaking changes).
Warnings in devtools (as Chrome shows w/ forced layout)
Contributing to libraries, adapting them to the new APIs

All of these things we have tried since June last year. What are the recommended channels we should use to reach you? What evangelizing would you have liked to see, that we didn't do, to get your attention before this breakage? Have you tried Lighthouse? I think one problem is that if there is no developer, how does a site know something might be breaking? For document.write there is a proposal to send an HTTP header so the server can track issues, but again this requires someone actively monitoring logs.

touch-action: none isn't sufficient
I should re-iterate that using touch-action is preferred, as it declaratively indicates which sections of the page don't want scrolling to happen. Since this can be done entirely on the compositor thread, we know not to do scrolling when someone interacts with one of those regions. Adding "passive: false" causes us to think the whole document is slow, and it can be subject to the touch-ack timeout and/or main thread responsiveness interventions.

We are trying to make touch events much easier to reason about - specifically the touch-ack timeout, since doing things based on time varies a lot across devices. The top end of Android phones can be an order of magnitude faster than the shipping phones at the low end.

I've created an npm package, normify-listeners, to fix the breaking API change in Chrome 56.

There's also a higher-level package, normify, that can be used to pull in multiple packages. The idea is that if other issues come up in future in whatever browsers, then individual packages can be built around those specific issues. A dev can just call normify() to get all the fixes.

The package fixes the browser forcing passive: true by default, and it is flexible enough to work in future if more events are made passive by default in Chrome (e.g. wheel). It also makes it so you can use the options object with capture and passive in old browsers that don't support options. Old browsers will effectively ignore passive. This means you don't have to test for options support in all your code and the libs you use.

Of course I would prefer that Chrome followed the W3C spec for passive: false as the default, but workaround packages seem to be the only pragmatic alternative left to devs like myself.

I hope that packages like this one don't help to create the precedent that browser vendors can unilaterally break with standards with the expectation that devs will create workarounds. Browsers seemed to be really moving forwards by following standards better (even Microsoft's browser), but this change really feels like a step backwards.

@Rycochet

Unfortunately I can't personally provide URLs to any sites that I've developed hitting on this as they're education sites for Collins, Hodder and a couple of other smaller publishers (fortunately Pearson didn't need anything like this) - and all the content is under NDA and/or behind school portals that need access through their own sites directly. Currently that stands at ~10k individual sites (ie, not using shared JS/CSS files that can be easily fixed in one place).

How do you make this interoperable on IE and Edge, since they don't send touch events on desktop? If you have support for PointerEvents, then perhaps your user-agent check for pointer events is a little incomplete? We have long pondered whether we really should follow the same model Microsoft does, with only supporting touch events on the mobile platform and not the desktop platform.

If this observation has any merits. Maybe is there a way in which the fast-by-default behavior could only be enabled only inside such identifiable parent nodes, where it makes the most critical difference for users?

This is precisely what we've tried in this intervention. It currently limits the change in default passive behaviour to listeners bound to the document, window and body.

This is precisely what we've tried in this intervention. It currently limits the change in default passive behaviour to listeners bound to the document, window and body.

@dtapuska, is there any other way to limit this further? That is, check the body of the listener function to see if it either returns false, calls preventDefault on the event argument, or passes the event argument to another function?

@dtapuska Effectively a single delegated method called for both mouse and touch events. The first line effectively checks which type it is and gets the correct coordinates, then it calls preventDefault to stop the event from getting passed on (to "click", or "mouse*" if it's touch) as it does all the handling internally. There is absolutely no need to check for pointer events if I'm treating everything the same (and the UI needs to behave identically on both touch and mouse if you're targeting children aged 4+). This bug just means that the handlers will get called twice, breaking everything that expects it to be called once and cancelled, and using 2x the CPU (just glad that I don't write laggy code to begin with).

Interestingly this also means that any site that delegates against anchor clicks (ie, lightbox modal style) will no longer be able to stop the links from opening - meaning until they've patched to stop it, navigation is slower.

@Rycochet have you tried adding a touchend handler? Cancelling the touchend handler prevents the click event from being generated from touch.

@dtapuska ...So totally ignoring my first paragraph then I guess. Just checking what you're asking: "Why didn't you add something that's not needed as per the standards to prevent something that was already prevented by following the standards?".

@mikesherov

is there any other way to limit this further? That is check the body of the listener function to see if it either returns false, calls preventDefault on the event argument, or passes the event argument to another function?

Unfortunately static analysis of javascript is impossible because of eval; i.e., you can't compile the code and ask whether anyone in this code path ever accesses a prototype's preventDefault method. That would be really powerful, but you can't, because a dynamic string can be evaled at any time. That being said, we also debated whether to let the function run once (as normal) and then use the result for the next time. The problem with that is that it makes even the first interaction slow, and it is also subject to the touch ack timeout (which is something a lot of web developers aren't familiar with and is really hard to reason about). We really want to get content articulated correctly (with touch-action), because then we don't have to rely on these interventions to try to do the right thing for the user.

@Rycochet

Why didn't you add something that's not needed as per the standards to prevent something that was already prevented by following the standards?

The guidance in the standard is to use touchend. I believe example 5 matches the use case you are describing. Yes, I do feel your pain here, because earlier in the text it does say that touchstart and touchmove can be prevented and don't allow clicks. One idea in this area was to have a half-baked preventDefault that didn't prevent scrolling but did prevent the compatibility events. This felt like another wart, and we wanted to make the event as standard as possible: since it is non-cancelable, preventDefault does nothing, and content would have to move to preventing clicks via touchend.

You didn't answer whether your app works correctly on IE and Edge desktop.

eval() is evil, and sadly, checking one run to see if it's called doesn't mean a later run won't call it (due to anything really, from scroll position to the element under the pointer etc) - so the only static analysis that might work is "no code on this page uses eval at all" (or any of the various ways to get the same effect) :-/

If touchstart, touchmove, or touchend are canceled, the user agent should not dispatch any mouse event that would be a consequential result of the prevented touch event.

If, however, either the touchstart, touchmove or touchend event has been canceled during this interaction, no mouse or click events will be fired, and the resulting sequence of events would simply be:

Hence - a single handler, with preventDefault so it's not called again on the "fake" mouse events after the real touch events, and is only called on the real mouse events. Example 5 is simply one example showing a way to prevent it. You can also cancel the click event directly in a click handler - however the click delay on touch devices means a lot of sites in general sit on the touchend / mouseup events simply to prevent that delay (and rely on cancelling that for preventing the click event).
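A minimal sketch of that single-handler pattern (handleInput is a hypothetical app-level routine; note that touchend remains cancelable under the Chrome 56 intervention):

function handleInput(e) {
  var point = e.changedTouches ? e.changedTouches[0] : e; // touch or mouse coords
  // ... shared handling using point.clientX / point.clientY ...
  e.preventDefault(); // on touchend this suppresses the follow-up mouse/click
                      // events, so the handler never runs twice for one tap
}
document.addEventListener('touchend', handleInput);
document.addEventListener('mouseup', handleInput);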

There is no guidance to cancel any specific event; all touch* events are acceptable, as repeated twice in the spec. The only guidance in those examples is not to have two code paths if the browser can handle touch at all.

All 10k+ pages used common code that was tested in everything from IE8 and up (plus LTS Firefox, and even Safari on Windows) due to client requirements...

Perfect is the enemy of the good here, no?

You can check the direct handler, not even all code paths, to see if it uses eval, returns, or passes the event along. If it didn't use eval directly nor returned directly, nor passed the event along to another function, wouldn't you be guaranteed that it's not going to preventDefault?

To clarify, the following seems to be true unless I'm missing something:

// certainly won't cancel
body.addEventListener('touchstart', () => {
  doSomething();
});
// may cancel because doSomething may return false
body.addEventListener('touchstart', () => {
  return doSomething();
});
// may cancel because doSomething may preventDefault on e
body.addEventListener('touchstart', (e) => {
  doSomething(e);
});
// may cancel because eval could be anything
body.addEventListener('touchstart', (e) => {
  eval(someString);
});

What if doSomething called window.event.preventDefault() in your first example?

@mikesherov Sadly it's an "almost" - you could check for a literal "preventDefault" in the source, but there's nothing stopping the coder from doing event["prevent" + "Default"](). Unless you're going to do static analysis on every possible branch in the code, you're not going to know where it's been used - and some of the code branches might be deliberately (or accidentally) unreachable, causing false positives... I can't see there being any way for them to figure it out from code use.

@Rycochet @dtapuska window.event and magic event are non-standard: https://developer.mozilla.org/en-US/docs/Web/API/Window/event and don't work in FF. So in my first case, you're breaking something that is already 100% broken in Firefox.

@mikesherov Don't look at me there - among other things I use Typescript whenever I can on top of actually reading the MDN pages and specs ;-)

@Rycochet @dtapuska @RByers to be clear:

body.addEventListener('touchstart', () => {
  doSomething();
});

will not preventDefault() the event unless it's relying on non-standard magic event or window.event that is already broken in FF.

@mikesherov If there is no argument to the function, then you also need to check that arguments is not used anywhere. You also need to check that nobody is being bad and using the function chain to get hold of arguments (yes, it's not a good idea, but still possible). The unfortunate thing with edge cases is that they're still possible - it's why there are so many security advisories caused by people thinking "oh, I can't see that ever happening". I can say with certainty that I always use event as an argument (even when not needed - though UglifyJS takes care of that) - but I can't speak for anyone else's code unless I've had a hand in code review etc.

If broad criteria were used as a heuristic, the detection of preventDefault() use would likely not have to be perfect. It's more important to avoid false negatives than false positives. As long as the percent of false positives is relatively low in the set of all listener code on all the web, you get the passive performance improvement almost whenever possible and cases that need preventDefault() definitely don't break.

For instance:

  • preventDefault string in the callback or referenced functions => assume the handler calls event.preventDefault()
  • use of arguments in the callback => assume the handler calls event.preventDefault()
  • use of dynamic function names in the callback or referenced functions => assume the handler calls event.preventDefault()

You could probably come up with better criteria, but this gives you a sense of the idea.

I would bet that the false positive rate for broad, relatively simple criteria would be low enough in practice. This all assumes that the criteria are broad enough for a zero false negative rate.
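Expressed as plain JavaScript purely for illustration (a real implementation would live inside the engine, and handler.toString() can't see into referenced functions, so those would need to be counted conservatively as "may cancel"):

// Over-approximate: return true unless we're sure the handler can't cancel.
function mightCallPreventDefault(handler) {
  var src = handler.toString();
  return /preventDefault/.test(src)            // direct or computed-property mention
      || /\barguments\b/.test(src)             // handler may reach the event indirectly
      || /\beval\b|\bFunction\s*\(/.test(src); // dynamic code: assume the worst
}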

@mikesherov If there is no argument to the function, then you also need to check that arguments is not used anywhere.

@Rycochet still no, arguments would need to be passed downward.

// guaranteed not to cancel
body.addEventListener('touchstart', () => {
  doSomething();
});
// may cancel
body.addEventListener('touchstart', (...args) => {
  doSomething(...args);
});
// may cancel (note: `arguments` requires a regular function, not an arrow)
body.addEventListener('touchstart', function () {
  doSomething(arguments);
});

@mikesherov https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function

Function.arguments
An array corresponding to the arguments passed to a function. This is deprecated as property of Function, use the arguments object available within the function instead.

Pretty sure I saw it being used in a very bad way several years ago - really hope nothing uses it now, but legacy code etc...

and the css touch-action fix is not possible as it's a library, so it would need to use browser-detection for consistent behaviour

@Rycochet, sorry, I don't understand this. The library has code which is deciding when to call preventDefault within events dispatched to some target, right? Why can't that same library have code which does target.style.touchAction='none' (or whatever describes the use case)? Also, even if you DO need to set passive:false, you can still use feature detection, not browser detection.

Unfortunately I can't personally provide URLs to any sites that I've developed hitting on this ... I can say that the specific uses for this are for paged displays

Have you verified that they are indeed broken (not just generating a console warning but still behaving OK)? In most paged-display cases, the page isn't actually scrollable so there's no real harm in practice (except perhaps on ChromeOS which I believe has a default horizontal overscroll action for back/forward).

my biggest issue with it is inconsistency between browsers

@mawalker that's my personal top priority as well. I talked with all the other major implementors of touch events before making this change, and although they didn't all comment publicly (so I'm not going to repeat details of what they told me), I am confident that we'll ultimately get all browsers behaving the same and matching the spec. Otherwise (eg. after 2 years) I will consider this intervention to have failed and something that we should seriously explore undoing in Chrome. This is (roughly) how web standards evolve.

@RByers, ignoring whether or not it may be performant or difficult to do so, would it be possible to do some type of static analysis (or limited dynamic analysis) when a non-passive touch listener is bound to check if the Event Object is preventDefault'ed or passed to another function, and if so, assume preventDefault will happen?

@mikesherov, we did some experiments with this years ago (we've been fighting this general problem for five years now, so we've tried a LOT). My conclusion from the research done was that, no, the design of JavaScript and today's frameworks just make such analysis impossible in the general case (i.e. >50% of the cases that occur in practice). Eg. consider a page with a image carousel at some point on it listening to events via jQuery event delegation. You really have to step through all the jQuery event delegation code (or bake in some magic knowledge of how jQuery works) to know whether the touchmove is going to land in a handler that's going to call preventDefault or not. We tried a bunch of heuristics, but none of them were at all reliable in predicting whether preventDefault would be called. Even if we found some magic formula, we'd almost certainly never be able to standardize it and get other browsers to implement it since it would be all heuristics and not really rational platform design.

It has been less than a year since they appeared, and the API still isn't supported by Edge or iOS Safari

Actually, caniuse is wrong about mobile safari - they were eager to get it into iOS 10. Submitted a PR to correct caniuse.

Warnings in the console (without breaking changes).
Warnings in devtools (as Chrome shows w/ forced layout)

Chrome has had performance warnings along those lines in devtools (red dog ear) for this sort of problem for years, and very specific console warnings for this ever since passive event listeners shipped:

[screenshot of the console warning]

In general we've found they make very little difference (though perf tools in general are still a big area of investment for us, we feel it's critical that motivated developers do have powerful tools to help them improve performance when they're willing to invest to do so).

Contributing to libraries, adapting them to the new APIs

Yep, we've put a lot of time into this over the past year too. Sometimes with success but mostly not.

@mixonic thank you for your thoughtful and well-reasoned comments! It's definitely too late to reconsider this decision for Chrome 56 (it was done), but undoing this change in a future version of Chrome is definitely still an option.

Google Analytics, when dragging segments around. video

Yep, this indeed seems broken when used in combination with pinch-zoom. I've filed an internal bug (35444264) for the team to suggest adding some touch-action: none rules.

Kendo UI, at least the version on their demo pages. demo

Thanks. Filed issue here.

Finally, I want to question some of the "support" for this change that has been touted:

This is fair criticism IMHO. We really don't have more than anecdote (and our metrics) on the "support" side, and no precise way to quantify the breakage. I'd love to have a more quantifiable way to make these sorts of trade offs. It looks like a search for the chromestatus string that appears in the console is a better way to get a sense of where this is causing trouble. The results from GitHub issues are certainly sobering - going through them all now.

However I don't believe this change is in the HTML5 spec, and making such a dramatic change without that kind of agreement, or at least without showing progress, seems unwise. I'd welcome some links to clear up the standards process here.

Actually, the touch events spec is intentionally hand-wavy on this because most browsers have long relied on heuristics to decide when to block scrolling on touch events:

Canceling a touch event can prevent or otherwise interrupt scrolling (which could be happening in parallel with script execution). For maximum scroll performance, a user agent may not wait for each touch event associated with the scroll to be processed to see if it will be canceled. In such cases the user agent should generate touch events whose cancelable property is false, indicating that preventDefault cannot be used to prevent or interrupt scrolling. Otherwise cancelable will be true.

But this isn't really fair - I'm an editor of this spec, and I wrote this wording myself :-). The standards work here definitely wasn't cut-and-dry, especially since the engineers for the main other browser relevant for this problem (Safari) aren't allowed to talk about touch event behavior publicly or participate in the standard. Standards are definitely very important to us, but the standards process is not as simple as you imply - it's an iterative process with implementations. There was by no means consensus from all the engines that this is the right choice. But I will say this, if this spec issue is still open in a year (or Chrome is still doing something here not explicitly described by the spec), then I have personally failed the open web and you should publicly shame me for it.

Has anyone done some profiling of the (bad site's) event code to try to figure out what's actually causing the slowdown problems and looked for consistencies there?

Yep, we've had dozens of different engineers analyzing the scroll performance of hundreds (if not thousands) of different sites suffering from this problem over the past 5 years (here are a few examples). To give you some idea of the scale of investment here on the Chrome team - there are two separate teams (input-dev and scheduler-dev), totaling a couple dozen engineers, who have for years used this metric (how responsive is scrolling on Android) as their #1 measure of progress. In general what we find is that it's almost never the touch handlers that are slow, but that the thread is just busy doing other things - 100s of milliseconds of JS framework initialization, huge DOM operations, etc. Here's one (of many) overview talks one of our engineers did investigating such real-world performance problems and identifying the specific guidance we should be giving developers to avoid them. Features like Intersection Observer have come out of such analysis. So the fundamental conclusion we've come to is that most (but certainly not all) websites are just not built with an architecture that allows the main thread to reasonably drive animations on mobile. For most websites, all animations should run in parallel to the main thread on the compositor thread. In such a world, touch events blocking scrolling by default is just a design flaw that needs to eventually be fixed.

I myself won't stoop to the level of actually publishing the required code to sabotage this intervention a priori, but I can imagine others in the nay-camp may have fewer scruples about doing so, if only to underline the futility of this intervention.

If people really find using such a global opt-out to be the best solution for their site, that's completely fine with me - I don't consider it "undermining" at all. In fact that's the whole reason I introduced the passive option in the first place - to let developers opt-out of what I long thought was likely to become our default (being "avoidable" is a core part of our intervention guidelines). I still expect it'll be rare for developers to opt-out this way when touch-action is usually simpler/easier AND provides a performance benefit. Worst case, if much of the web gets janky again due to passive:false being used everywhere, then clearly we screwed up and will need to explore other options.

however the click delay on touch devices means a lot of sites in general sit on the touchend / mouseup events simply to prevent that delay (and rely on cancelling that for preventing the click event).

Note that for over a year, the click delay has been essentially gone from the latest version of all major browsers, and for years before popular libraries like fastclick.js would disable themselves in Chrome/Firefox on mobile pages known not to have a click delay. Listening to touchend universally to avoid the legacy click delay is now pretty much an anti-pattern (as it causes a number of other problems).

Why can't that same library have code which does target.style.touchAction='none' (or whatever describes the use case)?

That's sort of like saying "jQuery should check if you're adding a touch* event and calling preventDefault, and if so then add touch-action:none, and if it's using a delegate then the behaviour might change..." - The question isn't so much "is it possible to work around this," as "how best to work around this." I.e., the same reason that this can't check for it ;-)

The library that triggered my google search was FabricJS - which is basically a canvas drawing library - and as it includes drag&drop within the canvas, it needs to attach listeners to the body when dragging. This means that it's got to change the touch-action style on the fly, and also that it needs to cache the original value - and it has no clue whether something else also changes that while it's active and dragging (whether removing it, or adding it) - so the state can go wrong. If there are two libraries needing to do things at the same time then they can potentially interfere with each other (as there's no "stack" of css changes that can be undone). It could add a className instead - but then you have to be careful not to duplicate classes by accident and end up with the wrong styles attached (at least it's not touching the target.style attribute) - you just need to add a css file (or inline style element).
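For illustration, a minimal sketch of that class-based variant (all names illustrative, and assuming a stylesheet rule like `.fabric-dragging { touch-action: none; }` has been added somewhere):

```js
// Toggle a class instead of mutating target.style.touchAction directly,
// so there's no original inline value to cache and restore.
// classList.add/remove are idempotent, so duplicate classes can't pile up.
function onFabricDragStart() {
  document.body.classList.add('fabric-dragging');
}

function onFabricDragEnd() {
  document.body.classList.remove('fabric-dragging');
}
```

One nice side effect: each library can use its own class name, so two libraries toggling touch-action at the same time no longer clobber each other's cached state.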

Also, even if you DO need to set passive:false, you can still use feature detection, not browser detection.

Mostly we're talking about how much this has broken existing sites - and if they need to run on IE8/9 then feature detection is often a little worrying - though working feature-detection code was posted earlier in the thread.
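For reference, the detection pattern in question looks roughly like this (a sketch of the widely circulated approach, not the exact code posted earlier):

```js
// Detect support for the options object (and thus `passive`):
// the getter only runs if the browser actually reads the option.
var supportsPassive = false;
try {
  var opts = Object.defineProperty({}, 'passive', {
    get: function () {
      supportsPassive = true;
      return false;
    }
  });
  window.addEventListener('testPassive', null, opts);
  window.removeEventListener('testPassive', null, opts);
} catch (e) {
  // Very old browsers may throw here; supportsPassive stays false.
}
```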

Have you verified that they are indeed broken (not just generating a console warning but still behaving OK)? In most paged-display cases, the page isn't actually scrollable so there's no real harm in practice (except perhaps on ChromeOS which I believe has a default horizontal overscroll action for back/forward).

Tested - and it gives slow / jerky sliding of horizontal pages (as the first touch event isn't being cancelled, so it's duplicating the call and running on mouse events as well). CPU use has obviously doubled (though that's not a significant amount to begin with).

Chrome has had performance warnings along those lines in devtools (red dog ear) for this sort of problem for years, and very specific console warnings for this ever since passive event listeners shipped.

Those warnings only ever show up for pages that actually took too long - so developers who take the time and effort to do things "better" (i.e., knowing that something might take a while, so it's not on an event, etc.) never saw them: the downside of warning about problems is that the warning never gets seen when the problem doesn't occur. (E.g., the main-thread AJAX warning was useful to me - when I had to do something that would have benefited from doing it the "wrong" way, I knew not to.)

In fact that's the whole reason I introduced the passive option in the first place - to let developers opt-out of what I long thought was likely to become our default.

I love the idea of it and if I only had to support browsers that supported it then I'd be making far more use of it - as it is I use a setImmediate() polyfill instead ;-)

Note that for over a year, the click delay has been essentially gone from the latest version of all major browsers.

I (and a great many content creators / contractors) still have to support older browsers - generally that means a latency of at least 3-5 years with a couple of exceptions (I have to support the iPad2, which means iOS9, for at least the next couple of years. I've only recently been able to take the iPad1 / iOS5 off the specs, and finally being able to drop IE10 and below from future projects is such a relief!).

Tested - and it gives slow / jerky sliding of horizontal pages (as the first touch event isn't being cancelled, so it's duplicating the call and running on mouse events as well). CPU use has obviously doubled (though that's not a significant amount to begin with).

Crap, sorry to hear it's actually broken. But the page doesn't support scrolling, right? I.e. it's not scrolling the page at the same time as moving your slider? That shouldn't happen - if you can get me a repro then it's possible there's a Chrome bug we could fix here. Note there are no mouse events sent during a drag (mouse events are sent only after a tap or long-press, but pointermove is sent during a drag). The jerkiness you're seeing is probably because touchmove events in Chrome get throttled to 10Hz when there is an active scroll (but that's supposed to happen only when something is actually scrolling; once it hits the scroll extent you should get back up to 60Hz). The real problem though should be double handling - moving two things at once (page + widget) for a single gesture.

Those warnings only ever show up for pages that actually took too long

That's a good point. Once we knew we were going to do the intervention, we should have had another warning saying never to rely on preventDefault in this case. We're generally pretty good about doing that for interventions / removals; I think we just missed it this time. I'll try to make sure such a mistake doesn't happen again.

I love the idea of it and if I only had to support browsers that supported it then I'd be making far more use of it - as it is I use a setImmediate() polyfill instead ;-)

You mean you're using setImmediate to get the same benefit as passive listeners? That won't work. Almost all of the benefit of passive listeners is not because the handler itself is slow, but because there are other things running on the thread that are slow. By the time the handler is invoked at all, the damage is done. Why not just use feature detection to pass passive:false when on a browser that supports it?
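For illustration, using a supportsPassive flag from feature detection like the sketch above (element and handler names here are hypothetical):

```js
// Pass an explicit { passive: false } where the options object is
// understood; fall back to the legacy useCapture boolean elsewhere.
slider.addEventListener(
  'touchmove',
  onTouchMove,
  supportsPassive ? { passive: false, capture: false } : false
);
```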

I am against this course of action. r56 has broken a lot of my applications. event.preventDefault() gave me a lot of control over touch events in highly interactive applications.

@RByers It's a private repo I can't share - plus about 100k lines of Javascript including a complete IDE - I can email an exported activity so you can see it, and share the specific section of code if it would be helpful. Basically it's horizontal scrolling, but only the active "page" and the ones directly to the left and right are visible - and it manipulates the .scrollLeft. I did just test with a different export - and it seems that having an animation running (CreateJS) makes it obvious (and only when in Device mode on desktop - as I don't have an Android device to test on), so that 10Hz thing might be related.

I think the warning should have been put in for both mouse* and touch* handlers that took too long - that way developers would have had fair warning, and it would have shown up on desktop devices where there's a console open (I doubt there's much Device testing going on beyond "does this css look right at this resolution"). Hindsight ;-)

Because I'd need to double lines to support useCapture as well as passive:true. The only reasons I use setImmediate or requestAnimationFrame are because either a. I know this will take a while or b. I know this can be triggered more often than I actually need it to run. I started coding properly with asm and then C on the Amiga - so tracking resources and potential bottlenecks is second nature ;-)

Just wanted to report this broke New Relic as well.

@RByers is there any other way out of this? A lot of web apps are broken by this intervention. Even Google's very own apps (Gmail and Analytics). I can imagine how many other apps are going to be affected. If you look at iOS, there is no such intervention, yet scrolling through a web page is definitely way smoother than the experience we have on Android. It's unfair to release such an intervention because of the mediocre experience caused by Android. You should probably allow content/ad blockers on Chrome if you really want to bring it to the next level of superior web experience, just like iOS did.

I'm sorry, just saying. Google will probably fire you if you dare to block Google ads on Chrome.

@rickbias
is there any other way out of this?

It is possible to globally overwrite the addEventListener method and forcibly pass { passive : false } options when the passive flag isn't explicitly specified by the caller. That forces Chrome's behavior back to aligning with spec.
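A minimal sketch of such an override (illustrative only; a production version would need to handle more edge cases):

```js
// Wrap EventTarget.prototype.addEventListener so that touch listeners
// which don't specify `passive` are registered with { passive: false },
// restoring the pre-intervention (spec) default.
(function () {
  var touchEvents = ['touchstart', 'touchmove', 'touchend', 'touchcancel'];
  var original = EventTarget.prototype.addEventListener;

  EventTarget.prototype.addEventListener = function (type, listener, options) {
    if (touchEvents.indexOf(type) !== -1) {
      if (options === undefined || typeof options === 'boolean') {
        // Preserve the legacy capture flag while forcing passive: false.
        options = { capture: !!options, passive: false };
      } else if (options && options.passive === undefined) {
        options = Object.assign({}, options, { passive: false });
      }
    }
    return original.call(this, type, listener, options);
  };
})();
```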

Earlier in this thread, @maxkfranz already linked to the normify-listeners package that handles this, but that only works for site developers, not for users suffering from broken sites. Maybe someone can cook this package into a Chrome extension?

Admittedly, it'd be a lot better if Google would get off their performance-oriented high horse, admit they screwed the pooch on this one, and at the very least surface a user-reachable opt-out for this intervention.

Would it also be possible to lazily determine passiveness, @RByers?

That is to say, assume passive, record the scroll position at the beginning of every touch event, and if the listener eventually calls preventDefault, restore the scroll position to the correct location, and no longer consider the listener passive?

@mikesherov That's unlikely to fix many broken sites - as it's the chained event handlers firing that cause breakages rather than simply scrolling issues :-/

@mikesherov
That would result in guaranteed awkward stutter as opposed to potential jank, wouldn't it?

@Rycochet I disagree that it won't help; unless you have evidence, we're both conjecturing.

@rjgotten, no. You'd get passive perf immediately until preventDefault gets called; then you get a single frame of stutter followed by no stutter, because once the first preventDefault happens, the listener would no longer be passive.

FWIW the main frame can run many frames behind the compositor thread, which can lead to a jarring user experience if the page jumps around under the user after some delay. Trying to reset just the scroll position is pretty naive, as there may be nested scrollers on the page as well. But largely, if in all of these cases the code is calling preventDefault consistently, it is far better to annotate the page with touch-action. From some of the comments I'm not sure people understand how threaded scrolling works; the gist of it is that there is a thread that can respond to input events, move textures around, and update the display (but only if it knows there are no touch listeners, or that they are all passive). Otherwise it needs to go to the main thread where JavaScript runs, and that is typically a slow thread where events queue up waiting to run.

@dtapuska sure. How about just changing the mode from passive to not passive when preventDefault is called the first time?

The situation as it stands is many existing sites are just straight up broken including anyone using fastClick, iScroll, etc.

Certainly any mitigation for this intervention is a win. Fine, if it's intractable to attempt to restore nested scrollers, then don't do it. But not reacting to preventDefault at all seems worse than any of the mitigations being discussed here.
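For what it's worth, the lazy-switching idea can even be approximated in userland today - a rough sketch, with illustrative names (the first preventDefault attempt is still lost, which is the single stutter described above):

```js
// Register passively, watch for a preventDefault attempt, then
// re-register the same wrapper non-passively so later events are
// cancelable.
function lazyPassiveListener(target, type, handler) {
  var passive = true;
  function wrapped(event) {
    var originalPreventDefault = event.preventDefault.bind(event);
    var attempted = false;
    event.preventDefault = function () {
      attempted = true; // a no-op while the listener is still passive
      originalPreventDefault();
    };
    handler.call(this, event);
    if (attempted && passive) {
      target.removeEventListener(type, wrapped);
      passive = false;
      target.addEventListener(type, wrapped, { passive: false });
    }
  }
  target.addEventListener(type, wrapped, { passive: true });
}
```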

@mikesherov Of the around 10k pages I'm responsible for that are affected, ~2% have anything to scroll (which is not being handled by the browser anyway), 100% are using the events to provide UX that is now (visibly) suffering from the problem that this intervention has fixed on badly coded websites.

@rjgotten

It is possible to globally overwrite the addEventListener method and forcibly pass { passive : false } options when the passive flag isn't explicitly specified by the caller. That forces Chrome's behavior back to aligning with spec.

This is exactly the worst possible outcome that the Chrome team developers are trying to alleviate. It completely unsets the intervention.

@dtapuska

But largely, if in all of these cases the code is calling preventDefault consistently, it is far better to annotate the page with touch-action.

The touch-action property itself is not extensive enough. What if I would like the browser to still handle scrolling, but also to call preventDefault in certain scenarios? Should I completely disable the browser's handling of scrolling and implement my own fake inertia scrolling, which would allow me to keep using preventDefault?

~2% have anything to scroll (which is not being handled by the browser anyway), 100% are using the events to provide UX that is now (visibly) suffering from the problem that this intervention has fixed on badly coded websites.

I'm not sure I understand what's being said here. @Rycochet, are you saying that 2% are using preventDefault and it's causing the performance issue on those pages?

I suppose what I'm saying is that those authors, by using preventDefault, intended for a certain UX and decided that preventing those events provided the UX they intended. Many sites on the net are broken from this intervention, according to those authors. Many of them were compensating for other missing features of the web platform at the time. To say they are "badly coded" is to rewrite history.

Unless you're suggesting using preventDefault is always wrong, then we should be discussing ways to mitigate the breakages caused by this intervention.

A possible way to get perf by default and possibly have the behavior originally intended by the author is to switch from passive to not passive once preventDefault is detected. Yes, it would cause a stutter where it affects scrolling, but at least it would restore the site's intended behavior according to the author, instead of being just straight up broken according to the author.

Breaking the web, if it's going to speed up a whole bunch of pages, I suppose is acceptable, but we should still do everything we can to not break pages that rely on certain non-performant web platform features.

@mikesherov No: ~2% of the pages have any scrollable areas at all (vertical scroll in those cases), but 100% of them override all touch (and mouse) events to handle scrolling manually for them, as well as providing a horizontal "page turn" type effect. As stated earlier in the thread...

The people using preventDefault knew what they were doing - and are the ones who have been badly affected by the change (you might want to read the entire thread again). This "fix" is for the sites that have been doing too much in the event handlers and causing the jerky scrolling (and also not calling preventDefault - or the collected stats wouldn't have led to the conclusion that this might be a good idea).

"Breaking the web", especially going against the standards, is not acceptable. It doesn't matter that the intentions were good - it was handled in a seriously short-sighted way (as acknowledged earlier in the thread). You know what would speed up the web even more? Blocking adverts. (Oh wait, I already do that, but I don't have an issue with speed). How about reducing image sizes on smaller screens? (There's already a standard for that - if people follow it). Final option - simply disallow Javascript...

The "correct" fix for this is for the people putting together badly coded sites to simply fix them. As we can pretty much guarantee that they won't do that (see earlier in the thread for incentives etc), we've ended up in this mess - which has the obvious conclusion that web developers who need touch events will simply patch Chrome to make it follow the standards again.

@Rycochet I don't get it - are you for this intervention or against it? I honestly can't tell from your responses.

My point is simply that if we're forced to keep this intervention, which breaks the web hard, then perhaps there's a middle ground that breaks it a bit less, rather than asking every web dev in the world to go fix sites that were working to their specs for their clients before this intervention.

@mikesherov Against - though as it's already happened it will be around for several years even if it's reversed, I've simply taken to adding code to everything I do to reverse this change (the first code in this thread to reverse it is a gist posted by me).

The problem with the change is that their sites are still working as per the specs - which now means not working the same in Chrome. The change has literally only affected people following the specs - all the end user will get is either slightly more responsive scroll on some badly coded websites, or broken behaviour on web apps (and some JS libraries). The end users will blame the sites that have been broken rather than Chrome for breaking them, and probably won't notice that touch scrolling is slightly smoother (basic psychology - people notice bad UX, but forget about previous issues when they're not bad any more).

The only other intervention of this magnitude was the "no synchronous AJAX on the main thread" one - which loudly spammed the console for a long time. As this was for touch events only, and very few developers see a console when debugging on devices, it meant that for the short time there was a warning it got no publicity or visibility among the very people who would have jumped on it before the change was made...

@Rycochet

all the end user will get is either slightly more responsive scroll on some badly coded websites, or broken behaviour on web apps (and some JS libraries). The end users will blame the sites that have been broken rather than Chrome for breaking them, and probably won't notice that touch scrolling is slightly smoother (basic psychology - people notice bad UX, but forget about previous issues when they're not bad any more).

Exactly. So much for this intervention. This is also what we noticed. On top-end devices - Nexus 6P, Samsung S7, etc. - you can't tell if the scrolling is any smoother. I guess this solution was aimed at those 2% of sites on those 20-30% of low-end devices. That's pretty low for such a massive intervention.

@rickbias
This is exactly the worst possible outcome that the Chrome team developers are trying to alleviate. It completely unsets the intervention.

Worst for Chrome, maybe. However, it's best for developers who are already performance-aware, but are up against tight deadlines and budget restrictions, or have to deal with application architectures that just cannot be refactored to deal with this.

Pardon the rudeness, but as far as I'm concerned that's a matter of "tough shit" for Google.

You just broke http://foliotek.github.io/Croppie/. Thanks. 👎

And pinch-to-zoom on Chrome 58.x.x broke. You can't two-finger pinch-zoom even on Google Maps on a Chromebook with a touch screen. Serves them right! So much for improving user experience when you break things all the time.

@rickbias pinch zoom is fixed; see crbug.com/695905. It was broken due to stylus support - completely unrelated to this.

I hope this is constructive;

I just spent inordinate amounts of time fixing up performance and legacy code on my company's website. Huge overhaul with first-paint and mobile/tablet touch-perf as the KPI. All doc/body listeners gone. All scroll listeners gone. All click events have a counterpart touch event to make it feel good on touch devices and in webview (no fast-click, all custom). Removed so much conditional browser logic in favour of modern, unified APIs. The perf and responsiveness is great because I cared about it. However, a requirement was to use jQuery and some particular plugins.

Now I need to add a "poly-unfill" to opt-out of this browser change to make sure the site behaves as expected across devices (by the way, luckily I was using Safari to demo to my stakeholders the day after 56 dropped!). Extra bytes, extra maintenance, extra debt. Gonna have to document for future developers why this is important, as well.

eStores with intricate carousel, zoom, and product-display features get broken... not to mention delicate product-listing pages with touch-scroll handlers... A customer is going to have trouble making up their mind to buy something if they're getting sick/dizzy from messed-up touch interactions while inspecting the details! That's potential bottom-line damage, and it's people like me who get 💩'd on because of it.

eStores rely on libraries (jQuery for example) that are ancient, and vendor code... how on earth do you expect a small dev team to fix this kind of stuff when they have dependencies on those ancient javascript libraries, and barely-supported plugins for aforementioned carousels, zooms, etc? Please forgive me if I'm wrong, but in most scenarios (~80% of the top million sites use jQuery) there's absolutely no possible way for a developer to set passive to false without @Rycochet's poly-unfill, or something similar?

With evergreen power, comes evergreen responsibility!

Just my penny's worth.
Apologies if I shouldn't have left this here.

The emails I will get with: "WHY IS THIS BROKEN?"... 😞 It's embarrassing.

Please forgive me if I'm wrong, but in most scenarios (~80% of the top million sites use jQuery) there's absolutely no possible way for a developer to set passive to false without @Rycochet's poly-unfill, or something similar?

You're right; there is not. That's one of the major issues with this type of broken intervention that Google's engineers casually stepped over. Their proposed solution to that predicament amounts to a meme-tastic "patch all the libraries!"

@simeydotme Can you elaborate why touch-action didn't work for your jquery plugins? Is the site externally visible?

@simeydotme it looks like Google engineers are trying to tell us to take control of our own JavaScript libraries. Stop relying on jQuery. Start giving a damn about user experience. Ahem, and take care of the experience on slow Android devices. As developers taking advantage of the platform born from the hard work of Google engineers, we've got to pay our share. It doesn't seem that they are going to revert this intervention anytime soon. Good luck.

@rickbias @simeydotme touch-action is a CSS property and is very applicable to sites that are using jQuery. You don't need a way to set passive: false provided you set the correct touch-action for the behaviour you want on the targeted elements. The question is whether touch-action describes the behaviour you want adequately. We know there are some deficiencies in the touch-action definition and want to be educated about the cases where it doesn't work for developers.
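To make that concrete, here's a sketch (selector and values are illustrative): a horizontal carousel that drives its own horizontal swipes in JS, while leaving vertical scrolling and pinch-zoom to the browser, could be annotated like so - no preventDefault, and therefore no passive/non-passive problem, required:

```js
// Equivalent to the CSS rule `.carousel { touch-action: pan-y pinch-zoom; }`.
// The browser keeps vertical panning and pinch-zoom; horizontal touch
// moves are delivered to the page's own handlers without racing a scroll.
// (The pinch-zoom keyword is relatively new; older browsers ignore it.)
var carousels = document.querySelectorAll('.carousel');
for (var i = 0; i < carousels.length; i++) {
  carousels[i].style.touchAction = 'pan-y pinch-zoom';
}
```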

I hope that my involvement here shows we do care about the issues web developers face.

thanks @dtapuska for the response! :)
I will do my best to see if the CSS properties can help in my current scenarios. But I am not confident. And I really appreciate the time taken to respond to individual comments. I can't imagine the amount of fires that need to be put out :P

I am not able to give an immediate response about the viability of the CSS touch properties, but upon reading https://developers.google.com/web/updates/2016/10/pointer-events quite carefully, it's a little misleading as to what I really need to do. In my most pressing scenario I have a standard-ish carousel of product images that needs to block users vertically scrolling the page while it's being interacted with. That's default functionality which seems to be broken in Chrome. Not only that; I am using native pinch-zoom to allow users to view the details of the images at will (on trackpad/screen), and finally I need to capture touchend/click to allow the user to throw the image into a full-screen modal on larger viewports. But this is supposed to work flawlessly across device/screen. And of course I don't have the capability, or time, to remove libraries and write my own carousel.

Currently I haven't had a chance to properly investigate the Chrome 56 UX on touch-screen laptops or Android tablets, but while I have my devtools emulator open, I'm witnessing thousands of warnings in my console during normal use :( You'll understand as well that I have this on internal testing servers right now, so I'm not able to share.

I will come back as soon as possible with my results, and see if my findings can help others :) I didn't mean to ask for help/troubleshooting here, I wanted to express the thoughts of a slightly-privileged developer who builds things to get paid, and whose life is affected by things like this -- the feeling I had trouble describing in my previous comment was: in one way, it feels like all the corners that other developers and organisations have cut in perf/ux for their customers have been justified, and I got screwed for it. :/

Note that for over a year, the click delay has been essentially gone from the latest version of all major browsers, and for years before popular libraries like fastclick.js would disable themselves in Chrome/Firefox on mobile pages known not to have a click delay. Listening to touchend universally to avoid the legacy click delay is now pretty much an anti-pattern (as it causes a number of other problems).

Side point: True for the click event but there is still a discrepancy between touchstart and the CSS :hover state being activated - I made a library to get around it

I tried installing my custom Pi-hole DNS, which blocks a lot of ad content (Google ads included), and I can totally tell that it improves scrolling performance a lot. Sites load fast. Content is almost always immediately scrollable. Maybe the Chrome team should consider some kind of content blocker for the Chrome browser.

Maybe people can ditch Chrome and start using Brave.