TypeStrong / atom-typescript

The only TypeScript package you will ever need

Home Page: https://atom.io/packages/atom-typescript

atom-typescript has become slow

samu opened this issue · comments

commented

This package has become really slow. Autocomplete often takes a few seconds to load. Highlighted compile errors take a while to disappear once fixed. Is anybody else experiencing this?

I tried ide-typescript for Atom and also VS Code, and both are much faster, so I doubt it has anything to do with the size of the project I'm working on.

@samu I have experienced those as well. In the last version, we fixed some parts of it in #1554.

The main reason that atom-typescript is slower is that it directly uses the TypeScript service instead of using atom-languageclient. ide-typescript, for example, does that.

I also updated the linter and linter-ui-default packages recently, which brings huge performance improvements, so definitely update to the latest versions.

Atom-typescript should start using that package too (which is optimized for Atom) and move any missing parts into atom-languageclient (our fork is here: https://github.com/atom-ide-community/atom-ide-languageclient).

In TypeScript 4.0, the order in which a project is processed has changed so that the currently open editor is processed first (it is concurrent):
https://devblogs.microsoft.com/typescript/announcing-typescript-4-0/#partial-editing-mode

But atom-typescript has not implemented that yet. For now, you should disable this option:
image

The main reason that atom-typescript is slower is that it directly uses the TypeScript service instead of using atom-languageclient. ide-typescript, for example, does that.

This is simply false. atom-languageclient is a relatively bloated wrapper around anything implementing Microsoft's Language Server Protocol. And surprise, surprise: TypeScript doesn't support LSP, so one would need yet another wrapper (all compatibility problems aside) to translate from TypeScript's API to LSP and back. Guess which has more overhead: talking to tsserver directly, or talking to tsserver through two wrappers?

As for Atom-TS recently becoming slow, I have an inkling of what's going wrong. Upstream recently broke the Node event loop, so it tends to go to sleep at the most inconvenient moments, and tsserver interaction is done through Node's child_process.
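
For context, a minimal sketch of what "directly talking to tsserver" through child_process looks like (a sketch, not Atom-TS's actual client code; the file path is a placeholder and the response parsing is deliberately naive):

    import { spawn } from "child_process"
    import { createInterface } from "readline"

    // tsserver is spawned once and kept alive; restarting it per request
    // would be prohibitively slow.
    const tsserver = spawn("node", [require.resolve("typescript/lib/tsserver.js")])

    let seq = 0
    // Requests are newline-delimited JSON objects written to stdin.
    function send(command: string, args: object): void {
      tsserver.stdin.write(
        JSON.stringify({ seq: seq++, type: "request", command, arguments: args }) + "\n"
      )
    }

    // Responses arrive on stdout as JSON bodies prefixed by Content-Length
    // headers; skipping non-JSON lines is a crude but workable way to read them.
    createInterface({ input: tsserver.stdout }).on("line", (line) => {
      if (line.startsWith("{")) console.log(JSON.parse(line))
    })

    // e.g. open a file, then ask for diagnostics:
    send("open", { file: "/path/to/file.ts" })
    send("geterr", { files: ["/path/to/file.ts"], delay: 0 })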

@samu, if you could try v13.9.3 and see if it helps any, that would be nice. Thanks in advance.

The main reason that atom-typescript is slower is that it directly uses the TypeScript service instead of using atom-languageclient. ide-typescript, for example, does that.

Here I meant that the Atom part of the wrapper (the code that controls the editor, pushes the errors, etc.) is optimized, not the client itself. Anyway, I would like atom-typescript to move its code into other packages so every language can benefit from it! If something is missing in languageclient, we should add it from this repository. This will make atom-typescript simpler and also let everyone use its features.

@samu, if you could try v13.9.3 and see if it helps any, that would be nice. Thanks in advance.

I can't install!


npm WARN enoent ENOENT: no such file or directory, open 'C:\Users\aminy\AppData\Local\Temp\apm-install-dir-2020826-13988-cmm2ck.60x9\package.json'
npm WARN apm-install-dir-2020826-13988-cmm2ck.60x9 No description
npm WARN apm-install-dir-2020826-13988-cmm2ck.60x9 No repository field.
npm WARN apm-install-dir-2020826-13988-cmm2ck.60x9 No README data
npm WARN apm-install-dir-2020826-13988-cmm2ck.60x9 No license field.

npm ERR! code ENOENT
npm ERR! syscall chmod
npm ERR! path C:\Users\aminy\AppData\Local\Temp\apm-install-dir-2020826-13988-cmm2ck.60x9\node_modules\atom-typescript\node_modules\typescript\bin\tsc
npm ERR! errno -4058
npm ERR! enoent ENOENT: no such file or directory, chmod 'C:\Users\aminy\AppData\Local\Temp\apm-install-dir-2020826-13988-cmm2ck.60x9\node_modules\atom-typescript\node_modules\typescript\bin\tsc'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent 

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\aminy\.atom\.apm\_logs\2020-09-26T11_22_29_505Z-debug.log

Try again, check your cache. I mean, seriously, this looks like apm failed to download the package but tried to install it anyway.

Hmm. After clearing my temp files twice it finally worked. That was strange. I had not seen it before 😄

commented

Thanks for your replies. I have tested the latest version but don't see any improvements. Here's a video showing a real-world scenario:

slow

You can see two moments where it's pretty slow: waiting for the autocomplete after typing merge, and waiting for the red underline to disappear after typing item => item.

Here's what it looks like with ide-typescript:
fast

Okay, so apparently my guess wasn't correct (or not entirely correct). @samu, can I ask you to debug the issue a little?

First, it would be helpful if you could answer some questions:

  1. What Atom version are you using?
  2. What TypeScript version does Atom-TS find? (it's displayed in the status bar at the bottom when a TypeScript file is open)

Please also post relevant Atom config (Edit -> Config..., then anything under atom-typescript and autocomplete-plus). Feel free to sanitize any data you find sensitive (like paths). Also, if your project happens to be open-source, a link would be helpful so I can run some tests on my side.

Now, for actual debugging, I'm curious to see what exactly is taking up the time. So if possible, I'd like to ask you to do the following:

  1. Open your project in Atom
  2. Close all open TypeScript editors
  3. Open the Atom dev. console (View -> Developer -> Toggle Developer Tools -> Console)
  4. Execute window.atom_typescript_debug = true
  5. Clear the console (Ctrl+L, the clear button in the top left of the console window, or the context menu; I'm certain you know the drill)
  6. Open a TypeScript file and repeat your autocompletion test
  7. Post the console output (you can save it by opening the context menu on an output line and choosing 'Save As...'). Again, feel free to sanitize any data you find sensitive (like paths)

Thanks in advance.

commented

@lierdakil thanks for the instructions. Your help is highly appreciated.

  • Atom version is 1.51.0
  • TypeScript version is 4.0.2

My config is quite underwhelming:

  "atom-typescript":
    preferBuiltinOccurrenceHighlight: true
  "autocomplete-plus":
    confirmCompletion: "enter"

Maybe some additional info that might be useful:

  • my codebase is based on yarn workspaces
  • it's a monorepo with multiple TypeScript-based sub-projects (it currently has 6 tsconfig files, all based on the same base config)
  • I usually work on them simultaneously, which means I have multiple tsc --watch processes running

I can't share the codebase, though.

Here's the output:

Screenshot 2020-09-29 at 19 33 50

What you see at the end are the logs created after running the autocomplete experiment.

commented

I dug into the code and can share the following finding:

debug1

As you can see, there's nothing wrong with the interaction with tsserver. It sends and receives the messages immediately, but then it waits a long time before actually rendering the result. Do you have any idea why this is?

commented

I've got a feeling it has to do with this line.

Commenting that out makes the autocomplete instant.

You're right: the following line takes about 300 ms on every single autocomplete!

const details = await this.lastSuggestions.client.execute("completionEntryDetails", {

Most of the time is spent in this call (taking the map out to reveal the actual time spent on execute):

      const entryNames = suggestions.map((s) => s.displayText!)

      let t = window.performance.now()

      const details = await this.lastSuggestions.client.execute("completionEntryDetails", {
        entryNames,
        ...location,
      })

      console.log(window.performance.now() - t)

image
image

The above time is still quite slow, so I think we can even optimize some of the calls to the server. This does not block the main thread, but there is noticeable latency. @lierdakil might know the root cause of this latency.

@samu Could you do the same benchmark?

In my opinion, the other parts of the code also need some optimization. Here the issue is the server, but elsewhere there are many map calls here and there that slow the code down through extra memory allocation, or by making algorithms O(n^2) instead of O(n). Non-lazy (JavaScript) functional programming is evil. C++20 recently introduced lazy functional programming using "pipelines", which fuses all the loops without any memory allocation, but since JavaScript has nothing like that, we should avoid this style there (see the sketch below).
https://youtu.be/owcvg2YZ7Y8?t=867
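
To illustrate the allocation point, here's my own example (not code from the repo): each chained array method allocates and walks an intermediate array, while a hand-fused loop does the same work in a single pass:

    interface Suggestion { displayText?: string }
    const suggestions: Suggestion[] = [{ displayText: "merge" }, {}, { displayText: "map" }]

    // Chained, non-lazy style: .map() allocates an intermediate array
    // that .filter() then walks again.
    const chained = suggestions
      .map((s) => s.displayText)
      .filter((name): name is string => name !== undefined)

    // Fused style: one pass, one output array, no intermediates.
    const fused: string[] = []
    for (const s of suggestions) {
      if (s.displayText !== undefined) fused.push(s.displayText)
    }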

One other possibility is to move these parts of the code to multiple threads or parallel server calls. We could run the parts of the code that do not use the Atom API on a worker thread.
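
A minimal sketch of that idea with Node's worker_threads; the sort is just a stand-in for whatever Atom-independent computation would be offloaded, and this assumes the file is compiled to JavaScript (a Worker can't load a .ts file directly):

    import { Worker, isMainThread, parentPort, workerData } from "worker_threads"

    if (isMainThread) {
      // Main thread: spawn the worker and receive the result asynchronously,
      // so the editor's event loop is never blocked by the computation.
      const worker = new Worker(__filename, { workerData: [3, 1, 2] })
      worker.on("message", (result) => console.log("computed off-thread:", result))
    } else {
      // Worker thread: stand-in for code that does not touch the Atom API.
      parentPort!.postMessage([...(workerData as number[])].sort((a, b) => a - b))
    }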

commented

Before thinking about optimizations, can somebody explain how getAdditionalDetails is useful? I've been using my hacked version of atom-typescript for a day now, and I don't see any impediments. The only thing I see is that autocomplete is blazing fast.

Yes! It adds the type information to the autocomplete:
In #1561, I reduced the number to the first 6 entries (instead of 10) to make it faster.
image

I just checked with VS Code. It does not fetch all the information until you move onto an entry, so it is only the first one! And even when it does, the information is barely visible. 😄
image

commented

Right, I see. To be honest, I didn't even notice that this type information was gone, which means I probably never relied on it.

It could still be a useful feature, but I guess the UX should maybe change a bit:

  • allow enabling/disabling this feature via config
  • defer loading of the additional type info to a later point. I'm not sure if autocomplete-plus has an API for such a feature

@samu I updated #1561 to not await the additional information, so the details will appear gradually as the server returns them. Could you check that? Updating the details currently needs an additional keystroke (a letter).

Probably we need to extend the autocomplete API to either update the suggestions reactively once the state is updated, or allow updating the details as we move over them (like VS Code).

@aminya, I think ac+ actually has something for that (according to https://github.com/atom/autocomplete-plus/wiki/Provider-API), getSuggestionDetailsOnSelect
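
For reference, a rough sketch of a provider using that hook, per the Provider API wiki linked above; fetchCompletions and fetchEntryDetails are hypothetical stand-ins for the tsserver completions and completionEntryDetails requests:

    // Hypothetical helpers standing in for the tsserver requests.
    declare function fetchCompletions(prefix: string): Promise<string[]>
    declare function fetchEntryDetails(name: string): Promise<{ type: string; doc: string } | null>

    const provider = {
      selector: ".source.ts",
      // Return cheap suggestions quickly; no details are fetched here.
      async getSuggestions({ prefix }: { prefix: string }) {
        return (await fetchCompletions(prefix)).map((text) => ({ text }))
      },
      // Called only for the entry the user highlights; returning an augmented
      // suggestion updates the popup in place, returning null leaves it as-is.
      async getSuggestionDetailsOnSelect(suggestion: { text: string }) {
        const details = await fetchEntryDetails(suggestion.text)
        return details
          ? { ...suggestion, rightLabel: details.type, description: details.doc }
          : null
      },
    }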

@samu, your original report also said something about error checking being somewhat unresponsive, I believe? Does that also become more responsive with getAdditionalDetails commented out, or is that unrelated?

@aminya, I think ac+ actually has something for that (according to https://github.com/atom/autocomplete-plus/wiki/Provider-API), getSuggestionDetailsOnSelect

I think we should implement the "on select" method in combination with asynchronously fetching the 10 entries. Best of both worlds. We need to test this.

@aminya, implemented in d11f744. That said, getSuggestionDetailsOnSelect seems a little janky at the moment: when there's documentation, the autocomplete window jerks visually, at least on my system. Can you check it out?

Haha. I made a PR as well 😄 #1562

@aminya, implemented in d11f744. That said, getSuggestionDetailsOnSelect seems a little janky at the moment: when there's documentation, the autocomplete window jerks visually, at least on my system. Can you check it out?

It looks fine on my system.

We should ask @samu to test this.

@lierdakil I also have the problem of linter messages being removed with a delay after they are fixed. I need to dive into the code.

@lierdakil Any reason not to use Standard-linter-v2 instead of Indie-linter-v2?

I can't recall the exact rationale off the top of my head (and bear in mind I got roped into maintaining this project way after this was implemented), but the "indie" interface gives us way more control over when and what we're checking.

Basically, a standard linter works in "pull" mode, i.e. the linter decides what to check and when. This clashes with Atom-TS features like "check all files", for instance. Granted, one could create a separate "indie" linter instance for each mode of operation, but that would create duplicate error reports, and the overall experience ends up rather clunky.

Side note: some 300-500 ms of latency is somewhat expected; geterr isn't the fastest thing in the world. So it's probably more promising to look for alternative ways of getting errors out of tsserver rather than micro-optimizing the linter interactions -- unless you're looking at literally thousands of errors, the linter takes a small fraction of the time compared to the actual error checking.

I have not benchmarked Indie vs Standard, so I'm not sure. I guess we'd do better to look into optimizing the tsserver interaction itself; the bottleneck is there. For example, we might be able to get the information without spawning another Node! Calling it directly from JavaScript for example.

Calling it directly from JavaScript for example.

How is that an optimization? Instead of two OS processes you get one, and now your GUI has to wait while tsserver does its thing. No, no, a separate instance is completely fine, despite the JSON de-/serialization. The bottleneck very obviously isn't there.

Okay, there's one suspicious place in the code, which has been there since way before I got to the project: geterr is delayed in order to batch several subsequent requests. This is good -- sometimes -- but probably not in the context of onDidStopChanging or initialization. I've amended this in 4939c95, so hopefully it should become a bit more responsive.

Side note: we could move the geterr call to the buffer.onDidChangeText handler, which would make it feel even more responsive, but at the cost of almost every keystroke generating a geterr call, which isn't very performance-friendly, I'm afraid.

P.S. What we can do, however, is make the frequency of geterr configurable. This should work, I guess.

Calling it directly from JavaScript for example.

How is that an optimization? Instead of two OS processes you get one, and now your GUI has to wait while tsserver does its thing. No, no, a separate instance is completely fine, despite the JSON de-/serialization. The bottleneck very obviously isn't there.

Spawning Node is expensive. We can instead use WebWorkers to calculate things in parallel without blocking anything.

Side note: we could move the geterr call to the buffer.onDidChangeText handler, which would make it feel even more responsive, but at the cost of almost every keystroke generating a geterr call, which isn't very performance-friendly, I'm afraid.

P.S. What we can do, however, is make the frequency of geterr configurable. This should work, I guess.

This does not seem like a good idea!

At least you should wrap the function in a debounce/timeout to wait a little before updating.

Use onDidStopChanging instead.

Let's wait for @samu to test the changes. I'm afraid we'll change too much and make things worse 😁. I tested the changes up to the last commit, which was 2 hours ago. The linter was quite responsive.

Spawning Node is expensive.

Yes, but it's expensive once. We don't restart tsserver on each request, otherwise it would indeed be slow as a glacier. After Node is started, it's not that expensive; otherwise using it would be entirely pointless, yet we do use it for some reason. And, side note, having tsserver in a separate process saves us a lot of headaches w.r.t. bugs in TypeScript (imagine Atom crashing because tsserver leaks memory -- which it in fact does).

We can instead use WebWorkers

Performance-wise, a WebWorker isn't that much different from a separate process, unless worker-specific data-sharing optimizations are used extensively. And as I said above, JSON de-/serialization isn't the bottleneck here.

Use onDidStopChanging instead.

We do use onDidStopChanging in the current release. It adds 300 ms of delay between the actual input and the geterr call, which contributes quite a bit to the overall sense of latency (300 ms from onDidStopChanging, then another 300 from geterr, and you're over half a second already). The idea is to make this delay configurable (for onDidStopChanging it isn't).

Anyway, I did that in 1182b19; the default is 150 ms, which feels like a reasonable compromise.

commented

Are you going to publish this with a new release, or do you want me to check master first?

@samu, if it's not too much trouble, I'd prefer if you tested master first. Thanks in advance.

commented

Unfortunately it doesn't help much. The first autocompletion is now fast, but then the second one takes pretty long to load. See here:

debug2

@lierdakil As I told you, the bottleneck is in the tsserver call. That is the only part remaining here.

@samu, okay, so either the completionEntryDetails call is blocking other operations on the tsserver side (which is rather unfortunate), or I'm missing something huge. Could you test the fix-1558 branch? I've disabled eager loading of completion details.

commented

I can already tell you that this will work, because that's exactly how I hacked atom-typescript to speed things up.

commented

As I mentioned earlier, maybe the best fix right now would be an option to enable/disable the additional loading of type info. I don't find that information useful and very much prefer the speedy autocomplete. Also keep in mind that my setup is an edge case, so you're doing optimizations that won't affect many users. It obviously still makes sense to investigate the underlying problem.

@samu, the difference is that fix-1558 lazy-loads details for the highlighted entry, unlike your hacked version. The question is whether it will still cause noticeable slowdowns.

Also, while I was at it, I've tried to reduce the number of spurious documentHighlights requests. I really doubt it will affect anything in a major way, but it's probably a good idea to avoid running this request on nearly every keystroke (just pushed this change to the fix-1558 branch).

It is like a rocket now. What have you done! 🚀

commented

Yes, I can confirm. Fast like a rocket! Very nice.

I've released v14.0.0, which includes the changes from fix-1558 and some additional fluff (like caching suggestion details to avoid repeated requests). It also includes #1560. Hopefully I didn't accidentally break anything. Thank you for your patience.

commented

Thanks to both of you. It's good to see that the atom editor is backed by such a strong community!

@lierdakil What did you use instead of onDidStopChanging? I want to introduce a new API in Atom itself using the method we used here. I'm having the same delay problem in other packages, and it makes sense to extend Atom with the faster API.

I'm just calling a debounced (via lodash.debounce) function from onDidChangeText. This isn't by itself faster than onDidStopChanging, except that the delay is configured to be half of what Atom uses for onDidStopChanging.

Well, actually, another difference is that lodash's debounce implementation can invoke the debounced function straight away (on the leading edge), while Atom's apparently always waits 300 ms: https://github.com/atom/text-buffer/blob/02750247dcdd64f30278be6dc88c0dcbd35ffc4c/src/helpers.js#L9-L26. I suggest you look into fixing that, perhaps.
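
A minimal sketch of the setup described above, assuming a hypothetical config key for the delay; the real setting name in atom-typescript may differ:

    import { debounce } from "lodash"
    import type { TextBuffer } from "atom"

    function wireErrorChecking(buffer: TextBuffer, getErr: () => void) {
      // Hypothetical config key; read the delay from package settings so
      // users can tune responsiveness vs. tsserver load.
      const delay: number = atom.config.get("atom-typescript.getErrDebounce") ?? 150
      // leading: true fires on the first change immediately; rapid subsequent
      // edits within `delay` ms are coalesced into a single trailing call.
      const debouncedGetErr = debounce(getErr, delay, { leading: true })
      return buffer.onDidChangeText(() => debouncedGetErr())
    }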