tediousjs / tedious

Node TDS module for connecting to SQL Server databases.

Home Page: http://tediousjs.github.io/tedious/

[perf] Performance loss on `<instance_members_initializer>` during token parsing

arthurschreiber opened this issue

While fixing up the benchmarks over in #1578, I noticed that performance on Node.js 16 is looking quite abysmal, compared to Node.js 18 and newer.

Here's an example benchmark output:

$ nvm use 16
Now using node v16.20.2 (npm v8.19.4)

$ node benchmarks/token-parser/done-token.js 
token-parser/done-token.js tokenCount=10 n=10: 5,442.1768707483 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=10: 1,237.330355821565 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=10: 319.13832651840033 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=10: 74.9748603670075 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=100: 12,311.669924129335 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=100: 3,418.593978789402 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=100: 790.7078668268937 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=100: 94.66702766359882 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=1000: 33,506.730664522234 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=1000: 8,362.086716929776 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=1000: 946.0767769857122 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=1000: 96.43051592995027 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
$ nvm use 18
Now using node v18.18.0 (npm v9.8.1)

$ node benchmarks/token-parser/done-token.js 
token-parser/done-token.js tokenCount=10 n=10: 6,112.783296941897 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=10: 1,568.5351842048508 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=10: 417.7603982025274 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=10: 130.23108477167162 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=100: 15,515.501770551486 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=100: 4,488.003006962015 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=100: 1,271.7219326073343 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=100: 159.5728765154215 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=1000: 39,580.51149024228 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=1000: 13,182.018524532448 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=1000: 1,618.2029474696094 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=1000: 178.8290860615 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)

The performance difference ranges from roughly 10% to 85% in favor of Node.js 18 compared to Node.js 16. Of course there are performance improvements in newer Node.js versions, but I was still curious to see exactly where that difference comes from.
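For example, at tokenCount=10000 n=1000 the reported rate goes from ~96.4 on Node.js 16 to ~178.8 on Node.js 18, i.e. 178.8 / 96.4 ≈ 1.85, or roughly 85% faster; at tokenCount=10 n=10 the gap is only about 12% (6,112.8 vs. 5,442.2).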

I collected a performance profile for one example benchmark run on Node.js 16 (token-parser/done-token.js tokenCount=1000 n=1000). Here's a screenshot of the CPU profile data:

[Screenshot: CPU profile data]

And here's a screenshot of the flamegraph for the same CPU profile:

[Screenshot: flamegraph for the CPU profile]
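For reference, one way to capture an equivalent CPU profile (the exact tooling used here isn't shown) is Node's built-in --cpu-prof flag, which writes a .cpuprofile file into the current working directory that can then be opened in Chrome DevTools:

$ node --cpu-prof benchmarks/token-parser/done-token.js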

Interesting. 🤔 It looks like <instance_members_initializer> is where a lot of time is spent. But where's that coming from? It's a V8-internal function. Some googling led me to a bug report in V8. According to that report, this was a known performance issue in the V8 version included in Node.js 16.

Here's also a blog post describing the performance improvements that were implemented in the V8 version included in Node.js 18: https://v8.dev/blog/faster-class-features

Okay, so we now understand that this is coming from class properties in the classes we define, but where exactly are those class properties coming from? 🤔

When we define classes in TypeScript, properties need to be declared at the class level together with their types. When the classes get transpiled into JavaScript by Babel (and, I believe, by the TypeScript compiler as well), those declarations get turned into JavaScript class properties.

Here's an example class definition:

export class DoneToken extends Token {
  declare name: 'DONE';
  declare handlerName: 'onDone';

  more: boolean;
  sqlError: boolean;
  attention: boolean;
  serverError: boolean;
  rowCount: number | undefined;
  curCmd: number;

  constructor({ more, sqlError, attention, serverError, rowCount, curCmd }: { more: boolean, sqlError: boolean, attention: boolean, serverError: boolean, rowCount: number | undefined, curCmd: number }) {
    super('DONE', 'onDone');

    this.more = more;
    this.sqlError = sqlError;
    this.attention = attention;
    this.serverError = serverError;
    this.rowCount = rowCount;
    this.curCmd = curCmd;
  }
}

Babel turns this into the following:

class DoneToken extends Token {
  more;
  sqlError;
  attention;
  serverError;
  rowCount;
  curCmd;
  constructor({
    more,
    sqlError,
    attention,
    serverError,
    rowCount,
    curCmd
  }) {
    super('DONE', 'onDone');
    this.more = more;
    this.sqlError = sqlError;
    this.attention = attention;
    this.serverError = serverError;
    this.rowCount = rowCount;
    this.curCmd = curCmd;
  }
}
exports.DoneToken = DoneToken;

Now, there's no real reason for our use cases to have those class properties in the final JavaScript that's generated - they're only relevant in TypeScript. Fortunately, there's a straightforward way to keep the definitions in TypeScript, but remove them from the generated JavaScript.

As you can see in the example above, not all properties show up in the output: properties defined with the declare keyword are omitted from the generated JavaScript.
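For illustration, here's what the DoneToken example from above looks like with declare applied to every property (a sketch; the actual change may look slightly different):

export class DoneToken extends Token {
  declare name: 'DONE';
  declare handlerName: 'onDone';

  // `declare` tells TypeScript these properties exist, but emits nothing
  // for them in the transpiled JavaScript.
  declare more: boolean;
  declare sqlError: boolean;
  declare attention: boolean;
  declare serverError: boolean;
  declare rowCount: number | undefined;
  declare curCmd: number;

  constructor({ more, sqlError, attention, serverError, rowCount, curCmd }: { more: boolean, sqlError: boolean, attention: boolean, serverError: boolean, rowCount: number | undefined, curCmd: number }) {
    super('DONE', 'onDone');

    // The assignments below are what actually create the instance
    // properties at runtime, so nothing changes behaviorally.
    this.more = more;
    this.sqlError = sqlError;
    this.attention = attention;
    this.serverError = serverError;
    this.rowCount = rowCount;
    this.curCmd = curCmd;
  }
}

Babel's output for this version contains only the constructor - no class fields, and therefore no <instance_members_initializer> work for V8 to do.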

After changing all the properties of DoneProcToken and Token to use declare, here's the result of running the benchmark on Node.js 16:

$ node benchmarks/token-parser/done-token.js 

token-parser/done-token.js tokenCount=10 n=10: 5,617.715578430893 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=10: 1,345.4499171068305 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=10: 398.2377977449785 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=10: 142.2137940639308 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=100: 13,189.638378365617 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=100: 4,333.522448252976 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=100: 1,503.0935619526617 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=100: 203.96679726490723 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=1000: 42,241.9188780276 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=1000: 15,271.178020046871 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=1000: 2,087.518699601101 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=1000: 202.8660920675844 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)

Performance is now even better on Node.js 16 than it was on Node.js 18 with the class properties. Here are also the results of running with these changes on Node.js 18:

$ node benchmarks/token-parser/done-token.js

token-parser/done-token.js tokenCount=10 n=10: 6,147.384779567077 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=10: 1,639.7810892245886 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=10: 427.98142560612865 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=10: 130.68357914304713 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=100: 16,061.99931736503 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=100: 4,590.340775423316 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=100: 1,362.9440562030152 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=100: 165.92129979427833 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10 n=1000: 44,082.15379408265 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=100 n=1000: 13,407.251982597387 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=1000 n=1000: 1,746.4033177317986 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)
token-parser/done-token.js tokenCount=10000 n=1000: 175.4514679802906 (minor: 0 - 0ms, major: 0 - 0ms, incremental: 0 - 0ms)

(It looks like there's been a performance regression in other areas between Node.js 16 and Node.js 18, as 18.x now actually comes out slower than 16. 🤷)

I also took CPU profiles with these changes in place:

[Screenshot: CPU profile with the declare changes]

[Screenshot: flamegraph with the declare changes]

As you can see, the constructor functions don't show up anywhere in the flamegraph anymore. 🎉

@MichaelSun90 @mShan0 Can either of you prepare a pull request that changes all class properties to use the declare syntax? I think that's a pretty nice performance improvement, especially given the small amount of work involved.

Hi @arthurschreiber, we will work on a PR as soon as possible.

Closed via #1581.