ramda / ramda

:ram: Practical functional Javascript

Home Page: https://ramdajs.com


Ramda is 3x slower because it uses the arguments object?

icetbr opened this issue · comments


Hi, I'm no expert, and I don't know exactly what this means. I was benchmarking some ideas when I noticed this. It's a simple benchmark: just add two identical tests of any function. I'm using filter.

https://github.com/icetbr/experiments/tree/main/perf/packages/arguments

Quoting from my repo:

suite
    .add('ramda 1', function () {
        return R.filter(x => x % 2 !== 0)(numbers);
    })

    .add('ramda 2', function () {
        return R.filter(x => x % 2 !== 0)(numbers);
    })
ramda 1 x 2,988,212 ops/sec ±1.41% (23 runs sampled)
ramda 2 x 964,727 ops/sec ±1.70% (20 runs sampled)

See the ts-belt test in that repo for a with/without-arguments comparison.

One possible explanation:

The difference between using arguments vs. not using it could be anywhere from 1.5 to 4 times slower, depending on the browser
-- Stack Overflow

I'm sorry, but where exactly is the difference between these two functions?

They're the same on purpose; that's the test.

Yeah, but how can one use the arguments object and the other not, if it's the same function? I'm confused.

They both use it.

Alright, now I get it.

I don't know exactly what this means

I don't know either! If I patch our filter benchmark in the repo to look more like yours:

    'filter(isEven)(nums) 1': function() {
      filter(isEven)(nums);
    },
    'filter(isEven)(nums) 2': function() {
      filter(isEven)(nums);
    },

and run it, I get

┌────────────────────────┬────────────────────────┬────────────────────────┐
│ filter                 │ Hertz                  │ Margin of Error        │
├────────────────────────┼────────────────────────┼────────────────────────┤
│ filter(isEven)(nums) 1 │ 8,720,752              │ 1.39%                  │
├────────────────────────┼────────────────────────┼────────────────────────┤
│ filter(isEven)(nums) 2 │ 8,633,275              │ 1.33%                  │
└────────────────────────┴────────────────────────┴────────────────────────┘

Not sure why you're getting significantly different numbers in your experiment.

I don't think the effect shows up with your shared isEven. Would you try inlining it?

Yeah, that did it (though not quite as dramatic):

┌────────────────────────┬────────────────────────┬────────────────────────┐
│ filter                 │ Hertz                  │ Margin of Error        │
├────────────────────────┼────────────────────────┼────────────────────────┤
│ ramda 1                │ 8,461,934              │ 2.41%                  │
├────────────────────────┼────────────────────────┼────────────────────────┤
│ ramda 2                │ 5,847,963              │ 0.56%                  │
└────────────────────────┴────────────────────────┴────────────────────────┘

Strange one; not sure why.

The gap may be related to the benchmark library, and also to the Node version and machine specs. Perhaps others can chime in with their results.

ty for the issue @icetbr, closing as not actionable for Ramda