How should Math.min/max(0, 0n) be ordered?
Jack-Works opened this issue
I suggest adding those methods to `globalThis.BigIntMath.*` so we can get rid of `bigMin` and `bigMax`.

And please consider the Decimal proposal: if we follow this naming approach, will we have `decimalMin` and `decimalMax`?
Another option is just to have `BigInt.min` and `BigInt.max`.

But this means we should also add a `Number.min` counterpart to `Math.min` to keep things consistent.
I think the committee would probably push back against adding a new global. They would probably rather add methods to `BigInt`.

I will raise the possibility of using `BigInt.hypot`, `min`, and `max` (and perhaps adding corresponding methods to `Number`) at the next meeting.
`BigInt.min`/`max` are good to me too (compared to `BigIntMath.min`/`max`). I just don't like the names `bigMin`/`bigMax`.

> But this means we should also add a `Number.min` counterpart to `Math.min` to keep things consistent.
While this would probably be ideal, there’s currently very little rhyme or reason to how Number operations and constants are distributed between `Number` and `Math`. It might be sufficient to say “new stuff follows this pattern”, or “new stuff for numeric types other than Number follows this pattern”, without worrying too much about `Math`.
After the TC39 meeting, I currently plan to extend `Math.min`, `max`, and `hypot` to accept mixed BigInts and Numbers, because comparing BigInts and Numbers is well defined. We can already sort mixed BigInts and Numbers without loss of precision. `Math.max(1, 1n)` would be `1`. @ljharb @rwaldron @sarahghp @michaelficarra @DanielRosenwasser @bakkot @gibson042 @devsnek
> `Math.max(1, 1n)` would be `1`

As in "first wins when a Number and BigInt have equivalent mathematical values", or as in "Number wins"? Is `Object.is(Math.max(1, 1n), Math.max(1n, 1))` true or false?
"first wins" is one option, but another option might be "always prefer bigint" or "always prefer number" - in other words, `1n` would be defined as being "greater" than `1` for "max", but "less" than `1` for "min".
Maybe we should match `Array.prototype.sort` and be stable on ordering.
Actually… in order to really match `Array.prototype.sort`, we perhaps should make `min` return the first minimum (i.e., the first element of the sorted array) and `max` return the last maximum (i.e., the last element of the sorted array), because this is what `min` and `max` already do (with regard to `.sort`) anyway.
```js
> [1, 0].sort()
[ 0, 1 ]
> Math.max(...[1, 0])
1
> Math.min(...[1, 0])
0
> [0, 0n].sort()
[ 0, 0n ]
> Math.max(...[0, 0n])
0n
> Math.min(...[0, 0n])
0
> [0n, 0].sort()
[ 0n, 0 ]
> Math.max(...[0n, 0])
0
> Math.min(...[0n, 0])
0n
```
@js-choi since it's a list of arguments instead of an array, i don't think that's observable either way, so i think we're free to make a choice here (since it's just about fictional mental models)
Can we please not bring up `Array.prototype.sort`'s default comparator? It has nothing to do with this; it sorts strings.

```js
> [10n, 2].sort()
[10n, 2]
> [10n, 200].sort()
[10n, 200]
```
@sarahghp i 100% agree that mental models are important; i'm just not convinced that the majority of users have a mental model about ordering in Math.max/min arguments.
Is there a reason not to make it work like:

```js
[1, 1n].sort((a, b) => a < b)
[1, 1n].sort((a, b) => a > b)
```

Because my initial instinct is that that's the parallel that makes sense. What are max and min but sorting and then shifting or popping?
@sarahghp, @michaelficarra, @ljharb: Yeah, I was sloppy. What I had originally meant was to propose to match `Array.prototype.sort` using `<`. This is indeed an already-observable behavior. My apologies!
Edit: Still-incorrect, sleep-deprived code

```js
> [1, 0].sort((a, b) => a < b)
[ 0, 1 ]
> Math.max(...[1, 0])
1
> Math.min(...[1, 0])
0
> [0, 0n].sort((a, b) => a < b)
[ 0, 0n ]
> Math.max(...[0, 0n])
0n
> Math.min(...[0, 0n])
0
> [0n, 0].sort((a, b) => a < b)
[ 0n, 0 ]
> Math.max(...[0n, 0])
0
> Math.min(...[0n, 0])
0n
```
@michaelficarra If the default function works on strings and @js-choi and I both default to simple comparison, then it can be argued that's how comparison functions work. But honestly, as long as we are basing it on behavior that exists and is explainable, I think either approach is reasonable.
This also might be nice to do lightweight (Twitter poll) research on, to see if there is an obvious expectation.
@michaelficarra: Argh, sorry, I’m really sleep deprived right now. Yes, `a < b ? -1 : a > b ? 1 : 0` is what I meant. I did not mean to coerce a boolean to a number. 💤
Okay, now that I’ve paid back my sleep debt, I should be able to think about this more coherently.
There are three valid developer mental models I see: a reduce model, a sort model, and a tower model.
Reduce model
“Min and max are what I’d get when I reduce the list using `<=` (or `<`?).”

I’ve implemented `max` and `min` a bunch of times using `reduce`. I recall that several functional programming languages even do the same with their built-in `max` and `min`.
When I reduce, I tend to prefer reducing to the last value, rather than the first. So, with the reduce model, I personally expect `min` to match the behavior you’d get from reducing like this:

```js
> [ 0, 0n ].reduce((a, b) => a <= b ? a : b)
0
> [ 0n, 0 ].reduce((a, b) => a <= b ? a : b)
0n
```

…and `max` to match this:

```js
> [ 0, 0n ].reduce((a, b) => a <= b ? b : a)
0n
> [ 0n, 0 ].reduce((a, b) => a <= b ? b : a)
0
```

However: changing `<=` to `<` reverses this order (so that the reducer prefers reducing to the first value rather than the last). So one could still argue that this model is arbitrary.

```js
> [ 0, 0n ].reduce((a, b) => a < b ? b : a) // The max example above, with <= changed to <.
0
```
Sort model

“Min and max are the leftmost and rightmost values of the sorted array.”

We also may conceptually think of `max` as the “rightmost value when you sort an array by `<`” and `min` as the “leftmost value when you sort”.

We generally expect `sort` to work like this:

```js
> [ 0, 0n ].sort((a, b) => a < b ? -1 : a == b ? 0 : +1)
[ 0, 0n ]
> [ 0n, 0 ].sort((a, b) => a < b ? -1 : a == b ? 0 : +1)
[ 0n, 0 ]
```

…and, in this model, the first sorted value is the minimum and the last sorted value is the maximum.
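Under this model, min and max can be sketched as follows (hypothetical helper names, not proposed API; `cmp` is the stable comparator used in the snippet above, and `Array.prototype.sort` is guaranteed stable since ES2019):

```js
// Sort-model sketch: min is the first element of the stably sorted
// arguments, max is the last. Because the sort is stable, arguments
// with equal mathematical values keep their original relative order.
const cmp = (a, b) => a < b ? -1 : a > b ? 1 : 0;
const sortMin = (...args) => args.sort(cmp)[0];
const sortMax = (...args) => args.sort(cmp).at(-1);

sortMin(0, 0n); // 0  (first element of [ 0, 0n ])
sortMax(0, 0n); // 0n (last element of [ 0, 0n ])
```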
Tower model

“There’s an intrinsic ordering to numeric types, and integer types come before equal non-integer types, and min and max use this type order.”

From what I recall, languages that have a numeric tower generally define a total order over all their numeric values. In this case, sorting a list with an exact integer and an equivalent non-integer-type number would always sort the integer before the fractional number.

However: JavaScript does not have a numeric tower, and it has no total order over its numerics; its BigInts and Numbers are orthogonal in every way. If we elect to base `min` and `max` on a type order (such that BigInts would always come before loosely equal Numbers), we might be setting a general precedent in the language for a numeric-tower-style ordering (BigInts ordered before Numbers). Plus, it’d be inconsistent with `sort`: from what I remember, in languages that have a totally ordered numeric tower, sorting lists of mixed-type numerics will reorder equivalent numerics by their type order.
Conclusion

Each model has its own reasons to be considered arbitrary, so the whole choice could be considered arbitrary, and we could bikeshed this for as long as we need.

For now, my plan for the spec is to prefer the earliest equivalent value for `min` and the latest equivalent value for `max`, but I’m open to any suggestion from anyone who has a strong reason for a preference.
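A rough sketch (not spec text; helper names are hypothetical) of that "earliest for min, latest for max" plan, relying on the fact that `<` and `>=` are already defined between BigInts and Numbers:

```js
// "Earliest equivalent value wins" for min: strict < never replaces
// an equal earlier value.
function mixedMin(...args) {
  let result = args[0];
  for (const x of args.slice(1)) {
    if (x < result) result = x;
  }
  return result;
}

// "Latest equivalent value wins" for max: non-strict >= replaces an
// equal earlier value with the later one.
function mixedMax(...args) {
  let result = args[0];
  for (const x of args.slice(1)) {
    if (x >= result) result = x;
  }
  return result;
}

mixedMin(0, 0n); // 0  (the earliest of the two equivalent values)
mixedMax(0, 0n); // 0n (the latest of the two equivalent values)
```

(A sketch only: it ignores NaN, `-0`, and the zero-argument identities.)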
Taking a step back here, would someone be able to shed some light on anticipated use cases? If we imagine code like:

```js
let max = Math.max(...mixed_bag_of_numbers_and_bigints);
let min = Math.min(...mixed_bag_of_numbers_and_bigints);
let range_midpoint = (max + min) / 2;
```

Then firstly that raises the question: where would `mixed_bag_of_numbers_and_bigints` come from? Given that Numbers and BigInts generally don't mix, what would be a scenario where code is in a position of having such a mixed collection?

And secondly, this code, as written, wouldn't generally work, because `max + min` requires both values to have the same type, and `/ 2` further requires that that type be Number. Making the last line robust would mean writing it as:

```js
let range_midpoint =
  (typeof max === "number" && typeof min === "number") ? (max + min) / 2
  : (typeof max === "bigint" && typeof min === "bigint") ? (max + min) / 2n
  : undefined;
```

or some ugly contortion like that, which is clearly impractical (and forces subsequent code to deal with a potential `undefined`!).
I'm not debating that there are several valid (though arbitrary) ways to spec what `Math.max(0, 0n)` should do, but I'm struggling to picture a scenario where this feature would be useful to have. Finding such a scenario might also help settle the open question(s). Failing to find such a scenario might indicate that maybe `Math.max` shouldn't take mixed input types after all (in which case it could still be polymorphic).
I have no idea what I’d use the midpoint for - when i max and min things it’s to then take it and use it directly, often to render it in a UI, or iterate from or to it.
Yes, I think the usefulness of `min` and `max` over mixed Numbers/BigInts is equivalent to the usefulness of comparing Numbers and BigInts with `<`. `min` and `max` are just extensions of `<` to be “variadic”, so to speak.
That's actually one of my more common use cases: I have more than 2 items, and I don't want to hardcode a bunch of `<` or `>` comparisons in conditionals/ternaries, so I make an array and use max or min.
Yes, and to elaborate further on where those mixes might come from, I mostly imagine myself winding up here on the edges of joining different systems — using a new library with legacy code, getting values from various APIs, etc.
In terms of doing further arithmetic, even if it can't be performed on mixed values, reducing a list to a single value before converting instead of converting the entire list has its advantages. (I would argue it would be nice to be able to do the arithmetic and implicitly convert to BigInt, but I know that's not a popular POV these days. 😆 )
Anyways, this is not my proposal — I'm just some girl with an opinion who helped on the docs once — so I will stop posting so much, but that's basically how and why I can see this being useful and why I was surprised it did not already exist.
Personally I prefer the reduce model, and whether first or last depends on use case; I'd probably expect first more often. The sort model feels weird to me; I can't explain why.

However, due to the weird handling of `-0`, neither model can fully describe what these functions actually do in JS.

So one could introduce another model, which doesn't depend on the order of arguments but instead defines an order on values. Because `-0` comes before `+0`, but BigInt has only one zero, you could define their ordering like this:

```
..., -3, -3n, -2, -2n, -1, -1n, -0, 0n, 0, 1n, 1, 2n, 2, 3n, 3, ...
```
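For concreteness, the order-independent `-0` handling that breaks the purely positional models can be observed today (`Object.is` distinguishes `-0` from `+0`):

```js
// Math.min prefers -0 over +0 and Math.max prefers +0 over -0,
// regardless of which argument comes first.
console.log(Object.is(Math.min(0, -0), -0)); // true
console.log(Object.is(Math.min(-0, 0), -0)); // true
console.log(Object.is(Math.max(0, -0), 0));  // true
console.log(Object.is(Math.max(-0, 0), 0));  // true
```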
I gave a brief update on this issue to the Committee at the October plenary today, presenting the four potential mental models we could go with: the two reduce models, the sort model, and the tower model. I didn’t get any signals over this issue, although feedback time was brief. I tentatively plan to move forward with the sort model when I present this proposal again for Stage 2 in several months.
What if we try to preserve precision as much as possible? What I mean is that both `max` and `min` would always return the Number alternative, except when that Number is an "unsafe integer".

Examples:

```js
typeof Math.max(2 ** 52, 1n << 52n) // 'number'
typeof Math.max(2 ** 53, 1n << 53n) // 'bigint'
// this also applies to "unsafe" negatives
```

This has the advantage that further operations on the return values are fast (when possible) and accurate (when possible), depending on the input args.
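A rough sketch (hypothetical helper, not proposed spec text) of that tie-breaking rule, using `Number.isSafeInteger` to decide whether the BigInt-to-Number conversion is lossless:

```js
// If the chosen extremum is a BigInt that fits exactly in a Number
// (i.e., it is a "safe integer"), return it as a Number; otherwise
// keep the BigInt to avoid losing precision.
function preferNumber(result) {
  if (typeof result === "bigint") {
    const asNumber = Number(result);
    if (Number.isSafeInteger(asNumber)) return asNumber;
  }
  return result;
}

typeof preferNumber(2n ** 52n); // 'number' (2 ** 52 is a safe integer)
typeof preferNumber(2n ** 53n); // 'bigint' (2 ** 53 is not)
```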
@Rudxain the committee explicitly decided to avoid having behavior change based on the "safeness" of the number, during the bigint proposal, so i don't think that would gain consensus.
Oh... Then I would "vote" for choosing something similar to the reduce model. But I'd prefer that `max` and `min` be consistent with each other. What I mean is that, on ties, the priority of `max` should be the same as that of `min`, like this:

```js
Math.max(1n, 1) === Math.min(1n, 1)
```

So if we choose "left 1st" or "right 1st" for one function, the other one should do the same. This is easier to remember, and less prone to bugs, because there's no way to confuse them.
They already have different behavior with a single argument because they're "opposite" operations; I wouldn't expect them to have the same ordering behavior (assuming order matters).
They have different behavior with 0 args, not 1, but that's still a good point. "Asymmetric" definitions can be useful; a good example is the standard modulo operation, which is defined in terms of floor division and so has useful properties like -1 mod 3 = 3 - 1 = 2. So I'm open to the idea that `min` and `max` should behave slightly differently.
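As a side note, JavaScript's `%` is a truncated remainder rather than the floor-division-based modulo mentioned above, so that property has to be recovered by hand; a common sketch:

```js
// JS's % takes the sign of the dividend (truncated division):
console.log(-1 % 3); // -1
// Floored modulo, defined via floor division, yields a result in [0, n)
// for positive n, matching "-1 mod 3 = 2":
const mod = (a, n) => ((a % n) + n) % n;
console.log(mod(-1, 3)); // 2
```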
The reduce model explains `Math.max() === -Infinity` and `Math.min() === Infinity`; the sort model cannot without introducing another rule, so by Occam's razor I'm in favor of the reduce model.

```js
const max = (...args) => args.reduce((acc, cur) => cur > acc ? cur : acc, -Infinity);
```

(This is how it's typically implemented anyway, I think; it's also essentially how it's defined in the spec.)
@Josh-Cena: I agree that the reduce models are intuitive, but there are two reduce models: one using `<` / `>` and one using `≤` / `≥`.

```js
const max = (...args) => args.reduce((acc, cur) => cur > acc ? cur : acc, -Infinity);
const max = (...args) => args.reduce((acc, cur) => cur >= acc ? cur : acc, -Infinity);
```

`max(0, 0n)` is `0` with `>` (leftmost wins), but it is `0n` with `>=` (rightmost wins). Which is more desirable?
Ahhh, because the current spec uses `<` / `>`, is there a strong enough argument for `≤` / `≥` to warrant a change? 😄 Otherwise I have to say I favor sticking with the existing semantics (although I believe they were mostly arbitrarily chosen). If I had to write actual code, I'd also lean towards `<` / `>`:

```c
// Some actual code I've written when doing competitive programming
int highest = -2147483648;
for (int i = 0; i < N; i++) {
    // Using > instead of >= will write to highest a few times less;
    // who doesn't like micro-optimizations?
    if (val[i] > highest) highest = val[i];
}
```

But yeah, either would seem arbitrary and leak abstraction.