tc39 / proposal-bigint-math

Draft specification for supporting BigInts in JavaScript’s Math methods.

Home Page: https://tc39.es/proposal-bigint-math/


General philosophy and vision

js-choi opened this issue

Original post

Spinning this out of #8 (comment) and #9 (comment).

There are two dueling philosophies we could take for this proposal.

  1. BigInts and Numbers should always be interchangeable by default, unless there’s a strong reason they should not be (like precision loss or computational intractability).

    “It’s weird and confusing that they’re not already more interchangeable.”

  2. BigInts and Numbers should not be interchangeable by default. Floating point and arbitrary-precision integers are fundamentally different. The choice of which to use should be thought through by the programmer.

    “We already can’t have most of Math work for BigInts due to intractability [see #4], so even if we would like them to be interchangeable by default, that intuition just can’t hold up in practice. We need specific use cases for each one.”

Note that both philosophies would agree to add support for BigInt sign, abs, min, and max: there are clear use cases for all of these. (And if a clear use case appears in the future for certain other functions, then even the second philosophy would agree that we can add it in a future proposal.)

The proposal’s philosophy so far has been the first one (so it currently includes floor, ceil, etc. in #8). But the engine implementers have concerns about that, and we’re open to changing it. It would be good to make which philosophy we choose—and why we chose it—explicit.

There are also some relevant snippets from the 2021-08 meeting notes; I’ll wait until they’re public before I put them here.

CC: @syg, @yulia, @ljharb, @michaelficarra, @waldemarhorwat, @littledan

Edit (2021-09-17): The current answer is: The philosophy is neither maximizing interchangeability nor maximizing separation. We maximize consistency with precedent instead.

It's great to see this fundamental question being addressed!

I think issue #10 is also related: if interchangeability is to be maximized, then putting functions on the Math object makes sense; whereas if the core strategy is to extend BigInt functionality while preserving separation from Numbers, then the BigInt object makes sense as a home.

My personal take is that there is already so much intentional non-interchangeability [1] that has been decided when BigInts were introduced, that maintaining a clear separation line is more consistent with earlier decisions, and less confusing for JS developers than blurring that line. To illustrate what I see as confusion risk: "Math.* is for Numbers, not for BigInts" is far easier to remember than "most of the Math.* functions are Number-only, but there is this more-or-less arbitrary [2] list of exceptions where you can also use BigInts". So personally I wouldn't start my reasoning with either of the two philosophies, instead I'd start with looking at the status quo and its history, and that brings me to "philosophy (2)" as a conclusion.

Aside from JavaScript-specific spec consistency issues, the argument that floating-point/rational/real math and integer math are fundamentally different also holds a lot of water. Certainly, there is a (comparatively small) set of calculations that both Numbers and BigInts can express, but there are also plenty of examples where they produce different results: pretty much any expression that involves non-integer values, or values beyond a certain "safe" range. To illustrate:
1 / 2 * 4 == 2, but 1n / 2n * 4n == 0n.
4 / (1 / 2) == 8, but 4n / (1n / 2n) throws a RangeError.
This proposal, as currently drafted, would add more differences:
Math.pow(2, Math.log2(7)) == 7, but Math.pow(2n, Math.log2(7n)) == 4n
Math.sqrt(11) * Math.sqrt(11) == 11, but Math.sqrt(11n) * Math.sqrt(11n) == 9n
Of course, chances are real code wouldn't spell out these constants; it'd have let d = a / b * c etc as part of a potentially long and involved sequence of computations, and someone would get to debug why it works for Numbers but doesn't work for BigInts, or vice versa. As @syg put it: The choice of which to use should be thought through by the programmer. Interchangeability at first glance seems like a nice idea, but doesn't hold up to reality.
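The operator divergences above are observable in any engine today (the Math.pow and Math.sqrt lines are hypothetical under the current draft, so only the operator examples are runnable):

```javascript
// Number arithmetic keeps the fractional intermediate result:
const numberResult = 1 / 2 * 4;    // 2

// BigInt division truncates toward zero, so 1n / 2n is 0n:
const bigintResult = 1n / 2n * 4n; // 0n

// Dividing by a truncated-to-zero intermediate throws,
// because BigInt division by zero is a RangeError:
let threwRangeError = false;
try {
  4n / (1n / 2n); // 1n / 2n is 0n
} catch (e) {
  threwRangeError = e instanceof RangeError;
}

console.log(numberResult, bigintResult, threwRangeError); // 2 0n true
```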

[1] There is no precision loss or computational intractability reason why my_uint32_array[0] = 0n or my_biguint64_array[0] = 0 should throw, yet they do: at the time, TC39 decided that maintaining clear separation lines between Numbers and BigInts was more desirable than allowing interchangeability where it's possible. (Full disclosure: I argued against this restriction at the time and still think it's overly strict in this scenario; however given that this decision has been made, I believe that we should stick with it, because overall clarity and consistency is way more important than individual people's "but wouldn't this other bikeshed color have been prettier, and can't we at least paint the bikeshed's new door differently?" pet peeves, including my own.)
For another example of non-interchangeability, compare 1 >> -1 and 1n >> -1n (or various other shifts, e.g. 1 << 32 vs 1n << 32n).
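The shift examples in [1] can be run directly; the divergence comes from Number shift counts wrapping modulo 32, while BigInt shift counts are used as-is:

```javascript
// Number shift counts are taken modulo 32:
console.log(1 >> -1);   // 0  (-1 wraps to a shift of 31)
console.log(1 << 32);   // 1  (32 wraps to a shift of 0)

// BigInt shifts use the count as-is; a negative right shift shifts left:
console.log(1n >> -1n); // 2n
console.log(1n << 32n); // 4294967296n
```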

[2] "None of the transcendental functions take BigInt arguments" would be less arbitrary (assuming you're sufficiently well-versed in math to know what a transcendental function is [3]), "most of the transcendental functions don't take BigInts, except log2 and log10, because... uh... maybe they looked a little less transcendental to someone?" is more arbitrary and yet another special case to remember.

[3] Does it have to do with teeth?

@jakobkummerow: Thank you for the excellent insights.

Your point about trying to stick with decisions that have already been made is well taken! Though I think there are a few different conclusions one might draw from that be-consistent-with-the-precedent philosophy:

  1. The decision was originally made to overload/extend existing math operations, like /, to accept BigInts. In a real sense, BigInts are syntactically interchangeable (even if not semantically interchangeable) with Numbers in expressions involving +, -, *, /, and **. To me, this is an argument for overloading/extending existing Math methods where possible. (The only exception I know of, unary +, was excluded explicitly because of asm.js, rather than any argument.)

    In other words, the decision was made to overload math operations in spite of differing behavior (1 / 2 * 4 == 2 and 1n / 2n * 4n == 0n). I think that adding methods to BigInt such as a BigInt.sign or a BigInt.sqrt would actually be breaking with this decision, but I understand that there’s more than one way to look at it.

    In other words, I feel that an API on BigInt that does not overload or resemble the API for Math (as kind of suggested in #14) would actually break with precedent. Even if they have different semantics, I think that math operations between Numbers and BigInts should look similar—as they already do for /, **, etc.

    The confusion risk from replacing the current “Math.* is for Numbers, not for BigInts” with “most of the Math.* functions are Number-only, with exceptions” is a good point. However, I think the precedent is already to accept inconsistent overloading (i.e., “at least one of the math operations is Number-only, with exceptions” is already true).

  2. In addition, BigInts and Numbers can be compared with one another using < and <=. This decision was presumably made because comparison between BigInts and Numbers is well defined.

    I feel that adding a BigInt.min and BigInt.max and continuing to have Math.min and Math.max support only Numbers—rather than allowing mixing of BigInts—breaks with this decision. After all, min and max are basically variadic, reducing versions of </<=, and the status quo of </<= is to allow mixing.
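The point that min/max are reducing versions of the comparison operators can be made concrete: a user-space mixed-type min (mixedMin is a hypothetical helper, not part of the proposal) is just a reduce over <, and it already works on mixed arguments because mixed comparison is allowed:

```javascript
// A hypothetical mixed-type min: a reduce over <, which (unlike the
// arithmetic operators) already accepts mixed BigInt/Number operands.
const mixedMin = (...values) => values.reduce((a, b) => (b < a ? b : a));

console.log(mixedMin(3n, 2, 5n)); // 2 (a Number, since 2 was the smallest)
console.log(mixedMin(1n, 2, 3n)); // 1n
```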

However, even though I feel that the precedent has been set to overload math operations, that does not mean that we should be maximizing interchangeability. In fact, I might be leaning more towards the second philosophy—but with the caveat that I think we should continue to overload math operations where appropriate.

The current BigInt status quo, I feel, is:

  1. BigInts and Numbers are not semantically interchangeable, but they are often syntactically interchangeable.
  2. Many (but not all) of the old math operations are overloaded/extended to allow BigInts, but they will act differently than their Number behaviors (1 / 2 * 4 == 2 and 1n / 2n * 4n == 0n), and the programmer should plan accordingly.
  3. Some of the math operations still only accept Numbers (e.g., unary +). The programmer does have to remember which ones are like this.

But I understand that there are many points of view on what the “current precedent” is, and we should be open to them all. In any case, the point that we should be consistent with current precedent is a good point.

Perhaps other TC39 delegates who participated in BigInts’ original design (particularly @littledan, but perhaps also @caiolima, @martinthomson, @sarahghp, @chicoxyzzy, @jmdyck, @bakkot, and others) can give some insight into what they feel would be most in keeping with the original design’s precedent.

See here for some earlier discussion: tc39/proposal-bigint#197

Key quote from @littledan:

the hope is to encourage people to be deliberate about whether they are using Number or BigInt--APIs are deliberately discouraged to "just work" between Number and BigInt, as sooner or later, this would lead to the risk of an accidental loss of precision for a BigInt.

@jakobkummerow: Great find with that @littledan quote; thank you!

That quote embodies a tension between avoiding accidental loss of precision due to interchangeability…and reusing the same operations (which is, in fact, a form of that interchangeability). (I include the Math methods as “operations” in this context; addition, exponentiation, and square root are all operations, although only two of them have syntactic operators.)

If we followed @littledan’s quote there to the letter, then we would not have overloaded +, -, *, /, **, etc., but rather would have created new, separate operators for BigInts. TC39 didn’t, because creating new operators imposes a big cognitive load and is also complicated. So, it’s all a matter of balance.

To make this balance more explicit, I see a spectrum between “maximizing interoperability” and “no interoperability”.

  1. “Perhaps every single math operation should accept both Numbers and BigInts unless it is impossible. BigInts and Numbers should be as interchangeable in the language as possible.”
  2. “Perhaps certain operations (like Math.floor/ceil/sin/etc.) should not accept BigInts. BigInts are fundamentally different than Numbers. Certain operations are meaningless/impossible with BigInts, so they shouldn’t accept them.”
  3. “Perhaps there should also be separate Math functions for BigInts. BigInts are fundamentally different than Numbers. Math.pow shouldn’t accept BigInts; it should be BigInt.pow.”
  4. “Perhaps there should also be separate syntactic operators for BigInts. BigInts are fundamentally different than Numbers. Like Math.pow, ** shouldn’t accept BigInts; it should be something like **{b}.”

TC39 designed BigInts and Numbers not to be fully interchangeable, and for good reason. But they did not go all the way—they did not make the fourth choice.

I think there are good arguments for the second choice (avoiding BigInt floor etc.). But it gets weirder when you consider something like Math.pow. Why is ** overloaded for both Numbers and BigInts, but Math.pow isn’t?

Consistency with previous decisions should be our goal (as eloquently put in #14), but the previous decisions are already striking a fine balance between full interoperability and no interoperability. So it’s kind of tough.

Having said that, I agree with @syg’s finding of “completeness” to be a very weak argument (#14 (comment)). I think that instead of “completeness”, we should strike for “consistency”…but how precisely to go about being more consistent will be a bit tricky.

My current inclination is to remove Math.floor, ceil, etc. while keeping everything else the same (a few overloaded Math methods), but that’s just an inclination.

This is a great issue — thanks for creating the space to discuss, @js-choi. I keep mulling over the bigger points, but there are two things that stand out for me just now:

  1. I'm not certain that restricting interoperability is as much a protector against unintentional loss of precision as we expect. I don't expect folks to know every operation they want to perform and then pick the correct representation; rather, I think we'd see precision lost in order to perform an operation: Oh now I need to do this math, let's throw the data in, oh no doesn't work on BigInt, better make it a Number... That is, it's a nice idea that developers can intentionally choose which representation to work with and then deliberately build up a system, but that's not a situation most folks I know find themselves in all that often. Rather it's like, Here's a system twenty people built over five years, please add this here and change as little else as possible, and in that situation a person has only so much control over the initial representation of the data they will be working with.

  2. Whatever we do here is going to end up influencing what we do with Decimal — we won't want to have a hodgepodge of methods that work with Number and Decimal but not BigInt or some sort of other combo that devs need to memorize. I think it's worth keeping that future in mind here.

Why is ** overloaded for both Numbers and BigInts, but Math.pow isn’t?

BigInts use the usual operators because the creators/champion(s) of the BigInt proposal strongly believed that that syntax is the most ergonomic way to denote common mathematical operations; in other words they strongly preferred bigint1 + bigint2 over BigInt.add(bigint1, bigint2) or bigint1 +{b} bigint2 or any such alternatives; and the committee agreed with that stance and accepted the proposal.

I don't think "interchangeability" was ever a goal behind this syntax decision, just programmer convenience and readability of BigInt-using code. In fact, making all binary operators throw for mixed BigInt/Number operands can be seen as minimizing interchangeability, while preserving the familiar and concise operator syntax. There's no technical reason why x ** 2 or x >> 2 couldn't "just work" when x is a BigInt, just an intention to keep BigInts and Numbers separate and non-interchangeable.

For the Math object, the BigInt proposal decided to follow the simple rule "Math.* is for Numbers, period".

Oh now I need to do this math, let's throw the data in, oh no doesn't work on BigInt, better make it a Number...

The hypothesis is that by introducing an "oh no doesn't work on BigInt" point, there is at least an opportunity for reflection and education that would nudge folks down the path of thinking harder about their number representation.

@jakobkummerow gave me this compelling example the other day. Suppose Math.floor and Math.ceil worked on BigInts, and I typed Math.ceil(3n / 2n). If I'm not one to ponder much about number representations, it's pretty reasonable to think that should return 2n. But it doesn't, because 3n / 2n already truncates to 1n. This is arguably worse than having an "oh no doesn't work on BigInt" error point, because you might not notice this kind of numeric inaccuracy for a long time.
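The footgun can be demonstrated without any proposal support at all, because the precision is lost before any rounding function would run:

```javascript
// With Numbers, Math.ceil sees the fractional intermediate result:
console.log(Math.ceil(3 / 2)); // 2

// With BigInts, division truncates first, so a hypothetical
// BigInt-accepting Math.ceil could only ever see 1n here:
const q = 3n / 2n;
console.log(q); // 1n (the .5 is already gone)
```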

Since floor and ceil are meaningless operations on things that have no decimal part, not allowing these to take BigInts seems fine with me, and your footgun example is compelling. I don't think that thinking automatically precludes every method, though (and certainly not max/min)

@syg: This is a great point. It might be specific to floor/ceil/round/trunc, right? Do you think it applies to pow, abs, sign, min, and max? Those are the five operations that I think have the strongest argument for type overloading.

I do plan to drop BigInt floor/ceil/round/trunc from the proposal completely. The Math.ceil(3n / 2n) anti-example is compelling.

(I’ll try to figure out if there are similar anti-examples for BigInt-truncating sqrt/cbrt/log2/log10.)

@ljharb @js-choi That's right, that was a narrower point for rounding functions, not every function.

The broader point isn't that no Math functions make sense and we shouldn't provide BigInt functionality for them, but that a small enough subset does that we might still want to keep them separate.

I’ll try to figure out if there are similar anti-examples for BigInt-truncating sqrt/cbrt/log2/log10

The first post in this thread states that "precision loss" is a "strong reason" not to aim for interchangeability. The fact that sqrt(4n) == sqrt(5n) == sqrt(6n) == sqrt(7n) == sqrt(8n) == 2n is a pretty bad case of precision loss: much worse than a Number result truncating to 53 bits of precision, here we observe a truncation to 2 significant bits.

cbrt/log2/log10/hypot have the same problem, of course.
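The magnitude of the loss is easy to demonstrate with a user-space truncating square root (a minimal Newton's-method sketch; isqrt is a hypothetical name, not a proposed API):

```javascript
// Floor (truncating) square root of a non-negative BigInt,
// computed via Newton's method.
function isqrt(n) {
  if (n < 0n) throw new RangeError("isqrt of negative BigInt");
  if (n < 2n) return n;
  let x = n;
  let y = (x + 1n) / 2n;
  while (y < x) {
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;
}

// 4n through 8n all collapse to the same two-significant-bit result:
for (let v = 4n; v <= 8n; v++) {
  console.log(v, isqrt(v)); // always 2n
}
console.log(isqrt(9n)); // 3n
```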

Yes, that is true. But avoiding precision loss is also a spectrum. @waldemarhorwat pointed out in the August plenary that there is precedent for opt-in truncation in /. It was decided that overloading / with a truncating BigInt division was acceptable. So it’s already not an all-precision-loss-or-no-precision-loss binary decision. I find the floor anti-example above compelling, but I’m not as sure right now about automatically excluding sqrt etc. based only on that, because division is also a precedent.

But, anyways, I’d be happy to drop sqrt/cbrt/log2/log10 based on lack of clear use cases, though. Like I said at plenary, we could do this piecemeal. (log2 does have a use case as bit length, but, as you point out in #14, we could create a bitLength method with a different name—and that’s another debate that we should probably make an issue for.)

I think that these decisions’ precedents are already on a spectrum between “maximum interchangeability” and “maximum separation”. The trade-offs are finicky.

My current concrete plan is to drop, in addition to the already-dropped transcendentals, overloaded Math.floor/ceil/round/trunc/sqrt/cbrt/log2/log10, while keeping overloaded Math.sign/abs/pow, with a vision for adding more overloaded operations like modulo exponentiation and popcount to Math later. I think that would strike a good balance that matches current precedent (some limited interchangeability in operations).

I think I’ve settled pretty firmly on eschewing both “maximal interchangeability” and “maximal separation” in favor of “maximal consistency with precedent”. I’ve edited the explainer with the following:

Philosophy

The philosophy is to be consistent with the precedents already set by the language.
These precedents include the following five rules:

  1. BigInts and Numbers are not semantically interchangeable.
    It is important for the developer to reason about them differently.
  2. But, for ease of use, many (but not all) numeric operations
    (such as division / and exponentiation **)
    are type overloaded to accept both Numbers and BigInts.
  3. These type-overloaded numeric operations
    cannot mix Numbers and BigInts, with the exception of comparison operations.
  4. Some numeric operations are not overloaded (such as unary +).
    The programmer has to remember which operations are overloaded and which ones are not.
  5. asm.js is still important, and operations on which it depends are not type overloaded.

In this precedent, only syntactic operators are currently considered as math operations.
We extend this precedent such that Math methods are also considered math operations.

BigInt Math.floor/ceil/round/trunc/sqrt/cbrt/log2/log10 have all been removed. Only BigInt Math.sign/abs/pow/clz32 remain. A new Vision section has also been added about creating more overloaded operations, like modulo exponentiation and popcount, to Math later.


@sarahghp: Whatever we do here is going to end up influencing what we do with Decimal — we won't want to have a hodgepodge of methods that work with Number and Decimal but not BigInt or some sort of other combo that devs need to memorize. I think it's worth keeping that future in mind here.

This is a good point. I think, unfortunately, that some amount of developer memorization is inevitable.

We’re either going to require developers to:

  • Memorize which operations accept BigInts/Decimals: “Does Math.log accept Decimals? Does it accept BigInts?”.
  • Memorize which methods are present on the Decimal/BigInt objects: “Does Decimal.log exist? Does BigInt.log exist?”

I think these two approaches are basically equivalent in memorization burden. And both approaches throw TypeErrors when invalid operations are attempted on invalid types. But I think the former approach (type-overloaded math methods with some exceptions) is more consistent with precedent (type-overloaded operations with some exceptions like unary +).

We could take this discussion to #14, too.

We should keep sqrt and cbrt for BigInts because we have pow. The truncation towards zero behavior is unsurprising (it matches /) and mathematically useful, both directly and in various algorithms.

For example, if you want to compute an arbitrary-precision square root of a BigInt, the truncated square root provides a great first step of the algorithm. You then square the truncated square root, subtract it from the original number, and proceed with the algorithm.

For another example, if you want to compute a truncated square root of a BigInt to 2 decimal places, multiply your original BigInt by 10000n, take the truncated square root, and you'll get the answer times 100n.

Another example: Suppose you want to compute the square root of a BigInt n rounded to the nearest integer instead of truncated. This is how you'd do it:

roundedSqrt = (Math.sqrt(4n * n) + 1n)/2n

Combining the two examples above, here's how to compute the square root of a BigInt n rounded to nearest to two decimal places:

let t = (Math.sqrt(40000n * n) + 1n)/2n;
let i = t / 100n; // Integral part of result
let f = t % 100n; // 00-99 decimal part of result
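Since Math.sqrt does not currently accept BigInts, these recipes can be tried today with a user-space truncating square root standing in for the proposed overload (isqrt below is a hypothetical Newton's-method helper):

```javascript
// User-space stand-in for the proposed truncating BigInt square root.
function isqrt(n) {
  if (n < 0n) throw new RangeError("isqrt of negative BigInt");
  if (n < 2n) return n;
  let x = n;
  let y = (x + 1n) / 2n;
  while (y < x) {
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;
}

// Square root rounded to the nearest integer, per the recipe above:
const roundedSqrt = (n) => (isqrt(4n * n) + 1n) / 2n;

// Square root rounded to nearest, to two decimal places:
function sqrtTwoDecimals(n) {
  const t = (isqrt(40000n * n) + 1n) / 2n;
  return { integral: t / 100n, fraction: t % 100n };
}

console.log(roundedSqrt(2n));     // 1n (sqrt(2) is about 1.414)
console.log(roundedSqrt(3n));     // 2n (sqrt(3) is about 1.732)
console.log(sqrtTwoDecimals(2n)); // { integral: 1n, fraction: 41n }
```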

We should keep sqrt and cbrt for BigInts because we have pow.

@waldemarhorwat: Thanks warmly for the comments.

Some feedback that we’ve gotten from engine implementers like @syg, @codehag, and @jakobkummerow is that they do not find “completeness” rationales compelling (this is why floor, ceil, round, and trunc were also dropped, although of course those are also useless on BigInts). In response to that feedback, the general philosophy has been to contract greatly to a minimum core of obviously useful type overloads and to defer less-certainly-useful overloads to the future.

Having said that, I suspect that BigInt truncating sqrt and cbrt would be useful, but the pushback that I’ve gotten is that I cannot give any specific, definite use cases in which calculating the square/cube root of a BigInt would be useful. To use the example above, calculating arbitrary-precision square roots might conceivably be useful in applications, but what specific such applications would those be? We probably need even-more-specific applications to convince the engine implementers.

This is part of why I am soliciting research from the TC39 research incubator group as well as from developer surveys. I would welcome further suggestions regarding applications.

(This all falls under the greater general philosophy of now restricting type overloads to obviously useful use cases. I guess that’s another piece of the philosophy that needs to be documented in the explainer.)

floor, ceil, round, and trunc are useless on BigInts and provide no value. In fact, they add to the confusion as some of the discussion has indicated.

sqrt and cbrt are just inverses of the most common cases of pow. It would be as weird to have one and not the other as it would be to have * but not /. They're easy and very lightweight to implement — the implementation cost of including them is trivial enough that it's not worth the effort to conduct developer surveys.

I'm afraid we're getting into analysis paralysis and design-by-voting rather than picking the simplest option, which is including the Math functions that mathematically make sense.

Suppose you want to compute the square root of a BigInt n rounded to the nearest integer

Then I suggest you do Math.round(Math.sqrt(Number(my_bigint))). In fact, if you're interested in results rounded to integer (or to two decimal places, for that matter), then your entire calculation is probably better off with Numbers.

sqrt and cbrt are just inverses of the most common cases of pow. It would be as weird to have one and not the other

That argument cuts both ways: pow is a legacy function rendered obsolete by the introduction of **, so there's little reason to extend it in any way (for BigInts or otherwise). So if the only reason to have sqrt is that we have pow, but the latter isn't really motivated other than by "because we could", then we might as well have neither of them.

the implementation cost of including them is trivial enough

From that claim, in turn, one could also conclude that it's perfectly fine to leave implementations to user space, especially as long as we know of no use cases.

For what it’s worth, I would like to gently push back against the notion that pow is merely a “legacy function”. pow remains in use today in functional programming as a reducible and partially applicable function object. I would be quite surprised if it weren’t still being used in new JavaScript code today. It’s not as if it were actually deprecated, like the with statement.

And if pow is still being used in new code today, then it will remain surprising whenever it doesn’t act like **.

In addition, as long as we’re assuming that “BigInt sqrt is useful as long as BigInt pow is in the language”, I would argue that BigInt ** also does count as being “in the language”. Under the previous assumption, BigInt sqrt is useful as long as BigInt ** is in the language. sqrt being an inverse of ** should be just as important as sqrt being an inverse of pow.

To be accurate, let's keep the distinction between "sqrt is useful" (of which no evidence has been presented) and "it's weird not to have sqrt" (which is an opinion that some individuals have expressed).

@jakobkummerow
https://github.com/Yaffle/continuedFractionFactorization/blob/main/continuedFractionFactorization.js#L36
well... it can be implemented in user space ... but would be nice to have bitLength

@Yaffle I agree that bitLength would be useful.
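For reference, bitLength is straightforward to sketch in user space (bitLength is a hypothetical name; as noted elsewhere in this thread, a built-in could instead read the internal representation directly):

```javascript
// Bit length of a BigInt's magnitude; 0n has zero bits by convention.
function bitLength(n) {
  if (n < 0n) n = -n;
  return n === 0n ? 0 : n.toString(2).length;
}

console.log(bitLength(0n));   // 0
console.log(bitLength(255n)); // 8
console.log(bitLength(256n)); // 9
```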

Then I suggest you do Math.round(Math.sqrt(Number(my_bigint))).

And then you'd sometimes get the wrong answer.

In fact, if you're interested in results rounded to integer (or to two decimal places, for that matter), then your entire calculation is probably better off with Numbers.

Yes, lots of things can be done using Numbers. That doesn't say anything about the cases where you want guaranteed precision and rounding/truncating behavior. I'm not interested in rehashing the debate about the general usefulness of BigInts.

the implementation cost of including them is trivial enough

From that claim, in turn, one could also conclude that it's perfectly fine to leave implementations to user space, especially as long as we know of no use cases.

One could reach an incorrect conclusion. I was referring to the amount of code this would take, which is minuscule — smaller than some of the comments on this thread. However, the knowledge required to do it correctly is quite specialized and not accessible to most users. Also, a user-space implementation would not work as well as a built-in one because it would not be able to take advantage of the internal representation.

I presented a brief update about this issue to the Committee at the October plenary today. I didn’t get any strong pushback on the overall philosophy and vision, although feedback time was brief.

I also am moving discussion about BigInt sqrt to #16.