Measurements with missing errors
Gregstrq opened this issue
Is it hard to make Measurements support data with a non-missing value but a missing uncertainty? Data with a missing uncertainty is quite different from data with zero uncertainty, so it would be good to distinguish these two cases.
Looks like linear propagation of errors should be easy in this case: you just propagate missing.
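For a sum z = x + y, linear propagation gives σ_z = sqrt(σ_x² + σ_y²). Julia's three-valued `missing` arithmetic already carries a missing uncertainty through such an expression, so (setting aside the storage question) the propagation step itself is a one-liner:

```julia
# Linear error propagation for z = x + y: σ_z = sqrt(σ_x^2 + σ_y^2).
# `missing` propagates through ^, + and sqrt in base Julia, so a
# missing uncertainty simply carries through to the result.
σx = 2.0
σy = missing
σz = sqrt(σx^2 + σy^2)   # missing
```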
I've stumbled upon an actual example where such an option is required, while parsing a file with half-life data for different nuclear isotopes. I noticed that a half-life normally has an uncertainty provided; however, for some of the isotopes the uncertainty is simply missing.
Duplicate of #59?
Yes, it is a duplicate of #59 and of #62, and it was decided that measurement(1, missing) should throw an error, with the following argumentation:
This case doesn't seem as clear-cut to me. My reasoning is that just because the uncertainty has gone missing somewhere doesn't mean the actual value is not known and should be missing. A missing uncertainty might even be interpreted to mean zero uncertainty by some. I think it is better to throw an error here than just make assumptions for the user.
But why should it be a problem if missing uncertainties just propagate?
```julia
x = 1.0 ± 2.0
y = 1.0 ± missing
x + y
# 2.0 ± missing
```
We do not make any assumptions for the user; they can always decide how to treat the Missings in the final result.
> But why should it be a problem if missing uncertainties just propagate?
That'd require changing the layout of the data structure, with a likely significant impact on performance.
It looks like the missing uncertainty can be emulated by some non-physical value, like NaN.
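A sketch of that workaround (assuming Measurements.jl, which stores the uncertainty as a float): NaN is a valid Float64, and since NaN propagates through IEEE arithmetic, it survives the quadrature sum just as a missing uncertainty would:

```julia
using Measurements

x = measurement(1.0, 2.0)
y = measurement(1.0, NaN)   # stand-in for a missing uncertainty
x + y                       # 2.0 ± NaN: the NaN propagates
```

The drawback is that a NaN placed deliberately cannot be told apart from an uncertainty that became NaN through a genuine numerical problem.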