Allow `max_value` and `min_value` to control range for all numpy dtypes
dcherian opened this issue · comments
It looks like these are only obeyed for floating dtypes (https://hypothesis.readthedocs.io/en/latest/_modules/hypothesis/extra/numpy.html#from_dtype).
It would be quite useful to limit for the other dtypes too.
I think we already do support it for signed and unsigned integers (via this helper); complex numbers don't have a natural ordering, but we do support min/max magnitude (as for st.complex_numbers()), and while it'd be nice to support bounds on datetimes and timedeltas, it's really not clear how to do so effectively given that the temporal resolution can vary.
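For the integer case, the core of such a helper is just intersecting the user's bounds with the dtype's representable range. A minimal sketch (the function name `clipped_bounds` is hypothetical, not Hypothesis's actual helper):

```python
import numpy as np

def clipped_bounds(dtype, min_value=None, max_value=None):
    # Hypothetical helper: intersect user-supplied bounds with the
    # range representable by the given integer dtype.
    info = np.iinfo(dtype)
    lo = info.min if min_value is None else max(info.min, int(min_value))
    hi = info.max if max_value is None else min(info.max, int(max_value))
    if lo > hi:
        raise ValueError(f"no {np.dtype(dtype)} values in [{min_value}, {max_value}]")
    return lo, hi

# Bounds wider than the dtype are clipped rather than rejected:
print(clipped_bounds(np.int8, max_value=1000))  # (-128, 127)
```

The resulting `(lo, hi)` pair can then be passed straight to an integers strategy before casting to the target dtype.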
Thanks. I was looking to control the range here:
hypothesis/hypothesis-python/src/hypothesis/extra/numpy.py
Lines 222 to 225 in 5578efc
But you're right that max_value is not a good name for controlling the ints that get cast to datetime.
Indeed - if we have bounds here, they should be expressed over the represented moments or durations, not the underlying integer. But in that case, there's no one value which has both the precision to bound (sub) nanosecond granularity, and the magnitude to bound year granularity!
Hypothesis tends to solve such problems by avoiding them, so that users can write their own strategy with whatever specific decisions make sense downstream.
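A downstream strategy can sidestep the resolution problem by fixing a unit up front: express the bounds as moments at that resolution, then generate the underlying int64s. A sketch of the conversion step, assuming the user has chosen second granularity (`datetime_bounds_as_ints` is an illustrative name, not part of any library):

```python
import numpy as np

def datetime_bounds_as_ints(min_dt, max_dt, unit="s"):
    # Hypothetical user-side helper: bound over represented moments,
    # not raw ints, by fixing one temporal resolution up front and
    # converting both endpoints to int64 offsets in that unit.
    lo = np.datetime64(min_dt).astype(f"datetime64[{unit}]").astype("int64")
    hi = np.datetime64(max_dt).astype(f"datetime64[{unit}]").astype("int64")
    return int(lo), int(hi)

# One day after the epoch, at second resolution:
print(datetime_bounds_as_ints("1970-01-01", "1970-01-02"))  # (0, 86400)
```

Those ints can feed an ordinary integers strategy, with results mapped back via `np.datetime64(i, unit)`; the trade-off (sub-second precision vs. multi-year magnitude) is then an explicit, per-project choice rather than one the library has to make globally.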
Sounds good.