HypothesisWorks / hypothesis

Hypothesis is a powerful, flexible, and easy to use library for property-based testing.

Home Page: https://hypothesis.works


Allow `max_value` and `min_value` to control range for all numpy dtypes

dcherian opened this issue · comments

It looks like these are only obeyed for floating dtypes (https://hypothesis.readthedocs.io/en/latest/_modules/hypothesis/extra/numpy.html#from_dtype).

It would be quite useful to limit for the other dtypes too.

I think we already support this for signed and unsigned integers (via this helper). Complex numbers don't have a natural ordering, but we do support min/max magnitude (as for st.complex_numbers()). And while it would be nice to support bounds on datetimes and timedeltas, it's really not clear how to do so effectively, given that the temporal resolution can vary.
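For the integer case, the behaviour described above amounts to intersecting the user's bounds with the dtype's representable range. A minimal sketch of that idea (`clamped_integer_bounds` is a hypothetical name, not part of Hypothesis):

```python
import numpy as np

def clamped_integer_bounds(dtype, min_value=None, max_value=None):
    """Intersect user-supplied bounds with the dtype's representable range.

    Hypothetical sketch of the kind of helper mentioned above; the real
    Hypothesis internals differ.
    """
    info = np.iinfo(dtype)
    lo = info.min if min_value is None else max(info.min, min_value)
    hi = info.max if max_value is None else min(info.max, max_value)
    if lo > hi:
        raise ValueError(f"no {np.dtype(dtype)} values in the requested range")
    return lo, hi

# uint8 can only represent 0..255, so wider bounds get clamped:
print(clamped_integer_bounds(np.uint8, min_value=-5, max_value=300))
```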

Thanks. I was looking to control the range here:

```python
if allow_nan is not False:
    elems = st.integers(-(2**63), 2**63 - 1) | st.just("NaT")
else:  # NEP-7 defines the NaT value as integer -(2**63)
    elems = st.integers(-(2**63) + 1, 2**63 - 1)
```
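The NEP-7 detail in that comment can be checked directly with NumPy: NaT is stored as the int64 minimum, which is why the no-NaT branch starts one above it. A minimal check:

```python
import numpy as np

# Per NEP-7, NaT is represented by the minimum int64 value, -(2**63).
nat_as_int = np.array(["NaT"], dtype="datetime64[ns]").view("int64")[0]
assert nat_as_int == -(2**63)

# Every other int64 value round-trips to a real (non-NaT) moment.
moment = np.datetime64(-(2**63) + 1, "ns")
assert not np.isnat(moment)
```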

But you're right that max_value is not a good name for controlling the ints that get cast to datetime.

Indeed - if we have bounds here, they should be expressed over the represented moments or durations, not the underlying integer. But in that case, there's no single value type which has both the precision to bound (sub-)nanosecond granularity and the magnitude to bound year granularity!
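To illustrate the resolution problem (my own numbers, not from the thread): the same underlying int64 bound denotes wildly different moments depending on the dtype's unit, so no single min/max moment can serve every unit.

```python
import numpy as np

# The int64 maximum interpreted at nanosecond resolution lands in 2262...
print(np.datetime64(2**63 - 1, "ns"))  # 2262-04-11T23:47:16.854775807

# ...but the same order of magnitude at year resolution is astronomically
# far in the future, so a bound precise enough for [ns] cannot also
# express the representable range of [Y].
print(np.datetime64(2**62, "Y"))
```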

Hypothesis tends to solve such problems by avoiding them, so that users can write their own strategy with whatever specific decisions make sense downstream.
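One way such a downstream strategy could work (names and helper are my own sketch, not Hypothesis API): pick a fixed unit, convert the desired datetime bounds to the underlying int64 range, then draw integers in that range and map them back to datetime64.

```python
import numpy as np

def datetime_bounds_as_int64(min_dt, max_dt, unit="ns"):
    """Convert datetime64 bounds at a fixed unit to an int64 range.

    A user-written strategy could then draw integers in [lo, hi]
    (e.g. with st.integers) and map each n back via np.datetime64(n, unit).
    Hypothetical helper, not part of hypothesis.extra.numpy.
    """
    lo = np.datetime64(min_dt, unit).astype(np.int64)
    hi = np.datetime64(max_dt, unit).astype(np.int64)
    if lo > hi:
        raise ValueError("min_dt must not exceed max_dt")
    return int(lo), int(hi)

# Example: one second after the epoch, at nanosecond resolution.
lo, hi = datetime_bounds_as_int64("1970-01-01", "1970-01-01T00:00:01")
assert (lo, hi) == (0, 1_000_000_000)
assert np.datetime64(hi, "ns") == np.datetime64("1970-01-01T00:00:01", "ns")
```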

Sounds good.