JuliaMath / FixedPointNumbers.jl

fixed point types for julia


Making display style consistent with `Base`

kimikage opened this issue · comments

Thanks to JuliaLang/julia#36107, Julia nightly (1.6.0-DEV) now uses the exported type aliases (e.g. N0f8) when printing types.

Julia 1.4.2

julia> N0f8
Normed{UInt8,8}

julia> N0f8[0, 1]
2-element Array{N0f8,1} with eltype Normed{UInt8,8}:
 0.0N0f8
 1.0N0f8

julia> Float32[0, 1]
2-element Array{Float32,1}:
 0.0
 1.0

Julia 1.6.0-DEV.356

julia> N0f8
N0f8 = Normed{UInt8,8}

julia> N0f8[0, 1] # the summary is customized and the elements don't check :typeinfo
2-element Array{N0f8,1} with eltype N0f8:
 0.0N0f8
 1.0N0f8

julia> Float32[0, 1]
2-element Vector{Float32}:
 0.0
 1.0

I think we should avoid unnecessary customization on Julia v1.6 and above. Also, `FixedPointNumbers.showtype` should be stricter (see the sketch after the examples below).

julia> FixedPointNumbers.showtype(stdout, N0f8);
N0f8
julia> FixedPointNumbers.showtype(stdout, Normed{UInt128,8});
N120f8
julia> N120f8
ERROR: UndefVarError: N120f8 not defined

julia> FixedPointNumbers.showtype(stdout, Normed{UInt8, Int32(8)}); # cf. #162
N0f8
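
As a rough illustration, a stricter `showtype` could print the short form only when the corresponding alias is actually defined and refers to exactly this type. This is a minimal sketch, assuming the current internal helpers `typechar`, `nbitsfrac`, `bitwidth`, and `signbits`; it is not a final design.

function showtype(io::IO, ::Type{X}) where {X <: FixedPoint}
    f = nbitsfrac(X)
    m = bitwidth(X) - f - signbits(X)
    alias = Symbol(typechar(X), m, 'f', f)    # e.g. :N0f8
    # Print the short form only if the alias exists and is identical to X;
    # this rejects N120f8 (undefined) and Normed{UInt8, Int32(8)} (not === N0f8).
    if isdefined(FixedPointNumbers, alias) && getfield(FixedPointNumbers, alias) === X
        print(io, alias)
    else
        show(io, X)    # fall back to the full type name
    end
    io
end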

In the next minor release of ColorTypes (i.e. v0.11), the display bug of AGray32 will be fixed. In line with that, I plan to revise the display of the types in ColorTypes as well. (cf. JuliaGraphics/ColorTypes.jl#191, JuliaGraphics/ColorTypes.jl#202)

This is off topic, but I think we should also revise the message of `throw_converterror`.

We often use the aliases (e.g. N0f8 instead of Normed{UInt8,8}), and Julia v1.6 will always show them.

julia> 2.0N0f8
ERROR: ArgumentError: Normed{UInt8,8} is an 8-bit type representing 256 values from 0.0 to 1.0; cannot represent 2.0
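
For reference, the alias string for the message can already be obtained from the existing `showtype` helper; the REPL line below just demonstrates that.

julia> sprint(FixedPointNumbers.showtype, N0f8)
"N0f8"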

Also, I prefer the "2^n" notation for counts longer than 5 digits (a sketch follows the examples below).

julia> 1e10N16f16
ERROR: ArgumentError: Normed{UInt32,16} is a 32-bit type representing 4294967296 values from 0.0 to 65537.0; cannot represent 1.0e10

julia> Normed{UInt128,100}(10^9) # 0 values!?
ERROR: ArgumentError: Normed{UInt128,100} is a 128-bit type representing 0 values from 0.0 to 2.68435e8; cannot represent 1000000000
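
A hypothetical helper along these lines (the name `count_string` is mine, not part of the package) could format the count and, as a side effect, avoid the overflow shown above:

function count_string(::Type{X}) where {X <: FixedPoint}
    m = bitwidth(X)
    # Up to 5 decimal digits (m <= 16) print the exact count; otherwise use "2^m".
    # This also avoids the Int overflow of 2^128 that yields "0 values" above.
    m <= 16 ? string(1 << m) : string("2^", m)    # e.g. "256", "65536", "2^32", "2^128"
end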

What I personally think is more important is the compilation time of `throw_converterror`. Although we normally use only a few types such as N0f8 and N0f16, `throw_converterror` is a "terror" for testing, since it gets compiled for many types exhaustively.
Of course, applying `@nospecialize` is one measure, but I think it is more effective to avoid the string interpolation. For reference, here is the current implementation:

@noinline function throw_converterror(::Type{X}, x) where {X <: FixedPoint}
    n = 2^bitwidth(X)
    bitstring = bitwidth(X) == 8 ? "an 8-bit" : "a $(bitwidth(X))-bit"
    io = IOBuffer()
    show(IOContext(io, :compact=>true), typemin(X)); Xmin = String(take!(io))
    show(IOContext(io, :compact=>true), typemax(X)); Xmax = String(take!(io))
    throw(ArgumentError("$X is $bitstring type representing $n values from $Xmin to $Xmax; cannot represent $x"))
end
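
A very rough sketch of the direction, assumed to live inside the package: it combines `@nospecialize` with building the message by `print` instead of interpolation, and reuses `showtype` and the "2^m" formatting proposed above. The exact wording is illustrative only.

@noinline function throw_converterror(::Type{X}, @nospecialize(x)) where {X <: FixedPoint}
    io = IOBuffer()
    showtype(io, X)    # alias, e.g. "N0f8", instead of "Normed{UInt8,8}"
    m = bitwidth(X)
    print(io, m == 8 ? " is an " : " is a ", m, "-bit type representing ")
    print(io, m <= 16 ? string(1 << m) : string("2^", m), " values from ")
    show(IOContext(io, :compact => true), typemin(X))
    print(io, " to ")
    show(IOContext(io, :compact => true), typemax(X))
    print(io, "; cannot represent ")
    show(io, x)
    throw(ArgumentError(String(take!(io))))
end

The destination type X still triggers some per-type compilation (typemin, typemax, showtype), so this only illustrates removing the interpolated string, not a complete fix.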

It looks like GitHub's database is inconsistent. I'm sorry for the noise.