nholthaus / units

a compile-time, header-only, dimensional analysis and unit conversion library built on c++14 with no dependencies.

Home Page: http://nholthaus.github.io/units/

Compound assignment's rhs's (underlying type) shouldn't be converted to lhs's

JohelEGP opened this issue · comments

This is part of the reason I don't love first-class integral units.

I think this is a deficiency in gcc/clang. With MSVC the following operation:

meters<int> c(16);
c *= 2.0;

produces:

warning C4244: 'argument': conversion from 'double' to 'const int', possible loss of data

On gcc and clang I get nothing, even with -Wall and -Wconversion, when the double is implicitly converted at the site of the operator*= call, whose rhs parameter is the underlying type of lhs; that doesn't seem like correct behavior.
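
In other words (a sketch of the mechanism, not library output), with c from the snippet above, the call behaves roughly as if the rhs were narrowed before the multiplication, and gcc/clang stay silent about it:

c *= 2.0;                   // 2.0 binds to operator*='s int-typed rhs parameter; no diagnostic
c *= static_cast<int>(2.0); // roughly what the line above does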


We can get the warning generated by inserting an l-value into the multiplication operators, like so:

template<class UnitTypeLhs, typename T,
	std::enable_if_t<std::is_arithmetic_v<T> && traits::has_linear_scale_v<UnitTypeLhs>, int> = 0>
constexpr traits::replace_underlying_t<UnitTypeLhs, std::common_type_t<typename UnitTypeLhs::underlying_type, T>>
operator*(const UnitTypeLhs& lhs, T rhs) noexcept
{
	using CommonUnit = decltype(lhs * rhs);
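	// naming the product as an l-value of lhs's underlying type is what makes gcc/clang flag the double-to-int narrowing (-Wfloat-conversion)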
	decltype(std::decay_t<typename UnitTypeLhs::underlying_type>{}) result = lhs.value() * rhs;
	return CommonUnit(result);
}

which produces:

/mnt/e/workspace/units/unitTests/main.cpp:1625:14: required from here
/mnt/e/workspace/units/include/units/core.h:2987:88: warning: conversion from 'double' to 'int' may change value [-Wfloat-conversion]
2987 | decltype(std::decay_t<typename UnitTypeLhs::underlying_type>{}) result = lhs.value() * rhs;

Which, unusually for this library, I find to be pretty helpful for the user (the "required from here" note points to the intuitive spot in the user code).

The verbose and seemingly unnecessary decltype invocation is just to get the compiler to say int instead of incomprehensible template alias types referring to lhs's underlying type. I haven't checked the assembly yet, but I don't think this affects runtime performance when optimized.


What I'm a little unclear about from the mpusz thread is whether that kind of change would address your concerns, or whether you think we should static_assert there instead?

What concerns me is correctness with respect to how the operations are done directly on the underlying types. Someone upgrading their code from ints to units will find a change in behavior when mixing in doubles. So I think that the type of rhs should be generalized to be any underlying_type/unit (as appropriate) rather than lhs's.
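
As a hypothetical illustration of that divergence, using meters<int> from the snippet above and the operator*= quoted further down (whose rhs is lhs's underlying type):

int i = 16;
i *= 2.5;           // built-in rules: promoted to double, 16 * 2.5 = 40.0, truncated back to 40

meters<int> m(16);
m *= 2.5;           // rhs is first converted to int (2.5 -> 2), so m ends up as 32, not 40

The same expression yields different values only because the conversion is forced onto rhs before the multiplication instead of onto the result after it.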

Shouldn't the intermediate types simply follow C/C++'s default type promotion rules, and let compiler flags handle unsafe conversions? I think that's what @nholthaus suggests above.

By forcing rhs's type to be lhs's (underlying type), those defaults are being overridden.

You're specifically referring to this?

template<class UnitTypeLhs, std::enable_if_t<traits::is_unit_v<UnitTypeLhs>, int> = 0>
constexpr UnitTypeLhs& operator*=(UnitTypeLhs& lhs, const typename UnitTypeLhs::underlying_type& rhs) noexcept
                                                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{
	lhs = lhs * rhs;
	return lhs;
}

In that case, yeah, we can open up rhs to be any arithmetic or dimensionless type. Obviously dimensioned types would require a change to the type being assigned, which can't be done.
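
A minimal sketch of what opening up rhs to any arithmetic type could look like (trait names taken from the snippets above; the converting construction back to UnitTypeLhs is assumed, and this is not the library's actual code):

template<class UnitTypeLhs, typename T,
	std::enable_if_t<traits::is_unit_v<UnitTypeLhs> && std::is_arithmetic_v<T>, int> = 0>
constexpr UnitTypeLhs& operator*=(UnitTypeLhs& lhs, const T& rhs) noexcept
{
	// lhs * rhs follows the usual promotions (e.g. int * double -> double);
	// the conversion back to lhs's type happens here, where it can be diagnosed.
	lhs = UnitTypeLhs(lhs * rhs);
	return lhs;
}

A dimensionless-unit rhs could be admitted with an additional trait check; dimensioned rhs types stay excluded since, as noted, they would change the type being assigned. Whether that back-conversion should warn, static_assert, or convert silently is exactly the question above.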

All rhss that have type detail::type_identity_t<UnitTypeLhs> or typename UnitTypeLhs::underlying_type.