Operator Space->Field can't be part of a ProductSpaceOperator
kohr-h opened this issue
Context
I stumbled upon this issue while trying to use a scalar-valued side constraint in an optimization problem.
Problem
We currently have a few places with hard-coded checks `isinstance(space, LinearSpace)`, and one of them is the range of operators provided to `ProductSpaceOperator`. This excludes functionals (i.e., `Functional` instances and `Operator` instances whose `range` is a `Field`) in the first place.
Workarounds
If that restriction is relaxed to `isinstance(space, (LinearSpace, Field))` -- as already done in a few places -- we run into the implicit `LinearSpace` assumptions. The most obvious one is in `ProductSpaceOperator._call`: it uses `op(input, out=out[i])` in the loop, which isn't allowed for `Functional` because its range elements cannot be assigned in place.
If we work around that by first checking `op.is_functional` and doing `out[i] = op(input)` when it is `True`, we run into the next issue: `ProductSpaceElement.__setitem__` hard-codes in-place semantics in the loop over the parts.
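To make the dispatch concrete, here is a minimal, self-contained sketch of such a product-space call loop that branches on whether a component is a functional. This is not ODL code; the operator classes are toy stand-ins:

```python
class VectorOp:
    """Stand-in for an Operator whose range supports in-place output."""
    is_functional = False

    def __call__(self, x, out=None):
        result = [2.0 * xi for xi in x]
        if out is not None:
            out[:] = result   # range elements are mutable: in-place works
            return out
        return result


class ScalarFunctional:
    """Stand-in for a Functional: its range is a field, so no out= support."""
    is_functional = True

    def __call__(self, x):
        return sum(x)  # returns a plain float, which cannot be mutated


def product_call(ops, x, out):
    """Sketch of a ProductSpaceOperator._call with the functional workaround."""
    for i, op in enumerate(ops):
        if op.is_functional:
            out[i] = op(x)        # must rebind: floats are immutable
        else:
            op(x, out=out[i])     # regular operators can write in place
    return out


out = [[0.0, 0.0], 0.0]
product_call([VectorOp(), ScalarFunctional()], [1.0, 2.0], out)
# out is now [[2.0, 4.0], 3.0]
```

The `out[i] = op(x)` branch is exactly what clashes with a `__setitem__` that hard-codes in-place semantics.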
At this point, it becomes really tedious to do all those checks over and over, and to change the logic of complicated functions just to make them work with `Field` instead of `LinearSpace`.
The actual problem
Looking at it from a different angle, the fundamental problem that we try to avoid by disallowing `op(input, out=field_elem)` is that `Field` elements are primitive Python types that cannot be assigned to in place.
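The underlying limitation can be seen with plain Python numbers: rebinding a name produces a new object rather than mutating the old one, so an operator has no way to write its result into a caller-supplied float:

```python
x = 1.0
y = x           # y and x refer to the same float object
x += 2.0        # rebinds x to a new float; it does not mutate the object
print(x, y)     # 3.0 1.0 -- y is unchanged, unlike a mutable array


def scale_into(value, out):
    # A float parameter is just a local name; rebinding it has no
    # effect on the caller's variable.
    out = 2.0 * value


result = 0.0
scale_into(5.0, result)
print(result)   # still 0.0: the "output" never reached the caller
```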
Solution?
Thus, a logical way out of this problem is to create a wrapper class, e.g., `RealNumber`, for those types that allows assignment and usage as `out` in operators. Of course, to make that class behave nicely, it needs to implement basically all the methods that `LinearSpaceElement` provides, and `Field` also needs to be extended correspondingly. It's not totally straightforward, but certainly doable.
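A minimal sketch of what such a wrapper could look like (the name `RealNumber` and its methods are assumptions here, not existing ODL API; a real version would need the full `LinearSpaceElement` surface):

```python
class RealNumber:
    """Mutable scalar wrapper supporting in-place assignment, mimicking
    a small part of a LinearSpaceElement-style interface."""

    def __init__(self, value=0.0):
        self.value = float(value)

    def assign(self, other):
        """In-place assignment, so the wrapper can be used as `out`."""
        self.value = float(getattr(other, 'value', other))
        return self

    # A LinearSpaceElement-like surface would also need arithmetic:
    def __add__(self, other):
        return RealNumber(self.value + float(getattr(other, 'value', other)))

    def __mul__(self, other):
        return RealNumber(self.value * float(getattr(other, 'value', other)))

    def __float__(self):
        return self.value

    def __repr__(self):
        return f'RealNumber({self.value})'


def functional(x, out=None):
    """A toy functional that can now write into a caller-supplied scalar."""
    result = sum(x)
    if out is not None:
        out.assign(result)
        return out
    return RealNumber(result)


out = RealNumber()
functional([1.0, 2.0, 3.0], out=out)
print(float(out))  # 6.0
```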
So the question is: how do we want to resolve the original issue?
I feel that the current limitation is severe enough that we should address it in some way, but I'm not entirely sure which way is best. I'd be glad to hear opinions on this.
I think that this is more or less the same problem as #563, for which @adler-j created an example work-around in PR #565 (which, however, was closed and not merged).
My thoughts on the issue: in general it would be nice if `Field` worked more like `LinearSpace`, since then it would be easier to really interpret `Functional` as a special instance of `Operator`. However, when it comes to how to implement this in the best way, I am not sure. What you describe sounds good to me, and it sounds like #565 may be reused for this purpose.
Makes me wonder: do we have a good enough reason to have `Field` and its subclasses in the first place? What would break if we used `rn(1)` instead of `RealNumbers()`? Just a thought.
I'm asking because, to solve this issue, it seems necessary to make the two more and more similar, to the point where they hardly differ anymore.
I suppose two elements of `rn(1)` can't be multiplied in ODL, as general `LinearSpace`s aren't algebras. However, this one is.
Mathematically true, but practically, all `TensorSpace`s do support (componentwise) multiplication.
I think the closest analogy here is with NumPy scalars versus one-element arrays: `np.array(1.0)` versus `np.array([1.0])`. The space `rn(1)` is 1-dimensional, whereas `RealNumbers()` could be considered 0-dimensional.
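For reference, the NumPy analogy in code: both variants are mutable, unlike a plain Python float, so both could in principle serve as an `out` argument:

```python
import numpy as np

scalar_like = np.array(1.0)    # 0-dimensional: shape ()
vector_like = np.array([1.0])  # 1-dimensional: shape (1,)

print(scalar_like.shape)       # ()
print(vector_like.shape)       # (1,)

# Ellipsis indexing assigns into a 0-d array in place:
scalar_like[...] = 2.0
print(float(scalar_like))      # 2.0
```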
Probably the correct space would be `rn(())` rather than `rn(1)`. And because that's kind of unreadable, we could have a function `real_numbers()` as an alias for `rn(())`.