BitBlt rule blendAlphaScaled may not be correct
MariusDoe opened this issue
The BitBlt blending rule Form blendAlphaScaled (34) doesn't blend the colors 0x00000000 (source; fully transparent) and 0xFFFFFFFF (destination; white) correctly. The output is 0xFEFEFEFE (slightly transparent white), whereas it should be 0xFFFFFFFF (fully opaque white).
I think the problem lies in the implementation of BitBltSimulation>>#alphaBlendScaled:with:. When calculating the summand containing the destinationWord, a right bit shift by 8 is performed to normalize the result after multiplying with unAlpha (semantically a division by 256). However, unAlpha is in the range 0x00 - 0xFF, so a division by 0xFF = 255 should be used instead. I think the other bit shifts by 8 in the function are fine, as they are only used to extract or compose individual bytes, not to (semantically) divide a value by 256.
This problem causes the described behavior, because the following computation is performed in each channel:
((0xFF * 0xFF) >> 8) + 0x00 = 0xFE01 >> 8 = 0xFE
A division by 0xFF produces the expected result:
((0xFF * 0xFF) / 0xFF) + 0x00 = 0xFE01 / 0xFF = 0xFF
Code to reproduce:
| src dst |
src := (Form extent: 1 @ 1 depth: 1)
colorAt: 0 @ 0 put: Color black; "transparency is applied with the fillColor:"
yourself.
dst := (Form extent: 1 @ 1 depth: 32)
colorAt: 0 @ 0 put: Color white; "16rFFFFFFFF"
yourself.
dst
copyBits: (0 @ 0 corner: 1 @ 1)
from: src
at: 0 @ 0
clippingBox: (0 @ 0 corner: 1 @ 1)
rule: Form blendAlphaScaled
fillColor: Color transparent. "16r00000000"
(dst pixelValueAt: 0 @ 0) hex "16rFEFEFEFE"
Notes:
- I did not thoroughly test the proposed change; I only calculated the result for the described case.
- I am no expert in writing high-performance code, but an improved implementation might avoid a division and instead use a more optimized combination of +, * and >> for performance. A quick search on the internet shows some possible alternatives.
- I read in the contribution guidelines that the VM is developed in Smalltalk and the C code is only generated from the Smalltalk code. I did not have the time to set up a VM image and just looked into the C code to find the problem. I hope the problem description still helps.
- The described behavior doesn't occur when blending two 32 bit Forms. I think this is due to alphaSourceBlendBits32 being called in this case, which is optimized for edge cases with full or zero transparency and handles those correctly (it doesn't call alphaBlendScaled:with: for them).
> The described behavior doesn't occur when blending two 32 bit Forms.

Would you elaborate on your scenario? How do those lower-depth forms come to be these days?
We use Pen, which is a subclass of BitBlt. More specifically, we use Pen>>roundNib: among others to create a circular brush for painting on a Form. This method creates a Form with depth 1. One can then set the color of the brush with Pen>>color:, which calls BitBlt>>fillColor:, which in turn sets its halftoneForm. According to the class comment of BitBlt, the pixel values of the halftoneForm are ANDed with the sourceForm pixels before the blending rule is applied.
A workaround for us might be to use Pens with 32 bit sourceForms, if that is possible. I only noticed the different behavior of 32 bit forms while trying to create a code snippet to reproduce the problem, so I haven't tried this workaround yet. Also, I think the problem still occurs with alpha values between 1 and 254, even if the error might be less noticeable there.
An example of how to efficiently divide by 255 in very similar context:
https://docs.google.com/document/d/1tNrMWShq55rfltcZxAx1N-6f82Dt7MWLDHm-5GQVEnE/edit
Oops, see https://source.squeak.org/VMMaker/VMMaker.oscog-nice.3249.diff
For some reason, I often forget a bitAnd: operation necessary for safely multiplexing the division...
Should be fixed by 085c500