nodeca / pica

Resize image in browser with high quality and high speed

Home Page: http://nodeca.github.io/pica/demo/

USM color shift?

tomdav999 opened this issue · comments

Hi, I've been doing some comparisons of pica's USM output to Photoshop and ImageMagick. Photoshop and ImageMagick seem to produce similar results to each other (with comparable USM parameters) but pica is not really comparable to either. It seems to me that pica sharpens significantly more, and there seems to be some color shift.

The color shift is the bigger issue for me, and it's very different from Photoshop and ImageMagick which seem more natural. At first I wasn't sure if the color shift was a byproduct of canvas conversion and/or resizing, so I resized with pica using no sharpening (only canvas resize) and the results are comparable to Photoshop / ImageMagick, so I'm pretty sure the color shift is coming from pica's USM recipe (and it seems to add a bluish tint, although it may depend on image).

I tried looking at the code to see what pica is doing and of course it is well beyond my area of expertise, ha. I did a search on SO and found this thread where Vitaly refers to color shift, so maybe this is to be expected?

https://stackoverflow.com/questions/13296645/opencv-and-unsharp-masking-like-adobe-photoshop/23322820#23322820

Is this just "how it is" with pica? Has anyone developed alternative USM recipes for pica that come closer to Photoshop / ImageMagick?

Here is my test image (not my image, but it seemed to be a good one to test various things like chroma sampling, USM, etc.)

https://live.staticflickr.com/65535/51137845528_83bd94d4bf_o.jpg

If you run it through Photoshop USM you can see the color of the water stays somewhat neutral and golden, but pica USM tends to replace the golden highlights in the water with more blue.

Unfortunately, I'm not strong with image processing formulas. I can only implement them fast :). I found two major approaches to sharpening:

  • apply an unsharp mask at the end (post-processing)
  • pre-blur before the resize filter

For obvious reasons (image size), post-processing with an unsharp mask is faster. But you can still apply the pre-blur manually if you prefer the other approach: https://github.com/nodeca/glur.
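The post-processing approach can be illustrated with a minimal 1-D sketch (hypothetical helpers for illustration, not pica's actual code): sharpen by adding back the difference between the original and a blurred copy, gated by a threshold.

```javascript
// Tiny 3-tap blur, clamped at the edges.
function boxBlur3(src) {
  return src.map((_, i) => {
    const a = src[Math.max(0, i - 1)];
    const b = src[i];
    const c = src[Math.min(src.length - 1, i + 1)];
    return (a + b + c) / 3;
  });
}

// Unsharp mask: out = orig + amount * (orig - blurred), skipping small diffs.
function unsharp1d(src, amount, threshold) {
  const blurred = boxBlur3(src);
  return src.map((v, i) => {
    const diff = v - blurred[i];
    return Math.abs(diff) >= threshold ? v + amount * diff : v;
  });
}

unsharp1d([90, 90, 180, 90, 90], 1, 0); // → [90, 60, 240, 60, 90]: the edge is amplified
```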

The unsharp mask is here: https://github.com/nodeca/multimath/tree/master/lib/unsharp_mask. If you wish to experiment, disable webassembly in the options and play with the JS sources.

In short: if you know exactly what to change, I can do that. But I can't participate in new investigations (I can only try to remember my old findings if you have questions). I'm really far from this area, sorry. Maybe we could add an alternate sharpening option if it works.


#35 - one source of distortion: pica should use 32-bit intermediate storage between the vertical/horizontal passes of the convolvers everywhere. But I think this is something generic, not causing problems with color.

Thank you, I'm starting to appreciate why you say pica (and blob reduce) is best for kittens and selfies but not for resizing high-quality images. No wonder nobody seems to have come up with a good solution for resizing file uploads client side. Even if pica's USM worked as well as Photoshop's, the images would still suffer quality loss due to the various issues you mention, and the sad reality that modern browsers still can't competently convert canvas to JPEG without chroma subsampling (barring saving at 100% quality in Chrome and Safari, which defeats the purpose of trying to reduce file size on the client). It seems only Firefox disables chroma subsampling at JPEG quality 0.9 and higher.

Edit: I found a pure-JS library that can encode JPEG without chroma subsampling.

https://github.com/vigata/petitoJPEG

Yes, it's slower than canvas.toBlob(), but it's fast enough, and you aren't forced to use 90% quality in Firefox or 100% quality in Chrome and Safari to preserve chroma, with bonus points that results will be consistent across all browsers (instead of "black box" JPEG encoding).

Yes, unfortunately, the JS API has limitations. If you need top quality in the browser, the only way is to compile some library with emscripten and work with the JPEG directly, without canvas, and in the LAB color space. But that's a completely different approach, with different "costs".

If you decide not to investigate the USM issue, please close the ticket.

Closed

Re-opening this as I think I found the problem. I believe USM should be applied to the V channel (HSV) instead of lightness (HSL).

See: https://en.wikipedia.org/wiki/HSL_and_HSV

"The difference between HSL and HSV is that a color with maximum lightness in HSL is pure white, but a color with maximum value/brightness in HSV is analogous to shining a white light on a colored object (e.g. shining a bright white light on a red object causes the object to still appear red, just brighter and more intense)."

This is exactly what I'm seeing with pica's USM. It tends to whitewash the image instead of brightening it. For example, in my test image pica's USM shifts the golden shimmer in the water to blue, whereas Photoshop and ImageMagick preserve the golden shimmer.
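A quick numeric illustration of the quoted difference (these tiny helpers exist only for this example): for a fully saturated gold pixel, HSV's V is already the channel maximum, while HSL's L sits at the midpoint of the extremes, so boosting L drags the minimum channel up toward white.

```javascript
// L (HSL) is the midpoint of the channel extremes; V (HSV) is the maximum.
function hslLightness(r, g, b) {
  return (Math.max(r, g, b) + Math.min(r, g, b)) / 2;
}
function hsvValue(r, g, b) {
  return Math.max(r, g, b);
}

// Saturated gold, rgb(255, 200, 0):
hslLightness(255, 200, 0); // 127.5 — mid-range, so USM has headroom to push toward pure white
hsvValue(255, 200, 0);     // 255   — already maximal; raising V cannot desaturate the hue
```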

Unfortunately, my head is spinning from the code, which works with 16-bit and 12-bit unsigned integers, so I'm not exactly sure how to change the conversion from HSL to HSV. I think I figured out RGB to HSV, but I'm not sure how to convert HSV back to RGB after applying USM to the V channel. Is this something you could tweak?

@tomdav999 could you attach test images with comments?

  • original
  • after USM (invalid)
  • after photoshop (valid)
  • all above + area marks where to see difference

FYI: formulas are here: http://www.easyrgb.com/en/math.php, and the USM source is here: https://github.com/nodeca/multimath/blob/master/lib/unsharp_mask/unsharp_mask.js (can be used for an override).

Thanks, I had tried working from the examples in http://www.easyrgb.com/en/math.php, but the problem is... I'm not sure how to adapt those examples to the 16-bit and 12-bit unsigned integers used in the code.

JS has no integers in its public API. Internal JIT optimization is too sophisticated a thing to explain in a couple of words. Just disable webassembly and do everything in floats (with auto-converted values). The only difference will be speed. But if you tell me the result is right, I will take care of rewriting it with optimizations.

PS. If you wish to dive into JS JIT inline-cache optimization, I'd recommend starting with https://mrale.ph/
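For reference, disabling webassembly as advised above would look something like this (a sketch assuming pica's documented `features` constructor option; `from`, `to`, and the unsharp values are placeholders):

```javascript
// Restrict pica to its plain-JS code path so the unsharp sources in lib/
// run un-compiled and can be edited for experiments ('wasm' is simply omitted).
const picaOptions = { features: ['js'] };

// const pica = require('pica')(picaOptions);
// pica.resize(from, to, { unsharpAmount: 80, unsharpRadius: 0.6, unsharpThreshold: 2 });
```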

Here is a comparison of pica to imagemagick (note: photoshop produces almost identical output as imagemagick so I did not include it in the comparison). This is an animated png that swaps pica and imagemagick every half second. You can see the golden highlights in the center of the image pulsing in/out if you look carefully.

(attached: animated comparison image)

For imagemagick/photoshop I used USM 150, 0.5, 2. For pica I used USM 75, 0.5, 2. I think pica's first parameter is equivalent to half of the first parameter in Photoshop/ImageMagick.

The original image is linked above (see OP) if you want to try yourself. Also, I used https://github.com/vigata/petitoJPEG to encode the output from Pica (instead of using browser encoding) to preserve chroma. As noted above, the native browser jpeg encoding will subsample the chroma unless you save at 100% jpeg quality (firefox being the exception), so if you run the same tests be sure to save as PNG (or 100% jpeg quality) in pica!

@tomdav999 please understand me right - I'm far from image processing and don't remember a lot of details. If I can skip thinking about images, there's a chance to do the math fast. If I have to think about images, everything will be slow.

#209 (comment) - here is the list of desired fixtures; having them all in one place (for example, a ZIP) would help me to "not think about images".

OK, I was able to rewrite the code to apply USM to the V (brightness) channel instead of the L (lightness) channel. Results are much better now; in fact, there is less color shift than USM in Photoshop or ImageMagick. I believe this is because Photoshop and ImageMagick apply USM to all channels, so they should have some color shift, but also more sharpening (I think). See this article on why it's recommended to sharpen V (instead of all channels):

https://www.linuxtopia.org/online_books/graphics_tools/gimp_advanced_guide/gimp_guide_node63_003.html

I still haven't figured out why pica's amount parameter needs to be so much smaller than Photoshop's. Perhaps due to a different Gaussian blur? In any case, pica's recommended USM parameters (80, 0.6, 2) produce acceptable results, very similar to USM (1.50, 0.5, 2) in Photoshop, which is what I typically use to restore sharpness after a resize.

Edit: I think I found the problem causing the parameter differences; see the "emulatePhotoshop" option:

// 16-bit mono gaussian blur, as required in multimath's unsharp_mask sources
var glur_mono16 = require('glur/mono16');

module.exports = function unsharp(img, width, height, amount, radius, threshold) {
	var r, g, b;
	var min, max;
	var diff, iTimes4, h, s, v, floor_h, y, z;

	if (amount === 0 || radius < 0.5) {
		return;
	} else if (radius > 2.0) {
		radius = 2.0;
	}

	var amountFp = (amount / 100 * 0x1000 + 0.5)|0;
	var thresholdFp = (threshold * 257)|0;

	var size = width * height;
	var Orig = new Uint16Array(size);

	// option to sharpen RGB (comparable to photoshop USM on RGB channels with standardized "amount" and "threshold" parameters)
	const emulatePhotoshop = false;
	if (emulatePhotoshop) {
		for (var j = 0; j < 3; j++) { // r, g, b
			for (var i = 0; i < size; i++) {
				Orig[i] = (img[i * 4 + j] * 257)|0;
			}

			var Blur = new Uint16Array(Orig);
			glur_mono16(Blur, width, height, radius);

			for (var i = 0; i < size; i++) {
				// do NOT multiply by 2 (makes "amount" and "threshold" comparable to photoshop)
				diff = Orig[i] - Blur[i];

				if (Math.abs(diff) >= thresholdFp) {
					y = Orig[i] + (amountFp * diff + 0x800 >> 12) >> 8;
					if (y > 255) y = 255; else if (y < 0) y = 0;
					img[i * 4 + j] = y;
				}
			}
		}
		return;
	}

	for (var i = 0; i < size; i++) {
		iTimes4 = i * 4;
		r = img[iTimes4];
		g = img[iTimes4 + 1];
		b = img[iTimes4 + 2];
		max = (r >= g && r >= b) ? r : (g >= b && g >= r) ? g : b;

		// HSV v in terms of 16 bit unsigned integer in [0, 0xffff]
		Orig[i] = (max * 257)|0;
	}

	var Blur = new Uint16Array(Orig);
	glur_mono16(Blur, width, height, radius);

	/* eslint-disable indent */
	for (var i = 0; i < size; i++) {
		// this is non-standard (multiplying by 2 effectively halves the "threshold" and doubles the "amount")
		diff = 2 * (Orig[i] - Blur[i]);

		if (Math.abs(diff) >= thresholdFp) {
			// add the unsharp mask to the v (brightness) channel
			v = Orig[i] + (amountFp * diff + 0x800 >> 12);
			if (v > 0xffff) v = 0xffff; else if (v < 0) v = 0;

			// convert RGB to HSV, use sharpened v above, and convert back to RGB
			// math is taken from here: http://www.easyrgb.com/en/math.php
			iTimes4 = i * 4;
			r = img[iTimes4];
			g = img[iTimes4 + 1];
			b = img[iTimes4 + 2];
			max = (r >= g && r >= b) ? r : (g >= r && g >= b) ? g : b;
			min = (r <= g && r <= b) ? r : (g <= r && g <= b) ? g : b;

			if (min === max) {
				r = g = b = v >> 8;
			} else {
				// HSV s in terms of float in [0, 1]
				s = (max - min) / max;

				// HSV h in terms of float in [0, 6)
				h = (r === max) ? (g - b) / (max - min) : (g === max) ? 2 + (b - r) / (max - min) : 4 + (r - g) / (max - min);
				if (h < 0) h += 6; else if (h >= 6) h -= 6;

				floor_h = Math.floor(h);
				y = v * (1 - s) >> 8;
				z = v * (1 - s * ((floor_h & 1) ? h - floor_h : 1 - (h - floor_h))) >> 8;
				v = v >> 8; // convert v from [0, 0xffff] to [0, 255]
				if (floor_h == 0) {
					r = v; g = z; b = y;
				} else if (floor_h == 1) {
					r = z; g = v; b = y;
				} else if (floor_h == 2) {
					r = y; g = v; b = z;
				} else if (floor_h == 3) {
					r = y; g = z; b = v;
				} else if (floor_h == 4) {
					r = z; g = y; b = v;
				} else {
					r = v; g = y; b = z;
				}
			}

			img[iTimes4] = r;
			img[iTimes4 + 1] = g;
			img[iTimes4 + 2] = b;
		}
	}
};

Perhaps due to different Gaussian blur?

AFAIK, a gaussian blur cannot be "different". It is either gaussian or not. But the sources are available with all references; you can try to verify correctness.


Thanks for the HSV investigations. That helped significantly.

OK, I think I figured out why pica's Amount parameter is not comparable to Photoshop's. The problem is the "diff" calculation, which multiplies (O - GB) by 2. I think you were misled by Royi's comments in this SO topic:

https://stackoverflow.com/questions/13296645/opencv-and-unsharp-masking-like-adobe-photoshop/

The multiplication by 2 has two side effects: not only does it effectively double the Amount parameter, it also effectively halves the Threshold parameter (at least as implemented in pica). For example, Amount = 80, Threshold = 2 in pica is comparable to Amount = 1.60, Threshold = 1 in Photoshop. I'm not sure if it's worth correcting or not (for consistency with prior versions).
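The equivalence described above can be written out explicitly (a hypothetical helper, just to make the arithmetic concrete): the doubled diff doubles the effective amount and halves the effective threshold.

```javascript
// Map pica's (pre-7.0) USM params onto their approximate Photoshop equivalents.
function picaToPhotoshop(amount, threshold) {
  return {
    amount: (2 * amount) / 100, // pica amount is in percent, and diff is doubled
    threshold: threshold / 2    // |2 * (O - B)| >= t trips at half the nominal t
  };
}

picaToPhotoshop(80, 2); // → { amount: 1.6, threshold: 1 }
```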

Just to test I added an emulatePhotoshop option (see edited code above) and found results are very similar to Photoshop (testing a range of Amount and Threshold parameters).

I also tested sharpening the LAB "L" channel in Photoshop, and it seemed to have similar side effects to sharpening "L" in HSL. I'm by no means an expert in any of this, but my conclusion is that sharpening the HSV brightness channel is preferable.

@tomdav999 I hope the new unsharp code will be committed tomorrow, and after you check it we can decide what to do with the 2x amount.

I have no principal objection to making the params closer to something well known. Maybe I would add an option { unsharpMode: 'gimp' } to be more comfortable for different users, if anyone knows how to normalise the params.

I think it's fixed now, I replaced unsharp on L with unsharp on V. Please verify.

As an additional optimization, the RGB->HSV->RGB conversion doesn't appear to be necessary. Since (H, S, V*x) is equivalent to (R*x, G*x, B*x), we can apply unsharp on the V channel directly to RGB.
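That shortcut can be sketched like this (an illustrative helper, not the committed code): scaling all three RGB channels by vNew/vOld changes only brightness, because hue and saturation depend on the ratios between the channels, which the uniform scale preserves.

```javascript
// Apply a sharpened V directly to RGB: scale each channel by vNew/vOld.
function applySharpenedV(r, g, b, vNew) {
  const vOld = Math.max(r, g, b); // V is the channel maximum
  if (vOld === 0) return [0, 0, 0];
  const k = vNew / vOld;
  return [Math.min(255, Math.round(r * k)),
          Math.min(255, Math.round(g * k)),
          Math.min(255, Math.round(b * k))];
}

applySharpenedV(200, 150, 50, 220); // → [220, 165, 55]: brighter, hue unchanged
```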

@tomdav999 if USM works as expected, we need to clarify params scaling. It's a good chance for breaking changes, because release will be major.

@puzrin, I am unfamiliar with GIMP as I do not use that software. I have only used Photoshop and ImageMagick. For Photoshop and ImageMagick I am fairly certain the threshold is compared to (Original - Blur) without 2x multiplier. I would suggest removing the 2x multiplier so that Amount and Threshold parameters are comparable to Photoshop and ImageMagick. It seems GIMP parameters are comparable to Photoshop per these discussions:

https://www.dpchallenge.com/forum.php?action=read&FORUM_THREAD_ID=137173
(this discussion is almost 20 years old, and I believe GIMP has been enhanced in recent years to sharpen only the V channel)

https://redskiesatnight.com/2005/04/06/sharpening-using-image-magick/

@rlidwka, not sure how to really test your updated code as I am using blob reduce (not pica). Great observation regarding optimization. I didn't realize scaling V was equivalent to scaling RGB. It's so simple, it makes me wonder why we don't see more USM algorithms sharpening "V" (max RGB). GIMP is the only software that seems to do this. I believe USM in Photoshop and ImageMagick sharpens all channels independently. The Photoshop "pros" recommend sharpening the "L" channel in LAB. From my tests, sharpening LAB "L" is similar to sharpening HSL "L" and produces color shift, whereas sharpening V produces no color shift, but maybe there are other reasons to sharpen "L", not sure.

@puzrin, given the lack of certainty in terms of "best practice", I would suggest offering three options: the original HSL, a standardized HSL (removing the 2x multiplier to make parameters comparable to Photoshop LAB sharpening), and HSV (removing the 2x multiplier to make parameters comparable to GIMP).

@tomdav999

not sure how to really test your updated code as I am using blob reduce (not pica).

https://github.com/nodeca/pica/tree/master/demo - try the demo folder on your local disk (I did not update gh-pages). Use the build to update the /dist folder (after you pull the git updates).

I would suggest offering three options: the original HSL, a standardized HSL (removing the 2x multiplier to make parameters comparable to Photoshop LAB sharpening), and HSV (removing the 2x multiplier to make parameters comparable to GIMP).

  • I'm strongly against adding zillions of options. Pica's API is simple because everything possible to reject is rejected :). Only one sharpening profile should survive.
  • I'm OK with removing the 2x multiplier and announcing this in the changelog & migration docs.

Is that all, or are more breaking (or non-breaking) changes required?

2x multiplier removed in da08393

@tomdav999 I've updated the dist folder & demo: http://nodeca.github.io/pica/demo/. Please try it.

If that's ok, i will release.

I tested the pica demo and it seems to be working fine (no color shift even with maximum amount and radius). It's a little hard to verify whether the 2x removal is working (due to the small output image), but looking at the code it appears good.

Released 7.0.0

I've updated image-blob-reduce

Thanks. It looks like some of the documentation was updated to reflect the change in parameters, but these snippets still reflect the old parameters (to eliminate confusion you might want to update them):

// Resize from Canvas/Image to another Canvas
pica.resize(from, to, {
  unsharpAmount: 80,
  unsharpRadius: 0.6,
  unsharpThreshold: 2
})

unsharpAmount - >=0, in percents. Default = 0 (off). Usually between 50 and 100 is good.

Also, the readme has a bad link for the comparison of USM parameters to popular software. Here is the correct link:

https://github.com/nodeca/pica/wiki/Unsharp-params-in-popular-softare

I can't update that wiki, but the threshold comparison is wrong for ImageMagick. In ImageMagick the threshold is defined in terms of [0, 1], whereas pica's is defined in terms of [0, 255] (similar to Photoshop), so I think the wiki threshold factor for ImageMagick should be 1/255x.
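To make the scaling concrete (illustrative numbers only): converting an ImageMagick threshold to pica's range multiplies by 255, so the wiki's factor going the other way would be 1/255.

```javascript
// ImageMagick threshold lives in [0, 1]; pica/Photoshop use [0, 255].
const imThreshold = 0.05;                            // example ImageMagick value
const picaThreshold = Math.round(imThreshold * 255); // 13 on pica's scale
```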

Thanks! Updated all as you said (also changed header/link in wiki).

PS. AFAIK, you should see an edit button in the wiki. I've checked the permissions; they are not restricted to the "project team". https://github.com/nodeca/pica/wiki/Unsharp-mask-params-in-popular-softare - I'm not sure how correct this info is and trust anyone who thinks it can be improved.