pixelogik / ColorCube

Dominant color extraction for iOS, macOS and Python

Finding which color segment a pixel belongs to

deshan opened this issue

Hi,
I need to replace the pixel colors of the image with the 4 output colors.
How do I find which color segment a particular pixel belongs to? What would the calculation be?

Thanks
deshan

I tried a few methods:

  1. RGB distance: too slow
    https://stackoverflow.com/a/11550492

  2. With tolerance: faster
    https://stackoverflow.com/a/21622229

Anything better?

[screenshot: segmentation progress so far]

@deshan Could you explain your challenge / problem a bit more in detail? What are you trying to achieve and why? Then I might be able to help you.

@pixelogik Thanks for your reply

I am trying to segment an image based on color. Thanks to the ColorCube project I managed to find the dominant colors of the image.
Now I am trying to segment the original image based on those dominant colors.
The picture above shows my progress so far, but I am not sure about the method I used to compare the dominant colors with each pixel's color.

Please correct me if I am wrong.

This is how I did it:

NSArray *extractedColors = [colorCube extractBrightColorsFromImage:image avoidColor:nil count:4];
UIColor *dominantColor1 = extractedColors[0];
UIColor *dominantColor2 = extractedColors[1];
...

// Draw the segmented image
UIGraphicsBeginImageContext(image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
for (NSUInteger i = 0; i < imageWidth; i++)
{
  for (NSUInteger j = 0; j < imageHeight; j++)
  {
    // Color of the current pixel of the image (my own helper)
    UIColor *currentPixelColor = getCurrentPixelColor(i, j);

    // Compare currentPixelColor with the dominant colors (dominantColor1, ...)
    CGFloat r1, g1, b1, a1;
    [currentPixelColor getRed:&r1 green:&g1 blue:&b1 alpha:&a1];

    CGFloat r2, g2, b2, a2;
    [dominantColor1 getRed:&r2 green:&g2 blue:&b2 alpha:&a2];

    // I am not sure about the comparison below
    CGFloat tolerance = 0.1f;

    if (fabs(r1 - r2) <= tolerance &&
        fabs(g1 - g2) <= tolerance &&
        fabs(b1 - b2) <= tolerance) {
      CGContextSetRGBFillColor(context, r2, g2, b2, 1);
    }
    ...
  }
}

// Segmented image
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
@deshan I see.

The ColorCube algorithm uses Euclidean distance in color space as its metric. So using the same metric for assigning one of the dominant colors to each pixel makes sense.

In pseudo code this would look something like this:

for each pixel p:
    newColor = white
    // Color space is a 1.0/1.0/1.0 cube so this value is higher than all possible distances 
    closestDistance = 100000.0 
    for each dominant color c:
        deltaR = c.r - p.r
        deltaG = c.g - p.g
        deltaB = c.b - p.b
        delta = deltaR*deltaR + deltaG*deltaG + deltaB*deltaB
        if delta < closestDistance:
            closestDistance = delta
            newColor = c
    p = newColor

Maybe this helps.