DeepSceneSeg / EfficientPS

PyTorch code for training EfficientPS for Panoptic Segmentation

Home Page: http://panoptic.cs.uni-freiburg.de/


Cityscapes inference

GabrieleGalimberti-GaleSelector opened this issue

Hi,
I want to obtain a different color for each instance of the same class, but when I run the network on an input image I get this result:

[attached image: stuttgart_000105_000019_panoptic]

If I combine the code in "tools/cityscapes_inference.py" and "tools/cityscapes_save_predictions.py", all instances of a class share the same color and their boundaries are white.

Is this the final output of the network, or did I skip a step?

If I skipped a step, which step and which code can I use to obtain the final panoptic output?

-Lines 49 and 50 of tools/cityscapes_save_predictions.py define the color palette, which only covers the 18 stuff and thing classes. You will first have to define a larger palette, of maybe 256 entries (even 30 should be enough depending on the image; you can run np.unique on pan_pred to find the total number of instances plus stuff classes).
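A minimal sketch of such a palette, sized from np.unique as suggested above (the function name make_palette and the fixed seed are illustrative, not from the repository):

```python
import numpy as np

def make_palette(pan_pred, seed=0):
    """Return (ids, colors): one random color per id found in pan_pred.

    An extra last row is reserved for void pixels (id 0), matching the
    convention described below.
    """
    ids = np.unique(pan_pred)              # every stuff class + instance id
    rng = np.random.default_rng(seed)      # seeded so colors are reproducible
    colors = rng.integers(0, 256, size=(len(ids) + 1, 3), dtype=np.uint8)
    colors[-1] = (0, 0, 0)                 # void pixels rendered black
    return ids, colors
```

Because the palette is built per image from np.unique, every instance id is guaranteed its own row, and two instances of the same class get different colors.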

-Remove lines 70-71, 75-83, and 86, and modify lines 72 and 73 as:
pan_pred = pan_pred.numpy()
panoptic_col[pan_pred==0] = colors.shape[0] - 1
panoptic_col = Image.fromarray(colors[panoptic_col])
(assuming the last row of the colors array stores the void pixel color)

-Replace sem_img with panoptic_col in line 85.

@mohan1914 Do you have suggestions on how to properly define the palette?