yuanming-hu / exposure

Learning infinite-resolution image processing with GAN and RL from unpaired image datasets, using a differentiable photo editing model.

SaturationPlus Filter Parameter returns 0

XononoX opened this issue · comments

In all of my results, the .steps. image shows that the parameter used for the SaturationPlus filter is always 0.00. Occasionally it's 0.01, but it's never enough to make a meaningful difference in the next image.

I thought it might be a problem with the data I used for training, but when I looked at the pretrained example's outputs, I saw the same issue with the SaturationPlus filter in those .steps. images.

That's the problem I'm experiencing right now. If anyone can offer some guidance, I would appreciate it. Below I describe the steps I've taken to fix it and why they haven't worked.

I noticed that TensorFlow now has a function for adjusting the saturation of images directly:

enhanced_s = tf.compat.v1.image.adjust_saturation(img, scale)

where scale is the multiplier applied to the input image's saturation. I tried replacing the process() function of the SaturationPlusFilter class in filters.py with this function. Of course, there's no preexisting gradient for adjust_saturation(), so I hardcoded the scale value to 1.5 and used the output param to linearly interpolate between the input and enhanced images, as Yuanming-Hu did, but the network still doesn't learn how to properly use the filter after 20,000 iterations of training.
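As an aside, for anyone hitting the same missing-gradient issue: the saturation scaling can also be built from plain differentiable TF ops instead of adjust_saturation(). This is only a minimal sketch of that idea, not the repository's own implementation; the luma weights and the final clamp are my assumptions:

import tensorflow as tf

def adjust_saturation_differentiable(img, scale):
  # img: (batch, H, W, 3) RGB in [0, 1]; scale > 1 increases saturation.
  # Rec. 601 luma weights (assumption; any grayscale projection works).
  lum = tf.reduce_sum(
      img * tf.constant([0.299, 0.587, 0.114]), axis=-1, keepdims=True)
  # Move each pixel away from (scale > 1) or toward (scale < 1) its luminance.
  out = lum + scale * (img - lum)
  # Clamp back into the displayable range.
  return tf.clip_by_value(out, 0.0, 1.0)

Since every op here has a gradient, scale wouldn't need to be hardcoded to 1.5; it could in principle be the learned filter parameter itself.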

I trained a new model after changing some of the code in filters.py: I modified the SaturationPlusFilter class to more closely resemble ContrastFilter. After training for 20,000 iterations on my own data, the saturation filter now gives me different parameters! Unfortunately, it consistently returns parameters between -0.45 and -0.51, so it reliably reduces saturation instead of enhancing it, which is the opposite of what I expected the training to learn from my Uncorrected and Corrected image dataset...

Here's the code I changed to get the SaturationPlusFilter to work:

# Relies on the module-level imports in filters.py: tensorflow as tf, cv2,
# the Filter base class, and the lerp() helper from util.py.
class SaturationPlusFilter(Filter):

  def __init__(self, net, cfg):
    Filter.__init__(self, net, cfg)
    self.short_name = "S+"
    self.num_filter_parameters = 1

  def filter_param_regressor(self, features):
    # tanh maps the CNN features into (-1, 1), so the parameter can go
    # negative; a negative value makes lerp() move *below* the input
    # saturation, i.e. the filter can desaturate as well as saturate.
    sat_param = tf.tanh(features)
    return sat_param  # This parameter gets passed to process()
    # return tf.sigmoid(features)  # Default: parameter restricted to (0, 1)

  def process(self, img, param):
    # Reshape (batch, 1) -> (batch, 1, 1, 1) so it broadcasts over H, W, C.
    param = param[:, :, None, None]
    full_color = tf.compat.v1.image.adjust_saturation(img, 1.5)  # MODIFIED

    img = tf.minimum(img, 1.0)

    # Linear interpolation between the input and the 1.5x-saturated image.
    return lerp(img, full_color, param)

  def visualize_filter(self, debug_info, canvas):
    exposure = debug_info["filter_parameters"][0]  # the saturation parameter
    if canvas.shape[0] == 256:
      cv2.rectangle(canvas, (8, 40), (56, 52), (1, 1, 1), cv2.FILLED)
      cv2.putText(canvas, "S %+.2f" % exposure, (8, 48),
                  cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 0))
    else:
      self.draw_high_res_text("Saturation %+.2f" % exposure, canvas)
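For anyone reading this without the repo handy: lerp() above is the linear interpolation helper from the project's util.py. If I recall correctly it is equivalent to the following one-liner (reconstructed from memory, so double-check against your checkout):

def lerp(a, b, t):
  # Standard linear interpolation: t = 0 returns a, t = 1 returns b.
  # With t < 0 (possible here because of the tanh regressor) the result
  # extrapolates past a, away from b -- i.e. active desaturation, which
  # would explain the consistently negative parameters I'm seeing.
  return (1 - t) * a + t * b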

Note that I've updated the code globally to downsample images to a resolution of 256x256 pixels instead of 64x64, in order to preserve the histograms as much as possible. I'm working with very high-resolution (6000x4000) images, and the default source image size seems to be dropping too much information before processing. This may have some significant impact on the filters that I'm not aware of, but so far, performance on my images seems to have improved with this change.
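To sanity-check that claim about histogram loss, here is a small standalone script I used to compare saturation histograms at 64x64 versus 256x256 against the full-resolution original; the file path and bin count are placeholders:

import cv2

img = cv2.imread("sample.jpg")  # hypothetical 6000x4000 source image
sat_full = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]
h_full = cv2.calcHist([sat_full], [0], None, [32], [0, 256])
cv2.normalize(h_full, h_full)

for size in (64, 256):
  small = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
  sat = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)[:, :, 1]
  h_small = cv2.calcHist([sat], [0], None, [32], [0, 256])
  cv2.normalize(h_small, h_small)
  # Correlation close to 1.0 means the downsampled histogram is faithful.
  print(size, cv2.compareHist(h_full, h_small, cv2.HISTCMP_CORREL))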