andjoer / AI_color_grade_lut

Creating color LUTs with artificial intelligence


"No training configuration found in the save file"

acaffe opened this issue

In 64pix2pix, after running the last bit of code under "save the model", it says:

WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.

and in model_img2LUT it says:

WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually.

Did anyone find a solution to this problem?

Hi, thanks for your comment. Are you trying to retrain a loaded model? I admit I never tried retraining a pretrained model. If you are not trying to retrain, I think these messages can be ignored.
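
If the second warning bothers you, the model can also be compiled manually after loading, roughly like this (a minimal sketch; the filename is hypothetical, and the optimizer and loss are placeholders, not necessarily the settings the notebook trained with):

import tensorflow as tf

# Loading with compile=False skips the missing training configuration,
# so the "No training configuration found" warning goes away
model = tf.keras.models.load_model('generator.h5', compile=False)

# Placeholder optimizer/loss; only needed if you want to evaluate or retrain
model.compile(optimizer='adam', loss='mae')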

I'm training a new model. The thing is that at the end of this page:
https://github.com/andjoer/AI_color_grade_lut
it says:
Wait until you see the generated .cube file in the content folder
Download the .cube file and apply it in any program that can apply LUTs (Photoshop, Premiere, Resolve etc.)

But no .cube file is ever generated; all I got are the warning messages from above. I trained a new model 20 minutes ago, but now I also get this at the end of the model_img2LUT colab:

ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

I will try it again with the 128pix2pix; maybe that one will work?

(Also, for anyone who runs into a problem at the "Read the files" stage: there is a bug where, when "pictures.zip" is extracted, the "train" and "test" folders get moved to the root and the "pictures" folder gets deleted, so you have to create a new folder called "pictures" and move the "test" and "train" folders back in. A rough cell for that fix is below.)
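
A minimal sketch of that workaround, assuming the extraction dumped the folders directly into /content (the usual Colab working directory):

import os
import shutil

# Recreate the "pictures" folder that the extraction step deleted
os.makedirs('/content/pictures', exist_ok=True)

# Move "train" and "test" back under "pictures" if they landed in the root
for folder in ('train', 'test'):
    src = os.path.join('/content', folder)
    if os.path.isdir(src):
        shutil.move(src, os.path.join('/content/pictures', folder))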

It has been a while since I worked on this, but what I see is that the parameter img_size needs to be set manually to match the trained model, so if you trained a 64px model it needs to be 64. Could you maybe try model_img2LUT with a 128pix model that I published?
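
One way to avoid such a mismatch would be to read the resolution from the saved model instead of setting it by hand; an untested sketch, assuming the generator takes square RGB images as in the notebooks:

import tensorflow as tf

model = tf.keras.models.load_model(model_path, compile=False)

# input_shape is (batch, height, width, channels) for a square RGB generator,
# so the height gives the img_size the model was trained with
img_size = model.input_shape[1]
print(img_size)  # 64 for a 64px model, 128 for a 128px one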

I just tried it with wallstreet.h5 and it gave me this error:

ValueError Traceback (most recent call last)

in ()
14
15
---> 16 output = processing(advanced, max_factor, model_path, file_path, reverse, size)

8 frames

in processing(advanced, max_factor, model_path, image_path, reverse, img_size)
209 LUT = img_to_lut_adv(input_image[0], target[0], max_factor)
210 else: # create the LUT in standard mode
--> 211 LUT = img_to_lut(input_image[0], target[0])
212 filename, file_extension = os.path.splitext(image_path)
213 output = filename + '.cube'

in img_to_lut(input_image, target_image)
94 interpolation = neighbors.KNeighborsRegressor(40, weights='distance')
95 LUT = np.asarray(LUT)
---> 96 tRp = interpolation.fit(icolor, tR).predict(LUT)
97 tGp = interpolation.fit(icolor, tG).predict(LUT)
98 tBp = interpolation.fit(icolor, tB).predict(LUT)

/usr/local/lib/python3.7/dist-packages/sklearn/neighbors/_regression.py in fit(self, X, y)
211 self.weights = _check_weights(self.weights)
212
--> 213 return self._fit(X, y)
214
215 def predict(self, X):

/usr/local/lib/python3.7/dist-packages/sklearn/neighbors/_base.py in _fit(self, X, y)
398 if self._get_tags()["requires_y"]:
399 if not isinstance(X, (KDTree, BallTree, NeighborsBase)):
--> 400 X, y = self._validate_data(X, y, accept_sparse="csr", multi_output=True)
401
402 if is_classifier(self):

/usr/local/lib/python3.7/dist-packages/sklearn/base.py in _validate_data(self, X, y, reset, validate_separately, **check_params)
579 y = check_array(y, **check_y_params)
580 else:
--> 581 X, y = check_X_y(X, y, **check_params)
582 out = X, y
583

/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, estimator)
977 )
978
--> 979 y = _check_y(y, multi_output=multi_output, y_numeric=y_numeric)
980
981 check_consistent_length(X, y)

/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in _check_y(y, multi_output, y_numeric)
988 if multi_output:
989 y = check_array(
--> 990 y, accept_sparse="csr", force_all_finite=True, ensure_2d=False, dtype=None
991 )
992 else:

/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator)
798
799 if force_all_finite:
--> 800 _assert_all_finite(array, allow_nan=force_all_finite == "allow-nan")
801
802 if ensure_min_samples > 0:

/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in _assert_all_finite(X, allow_nan, msg_dtype)
114 raise ValueError(
115 msg_err.format(
--> 116 type_err, msg_dtype if msg_dtype is not None else X.dtype
117 )
118 )

ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

Sorry, I currently don't have the time to debug this myself. But could you try setting advanced = 0? That mode has fewer parameters, so less can go wrong.
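
You could also check whether the NaNs come from the generator output itself before the KNN interpolation runs; something along these lines (untested; target stands for the model output that processing passes into img_to_lut):

import numpy as np

# If this prints False, the generator output already contains NaN/inf
# and the problem is upstream of the LUT interpolation
print(np.isfinite(target).all())

# Blunt workaround: replace non-finite values before fitting the KNN
target = np.nan_to_num(target, nan=0.0, posinf=1.0, neginf=0.0)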

"advanced" was already set to false, I set it to = 0 (maybe there is a difference?) just now it didn't work. Don't worry about it, somebody else might come up with a solution.