KichangKim / DeepDanbooru

AI-based multi-label girl image classification system, implemented using TensorFlow.

Training script can't read dataset?

SoulflareRC opened this issue

Hi, I was trying to train the model using the command-line tool, but I'm not sure the data is actually being read and used to train the model. Training completed instantly for me, so I don't think it ever read the data, as the screenshot below shows:
[screenshot: training output]
My dataset folder's directory tree looks like this:
[screenshot: dataset directory tree]
and the my-dataset.sqlite file is in the same folder as the images folder.
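For comparison, the layout described in the README's Dataset Structure section, if I'm reading it correctly, shards images into subfolders named after the first two characters of each file's md5 (all lower-case):

    MyDataset/
        images/
            00/
                00000000000000000000000000000000.jpg
                ...
            01/
                ...
        my-dataset.sqlite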
Here is my project.json:

    {
        "image_width": 299,
        "image_height": 299,
        "database_path": "D:\\pycharmWorkspace\\DeepDanbooru\\test\\MyDataset\\my-dataset.sqlite",
        "minimum_tag_count": 0,
        "model": "resnet_custom_v2",
        "minibatch_size": 32,
        "epoch_count": 10,
        "export_model_per_epoch": 10,
        "checkpoint_frequency_mb": 200,
        "console_logging_frequency_mb": 10,
        "loss": "binary_crossentropy",
        "optimizer": "adam",
        "learning_rate": 0.001,
        "rotation_range": [0.0, 360.0],
        "scale_range": [0.9, 1.1],
        "shift_range": [-0.1, 0.1],
        "mixed_precision": false
    }

What am I missing here? Could you help with this issue?
Thank you.

Make sure that your image filenames are all lower-case.
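If some aren't, a small script along these lines will lower-case them (a hypothetical helper, not part of DeepDanbooru; adjust dataset_root to your dataset folder):

    from pathlib import Path

    # Rename every file under images/ to lower-case so that md5-based
    # lookups can find it. dataset_root is a placeholder path.
    dataset_root = Path(r"D:\pycharmWorkspace\DeepDanbooru\test\MyDataset")
    for path in dataset_root.glob("images/*/*"):
        lowered = path.with_name(path.name.lower())
        if path.name != lowered.name:
            path.rename(lowered)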

Hi, thank you for replying! My filenames are all MD5 hashes, so I'm pretty sure the filenames are not the issue. I was suspecting the problem comes from the image size. Do all the images have to be 299x299 (or generally the same size)? Can I change this parameter in the JSON file?

You don't need to resize your image files. They can be any size, and DeepDanbooru resizes them automatically.
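Conceptually the input pipeline does something like the following (an illustrative TensorFlow sketch, not the actual DeepDanbooru code path), where (299, 299) comes from image_width/image_height in project.json:

    import tensorflow as tf

    # Illustrative only: decode an image of arbitrary size and resize it
    # to the network input resolution configured in project.json.
    raw = tf.io.read_file("images/00/00000000000000000000000000000000.jpg")
    image = tf.io.decode_image(raw, channels=3, expand_animations=False)
    image = tf.image.resize(image, (299, 299)) / 255.0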