amazon-lookout-for-vision-image-augmentation-pipeline


Image augmentation for normal and abnormal images for Amazon Lookout for Vision model training.

1 Generate an augmented normal image dataset and prepare the input file using the mask and manifest generated by Amazon SageMaker Ground Truth

Make sure you have a normal image such as s10.png to start with.

Start an Amazon SageMaker Ground Truth labeling job to generate a mask (mask.png in this example) and a manifest file (output.manifest).
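For orientation, each line of the Ground Truth output manifest is a JSON record pointing at the source image and its mask. The sketch below shows the general shape only; the bucket, paths, and the labeling-job key prefix are hypothetical placeholders, and the exact keys depend on your labeling job.

```python
# Illustrative shape of one JSON Lines record in output.manifest from a
# SageMaker Ground Truth semantic-segmentation job. The bucket, paths, and
# the "my-labeling-job" key prefix are hypothetical placeholders.
import json

record = {
    "source-ref": "s3://qualityinspection/images/s10.png",
    "my-labeling-job-ref": "s3://qualityinspection/labels/mask.png",
}
print(json.dumps(record))
```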

Run the notebook im_augmentation_input.ipynb to generate an augmented normal image dataset and the input file for the SageMaker endpoint.
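As a rough illustration of this step, the snippet below applies two common augmentations (small rotations and brightness jitter) to the normal image with Pillow. This is a minimal sketch only; the actual transforms and output names used by im_augmentation_input.ipynb may differ.

```python
# Minimal augmentation sketch using Pillow; the real pipeline lives in
# im_augmentation_input.ipynb and may use different transforms and names.
from PIL import Image, ImageEnhance

src = Image.open("s10.png").convert("RGB")
for i in range(1, 6):
    img = src.rotate(2 * i, fillcolor=(255, 255, 255))          # small rotation
    img = ImageEnhance.Brightness(img).enhance(1.0 + 0.03 * i)  # brightness jitter
    img.save(f"s10_aug_{i}.png")                                # save augmented copy
```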

2 Image augmentation for synthetic defect generation

1. Download the pre-trained model from LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions and put it in the S3 model folder.

Install the tool for Yandex Disk link extraction:

```
pip3 install wldhx.yadisk-direct
```

Download the best model:

```
curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip
unzip big-lama.zip
```

Create a .tar.gz file for the model folder:

```
tar -czvf big-lama.tar.gz big-lama/
```

Upload the model .tar.gz file to the target S3 model folder:

```
aws s3 cp big-lama.tar.gz s3://qualityinspection/model/big-lama.tar.gz
```

2. Go to the synthetic-defects-endpoint folder:

```
cd synthetic-defects-endpoint
```

3. Run the notebook deploy-run-async-endpoint.ipynb to deploy the endpoint that generates missing-component synthetic defects.

Deploying the endpoint takes about 6-8 minutes. If the asynchronous inference succeeds, you will see the message Inference request succeed in the log file. The total processing time after invoking the endpoint is about 3.5 minutes for this dataset.
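For reference, invoking an asynchronous SageMaker endpoint from code looks roughly like the sketch below. The endpoint name and input location are placeholders; deploy-run-async-endpoint.ipynb contains the values this project actually uses.

```python
# Hedged sketch of an asynchronous endpoint invocation with boto3.
# EndpointName and InputLocation are placeholders; see
# deploy-run-async-endpoint.ipynb for the actual values.
import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint_async(
    EndpointName="synthetic-defects-endpoint",                # placeholder
    InputLocation="s3://qualityinspection/input/input.json",  # placeholder
    ContentType="application/json",
)
print("Results will be written to:", response["OutputLocation"])
```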

3 Check results in the S3 bucket

Check the train/test manifests generated by the endpoint in the S3 bucket at:

s3://qualityinspection/synthetic_defect/manifest/train/l4v_train.manifest

s3://qualityinspection/synthetic_defect/manifest/test/l4v_test.manifest

We will use them for Lookout for Vision model training.
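To spot-check a manifest without downloading it, you can stream it with boto3. A minimal sketch, assuming the bucket and key shown above:

```python
# Print the source image of the first few records in the train manifest.
# Assumes the bucket/key above; iter_lines() streams the JSON Lines file.
import json
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="qualityinspection",
    Key="synthetic_defect/manifest/train/l4v_train.manifest",
)
for i, line in enumerate(obj["Body"].iter_lines()):
    if i == 3:
        break
    print(json.loads(line)["source-ref"])
```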

4 Create a dataset for Lookout for Vision anomaly localization and trigger model training

The train/test manifest files contain the image locations and labels needed to train and test a Lookout for Vision anomaly localization model. Create a project in Amazon Lookout for Vision and choose Create dataset, then Import images labeled by SageMaker Ground Truth. Provide the S3 locations of the manifest files generated in the previous step for the training and test datasets separately, then choose Train model in the upper-right corner of the page to start model training.
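The same console flow can also be scripted with the boto3 lookoutvision client. A minimal sketch, assuming the manifest locations above; the project name and output prefix are placeholders:

```python
# Sketch of creating a Lookout for Vision project, attaching the generated
# manifests, and starting training via boto3. Project name and output
# prefix are placeholders.
import boto3

lfv = boto3.client("lookoutvision")
lfv.create_project(ProjectName="qualityinspection")  # placeholder name

for dataset_type, key in [
    ("train", "synthetic_defect/manifest/train/l4v_train.manifest"),
    ("test", "synthetic_defect/manifest/test/l4v_test.manifest"),
]:
    lfv.create_dataset(
        ProjectName="qualityinspection",
        DatasetType=dataset_type,
        DatasetSource={
            "GroundTruthManifest": {
                "S3Object": {"Bucket": "qualityinspection", "Key": key}
            }
        },
    )

lfv.create_model(  # starts model training
    ProjectName="qualityinspection",
    OutputConfig={
        "S3Location": {"Bucket": "qualityinspection", "Prefix": "model-output/"}
    },
)
```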
