My Own WGAN-GP Implementation

My own GAN implementation in PyTorch, set up to use the GPU if you have an NVIDIA graphics card.

The architecture chosen for this project is WGAN-GP (Wasserstein GAN with gradient penalty).
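For quick reference, the defining feature of WGAN-GP is the gradient penalty added to the critic loss, which softly enforces the 1-Lipschitz constraint. Below is a minimal PyTorch sketch of that term; it is a generic illustration of the technique, not necessarily line-for-line the code in this repository.

import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    # Random interpolation between real and fake samples.
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = (eps * real + (1 - eps) * fake).requires_grad_(True)

    # Gradient of the critic's scores w.r.t. the interpolated images.
    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]

    # Penalize any deviation of the gradient norm from 1.
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()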


With this project you will be able to:

  • Train your own GAN on the images of your choice;
  • Generate as many images as you want once training is complete (Beta version);
  • Produce videos by interpolating between the generated images (Beta version).

How to Use This Project


1. Cloning the Repository:

To clone this repository, use the following command:

git clone https://github.com/renan-siqueira/my-own-WGAN-GP-implementation.git

2. Creating and activating the virtual environment:

Windows:

python -m venv virtual_environment_name

To activate the virtual environment:

virtual_environment_name\Scripts\activate

Linux/Mac:

python3 -m venv virtual_environment_name

To activate the virtual environment:

source virtual_environment_name/bin/activate

3. Installing the dependencies:

Windows / Linux / Mac:

pip install -r requirements.txt

If you have a GPU, follow the steps in the "How to Use GPU" section below. Otherwise, install PyTorch with the following command:

pip install torch torchvision torchaudio

How to Use GPU:

1. Installing specific dependencies:

After creating and activating your virtual environment:

Windows/Linux/Mac:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

Note: Make sure your hardware and operating system are compatible with CUDA 12+.
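To confirm that PyTorch can actually see your GPU after installing, you can run this quick check (these are standard PyTorch calls):

import torch

print(torch.cuda.is_available())           # True if a CUDA-capable GPU is usable
if torch.cuda.is_available():
    print(torch.version.cuda)              # CUDA version PyTorch was built with
    print(torch.cuda.get_device_name(0))   # name of the first GPU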


4. Preparing the dataset:

    1. Create a folder named dataset inside the src folder.
    2. Inside the dataset folder, create another folder with a name of your choice for the labels.
    3. Copy all the images you wish to use for training into this folder.
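The resulting layout should look like this (the label folder and file names below are just examples):

src/
└── dataset/
    └── your_label/
        ├── image_001.jpg
        ├── image_002.jpg
        └── ...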

5. Configuring training parameters:

The src/json/training_params.json file is set up with optimized parameters for this type of architecture. However, feel free to modify it according to your needs.
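If you prefer to inspect or tweak the parameters programmatically rather than by hand, a small sketch follows; the override shown is hypothetical, and the actual keys are defined by the file itself.

import json

# Load the training parameters shipped with the project.
with open("src/json/training_params.json") as f:
    params = json.load(f)

# List the available parameters before editing.
for key, value in params.items():
    print(f"{key}: {value}")

# Hypothetical override -- use a key that actually exists in the file:
# params["num_epochs"] = 500
# with open("src/json/training_params.json", "w") as f:
#     json.dump(params, f, indent=4)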


6. How to use the main script:

The run.py script is the central entry point for all operations. It accepts command-line arguments that determine its behavior. Here's how to use it:

Training the model:

To train the model, execute the following command:

python run.py --training
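For reference, a flag-based dispatcher like this is typically wired up with argparse. The sketch below is a generic illustration with placeholder handlers, not the repository's actual run.py:

import argparse

def train():
    print("training...")           # placeholder for the real training loop

def generate_images(upscale=None):
    print("generating images...")  # placeholder

def generate_video(upscale=None):
    print("generating video...")   # placeholder

def main():
    parser = argparse.ArgumentParser(description="WGAN-GP runner")
    parser.add_argument("--training", action="store_true", help="train the model")
    parser.add_argument("--image", action="store_true", help="generate images")
    parser.add_argument("--video", action="store_true", help="generate an interpolation video")
    parser.add_argument("--upscale", type=int, help="target width in pixels")
    args = parser.parse_args()

    if args.training:
        train()
    elif args.image:
        generate_images(upscale=args.upscale)
    elif args.video:
        generate_video(upscale=args.upscale)

if __name__ == "__main__":
    main()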

7. Monitoring the Training:

  • You can follow the progress directly in the terminal or console.
  • A log file will be generated in the directory of the training version.
  • At the end of each epoch, samples of generated images are saved in the samples folder inside the training version directory.
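Per-epoch sample dumps like this are commonly done with torchvision's save_image. A generic sketch follows; the folder name, grid size, and latent size are assumptions, not necessarily what this repository uses:

import torch
from torchvision.utils import save_image

def save_samples(generator, epoch, latent_dim=100, n_samples=16, device="cpu"):
    generator.eval()
    with torch.no_grad():
        z = torch.randn(n_samples, latent_dim, 1, 1, device=device)
        fake = generator(z)
    # save_image lays the batch out as a grid and rescales it to [0, 1].
    save_image(fake, f"samples/epoch_{epoch:04d}.png", normalize=True)
    generator.train()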

8. How to generate images after completing the training (Beta version):

To generate images after completing the training, execute:

python run.py --image

You can adjust the image generation parameters in the configuration file referenced by settings.PATH_IMAGE_PARAMS.
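Under the hood, post-training generation boils down to loading the trained generator and sampling random latent vectors. The sketch below is generic; the architecture, checkpoint path, and latent size are placeholders, not this repository's actual definitions:

import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(                 # stand-in architecture
    nn.ConvTranspose2d(latent_dim, 3, 4, 1, 0),
    nn.Tanh(),
)
# With a real checkpoint and the matching generator class, load it first:
# state = torch.load("checkpoints/generator.pth", map_location="cpu")  # assumed path
# generator.load_state_dict(state)
generator.eval()

with torch.no_grad():
    z = torch.randn(16, latent_dim, 1, 1)  # 16 random latent vectors
    images = generator(z)                  # generated images in [-1, 1]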


9. How to generate a video through interpolation of the generated images (Beta version):

To generate a video through interpolation of the generated images, execute:

python run.py --video

Adjust the video generation parameters in the configuration file referenced by settings.PATH_VIDEO_PARAMS.
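Conceptually, the video comes from walking through the generator's latent space: pick two latent vectors, blend them frame by frame, and run each blend through the trained generator. A minimal sketch, where the frame count and latent size are assumptions:

import torch

latent_dim, n_frames = 100, 60
z_start = torch.randn(1, latent_dim, 1, 1)
z_end = torch.randn(1, latent_dim, 1, 1)

latents = []
for t in torch.linspace(0, 1, n_frames):
    # Linear blend between the two endpoints; each blended vector becomes
    # one video frame once passed through the trained generator.
    latents.append(torch.lerp(z_start, z_end, t))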


10. Upscaling:

If you want to upscale the generated images or video, use the --upscale argument followed by the width value:

python run.py --image --upscale 1024

Replace --image with --video if you're generating a video. The above command will upscale the images to a width of 1024 pixels. Adjust as needed.
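The simplest form of width-based upscaling is plain bilinear resampling that preserves the aspect ratio, as in the generic sketch below; the repository may well use a different method (e.g. a super-resolution model):

import torch
import torch.nn.functional as F

def upscale_to_width(images, width):
    # images: (N, C, H, W); resize to `width` while keeping the aspect ratio.
    _, _, h, w = images.shape
    height = int(h * width / w)
    return F.interpolate(images, size=(height, width), mode="bilinear", align_corners=False)

batch = torch.rand(4, 3, 128, 128)          # stand-in batch of generated images
print(upscale_to_width(batch, 1024).shape)  # torch.Size([4, 3, 1024, 1024])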


License

This project is open-sourced and available to everyone under the MIT License.


Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request if you find any bugs or have suggestions for improvements.
