ChristophReich1996 / Multi-StyleGAN

Official and maintained implementation of the paper "Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy" [MICCAI 2021].

Home Page: https://arxiv.org/abs/2106.08285


Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy


Christoph Reich*, Tim Prangemeier*, Christian Wildner & Heinz Koeppl
*Christoph Reich and Tim Prangemeier - both authors contributed equally


This repository includes the official and maintained PyTorch implementation of the paper Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy.

Abstract

Time-lapse fluorescent microscopy (TLFM) combined with predictive mathematical modelling is a powerful tool to study the inherently dynamic processes of life on the single-cell level. Such experiments are costly, complex and labour intensive. A complementary approach and a step towards completely in silico experiments, is to synthesise the imagery itself. Here, we propose Multi-StyleGAN as a descriptive approach to simulate time-lapse fluorescence microscopy imagery of living cells, based on a past experiment. This novel generative adversarial network synthesises a multi-domain sequence of consecutive timesteps. We showcase Multi-StyleGAN on imagery of multiple live yeast cells in microstructured environments and train on a dataset recorded in our laboratory. The simulation captures underlying biophysical factors and time dependencies, such as cell morphology, growth, physical interactions, as well as the intensity of a fluorescent reporter protein. An immediate application is to generate additional training and validation data for feature extraction algorithms or to aid and expedite development of advanced experimental techniques such as online monitoring or control of cells.

If you find this research useful in your work, please cite our paper:

@inproceedings{Reich2021,
    title={{Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy}},
    author={Reich, Christoph and Prangemeier, Tim and Wildner, Christian and Koeppl, Heinz},
    booktitle={{International Conference on Medical Image Computing and Computer-Assisted Intervention}},
    year={2021},
    organization={Springer}
}

Method

Figure 1. Architecture of Multi-StyleGAN. The style mapping network (in purple) transforms the input noise vector z into a latent vector w, which in turn is incorporated into each stage of the generator by three dual-styled-convolutional blocks. The generator predicts a sequence of three consecutive images for both the brightfield and green fluorescent protein channels. The U-Net discriminator [2] distinguishes between real and fake input sequences by making both a scalar and a pixel-wise real/fake prediction. Standard residual discriminator blocks are shown in gray and non-local blocks in blue.

Figure 2. Dual-styled-convolutional block of the Multi-StyleGAN. The incoming latent vector w is transformed into the style vector s by a linear layer. This style vector modulates (mod) the convolutional weights wb and wg, which are optionally demodulated (demod) before convolving the (optionally bilinearly upsampled) incoming features of the previous block. Learnable biases (bb and bg) and channel-wise Gaussian noise (n), scaled by learnable constants (cb and cg), are added to the features. The final output features are obtained by applying a leaky ReLU activation.
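
To make the dual-styled-convolutional block concrete, below is a minimal PyTorch sketch of the modulate/demodulate mechanism described above. It is an illustrative approximation, not the repository's actual implementation: the class name, tensor shapes, and the sharing of a single noise tensor between both output domains are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStyledConvBlock(nn.Module):
    # Sketch: two weight tensors (brightfield and GFP) are modulated by the
    # same style vector s, optionally demodulated, and convolved with the
    # shared input features. Names and shapes are illustrative.

    def __init__(self, in_channels, out_channels, latent_dim, kernel_size=3):
        super().__init__()
        self.to_style = nn.Linear(latent_dim, in_channels)  # w -> s
        self.weight_bf = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.weight_gfp = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.bias_bf = nn.Parameter(torch.zeros(out_channels))   # bb
        self.bias_gfp = nn.Parameter(torch.zeros(out_channels))  # bg
        self.noise_scale_bf = nn.Parameter(torch.zeros(1))       # cb
        self.noise_scale_gfp = nn.Parameter(torch.zeros(1))      # cg

    def _mod_conv(self, features, weight, style, demodulate=True):
        # Modulate: scale the input channels of the weights by the style vector.
        w = weight.unsqueeze(0) * style.view(style.size(0), 1, -1, 1, 1)
        if demodulate:
            # Demodulate: normalize each output filter to unit L2 norm.
            w = w * torch.rsqrt((w ** 2).sum(dim=(2, 3, 4), keepdim=True) + 1e-8)
        # Grouped convolution trick: fold the batch into the channel dimension
        # so each sample is convolved with its own modulated weights.
        b, _, height, width = features.shape
        features = features.reshape(1, -1, height, width)
        w = w.reshape(-1, *w.shape[2:])
        out = F.conv2d(features, w, padding=w.shape[-1] // 2, groups=b)
        return out.reshape(b, -1, height, width)

    def forward(self, features, w_latent, upsample=False):
        if upsample:
            # Optional bilinear upsampling of the incoming features.
            features = F.interpolate(features, scale_factor=2, mode="bilinear", align_corners=False)
        style = self.to_style(w_latent)
        out_bf = self._mod_conv(features, self.weight_bf, style)
        out_gfp = self._mod_conv(features, self.weight_gfp, style)
        # Gaussian noise, scaled per output domain by a learnable constant.
        noise = torch.randn(out_bf.size(0), 1, out_bf.size(2), out_bf.size(3), device=out_bf.device)
        out_bf = F.leaky_relu(out_bf + self.bias_bf.view(1, -1, 1, 1) + self.noise_scale_bf * noise, 0.2)
        out_gfp = F.leaky_relu(out_gfp + self.bias_gfp.view(1, -1, 1, 1) + self.noise_scale_gfp * noise, 0.2)
        return out_bf, out_gfp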

Results

Figure 3. Samples generated by Multi-StyleGAN. Brightfield channel on the top and green fluorescent protein on the bottom.

Table 1. Evaluation metrics for Multi-StyleGAN and baselines (lower is better for both FID and FVD).

| Model | FID (BF) | FVD (BF) | FID (GFP) | FVD (GFP) |
| --- | --- | --- | --- | --- |
| Multi-StyleGAN | 33.3687 | 4.4632 | 207.8409 | 30.1650 |
| StyleGAN 2 3d + ADA + U-Net dis. | 200.5408 | 45.6296 | 224.7860 | 35.2169 |
| StyleGAN 2 + ADA + U-Net dis. | 76.0344 | 14.7509 | 298.7545 | 31.4771 |

Dependencies

All required Python packages can be installed by:

pip install -r requirements.txt

To install the necessary custom CUDA extensions adapted from StyleGAN 2 [1] run:

cd multi_stylegan/op_static
python setup.py install

The code is tested with PyTorch 1.8.1 and CUDA 11.1 on Ubuntu with Python 3.6! Using other PyTorch and CUDA versions newer than PyTorch 1.7.0 and CUDA 10.1 should also be possible. Please note that a different PyTorch version may require a different version of Kornia or Torchvision.
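
As a quick sanity check of the environment (a minimal sketch, not part of the repository):

import torch

# Tested configuration: PyTorch 1.8.1, CUDA 11.1, Python 3.6.
print(torch.__version__)          # should be >= 1.7.0
print(torch.version.cuda)         # should be >= 10.1
print(torch.cuda.is_available())  # should be True for GPU training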

Data

Our proposed time-lapse fluorescent microscopy dataset is available at this URL.

The dataset includes 9696 images structured in sequences of both brightfield and green fluorescent protein (GFP) channels at a resolution of 256 × 256. Data loader classes can be found in the Python package dataset.
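
As a hedged illustration, wrapping such a data loader class in a standard PyTorch DataLoader might look like the following; the class name and constructor signature are placeholders, since the actual names are defined in the dataset package:

from torch.utils.data import DataLoader

from dataset import TLFMDataset  # hypothetical class name

# Hypothetical constructor; sequences of brightfield and GFP images at 256 x 256.
training_data = TLFMDataset(path="./60x_10BF_200GFP_200RFP20_3Z_10min")
data_loader = DataLoader(training_data, batch_size=24, shuffle=True)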

Trained Model

The checkpoint of our trained Multi-StyleGAN is available at this url.

The checkpoint (PyTorch state dict) includes the EMA generator weights ("generator_ema"), the generator weights ("generator"), the generator optimizer state ("generator_optimizer"), the discriminator weights ("discriminator"), the discriminator optimizer state ("discriminator_optimizer"), and the path-length regularization states ("path_length_regularization").
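
Loading the checkpoint and accessing its entries follows standard PyTorch usage (the keys are those listed above; instantiating the generator itself depends on the repository's model classes):

import torch

# Load the checkpoint (PyTorch state dict) on the CPU.
checkpoint = torch.load("checkpoint_100.pt", map_location="cpu")

# For inference, the EMA generator weights are typically the ones to use.
generator_ema_state = checkpoint["generator_ema"]
# generator.load_state_dict(generator_ema_state)  # `generator` must be built from the repo's model classes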

Usage

To train Multi-StyleGAN in the proposed setting, run the following command:

 python -W ignore train_gan.py --cuda_devices "0, 1, 2, 3" --data_parallel --path_to_data "60x_10BF_200GFP_200RFP20_3Z_10min"

Dataset path and CUDA devices may differ on other systems! To perform training runs with different settings, use the command line arguments of the train_gan.py script, which are listed below:

| Argument | Default value | Info |
| --- | --- | --- |
| --cuda_devices (str) | "0, 1, 2, 3" | String of CUDA device indexes to be used. |
| --batch_size (int) | 24 | Batch size to be utilized while training. |
| --data_parallel (binary flag) | False | If set, data parallelism is utilized. |
| --epochs (int) | 100 | Number of epochs to perform while training. |
| --lr_generator (float) | 2e-04 | Learning rate of the generator network. |
| --lr_discriminator (float) | 6e-04 | Learning rate of the discriminator network. |
| --path_to_data (str) | "./60x_10BF_200GFP_200RFP20_3Z_10min" | Path to dataset. |
| --load_checkpoint (str) | "" | Path to checkpoint to be loaded. If "", no loading is performed. |
| --resume_training (binary flag) | False | If set, training is resumed, and thus CutMix augmentation/regularization and wrong-order augmentation are used. |
| --no_top_k (binary flag) | False | If set, no top-k training is utilized. |
| --no_ada (binary flag) | False | If set, no adaptive discriminator augmentation is utilized. |
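
For example, to resume a training run from a checkpoint on a single GPU with a smaller batch size, a command along these lines could be used (paths and values are illustrative):

 python -W ignore train_gan.py --cuda_devices "0" --batch_size 8 --load_checkpoint "checkpoint_50.pt" --resume_training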

To generate samples with the trained Multi-StyleGAN, use the get_gan_samples.py script.

 python -W ignore scripts/get_gan_samples.py --cuda_devices "0" --load_checkpoint "checkpoint_100.pt"

This script takes the following command line arguments:

| Argument | Default value | Info |
| --- | --- | --- |
| --cuda_devices (str) | "0" | String of CUDA device indexes to be used. |
| --samples (int) | 100 | Number of samples to be generated. |
| --load_checkpoint (str) | "checkpoint_100.pt" | Path to checkpoint to be loaded. |

To generate a latent space interpolation, use the gan_latent_space_interpolation.py script. Producing the final .mp4 video requires ffmpeg.

 python -W ignore scripts/gan_latent_space_interpolation.py --cuda_devices "0" --load_checkpoint "checkpoint_100.pt"

This script takes the following command line arguments:

| Argument | Default value | Info |
| --- | --- | --- |
| --cuda_devices (str) | "0" | String of CUDA device indexes to be used. |
| --load_checkpoint (str) | "checkpoint_100.pt" | Path to checkpoint to be loaded. |
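
Conceptually, the script decodes a path between latent codes into video frames. A minimal sketch of linear interpolation between two latent vectors (the script's actual interpolation scheme, the latent dimensionality of 512, and the generator call are assumptions):

import torch

# Sample two latent codes; the dimensionality 512 is an assumption.
z_start = torch.randn(1, 512)
z_end = torch.randn(1, 512)

# Decode a sequence of interpolated codes into frames.
for alpha in torch.linspace(0.0, 1.0, steps=60):
    z = (1.0 - alpha) * z_start + alpha * z_end
    # frames = generator(z)  # hypothetical call to a trained (EMA) generator
    # ...save each frame, then encode the frame sequence to .mp4 with ffmpeg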

Acknowledgements

We thank Markus Baier for aid with the computational setup, Klaus-Dieter Voss for aid with the microfluidics fabrication, and Tim Kircher, Tizian Dege, and Florian Schwald for aid with the data preparation.

We also thank piergiaj for providing a PyTorch I3D implementation and trained models, which we used to compute the FVD score. The code used is indicated and is available under its original license.

References

[1] @inproceedings{Karras2020,
    title={{Analyzing and Improving the Image Quality of StyleGAN}},
    author={Karras, Tero and Laine, Samuli and Aittala, Miika and Hellsten, Janne and Lehtinen, Jaakko and Aila, Timo},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={8110--8119},
    year={2020}
}
[2] @inproceedings{Schonfeld2020,
    title={{A U-Net Based Discriminator for Generative Adversarial Networks}},
    author={Schonfeld, Edgar and Schiele, Bernt and Khoreva, Anna},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={8207--8216},
    year={2020}
}


License

This project is licensed under the MIT License.

