This repo is the official code for **HiNet: Deep Image Hiding by Invertible Network**, published at ICCV 2021. By MC2 Lab @ Beihang University.
- Python 3 (Anaconda is recommended).
- PyTorch = 1.0.1.
- See `environment.yml` for other dependencies.
- Run `python train.py` for training.
- Run `python test.py` for testing.
- Set the model path (where the trained model is saved) and the image path (where images are saved during testing) to your local paths:
  - line45: `MODEL_PATH = ''`
  - line49: `IMAGE_PATH = ''`
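For reference, the two path settings might look like this once filled in. The `MODEL_PATH` value reuses the example path given later in this README; the `IMAGE_PATH` value is a hypothetical example, not a value from the repo:

```python
# Hypothetical example values; replace both with your own local paths.
MODEL_PATH = '/home/usrname/Hinet/model/'   # directory where the trained model is saved
IMAGE_PATH = '/home/usrname/Hinet/image/'   # directory where images are saved during testing
```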
- In this paper, we use the commonly used datasets DIV2K, COCO, and ImageNet.
- To train or test on your own dataset, change the paths in `config.py`:
  - line30: `TRAIN_PATH = ''`
  - line31: `VAL_PATH = ''`
- Here we provide a trained model.
- Fill in the `MODEL_PATH` and the file name `suffix` before testing with the trained model. For example, if the model name is `model.pt` and its path is `/home/usrname/Hinet/model/`, set `MODEL_PATH = '/home/usrname/Hinet/model/'` and `suffix = 'model.pt'`.
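Concretely, the checkpoint file the test script ends up reading is just the concatenation of the two settings (a sketch of the convention implied above; the repo's own loading code may differ in detail):

```python
# Example values taken from the README text above.
MODEL_PATH = '/home/usrname/Hinet/model/'  # directory part
suffix = 'model.pt'                        # file name part

# The full checkpoint path is the directory plus the file name.
checkpoint_path = MODEL_PATH + suffix
```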
- Here we provide a training demo to show how to train a converged model in the early training stage. During this process, the loss may explode. Our solution is to stop training at a healthy checkpoint, reduce the learning rate, and then continue training.
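The stop-and-restart strategy above can be sketched as follows (a hypothetical helper, not the repo's actual training loop; the explosion threshold and the size of the learning-rate cut are illustrative):

```python
def handle_explosion(losses, lr, explosion_threshold=100.0):
    """Scan per-epoch losses; on explosion, report the last healthy epoch
    to resume from and a reduced learning rate (here cut by a factor of 10**0.5,
    mirroring the 10^-4.5 -> 10^-5.0 step used in the demo below)."""
    last_good_epoch = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss > explosion_threshold:       # loss explosion detected
            return last_good_epoch, lr / 10 ** 0.5
        last_good_epoch = epoch
    return last_good_epoch, lr               # no explosion: keep the current lr

# e.g. an explosion at epoch 3 -> resume from epoch 2 with a smaller lr
resume_epoch, new_lr = handle_explosion([1.0, 0.5, 500.0], lr=10 ** -4.5)
```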
- Note that, in order to log the training process, we import the `logging` package and use slightly modified `train_logging.py` and `util.py` files.
- Stage 1: Run `python train_logging.py` for training with the initial `config.py` (learning rate = 10^-4.5). The logging file is `train__211222-183515.log`. (The values of r_loss and g_loss are swapped due to a small bug, which is fixed in Stage 2.)

  See the tensorboard:

  Note that the model exploded at epoch 507, so we stop Stage 1 at epoch 500.
- Stage 2: Set `suffix = 'model_checkpoint_00500.pt'`, `train_next = True`, and `trained_epoch = 500`. Change the learning rate from 10^-4.5 to 10^-5.0. Run `python train_logging.py` for training. The logging file is `train__211223-100502.log`.

  See the tensorboard:

  Note that the model exploded at epoch 1692, so we stop Stage 2 at epoch 1690.
- Stage 3: Similar operation. Change the learning rate from 10^-5.0 to 10^-5.2. The logging file is `train__211224-105010.log`.

  See the tensorboard:

  We can see that the network has initially converged. You can then tune the hyperparameter `lamda` according to the PSNR to balance the quality of the stego image and the recovered image. Note that the PSNR in the tensorboard is RGB-PSNR, while the PSNR in our paper is Y-PSNR.
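For reference, the three staged learning rates written out numerically (the log10 exponents are from the stages above; the decimal approximations are mine):

```python
# Staged learning-rate schedule used in the training demo above.
stage_lrs = {
    1: 10 ** -4.5,  # ~3.16e-05
    2: 10 ** -5.0,  #  1.00e-05
    3: 10 ** -5.2,  # ~6.31e-06
}
```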
- The `batchsize_val` in `config.py` should be at least 2 × the number of GPUs, and it should be divisible by the number of GPUs.
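A quick sanity check for this constraint (a hypothetical helper; `n_gpus` stands for however many GPUs you actually run on):

```python
def valid_batchsize_val(batchsize_val, n_gpus):
    """Check both README constraints: at least 2 per GPU, and evenly divisible."""
    return batchsize_val >= 2 * n_gpus and batchsize_val % n_gpus == 0

# e.g. a batchsize_val of 8 across 4 GPUs is fine; 6 across 4 is not
```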
If you find our paper or code useful for your research, please cite:
```
@InProceedings{Jing_2021_ICCV,
    author    = {Jing, Junpeng and Deng, Xin and Xu, Mai and Wang, Jianyi and Guan, Zhenyu},
    title     = {HiNet: Deep Image Hiding by Invertible Network},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {4733-4742}
}
```