richzhang / PerceptualSimilarity

LPIPS metric. pip install lpips

Home Page: https://richzhang.github.io/PerceptualSimilarity

Why do I get different results when running the same data multiple times?

woaiwojia4816294 opened this issue · comments

import os

import lpips
import numpy as np
import torch
from PIL import Image
from tqdm import tqdm

loss_fn_alex = lpips.LPIPS(net='alex').cuda()

dataPath = r'D:\PycharmProjects\paper_instrument\ImageQualityAssessment\hdrnet'
groundTruthPath = r'D:\PycharmProjects\paper_instrument\ImageQualityAssessment\groundtruth'
assert len(os.listdir(dataPath)) == len(os.listdir(groundTruthPath))

LPIPS = 0.0
with torch.no_grad():  # no gradients needed for metric evaluation
    # Sort both listings so each image is paired with its ground truth;
    # os.listdir returns entries in arbitrary order.
    for x_name, y_name in tqdm(zip(sorted(os.listdir(dataPath)), sorted(os.listdir(groundTruthPath)))):
        # Scale images from [0, 255] to [-1, 1], the range LPIPS expects.
        x = (np.array(Image.open(os.path.join(dataPath, x_name))).transpose(2, 0, 1).astype(np.float32) / 255) * 2 - 1
        y = (np.array(Image.open(os.path.join(groundTruthPath, y_name))).transpose(2, 0, 1).astype(np.float32) / 255) * 2 - 1
        x = torch.from_numpy(x).unsqueeze(0).cuda()
        y = torch.from_numpy(y).unsqueeze(0).cuda()
        LPIPS += loss_fn_alex(x, y).item()  # .item() extracts a Python float

avgLPIPS = LPIPS / len(os.listdir(dataPath))
print(avgLPIPS)

This is probably because dropout is enabled by default. Try creating the metric with loss_fn_alex = lpips.LPIPS(net='alex', use_dropout=False).cuda()

Also double-check that the model is in eval mode by calling loss_fn_alex.eval()
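The effect of dropout on repeatability can be seen with a toy sketch in plain Python (a hypothetical dropout function, not the LPIPS implementation): in training mode the mask is random, so repeated calls on the same input differ; in eval mode the layer is an identity, so repeated runs match.

```python
import random

def dropout(x, p=0.5, training=True):
    # Toy dropout: during training, zero each element with probability p
    # and rescale survivors by 1/(1-p); in eval mode, pass input through.
    if not training:
        return list(x)
    scale = 1.0 / (1.0 - p)
    return [0.0 if random.random() < p else v * scale for v in x]

x = [1.0, 2.0, 3.0, 4.0]

# Training mode: two runs on identical input can produce different outputs.
a = dropout(x, training=True)
b = dropout(x, training=True)

# Eval mode: output is deterministic and equals the input.
assert dropout(x, training=False) == x
```

The same logic applies to a metric network: as long as any dropout layer is active, two evaluations on identical data can yield different scores.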

I still see the same behaviour even after creating loss_fn_alex with use_dropout=False and calling loss_fn_alex.eval().
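If results still vary with dropout disabled and the model in eval mode, a common remaining cause (outside LPIPS itself) is nondeterministic CUDA/cuDNN kernel selection. A configuration sketch using PyTorch's standard reproducibility flags, not specific to this repo:

```python
import torch

# Seed all RNGs so any remaining random ops are reproducible.
torch.manual_seed(0)

# Ask cuDNN to use deterministic kernels, and disable the autotuner,
# which can otherwise pick a different algorithm on each run.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```

Set these once at the top of the script, before constructing the LPIPS model or running any forward passes.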