advimman / lama

🦙 LaMa Image Inpainting: Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022

Home Page: https://advimman.github.io/lama-project/

a minimal inference script would be really nice

alpercanberk opened this issue · comments

You could consider this approach:

  1. Convert the model from ckpt to pt (#306); a rough sketch follows this list.
  2. Load the .pt model and run inference; sample code is below.
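
For step 1, a minimal sketch of what the export could look like, assuming the checkpoint has already been loaded into a `generator` variable by the repo's own loading code and that the generator accepts a single 4-channel input (masked RGB image concatenated with the mask); the exact export script lives in #306:

import torch

# Assumption: `generator` is the LaMa generator network restored from the training ckpt.
generator.eval()

# Dummy 4-channel input (3 image channels + 1 mask channel) used only for tracing;
# the spatial size here is just an example.
example = torch.rand(1, 4, 256, 256)

traced = torch.jit.trace(generator, example)
traced.save('lama-model-last.pt')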

import os

import cv2
import torch

default_lama_path = '/LaMa_models/lama-places/lama-fourier/lama-model-last.pt'
default_lama = torch.jit.load(default_lama_path, map_location=torch.device('cpu'))
default_lama.eval()

in_img_path = ''
in_mask_path = ''
out_dir = ''

# Load the image (BGR -> RGB) and the mask as a single channel; scale both to [0, 1].
in_img = cv2.imread(in_img_path)[:, :, ::-1] / 255.0
in_mask = cv2.imread(in_mask_path, cv2.IMREAD_GRAYSCALE)[:, :, None] / 255.0

# Convert to NCHW float tensors.
in_img = torch.from_numpy(in_img).unsqueeze(0).permute(0, 3, 1, 2).float()
in_mask = torch.from_numpy(in_mask).unsqueeze(0).permute(0, 3, 1, 2).float()

# Zero out the masked region and feed image + mask as a 4-channel input.
in_img = in_img * (1 - in_mask)
with torch.no_grad():
    default_input = torch.cat([in_img, in_mask], dim=1)
    default_lama_result = default_lama(default_input)

# Back to HWC, RGB -> BGR, and save as an 8-bit image.
default_lama_result = default_lama_result.permute(0, 2, 3, 1).squeeze(0).numpy()
out_img = (default_lama_result[:, :, ::-1] * 255.0).clip(0, 255).astype('uint8')
cv2.imwrite(os.path.join(out_dir, 'default_lama.png'), out_img)
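
One extra caveat (an assumption about the network, not something stated above): the generator downsamples the input several times, so inference tends to be more robust when the image and mask are padded so that height and width are multiples of 8, with the result cropped back to the original size afterwards. A possible helper:

import numpy as np

def pad_to_modulo(arr, mod=8):
    # Pad the bottom/right of an HxW or HxWxC array up to the next multiple of `mod`.
    h, w = arr.shape[:2]
    pad_h = (mod - h % mod) % mod
    pad_w = (mod - w % mod) % mod
    pad_width = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (arr.ndim - 2)
    return np.pad(arr, pad_width, mode='symmetric')

This would be applied to in_img and in_mask right after loading (before converting to tensors), and default_lama_result cropped back to the original height and width before saving.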