openai / CLIP

CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image



Multi-thread usage of open_clip

mobines96 opened this issue · comments

Hi, I am using open_clip with a pretrained model in my project to compare image similarity, but I can only run it in one thread. I have not found any way to run it as a multi-threaded task. Is there a solution for this?

I think the preprocess variable created with the code below is what causes the problem:

preprocess_val = image_transform_v2(
        pp_cfg,
        is_train=False,
    )
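
For reference, this is roughly the workaround I am considering: building the transform lazily per thread with threading.local(), using the same image_transform_v2 / pp_cfg call as above. This is only a rough, untested sketch; get_preprocess and _tls are illustrative names, not part of open_clip.

import threading

# One transform per thread; assumes pp_cfg is the same config used above.
_tls = threading.local()

def get_preprocess():
    # Lazily build a thread-local copy of the validation transform so the
    # shared preprocess object is never used from two threads at once.
    if not hasattr(_tls, "preprocess"):
        _tls.preprocess = image_transform_v2(pp_cfg, is_train=False)
    return _tls.preprocess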

The whole code:

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-16-plus-240', pretrained="laion400m_e32")
model.to(device)


def imageEncoder(img):
    img1 = Image.fromarray(img).convert('RGB')
    img1 = preprocess(img1).unsqueeze(0).to(device)
    img1 = model.encode_image(img1)
    return img1


def generateScore(image1, image2):
    test_img = numpy.array(image1)
    data_img = numpy.array(image2)
    img1 = imageEncoder(test_img)
    img2 = imageEncoder(data_img)
    cos_scores = util.pytorch_cos_sim(img1, img2)
    score = round(float(cos_scores[0][0]) * 100, 2)
    return score

score = generateScore(pil_image1, pil_image2)
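
In case it helps, this is roughly how I would like to call it from several threads. It is only a sketch of the intended usage, assuming a lock around the shared model is acceptable; safeScore and pairs are illustrative names, not part of open_clip.

import threading
from concurrent.futures import ThreadPoolExecutor

# Serialize access to the shared model/preprocess; threads then only overlap
# on their own Python-side work (image loading, numpy conversion, etc.).
_model_lock = threading.Lock()

def safeScore(image1, image2):
    with _model_lock:
        return generateScore(image1, image2)

# pairs is assumed to be a list of (pil_image1, pil_image2) tuples.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(lambda p: safeScore(*p), pairs))

With the lock this obviously does not give real parallel inference, which is why I am asking whether there is a proper multi-threaded way to use open_clip.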