tgxs002 / HPSv2

Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis


Inference speed is too slow; how can we optimize and accelerate it?

SkylerZheng opened this issue · comments

Hi, I ran HPSv2 on an A100 (single GPU), and it takes about 21-23 seconds per run. Is there any way to improve the latency?
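One common cause of multi-second per-run latency (not confirmed from this report, just a frequent culprit) is reloading the model checkpoint on every scoring call instead of loading it once per process. Below is a minimal sketch of the load-once pattern; `load_model`, `get_model`, and `score` are hypothetical stand-ins, not the HPSv2 API, and the `time.sleep` simulates a slow checkpoint load:

```python
import time
from functools import lru_cache

LOAD_COUNT = 0  # tracks how many times the (simulated) checkpoint is loaded


def load_model():
    """Hypothetical stand-in for loading the HPSv2 checkpoint.

    The real model is a large CLIP-style network; loading its weights on
    every call is the usual source of multi-second per-run latency.
    """
    global LOAD_COUNT
    LOAD_COUNT += 1
    time.sleep(0.1)  # simulate a slow checkpoint load
    return object()  # placeholder for the model object


@lru_cache(maxsize=1)
def get_model():
    # Cache the model so it is loaded once per process, not once per score.
    return load_model()


def score(image_path: str, prompt: str) -> float:
    model = get_model()  # reused across calls after the first
    # Placeholder scoring; the real implementation runs a forward pass.
    return 0.0


start = time.perf_counter()
for i in range(5):
    score(f"img_{i}.png", "a photo of a cat")
elapsed = time.perf_counter() - start
print(LOAD_COUNT)  # the model is loaded once despite five scoring calls
```

If the timing script already keeps the model resident and latency is still high, the bottleneck is more likely in image preprocessing or the forward pass itself (e.g. batching images instead of scoring one at a time, or using half precision), which is worth checking with a per-stage timer before and after the model call.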

Do you have a minimal script to reproduce the issue?