Large-scale model inference.
Yummy813 opened this issue 10 months ago · comments
When I run the OPT-125M example, why does inference start but then hang at the `async def wait(self, uid: Hashable) -> Any:` stage in the engine? My environment was set up with the provided Docker image.
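For context on the symptom: a `wait(uid)` coroutine like the one quoted above typically awaits a future keyed by the request uid, and it will block forever if the worker process never resolves that future (for example, if the worker crashed or stalled inside the container). The sketch below is a minimal, hypothetical illustration of that pattern, not the engine's actual implementation; the `Engine`, `submit`, and `complete` names are assumptions made up for this example.

```python
import asyncio
from typing import Any, Hashable


class Engine:
    """Hypothetical sketch: an engine that hands out futures keyed by request uid."""

    def __init__(self) -> None:
        self._futures: dict[Hashable, asyncio.Future] = {}

    def submit(self, uid: Hashable) -> None:
        # Register a future that a worker is expected to resolve later.
        self._futures[uid] = asyncio.get_running_loop().create_future()

    def complete(self, uid: Hashable, result: Any) -> None:
        # Called when inference finishes for this uid.
        self._futures[uid].set_result(result)

    async def wait(self, uid: Hashable) -> Any:
        # Suspends until the future is resolved. If complete() is never
        # called (e.g. the worker died), this await never returns --
        # which would look exactly like hanging at the wait() stage.
        return await self._futures.pop(uid)


async def main() -> None:
    engine = Engine()
    engine.submit("req-1")
    engine.complete("req-1", "output tokens")  # without this call, wait() hangs
    print(await engine.wait("req-1"))


asyncio.run(main())
```

If this matches the engine's design, a hang at `wait` usually means the response for that uid was never produced, so checking the worker-side logs inside the container may reveal the real failure.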