dottxt-ai / outlines

Structured Text Generation

Home Page: https://dottxt-ai.github.io/outlines/


Outlines' cache is not reusable across vllm restarts

Lap1n opened this issue · comments

Describe the issue as clearly as possible:

When using vllm with Outlines and running it from a VM, the diskcache functionality does not seem to work correctly. Every time the server is started, it fails to reuse the previously computed FSM cache.

One way to fix this issue is to serialize the cache key object as a string.
The changes can be found in the PR that I submitted.
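A minimal sketch of the string-key idea (this is illustrative code, not the actual Outlines or PR implementation; the function name `make_cache_key` and the example arguments are assumptions): instead of keying the on-disk cache by the raw argument objects, the arguments are serialized into a canonical string, which is byte-for-byte identical across interpreter sessions.

```python
import json

def make_cache_key(fn_name, args, kwargs):
    """Build a deterministic string key from a function name and its arguments.

    repr() of primitives and strings is stable across Python sessions,
    unlike default object hashes or pickled class instances, so two
    separate server startups compute the same key for the same call.
    """
    payload = {
        "fn": fn_name,
        "args": [repr(a) for a in args],
        "kwargs": {k: repr(v) for k, v in sorted(kwargs.items())},
    }
    return json.dumps(payload, sort_keys=True)

# The same logical call always yields the same key, even in a new process:
key1 = make_cache_key("create_fsm_index", ("[0-9]+",), {"vocab_size": 32000})
key2 = make_cache_key("create_fsm_index", ("[0-9]+",), {"vocab_size": 32000})
assert key1 == key2
```

A string key like this can be handed directly to a `diskcache.Cache` lookup, so a restarted server finds the entry written by the previous one.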

Steps/code to reproduce the bug:

- Start vllm server
- send a request
- FSM computation happens
- Stop and relaunch the server
- send a request
- FSM computation still happens

Expected result:

- Start vllm server
- send a request
- FSM computation happens
- Stop and relaunch the server
- send a request
- FSM computation does not happen, as the result is already in the cache

Error message:

No response

Outlines/Python version information:

Version information

Latest from main.

Context for the issue:

No response

The source of this issue appears to be vllm's use of `outlines.cache` on functions that are ultimately used as class methods. Those functions include class-type instances in their signatures, and that affects caching: equality doesn't necessarily hold for such instances after they are deserialized in a new Python session, so the cache key from one session never matches the key computed in the next.
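The failure mode above can be demonstrated in miniature (the `Tokenizer` class here is a hypothetical stand-in, far simpler than the real vllm classes): a class without `__eq__`/`__hash__` falls back to identity-based comparison, so an instance reconstructed in a fresh session never equals the one that produced the cached entry, while a key built from the instance's serializable contents does match.

```python
class Tokenizer:
    """Illustrative stand-in for a class used in a cached function's signature."""
    def __init__(self, vocab):
        self.vocab = vocab

a = Tokenizer({"a": 0})
b = Tokenizer({"a": 0})  # what a new session would reconstruct from disk

# Default identity equality: logically identical instances never compare
# equal, so any cache key derived from them misses on restart.
assert a != b

# Keying on a stable serialization of the instance's contents instead
# makes the lookup succeed across sessions:
key_a = ("Tokenizer", tuple(sorted(a.vocab.items())))
key_b = ("Tokenizer", tuple(sorted(b.vocab.items())))
assert key_a == key_b
```

This is why serializing the cache key to a string (or any other value-based representation) restores cache hits across server restarts.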

See #1145.