jxnl / instructor

structured outputs for llms

Home Page: https://python.useinstructor.com/

Manage shots for better accuracy

geraud-recarta opened this issue

Is your feature request related to a problem? Please describe.
To improve accuracy, I'd love to send a few shots (known pairs of {content, extracted object}) to help the LLM "understand" what is expected of it.

Describe the solution you'd like

shots = [[file1, object1], [file2, object2]]
client.chat.completions.create(
    model="gpt-4o",
    response_model=class_name,
    messages=[
        {
            "role": "system",
            "content": "my_system_desc",
        },
        {
            "role": "user",
            "content": f"Extract the corresponding data in the following document: {doc}",
        },
    ],
    shots=shots,
)

Describe alternatives you've considered
Iterating back and forth on the Pydantic model until a good accuracy is reached.

Additional context
LangChain and DSPy have this kind of feature (in case that helps with the implementation).

Thanks for the good work!!

You can just format a string.
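In other words, serialize the known (content, extracted object) pairs into the prompt yourself instead of passing a dedicated shots parameter. A minimal sketch of that idea, assuming a recent instructor release that provides instructor.from_openai; the Invoice model and the example pairs below are made up purely for illustration:

import instructor
from openai import OpenAI
from pydantic import BaseModel

# Hypothetical response model for illustration.
class Invoice(BaseModel):
    vendor: str
    total: float

# Hypothetical few-shot pairs: (raw document text, expected extraction).
few_shots = [
    ("ACME Corp invoice, amount due $120.50", Invoice(vendor="ACME Corp", total=120.50)),
    ("Bill from Globex, total: $89.99", Invoice(vendor="Globex", total=89.99)),
]

# Format the pairs into a plain-text block that goes into the system prompt.
shot_text = "\n\n".join(
    f"Document:\n{content}\nExtraction:\n{obj.model_dump_json()}"
    for content, obj in few_shots
)

client = instructor.from_openai(OpenAI())

doc = "Invoice from Initech for $42.00"
invoice = client.chat.completions.create(
    model="gpt-4o",
    response_model=Invoice,
    messages=[
        {"role": "system", "content": f"my_system_desc\n\nExamples:\n{shot_text}"},
        {"role": "user", "content": f"Extract the corresponding data in the following document: {doc}"},
    ],
)
print(invoice)

The same pairs could also be appended as alternating user/assistant messages instead of one formatted string; either way, no change to the instructor API is required.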