jxnl / instructor

structured outputs for llms

Home Page: https://python.useinstructor.com/

Any valid Pydantic type should be supported (e.g. TypedDict)

ADR-007 opened this issue

Is your feature request related to a problem? Please describe.
Currently, only Pydantic models are supported as response_model.
But in some cases I want to use a TypedDict instead: for example, when I don't want to do a massive refactoring and just want to add response schema validation to an existing job.

Describe the solution you'd like
I would like to be able to use a TypedDict as response_model. E.g.:

class MyModel(TypedDict):
    my_key: str

result = client.chat.completions.create(
    messages=[...],
    response_model=MyModel,
)

It is very simple to implement in Pydantic v2 with TypeAdapter:

from pydantic import TypeAdapter

adapter = TypeAdapter(MyModel)
json_schema = adapter.json_schema()

# process_llm is a placeholder for whatever call sends the schema to the LLM
result = process_llm(prompt, json_schema, ...)

my_model = adapter.validate_python(result)
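
For what it's worth, the same TypeAdapter mechanism covers any type Pydantic understands (dataclasses, generic containers, etc.), not just TypedDict, which is why the title says "any valid Pydantic type". A standalone sketch, independent of instructor, where Point is just a made-up example type:

from dataclasses import dataclass
from typing import List

from pydantic import TypeAdapter


@dataclass
class Point:
    x: int
    y: int


# A plain dataclass validates straight from a dict...
point_adapter = TypeAdapter(Point)
print(point_adapter.validate_python({"x": 1, "y": 2}))  # Point(x=1, y=2)

# ...and generic containers get a JSON schema too.
points_adapter = TypeAdapter(List[Point])
print(points_adapter.json_schema())  # array schema with Point as the item type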

Happy to take a stab at this in process_response.
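
Roughly the kind of branching I have in mind is sketched below; the helper names prepare_response_model and parse_response are illustrative only, not instructor's actual internals:

from typing import Any, Type

from pydantic import BaseModel, TypeAdapter


def prepare_response_model(response_model: Type[Any]) -> dict:
    # Build a JSON schema whether or not response_model is a BaseModel.
    if isinstance(response_model, type) and issubclass(response_model, BaseModel):
        return response_model.model_json_schema()
    # TypedDict, dataclasses, generic containers, etc. fall back to TypeAdapter.
    return TypeAdapter(response_model).json_schema()


def parse_response(response_model: Type[Any], raw: Any) -> Any:
    # Validate the raw LLM output against response_model.
    if isinstance(response_model, type) and issubclass(response_model, BaseModel):
        return response_model.model_validate(raw)
    return TypeAdapter(response_model).validate_python(raw)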

@ADR-007 I just pushed up a PR that introduces this. Is this what you had in mind for your use case?

from typing_extensions import TypedDict
from openai import OpenAI
import instructor


class User(TypedDict):
    name: str
    age: int


# Patch the OpenAI client so create() accepts response_model
client = instructor.from_openai(OpenAI())

print(
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=User,
        messages=[
            {
                "role": "user",
                "content": "Timothy is a man from New York who is turning 32 this year",
            }
        ],
    )
)

"""
name='Timothy' age=32
"""

@ivanleomk yes, thank you!

Oh, that PR is not yet merged, so I should leave this issue open.

I would really like to have this implemented. It is currently the only thing that blocks me from using this library :(

we're close!

@ADR-007 PR #758 has been merged, so I'm going to close this issue.