jorge-menjivar / unsaged

Open source chat kit engineered for seamless interaction with AI models.

Home Page: https://unsaged.com

Missing OpenAI models

johnbrownlow opened this issue

Several useful OpenAI models are missing from the types, for example gpt-3.5-turbo-16k, as well as the dated models.

Hi @jorge-menjivar, just wondering if there was any update on adding these models?

Hey y'all, this is a really easy patch ~
Just add this block between the models listed in types/ai-models.ts on the master branch:

  'gpt-3.5-turbo-16k': {
    id: 'gpt-3.5-turbo-16k',
    maxLength: 48000,    // input-length warning threshold, in characters
    tokenLimit: 16000,   // context window size, in tokens
    requestLimit: 12000, // max tokens to send in a single request
    vendor: 'OpenAI',
  },

The model will show up in the dropdown menu :)

gpt-3.5-turbo-16k has been added

@jorge-menjivar I think this issue can be closed

@sebiweise I would like to make it possible to see all models, including the versioned ones, but most importantly models with dynamic names, like fine-tuned models. Maybe as an advanced feature that is disabled by default. This would require some moderate changes to our models structure, though, which is why I'm leaving it for later.

You mean like a database table just for available models instead of the types integration?

Yeah, something like that would do it. The other issue is getting the proper max token length for arbitrary models. I know Ollama, at least, doesn't have a way to do this in their endpoints API, so I will need to investigate and open a PR with them (I asked on Discord and got no response, so I'm assuming there is no way). If we were to do this, there would no longer be a need to implement new models manually, unless something substantial is different about the model.

I just removed the PossibleAiModels types integration and am returning every possible AiModel now, but I don't know how to get the correct information for maxLength, tokenLimit, and requestLimit from the OpenAI API. Any ideas?
https://github.com/sebiweise/unSAGED/tree/feature/aimodel_vendor_update

Ollama isn't working either, but there is no endpoint for now, like you said.
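
For reference: the OpenAI list-models endpoint (GET /v1/models) only returns model ids and ownership metadata, not context window sizes, so maxLength/tokenLimit/requestLimit cannot be read from it directly. A minimal sketch of what it exposes, assuming a runtime with a global fetch (Node 18+):

  // List available model ids from the OpenAI API.
  // Note: each entry looks like { id, object, created, owned_by };
  // there is no field for context window size or token limits.
  async function listOpenAiModels(apiKey: string): Promise<string[]> {
    const res = await fetch('https://api.openai.com/v1/models', {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const body = await res.json();
    return body.data.map((m: { id: string }) => m.id);
  }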

We might need to wait for support for this to come from OpenAI, or we might have to come up with a new solution to the problem. For OpenAI models we can parse the model id and infer their base model from it.
Fine-tuned models also include the base name in the id; they are all named something like ft:gpt-3.5-turbo-0613:xxxxxxxxxxxx. Maybe letting the user manually set the max tokens per model in the UI could be another solution in the worst of cases.
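
A minimal sketch of that id-parsing idea; the helper name and the limits table are illustrative assumptions, not code from the repo:

  // Known base-model token limits (values as of this discussion).
  const KNOWN_TOKEN_LIMITS: Record<string, number> = {
    'gpt-3.5-turbo': 4096,
    'gpt-3.5-turbo-16k': 16384,
    'gpt-4': 8192,
  };

  // Infer the token limit for ids like 'gpt-3.5-turbo-0613' or
  // fine-tuned ids like 'ft:gpt-3.5-turbo-0613:my-org:custom:abc123'.
  function inferTokenLimit(modelId: string): number | undefined {
    // Fine-tuned ids are 'ft:<base>:<org>:...', so take the second segment.
    const base = modelId.startsWith('ft:') ? modelId.split(':')[1] : modelId;
    // Strip a trailing date suffix such as '-0613' to reach the base name.
    const undated = base.replace(/-\d{4}$/, '');
    return KNOWN_TOKEN_LIMITS[base] ?? KNOWN_TOKEN_LIMITS[undated];
  }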

@jorge-menjivar What do you think of a new service, hosted by me or us, that has a database of every possible AI model and offers a submit form where users can post newly found AI models and/or vendors? Maybe we could create a cron job that automatically gathers new AI models from known vendors. When the models are saved we could give them a status like "draft" so someone can complete the corresponding model settings.
And then we'd have an endpoint that returns the models including their settings, and also offers some filters so we can decide which vendors or model types (text/image/...) we want to get via the API call.
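
A sketch of the client side of such an endpoint; the route, query parameters, and field names below are assumptions about a service that does not exist yet, not a real API:

  // Shape the community service might return for each model (assumed).
  interface AiModelRecord {
    id: string;
    vendor: string;
    type: 'text' | 'image';
    tokenLimit: number;
    status: 'draft' | 'approved';
  }

  // Fetch models from the hypothetical service, filtered by vendor/type.
  async function fetchCommunityModels(
    vendor?: string,
    type?: string,
  ): Promise<AiModelRecord[]> {
    const params = new URLSearchParams();
    if (vendor) params.set('vendor', vendor);
    if (type) params.set('type', type);
    const res = await fetch(`https://example.com/api/models?${params}`);
    return res.json();
  }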

I like this idea. So basically it's a community database/index for models?

Also check the pre-release app v0.1.1. I made it so that it detects all models and lets you set the correct token window size in the settings.

Yes, so basically the types we have now, just in a database that can be contributed to easily. I would create a new service for that so that we don't have a problem with another endpoint that isn't available in the desktop app.
I would just try to build a simple version to test: a quick Next.js application with Prisma and a free PlanetScale database, Clerk auth for some admin pages, and a publicly available submit form that creates models that need to be checked by an admin.

So then you can add another API call to that service (maybe later on including an API key) and get the "possible AI models" including all settings. You would still need to fetch all the models that the supported vendors in your app offer, but you could get the settings for max_tokens and so on from the other endpoint.
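
A sketch of how the two sides could be glued together, reusing the hypothetical listOpenAiModels and fetchCommunityModels helpers sketched above; everything here is an assumption about the eventual design:

  // Keep only the models the vendor actually serves, with the community
  // service's settings (token limits etc.) attached.
  async function resolveModels(apiKey: string): Promise<AiModelRecord[]> {
    const vendorIds = await listOpenAiModels(apiKey);   // vendor API
    const known = await fetchCommunityModels('OpenAI'); // community service
    return known.filter((m) => vendorIds.includes(m.id));
  }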

Just created a basic Next.js app. For now it just displays the data from the PlanetScale database, but I'm currently working on the submit form and a little admin dashboard: https://ai-services-web.vercel.app/

https://ai-services-web.vercel.app/vendor

https://ai-services-web.vercel.app/model

The submit forms aren't working at the moment:
https://ai-services-web.vercel.app/form/model
https://ai-services-web.vercel.app/form/vendor

@sebiweise That's really awesome. If Vercel gets too slow to handle this, let me know. I have over a dozen dedicated servers and can get you a VPS to run this on at no cost.

I'd be happy to add missing params if you need help

Thank you for your help. I think we can just try to use Vercel for now, and we will just need to keep an eye on the insights/usage of the endpoint. I think the database will need an upgrade later on, but we will see. The data will be cached, so for now I think it's no problem.
I will finish the database schema and the management dashboard tomorrow, so I can invite you and you will be able to edit/add AI models.

@sebiweise Looking great! Do we need to keep the max length and request limit? I removed them from the desktop app because they seemed redundant: max length is just a guess for most models, and request limit could be set to token_limit - 100.

  • Max length is used for the max length of input text so if you go over a certain number of characters while typing, it will show you a warning. This is of course a guess because we cannot compute ahead of time the number of tokens a certain number of characters will use. The rule of thumb for now for this value has been token_limit * 3 or something like that.
  • Request limit is the max number of tokens to send in the request, so if we use token_limit - 100 for its value, we guarantee that the model can return at least 100 tokens in the response.
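
A minimal sketch of those two rules of thumb as code (an assumed helper, not something in the repo):

  // Derive the guessed limits from a model's context window size.
  function deriveLimits(tokenLimit: number) {
    return {
      maxLength: tokenLimit * 3,      // input-length warning threshold, in characters
      requestLimit: tokenLimit - 100, // leave at least 100 tokens for the response
    };
  }

  // e.g. deriveLimits(16000) -> { maxLength: 48000, requestLimit: 15900 }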

Yes, I think we will have to take a look at whether some of the params can be removed or are still needed. I just pushed a working example that uses "AI-Services" to get the possible AI models and the correct params/limits.
#163

Main implementation (maybe we can change some code in the future to reduce the number of HTTP calls made):
https://github.com/jorge-menjivar/unsaged/pull/163/files#diff-b2b8156aafdf2a2697ee5d5d733f9e17a1d4e89734f8b97ab410121180b3baf0