xebia-functional / xef

Building applications with LLMs through composability, in Kotlin, Scala, ...

Home Page: https://xef.ai


Context length exceeded for embeddings

carbaj03 opened this issue

Is this expected behavior?
When I try to load a file that is too big, I would expect it to be split into multiple embeddings and used as context for the current prompt.

import java.io.File

suspend fun main() {
  OpenAI.conversation(createDispatcher(OpenAI.log, Tracker.Default)) {
    val filePath = "examples/kotlin/src/main/resources/documents/huberman.txt"
    val file = File(filePath)

    addContext(file.readText())

    val summary = promptMessage(Prompt("create a summary"))
    println(summary)
  }
}
Exception in thread "main" com.aallam.openai.api.exception.InvalidRequestException: This model's maximum context length is 8191 tokens, however you requested 21211 tokens (21211 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
	at com.aallam.openai.client.internal.http.HttpTransport.openAIAPIException(HttpTransport.kt:65)
	at com.aallam.openai.client.internal.http.HttpTransport.handleException(HttpTransport.kt:48)
	at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:23)
	at com.aallam.openai.client.internal.http.HttpTransport$perform$1.invokeSuspend(HttpTransport.kt)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
Caused by: io.ktor.client.plugins.ClientRequestException: Client request(POST https://api.openai.com/v1/embeddings) invalid: 400 Bad Request. Text: "{
  "error": {
    "message": "This model's maximum context length is 8191 tokens, however you requested 21211 tokens (21211 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}"

This is a bug; @Intex32 and I discussed it today, but it shows up in a different place. We must also ensure we chunk the text and send requests sized to the model's maximum allowed tokens. In this case, either the text is being sent without chunking, or you have configured a chunk size higher than the model's max tokens.
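
As a minimal sketch of what that chunking implies (not the xef API), a long text could be split into pieces that fit the embedding model's budget. The helper name chunkForEmbedding is hypothetical, and the ~4-characters-per-token heuristic is a crude stand-in; a real implementation would count tokens with the model's tokenizer:

fun chunkForEmbedding(text: String, maxTokens: Int = 8191): List<String> {
  val maxChars = maxTokens * 4 // crude chars-per-token estimate, not a real tokenizer
  val chunks = mutableListOf<String>()
  val current = StringBuilder()
  // Prefer paragraph boundaries; hard-split any paragraph that is itself too big.
  text.split("\n\n").flatMap { it.chunked(maxChars) }.forEach { piece ->
    if (current.isNotEmpty() && current.length + piece.length + 2 > maxChars) {
      chunks += current.toString()
      current.clear()
    }
    if (current.isNotEmpty()) current.append("\n\n")
    current.append(piece)
  }
  if (current.isNotEmpty()) chunks += current.toString()
  return chunks
}

Each chunk can then be embedded in its own request; with a real tokenizer in place of the heuristic, every request stays under the model's 8191-token limit.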

Yes, I had a look at this today. addContext(String), which is used here, does not have any chunking logic for the String, so in this case the string exceeds the token limit of the embedding model.
I looked at the PDF example, where you can ask questions about a PDF file. It uses TokenTextSplitter to split the text into chunks; adding that here should fix it.
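
Until that fix lands, a possible caller-side workaround is to split the text before each addContext call, reusing the hypothetical chunkForEmbedding sketch above (the conversation setup is taken from the original snippet):

suspend fun main() {
  OpenAI.conversation(createDispatcher(OpenAI.log, Tracker.Default)) {
    val file = File("examples/kotlin/src/main/resources/documents/huberman.txt")
    // Add each token-bounded chunk separately instead of the whole file at once.
    chunkForEmbedding(file.readText()).forEach { chunk -> addContext(chunk) }
    val summary = promptMessage(Prompt("create a summary"))
    println(summary)
  }
}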