joshcho / ChatGPT.el

ChatGPT in Emacs


Given the slow processing time of requests, is there a way to generate the text word by word?

ziova opened this issue · comments

commented

Similar to the implementation in gpt.el

I will look into that over the holidays. The slow processing time is very painful.

EDIT: It turns out a tweak to chatgpt-wrapper can support this.

commented

Do you have a branch with this tweak added that we could try?

No, I haven't gotten around to implementing this unfortunately. I'd be happy to accept any pull requests.

chatgpt-wrapper already supports this feature, so it's a matter of changing the interface between the Python EPC server and ChatGPT.el. See llm-workflow-engine/llm-workflow-engine#37 (comment)

Also, I am not entirely certain, but comint-mode might be helpful here. #3
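The streaming interface change suggested above could look roughly like the sketch below: instead of returning one complete string per request, the Python side consumes the generator and pushes each chunk to the client as it arrives. Note that `send_chunk` is a hypothetical placeholder for whatever mechanism delivers text to Emacs (e.g. an EPC notification); it is not a real API from chatgpt-wrapper or python-epc.

```python
def stream_reply(chunks, send_chunk):
    """Consume an iterable of text chunks, forwarding each one.

    `send_chunk` is a stand-in for the call that pushes partial text
    to Emacs; here it is just any callable taking one string.
    Returns the full reply so the caller still gets a final value.
    """
    parts = []
    for chunk in chunks:
        send_chunk(chunk)   # deliver the partial text immediately
        parts.append(chunk)
    return "".join(parts)

# Example with a fake generator standing in for bot.ask_stream(query):
def fake_ask_stream():
    yield "Hello, "
    yield "world!"

received = []
full = stream_reply(fake_ask_stream(), received.append)
print(full)  # -> Hello, world!
```

With this shape, Emacs can insert each chunk into the buffer as it is received, rather than waiting for the whole response.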

Thanks so much for this package!

I've changed chatgpt.py's bot.ask(query) to bot.ask_stream(query), which results in an error:

"TypeError(\"Object of type '<class 'generator'>' cannot be converted by `tosexp`. It's value is '<generator object ChatGPT.ask_stream at 0x10619a650>'\")"

which makes sense. Any ideas on how a "generator" response could be processed?
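The error arises because the generator object itself is being handed to the serializer instead of a string. A minimal workaround (which gives up the streaming benefit but at least makes ask_stream usable) is to exhaust the generator into a plain string before returning it. This is a sketch with a fake generator in place of bot.ask_stream(query):

```python
def materialize(gen):
    """Join all chunks yielded by a generator into one string, so the
    result can be serialized (e.g. by tosexp) like a normal reply.
    This loses the incremental delivery; true streaming would instead
    forward each chunk to the client as it arrives."""
    return "".join(gen)

def fake_ask_stream():  # stand-in for bot.ask_stream(query)
    yield "ChatGPT "
    yield "reply"

print(materialize(fake_ask_stream()))  # -> ChatGPT reply
```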

Addressed in c38c915