su77ungr / CASALIOY

♾️ toolkit for air-gapped LLMs on consumer-grade hardware

Feature Requests & Ideas

su77ungr opened this issue

Leave your feature requests here...

I have an idea I just tested: I got the indexing time cut in half.

Before:

Starting to index  1  documents @  729  bytes in Qdrant
File ingestion start time: 1683850217.4577804

llama_print_timings:        load time = 12337.31 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 12336.71 ms /     6 tokens ( 2056.12 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 12354.11 ms

llama_print_timings:        load time = 12337.31 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time =  3146.90 ms /     6 tokens (  524.48 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time =  3155.95 ms
Time to ingest files: 17.64212942123413 seconds

After:

Starting to index  1  documents @  729  bytes in Qdrant
File ingestion start time: 1683850298.211342

llama_print_timings:        load time =  3763.24 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time =  3762.93 ms /     6 tokens (  627.16 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time =  3770.30 ms

llama_print_timings:        load time =  3763.24 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time =  2754.55 ms /     6 tokens (  459.09 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time =  2762.24 ms
Time to ingest files: 7.84224534034729 seconds

Changed:
qdrant = Qdrant.from_documents(texts, llama, path="./db", collection_name="test") (which, by the way, doesn't use the db_dir variable anyway) to qdrant = Qdrant.from_documents(texts, llama, path=":memory:", collection_name="test") to load the index into memory, and even with memory maxed out on this little laptop, that was a huge difference.
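For context, here is a minimal sketch of the two variants (assuming the langchain Qdrant wrapper and LlamaCppEmbeddings used by the repo; the model and document paths are placeholders, and location=":memory:" is the form the langchain docs describe for in-memory mode):

```python
# Minimal sketch: on-disk vs. in-memory Qdrant index during ingestion.
# Assumes langchain's Qdrant wrapper and LlamaCppEmbeddings; paths are placeholders.
from langchain.document_loaders import TextLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant

llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")  # placeholder model
docs = TextLoader("./source_documents/sample.txt").load()              # placeholder document
texts = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# On-disk collection: survives restarts, but indexing is slower.
qdrant_disk = Qdrant.from_documents(texts, llama, path="./db", collection_name="test")

# In-memory collection: much faster to build, lost when the process exits.
qdrant_mem = Qdrant.from_documents(texts, llama, location=":memory:", collection_name="test")
```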

P.S. I reduced my ingestion size, as you can see. You would also probably have to re-run ingestion each time you start the chat, but that was an interesting find.

That's a well-written notice, thanks.
If I recall correctly, the in-memory storage is destroyed after the session ends. With the db we should be able to generate permanent storage (beyond what's currently set up for demos).

Maybe we make in-memory the default, with a notice. I'll check it for myself; those numbers look good.

I guess there's still huge potential, since we are using default values apart from MMR. We could tweak both the ingestion process and the retrieval speed.
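For reference, the retrieval side mostly comes down to a couple of knobs on the retriever. A minimal sketch, assuming qdrant is the store built during ingestion and the as_retriever API of the langchain version in use:

```python
# Sketch: retrieval knobs to tweak besides the ingestion path.
retriever = qdrant.as_retriever(
    search_type="mmr",          # maximal marginal relevance re-ranking
    search_kwargs={"k": 4},     # number of chunks returned; trade speed vs. context
)
docs = retriever.get_relevant_documents("example question")
```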

Oh, and I wanted to show you that even with memory maxed out, it's no problem.

Those results were on this lol
[attached screenshot]

Even if the memory maxes out, it gets cached to my SSD anyway. Here's another possible hack:

I literally use the alpaca-7B model for ingestion, for reading the db, and for the LLM (the exact same file), and I don't seem to have an issue. So if we just run ingestion every time we load the LLM using the in-memory store, I'm pretty sure we only need the single model loaded into memory once, reducing the loading time between ingestion and questioning the AI. I'm going to play around with it to see if I can get something working.
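A sketch of that "same file for everything" setup, assuming langchain's LlamaCppEmbeddings and LlamaCpp wrappers (the model path is a placeholder, and note that each wrapper still opens its own llama.cpp context, so truly sharing one in-memory copy would need changes below this API):

```python
# Sketch: point both the embedding wrapper and the LLM wrapper at the same GGML file.
# Each wrapper still loads its own llama.cpp context under the hood.
from langchain.embeddings import LlamaCppEmbeddings
from langchain.llms import LlamaCpp

MODEL_PATH = "./models/ggml-alpaca-7b-q4.bin"  # placeholder path

llama_embeddings = LlamaCppEmbeddings(model_path=MODEL_PATH)  # used for ingestion / db reads
llm = LlamaCpp(model_path=MODEL_PATH)                         # used for answering questions
```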

Read the bottom of this document

It almost looks like you can assign the ingestion to a memory location and save that value to reload on the LLM side of things. Maybe you could also save it to storage as persistent, so you can check whether it's already on storage and, if not, use the RAM version until it is. I don't know, kind of rambling now.
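A very rough sketch of that "check storage first, fall back to RAM" idea, again assuming the langchain Qdrant wrapper and reusing texts and llama from the ingestion sketch above (the directory name is a placeholder, and re-embedding still happens in both branches here, so skipping it for the on-disk case would need extra work):

```python
# Rough sketch: prefer a persisted on-disk index if one exists,
# otherwise build the fast in-memory index for this session.
import os

from langchain.vectorstores import Qdrant

DB_DIR = "./db"  # placeholder location for the persistent store

if os.path.isdir(DB_DIR) and os.listdir(DB_DIR):
    store = Qdrant.from_documents(texts, llama, path=DB_DIR, collection_name="test")
else:
    store = Qdrant.from_documents(texts, llama, location=":memory:", collection_name="test")
```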

This sounds promising. I was asking myself what could be done by playing around with the LlamaCppEmbeddings. Keep me posted.

A change of models would be the first step; then we should tweak the arguments.
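On the embedding side, the arguments in question would be the llama.cpp parameters exposed by LlamaCppEmbeddings. A sketch; the exact fields are an assumption about the langchain version in use, and the model path is a placeholder:

```python
# Sketch: embedding-side llama.cpp parameters one could tune for ingestion speed.
from langchain.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(
    model_path="./models/ggml-model-q4_0.bin",  # placeholder path
    n_ctx=1000,     # context window size
    n_threads=8,    # CPU threads used for inference
    n_batch=512,    # tokens per batch; affects prompt-eval time
)
```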

Ok, please remember you asked for it! ;-)
I am personally trying to find something like https://github.com/openai/chatgpt-retrieval-plugin, but for self-hosted, privacy-first, open-source solutions, so OpenAI should be out of the picture.

More models:
I would love to see support for models like

Document parser:

  • Support more document types, like PDF and DOCX, plus text extraction from slides and HTML.

Database types:

  • I am not familiar with Qdrant yet, but I know I can use Redis Sentinel to scale, and there are also Milvus and Weaviate. From what I've read so far, Redis has the lowest performance? So I guess it's just a request to have support for multiple db types; of course, the faster the better.

Integration into UI:
I think Oobabooga has the momentum to become the Stable Diffusion of generative text, but it has no way to properly fine-tune at this time. I would love to see an integration into Oobabooga, along with API endpoints.

ChatGPT-retrieval-clone:

This should be our ultimate goal. With enough tweaking, those models should run with a decent runtime. It is possible; see also the new LlamaSharp repo, a C# binding of llama.cpp with great performance.

Model variation:

Thanks to @alxspiker in here, we are able to convert GGML models to the supported GGJT format. I tested and uploaded the converted model here.

  • hosting already converted models on HuggingFace
  • create a pipeline for an easy conversion

Data handling

  • PDF is already supported // a pipeline to convert docx et al. to PDF or TXT is planned (see the sketch below)

  • We might want to head to the Qdrant Discord to discuss such features
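One possible shape for that pipeline is to load each format directly instead of converting to PDF/TXT first. A sketch using loaders langchain shipped at the time; the exact class names are an assumption, not the repo's implementation:

```python
# Sketch: pick a langchain document loader by file extension.
import os

from langchain.document_loaders import (
    Docx2txtLoader,
    PyPDFLoader,
    UnstructuredHTMLLoader,
    UnstructuredPowerPointLoader,
)

LOADERS = {
    ".pdf": PyPDFLoader,
    ".docx": Docx2txtLoader,
    ".html": UnstructuredHTMLLoader,
    ".pptx": UnstructuredPowerPointLoader,
}

def load_any(path: str):
    """Load a document with the loader matching its extension."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in LOADERS:
        raise ValueError(f"Unsupported document type: {ext}")
    return LOADERS[ext](path).load()
```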

UI

opened a new issue for the UI
see here

Is it possible to provide a not-so-air-gapped mode in exchange for better performance and speed?

Also, thanks for your work. I'm an energy manager who has never coded, and I'm following this project in the hope of launching a specialized Q&A bot, so that maybe, just maybe, it catches the attention of recruiters.

I'm glad you found joy with this repo :)

Certainly, if speed is what you're after, you'd want to call OpenAI's API (or a competing model like MosaicML's) directly, stream from HuggingFace, etc.

This job can be done inside a Jupyter notebook and is basically THE prototype use case of LangChain. A starting point might be this.

Edit: fixed link
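A sketch of what that notebook prototype could look like, swapping the air-gapped model for a hosted one; it assumes the RetrievalQA API of the langchain version in use, an OPENAI_API_KEY in the environment, and the qdrant store built during ingestion:

```python
# Sketch: the same retrieval chain, backed by a hosted model for speed.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),       # hosted model instead of the local GGML one
    chain_type="stuff",
    retriever=qdrant.as_retriever(),     # store built during ingestion
)
print(qa.run("What does the ingested document say about indexing?"))
```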

Idea: create an "administrative" UI to change parameters and models, stop the service, clear the db, etc., plus a separate user interface just for the Q&A/chat area?

@su77ungr: the latter can be, and already is, implemented in the GUI; the model hot-swap is a great idea and reminds me of HuggingChat.

Sorry for the slow development. I'm handling exams and a salty girlfriend rn. Back on the desktop soon.

Quick comment @su77ungr: this "issue" will soon become rather big and hard to synthesize (which is fine as a place for simple discussion), so don't forget to open actual issues for each of the ideas you actually want to implement :)

Maybe Discussions would be a better place to host this than Issues?

Created #76