AlexBuz/llama-zip
LLM-powered lossless compression tool
Stargazers: 228
Watchers: 6
Issues: 16
Forks: 8
AlexBuz/llama-zip Issues
how to make n_gpu_layers work with cuda? (Closed · 8 days ago · 1 comment)
possible to provide benchmark for these? (Closed · 8 days ago · 1 comment)
generate binary (for better compression) instead of text (Closed · 8 days ago · 2 comments)
ever tried tinyllama or smaller llamas? (Closed · 8 days ago · 1 comment)
ollama version? (Closed · a month ago · 2 comments)
Does not work on all files because of utf-8 error (Closed · a month ago · 4 comments)
Request: Add newest cmix compressor to comparison table. (Closed · a month ago · 1 comment)
compare with brotli (Closed · a month ago · 1 comment)
Interesting side effect of decompression - original training data extraction (Closed · a month ago · 5 comments)
Interactive mode generate text instead of compressing when using specific text (Closed · a month ago · 2 comments)
Using the colab in the repository, we can notice it does not use the GPU (Closed · a month ago · 1 comment)
Question (Closed · a month ago · 2 comments)
Using different models like Phi-3 (Closed · a month ago · 7 comments)
Post compression ratios (Closed · a month ago · 15 comments)
Gibberish produced on 1 word spaces (Closed · a month ago · 2 comments)
Perform compression in batches for texts exceeding the 8192 token limit of llama3. (Closed · a month ago · 6 comments)