TransformerOptimus / SuperAGI

<⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.

Home Page: https://superagi.com/


Trying to use this with local LLMs, but there is no easy way to figure it out.

dreemur99 opened this issue · comments

The option for setting up your own model is there, but you can't point it to an actual directory or LLM file directly? Why not?

Hey @dreemur99, you can select the existing models in the dropdown; additionally, you can also add models hosted on Hugging Face & Replicate. We're working to add custom local LLM support in the upcoming v0.0.14 release.

It's out guys, you can go and try it out!
I made some changes to the Dockerfile and docker compose files so that I can run it with GPU acceleration.
It's working perfectly well.

docker-compose.yaml

```yaml
version: '3.8'
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
    volumes:
      - "./:/app"
      - "/home/mohcine/work/cloned/textgen/models/dolphin-2.2.1-mistral-7b.Q5_K_M.gguf:/app/local_model_path"
    build: .
    depends_on:
      - super__redis
      - super__postgres
    networks:
      - super_network
    command: ["/app/wait-for-it.sh", "super__postgres:5432", "-t", "60", "--", "/app/entrypoint.sh"]
  celery:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
    volumes:
      - "./:/app"
      - "${EXTERNAL_RESOURCE_DIR:-./workspace}:/app/ext"
      - "/home/mohcine/work/cloned/textgen/models/dolphin-2.2.1-mistral-7b.Q5_K_M.gguf:/app/local_model_path"
    build: .
    depends_on:
      - super__redis
      - super__postgres
    networks:
      - super_network
    command: ["/app/entrypoint_celery.sh"]
  gui:
    build:
      context: ./gui
      args:
        NEXT_PUBLIC_API_BASE_URL: "/api"
    networks:
      - super_network
  super__redis:
    image: "redis/redis-stack-server:latest"
    networks:
      - super_network
    volumes:
      - redis_data:/data

  super__postgres:
    image: "docker.io/library/postgres:16"
    environment:
      - POSTGRES_USER=superagi
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=super_agi_main
    volumes:
      - superagi_postgres_data:/var/lib/postgresql/data/
    networks:
      - super_network

  proxy:
    image: nginx:stable-alpine
    ports:
      - "3000:80"
    networks:
      - super_network
    depends_on:
      - backend
      - gui
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf

networks:
  super_network:
    driver: bridge
volumes:
  superagi_postgres_data:
  redis_data:
```

Dockerfile:

```dockerfile
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04 AS compile-image
WORKDIR /app

RUN apt-get update && apt-get install --no-install-recommends -y \
    git vim build-essential python3-dev python3-venv python3-pip

RUN apt-get update && \
    apt-get install --no-install-recommends -y wget libpq-dev gcc g++ && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

COPY requirements.txt .
RUN pip3 install --upgrade pip && \
    pip3 install --upgrade pip setuptools wheel ninja
RUN pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
RUN CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose

RUN pip3 install -r requirements.txt
# pip3 install --no-cache-dir -r requirements.txt
RUN python3 -m nltk.downloader averaged_perceptron_tagger punkt

COPY . .

RUN chmod +x ./entrypoint.sh ./wait-for-it.sh ./install_tool_dependencies.sh ./entrypoint_celery.sh

FROM nvidia/cuda:12.1.0-devel-ubuntu22.04 AS build-image
WORKDIR /app

RUN apt-get update && apt-get install --no-install-recommends -y \
    git vim build-essential python3-dev python3-venv python3-pip

ENV LLAMA_CUBLAS=1

RUN apt-get update && \
    apt-get install --no-install-recommends -y libpq-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY --from=compile-image /opt/venv /opt/venv
COPY --from=compile-image /app /app
COPY --from=compile-image /root/nltk_data /root/nltk_data

ENV PATH="/opt/venv/bin:$PATH"

EXPOSE 8001
```

I'm trying to figure out how to run it on Windows 11.
I can't get the full path on my C: drive to work:
"C:\1\dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf"

ChatGPT told me to try this:

```yaml
volumes:
  - "./:/app"
  - "C:/1/dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf:/app/local_model_path/dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf"
build: .
depends_on:
  - super__redis
  - super__postgres
networks:
  - super_network
command: ["/app/wait-for-it.sh", "super__postgres:5432", "-t", "60", "--", "/app/entrypoint.sh"]

celery:
  volumes:
    - "./:/app"
    - "${EXTERNAL_RESOURCE_DIR:-./workspace}:/app/ext"
    - "C:/1/dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf:/app/local_model_path/dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf"
  build: .
```

etc.

but I get a 404 error:

```
backend-1 | INFO: 172.27.0.7:45762 - "GET /models_controller/test_local_llm HTTP/1.0" 404 Not Found
proxy-1 | 172.27.0.1 - - [24/Dec/2023:15:11:43 +0000] "GET /api/models_controller/test_local_llm HTTP/1.1" 404 87 "http://127.0.0.1:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "-"
```

What am I missing here?
I'm sorry, but I only understand Windows paths...
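As a side note on the Windows path confusion: Docker Desktop accepts bind-mount sources written with forward slashes, so a path like `C:\1\model.gguf` becomes `C:/1/model.gguf` in the compose file. A small standard-library sketch of that conversion (the example path is hypothetical):

```python
from pathlib import PureWindowsPath

def to_docker_host_path(win_path: str) -> str:
    """Convert a backslash Windows path to the forward-slash
    form used in docker-compose volume entries."""
    return PureWindowsPath(win_path).as_posix()

# Hypothetical example:
print(to_docker_host_path(r"C:\1\dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf"))
# → C:/1/dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf
```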

I have the same issue on Linux with the exact same model!

I think the LLM isn't compatible!

On my end, the backend just shows this error, but I don't know how to solve it:

```
superagi-backend-1 | gguf_init_from_file: invalid magic number 00000000
superagi-backend-1 | error loading model: llama_model_loader: failed to load model from /app/local_model_path
superagi-backend-1 |
superagi-backend-1 | llama_load_model_from_file: failed to load model
superagi-backend-1 | 2023-12-26 09:19:35 UTC - Super AGI - ERROR - [/app/superagi/helper/llm_loader.py:27] -
superagi-backend-1 | 2023-12-26 09:19:35 UTC - Super AGI - ERROR - [/app/superagi/controllers/models_controller.py:185] - Model not found.
superagi-backend-1 | 2023-12-26 09:19:35 UTC - Super AGI - INFO - [/app/superagi/controllers/models_controller.py:203] - Error:
superagi-backend-1 | 2023-12-26 09:19:35 UTC - Super AGI - INFO - [/app/superagi/controllers/models_controller.py:203] -
superagi-backend-1 | INFO: 172.19.0.7:43076 - "GET /models_controller/test_local_llm HTTP/1.0" 404 Not Found
```
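For context on `invalid magic number 00000000`: every GGUF file starts with the four magic bytes `GGUF`, so reading all zeros means llama.cpp opened something that isn't the model file, typically an empty directory that Docker created because the host path in the volume mapping was wrong. A quick sanity check you can run on the host before mounting (a sketch; the path is whatever you intend to mount):

```python
import os

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def looks_like_gguf(path: str) -> bool:
    """Return True if `path` is a regular file starting with the GGUF magic."""
    if not os.path.isfile(path):
        return False  # missing, or a directory (what a bad bind mount produces)
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Hypothetical example:
# looks_like_gguf("/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf")
```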

I do not see a dropdown to select a local LLM.
Also getting: gguf_init_from_file: invalid magic number 00000000

I guess it's because no model is selected?

Any idea how to get it working? I tried adding multiple gguf files to the folder and adding them to the volumes.


Ok, I see what I did wrong.

You have to mount the gguf file itself to local_model_path, not mount its directory to local_model_path:

```yaml
volumes:
  - ./local_model_path/your-model.gguf:/app/local_model_path
```
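A related gotcha worth knowing: if the host side of a bind mount doesn't exist, the Docker daemon silently creates it as an empty *directory*, which then either loads zeros (the magic-number error) or collides with an existing file in the container (the "not a directory" error). A pre-flight check before `docker compose up` could look like this (a sketch; the volume string shown is an example):

```python
from pathlib import Path

def check_bind_source(volume: str) -> None:
    """Validate the host side of a 'host:container' bind-mount string."""
    # The last ':' separates the container path, so this also works
    # for Windows-style sources such as "C:/1/model.gguf:/app/...".
    host = volume.rsplit(":", 1)[0]
    p = Path(host)
    if not p.exists():
        raise FileNotFoundError(
            f"{host} does not exist; Docker would create an empty directory here")
    if p.is_dir():
        raise IsADirectoryError(
            f"{host} is a directory, but /app/local_model_path must be the .gguf file itself")

# Hypothetical example:
# check_bind_source("./local_model_path/your-model.gguf:/app/local_model_path")
```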

Dude, I'm a bit confused; can you maybe help me out with this? That would be great, since I got no answer or any help in any forum! So the thing is, my LLM is in the path /home/cronos/llms.

this is the docker-compose.yaml:

```yaml
version: '3.8'
services:
  backend:
    volumes:
      - "./:/app"
      - "./local_model_path/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path"
    build: .
    depends_on:
      - super__redis
      - super__postgres
    networks:
      - super_network
    command: ["/app/wait-for-it.sh", "super__postgres:5432", "-t", "60", "--", "/app/entrypoint.sh"]
  celery:
    volumes:
      - "./:/app"
      - "${EXTERNAL_RESOURCE_DIR:-./workspace}:/app/ext"
      - "./home/cronos/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path"
```

I don't get what I am doing wrong. I first did exactly what the video showed, but that wasn't working for me, so I researched, but found no answer anywhere, and commented on some issues because some had almost the same error as me, thinking they could help, but no answer! Huh, I just can't anymore; I've spent about 30 hours trying to get it to work!

I don't get it. You can pick "local LLM" in "new model", but you still can't point it at the path of the local model you have? What's changed? Am I missing something, because I don't see any info on the main page.

@yf007
Add your model path in this format: "C:/1/dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf:/app/local_model_path"
Add your model path in both the celery and backend volumes, as shown in the video.
We have added multi-GPU support with PR #1391.
You need to run this command to run local LLMs: docker compose -f docker-compose-gpu.yml up --build

lol, on my end this file with "gpu" at the end doesn't exist; let me check if there is an update available, I could swear I am up to date!

Look at the comments on the YouTube video; it turns out that no one there was able to get it to work. I think that if they manage to make it work locally, even at the level of selecting a file from the computer, that is what will make the big leap in the field! I hope someone is working on it these days.

Yeah, I have already seen that and was wondering, because so many people were excited about it! But there's also no video, except the one from SuperAGI, on how to get it to work locally!

Hehe, lol, now I was trying to get SuperAGI up to date and saw there isn't any button or command to update, so I just replaced all the files with the current ones from the repository! Huh.

I now hope that it works!
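For what it's worth, since SuperAGI is distributed as a git clone there is no in-app update button; the usual way to update, rather than replacing files by hand, is roughly this (a sketch, assuming you cloned the repo with git and only have local edits such as compose changes):

```shell
cd SuperAGI
git stash            # set aside local edits (e.g. docker-compose changes)
git pull origin main # fetch and merge the latest main branch
git stash pop        # re-apply your local edits (may report conflicts)
docker compose -f docker-compose-gpu.yml up --build
```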

Lol, now I got this error: Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]
It came when I executed: sudo docker compose -f docker-compose-gpu.yml up --build
I'm using an Nvidia Tesla P40 with 24 GB VRAM!

What is that? I don't get why this happened!
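That particular error usually means Docker can't find the NVIDIA container runtime, i.e. the NVIDIA Container Toolkit is not installed or not registered with Docker. On Ubuntu, with the NVIDIA driver already working on the host, the usual fix is roughly the following (a sketch; see NVIDIA's install guide for the apt repository setup, which is omitted here):

```shell
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker  # registers the runtime in /etc/docker/daemon.json
sudo systemctl restart docker
# quick smoke test: should print the same GPU table as nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```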

```yaml
version: '3.8'
services:
  backend:
    volumes:
      - "./:/app"
      - "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path"
    build:
      context: .
      dockerfile: Dockerfile-gpu
    depends_on:
      - super__redis
      - super__postgres
    networks:
      - super_network
    command: ["/app/wait-for-it.sh", "super__postgres:5432", "-t", "60", "--", "/app/entrypoint.sh"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  celery:
    volumes:
      - "./:/app"
      - "${EXTERNAL_RESOURCE_DIR:-./workspace}:/app/ext"
      - "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path"
    build:
      context: .
      dockerfile: Dockerfile-gpu
    depends_on:
      - super__redis
      - super__postgres
    networks:
      - super_network
    command: ["/app/entrypoint_celery.sh"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
  gui:
    build:
      context: ./gui
      args:
        NEXT_PUBLIC_API_BASE_URL: "/api"
    networks:
      - super_network
    volumes:
      - ./gui:/app
      - /app/node_modules/
      - /app/.next/

  super__redis:
    image: "redis/redis-stack-server:latest"
    networks:
      - super_network
    # uncomment to expose redis port to host
    # ports:
    #   - "6379:6379"
    volumes:
      - redis_data:/data

  super__postgres:
    image: "docker.io/library/postgres:15"
    environment:
      - POSTGRES_USER=superagi
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=super_agi_main
    volumes:
      - superagi_postgres_data:/var/lib/postgresql/data/
    networks:
      - super_network
    # uncomment to expose postgres port to host
    # ports:
    #   - "5432:5432"

  proxy:
    image: nginx:stable-alpine
    ports:
      - "3000:80"
    networks:
      - super_network
    depends_on:
      - backend
      - gui
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf

networks:
  super_network:
    driver: bridge
volumes:
  superagi_postgres_data:
  redis_data:
```

That's the docker-compose-gpu.yml.

The LLM is stored in /home/kali/llms/
model: dolphin-2-5-mixtral-8x7b-Q2_K.gguf

So now, finally, an error. The model is in the directory /home/kali/llms, and this is what I pasted into the GPU file:

- "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf/:/app/local_model_path"

but I got this error:

```
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf" to rootfs at "/app/local_model_path": mount /home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
```

Someone from support might want to help me; I am using Ubuntu Server 22.04.3!
I am unsure what I am doing wrong!

An easier way for end users to use local LLMs, like AutoGen Studio has, might be good, just a thought!
Bildschirmfoto 2024-01-16 um 12 49 57
I love AutoGen Studio, but not like SuperAGI; both are nice tools, but only the basic stuff is available there, which is why I would prefer SuperAGI!

@alfi4000 hey, are you sure that this path exists: "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf"?
While testing I was using "/home/ubuntu/models/vicuna-7B-v1.5-GGUF/vicuna-7b-v1.5.Q5_K_M.gguf:/app/local_model_path"
and it was working fine for me. I also checked your docker-compose-gpu.yml file; it looks fine.

Do you have the model inside of the SuperAGI folder? I have it outside of the SuperAGI folder!
That would be my question: does the model need to be inside a folder in the SuperAGI folder, where the docker compose file is, or can it be outside? Because I have it outside!

That might be the thing I am doing wrong!

Bildschirmfoto 2024-01-17 um 12 56 18

Here, check the screenshot; that is what I did!

I hope I am not confusing you too much!

I just saw that inside the SuperAGI folder it has created exactly the path I wrote in the docker compose file, just without the model. I said to myself, what the f*** is going on here!

I am finally done with it. I've now wasted an hour trying a few things, but no solution, so maybe someone here answers, or I will check the repo in a few months to see if there is an update with an easier way to do this. I will check for answers over the next few days; then I am done with it if no one says anything!

No, your model doesn't need to be in the SuperAGI folder; it can be anywhere.
Are you sure that your model path is correct? Because when I was using the Vicuna 7B GGUF model, there was a folder, and inside that folder there were multiple gguf files, and I tried with q5_k_m.
In your case I can see that the gguf file is in the "llms" folder; have you deleted the other files?
Could you attach a screenshot of the contents of the llms folder? Also run the "pwd" command inside the llms folder so that you get the correct path.

No, I had not deleted the other folders inside, but today I created another folder in there where I put the model, but it wasn't working. I'll show you some screenshots!
Let me know if I should try the model you used; if possible give me the link to yours so that I download the right one! :)

Bildschirmfoto 2024-01-17 um 23 38 12 Bildschirmfoto 2024-01-17 um 23 38 47 Bildschirmfoto 2024-01-17 um 23 39 14

I think the Vicuna model might be the only one supported yet, or I did something wrong, but I don't know what!

@alfi4000
From the screenshots I can see that the correct path for the model should be: /home/kali/llms/dolphin-2-5-mixtral-8x7b-GGUF/dolphin-2-5-mixtral-8x7b-Q2_K.gguf
but you were using: /home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf
I have also tested it with other gguf models and it was working fine.
After correcting the model path, run this command again: docker compose -f docker-compose-gpu.yml up --build

Look at the second screenshot; I had changed the path to the correct one!
The problem might be that I chose one of the last options; I think they need close to 100 GB of RAM. I have 256 GB of RAM, I don't get it!

I will try 2 different versions of that model and get back to you, just to make sure that the model isn't the problem!

Hmm, I tried 2 lighter versions of it, but every time I get the same error: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/kali/llms/llms-test/mixtral-8x7b-v0.1.Q3_K_M.gguf" to rootfs at "/app/local_model_path": mount /home/kali/llms/llms-test/mixtral-8x7b-v0.1.Q3_K_M.gguf:/app/local_model_path (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

Bildschirmfoto 2024-01-18 um 09 49 50 Bildschirmfoto 2024-01-18 um 09 51 11

Here is a screen video recording that might help you: https://youtu.be/_u-8bwoKHQc

In the video I saw that you were getting the error: str can't be interpreted as integer
This issue has been resolved in #1393.
Could you please take the latest pull of the "main" branch and try again?

I have done it; I replaced the whole SuperAGI folder, with every file in it, with the latest repo files, but look at the log yourself:

superagi-backend-1 | llama_model_loader: loaded meta data with 25 key-value pairs and 995 tensors from /app/local_model_path (version unknown)
superagi-backend-1 | llama_model_loader: - tensor 0: token_embd.weight q3_K [ 4096, 32000, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 1: blk.0.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 2: blk.0.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 3: blk.0.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 4: blk.0.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 5: blk.0.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 6: blk.0.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 7: blk.0.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 8: blk.0.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 9: blk.0.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 10: blk.0.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 11: blk.0.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 12: blk.0.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 13: blk.0.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 14: blk.0.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 15: blk.0.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 16: blk.0.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 17: blk.0.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 18: blk.0.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 19: blk.0.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 20: blk.0.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 21: blk.0.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 22: blk.0.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 23: blk.0.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 24: blk.0.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 25: blk.0.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 26: blk.0.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 27: blk.0.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 28: blk.0.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 29: blk.0.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 30: blk.0.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 31: blk.0.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 32: blk.1.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 33: blk.1.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 34: blk.1.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 35: blk.1.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 36: blk.1.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 37: blk.1.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 38: blk.1.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 39: blk.1.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 40: blk.1.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 41: blk.1.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 42: blk.1.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 43: blk.1.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 44: blk.1.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 45: blk.1.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 46: blk.1.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 47: blk.1.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 48: blk.1.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 49: blk.1.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 50: blk.1.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 51: blk.1.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 52: blk.1.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 53: blk.1.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 54: blk.1.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 55: blk.1.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 56: blk.1.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 57: blk.1.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 58: blk.1.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 59: blk.1.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 60: blk.1.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 61: blk.1.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 62: blk.1.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 63: blk.2.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 64: blk.2.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 65: blk.2.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 66: blk.2.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 67: blk.2.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 68: blk.2.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 69: blk.2.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 70: blk.2.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 71: blk.2.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 72: blk.2.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 73: blk.2.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 74: blk.2.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 75: blk.2.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 76: blk.2.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 77: blk.2.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 78: blk.2.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 79: blk.2.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 80: blk.2.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 81: blk.2.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 82: blk.2.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 83: blk.2.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 84: blk.2.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 85: blk.2.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 86: blk.2.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 87: blk.2.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 88: blk.2.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 89: blk.2.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 90: blk.2.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 91: blk.2.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 92: blk.2.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 93: blk.2.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 94: blk.3.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 95: blk.3.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 96: blk.3.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 97: blk.3.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 98: blk.3.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 99: blk.3.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 100: blk.3.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 101: blk.3.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 102: blk.3.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 103: blk.3.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 104: blk.3.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 105: blk.3.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 106: blk.3.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 107: blk.3.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 108: blk.3.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 109: blk.3.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 110: blk.3.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 111: blk.3.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 112: blk.3.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 113: blk.3.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 114: blk.3.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 115: blk.3.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 116: blk.3.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 117: blk.3.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 118: blk.3.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 119: blk.3.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 120: blk.3.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 121: blk.3.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 122: blk.3.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 123: blk.3.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 124: blk.3.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 125: blk.4.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 126: blk.4.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 127: blk.4.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 128: blk.4.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 129: blk.4.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 130: blk.4.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 131: blk.4.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 132: blk.4.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 133: blk.4.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 134: blk.4.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 135: blk.4.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 136: blk.4.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 137: blk.4.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 138: blk.4.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 139: blk.4.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 140: blk.4.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 141: blk.4.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 142: blk.4.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 143: blk.4.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 144: blk.4.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 145: blk.4.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 146: blk.4.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 147: blk.4.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 148: blk.4.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 149: blk.4.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 150: blk.4.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 151: blk.4.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 152: blk.4.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 153: blk.4.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 154: blk.4.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 155: blk.4.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 156: blk.5.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 157: blk.5.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 158: blk.5.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 159: blk.5.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 160: blk.5.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 161: blk.5.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 162: blk.5.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 163: blk.5.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 164: blk.5.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 165: blk.5.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 166: blk.5.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 167: blk.5.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 168: blk.5.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 169: blk.5.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 170: blk.5.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 171: blk.5.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 172: blk.5.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 173: blk.5.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 174: blk.5.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 175: blk.5.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 176: blk.5.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 177: blk.5.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 178: blk.5.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 179: blk.5.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 180: blk.5.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 181: blk.5.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 182: blk.5.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 183: blk.5.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 184: blk.5.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 185: blk.5.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 186: blk.5.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 187: blk.6.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 188: blk.6.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 189: blk.6.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 190: blk.6.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 191: blk.6.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 192: blk.6.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 193: blk.6.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 194: blk.6.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 195: blk.6.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 196: blk.6.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 197: blk.6.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 198: blk.6.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 199: blk.6.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 200: blk.6.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 201: blk.6.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 202: blk.6.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 203: blk.6.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 204: blk.6.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 205: blk.6.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 206: blk.6.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 207: blk.6.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 208: blk.6.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 209: blk.6.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 210: blk.6.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 211: blk.6.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 212: blk.6.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 213: blk.6.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 214: blk.6.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 215: blk.6.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 216: blk.6.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 217: blk.6.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 218: blk.7.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 219: blk.7.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 220: blk.7.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 221: blk.7.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 222: blk.7.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 223: blk.7.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 224: blk.7.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 225: blk.7.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 226: blk.7.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 227: blk.7.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 228: blk.7.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 229: blk.7.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 230: blk.7.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 231: blk.7.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 232: blk.7.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 233: blk.7.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 234: blk.7.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 235: blk.7.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 236: blk.7.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 237: blk.7.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 238: blk.7.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 239: blk.7.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 240: blk.7.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 241: blk.7.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 242: blk.7.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 243: blk.7.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 244: blk.7.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 245: blk.7.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 246: blk.7.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 247: blk.7.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 248: blk.7.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 249: blk.8.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 250: blk.8.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 251: blk.8.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 252: blk.8.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 253: blk.8.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 254: blk.8.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 255: blk.8.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 256: blk.8.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 257: blk.8.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 258: blk.8.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 259: blk.8.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 260: blk.8.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 261: blk.8.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 262: blk.8.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 263: blk.8.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 264: blk.10.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 265: blk.10.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 266: blk.10.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 267: blk.10.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 268: blk.10.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 269: blk.10.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 270: blk.10.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 271: blk.10.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 272: blk.8.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 273: blk.8.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 274: blk.8.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 275: blk.8.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 276: blk.8.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 277: blk.8.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 278: blk.8.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 279: blk.8.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 280: blk.8.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 281: blk.8.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 282: blk.8.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 283: blk.8.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 284: blk.8.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 285: blk.8.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 286: blk.8.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 287: blk.8.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 288: blk.9.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 289: blk.9.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 290: blk.9.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 291: blk.9.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 292: blk.9.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 293: blk.9.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 294: blk.9.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 295: blk.9.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 296: blk.9.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 297: blk.9.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 298: blk.9.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 299: blk.9.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 300: blk.9.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 301: blk.9.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 302: blk.9.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 303: blk.9.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 304: blk.9.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 305: blk.9.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 306: blk.9.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 307: blk.9.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 308: blk.9.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 309: blk.9.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 310: blk.9.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 311: blk.9.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 312: blk.9.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 313: blk.9.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 314: blk.9.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 315: blk.9.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 316: blk.9.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 317: blk.9.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 318: blk.9.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 319: blk.10.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 320: blk.10.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 321: blk.10.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 322: blk.10.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 323: blk.10.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 324: blk.10.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 325: blk.10.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 326: blk.10.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 327: blk.10.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 328: blk.10.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 329: blk.10.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 330: blk.10.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 331: blk.10.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 332: blk.10.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 333: blk.10.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 334: blk.10.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 335: blk.10.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 336: blk.10.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 337: blk.10.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 338: blk.10.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 339: blk.10.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 340: blk.10.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 341: blk.10.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 342: blk.11.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 343: blk.11.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 344: blk.11.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 345: blk.11.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 346: blk.11.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 347: blk.11.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 348: blk.11.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 349: blk.11.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 350: blk.11.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 351: blk.11.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 352: blk.11.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 353: blk.11.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 354: blk.11.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 355: blk.11.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 356: blk.11.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 357: blk.11.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 358: blk.11.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 359: blk.11.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 360: blk.11.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 361: blk.11.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 362: blk.11.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 363: blk.11.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 364: blk.11.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 365: blk.11.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 366: blk.11.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 367: blk.11.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 368: blk.11.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 369: blk.11.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 370: blk.11.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 371: blk.11.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 372: blk.11.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 373: blk.12.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 374: blk.12.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 375: blk.12.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 376: blk.12.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 377: blk.12.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 378: blk.12.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 379: blk.12.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 380: blk.12.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 381: blk.12.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 382: blk.12.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 383: blk.12.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 384: blk.12.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 385: blk.12.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 386: blk.12.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 387: blk.12.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 388: blk.12.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 389: blk.12.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 390: blk.12.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 391: blk.12.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 392: blk.12.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 393: blk.12.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 394: blk.12.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 395: blk.12.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 396: blk.12.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 397: blk.12.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 398: blk.12.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 399: blk.12.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 400: blk.12.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 401: blk.12.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 402: blk.12.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 403: blk.12.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 404: blk.13.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 405: blk.13.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 406: blk.13.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 407: blk.13.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 408: blk.13.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 409: blk.13.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 410: blk.13.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 411: blk.13.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 412: blk.13.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 413: blk.13.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 414: blk.13.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 415: blk.13.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 416: blk.13.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 417: blk.13.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 418: blk.13.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 419: blk.13.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 420: blk.13.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 421: blk.13.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 422: blk.13.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 423: blk.13.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 424: blk.13.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 425: blk.13.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 426: blk.13.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 427: blk.13.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 428: blk.13.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 429: blk.13.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 430: blk.13.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 431: blk.13.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 432: blk.13.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 433: blk.13.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 434: blk.13.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 435: blk.14.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 436: blk.14.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 437: blk.14.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 438: blk.14.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 439: blk.14.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 440: blk.14.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 441: blk.14.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 442: blk.14.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 443: blk.14.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 444: blk.14.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 445: blk.14.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 446: blk.14.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 447: blk.14.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 448: blk.14.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 449: blk.14.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 450: blk.14.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 451: blk.14.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 452: blk.14.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 453: blk.14.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 454: blk.14.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 455: blk.14.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 456: blk.14.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 457: blk.14.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 458: blk.14.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 459: blk.14.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 460: blk.14.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 461: blk.14.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 462: blk.14.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 463: blk.14.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 464: blk.14.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 465: blk.14.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 466: blk.15.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 467: blk.15.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 468: blk.15.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 469: blk.15.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 470: blk.15.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 471: blk.15.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 472: blk.15.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 473: blk.15.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 474: blk.15.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 475: blk.15.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 476: blk.15.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 477: blk.15.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 478: blk.15.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 479: blk.15.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 480: blk.15.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 481: blk.15.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 482: blk.15.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 483: blk.15.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 484: blk.15.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 485: blk.15.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 486: blk.15.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 487: blk.15.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 488: blk.15.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 489: blk.15.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 490: blk.15.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 491: blk.15.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 492: blk.15.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 493: blk.15.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 494: blk.15.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 495: blk.15.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 496: blk.15.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 497: blk.16.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 498: blk.16.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 499: blk.16.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 500: blk.16.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 501: blk.16.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 502: blk.16.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 503: blk.16.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 504: blk.16.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 505: blk.16.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 506: blk.16.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 507: blk.16.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 508: blk.16.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 509: blk.16.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 510: blk.16.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 511: blk.16.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 512: blk.16.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 513: blk.16.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 514: blk.16.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 515: blk.16.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 516: blk.16.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 517: blk.16.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 518: blk.16.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 519: blk.16.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 520: blk.16.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 521: blk.16.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 522: blk.16.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 523: blk.16.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 524: blk.16.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 525: blk.16.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 526: blk.16.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 527: blk.16.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 528: blk.17.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 529: blk.17.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 530: blk.17.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 531: blk.17.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 532: blk.17.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 533: blk.17.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 534: blk.17.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 535: blk.17.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 536: blk.17.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 537: blk.17.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 538: blk.17.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 539: blk.17.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 540: blk.17.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 541: blk.17.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 542: blk.17.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 543: blk.17.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 544: blk.17.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 545: blk.17.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 546: blk.17.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 547: blk.17.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 548: blk.17.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 549: blk.17.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 550: blk.17.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 551: blk.17.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 552: blk.17.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 553: blk.17.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 554: blk.17.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 555: blk.17.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 556: blk.17.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 557: blk.17.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 558: blk.17.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 559: blk.18.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 560: blk.18.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 561: blk.18.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 562: blk.18.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 563: blk.18.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 564: blk.18.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 565: blk.18.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 566: blk.18.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 567: blk.18.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 568: blk.18.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 569: blk.18.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 570: blk.18.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 571: blk.18.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 572: blk.18.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 573: blk.18.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 574: blk.18.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 575: blk.18.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 576: blk.18.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 577: blk.18.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 578: blk.18.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 579: blk.18.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 580: blk.18.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 581: blk.18.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 582: blk.18.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 583: blk.18.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 584: blk.18.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 585: blk.18.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 586: blk.18.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 587: blk.18.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 588: blk.18.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 589: blk.18.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 590: blk.19.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 591: blk.19.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 592: blk.19.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 593: blk.19.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 594: blk.19.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 595: blk.19.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 596: blk.19.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 597: blk.19.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 598: blk.19.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 599: blk.19.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 600: blk.19.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 601: blk.19.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 602: blk.19.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 603: blk.19.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 604: blk.19.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 605: blk.19.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 606: blk.19.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 607: blk.19.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 608: blk.19.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 609: blk.19.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 610: blk.19.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 611: blk.19.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 612: blk.19.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 613: blk.19.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 614: blk.19.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 615: blk.19.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 616: blk.19.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 617: blk.19.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 618: blk.19.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 619: blk.19.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 620: blk.19.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 621: blk.20.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 622: blk.20.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 623: blk.20.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 624: blk.20.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 625: blk.20.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 626: blk.20.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 627: blk.20.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 628: blk.20.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 629: blk.20.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 630: blk.20.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 631: blk.20.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 632: blk.20.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 633: blk.20.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 634: blk.20.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 635: blk.20.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 636: blk.20.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 637: blk.20.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 638: blk.20.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 639: blk.20.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 640: blk.20.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 641: blk.20.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 642: blk.20.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 643: blk.20.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 644: blk.20.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 645: blk.20.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 646: blk.20.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 647: blk.20.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 648: blk.20.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 649: blk.20.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 650: blk.20.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 651: blk.20.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 652: blk.21.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 653: blk.21.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 654: blk.21.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 655: blk.21.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 656: blk.21.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 657: blk.21.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 658: blk.21.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 659: blk.21.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 660: blk.21.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 661: blk.21.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 662: blk.21.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 663: blk.21.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 664: blk.21.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 665: blk.21.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 666: blk.21.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 667: blk.21.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 668: blk.21.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 669: blk.21.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 670: blk.21.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 671: blk.21.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 672: blk.21.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 673: blk.21.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 674: blk.21.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 675: blk.21.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 676: blk.21.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 677: blk.21.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 678: blk.21.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 679: blk.21.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 680: blk.21.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 681: blk.21.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 682: blk.21.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 683: blk.22.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 684: blk.22.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 685: blk.22.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 686: blk.22.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 687: blk.22.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 688: blk.22.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 689: blk.22.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 690: blk.22.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 691: blk.22.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 692: blk.22.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 693: blk.22.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 694: blk.22.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 695: blk.22.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 696: blk.22.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 697: blk.22.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 698: blk.22.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 699: blk.22.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 700: blk.22.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 701: blk.22.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 702: blk.22.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 703: blk.22.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 704: blk.22.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 705: blk.22.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 706: blk.22.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 707: blk.22.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 708: blk.22.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 709: blk.22.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 710: blk.22.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 711: blk.22.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 712: blk.22.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 713: blk.22.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 714: blk.23.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 715: blk.23.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 716: blk.23.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 717: blk.23.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 718: blk.23.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 719: blk.23.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 720: blk.23.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 721: blk.23.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 722: blk.23.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 723: blk.23.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 724: blk.23.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 725: blk.23.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 726: blk.23.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 727: blk.23.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 728: blk.23.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 729: blk.23.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 730: blk.23.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 731: blk.23.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 732: blk.23.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 733: blk.23.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 734: blk.23.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 735: blk.23.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 736: blk.23.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 737: blk.23.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 738: blk.23.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 739: blk.23.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 740: blk.23.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 741: blk.23.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 742: blk.23.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 743: blk.23.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 744: blk.23.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 745: blk.24.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 746: blk.24.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 747: blk.24.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 748: blk.24.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 749: blk.24.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 750: blk.24.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 751: blk.24.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 752: blk.24.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 753: blk.24.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 754: blk.24.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 755: blk.24.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 756: blk.24.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 757: blk.24.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 758: blk.24.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 759: blk.24.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 760: blk.24.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 761: blk.24.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 762: blk.24.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 763: blk.24.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 764: blk.24.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 765: blk.24.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 766: blk.24.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 767: blk.24.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 768: blk.24.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 769: blk.24.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 770: blk.24.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 771: blk.24.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 772: blk.24.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 773: blk.24.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 774: blk.24.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 775: blk.24.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 776: blk.25.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 777: blk.25.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 778: blk.25.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 779: blk.25.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 780: blk.25.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 781: blk.25.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 782: blk.25.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 783: blk.25.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 784: blk.25.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 785: blk.25.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 786: blk.25.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 787: blk.25.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 788: blk.25.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 789: blk.25.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 790: blk.25.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 791: blk.25.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 792: blk.25.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 793: blk.25.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 794: blk.25.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 795: blk.25.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 796: blk.25.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 797: blk.25.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 798: blk.25.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 799: blk.25.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 800: blk.25.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 801: blk.25.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 802: blk.25.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 803: blk.25.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 804: blk.25.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 805: blk.25.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 806: blk.25.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 807: blk.26.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 808: blk.26.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 809: blk.26.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 810: blk.26.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 811: blk.26.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 812: blk.26.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 813: blk.26.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 814: blk.26.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 815: blk.26.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 816: blk.26.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 817: blk.26.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 818: blk.26.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 819: blk.26.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 820: blk.26.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 821: blk.26.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 822: blk.26.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 823: blk.26.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 824: blk.26.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 825: blk.26.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 826: blk.26.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 827: blk.26.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 828: blk.26.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 829: blk.26.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 830: blk.26.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 831: blk.26.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 832: blk.26.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 833: blk.26.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 834: blk.26.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 835: blk.26.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 836: blk.26.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 837: blk.26.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 838: blk.27.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 839: blk.27.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 840: blk.27.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 841: blk.27.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 842: blk.27.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 843: blk.27.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 844: blk.27.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 845: blk.27.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 846: blk.27.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 847: blk.27.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 848: blk.27.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 849: blk.27.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 850: blk.27.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 851: blk.27.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 852: blk.27.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 853: blk.27.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 854: blk.27.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 855: blk.27.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 856: blk.27.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 857: blk.27.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 858: blk.27.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 859: blk.27.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 860: blk.27.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 861: blk.27.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 862: blk.27.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 863: blk.27.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 864: blk.27.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 865: blk.27.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 866: blk.27.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 867: blk.27.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 868: blk.27.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 869: blk.28.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 870: blk.28.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 871: blk.28.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 872: blk.28.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 873: blk.28.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 874: blk.28.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 875: blk.28.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 876: blk.28.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 877: blk.28.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 878: blk.28.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 879: blk.28.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 880: blk.28.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 881: blk.28.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 882: blk.28.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 883: blk.28.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 884: blk.28.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 885: blk.28.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 886: blk.28.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 887: blk.28.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 888: blk.28.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 889: blk.28.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 890: blk.28.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 891: blk.28.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 892: blk.28.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 893: blk.28.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 894: blk.28.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 895: blk.28.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 896: blk.28.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 897: blk.28.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 898: blk.28.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 899: blk.28.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 900: blk.29.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 901: blk.29.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 902: blk.29.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 903: blk.29.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 904: blk.29.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 905: blk.29.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 906: blk.29.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 907: blk.29.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 908: blk.29.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 909: blk.29.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 910: blk.29.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 911: blk.29.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 912: blk.29.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 913: blk.29.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 914: blk.29.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 915: blk.29.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 916: blk.29.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 917: blk.29.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 918: blk.29.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 919: blk.29.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 920: blk.29.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 921: blk.29.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 922: blk.29.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 923: blk.29.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 924: blk.29.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 925: blk.29.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 926: blk.29.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 927: blk.29.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 928: blk.29.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 929: blk.29.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 930: blk.29.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 931: blk.30.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 932: blk.30.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 933: blk.30.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 934: blk.30.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 935: blk.30.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 936: blk.30.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 937: blk.30.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 938: blk.30.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 939: blk.30.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 940: blk.30.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 941: blk.30.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 942: blk.30.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 943: blk.30.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 944: blk.30.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 945: blk.30.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 946: blk.30.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 947: blk.30.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 948: blk.30.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 949: blk.30.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 950: blk.30.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 951: output.weight q6_K [ 4096, 32000, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 952: blk.30.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 953: blk.30.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 954: blk.30.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 955: blk.30.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 956: blk.30.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 957: blk.30.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 958: blk.30.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 959: blk.30.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 960: blk.30.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 961: blk.30.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 962: blk.30.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 963: blk.31.ffn_gate.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 964: blk.31.ffn_down.0.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 965: blk.31.ffn_up.0.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 966: blk.31.ffn_gate.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 967: blk.31.ffn_down.1.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 968: blk.31.ffn_up.1.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 969: blk.31.ffn_gate.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 970: blk.31.ffn_down.2.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 971: blk.31.ffn_up.2.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 972: blk.31.ffn_gate.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 973: blk.31.ffn_down.3.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 974: blk.31.ffn_up.3.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 975: blk.31.ffn_gate.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 976: blk.31.ffn_down.4.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 977: blk.31.ffn_up.4.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 978: blk.31.ffn_gate.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 979: blk.31.ffn_down.5.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 980: blk.31.ffn_up.5.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 981: blk.31.ffn_gate.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 982: blk.31.ffn_down.6.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 983: blk.31.ffn_up.6.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 984: blk.31.ffn_gate.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 985: blk.31.ffn_down.7.weight q3_K [ 14336, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 986: blk.31.ffn_up.7.weight q3_K [ 4096, 14336, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 987: blk.31.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 988: blk.31.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 989: blk.31.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 990: blk.31.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 991: blk.31.attn_output.weight q4_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 992: blk.31.attn_q.weight q3_K [ 4096, 4096, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 993: blk.31.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ]
superagi-backend-1 | llama_model_loader: - tensor 994: output_norm.weight f32 [ 4096, 1, 1, 1 ]
superagi-backend-1 | llama_model_loader: - kv 0: general.architecture str
superagi-backend-1 | llama_model_loader: - kv 1: general.name str
superagi-backend-1 | llama_model_loader: - kv 2: llama.context_length u32
superagi-backend-1 | llama_model_loader: - kv 3: llama.embedding_length u32
superagi-backend-1 | llama_model_loader: - kv 4: llama.block_count u32
superagi-backend-1 | llama_model_loader: - kv 5: llama.feed_forward_length u32
superagi-backend-1 | llama_model_loader: - kv 6: llama.rope.dimension_count u32
superagi-backend-1 | llama_model_loader: - kv 7: llama.attention.head_count u32
superagi-backend-1 | llama_model_loader: - kv 8: llama.attention.head_count_kv u32
superagi-backend-1 | llama_model_loader: - kv 9: llama.expert_count u32
superagi-backend-1 | llama_model_loader: - kv 10: llama.expert_used_count u32
superagi-backend-1 | llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32
superagi-backend-1 | llama_model_loader: - kv 12: llama.rope.freq_base f32
superagi-backend-1 | llama_model_loader: - kv 13: general.file_type u32
superagi-backend-1 | llama_model_loader: - kv 14: tokenizer.ggml.model str
superagi-backend-1 | llama_model_loader: - kv 15: tokenizer.ggml.tokens arr
superagi-backend-1 | llama_model_loader: - kv 16: tokenizer.ggml.scores arr
superagi-backend-1 | llama_model_loader: - kv 17: tokenizer.ggml.token_type arr
superagi-backend-1 | llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32
superagi-backend-1 | llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32
superagi-backend-1 | llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32
superagi-backend-1 | llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32
superagi-backend-1 | llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool
superagi-backend-1 | llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool
superagi-backend-1 | llama_model_loader: - kv 24: general.quantization_version u32
superagi-backend-1 | llama_model_loader: - type f32: 65 tensors
superagi-backend-1 | llama_model_loader: - type f16: 32 tensors
superagi-backend-1 | llama_model_loader: - type q8_0: 64 tensors
superagi-backend-1 | llama_model_loader: - type q3_K: 801 tensors
superagi-backend-1 | llama_model_loader: - type q4_K: 32 tensors
superagi-backend-1 | llama_model_loader: - type q6_K: 1 tensors
superagi-backend-1 | llm_load_print_meta: format = unknown
superagi-backend-1 | llm_load_print_meta: arch = llama
superagi-backend-1 | llm_load_print_meta: vocab type = SPM
superagi-backend-1 | llm_load_print_meta: n_vocab = 32000
superagi-backend-1 | llm_load_print_meta: n_merges = 0
superagi-backend-1 | llm_load_print_meta: n_ctx_train = 32768
superagi-backend-1 | llm_load_print_meta: n_ctx = 4096
superagi-backend-1 | llm_load_print_meta: n_embd = 4096
superagi-backend-1 | llm_load_print_meta: n_head = 32
superagi-backend-1 | llm_load_print_meta: n_head_kv = 8
superagi-backend-1 | llm_load_print_meta: n_layer = 32
superagi-backend-1 | llm_load_print_meta: n_rot = 128
superagi-backend-1 | llm_load_print_meta: n_gqa = 4
superagi-backend-1 | llm_load_print_meta: f_norm_eps = 0.0e+00
superagi-backend-1 | llm_load_print_meta: f_norm_rms_eps = 1.0e-05
superagi-backend-1 | llm_load_print_meta: n_ff = 14336
superagi-backend-1 | llm_load_print_meta: freq_base = 10000.0
superagi-backend-1 | llm_load_print_meta: freq_scale = 1
superagi-backend-1 | llm_load_print_meta: model type = 7B
superagi-backend-1 | llm_load_print_meta: model ftype = mostly Q3_K - Medium
superagi-backend-1 | llm_load_print_meta: model params = 46.70 B
superagi-backend-1 | llm_load_print_meta: model size = 18.96 GiB (3.49 BPW)
superagi-backend-1 | llm_load_print_meta: general.name = mistralai_mixtral-8x7b-v0.1
superagi-backend-1 | llm_load_print_meta: BOS token = 1 '&lt;s&gt;'
superagi-backend-1 | llm_load_print_meta: EOS token = 2 '&lt;/s&gt;'
superagi-backend-1 | llm_load_print_meta: UNK token = 0 '&lt;unk&gt;'
superagi-backend-1 | llm_load_print_meta: PAD token = 0 '&lt;unk&gt;'
superagi-backend-1 | llm_load_print_meta: LF token = 13 '<0x0A>'
superagi-backend-1 | llm_load_tensors: ggml ctx size = 0.32 MB
superagi-backend-1 | llm_load_tensors: using CUDA for GPU acceleration
superagi-backend-1 | error loading model: create_tensor: tensor 'blk.0.ffn_gate.weight' not found
superagi-backend-1 | llama_load_model_from_file: failed to load model
superagi-backend-1 | 2024-01-20 04:40:58 UTC - Super AGI - ERROR - [/app/superagi/helper/llm_loader.py:27] -
superagi-backend-1 | 2024-01-20 04:40:58 UTC - Super AGI - ERROR - [/app/superagi/controllers/models_controller.py:185] - Model not found.
superagi-backend-1 | 2024-01-20 04:40:58 UTC - Super AGI - INFO - [/app/superagi/controllers/models_controller.py:203] - Error:
superagi-backend-1 | 2024-01-20 04:40:58 UTC - Super AGI - INFO - [/app/superagi/controllers/models_controller.py:203] -
superagi-backend-1 | INFO: 172.19.0.7:45274 - "GET /models_controller/test_local_llm HTTP/1.0" 404 Not Found

It might be the llama-cpp-python version that's the problem: https://github.com/abetlen/llama-cpp-python/releases — the latest is 0.2.31, unlike the 2.7 in SuperAGI
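When comparing release numbers like these, note that lexicographic string comparison gets the ordering wrong ("0.2.31" sorts before "0.2.7" as a string). A small sketch of a numeric comparison:

```python
def version_tuple(v: str):
    # Turn "0.2.31" into (0, 2, 31) so releases compare numerically.
    return tuple(int(part) for part in v.split("."))

# Lexicographic string comparison gets the ordering wrong:
assert "0.2.31" < "0.2.7"
# Numeric comparison shows 0.2.31 is the newer release:
assert version_tuple("0.2.31") > version_tuple("0.2.7")
```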

Here is a screenshot of the Nvidia usage; it appeared after I clicked Test:

Bildschirmfoto 2024-01-19 um 20 42 28

Lol, with this vicuna model it worked: https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF

This is what I learned so far... and nothing works. Good luck to anyone willing to carry the torch forward until someone gets it running on Windows. See my dev comment below.

Windows Setup:

  1. Open config.yaml

  2. Add '#' to line 12

  3. Remove '#' on line 13, save the file and close.

  4. Download or move a local LLM file of your choice to the 'local_model_path' folder

  5. Open docker-compose.yaml

  6. Paste under line 16 and line 18:

    • "./local_model_path/dolphin-2.6-mistral-7b.Q5_K_M.gguf:/app/local_model_path"
  7. Save the file and close.

  8. Open cmd.exe from start menu

  9. Paste and hit ENTER:

docker compose -f docker-compose-gpu.yml up --build

---- EXAMPLE of docker-compose.yaml ---- (example from 1/28/24 for Windows 11):

version: '3.8'
services:
  backend:
    volumes:
      - "./:/app"
      - "./local_model_path/dolphin-2.6-mistral-7b.Q5_K_M.gguf:/app/local_model_path"
    build: .
    depends_on:
      - super__redis
      - super__postgres
    networks:
      - super_network
    command: ["/app/wait-for-it.sh", "super__postgres:5432","-t","60","--","/app/entrypoint.sh"]
  celery:
    volumes:
      - "./:/app"
      - "./local_model_path/dolphin-2.6-mistral-7b.Q5_K_M.gguf:/app/local_model_path"
      - "${EXTERNAL_RESOURCE_DIR:-./workspace}:/app/ext"

I seriously recommend an easier way to VIEW local models within the Add Models tab: populate the dropdown from the local_model_path directory. Even if a model is unsupported, it should still be listed in the drop down. The TEST button errors out for me, so I spent 2 hours attempting to debug, but this is ridiculous. One last attempt will be to try a third llm...
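Populating the dropdown from disk would be straightforward. A minimal sketch, assuming models live under a local_model_path directory as in this thread and that only .gguf files should be listed:

```python
from pathlib import Path

def list_local_models(model_dir: str):
    # Return the GGUF filenames under model_dir, sorted so the
    # dropdown order is stable across restarts.
    return sorted(p.name for p in Path(model_dir).glob("*.gguf"))
```

Unsupported models could still appear here, with the Test button deciding whether they actually load.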

models_controller.py:203 Error:

It indicates an error in the models_controller.py file at line 203.
The error is a 404 Not Found for a GET request to /models_controller/test_local_llm.
The request comes from IP 172.18.0.7:39800 and is being proxied at http://localhost:3000/.

@shiloh92 you need to add the model path in the docker-compose-gpu.yml file, not in the docker-compose.yaml file
After this you can use: docker compose -f docker-compose-gpu.yml up --build
You can refer to this video for setup