ex3ndr / llama-coder

Replace Copilot local AI

Home Page:https://marketplace.visualstudio.com/items?itemName=ex3ndr.llama-coder

Larger models just seem to return metadata

oc013 opened this issue · comments

When I run a model like codellama:34b-code-q6_K, it does seem to spin up my GPUs, but I end up with unusable output. I'm running the latest ollama and extension versions on Ubuntu 22.04.

2024-03-12 01:57:27.392 [info] Running AI completion...
2024-03-12 01:57:31.655 [info] Receive line: {"model":"codellama:34b-code-q6_K","created_at":"2024-03-12T05:57:31.654294365Z","response":" ","done":false}
2024-03-12 01:57:31.700 [info] Receive line: {"model":"codellama:34b-code-q6_K","created_at":"2024-03-12T05:57:31.699862247Z","response":"\n","done":false}
2024-03-12 01:57:31.746 [info] Receive line: {"model":"codellama:34b-code-q6_K","created_at":"2024-03-12T05:57:31.745519433Z","response":"\u003c/","done":false}
2024-03-12 01:57:31.790 [info] Receive line: {"model":"codellama:34b-code-q6_K","created_at":"2024-03-12T05:57:31.790407012Z","response":"PRE","done":false}
2024-03-12 01:57:31.836 [info] Receive line: {"model":"codellama:34b-code-q6_K","created_at":"2024-03-12T05:57:31.835927564Z","response":"\u003e","done":false}
2024-03-12 01:57:31.881 [info] Receive line: {"model":"codellama:34b-code-q6_K","created_at":"2024-03-12T05:57:31.881386364Z","response":"","done":true,"total_duration":4488985784,"load_duration":270215,"prompt_eval_count":1427,"prompt_eval_duration":4261101000,"eval_count":6,"eval_duration":226981000}
2024-03-12 01:57:31.882 [info] AI completion completed:

[screenshot of the resulting completion attached]
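For reference, the `response` fragments in the log above concatenate to just ` \n</PRE>`, i.e. the literal text of CodeLlama's fill-in-the-middle `</PRE>` marker rather than actual code, which matches the "metadata" symptom in the title. A minimal sketch of how the extension's NDJSON stream reassembles into a completion (the JSON objects below are trimmed copies of the logged lines, keeping only the fields relevant here):

```python
import json

# Each streamed NDJSON line from ollama's /api/generate carries a
# "response" fragment; the full completion is their concatenation.
# Fragments copied from the issue's log, trimmed to the relevant fields:
lines = [
    '{"model":"codellama:34b-code-q6_K","response":" ","done":false}',
    '{"model":"codellama:34b-code-q6_K","response":"\\n","done":false}',
    '{"model":"codellama:34b-code-q6_K","response":"</","done":false}',
    '{"model":"codellama:34b-code-q6_K","response":"PRE","done":false}',
    '{"model":"codellama:34b-code-q6_K","response":">","done":false}',
    '{"model":"codellama:34b-code-q6_K","response":"","done":true}',
]

completion = "".join(json.loads(line)["response"] for line in lines)
print(repr(completion))  # ' \n</PRE>' — only the FIM marker, no code
```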