load1n9 / chat

Leverage Llama 3.2 and other large language models to generate responses to your questions locally, with no installation.

Home Page: https://jsr.io/@loading/chat


chat

Simply run the following command and that's it:

deno run -A jsr:@loading/chat

[Optional] Create a chat-config.toml file in the current working directory to configure the chat:

"$schema" = 'https://jsr.io/@loading/chat/0.1.16/config-schema.json'

[config]
model = "onnx-community/Llama-3.2-1B-Instruct"
system = [
  "You are an assistant designed to help with any questions the user might have."
]
max_new_tokens = 128
max_length = 20
temperature = 1.0
top_p = 1.0
repetition_penalty = 1.2

Run the server to expose an API that loosely mirrors the OpenAI completions API:

deno serve -A jsr:@loading/chat/server

Try it out:

curl -X POST http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Once upon a time",
    "max_tokens": 50,
    "temperature": 0.7
  }'

(New) Code Companion

With the new code companion, you can generate new projects and edit existing code.

deno run -A jsr:@loading/chat/companion

Type /help to get a list of commands.

License

This project is licensed under the MIT License - see the LICENSE file for details.
