dylancl / nextjs-sag-llm

A small Next.js app that uses search results to feed context to an LLM.


This is a Next.js project bootstrapped with create-next-app.

Demo video: Kapture.2024-04-02.at.12.46.17.mp4

Getting Started

Llama.cpp

First, start a server for the LLM:

./server -m models/Hermes-2-Pro-Mistral-7B.Q4_K_S.gguf -c 12000
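Once the server is up, it exposes llama.cpp's HTTP API on port 8080 by default. A minimal sketch of building a request body for its /completion endpoint (the helper name and default values here are illustrative, not taken from this repo):

```typescript
// Sketch: construct a request body for llama.cpp's /completion endpoint.
// Field names (prompt, n_predict, temperature, stop) follow the llama.cpp
// server API; the defaults below are illustrative assumptions.
interface CompletionRequest {
  prompt: string;
  n_predict: number;   // cap on generated tokens
  temperature: number;
  stop: string[];      // stop sequences for the chat template
}

function buildCompletionRequest(prompt: string): CompletionRequest {
  return {
    prompt,
    n_predict: 512,
    temperature: 0.7,
    stop: ["</s>"],
  };
}

// Usage: POST the body as JSON to the LLM server started above, e.g.
// fetch("http://localhost:8080/completion", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildCompletionRequest("Summarise: ...")),
// });
```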

Then, start a server for the embedding model (on a different port, in this case 8081):

./server -m models/nomic-embed-text-v1.Q5_K_S.gguf -c 8192 --embedding --port 8081
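The embedding server turns text into vectors, which is how search-result chunks can be ranked against the user's query before being fed to the LLM. A sketch of that ranking step, assuming each chunk's embedding has already been fetched from the /embedding endpoint (the helper names are illustrative, not from this repo):

```typescript
// Sketch: rank search-result chunks against a query embedding by cosine
// similarity. Embedding vectors would come from the embedding server's
// /embedding endpoint; these helpers are illustrative assumptions.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function rankByRelevance(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
): { text: string; score: number }[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score); // highest similarity first
}
```

The top-ranked chunks can then be concatenated into the prompt as context for the completion request.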

Next.js

Run the development server:

npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev

Open http://localhost:3000 with your browser to see the result.

You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.

This project uses next/font to automatically optimize and load Inter, a custom Google Font.

Learn More

To learn more about Next.js, take a look at the Next.js documentation and the interactive Next.js tutorial.

You can also check out the Next.js GitHub repository - your feedback and contributions are welcome!

Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.


Languages

TypeScript 90.2%, CSS 8.9%, JavaScript 0.9%