SciSharp / LLamaSharp

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.

Home Page: https://scisharp.github.io/LLamaSharp

Has anyone really succeeded by following the steps in the Get Started guide?

tiger2014 opened this issue · comments

Has anyone really succeeded by following the steps in the Get Started guide? I never have.

Do you mean the Quick Start guide? If so, what's the problem you're seeing?

@tiger2014 I had done the setup some days ago. Let me know issues you are facing. I can help you out here.

I created a Console project, added the NuGet packages LLamaSharp and LLamaSharp.Backend.Cuda12, and replaced the model path with 'C:\D\workspace\LLama\ggml-model-f32-q4_0.bin'. I replaced the code in Program.cs with the snippet in the Get Started guide. After that, when I ran it, I encountered an error:

Severity Code Description Project File Line Suppression State Error CS1061 'ChatSession' does not contain a definition for 'Chat' and no accessible extension method 'Chat' accepting a first argument of type 'ChatSession' could be found (are you missing a using directive or an assembly reference?) LLama C:\D\workspace\LLama\Program.cs 26 Active


Unfortunately our docs aren't in a great state at the moment; they've fallen behind development of the library. I believe that method has been renamed to ChatAsync, and you'll need to await it.
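For reference, a minimal sketch of what the updated call looks like, based on the LLamaSharp examples at the time of writing (the model path is a placeholder, and exact names may differ between versions). ChatAsync streams tokens, so it is consumed with await foreach rather than called like the old Chat method:

```csharp
using LLama;
using LLama.Common;

// Placeholder path; point this at your own .gguf model file.
var parameters = new ModelParams(@"C:\models\model.gguf");
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);

var executor = new InteractiveExecutor(context);
var session = new ChatSession(executor);

// ChatAsync returns an async stream of generated text.
await foreach (var token in session.ChatAsync(
    new ChatHistory.Message(AuthorRole.User, "Hello!"),
    new InferenceParams { MaxTokens = 256 }))
{
    Console.Write(token);
}
```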


Can you share it if it's a simple sample? Thanks.

@tiger2014: Clone the repository on your end and install the required dependencies. Set LLama.Examples as the start-up project, and you should see a console app with multiple options.


It never works.

A .bin model is not compatible with llama.cpp. You need to use a .gguf model.
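If you already have an old GGML .bin model, llama.cpp ships a conversion script that can produce a .gguf file from it. A rough sketch, assuming a llama.cpp checkout (the script name and flags have varied across llama.cpp versions, so check the repo you have):

```shell
# From a llama.cpp checkout; converts a legacy GGML .bin to GGUF.
# Script name/flags are version-dependent -- verify against your checkout.
python convert-llama-ggml-to-gguf.py \
    --input ggml-model-f32-q4_0.bin \
    --output model-q4_0.gguf
```

Alternatively, most models on Hugging Face are now published directly in GGUF format, which avoids the conversion step entirely.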


There is only a .bin file link on the 'Get Started' page. Can you share a link to a .gguf model?

I also hope there will be a working Console project sample with full steps.

Try one of these :)


Thank you so much. It succeeded!

Since it looks like it's working for you, I'll close this issue; feel free to re-open if there's still a problem.