ex3ndr / llama-coder

Replace Copilot local AI

Home Page:https://marketplace.visualstudio.com/items?itemName=ex3ndr.llama-coder

Not working with Remote-SSH from Microsoft, works fine on local files

eriktar opened this issue · comments

I work primarily remotely, using the ms-vscode-remote.remote-ssh-edit extension. I'm not able to get the llama-coder autocomplete to work on those projects. It works just fine for projects on my local machine. I'm not sure how to debug further, but it seems that llama-coder does not trigger when I stop typing.

You can check whether autocompletion triggers in the OUTPUT tab with the Llama Coder source selected.

You can open the OUTPUT tab from the command palette with Output: Focus on Output View.

Thanks for the directions @Kevsnz.

What seems to happen is that the plugin tries to use ollama on the remote computer where the SSH session terminates, not on my local computer where ollama is available.
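One quick way to confirm this (a diagnostic sketch, assuming Ollama's default port 11434) is to query Ollama's model-listing API from both machines:

    # On the local machine - should return the list of installed models
    curl http://localhost:11434/api/tags

    # In a terminal on the remote host - will fail if Ollama only runs locally
    curl http://localhost:11434/api/tags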

I would happily accept a PR to fix this, I am struggling too!

My "programming" use for vscode usually extends to some bash, perl and awk scripts for embedded devices. So I'm a bit out of my element debugging a complex beast like vscode. :) I tried to get hints using the Output mentioned by @Kevsnz, but so far not finding anything relevant on how/why Remote-SSH steals the tcp session. It does not do that with the "AI Chat" interface of the Continue plugin so must be possible in some way to keep the tcp session local.

I did find a workaround, however: adding a remote port forward to my SSH config.
RemoteForward 11434 localhost:11434

Note that this will not work for everyone, as SSH servers can be configured to not accept tunnels. And it adds latency, which is not ideal on high-latency links like the LTE connection I'm mostly working over.
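For reference, a minimal ~/.ssh/config stanza for this workaround might look like the following (host name and user are placeholders for your own setup):

    Host devbox
        HostName devbox.example.com
        User youruser
        # Expose the local Ollama on the remote's port 11434
        RemoteForward 11434 localhost:11434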

According to the documentation, Remote-SSH runs the extension host (and all non-UI extensions) on the remote machine, so Llama Coder tries to reach Ollama on the remote machine (which is "local" from its point of view).

A possible way to get it working could be either to run Ollama there (likely impossible) or to point Llama Coder to the host where Ollama is running (which would probably defeat the purpose of keeping source code within the remote machine's perimeter). For the latter, SSH port forwarding could be a solution.

In any case, it seems there is no workaround within the Llama Coder extension itself.
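Outside the extension itself, VS Code's generic remote.extensionKind setting can force an extension to run on the UI (local) side of a Remote-SSH session. Whether Llama Coder behaves correctly when run that way is untested here, so treat this settings.json (JSONC) snippet as a sketch:

    {
      // Run Llama Coder on the local (UI) side even in Remote-SSH sessions
      "remote.extensionKind": {
        "ex3ndr.llama-coder": ["ui"]
      }
    }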

I'm using Remote-SSH. I have ollama installed on the remote and it works perfectly.

@rbrcurtis that would work, but it's not a great solution for all use cases. Sometimes my remote environment is a Raspberry Pi. Sometimes it is a server on which I don't have the rights to install ollama.

I have published a new version that is forced to run on the client instead of the remote, should work now!

There are a few different use cases to consider.

  1. I use a low-end laptop that I take with me everywhere and I use remote-ssh to connect to my powerful dev machine back home. I want llama-coder to run on my remote machine.
  2. The OP's use case: they want llama-coder to run on the local machine because the remote is a low-power machine.

Your update looks like it would break my use case.

Oh, I suppose I could use the remote server setting, couldn't I? That could work.
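If pointing the extension at a remote Ollama endpoint is the way to go, that can be done in the extension settings. The setting key used below is an assumption (check the Llama Coder settings UI for the exact name), and the host value is a placeholder:

    {
      // Assumed key name - verify it in the extension's settings before relying on it
      "inference.endpoint": "http://my-dev-machine:11434"
    }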

I managed to make it work using a forwarded port. If you are running ollama on an external server and want to access it on your local machine, run ssh -NL 11434:localhost:11434 remotehost on your local machine.
Conversely, if you are running ollama on your local machine and want to access it from a remote server, something like ssh -NR 11434:localhost:11434 remotehost on your local machine should work (though I have not tested this one myself and it may be wrong; those notations are a bit tricky).

I'm also running into a similar issue as @rbrcurtis, I want to run ollama on my local machine (M3 Pro MacBook Pro) and have my remote connect to it. My SSH config includes

   RemoteForward 11434 localhost:11434

but when I have the remote's config set to http://127.0.0.1:11434 I get

[info] Unsupported document: vscode-remote://ssh-remote%2B<IP REDACTED>/<REDACTED>tests/Dockerfile ignored.

no matter what file I try to edit.

I can see there are likely to be a few different ways people would want to use this, so I don't envy having to figure out the technicalities and UX of it!

Edit: Looks like I've managed to get it to work with the previous version of Llama Coder (v0.0.11). It was initially using the locally installed ollama, but I've removed that, and now it looks like it's connecting, via the SSH reverse tunnel (setting the endpoint on the remote machine to http://localhost:11434), to my local machine. Now to figure out how to actually do useful things!

Facing a similar issue in WSL.
On the host it's working, but when opening VS Code from WSL using code . it's not working.
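For the WSL case, it may help to first check whether the Windows-side Ollama is reachable from inside the WSL shell at all (a diagnostic sketch, assuming the default port):

    # From the WSL shell; depending on the WSL networking mode, Windows-side
    # services may not be reachable via localhost - if this fails, try the
    # Windows host IP (e.g. the nameserver address in /etc/resolv.conf)
    curl http://localhost:11434/api/tags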

Hey guys, sorry for the fuckup - I have fixed it and allowed remote files in the extension, and it seems it is working as intended in 0.0.13.

Will it work on WSL?

@vinaykharayat afaik you can now install ollama directly on Windows!

But I code in WSL only - I mean I have set everything up in WSL, not in Windows, so ollama is also installed in WSL.

nvm, it started working when I used the Windows version.

Works in my setup now. :) Thanks a lot for looking into and fixing this.