moyix / fauxpilot

FauxPilot - an open-source GitHub Copilot server

Any way of porting this to Colab?

datheia opened this issue · comments

Most of us don't have GPUs powerful enough to even run models with 6 billion parameters. Can we port this to Colab in any way so it would be more accessible?

This is a really interesting idea! Do you know if Google Colab has any way to listen on a network port that can be reached from the outside world?

I remember a storyteller AI (KoboldAI, absolute banger). It opens an HTTP tunnel through Cloudflare; you connect to that Cloudflare URL from your browser and can interact with it just fine, sending a prompt request and getting a response back from the server. Maybe something like that could be done in this case?
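
On the client side that pattern is just an HTTP POST to whatever public URL the tunnel hands out. A rough sketch of what that could look like (the tunnel URL below is made up, and I'm assuming an OpenAI-style completions endpoint like the one FauxPilot exposes):

```python
# Sketch only: query a FauxPilot-style server through a public tunnel URL.
# The URL is a placeholder; the real one would be printed when the tunnel starts.
import requests

TUNNEL_URL = "https://example-tunnel.trycloudflare.com"  # hypothetical tunnel address

resp = requests.post(
    f"{TUNNEL_URL}/v1/engines/codegen/completions",  # OpenAI-style completions route
    json={"prompt": "def fib(n):", "max_tokens": 32},
    timeout=30,
)
print(resp.json())
```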

I also just found this, which looks like it might be a good fit since the FauxPilot server is already using Flask: https://www.geeksforgeeks.org/how-to-run-flask-app-on-google-colab/
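
From that article, the key piece is flask-ngrok, which wraps `app.run()` and prints a public ngrok URL when the cell starts. A minimal sketch of what a notebook cell could look like (this is just the tunneling part, not the FauxPilot server itself; assumes `pip install flask-ngrok`):

```python
# Minimal sketch: expose a Flask app from a Colab notebook through ngrok.
from flask import Flask
from flask_ngrok import run_with_ngrok

app = Flask(__name__)
run_with_ngrok(app)  # prints a public *.ngrok.io URL when app.run() is called

@app.route("/health")
def health():
    return "ok"

app.run()  # in Colab this blocks the cell and keeps serving requests
```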

I will look into putting together a notebook! :) Thanks for the great suggestion!

I think Colab has started to ban tunneling. Colab FAQ

[screenshot of the relevant Colab FAQ section on disallowed usage]

I used to use a similar tool called colabcode, which let you fire up VS Code on a remote Colab instance, but with recent changes in their policy this is no longer allowed.

You can read more about this here: abhishekkrthakur/colabcode#109

Well, isn't this project a server for code generation? Does it launch VS Code? You might have been banned because of the third rule. Again, about the tunneling: the Colab AI application I mentioned above (KoboldAI) is still up and running, even though it uses tunneling. Here's the link to their Colab.

How about serverless?
With Cloud Run on Google Cloud Platform you pay per HTTP request, and the first 2 million requests are free.

Hmm, would serverless work when the models are really big, though? Loading the 16B model from disk to GPU takes almost a minute, so I wouldn't want to have to do that on every completion request...
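
For context, that cold-start cost is roughly what you'd see just timing a Hugging Face checkpoint load; a rough sketch (model name and dtype are illustrative, and it needs a GPU with enough memory):

```python
# Rough sketch: time how long a large checkpoint takes to load from disk into GPU memory.
import time

import torch
from transformers import AutoModelForCausalLM

start = time.time()
model = AutoModelForCausalLM.from_pretrained(
    "Salesforce/codegen-16B-multi",  # ~16B-parameter CodeGen checkpoint (illustrative)
    torch_dtype=torch.float16,       # half precision so it fits on a single large GPU
)
model.to("cuda")
print(f"load + transfer took {time.time() - start:.1f}s")
```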

Not quite on topic, but: if the model layers were broken up, people could form small networks to share compute.

I think the network latency involved would make that pretty slow?