acheong08 / ChatGPT-to-API

Scalable unofficial ChatGPT API for production.

ChatGPT-to-API doesn't work with socks5h proxy.

hongyi-zhao opened this issue

See below for more details:

werner@X10DAi:~/Public/repo/github.com/acheong08/ChatGPT-to-API.git$ cat proxies.txt
socks5h://127.0.0.1:18890

Start it with:

werner@X10DAi:~/Public/repo/github.com/acheong08/ChatGPT-to-API.git$ SERVER_PORT=18080 ./freechatgpt

Test it with:

$ curl http://127.0.0.1:18080/v1/chat/completions -d '{"messages": [{"role": "user", "content": "Say this is a test!"}],"stream": true}'
{"error":"error sending request"}

The corresponding ChatGPT-to-API log output is as follows:

werner@X10DAi:~/Public/repo/github.com/acheong08/ChatGPT-to-API.git$ SERVER_PORT=18080 ./freechatgpt
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env: export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /ping                     --> main.main.func1 (4 handlers)
[GIN-debug] PATCH  /admin/password           --> main.passwordHandler (5 handlers)
[GIN-debug] PATCH  /admin/tokens             --> main.tokensHandler (5 handlers)
[GIN-debug] PATCH  /admin/puid               --> main.puidHandler (5 handlers)
[GIN-debug] PATCH  /admin/openai             --> main.openaiHandler (5 handlers)
[GIN-debug] OPTIONS /v1/chat/completions      --> main.optionsHandler (4 handlers)
[GIN-debug] POST   /v1/chat/completions      --> main.nightmare (5 handlers)
[GIN-debug] GET    /v1/engines               --> main.engines_handler (5 handlers)
[GIN-debug] GET    /v1/models                --> main.engines_handler (5 handlers)
2023/07/31 22:45:54 87971 127.0.0.1:18080
[GIN] 2023/07/31 - 22:46:01 | 500 |  231.936897ms |       127.0.0.1 | POST     "/v1/chat/completions"
^C2023/07/31 22:46:29 87971 Received SIGINT.
2023/07/31 22:46:29 87971 Waiting for connections to finish...
2023/07/31 22:46:29 87971 Serve() returning...

The proxy itself works smoothly:

werner@X10DAi:~$ curl -I -x socks5://127.0.0.1:18890 https://www.google.com
HTTP/2 200 
content-type: text/html; charset=ISO-8859-1
content-security-policy-report-only: object-src 'none';base-uri 'self';script-src 'nonce-JsAubfkCByMPIVf-Eq9AJw' 'strict-dynamic' 'report-sample' 'unsafe-eval' 'unsafe-inline' https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp
p3p: CP="This is not a P3P policy! See g.co/p3phelp for more info."
date: Mon, 31 Jul 2023 14:57:53 GMT
server: gws
x-xss-protection: 0
x-frame-options: SAMEORIGIN
expires: Mon, 31 Jul 2023 14:57:53 GMT
cache-control: private
set-cookie: 1P_JAR=2023-07-31-14; expires=Wed, 30-Aug-2023 14:57:53 GMT; path=/; domain=.google.com; Secure
set-cookie: AEC=Ad49MVEGIxjjVXmc5xpTlGuA73aKjDrQlheUJ9vGsuJTArHjLzZ_nGTCpA; expires=Sat, 27-Jan-2024 14:57:53 GMT; path=/; domain=.google.com; Secure; HttpOnly; SameSite=lax
set-cookie: NID=511=GAhRHvmPxnv7KxvzF6mbOdhOdkR2l0chZapInzlcDrC7k8DBiziBlMVaZWV3wcFnfxxP2UfVr8-sRM_pSQuaDlA-A0Js0M0IZlKiPRgYQkXeBhtt-v8W7VkEVD6KNvIpaeokEYvvcIFGDaU-ujjz0zu7TcBIBYwTHUBeq1MWTLw; expires=Tue, 30-Jan-2024 14:57:53 GMT; path=/; domain=.google.com; HttpOnly
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000

werner@X10DAi:~$ curl -I -x socks5h://127.0.0.1:18890 https://www.google.com
HTTP/2 200 
content-type: text/html; charset=ISO-8859-1
content-security-policy-report-only: object-src 'none';base-uri 'self';script-src 'nonce-hY85juwFYAGc73WlJX3FFQ' 'strict-dynamic' 'report-sample' 'unsafe-eval' 'unsafe-inline' https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp
p3p: CP="This is not a P3P policy! See g.co/p3phelp for more info."
date: Mon, 31 Jul 2023 14:57:57 GMT
server: gws
x-xss-protection: 0
x-frame-options: SAMEORIGIN
expires: Mon, 31 Jul 2023 14:57:57 GMT
cache-control: private
set-cookie: 1P_JAR=2023-07-31-14; expires=Wed, 30-Aug-2023 14:57:57 GMT; path=/; domain=.google.com; Secure
set-cookie: AEC=Ad49MVGuz4bm-Ynuu1ydBI4actsLso5tXRwDmv41NN5cxzgilRgx3yKdsWc; expires=Sat, 27-Jan-2024 14:57:57 GMT; path=/; domain=.google.com; Secure; HttpOnly; SameSite=lax
set-cookie: NID=511=Cpasi7cnUdv0hmHSnI1MSckrdT3WlyUi-DU0SRRi5ewrmV4-CzUOtUJLgo9C-lV1sGYDnxf5BxqWTidVYiTq2L1rRPQZkoE_BeaBPP-1RA07HYHnrvmRSLhCN7LSAqarh42BJa-IXgBQlBlfptvVVatoWgK8g117P01u3PJLWmk; expires=Tue, 30-Jan-2024 14:57:57 GMT; path=/; domain=.google.com; HttpOnly
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000

If the proxy were not working, the request would fail with a connection timeout rather than {"error":"error sending request"}.
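
As an extra cross-check, the SOCKS5 proxy can also be exercised directly from Go with golang.org/x/net/proxy. This is only a minimal sketch using the local proxy address shown above; note that Go's SOCKS5 dialer hands the hostname to the proxy for resolution, i.e. it already behaves like curl's socks5h mode:

package main

// Dial https://www.google.com through the local SOCKS5 proxy from Go.
// The hostname is resolved by the proxy itself, matching socks5h semantics.
import (
	"fmt"
	"net/http"
	"time"

	"golang.org/x/net/proxy"
)

func main() {
	dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:18890", nil, proxy.Direct)
	if err != nil {
		panic(err)
	}
	client := &http.Client{
		Transport: &http.Transport{Dial: dialer.Dial},
		Timeout:   15 * time.Second,
	}
	resp, err := client.Head("https://www.google.com")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // expect "200 OK" if the proxy is reachable
}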

But if I switch proxies.txt to the socks5 scheme, the same test succeeds:

werner@X10DAi:~/Public/repo/github.com/acheong08/ChatGPT-to-API.git$ cat proxies.txt 
socks5://127.0.0.1:18890

werner@X10DAi:~/Public/repo/github.com/acheong08/ChatGPT-to-API.git$ SERVER_PORT=18080 ./freechatgpt
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /ping                     --> main.main.func1 (4 handlers)
[GIN-debug] PATCH  /admin/password           --> main.passwordHandler (5 handlers)
[GIN-debug] PATCH  /admin/tokens             --> main.tokensHandler (5 handlers)
[GIN-debug] PATCH  /admin/puid               --> main.puidHandler (5 handlers)
[GIN-debug] PATCH  /admin/openai             --> main.openaiHandler (5 handlers)
[GIN-debug] OPTIONS /v1/chat/completions      --> main.optionsHandler (4 handlers)
[GIN-debug] POST   /v1/chat/completions      --> main.nightmare (5 handlers)
[GIN-debug] GET    /v1/engines               --> main.engines_handler (5 handlers)
[GIN-debug] GET    /v1/models                --> main.engines_handler (5 handlers)
2023/08/01 19:39:03 17950 127.0.0.1:18080
[GIN] 2023/08/01 - 19:39:22 | 200 |   5.21839729s |       127.0.0.1 | POST     "/v1/chat/completions"

$ curl http://127.0.0.1:18080/v1/chat/completions -d '{"messages": [{"role": "user", "content": "Say this is a test!"}],"stream": true}'
data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"role":"assistant"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"content":"This"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"content":" is"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"content":" a"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"content":" test"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{"content":"!"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK","object":"chat.completion.chunk","created":0,"model":"gpt-3.5-turbo-0301","choices":[{"delta":{},"index":0,"finish_reason":"stop"}]}

data: [DONE]
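
For reference, the streamed response above can be consumed programmatically as well. This is a minimal sketch (not project code) that reads the data: lines until the [DONE] sentinel and concatenates the delta content chunks:

package main

// Sketch: POST the same test request and assemble the streamed answer
// from the delta chunks, stopping at the [DONE] sentinel shown above.
import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

func main() {
	payload := []byte(`{"messages":[{"role":"user","content":"Say this is a test!"}],"stream":true}`)
	resp, err := http.Post("http://127.0.0.1:18080/v1/chat/completions", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	var answer strings.Builder
	for scanner.Scan() {
		line := strings.TrimPrefix(scanner.Text(), "data: ")
		if line == "" || line == "[DONE]" {
			continue
		}
		var c chunk
		if err := json.Unmarshal([]byte(line), &c); err != nil {
			continue
		}
		if len(c.Choices) > 0 {
			answer.WriteString(c.Choices[0].Delta.Content)
		}
	}
	fmt.Println(answer.String()) // "This is a test!"
}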

And, obviously, {"error":"error sending request"} is not the desired response, so I still don't understand what your comment above means.

This is a bug with https://github.com/bogdanfinn/tls-client/; socks5h is not supported, as far as I know.
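
Since curl's socks5h scheme only changes where DNS resolution happens (on the proxy instead of locally), and Go SOCKS5 dialers generally pass the hostname to the proxy anyway, a possible workaround (my assumption, not something the project ships) is to rewrite the scheme before the proxy URL reaches tls-client:

package main

// Hypothetical helper: map socks5h:// to socks5:// so the URL is accepted
// by tls-client; remote DNS resolution is still done by the SOCKS5 proxy.
import (
	"fmt"
	"strings"
)

func normalizeProxyURL(raw string) string {
	if strings.HasPrefix(raw, "socks5h://") {
		return "socks5://" + strings.TrimPrefix(raw, "socks5h://")
	}
	return raw
}

func main() {
	fmt.Println(normalizeProxyURL("socks5h://127.0.0.1:18890")) // socks5://127.0.0.1:18890
}

In practice, the simplest fix is just to write socks5://127.0.0.1:18890 in proxies.txt, as the successful test above shows.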