nwtgck / piping-server

Infinitely transfer between every device over pure HTTP with pipes or browsers


Too early to destroy?

mooglejp opened this issue

receiver.res.connection.destroy();

At this point, I think some readers may still be reading.
The client often fails with an error like:
"transfer closed with outstanding read data remaining."

@mooglejp Thanks.

Umm... I think so too. In particular, it is too early in the case of a slow receiver and a fast sender. One thing I'm wondering about is that I haven't seen these errors much myself, and I am one of the heaviest users of Piping Server.

MYNOTE:
Sender: seq 1000 | curl -T - localhost:8080/s
Receiver: curl --limit-rate 1 localhost:8080/s

@mooglejp I think the destroy() may be OK because it is called in the on("close") handler, not in on("end").

piping-server/src/piping.ts

Lines 468 to 476 in 0095a03

senderData.on("close", () => {
for (const receiver of receivers) {
// Close a receiver
if (receiver.res.connection !== undefined) {
receiver.res.connection.destroy();
}
}
this.logger.info(`sender on-close: '${path}'`);
});

The server log in a successful case should look like the following. Only sender on-end is logged, not sender on-close.
[screenshot: server log of a successful transfer]
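
For reference, a tiny standalone sketch (mine, assuming the Node v10-era event semantics this log shows) of how "end" and "close" differ on the sender's request:

import * as http from "http";

const server = http.createServer((req, res) => {
  // Upload completed normally: "end" fires (the successful case above).
  req.on("end", () => console.log("sender on-end"));
  // Connection dropped midway (e.g. Ctrl-C on curl -T): "close" fires
  // without a preceding "end".
  req.on("close", () => console.log("sender on-close"));
  req.pipe(res);
});
server.listen(8080);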

@nwtgck

Sorry for the missing (and wrong) information in my first post.
I saw the message below when I was transferring a large (>= 50MB) file over HTTPS:
curl: (56) Failure when receiving data from the peer.
(not curl: (18) ...)
And I got a file that was missing its last several (kilo)bytes.

After removing that code, the problem seems to be solved.

@mooglejp

Thank you very much for the investigation. That will contribute to the stability of Piping Server.

I'd like to reproduce it in my environment. Could you tell me about yours?

  • Are you using a local or remote Piping Server?
  • What is your OS? Linux/macOS/Windows? Do curl and the server run on the same OS?

@nwtgck Thank you for replying!

Are you using a local or remote Piping Server?

Remote.
Piping Server is running in Docker on Linux on ESXi (behind NAPT).

What is your OS? Linux/macOS/Windows?

Docker Host OS: RancherOS v1.5.4
Docker Image: nwtgck/piping-server:v0.15.5
Client OS: Windows 10 (1809/1903)
curl: 7.55.1 (Windows) libcurl/7.55.1 WinSSL

Do curl and the server run on the same OS?

No, not the same.

@mooglejp

Thank you very much for the details. I'd like to know about some additional settings.

  • You said "file over HTTPS". Does it also happen with HTTP, not only with HTTPS?

@nwtgck
As far as I have tried, it happens only with HTTPS.

@mooglejp Only HTTPS... 🤔, very interesting. I'll try to imitate your setup and reproduce the problem.

As a quick fix, maybe you can use a reverse proxy to provide HTTPS. Here are some examples of reverse proxy settings. They may help you introduce one, although the main purpose of each repo is not to provide the SSL/TLS layer.

But anyway, I'll try to reproduce the error.

@nwtgck

https://github.com/nwtgck/piping-server-http2-docker-compose (I guess this one is the easiest because it requires no code changes)
https://github.com/nwtgck/piping-server-basic-auth-docker-compose
https://github.com/nwtgck/piping-server-load-balancer-docker-compose

Wow! Great work!
Thank you for letting me know.

I'll let you know if I learn anything about the conditions under which it occurs.

@mooglejp Great! Could you tell me which one you used?

Is this issue a Node.js problem? If so, you will be a contributor to Node.js!

Oops. I was just marveling at your work.
Sorry for misleading you 😢
I will try your "piping-server-http2-docker-compose".

@nwtgck Thank you very much! That's no problem. (I think I have a problem with my comprehension...)

@nwtgck

I tried "piping-server-http2-docker-compose".

The message below appears occasionally while repeating the test:
curl: (18) transfer closed with ***** bytes remaining to read

COMMANDS
Sender:
curl.exe --insecure -T pwsh.msi https://192.168.5.5:8443/pwsh
Receiver:
curl.exe --insecure https://192.168.5.5:8443/pwsh -o pwsh.msi
(add --limit-rate 10240000 if not reproduced)


Remote.
Piping Server is running in Docker on Linux on ESXi (behind NAPT).

Additionally, the problem also appeared when I tried within my LAN.

@mooglejp

Thank you very much for the detailed information. I'll try a raw Piping Server to reproduce the problem first.

Currently, I'm trying to reproduce the problem on my machine. First, I'd like to figure out which component has the problem. Here are the candidates:

  • Piping Server
  • Node.js
  • curl
  • Docker
  • Docker base image
  • Server OS
  • Client OS
  • Combinations of the above

nwtgck's Experiment 1

Environment

macOS Mojave 10.14.6

$ node -v
v10.18.0
$ npm -v
6.13.6
$ curl --version
curl 7.67.0 (x86_64-apple-darwin18.7.0) libcurl/7.67.0 SecureTransport zlib/1.2.11
Release-Date: 2019-11-06
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets

Method

  • Run Piping Server with self-signed certificates (revision: 2db3e8f)
  • Send infinite random bytes by curl over HTTPS
  • Receive the bytes and discard over HTTPS
  • The client and server run on the same machine.

Results

Over 50GB of data has been transferred successfully, and it will continue.
The following animated GIF is the result.
[animated GIF: piping-server-local-https-over-50GB.mp4]

It has now reached 90.1GB as I'm writing this.

The problem was not reproduced.


@mooglejp I'll change parts of the settings and try again to reproduce the problem. If you have any advice or suggestions, please feel free to tell me.

nwtgck's Experiment 2

Environments

client

macOS Mojave 10.14.6

$ npm -v
6.13.6
$ curl --version
curl 7.67.0 (x86_64-apple-darwin18.7.0) libcurl/7.67.0 SecureTransport zlib/1.2.11
Release-Date: 2019-11-06
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets

server

Ubuntu 18.04

$ docker -v
Docker version 19.03.5, build 633a0ea838

The client and server are not the same machine, but they are on the same LAN.

Method

  • Run Piping Server with self-signed certificates on the server
    • Docker image piping-server:v0.15.5, image ID: 059238feb5ce
  • Send infinite random bytes by curl over HTTPS on the client
    • cat /dev/urandom | curl -kT - ...
  • Receive the bytes and discard over HTTPS on the client
    • curl -k ... > /dev/null

Results

The following curl output from the receiver is the result.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.4G    0 11.4G    0     0  10.6M      0 --:--:--  0:18:28 --:--:-- 10.7M
^C

Over 11GB of data has been transferred successfully, and it will continue.
The problem was not reproduced.

nwtgck's Experiment 3

Environments

client

Windows 10 on VirtualBox
VirtualBox Version 6.0.14

[screenshot: client environment details]

server

Ubuntu 18.04

$ docker -v
Docker version 19.03.5, build 633a0ea838

The client and server are not the same machine, but they are on the same LAN.

Method

  • A 5GB file was created by fsutil file createnew my5GB.dat 5368709000
  • Run Piping Server with self-signed certificates on the server
    • Docker image piping-server:v0.15.5, image ID: 059238feb5ce
  • Send a 5GB file by curl over HTTPS on the client
    • curl -kT my5GB.dat ...
  • Receive the file over HTTPS on the client
    • curl -k ... > mydiscard

Results

The following curl output from the receiver is the result.

[screenshot: the receiver's curl output]

Over 5GB of data was transferred successfully.
The problem was not reproduced.


@mooglejp I'll change parts of the settings and try again to reproduce the problem. If you have any advice or suggestions, please feel free to tell me.

I think you are right. receiver.res.connection.destroy(); closes the receiver's connection too early. I may fix this issue.

However, receiver.res.connection.destroy(); also serves a purpose: it closes the receivers when a sender is terminated midway, for example with Ctrl-C.
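
A minimal sketch of one possible shape for such a fix, adapted from the handler quoted above (my illustration; the actual change is in 399aa3c and may differ): remember whether the sender ended normally, flush receivers with res.end() in that case, and reserve destroy() for aborts.

let senderEnded = false;
senderData.on("end", () => {
  senderEnded = true;
  for (const receiver of receivers) {
    // Normal completion: let buffered bytes drain to slow receivers.
    receiver.res.end();
  }
});
senderData.on("close", () => {
  if (!senderEnded) {
    // The sender aborted midway (e.g. Ctrl-C): tear receivers down now.
    for (const receiver of receivers) {
      if (receiver.res.connection !== undefined) {
        receiver.res.connection.destroy();
      }
    }
  }
  this.logger.info(`sender on-close: '${path}'`);
});

With this shape, a slow receiver drains its buffers before its connection closes, while an interrupted sender still cleans up its receivers promptly.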

@mooglejp Hi!

I fixed the early destroy issue in 399aa3c. I hope this fix solves your problem. v1.0.1 was released with this fix.

The commit above should have fixed the problem. Please feel free to reopen this issue if the problem happens again.