inlets / inletsctl

Create inlets servers on the top cloud platforms

Home Page: https://docs.inlets.dev/

Broken firewall rules for GCE

alexellis opened this issue

I think there may be a conflict in the naming of the firewall rules for inlets vs inlets-pro, which could lead to issues for users.

First of all I set up a host with inlets OSS, and a firewallRule was created, presumably for port 8080:

alex@alexx:~/dev/sponsors-functions$ inletsctl create --provider gce --zone us-central1-a --project-id $PROJECTID --access-token-file key.json
Using provider: gce
Requesting host: priceless-blackburn2 in us-central1-a, from gce
2020/02/09 09:20:52 inlets firewallRule does not exist
2020/02/09 09:20:52 Creating inlets firewallRule opening port: 8080
Host: priceless-blackburn2|us-central1-a|alexellis, status: active
[1/500] Host: priceless-blackburn2|us-central1-a|alexellis, status: 
[2/500] Host: priceless-blackburn2|us-central1-a|alexellis, status: 
[3/500] Host: priceless-blackburn2|us-central1-a|alexellis, status: 

Then I set up a node for inlets-pro, which said the rule already existed:

alex@alexx:~/dev/sponsors-functions$ inletsctl create --provider gce --zone us-central1-a --project-id $PROJECTID --access-token-file key.json --remote-tcp localhost
Using provider: gce
Requesting host: objective-khorana7 in us-central1-a, from gce
2020/02/09 09:21:20 inlets firewallRule exists
Host: objective-khorana7|us-central1-a|alexellis, status: active
[1/500] Host: objective-khorana7|us-central1-a|alexellis, status: 
[2/500] Host: objective-khorana7|us-central1-a|alexellis, status: active

The problem here is that inlets-pro and inlets OSS use different ports, so they need separate firewall rules, perhaps with different names too (see the sketch after this list):

Pro: 8123 (auto-tls + control-port), plus * - any port the client can open

OSS: 8080 - control-port, 80 - data port
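
A minimal sketch of what separately named rules could look like with the compute/v1 Go API, assuming each product gets its own rule name and target tag; the names inlets / inlets-pro, the project ID and the key.json path are illustrative placeholders, not what inletsctl currently does. Note that an Allowed entry with no Ports opens every port for that protocol, which is what inlets-pro needs.

package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	projectID := "my-project" // placeholder project ID

	// Build the Compute Engine client from the same service-account key
	// that inletsctl is given via --access-token-file.
	svc, err := compute.NewService(ctx, option.WithCredentialsFile("key.json"))
	if err != nil {
		log.Fatal(err)
	}

	rules := []*compute.Firewall{
		{
			// inlets OSS: control-plane on 8080, data-plane on 80.
			Name:         "inlets",
			Allowed:      []*compute.FirewallAllowed{{IPProtocol: "tcp", Ports: []string{"80", "8080"}}},
			SourceRanges: []string{"0.0.0.0/0"},
			TargetTags:   []string{"inlets"},
		},
		{
			// inlets-pro: 8123 for the auto-tls control-plane, plus every
			// other TCP port, since the client decides which ports to open.
			// Omitting Ports on the Allowed entry opens all TCP ports.
			Name:         "inlets-pro",
			Allowed:      []*compute.FirewallAllowed{{IPProtocol: "tcp"}},
			SourceRanges: []string{"0.0.0.0/0"},
			TargetTags:   []string{"inlets-pro"},
		},
	}

	for _, rule := range rules {
		if _, err := svc.Firewalls.Insert(projectID, rule).Context(ctx).Do(); err != nil {
			log.Printf("could not create rule %s: %v", rule.Name, err)
		}
	}
}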

This probably affects inlets-operator too.

(screenshot: inlets-issue)

I actually have no idea how this works for OSS, since port 80 should be open and is not.

I forwarded out OpenFaaS and couldn't access it on the public IP, but port 80 was open.

@alexellis Port 80 is opened by applying the http-server target tag to the exit node while provisioning. Every project comes with some default firewall rules.

I just ran through this, and my screenshot shows that port 80 was closed, not open. I had to edit the firewall rule in the GCP console to access it.
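
One way to check what is actually open, rather than going by the rule name, is to list the project's firewall rules and print their target tags and allowed ports. A minimal diagnostic sketch, assuming the compute/v1 API and the same key.json service-account file as above; it is not part of inletsctl.

package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	projectID := "my-project" // placeholder project ID

	svc, err := compute.NewService(ctx, option.WithCredentialsFile("key.json"))
	if err != nil {
		log.Fatal(err)
	}

	// List every firewall rule and show which tags and ports it targets,
	// so "the rule exists" can be compared with what it actually allows.
	list, err := svc.Firewalls.List(projectID).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, fw := range list.Items {
		for _, allowed := range fw.Allowed {
			fmt.Printf("%s\ttags=%v\t%s:%v\n", fw.Name, fw.TargetTags, allowed.IPProtocol, allowed.Ports)
		}
	}
}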

This is what I got for inlets-pro: a firewall rule with the wrong port again. It should have at least 8123 open for the auto-tls control-port, and then * open for any port the user decides to punch out via the client.

It's impossible to set a restrictive port range for inlets-pro since the client decides which ports to open.

(screenshot: no-8123)

@adamjohnson01 does this apply at all to the EC2 provisioner? I think only GCE + EC2 have security group / firewall config added.
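
Whether the EC2 provisioner has the same clash would need checking there. For comparison only, opening distinct ingress ports on a security group with aws-sdk-go looks roughly like the sketch below; the region, security-group ID and ports are placeholder assumptions, not values taken from inletsctl.

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// One ingress entry per OSS port (80, 8080); an inlets-pro group would
	// instead need a wide FromPort/ToPort range, since the client picks the ports.
	_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String("sg-0123456789abcdef0"), // placeholder group ID
		IpPermissions: []*ec2.IpPermission{
			{
				IpProtocol: aws.String("tcp"),
				FromPort:   aws.Int64(8080),
				ToPort:     aws.Int64(8080),
				IpRanges:   []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
			},
			{
				IpProtocol: aws.String("tcp"),
				FromPort:   aws.Int64(80),
				ToPort:     aws.Int64(80),
				IpRanges:   []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}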

Also broken in the operator -> inlets/inlets-operator#46

This should fix all of the problems: #58
cc @alexellis
cc @burtonr