sonata-nfv / tng-industrial-pilot

5GTANGO Smart Manufacturing Pilot

UseCase #3: On-demand AR service

danielbehnke opened this issue

Provide relevant data for AR maintenance. One idea is to use data from VNF Edge Analytics Engine here.

  • Idea: The HoloLens connects to the Internet through a VPN VNF
  • The VNF should contain a simple VPN client (e.g., OpenVPN) and connect to a VPN server that we set up on some VM at UPB
  • Challenges:
    • How to run the VPN client on Kubernetes and the server on a VM, and how to connect the two
    • How to run and set up the VPN client on Kubernetes such that it can be accessed from the outside. Bind a physical network interface to the container? Goal: Connect the access point of the HoloLens (or a laptop) directly to that interface.
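As a starting point, the client side could be sketched with plain Docker before moving to Kubernetes. This is only a rough sketch: the image name, config path, and mount are placeholders; the actual client config has to come from the VPN server we set up at UPB.

```shell
# Sketch: run an OpenVPN client inside a container (names are placeholders).
# NET_ADMIN and access to /dev/net/tun are needed to create the tunnel device.
docker run -d --name vpn-client \
    --cap-add=NET_ADMIN \
    --device /dev/net/tun \
    -v "$PWD/client.ovpn:/etc/openvpn/client.ovpn:ro" \
    some-openvpn-image \
    openvpn --config /etc/openvpn/client.ovpn
```

If NET_ADMIN alone turns out not to be enough, `--privileged` is the heavier fallback.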

@KaiHannemann Start by researching how to set up a VPN with Kubernetes: how to set up the VPN server (on a VM here), how to connect the VPN client on Kubernetes with a physical interface, etc.

Put relevant notes and findings on the wiki page: https://github.com/sonata-nfv/tng-industrial-pilot/wiki/use-case-3

Use this issue for questions, discussions, etc.

The VPN server on the VM is running. Port 1194 still needs to be opened for outside access, though. Proceeding to the VPN client in Kubernetes.
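For reference, on a typical Ubuntu VM the port could be opened roughly like this (assuming ufw is the firewall in use; adjust accordingly if the UPB VM uses plain iptables or a cloud security group):

```shell
# Hypothetical: allow OpenVPN's default port (1194/udp) through ufw
sudo ufw allow 1194/udp
sudo ufw status verbose   # verify the rule is active
```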

If we run into problems with OpenVPN, apparently WireGuard is a good and very simple alternative. Just leaving it here for future reference if we need it.

@KaiHannemann Running containers in privileged mode in Kubernetes should work without a problem :)
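For native K8s (outside the SP), privileged mode is set per container via the pod spec's securityContext. A minimal sketch, with the image name as a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vpn-client
spec:
  containers:
  - name: vpn-client
    image: some-openvpn-image   # placeholder
    securityContext:
      privileged: true
      # alternatively, grant only what's needed:
      # capabilities:
      #   add: ["NET_ADMIN"]
```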

For future reference (once we have tested it successfully on native K8s): if we want to start a privileged container via the Tango SP, we need to define it in the VNFD. In the CDU section, add for the corresponding container:

```yaml
parameters:
    capabilities:
    -   NET_ADMIN
    -   SYS_ADMIN
```

The capabilities correspond to the standard Linux kernel capabilities. For our use case, NET_ADMIN should suffice.

Currently, adding the openvpn-systemd-resolved package causes a problem with docker.socket.
The logger returns the error: `logger: socket: No such file or directory`

Inside the Docker (client) or at the VPN server? What if you remove the package again?

Inside the client. Removing it is possible, but we then need another fix for the DNS problem. Will try that.

I got the setup to work on my VM; the current push on my fork is up to date.
Had to adjust the proxy to let all traffic pass. This will need more work but does its job for now.
Tried it on my local setup; it did not work. Did some troubleshooting and found that the proxy is functioning but the VPN runs into a DNS error. Will work on a fix for this; it seems to be server-side.

@KaiHannemann I was struggling to reproduce the steps for use case 3 that we discussed in the meetings. Could you please document the exact steps and commands to run the container and test it with curl?

I started here: https://github.com/sonata-nfv/tng-industrial-pilot/wiki/use-case-3#steps
Please check, update, correct, and extend the steps.

Currently I'm stuck testing the container with curl; I'm not sure what the correct command is.
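Something along these lines should work, assuming Squid is listening on its default port 3128 (the IP below is a placeholder; replace it with the container's actual address):

```shell
# Hypothetical test: request ifconfig.me via the Squid proxy in the container
PROXY=10.0.0.5:3128   # placeholder container IP + Squid's default port
curl -x "http://$PROXY" http://ifconfig.me
# Without -x, curl bypasses the proxy entirely, so the returned IP
# should differ between the two cases if the VPN tunnel is up.
```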

The current issue with the container outside of my VM is that the /etc/resolv.conf file does not get updated. It works on the VM but not on my local Docker installation. Working on a fix.
Edit: It seems this bug has existed since 2013 and affects different Linux versions.
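One common workaround when /etc/resolv.conf inside a container goes stale is to pin the container's DNS server explicitly at start time instead of relying on Docker to propagate the host's resolver config (8.8.8.8 and the image name below are just placeholders):

```shell
# Hypothetical workaround: set the container's DNS server explicitly
docker run --dns 8.8.8.8 \
    --cap-add=NET_ADMIN --device /dev/net/tun \
    some-openvpn-image
```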

Either way, please document all steps to reproduce what you have done so far and how to reproduce your problem. Ideally, also how to configure the server.

Status update:

  • Using the Proxy-VPN-Container and starting it as Docker container works
  • When configuring the proxy, Internet access (e.g., YouTube videos) is only possible while the container is running
  • Prepared descriptors for deployment on k8s and 5GTANGO SP. Needs privileged mode for creating the tunnel.
  • So far not successful: Without privileged mode and just the NET_ADMIN capability, the tunnel can't be opened. With privileged mode it works, but Squid returns a DNS error:
Squid error page (excerpt):

```html
<p>The following error was encountered while trying to retrieve the URL: <a href="http://ifconfig.me/">http://ifconfig.me/</a></p>

<blockquote id="error">
<p><b>Unable to determine IP address from host name <q>ifconfig.me</q></b></p>
</blockquote>

<p>The DNS server returned:</p>
<blockquote id="data">
<pre>No DNS records</pre>
</blockquote>

<p>This means that the cache was not able to resolve the hostname presented in the URL. Check if the address is correct.</p>
```

Not sure where the issue comes from and how to deal with it.

Also updated the wiki: https://github.com/sonata-nfv/tng-industrial-pilot/wiki/use-case-3


After Felipe's patch, privileged containers also work on the Tango SP on int-sp-ath (and probably soon on the others).

Now the error with the DNS resolution remains. Maybe check this: https://serverfault.com/questions/536975/dns-issue-with-squid

Not sure why it works with Docker but not K8s.

Let's test UC3 again with plain Docker deployment (should work) and with deployment on native K8s as well as through the Tango SP. @KaiHannemann Use the Tango machine that I mentioned today both for the k8s deployment and to check/debug the deployment with the Tango SP. I already uploaded NS3: https://int-sp-ath.5gtango.eu/service-management/network-services/services
I got a DNS error from Squid in both these cases, but we'll have to test more extensively.

If this is the case even though both are using the same container and have privileged mode, Manuel's suggestion was to log into the running container in K8s and try to curl from within the container. If it doesn't work, maybe the combination of K8s + VPN leads to DNS issues. Let's test and google into that direction.
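The suggested in-container check could look like this (the pod name is a placeholder; nslookup may need to be replaced by whatever resolver tool the image ships):

```shell
# Hypothetical debugging session inside the running K8s pod
kubectl exec -it vpn-proxy-pod -- sh
# then, inside the container:
curl http://ifconfig.me    # does name resolution work from inside at all?
cat /etc/resolv.conf       # which DNS server is the pod actually using?
nslookup ifconfig.me       # can that server resolve the name through the VPN?
```

Comparing /etc/resolv.conf between the plain-Docker and the K8s deployment should show whether kube-dns/CoreDNS is the difference.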

Also update the wiki with any relevant info

Tried again with plain Docker deployment:

  • Running on extra VM on VirtualBox locally, everything works as expected
  • Also works fine on VM inside UPB's cluster
  • Running on the machine of the K8s master, even the plain Docker deployment (no K8s, no Tango yet) doesn't work and I get the DNS error! However, plain `curl ifconfig.me` without the proxy does work.

This suggests it isn't a K8s issue (at least not yet) but rather an issue with running the container on the specific machine that is used for K8s and Tango.

Also, connecting to the running container from another machine in the same network to use it as a proxy doesn't work: the connection times out.

I still don't really get why connecting locally to the deployed container works fine.

Simpler approach: For a potential live demo, prepare a container without the VPN client and with just the proxy. We can still argue that it provides connectivity on demand and can limit traffic to certain destinations.

Due to the issues with the VPN described above, we drop the VPN for the live demo and just use the proxy. Hence, the proxy is deployed on demand to provide connectivity. If desired, the proxy could be configured to only allow access to pre-defined resources.
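Restricting the proxy to pre-defined resources could be sketched in squid.conf roughly like this (the domain list is a placeholder):

```
# Hypothetical squid.conf fragment: only allow selected destinations
acl allowed_sites dstdomain .example.com .5gtango.eu
http_access allow allowed_sites
http_access deny all
```

Squid evaluates http_access rules in order, so the final `deny all` blocks everything not explicitly whitelisted above it.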

This version of NS3 now works fine also when deployed through the Tango SP on K8s.