hcavarsan / kftray

A cross-platform system tray application for managing multiple kubectl port-forward commands, with support for UDP and proxy connections through k8s clusters

Home Page: https://kftray.app/

AWS EKS Cluster Context recognised, but fails to load any data

nicc777 opened this issue

Trying to connect to EKS, which uses the AWS CLI as an authentication helper (it gets the temporary token/credentials via the aws command), seems to fail.

Below is a sample config, with sensitive information redacted:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED==
    server: https://REDACTED.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:000000000000:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:000000000000:cluster/my-cluster
    user: arn:aws:eks:us-east-1:000000000000:cluster/my-cluster
  name: arn:aws:eks:us-east-1:000000000000:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:000000000000:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:000000000000:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: SOME_PROFILE
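
For context on what this exec block does: a Kubernetes client runs the listed command with the given args/env and parses the ExecCredential JSON it prints to obtain a bearer token. Below is a minimal Python sketch of that handshake; the fake helper is a stand-in for `aws eks get-token` so the example runs anywhere, and it is not kftray's actual code:

```python
import json
import os
import subprocess
import sys

def token_from_exec(command, args, extra_env=None):
    """Run an exec credential helper (like `aws eks get-token`) and
    return the bearer token from the ExecCredential it prints."""
    env = {**os.environ, **(extra_env or {})}
    out = subprocess.run([command, *args], env=env, check=True,
                         capture_output=True, text=True)
    cred = json.loads(out.stdout)
    assert cred["kind"] == "ExecCredential"
    return cred["status"]["token"]

# Stand-in helper so the sketch runs without AWS access:
fake_args = ["-c",
             "import json; print(json.dumps({"
             "'apiVersion': 'client.authentication.k8s.io/v1beta1',"
             "'kind': 'ExecCredential',"
             "'status': {'token': 'k8s-aws-v1.EXAMPLE'}}))"]
print(token_from_exec(sys.executable, fake_args))
```

If the helper never runs, or its output cannot be parsed, the client authenticates with no token at all, which matches the "context loads but no data" symptom.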

Screenshot of the problem: https://imgur.com/a/LcYvKV0

As you can see, the EKS context loads, but no namespaces or any other information is pulled - I'm guessing it fails to authenticate properly using the AWS CLI helper.

If you add an ability to create a log file with debug messages, I could perhaps give more meaningful feedback.

Looks like a very promising tool that will really help a lot of users, so I am hoping it is a simple fix required.

EDIT: Some more context:

OS: Windows 11
AWS CLI: aws-cli/2.15.5 Python/3.11.6 Windows/10 exe/AMD64 prompt/off
Kubectl version: 1.24 (matches server version)

hey @nicc777, thanks for your feedback! I'll investigate the error on my local machine and provide you with updates.

@nicc777 I made the fix here: #37

It was released in version v0.3.3. Can you test whether it works for you in this version?

also, I opened an issue to add a way to enable debug logs: #36

thank you for the feedback!

Thank you - that was a very quick turnaround time!

A quick test showed it was not working, but I am going to try a couple of things first thing tomorrow morning. Our setup is a little tricky so I just want to experiment with different setups a little bit. I will update this issue again with more details.

Cool!

I'll try simulating with the same kubeconfig using STS profiles.

For my local test, I used LocalStack and this kubeconfig:

apiVersion: v1
clusters:
  - cluster:
        certificate-authority-data: XXXX
        server: https://localhost.localstack.cloud:4510
    name: arn:aws:eks:us-east-1:000000000000:cluster/cluster1
contexts:
  - context:
        cluster: arn:aws:eks:us-east-1:000000000000:cluster/cluster1
        user: arn:aws:eks:us-east-1:000000000000:cluster/cluster1
    name: arn:aws:eks:us-east-1:000000000000:cluster/cluster1
kind: Config
preferences: {}
users:
  - name: arn:aws:eks:us-east-1:000000000000:cluster/cluster1
    user:
        exec:
            apiVersion: client.authentication.k8s.io/v1beta1
            args:
              - --region
              - us-east-1
              - eks
              - get-token
              - --cluster-name
              - cluster1
              - --output
              - json
            command: aws

Maybe I should implement STS profile authentication in the app. I'll take a look.

I just did a quick test with STS and a specific profile in LocalStack, and it worked...

So, if you still encounter the error tomorrow, please let me know, and I'll investigate further.

I tried a couple of things, but nothing works...

The AWS command in the config works, so I am not sure whether the token is passed properly to your app:

c:\Users\REDACTED> aws eks get-token --region us-east-1 --cluster-name my-cluster --profile MY_PROFILE
apiVersion: client.authentication.k8s.io/v1beta1
kind: ExecCredential
spec: {}
status:
  expirationTimestamp: '2023-12-29T04:40:14Z'
  token: k8s-aws-v1.REDACTED
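
One detail worth flagging in the output above: it is YAML, which suggests the AWS CLI's default output format is set to yaml. Kubernetes exec credential plugins are expected to print the ExecCredential as JSON, so a YAML response may be unparseable to the client; the maintainer's LocalStack kubeconfig earlier in the thread pins --output json in the exec args, which guards against exactly this. This is a hypothesis, not a confirmed root cause. A quick demonstration that YAML output is not valid JSON:

```python
import json

# The exec credential protocol expects JSON on stdout. If the AWS CLI's
# configured output format is yaml (as the transcript above suggests),
# the helper's response cannot be parsed as JSON:
yaml_response = (
    "apiVersion: client.authentication.k8s.io/v1beta1\n"
    "kind: ExecCredential\n"
    "status:\n  token: k8s-aws-v1.REDACTED\n"
)

def is_json(text):
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_json(yaml_response))                 # False
print(is_json('{"kind": "ExecCredential"}'))  # True
```

A cheap check is to run the get-token command with an explicit `--output json` and compare behaviour.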

Other things I considered:

  • The EKS cluster is on a private VPC and I connect via VPN. I don't think this should be a problem, as the DNS resolves to the correct address.
  • I set up a local k3s cluster to verify that the app actually works on my machine, and it does. I think that excludes something on my machine blocking the application.

I think some logging would go a long way, specifically to see the following:

  • The host connected to
  • The resolved IP address
  • The result of the authentication helper command
  • Perhaps output similar to kubectl --v=9
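
The items on this wishlist amount to logging each step of connection setup. As a rough illustration in Python (not kftray's actual Rust code; the host name here is just an example), the first two items look like:

```python
import socket

def log_connection_target(host):
    """Print the kind of details requested above: the host being
    connected to and the address it resolves to."""
    ip = socket.gethostbyname(host)
    print(f"connecting to host: {host}")
    print(f"resolved address:  {ip}")
    return ip

log_connection_target("localhost")  # resolves without network access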

I am not familiar with the stack you use, but the Kubernetes client has some debug logging built in.

EDIT: Perhaps the logging options could be command line options? I have tried launching the app from the command line to see if it will produce any output, but unfortunately it goes to the background immediately.

hey @nicc777, thank you for the suggestion! I have implemented a debug option.

To enable it on Windows, add these two variables and then run kftray.exe in the command prompt:
set RUST_LOG=trace
set KFTRAY_DEBUG=enabled

Then, all the app logs will be directed to C:/Users/$USER/.kftray/app.logs

I tested it on Windows, and the logs were generated correctly.

Can you update the app to this release and send me the logs?

Thanks - this is rather frustrating, and I am not sure what the problem could be.

I executed the following command:

set RUST_LOG=trace & set KFTRAY_DEBUG=enabled & "C:\Program Files\kftray\kftray.exe"

An app.log file is created and it contains only the following lines:

[2023-12-30T06:59:19Z WARN  tao::platform_impl::platform::event_loop::runner] NewEvents emitted without explicit RedrawEventsCleared
[2023-12-30T06:59:19Z WARN  tao::platform_impl::platform::event_loop::runner] RedrawEventsCleared emitted without explicit MainEventsCleared

No matter how many times I restart the app or try to configure a new config, this is all I get.

EDIT:

I now also tried PowerShell:

$env:RUST_LOG = "trace"
$env:KFTRAY_DEBUG = "enabled"
& 'C:\Program Files\kftray\kftray.exe'

A much more substantial file was now created - the output is here: https://gist.github.com/nicc777/bb2f2d986adbcd989eb5b1f74e4291b5

I can only see HTTP connections to GitHub (it looks like the app is checking for updates?). The attempt to connect to EKS is never logged.

So, in the dialog, the context is picked up from the kubectl configuration file, but it does not appear to ever try to pull actual data.

Out of desperation, I am going to try capturing some network packets in Wireshark to see if that connection attempt is ever made.

Edit 2:

I did the network capture and can confirm that the connection to the EKS cluster is never even attempted. There are also no connection attempts to Amazon APIs, so it appears the AWS CLI command is never run to obtain the token.

@nicc777 hmm, it's very odd. I performed some local tests:

  • installed a clean Windows 11 OS with the AWS CLI and kubectl.
  • created an EKS cluster on a new AWS account (not in LocalStack) and granted access to the aws-auth ConfigMap for the teste user.
  • configured EKS using the command aws eks update-kubeconfig --name my-cluster --profile test to simulate the same scenario as yours.
  • started kftray in debug mode via the command prompt and PowerShell (through PowerShell it behaved as you reported, without network traffic, but via the command prompt it generated the logs correctly).
  • I was able to configure the port forward successfully via kftray.

this is the screenshot of the configured kftray:
[screenshots]

this is the generated logs:
https://gist.github.com/hcavarsan/71a149e3e4324a961b56855dd0af5049

this is my kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
    server: https://TEST.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:TEST:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:TEST:cluster/my-cluster
    user: arn:aws:eks:us-east-1:TEST:cluster/my-cluster
  name: arn:aws:eks:us-east-1:TEST:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:TEST:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:445147183740:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: teste

it seems to be something specific to your environment, maybe a permission issue or a conflict between kftray versions (two kftray versions installed).

can you remove all installations of the app and install the latest version again? It's possible that some configuration or cache was left on your machine as well.

If it still doesn't work, please inform me of any other specific details about your environment. This will allow me to attempt to replicate the issue locally again.

Thank you!

EDIT:

In the Windows command prompt, if I run the whole command in one line, it does not enable logging. But if I run the 'set' commands first and then execute kftray, the logs appear correctly.

Could you try this test? Instead of executing this command:

C:\Users\Test>  set RUST_LOG=trace & set KFTRAY_DEBUG=enabled & "C:\Program Files\kftray\kftray.exe"

execute this:

C:\Users\Test>  set RUST_LOG=trace
C:\Users\Test>  set KFTRAY_DEBUG=enabled
C:\Users\Test>  "C:\Program Files\kftray\kftray.exe"

This way, the logs should be written to the file correctly, and the log entries for the failing calls should appear.
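
A plausible explanation for the one-liner failing: in cmd.exe, `set VAR=value & next-command` stores everything up to the `&`, including the trailing space, in the variable. The chained command therefore exports RUST_LOG="trace " and KFTRAY_DEBUG="enabled " (with trailing spaces), and any exact-match or level-name check on those values fails. Whether kftray's check is an exact match is an assumption on my part, but the trailing-space effect itself is easy to illustrate:

```python
# In cmd.exe, `set RUST_LOG=trace & ...` keeps the space before `&` as part
# of the value, so the one-liner produces "trace " rather than "trace".
oneliner_value = "trace "   # what the chained single-line `set` stores
separate_value = "trace"    # what a standalone `set RUST_LOG=trace` stores

print(oneliner_value == "trace")   # False: exact match fails
print(separate_value == "trace")   # True
```

Quoting also sidesteps this in PowerShell, where `$env:RUST_LOG = "trace"` assigns exactly the quoted string, which matches the observation that the PowerShell invocation produced logs.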

Thanks for all your effort - I will try this again in the new year. Hope you and your projects have great success in 2024 !!

@nicc777 appreciate the thanks! wishing you success in the new year. feel free to reach out if needed. Happy 2024!

Hi - I finally figured out what the issue was on my side, and I am happy to report that it now works! Thanks - I think we can close this issue as resolved.

@nicc777 thanks for the feedback!