cyberark / KubiScan

A tool to scan a Kubernetes cluster for risky permissions

Listing secrets not captured as a risky rule

prasenforu opened this issue

My RBAC (ServiceAccount, Role & RoleBinding) is as follows; it contains a role that allows listing secrets.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: listsecrets
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-list-secrets
rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-list-secrets
subjects:
- kind: ServiceAccount
  name: listsecrets
  namespace: testing
roleRef:
  kind: Role
  name: role-list-secrets
  apiGroup: rbac.authorization.k8s.io

[screenshot]

But kubiscan -rr does not capture/show it as a risky rule.

[screenshot]

Not sure what the criteria for a risky rule are?
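
As an aside, one way to double-check that the binding really grants list on secrets, independent of KubiScan, is a SubjectAccessReview through the Python Kubernetes client. This is only a sketch; it assumes admin credentials in the current context and the testing namespace used by the RoleBinding above:

# Sketch: ask the API server whether the 'listsecrets' service account may
# list secrets in 'testing' (names taken from the RBAC YAML above).
from kubernetes import client, config

config.load_kube_config()
review = client.V1SubjectAccessReview(
    spec=client.V1SubjectAccessReviewSpec(
        user="system:serviceaccount:testing:listsecrets",
        resource_attributes=client.V1ResourceAttributes(
            namespace="testing", verb="list", resource="secrets"),
    )
)
result = client.AuthorizationV1Api().create_subject_access_review(review)
print(result.status.allowed)  # True means the permission is really granted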

Interesting, I will try to reproduce it on my machine and update you.
It is weird because it is supposed to find it.

But I have two thoughts:

  1. Maybe it is related to the context. When you run "kubectl get roles", can you see the role you created?
  2. You didn't specify a namespace in the YAML file, so I suppose it will be automatically assigned to the default namespace, but maybe KubiScan doesn't see the namespace, can't find it, and doesn't do the correct comparison (see the sketch right after this list).
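
For what it is worth, both thoughts can be checked at once with the Python Kubernetes client: print which context is active and which roles are visible in the namespace. This is only a sketch under the assumption that the role lives in the testing namespace mentioned later in the thread:

# Sketch: show the active kubeconfig context and the roles visible in
# 'testing', which is roughly what a kubeconfig-based scanner sees too.
from kubernetes import client, config

contexts, active = config.list_kube_config_contexts()
print("active context:", active["name"])

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()
for role in rbac.list_namespaced_role(namespace="testing").items:
    for rule in role.rules or []:
        print(role.metadata.name, rule.resources, rule.verbs)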

I am using OpenShift, where I can set my namespace (oc project <namespace>).

Q2 answer:
So there is no need to specify the namespace; it will take the namespace automatically. But it will NOT take the default namespace.

Q1 answer:

[root@ocpdns ocpscan]# oc get roles -n testing
NAME
role-list-secrets

Because it is not captured as a risky rule, the pod mapped to the associated service account is also not shown as a risky pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-wo-sa
spec:
  automountServiceAccountToken: false
  serviceAccount: listsecrets
  containers:
    - name: pod-wo-sa
      image: bkimminich/juice-shop
      ports:
        - containerPort: 3000

[screenshot]

I understand.
I didn't test KubiScan on OpenShift, so this is why I didn't encounter it.
Can you show me the list of contexts you have in OpenShift,
using "oc config get-contexts"?
I want to see which context is currently being used and which are the admin contexts.

CURRENT   NAME                                          CLUSTER            AUTHINFO                               NAMESPACE
          default/10-138-0-16:8443/admin                10-138-0-16:8443   admin/10-138-0-16:8443                 default
          default/cluster-1/admin                       cluster-1          admin/cluster-1                        default
          event-controller/10-138-0-16:8443/admin       10-138-0-16:8443   admin/10-138-0-16:8443                 event-controller
          heptio-ark/10-138-0-16:8443/admin             10-138-0-16:8443   admin/10-138-0-16:8443                 heptio-ark
          heptio-ark/10-138-0-17:8443/admin             10-138-0-17:8443   admin/10-138-0-17:8443                 heptio-ark
          kubewatch-kubewatch-10-138-0-16:8443          10-138-0-16:8443   kubewatch-kubewatch-10-138-0-16:8443   kubewatch
          kubewatch/10-138-0-16:8443/admin              10-138-0-16:8443   admin/10-138-0-16:8443                 kubewatch
          logging/10-138-0-16:8443/admin                10-138-0-16:8443   admin/10-138-0-16:8443                 logging
          loki/10-138-0-16:8443/admin                   10-138-0-16:8443   admin/10-138-0-16:8443                 loki
          loki/10-138-0-17:8443/admin                   10-138-0-17:8443   admin/10-138-0-17:8443                 loki
          ocp-view/10-138-0-16:8443/admin               10-138-0-16:8443   admin/10-138-0-16:8443                 ocp-view
          ocp-view/10-138-0-17:8443/admin               10-138-0-17:8443   admin/10-138-0-17:8443                 ocp-view
          ocpwatch/10-138-0-16:8443/admin               10-138-0-16:8443   admin/10-138-0-16:8443                 ocpwatch
          openshift-logging/10-138-0-16:8443/admin      10-138-0-16:8443   admin/10-138-0-16:8443                 openshift-logging
          openshift-monitoring/10-138-0-16:8443/admin   10-138-0-16:8443   admin/10-138-0-16:8443                 openshift-monitoring
          sample-app/10-138-0-16:8443/admin             10-138-0-16:8443   admin/10-138-0-16:8443                 sample-app
          sample-app/10-138-0-16:8443/pkar              10-138-0-16:8443   pkar/10-138-0-16:8443                  sample-app
          sample-app/10-138-0-17:8443/admin             10-138-0-17:8443   admin/10-138-0-17:8443                 sample-app
*         security/10-138-0-16:8443/admin               10-138-0-16:8443   admin/10-138-0-16:8443                 security
          security/10-138-0-16:8443/pkar                10-138-0-16:8443   pkar/10-138-0-16:8443                  security
          testing/10-138-0-16:8443/admin                10-138-0-16:8443   admin/10-138-0-16:8443                 testing
          testing/10-138-0-16:8443/pkar                 10-138-0-16:8443   pkar/10-138-0-16:8443                  testing

Please check the code; it looks like it only works with ClusterRole.
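
For context, RoleBindings are namespaced objects while ClusterRoleBindings are cluster-scoped, so a scan that only walked cluster-scoped bindings would miss rolebinding-list-secrets. A small sketch with the Python client (the testing namespace is assumed from the thread) shows the two listings side by side:

# Sketch: RoleBindings live inside a namespace, ClusterRoleBindings do not,
# so they are fetched through different API calls.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

print([b.metadata.name for b in rbac.list_namespaced_role_binding("testing").items])
print([b.metadata.name for b in rbac.list_cluster_role_binding().items][:5])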

Any update?

Please check now.
I noticed an issue with the indentation in the risky-roles YAML file which prevented one of the risky roles from being loaded.

By the way, can you add me on Twitter (@g3rzi)? I would like to consult with you on other stuff related to Kubernetes.

OK, will add.

But I don't think the issue was in your YAML; you fixed it in your code.

Now with -rr it shows the roles, but it is still NOT captured with -rp (risky pods).

There was an issue with the YAML that provides the roles you want it to capture. There was a wrong indent on one of the roles after the list-secrets role, which caused the list-secrets entry to be ignored when I load the YAML.
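
To illustrate the failure mode (the real risky_roles.yaml layout may differ, so this is only a hypothetical example), a single mis-indented key can parse without an error but silently detach from the entry it belongs to:

import yaml

# Correct: two separate rule entries under "rules".
good = """
rules:
- resources: ["secrets"]
  verbs: ["list"]
- resources: ["pods/exec"]
  verbs: ["create"]
"""

# Broken: the second entry's "verbs" was dedented to the top level, so that
# rule silently loses its verbs instead of raising a parse error.
bad = """
rules:
- resources: ["secrets"]
  verbs: ["list"]
- resources: ["pods/exec"]
verbs: ["create"]
"""

print(yaml.safe_load(good)["rules"])
print(yaml.safe_load(bad)["rules"])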

I will try to reproduce the -rp issue and then fix it.

If you look at my RBAC YAML at the top of the discussion, there were no issues with it.

True, so it was your YAML that was wrong?

I didn't speak about your RBAC YAML. I spoke about the risky_roles.yaml that KubiScan uses to check for risky permissions. There was an indentation problem with it.

Oh! Sorry.

I am very sorry.

It's OK, don't worry :)

Let me know once it is fixed so I can close this; it's a long discussion. :)

I tried to reproduce it with this YAML:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: testing
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: listsecrets
  namespace: testing
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-list-secrets
rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-list-secrets
subjects:
- kind: ServiceAccount
  name: listsecrets
  namespace: testing
roleRef:
  kind: Role
  name: role-list-secrets
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine3
  namespace: testing
spec:
  containers:
  - name: alpine3
    image: alpine:3
    command: ["sleep", "99d"]
  serviceAccountName: listsecrets
EOF

Notice that I have created a service account named listsecrets in the testing namespace.
I mounted this user to a pod inside the testing namespace.
With this YAML I can find the risky pod:
[screenshot]
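
As a quick sanity check (a sketch with the Python client; the pod and namespace names come from the reproduction YAML above), you can confirm that the pod really runs with that service account:

# Sketch: verify the reproduction pod uses the 'listsecrets' service account.
from kubernetes import client, config

config.load_kube_config()
pod = client.CoreV1Api().read_namespaced_pod(name="alpine3", namespace="testing")
print(pod.spec.service_account_name)  # expect: listsecrets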

When I used your YAML, I noticed that you didn't use the testing namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: listsecrets

For troubleshooting, check with -rs that the user listsecrets exists in the risky subjects.

I am using OpenShift, where I can set my namespace (oc project <namespace>).

So there is no need to specify the namespace; it will take the namespace automatically. But it will NOT take the default namespace.

Risky Roles (-rr) shows the correct output ...

But Risky Subjects (-rs) & Risky Pods (-rp) do not capture it ...

[screenshot]

But I have another pod (ocpscan-dc-1-7rwng) running under the security namespace which uses a ClusterRole, NOT a Role, and that pod IS captured by Risky Subjects (-rs) & Risky Pods (-rp).

[screenshot]

@prasenforu,
sorry for the late response. I wasn't able to reproduce it, so I need your help here.
I want you to edit the file utils.py inside the engine folder.
Replace the function get_all_risky_subjects() (rows 216 - 227):

KubiScan/engine/utils.py

Lines 216 to 227 in 35d6c04

def get_all_risky_subjects():
    all_risky_users = []
    all_risky_rolebindings = get_all_risky_rolebinding()
    passed_users = {}
    for risky_rolebinding in all_risky_rolebindings:
        for user in risky_rolebinding.subjects:
            # Removing duplicated users
            if ''.join((user.kind, user.name, str(user.namespace))) not in passed_users:
                passed_users[''.join((user.kind, user.name, str(user.namespace)))] = True
                all_risky_users.append(Subject(user, risky_rolebinding.priority))
    return all_risky_users

With this:

def get_all_risky_subjects():
    all_risky_users = []
    all_risky_rolebindings = get_all_risky_rolebinding()
    passed_users = {}
    for risky_rolebinding in all_risky_rolebindings:
        print('{0}:{1}'.format(risky_rolebinding.namespace, risky_rolebinding.name))
        for user in risky_rolebinding.subjects:
            print('\t{0}:{1}'.format(user.namespace, user.name))
            # Removing duplicated users
            if ''.join((user.kind, user.name, str(user.namespace))) not in passed_users:
                passed_users[''.join((user.kind, user.name, str(user.namespace)))] = True
                all_risky_users.append(Subject(user, risky_rolebinding.priority))

    return all_risky_users

I added two print statements.
I want you to run the scan with -rs and send me the output (including the new prints).
This will help us see whether role-list-secrets is included in this function.

@prasenforu I found the bug with the namespace on -rs and -rp. If you have a RoleBinding with a service account subject that has no namespace, Kubernetes treats the service account as if it has the namespace of the RoleBinding. I added support for this scenario and it worked for me, so I think it will solve the problem you had.
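
The idea behind the fix can be sketched like this (a minimal illustration of the described behavior, not the actual KubiScan patch; the helper name and the BindingSubject type are made up for the example):

from collections import namedtuple

# Hypothetical helper, not KubiScan code: a ServiceAccount subject with no
# namespace inherits the RoleBinding's namespace before any comparison.
BindingSubject = namedtuple('BindingSubject', ['kind', 'name', 'namespace'])

def effective_subject_namespace(subject, rolebinding_namespace):
    if subject.kind == 'ServiceAccount' and not subject.namespace:
        return rolebinding_namespace
    return subject.namespace

# A subject written without a namespace, bound by a RoleBinding in 'testing',
# now resolves to 'testing' instead of failing the namespace comparison.
sa = BindingSubject(kind='ServiceAccount', name='listsecrets', namespace=None)
print(effective_subject_namespace(sa, 'testing'))  # -> testing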

Thanks 👍
Will check from my side.

@prasenforu did you have time to check it?

No, man. Did not get a chance to check because of Covid.

Stay safe & take care.

Hi @prasenforu
How are you? I hope everything is okay on your side. Did you have some time to look at this issue?

Doing good, thanks.

Sorry didn't get a chance to look.

Will do end of coming week.

Checked on an old OpenShift version and it looks OK; I still need to test on a newer OpenShift version (unfortunately I do not have an environment for that).

Will update you if I can replicate it on a newer OpenShift version; I expect it will work :)

Anyway, thanks for the notification.

Great to hear :)
I will close it for now, and if you encounter it again you can open a new ticket or re-open this one.
Thanks for your update.