jp-gouin / helm-openldap

Helm chart for OpenLDAP in high availability with multi-master replication, phpLDAPadmin, and Ltb-Passwd

`customAcls` are not added

rhizoet opened this issue · comments

commented

Describe the bug
I've added customAcls for a read-only admin user. But it seems that the rules are not applied.

To Reproduce
Steps to reproduce the behavior:

  1. Add some customAcls
  2. Do an ldapsearch with the user that should have the read-only admin rights (see the sketch after this list)
  3. Get
# extended LDIF
#
# LDAPv3
# base <dc=regio,dc=digital> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# search result
search: 2
result: 32 No such object

# numResponses: 1
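For reference, step 2 looks roughly like the following — a minimal sketch assuming the service is reachable at ldap.example.com and the read-only user is cn=admin-read,dc=example,dc=com, both taken from the values.yaml below; the password variable is hypothetical:

```bash
# Step 2, roughly: bind as the read-only user and search the whole tree.
# Host, bind DN and base DN are assumptions taken from the values.yaml
# below; $ADMIN_READ_PASSWORD is hypothetical.
ldapsearch -x -H ldaps://ldap.example.com \
  -D "cn=admin-read,dc=example,dc=com" -w "$ADMIN_READ_PASSWORD" \
  -b "dc=example,dc=com" "(objectclass=*)"
```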

Expected behavior
The ACLs should be updated with the customAcls values from the file.

Additional context
values.yaml:

global:
  ldapDomain: ldap.example.com
  adminPassword: password
  configPassword: password
service:
  type: LoadBalancer
customTLS:
  enabled: true
  secret: openldap-ldap
customAcls: |-
  dn: olcDatabase={2}mdb,cn=config
  changetype: modify
  replace: olcAccess
  olcAccess: {0}to *
    by dn.exact=gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth manage
    by * break
  olcAccess: {1}to attrs=userPassword,shadowLastChange
    by self write
    by dn="cn=admin,dc=example,dc=com" write
    by anonymous auth
    by * none
  olcAccess: {2}to *
    by dn="cn=admin-read,dc=example,dc=com" read
    by dn="cn=admin,dc=example,dc=com" write
    by set="user/employeeType & [ldap_reader]" read
    by self read
    by * none
ltb-passwd:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-prod
    hosts:
    - "passwd.ldap.example.com"
    tls:
    - secretName: passwd-ldap
      hosts:
      - passwd.ldap.example.com
phpldapadmin:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-prod
    hosts:
    - "admin.ldap.example.com"
    tls:
    - secretName: admin-ldap
      hosts:
      - admin.ldap.example.com

Hi @rhizoet

What is the expected behavior of `by set="user/employeeType & [ldap_reader]" read`?

commented

Hi @jp-gouin ,

that was a test for "when a user has employeeType set to LDAP_READER, the user should be able to read the whole LDAP".
But that also does not work.
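As an aside, exercising that rule would also need a user carrying the attribute value — a hypothetical sketch, with uid=jdoe,ou=users,dc=example,dc=com standing in for a real entry:

```bash
# Hypothetical: give a test user the employeeType value the set-based ACL
# matches on (uid=jdoe,ou=users,dc=example,dc=com stands in for a real entry).
ldapmodify -x -H ldaps://ldap.example.com \
  -D "cn=admin,dc=example,dc=com" -w "$ADMIN_PASSWORD" <<'EOF'
dn: uid=jdoe,ou=users,dc=example,dc=com
changetype: modify
replace: employeeType
employeeType: ldap_reader
EOF
```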

Ok, did you follow the advanced example?

I think your ldapDomain does not match your ACLs. You have ldap.example.com, so every dn in the ACLs should be cn=admin,dc=ldap,dc=example,dc=com, and the same logic applies to admin-read.
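One quick way to settle which base DN the server actually serves is to read namingContexts from the root DSE — a sketch assuming the same hypothetical host:

```bash
# Read namingContexts from the root DSE to see the base DN the server
# actually serves (anonymous access to the root DSE is usually allowed).
ldapsearch -x -H ldaps://ldap.example.com -s base -b "" namingContexts
```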

commented

But the base DN is dc=example,dc=com. I've only changed the domain later on; the base DN stayed dc=example,dc=com. The admin works for me: I can authenticate to phpLDAPadmin and sync with other apps. But I don't want to sync the whole LDAP with the admin user, who can also write.

And yes, I've followed the advanced examples.

I faced this same issue. I worked around it by exporting my LDAP contents, deleting the deployment including the PVs, deploying again with the new ACLs, and then importing my LDAP contents (roughly the sequence sketched below).

It seems that ACL changes are not applied after the first start.
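The workaround looks roughly like this sketch, assuming release name openldap, namespace ldap, the chart name openldap-stack-ha, and the DNs from the values above — all of these are assumptions to adapt:

```bash
# 1. Export everything while the old deployment is still running.
ldapsearch -x -H ldaps://ldap.example.com \
  -D "cn=admin,dc=example,dc=com" -w "$ADMIN_PASSWORD" \
  -b "dc=example,dc=com" > backup.ldif

# 2. Delete the release and its PVCs (this destroys all data!).
#    The label selector is an assumption; check your PVC labels first.
helm uninstall openldap -n ldap
kubectl delete pvc -n ldap -l app.kubernetes.io/instance=openldap

# 3. Redeploy with the new ACLs, then re-import. -c continues past
#    entries the chart already seeded (base entry, admin user, ...).
helm install openldap helm-openldap/openldap-stack-ha -n ldap -f values.yaml
ldapadd -x -c -H ldaps://ldap.example.com \
  -D "cn=admin,dc=example,dc=com" -w "$ADMIN_PASSWORD" -f backup.ldif
```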

The base image used is Bitnami OpenLDAP.

It seems that indeed the configuration is loaded only on first boot. If you wish to be able to update it afterwards, please raise an issue or PR on the container, as handling the update in the container is the best approach.
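Until the container supports that, the ACLs can in principle be rewritten on a running pod directly in cn=config — a hypothetical sketch, assuming the pod is named openldap-0 and slapd exposes an ldapi:/// socket whose peer credentials match the {0} manage rule from the values above:

```bash
# Hypothetical manual fix: rewrite olcAccess on the live config database.
# Works only if slapd listens on ldapi:/// and the process UID/GID match
# the {0} peercred manage rule (uidNumber=1001, gidNumber=0 here).
kubectl exec -i openldap-0 -- ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth manage by * break
olcAccess: {1}to attrs=userPassword,shadowLastChange by self write by dn="cn=admin,dc=example,dc=com" write by anonymous auth by * none
olcAccess: {2}to * by dn="cn=admin-read,dc=example,dc=com" read by dn="cn=admin,dc=example,dc=com" write by set="user/employeeType & [ldap_reader]" read by self read by * none
EOF
```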

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Are any changes even needed here?

> Are any changes even needed here?

From what I can tell this is an upstream thing, though the related upstream issue bitnami/containers#44545 has been automatically closed by bots.

Wondering if an initContainer could help with this.

I did some tests and came to the conclusion that I'll add functionality upstream to run a script on every start. With that, anyone can just add a small bash script to the container that does whatever they need.
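For illustration, such a script might look like the sketch below; the hook itself does not exist yet, so the file path and invocation are assumptions:

```bash
#!/bin/bash
# Hypothetical start hook: re-apply the desired ACLs on every boot.
# /opt/acls/custom-acls.ldif and the hook mechanism are assumptions.
set -euo pipefail

# Wait until slapd answers on the local socket.
until ldapsearch -Y EXTERNAL -H ldapi:/// -s base -b "cn=config" dn >/dev/null 2>&1; do
  sleep 1
done

ldapmodify -Y EXTERNAL -H ldapi:/// -f /opt/acls/custom-acls.ldif
```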

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

I still plan to fix this upstream eventually.