
2020-11-18

OpenLDAP local root access to OLC cn=config database

If, like me, you converted your OpenLDAP server installation from slapd.conf to OLC (On-Line Configuration), aka cn=config, you may find that local root privileges to modify your config are not configured; i.e. doing the following will fail:

ldapmodify -Y EXTERNAL -H ldapi:/// -f some_changes.ldif

This is because the olcRootDN for the cn=config database is probably not set up right. Mine looked something like:

dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: {0}to *  by * none
olcRootDN: cn=root,dc=example,dc=com

It may or may not also have an olcRootPW (root password) set.
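You can check the current value by dumping the config database (database number 0) with slapcat as root, for example:

# slapcat -n 0 | grep olcRootDN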

You can query who you appear to be to the LDAP server by using ldapwhoami and specifying the SASL mechanism (-Y) and the LDAP URI (-H):

# ldapwhoami -Y EXTERNAL -H ldapi:///
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn:gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth

In this case, the EXTERNAL mechanism uses Linux IPC (inter-process communication): the server obtains the UID and GID of the client process, which is possible because the connection comes in over the Unix domain socket transport (ldapi://).

The fix is straightforward. First, create a file to replace the olcRootDN field:

# replace_olcrootdn.ldif
dn: olcDatabase={0}config,cn=config
replace: olcRootDN
olcRootDN: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth

If you also have an olcRootPW (root password) attribute, add another modification to delete it; a combined example is sketched after the ldapmodify command below. Then, apply the changes:

# ldapmodify -D cn=root,dc=example,dc=com -w somepassword -H ldapi:/// -f replace_olcrootdn.ldif
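For reference, a combined LDIF that replaces olcRootDN and also deletes an existing olcRootPW might look like the following sketch (keep the olcRootPW part only if that attribute is actually set):

dn: olcDatabase={0}config,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
-
delete: olcRootPW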

And that should do it. From now on, you should be able to modify the OLC with “-Y EXTERNAL -H ldapi:///” if you are root. 

This post is expanded from this answer at Server Fault.

2020-09-14

Switching to SSSD (from nslcd) in Bright Cluster Manager 9

In Bright Cluster Manager 9.0, cluster nodes still use nslcd for LDAP authentication. Since we have sssd working in Bright CM 6 (by necessity due to an issue with Univa Grid Engine and nslcd; see previous posts), we might as well change things over to sssd on Bright CM 9, too. The cluster now runs RHEL8.

First, we disable the nslcd service on all nodes. It was a little non-obvious how to do this, since trying to remove the service from the device's services list did nothing: the service just kept coming back enabled. I.e., doing "remove nslcd ; commit" and then "list" shows nslcd reappearing.

Examining that service in the device view showed that it “belongs to a role,” but it is not listed in any role, nor in the category of that node.

[foocluster]% category use compute-cat
[foocluster->category[compute-cat]]% services
[foocluster->category[compute-cat]->services]% list
Service (key)            Monitored  Autostart
------------------------ ---------- ----------

It turns out that nslcd is part of a hidden role which is not visible to the user. So, you have to write a loop to disable nslcd on each node. Within cmsh:

[foocluster]% device
[foocluster->device]% foreach -v -n node001..node099 (services; use nslcd; set monitored no; set autostart no)
[foocluster->device]% commit

To modify the node image, I modify the image on one node, and then do “grabimage -w” in cmsh on the head node.

You will need to install these packages:

  • openldap-clients
  • sssd
  • sssd-ldap 
  • openssl-perl
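A minimal sketch of installing these, assuming you do it directly on the node whose image you will later grab (dnf being the RHEL 8 package manager):

# dnf install openldap-clients sssd sssd-ldap openssl-perl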

Next, sssd setup. This may depend on your installation. The installation here uses the LDAP server set up by Bright CM, which uses SSL for encryption with both server and client certificates. (All self-signed with a dummy CA in the usual way.) The following /etc/sssd/sssd.conf shows only the non-empty sections. Your configuration may need to be different depending on your environment. 

[domain/default]
id_provider = ldap
autofs_provider = ldap
auth_provider = ldap
chpass_provider = ldap
ldap_uri = ldaps://fooserver.cm.cluster
ldap_search_base = dc=cm,dc=cluster
ldap_id_use_start_tls = False
ldap_tls_reqcert = demand
ldap_tls_cacertdir = /cm/local/apps/openldap/etc/certs
cache_credentials = True
enumerate = False
entry_cache_timeout = 600
ldap_network_timeout = 3
ldap_connection_expire_timeout = 60

[sssd]
config_file_version = 2
services = nss, pam
domains = default

[nss]
homedir_substring = /home

Then, lock down ownership and permissions on the file:

# chown root:root /etc/sssd/sssd.conf
# chmod 600 /etc/sssd/sssd.conf
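If you additionally install the sssd-tools package (not in the list above), you can sanity-check the file before starting the service:

# sssctl config-check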

I did not have to change /etc/openldap/ldap.conf.

The next step is to switch to using sssd for authentication. But first, stop and disable the nslcd service: 

# systemctl stop nslcd
# systemctl disable nslcd

The old authconfig-tui utility is gone. The new one is authselect: you will have to force it to overwrite existing authentication configurations.

# authselect select sssd --force

There are other options to authselect, e.g. “with-mkhomedir”. See authselect(8) and authselect-profiles(5) for details. Other options may also require other packages to be installed.
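For example, enabling home directory creation on first login might look like this (a sketch; on RHEL 8 the with-mkhomedir feature also needs the oddjob-mkhomedir package installed and the oddjobd service enabled):

# authselect select sssd with-mkhomedir --force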

Then, start and enable the sssd service. Check that user ID info can be retrieved:

# id someuser

Back on the head node, do “grabimage -w”. 
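In cmsh, that looks something like this (the node name is a placeholder for whichever node you modified):

[foocluster]% device use node001
[foocluster->device[node001]]% grabimage -w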

Then, modify the node category to add the sssd service, setting it to autostart and to be monitored.
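A sketch of that in cmsh, using the category from the earlier example (prompts abbreviated after the add):

[foocluster]% category use compute-cat
[foocluster->category[compute-cat]]% services
[foocluster->category[compute-cat]->services]% add sssd
[...->services*[sssd*]]% set autostart yes
[...->services*[sssd*]]% set monitored yes
[...->services*[sssd*]]% commit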

2019-11-20

Even more about SSSD + PAM + LDAP -- password is still expired even right after being changed by user

This keeps coming back to haunt me, partly because of patchy and disparate documentation, and partly because I do not have a rock-solid understanding of all the details of SSSD + PAM + LDAP. (Previous post.)

This is for RHEL6.

Here is the issue: my users kept running into the following situation. Upon logging in, they were shown:

WARNING: Your password has expired.
You must change your password now and login again!
Changing password for foouser.
Current password:
And then it automatically logs you out, which is expected behavior.

However, when they login again (with the password that they just set), they are again presented with the same password expiration warning. This repeats ad infinitum.

When I check the OpenLDAP server, and ldapsearch for the user record, it does show that the password was changed by that user on the correct date.
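That check was something along these lines (the base DN and user name are placeholders for your site, and depending on your ACLs you may need to bind as the user or an authorized DN to read shadowLastChange):

# ldapsearch -x -H ldaps://10.9.8.7/ -b dc=example,dc=com "(uid=foouser)" shadowLastChange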

The key bit that I seem to have missed: a setting in /etc/pam_ldap.conf. You have to set the secure LDAP URI, since SSSD password transmissions must be encrypted:
uri ldaps://10.9.8.7/
This should match the URI specified in /etc/openldap/ldap.conf:
URI ldaps://10.9.8.7/
And the setting in /etc/sssd/sssd.conf:

[domain/default]
...
ldap_uri = ldaps://10.9.8.7/
...

And that fixed it.

While you are at it, I thought I might as well specify SHA512 for the hash in /etc/pam_ldap.conf:
pam_password sha512
However, I RTFMed: "sha512" is not an option for pam_password. This setting controls hashing the password locally, before passing it on to the LDAP server. The default is "clear", i.e. transmit the password in the clear to the LDAP server and assume the LDAP server will hash it if necessary. Another option is "crypt", which uses crypt(3):
pam_password crypt
However, there does not seem to be a way to specify which hash algorithm crypt(3) should use.

I do not think this is a big issue, because the connection to the LDAP server is encrypted anyway.

Why was this a surprise? Well, because in /etc/nsswitch.conf we specified sss as the source for the passwd, shadow, and group name services:

passwd:     files sss
shadow:     files sss
group:      files sss
I.e., everything should be mediated through SSSD, and the SSSD config does have the correct URI.
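A quick way to confirm that lookups are resolving through this stack at all (the user name is a placeholder):

# getent passwd foouser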

2019-10-23

Migrating LDAP server to a different machine and changing to OLC and from bdb to mdb (lmdb)

At our site, we have LDAP (openldap 2.4) running on one server. It uses the old slapd.conf configuration, and the Berkeley DB (bdb) backend.

As part of planning for the future, I want to move this LDAP server to a different machine. I wanted to also migrate to using on-line configuration (OLC), where the static slapd.conf file is replaced with the cn=config online LDAP "directory". This allows configuration changes to be made at runtime without restarting slapd.

I also wanted to change from using the Berkeley DB backend to the Lightning Memory-Mapped DB (LMDB; known as just "mdb" in the configs). LMDB is what OpenLDAP recommends as it is quick (everything in memory) and easier to manage (fewer tuning options, no db files to mess with). From here, I will refer to this as "mdb" per the slapd.conf line.

After doing the migration once, leaving the backend as bdb, I found out it was easier to do all three things at once: migrate to a different server, convert from slapd.conf to OLC, and change backend to mdb.

This is a multi-stage process, but nothing too strenuous.
  1. Dump the directory data to an LDIF:  slapcat -n 1 -l n1.ldif
  2. Copy n1.ldif to new machine
  3. Copy slapd.conf to new machine
  4. Edit new slapd.conf on new machine: change the line "database bdb" to "database mdb" (a sketch of the resulting stanza follows this list)
    1. Remove any bdb-specific options: idletimeout, cachesize
  5. Import n1.ldif: slapadd -f /etc/openldap/slapd.conf -l n1.ldif
  6. Convert slapd.conf to OLC: slaptest -f slapd.conf -F slapd.d ; chown -R ldap:ldap slapd.d
  7. Move slapd.conf out of the way: mv /etc/openldap/slapd.conf /etc/openldap/slapd.conf.old
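For reference, the database section of the edited slapd.conf might end up looking something like the following sketch (the suffix, rootdn, directory, and maxsize values here are assumptions; keep your own):

     database mdb
     suffix "dc=example,dc=com"
     rootdn "cn=root,dc=example,dc=com"
     directory /var/lib/ldap
     maxsize 1073741824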
Other complications:
  • You will probably need to generate a new SSL certificate for the new server
  • That may mean signing the new cert with your own (or established) CA
    • Or, you can set all your client nodes to not require the cert: in /etc/openldap/ldap.conf add “TLS_REQCERT never”. Fix up the /etc/sssd/sssd.conf file similarly: add ldap_tls_reqcert = never
  • Fix up your /etc/sssd/sssd.conf on the client nodes to point at the new server (e.g. ldap_uri)
Note that it is pretty easy to back things out and start from scratch. To restore the new server to a "blank slate" condition, delete everything in /etc/openldap/slapd.d/

     service slapd stop
     cd /etc/openldap/slapd.d
     rm -rf *

CAUTION: This process seems to create the n0 db with an olcRootDN of “cn=config” where it should be “cn=admin,dc=example,dc=com” (or whatever your LDAP rootDN should be; for Bright-configured clusters, that would be “cn=root,dc=cm,dc=cluster”). I.e. you need to have:

dn: olcDatabase={0}config,cn=config
olcRootDN: cn=root,dc=cm,dc=cluster

but for olcRootDN, you have cn=config, instead.

To fix it, I dumped n0 and n1, deleted /etc/openldap/slapd.d, and “restored” from the dumped n0 and n1. Basically, emulating a restore from backup.

  • Dump {0} to n0.ldif (commands sketched after this list)
  • Shut down slapd
  • Modify n0.ldif to have the needed olcRootDN (as above)
  • Move away the old /etc/openldap/slapd.d/ directory: mv /etc/openldap/slapd.d /etc/openldap/slapd.d.BAK
  • Create a new slapd.d directory: mkdir /etc/openldap/slapd.d 
  • Add the dumped n0.ldif as the new config: slapadd -n 0 -F /etc/openldap/slapd.d -l n0.ldif
  • Fix permissions: chown -R ldap:ldap /etc/openldap/slapd.d ; chmod 700 /etc/openldap/slapd.d
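The dump and restore steps are the usual slapcat/slapadd invocations; a sketch, including re-adding the data database per the description above (adjust paths as needed):

     slapcat -n 0 -l n0.ldif
     slapcat -n 1 -l n1.ldif
     slapadd -n 0 -F /etc/openldap/slapd.d -l n0.ldif
     slapadd -n 1 -F /etc/openldap/slapd.d -l n1.ldif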


2014-05-14

SSSD setup in a Bright Cluster

UPDATE 2015-05-28: Also, on the master node, the User Portal web application needs a setting in /etc/openldap/ldap.conf:

    TLS_REQCERT      never


At my current job, we use Bright Cluster Manager and Univa Grid Engine on RHEL 6.5. We were seeing issues where submitted jobs ended up in an "Error" state, especially if many jobs were submitted in a short period, either an array or a shell script loop running qsub iteratively. The error reason was:

    can't get password entry for user "juser". Either the user does not exist or NIS error!

However, logging into the assigned compute node and running "id", or even some C code to do user lookups, worked fine.

By default, our installation used nslcd for LDAP lookups. Univa suggested switching to SSSD (System Security Services Daemon) as Red Hat had phased out nslcd. The Fedora site has a good overview.

The switch to using SSSD turned out to be fairly easy, with some hidden hiccups. Running authconfig-tui, keeping the existing settings, and then hitting "OK" immediately turned off nslcd and started up sssd instead. All the attendant changes were made, too: chkconfig settings, /etc/nsswitch.conf. However, I found that users could not change passwords on the login nodes. They could log in, but the passwd command failed with "system offline".
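For the record, the non-interactive equivalent on RHEL 6 is roughly (a sketch):

    authconfig --enablesssd --enablesssdauth --update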

Turns out, SSSD requires an encrypted connection to the LDAP server for password changes. This is a security requirement so that the new password is not sent in the clear from the client node to the LDAP server. (See this forum post by sgallagh.) This means an SSL certificate needs to be created. Self-signed will work if the following line is added to /etc/sssd/sssd.conf:

     [domain/default]
     ...
     ldap_tls_reqcert = never

To create the self-signed cert:
     root # cd /etc/pki/tls/certs
     certs # make slapd.pem
     certs # chown ldap:ldap slapd.pem

Then, edit /cm/local/apps/openldap/etc/slapd.conf to add the following lines:
    TLSCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
    TLSCertificateFile /etc/pki/tls/certs/slapd.pem
    TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem

Also, make sure there is an access rule covering shadowLastChange; my config did not grant access to it:
    access to attrs=loginShell,shadowLastChange
        by group.exact="cn=rogroup,dc=cm,dc=cluster" read
        by self write
        by * read

Then, restart the ldap service.
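A sketch, assuming the service is named "ldap" as above (on a stock RHEL install the OpenLDAP server service is typically called slapd instead):

    service ldap restart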

UPDATE: Adding a link to official Red Hat documentation: https://access.redhat.com/solutions/42746