Setting up multiple servers is as easy as replicating the user database. Since the default ‘file’ user database is stored in the normal file system, you can use any common tools to replicate a file system. Network file systems like NFS (properly secured by, e.g., a point-to-point symmetrically encrypted IPSEC connection) and file-synchronizing tools like ‘rsync’ are typical choices.
The secondary server should be configured just like the master server. If you use the ‘file’ database over NFS you do not have to make any modifications. If you use, e.g., a cron job to ‘rsync’ the directory every hour or so, you may want to add a ‘--read-only’ flag to the Shisa ‘db’ definition (see Shisa Configuration). That way, nobody will be lured into creating or changing information in the database on the secondary server, since such changes would only be overwritten during the next synchronization.
db --read-only file /usr/local/var/backup-shishi
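For example, the hourly ‘rsync’ job could be driven from the secondary server’s crontab. The following is a minimal sketch under assumptions: the master hostname and the database paths are placeholders to adapt to your installation (here the master keeps its database in /usr/local/var/shishi and the secondary reads the copy from /usr/local/var/backup-shishi, as in the ‘db’ definition above).

# Runs on the secondary: pull the master's user database every hour.
# Hostname and paths are examples; adjust to your installation.
0 * * * *  rsync -az --delete kdc-master.example.org:/usr/local/var/shishi/ /usr/local/var/backup-shishi/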
The ‘file’ database is designed so it doesn’t require file locking in the file system, which may be unreliable in some network file systems or implementations. It is also designed so that multiple concurrent readers and writers may access the database without causing corruption.
Warning: The last paragraph is currently not completely accurate. There may be race conditions with concurrent writers. None should cause infinite loops or data loss, but unexpected results might occur if two writers try to update information about a principal simultaneously.
If you use a remote LDAP server or SQL database to store the user database, and access it via a Shisa backend, you have to make sure your Shisa backend handles concurrent writers properly. If you use a modern SQL database, this is probably not a concern. If it is a problem, you may be able to work around it by implementing some kind of synchronization or semaphore mechanism. If all this sounds too complicated, you can set up the secondary servers as ‘--read-only’ servers, although you will lose some functionality (such as changing passwords via the secondary server, or updating timestamps when the last ticket request occurred).
One feature that is of particular use for users with remote databases (be it LDAP or SQL) is the “database override” feature. Using this, you can keep the security-critical principals (such as the ticket-granting ticket) in local file system storage, but use the remote database for user principals. Of course, you must keep the local file system storage synchronized between all servers, as before. Here is an example configuration.
db --read-only file /var/local/master
db ldap kdc.example.org ca=/etc/shisa/kdc-ca.pem
This instructs the Shisa library to access the two databases sequentially, for each query using the first database that knows about the requested principal. If you put the ‘krbtgt/REALM’ principal in the local ‘file’ database, it will override the LDAP interface. Naturally, you can have as many ‘db’ definition lines as you wish.
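For instance, the ticket-granting principal could be created in the local ‘file’ database with the ‘shisa’ tool. This is a minimal sketch, assuming EXAMPLE.ORG as a placeholder realm and that the local ‘file’ database is writable when the principal is first created (the ‘--read-only’ flag would be added afterwards).

# Create the realm and its ticket-granting principal in the local
# database on each server (EXAMPLE.ORG is a placeholder realm).
shisa -a EXAMPLE.ORG
shisa -a EXAMPLE.ORG krbtgt/EXAMPLE.ORG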
Users with remote databases can also investigate a so-called high-availability mode. This is useful if you wish your Kerberos servers to continue operating even when the remote database is offline. This is achieved via the ‘--ignore-errors’ flag in the database definition. Here is a sample configuration.
db --ignore-errors ldap kdc.example.org ca=/etc/shisa/kdc-ca.pem
db --read-only file /var/cache/ldap-copy
This instructs the Shisa library to try the LDAP backend first but, if it fails, to continue by trying the operation on a read-only local ‘file’ based database instead of returning an error. Of course, write requests will still fail, but that may be better than halting the server completely. To make this work, you first need to set up a cron job on, say, an hourly basis, to make a copy of the remote database and store it in the local file database. That way, when the remote server goes away, fairly current information will still be available locally.
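The copying job might look like the following crontab entry. Shishi does not ship a tool for this, so ‘export-ldap-to-shisa’ here stands for a hypothetical site-local script that dumps the remote database into the ‘file’ format under /var/cache/ldap-copy.

# Hourly copy of the remote database into the local fallback
# (export-ldap-to-shisa is a hypothetical site-local script).
0 * * * *  export-ldap-to-shisa kdc.example.org /var/cache/ldap-copy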
If you also wish to experiment with read-write failover, here is an idea for the configuration.
db --ignore-errors ldap kdc.example.org ca=/etc/shisa/kdc-ca.pem
db --ignore-errors --read-only file /var/cache/ldap-copy
db file /var/cache/local-updates
This is similar to the previous example, but it will ignore errors reading from and writing to the first two databases, ultimately causing write attempts to end up in the final ‘file’ based database. Of course, you would need to create tools to feed back any local updates made while the remote server was down. It may also be necessary to create a special backend for this purpose, which can auto-create principals as they are used.
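As a sketch of the feed-back step, something like the following could be run once the remote server is reachable again. The replay script and its interface are assumptions, not Shishi tools; you would implement them for your particular backend.

# Replay locally captured updates into the remote database once it is
# back (replay-shisa-updates is a hypothetical site-local script; the
# overflow database under /var/cache/local-updates would be cleared
# afterwards).
replay-shisa-updates /var/cache/local-updates kdc.example.org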
We finish with an example that demonstrates all the ideas presented.
db --read-only file /var/local/master
db --ignore-errors ldap kdc.example.org ca=/etc/shisa/kdc-ca.pem
db --ignore-errors --read-only file /var/cache/ldap-copy
db file /var/cache/local-updates