DLPS Security Alternatives

Sarr Blumson, sarr@umich.edu


I. Requirements (not all apply to all services)

  • Eliminate need for reliance on IP address.
    • Doesn't always get the right people.
    • Not usable by "travelers".
  • Integrate with UM environment.
  • Accommodate subscriber institutions.
  • Accommodate subscriber individuals?
  • Actually be secure.
  • Discourage sharing of credentials with unauthorized individuals.
  • Allow bookmarkable URLs.


II. Possibilities

1. KLP

KLP integrates Kerberos authentication into HTTP access. Using it for UofM access would address all of the local access requirements. Individual subscribers could be accommodated by providing them with UofM accounts; this is a significant administrative burden, but one we would probably have to take on in any form of individual service.


Limited browser support. This is a Netscape browser solution. The KLP team has built versions for Lynx (Unix only, I believe) and Xmosaic, but their long-term commitment is not clear. They have no intention of supporting Internet Explorer.

Subscribing institutions which have a Kerberos 4 environment could adopt KLP for their own use, but they would have to take on the support burden and we would have to maintain Kerberos keys for (all of) their cells on all of our affected servers.

Subscribing institutions which do not have their own Kerberos environment could not use KLP, unless we added all of their users to our realm. This would be an administrative nightmare, and their users would have no investment in protecting their UMCE passwords.


2. CIC model (KLP with proxies)

The CIC model was originally conceived around DCE, but generalizes straightforwardly. A proxy server is created for each subscribing institution which has UofM Kerberos keys, and _it_ annotates URLs it is passing with its own credentials, using (in essence, if not in detail) KLP. Using the authentication hooks in Apache or the MarketPlace Fast-CGI server (which is actually also Apache) the subscribing institution takes responsibility (logically, we could do the work if appropriate) for ensuring that only their authorized users are admitted by their proxy.


Potentially a lot of work for the subscribers.

All of the problems with proxies.

We become dependent on the subscribing institution to choose a mechanism that their users won't give away.


3. Basic Auth with SSL

This is simple names and passwords sent using standard HTTP mechanisms, with an SSL connection used to protect against snooping.
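The server-side check behind this scheme can be sketched as follows. This is a minimal illustration, not any existing DLPS code; the function name and the local password table are assumptions, and a real deployment would store hashed passwords rather than plaintext.

```python
# Sketch of server-side HTTP Basic Auth checking against a local
# password table. Names here (check_basic_auth, PASSWORDS) are
# illustrative only.
import base64

# Hypothetical local table of subscriber credentials (a real system
# would store password hashes, not plaintext).
PASSWORDS = {"jdoe": "s3cret"}

def check_basic_auth(header_value):
    """Validate an 'Authorization: Basic <base64>' header value."""
    scheme, _, encoded = header_value.partition(" ")
    if scheme != "Basic":
        return False
    try:
        user, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:
        return False
    return PASSWORDS.get(user) == password
```

Note that the browser resends this header on every request, which is why the later alternatives worry about per-request checking cost.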


We would have to maintain a database of all possible users at all possible institutions.

Users may or may not be willing to use the same password they use in other situations.

Users would demand the ability to change the password, completely breaking our ability to keep the passwords valuable enough that users would protect them.


4. Basic Auth with SSL and external password database

This is the same model, but farms password checking out to the subscribing institutions.


We would have to build the mechanisms to do this.

Password checking would be slow. Basic auth requires it on every exchange, so this could be a serious issue.
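One way to soften the per-request cost would be to cache recent successful checks for a short interval. The sketch below is an assumption about how such a mitigation might look, not a description of an existing mechanism; `remote_check` stands in for whatever protocol a subscribing institution actually offers, and a real implementation would cache a password hash rather than the plaintext pair.

```python
# Sketch of farming password checks out to a subscribing institution,
# with a short-lived cache to soften the cost basic auth imposes by
# requiring a check on every exchange.
import time

CACHE_TTL = 300  # seconds; an assumed value, not a tested one
_cache = {}

def check_remote(user, password, remote_check, now=None):
    """Check (user, password), consulting the institution only on a cache miss."""
    now = now if now is not None else time.time()
    key = (user, password)  # a real system would key on a hash, not plaintext
    hit = _cache.get(key)
    if hit is not None and now - hit < CACHE_TTL:
        return True
    if remote_check(user, password):  # the slow network round trip
        _cache[key] = now
        return True
    return False
```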


5. PEAK/Wolverine Access model

In this model we would collect a name and password at first entry, checking it locally or remotely, and then annotate all URLs with a short-term ticket which authenticates the URL. This would all be hidden by SSL to prevent snooping. This eliminates the performance hit of checking the password every time.
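The ticket idea can be sketched as an expiring, keyed-hash token carried in the URL. This is an illustration under assumptions (an HMAC over user name plus expiry time, a server-side secret), not the actual PEAK or Wolverine Access format.

```python
# Minimal sketch of a short-term URL ticket: user name and expiry
# time, signed with a server-side secret. Layout and key are assumed.
import hashlib, hmac, time

SECRET = b"server-side secret key"  # hypothetical shared secret

def make_ticket(user, lifetime=3600, now=None):
    """Issue a ticket valid for `lifetime` seconds."""
    expiry = int(now if now is not None else time.time()) + lifetime
    payload = f"{user}:{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_ticket(ticket, now=None):
    """Return the user name if the ticket is genuine and unexpired, else None."""
    user, expiry, sig = ticket.rsplit(":", 2)
    payload = f"{user}:{expiry}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None  # forged or corrupted ticket
    if int(expiry) < (now if now is not None else time.time()):
        return None  # expired ticket: the unbookmarkable-URL problem
    return user
```

The expiry check is exactly where the bookmarking trade-off shows up: a saved URL stops working as soon as its embedded ticket lapses.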


The tickets expire. If they never expire then they become something the user can give away without any consequences. If they do expire they make URLs unbookmarkable, and encourage bad user behavior (by popping up and asking for passwords at odd times).

Passing the ticket on every URL and form gets complicated fast.

Passing the ticket on every URL and form prevents creating bookmarkable URLs.


6. PEAK model using cookies

This uses the same techniques, but passes the tickets using "cookies" rather than in the URL.


Ticket expiration.

Uses cookies, which some browsers don't support and which offend some users.

This may matter less if we end up using cookies to store user state (e.g. bookbags) anyway.


7. X.509 User certificates

This uses X.509 certificates to identify users as well as servers. If the subscribing institution is the certifying authority for their users then we can directly identify users with institutions.
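On the server side, this amounts to requiring and verifying a client certificate during the SSL handshake. The sketch below uses Python's standard `ssl` module to show the shape of that configuration; the CA filename is a hypothetical placeholder for whatever certificate the subscribing institution publishes.

```python
# Sketch of a TLS server context that demands a client certificate
# and verifies it against a subscribing institution's CA. Filenames
# are assumptions for illustration.
import ssl

def make_client_cert_context(ca_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Reject any connection that does not present a verifiable client cert;
    # a cert that chains to the institution's CA identifies the holder as
    # a member of that institution.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(ca_file)  # e.g. "institution-ca.pem"
    return ctx
```

A real deployment would also load the server's own certificate chain into the context before accepting connections.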


X.509 certificates expire, but in practice they cannot readily be revoked if someone leaves the institution, or otherwise loses authorization.

Subscribing institutions (and the UofM) would need a secure mechanism for creating and distributing certificates to their members. See http://ist.mit.edu/services/certificates.

Page maintained by Kat Hagedorn
Last modified: 03/05/2013