Tuesday, March 26, 2013

The Happy Confluence of IAM, SOA and Cloud

Someone pointed me to this Gartner blog post on IAM, and I was once again reminded why Gartner doesn't get it (or, when they do, they get it long after everyone else).

The Gartner analyst in his presentation makes a big deal of the fact that LDAP, being a hierarchical data structure, is incapable of modelling the various complex relationships between entities in an IAM system. This is one of the reasons he believes we need to "kill IAM in order to save it". But is this limitation in traditional IAM systems really new? I'm no fan of LDAP, and it has been known in IAM circles for at least 5 years that LDAP directories are suited for nothing other than the storage of authentication credentials (login names and passwords)! Everything else should go into a relational database, which is much better at modelling complex relationships. A meaning-free identifier links an LDAP entry with its corresponding record in the relational database. I describe this hybrid design in a fair amount of detail in my book "Identity Management on a Shoestring". And this wasn't even my original idea. It was one of the pieces of advice my team got from a consultant (Stan Levine) that my employer hired to review our IAM plans.

Seriously, where has Gartner been?

Another big point made by the Gartner analyst was that IAM should not be "apart from" the rest of an organisation's systems but become "a part of" them. Joining the dots with my cynical knowledge of where Gartner tends to go with this kind of argument, I can see them making the case for big vendors that do everything including IAM. The cash registers at SAP, Oracle and Salesforce.com must have started ringing already, since Gartner has given those vendors' product strategies their all-important blessing.

Um, no. If there's anything we've learnt in the last few years (especially from SOA thinking), it's the great benefits that are gained from loose coupling. IAM should be neither "apart from" (decoupled) nor "a part of" (tightly coupled) an organisation's other, business-related systems. IAM needs to be loosely coupled with respect to them.

What does this mean in practical terms? It means IAM needs to be a cross-cutting concern that can be transparently layered onto business systems to enforce access policies, but without disrupting those systems with IAM-related logic.

That's really what the latest IAM technology, OAuth 2, brings to the table. But the Gartner analyst, while dwelling for quite a while on how great OAuth is, completely fails to articulate its true contribution.

Eve Maler of Forrester says it much better in her presentations. She defines OAuth as a way to delegate authorisation, and positions it as a way to protect APIs. Can you see the confluence of IAM, SOA and the Cloud in that simple characterisation?

Let's take those two aspects one by one and have a closer look.

OAuth as a way to delegate authorisation:
The traditional model of authorisation works like this. There is an entity that owns as well as physically controls access to a resource. When a client requests access to that resource, the owning entity does three things:

1. Authenticates the client (i.e., establishes that they are who they claim to be)
2. Checks the authorisation of the authenticated client to access the resource (i.e., acts as a Policy Decision Point)
3. Allows (or denies) the client access to the resource (i.e., acts as a Policy Enforcement Point)
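To make the three steps concrete, here's a minimal sketch of that traditional model, where a single entity authenticates, decides and enforces. All names here (USERS, POLICY, access_resource) are hypothetical illustrations, not any particular product's API:

```python
USERS = {"alice": "s3cret"}                    # credential store
POLICY = {("alice", "/reports/q1"): {"read"}}  # (subject, resource) -> allowed actions

def access_resource(username, password, resource, action):
    # 1. Authenticate: establish the client is who they claim to be
    if USERS.get(username) != password:
        return "401 Unauthorized"
    # 2. Policy Decision Point: is this client allowed this action?
    if action not in POLICY.get((username, resource), set()):
        return "403 Forbidden"
    # 3. Policy Enforcement Point: actually grant access
    return f"contents of {resource}"

print(access_resource("alice", "s3cret", "/reports/q1", "read"))
```

Note that all three responsibilities live inside the one owning entity - which is exactly the assumption OAuth relaxes.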

What OAuth does is recognise that the Policy Decision Point and the Policy Enforcement Point may be two very different organisational entities, not just two systems within the same organisational entity. The PDP role is typically performed by the owner of the resource. The PEP role is performed by the custodian of the resource. The owner need not be the custodian.

Under the OAuth model, there is a three-way handshake between the owner of a resource, the custodian of the resource and a client. Three separate trust relationships are established between the three pairs of entities in this model, and authentication is obviously required in setting these up (owner-to-client, owner-to-custodian and client-to-custodian-through-owner). Once the owner's permission to access the resource for a certain window of time is recorded in the form of an access token that the client stores, the owner's presence is no longer required when such access takes place. The custodian is able to verify the token and allow access in accordance with the owner's wishes even in the owner's absence. This is delegated authorisation.
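A toy sketch can illustrate the core of this idea - the custodian honouring the owner's grant in the owner's absence. This is not the OAuth 2 protocol itself (which involves grant types, token endpoints and more); it just assumes an HMAC secret standing in for the owner-custodian trust relationship, and all the names are hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"owner-custodian-trust"  # hypothetical pre-established trust

def owner_issues_token(client_id, resource, ttl_seconds):
    """The resource owner (acting as PDP) records a time-limited grant as a token."""
    claims = {"client": client_id, "resource": resource,
              "expires": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def custodian_verifies(token, resource):
    """The custodian (acting as PEP) validates the token; the owner need not be present."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # token was tampered with or forged
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["resource"] == resource and time.time() < claims["expires"]

# The owner issues the token once; thereafter the client presents it alone.
token = owner_issues_token("client-app", "/photos/42", ttl_seconds=3600)
print(custodian_verifies(token, "/photos/42"))
```

The point of the sketch is the shape of the interaction: the owner is involved only at grant time, and the custodian can verify the grant independently thereafter.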

And since the resource doesn't even know it's being protected, this is loose coupling. IAM is neither "apart from" nor "a part of" the business system with OAuth.

OAuth as a way to protect APIs:
The delegated authorisation model can be used to protect resources that are not just "things" but also "actions". In other words, OAuth can be used to control who can invoke what logic, and to do so in a delegated manner: owners of business logic can grant clients access to invoke it, and custodians that host such business logic can validate the access tokens presented by clients and allow or deny access in accordance with the owners' wishes.
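Guarding an "action" can look something like the following hypothetical sketch, where a token's scope is checked before the business logic runs (the token store, scope name and decorator are all illustrative inventions, not a real library's API):

```python
# Hypothetical token store: token -> set of scopes granted by the owner
VALID_TOKENS = {"tok-abc": {"invoice:create"}}

def require_scope(scope):
    """Wrap a service operation so it runs only for tokens carrying the scope."""
    def decorator(operation):
        def guarded(token, *args, **kwargs):
            if scope not in VALID_TOKENS.get(token, set()):
                raise PermissionError("403 Forbidden: insufficient scope")
            return operation(*args, **kwargs)
        return guarded
    return decorator

@require_scope("invoice:create")
def create_invoice(customer, amount):
    # The business logic itself knows nothing about tokens or IAM
    return {"customer": customer, "amount": amount, "status": "created"}

print(create_invoice("tok-abc", "ACME", 100))
```

Notice that create_invoice contains no IAM logic at all - the protection is layered on from outside, which is the loose coupling being argued for.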

Now why does this development in the IAM world bring it into confluence with the SOA and cloud worlds?

The SOA bit is easy to understand. We did mention that an API is a form of resource. If all business logic can be reduced to service operations exposed through endpoints, then these form an API. Endpoints can be protected by OAuth as we saw, so OAuth can be an effective security mechanism for SOA.

The cloud bit isn't hard to understand either. If business logic can be abstracted behind APIs, then does it matter where that logic sits? Bingo - cloud! The cloud also forces separation of owner and custodian roles, with the cloud platform performing the role of custodian, and the cloud customer performing the role of resource owner or API owner. With OAuth as the authorisation mechanism, the cloud model becomes viable from an access control perspective as well.

So that's really what OAuth signifies. It's not just a development in IAM. It has profound implications for SOA security and the viability of the cloud model.

Watch for Gartner to break this news to their clients in 3 to 5 years' time...

(Meanwhile, someone at Gartner or elsewhere ought to tell that analyst that "staid" is not spelled "stayed". This presentation has irritated me on so many levels - spiritually, ecumenically, grammatically, as Captain Jack Sparrow said.)

Tuesday, March 12, 2013

How to Implement An Atomic "Get And Set" Operation In REST

This question came up yesterday at work, and it's probably a common requirement.

You need to retrieve the value of a record (if it exists), or else create it with a default value. An example would be when you're mapping identifiers between an external domain and your own. If the external domain is passing in a reference to an existing entity in your domain, you need to look up the local identifier for that entity. If the entity doesn't yet exist in your domain, you need to create (i.e., auto-provision) it and insert a record in the mapping table associating the two identifiers. The two operations have to be atomic because you can't allow two processes to both check for the existence of the mapping record, find out it doesn't exist, then create two new entity instances. Only one of the processes should win the race.

(Let's ignore for a moment the possibility that you can rely on a uniqueness constraint in a relational database to prevent this situation from occurring. We're talking about a general pattern here.)

Normally, you would be tempted to create an atomic operation called "Get or Create". But if this is to be a RESTian service operation, there is no verb that combines the effects of GET and POST, nor would it be advisable to invent one, because it would in effect be a GET with side-effects - never a good idea.

One solution is as follows (and there could be others):

Step 1:

GET /records/{external-id}

If a record exists, you receive a "200 OK" status and the mapping record containing the internal ID.

  "external-id" :  ...
  "internal-id" :  ...

If the record does not exist, you get a "404 Not Found" and a one-time URI in the "Location" header.

Location: /newrecords/84c5d65a-2198-42eb-8537-b16f58733791

(The server will also use the header "Cache-control: no-cache" to ensure that intermediate proxies do not cache this time-sensitive response but defer to the origin server on every request.)

Step 2 (Required only if you receive a "404 Not Found"):

2a) Generate an internal ID.

2b) Create a new entity with this internal ID and also create a mapping record that associates this internal ID with the external ID passed in. This can be done with a single POST to the one-time URI.

POST /newrecords/84c5d65a-2198-42eb-8537-b16f58733791

  "external-id" :  ...
  "internal-id" :  ... (what you just generated)
  "other-entity-attributes" : ...

The implementation of the POST will create a new local entity instance as well as insert a new record in the mapping table - in one atomic operation (which is easy enough to ensure on the server side).
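Here's a rough sketch of what that server-side handler could look like, with an in-memory store and a lock standing in for the database transaction (the names pending_uris, mappings and entities are hypothetical):

```python
import threading

lock = threading.Lock()
# one-time URI id -> external id (issued by the earlier failed GET)
pending_uris = {"84c5d65a-2198-42eb-8537-b16f58733791": "EXT-1"}
mappings = {}   # external id -> internal id
entities = {}   # internal id -> entity attributes

def post_newrecord(one_time_id, internal_id, attributes):
    with lock:  # entity creation + mapping insert happen atomically
        external_id = pending_uris.get(one_time_id)
        if external_id is None:
            return 404, None
        if external_id in mappings:
            # A rival process won the race; return its mapping record
            return 409, {"external-id": external_id,
                         "internal-id": mappings[external_id]}
        entities[internal_id] = attributes
        mappings[external_id] = internal_id
        return 201, {"external-id": external_id, "internal-id": internal_id}
```

The lock (or, in a real implementation, the database transaction) is what guarantees that exactly one process gets the "201 Created".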

If you win the race, you receive a "201 Created" and the mapping record as a confirmation.

  "external-id" :  ...
  "internal-id" :  ... (what you generated)

If you lose the race, you receive a "409 Conflict" and the mapping record that was created by the previous (successful) process.

  "external-id" :  ...
  "internal-id" :  ... (what the winning process generated)

Either way, the local system now has an entity instance with a local (internal) identifier, and a mapping from the external domain's identifier to this one. Subsequent GETs will return this mapping along with a "200 OK". The operation is guaranteed to be consistent, without having to rely on an atomic "Get or Create" verb.
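Pulling the steps together, the whole client-side flow can be sketched like this (the http_get / http_post transport functions and generate_internal_id are hypothetical stand-ins for whatever HTTP client and ID generator you use):

```python
def get_or_create(external_id, http_get, http_post, generate_internal_id):
    # Step 1: try to fetch the existing mapping
    status, body, headers = http_get(f"/records/{external_id}")
    if status == 200:
        return body["internal-id"]          # mapping already exists

    # Step 2: 404 - the server gave us a one-time URI to create it
    one_time_uri = headers["Location"]
    internal_id = generate_internal_id()    # 2a: generate an internal ID
    status, body = http_post(one_time_uri, {  # 2b: atomic create on the server
        "external-id": external_id,
        "internal-id": internal_id,
    })
    if status == 201:
        return internal_id                  # we won the race
    if status == 409:
        return body["internal-id"]          # adopt the winner's internal ID
    raise RuntimeError(f"unexpected status {status}")
```

Whichever branch a given process takes, it ends up holding the one internal ID that the whole system agrees on.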

One could quibble that a GET that fails to retrieve a representation of a resource does have a side-effect - the creation of a one-time URI with the value "84c5d65a-2198-42eb-8537-b16f58733791" being inserted somewhere. This is strictly true, but the operation is idempotent, which mitigates its impact: the next process to do an unsuccessful GET on the same value must receive the same one-time URI.
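One way to get that idempotence cheaply is to derive the one-time URI deterministically from the external ID - for instance with a name-based (version 5) UUID - so every unsuccessful GET on the same ID yields the same URI without the server even needing to store it first. A sketch, with the namespace choice being an arbitrary assumption:

```python
import uuid

def one_time_uri(external_id):
    # uuid5 is deterministic: same namespace + name -> same UUID every time
    return "/newrecords/" + str(uuid.uuid5(uuid.NAMESPACE_URL, external_id))

# Two racing processes doing a failed GET receive the same URI:
print(one_time_uri("EXT-1") == one_time_uri("EXT-1"))
```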

It's a bit of work on the server side, but it results in an elegant RESTian solution.