Wednesday, June 30, 2010

The Case for Watermarked UUIDs

UUIDs are my latest toy

They fill my little world with joy
Forgive me for waxing lyrical. The more I play with UUIDs, the more wonderful uses I find for them.

Take one-time tokens used for idempotence, for instance.

Joe Gregorio's nugget on how to use UUIDs with hashes to prevent spoofing got me thinking. What if the hash could be hidden in the UUID itself? UUIDs are so roomy and spacious, they can tuck a lot of things inside without losing their core property of universal uniqueness.

But first, a summary of Joe's piece for those too impatient to go over and read it.

Idempotence is (or should be!) a basic concept that all IT folk are familiar with. Performing an idempotent operation more than once has exactly the same effect as doing it once. The way this is commonly implemented is through the use of one-time tokens. [If you do a 'View Source' on your bank's transaction confirmation page (the one that asks "Are you sure?"), you may see a hidden form variable with a large random value. That's the one-time token the bank uses to ensure that you don't end up posting a transaction twice even if you hit the submit button twice. It's a wonderful way to ensure a measure of reliability over an unreliable Internet.]

Joe Gregorio suggests the use of UUIDs as one-time tokens for RESTful idempotent services, but also raises a potential problem with the use of raw UUIDs. Anyone can spoof a token by just generating a UUID. For a server to be able to recognise that a UUID is a genuine one-time token that it itself had handed out earlier, it would normally be expected to store the token somewhere so it can verify it when it's presented later on. But such a naive design would open the server up to a "resource exhaustion attack". A malicious user (or bot) can swamp the server with millions of requests for one-time tokens, and the server's database will rapidly fill up with useless UUIDs that are never going to be used.

To address this, Joe suggests a hash. If the generated UUID is combined with a secret string that only the server knows, and this combination is hashed, then the combination of UUID and hash will let the server validate its genuineness, because no one else can generate a valid hash for a random UUID without knowing the server's secret string. In Joe's model, the one-time token is not the raw UUID, but a combination of the UUID and a hash. With this design, the server doesn't have to store a one-time token when it's handed out, only when it's actually used to perform a transaction. The number of such UUIDs can be controlled, because spoofed UUIDs can be filtered out through a mere computational check.

I think this solution is elegant and ingenious, but I believe it can be made even more "clean".

A UUID looks like this:

4885c205-8248-4e5b-9c45-4d042e7cc992

It consists of 5 groups of hex characters separated by hyphens, and the regular expression for it is

[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}
What I want to do is replace the last set of 12 hex characters with another one representing a hash (or at least part of a hash). Then, while the UUID will still look like a UUID (and indeed, will be a valid UUID), it is in effect "watermarked" by the server and can be verified by computation alone.
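To make this concrete, here is a minimal sketch in Java of how a server might mint and verify such watermarked UUIDs. The class name and the choice of hash (SHA-256, truncated to 12 hex characters) are my own assumptions for illustration; an HMAC would be an equally reasonable choice.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.UUID;

public class WatermarkedUuid {

    // Build a watermarked UUID: keep the first 20 hex characters (and the
    // hyphens) of a random UUID, and replace the last group of 12 hex
    // characters with a truncated hash of the prefix plus the server's
    // secret. The result still matches the UUID pattern, but the server
    // can verify it by computation alone.
    public static String create(String secret) {
        String uuid = UUID.randomUUID().toString();
        String prefix = uuid.substring(0, 24);   // "xxxxxxxx-xxxx-xxxx-xxxx-"
        return prefix + hash12(prefix, secret);
    }

    // A candidate token is genuine only if its last 12 characters equal
    // the hash recomputed from its prefix and the secret.
    public static boolean verify(String candidate, String secret) {
        if (candidate == null || candidate.length() != 36) return false;
        return candidate.substring(24)
                        .equals(hash12(candidate.substring(0, 24), secret));
    }

    private static String hash12(String prefix, String secret) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                    (prefix + secret).getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.substring(0, 12);          // 12 hex chars = 48 bits
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String token = create("my-secret");
        System.out.println(token + " -> valid? " + verify(token, "my-secret"));
    }
}
```

Note that the version and variant nibbles live in the third and fourth groups, which are left untouched, so the watermarked string remains a well-formed UUID.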

What do we lose?

Mathematically, we lose a great deal of uniqueness. We're now dealing with just 20 hex characters instead of 32 (it's not 24, because the 4 hyphens don't count). Twenty characters are still a lot, and we actually get something back through the 12-character hash, because this is calculated not just on the 20-character UUID prefix but on the combination of the prefix and the server's secret string. So it's an independent 12-character hex string, and while it spans a sparser range than its length may suggest, it's still something. So I don't believe we lose too much from a uniqueness perspective. UUIDs are so huge you can trim them and still not encounter conflicts.

Is there a danger that some random UUID out there may accidentally be computed as a valid watermarked UUID because its last 12 characters miraculously match the hash? Well, the probability of this is 1 in 16 raised to the power 12, which is about 1 in 280 trillion. I'd take my chances.

Architecturally, it would seem that we have introduced meaning into what should have been a meaningless identifier, and that would then open the door to implicit dependencies (tight coupling) and consequent brittleness. However, on closer inspection, there is no way an external system can seek to build any dependency on the structure of a watermarked UUID, because without a knowledge of the server's secret string, the hash cannot be externally calculated. The UUID remains necessarily opaque to all external parties. The implicit dependency of one part of the UUID on another would seem to be a limitation too, but this is by design! The "limitation" serves to exclude spoofed UUIDs.

And so, I believe there is no real downside to the use of watermarked UUIDs. On the contrary, they retain the visual elegance of plain UUIDs and furthermore simplify the design of idempotent services by encapsulating the entire token-validation function within the service with no leakage through to the interface.

I've written a couple of classes in Java that should help developers get started (no warranties, of course). The base class is called TokenWatermarker, and it performs the basic hashing and string-concatenation logic on any token string. It can watermark LUIDs, for example. It also performs verification of previously watermarked tokens, of course. Then there's the UUIDWatermarker class, which extends TokenWatermarker and provides the same capability for UUIDs.

Running the compiled classes using "java TokenWatermarker" or "java UUIDWatermarker" will print out a sample output that will show you how these classes work.


Friday, June 25, 2010

The Case for Locally Unique IDs (LUIDs)

It should be no surprise to regular readers of this blog that I am in love with UUIDs. As I have said before, they are an inexhaustible source of identifiers that are meaningless (not a pejorative term!) and whose generation can be distributed/federated without danger of duplication. As a result, they are an extremely powerful means of providing a uniform and federated identity scheme.

As a SOA-indoctrinated IT practitioner, I am loath to expose domain-specific entity identifiers to the larger world because such leakage leads to tight coupling and brittle integration. Yet identifiers of "resources" must often be exposed. How do we do this? Even "best practice" guidelines fail to adequately admonish designers against exposing domain-specific identifiers. [e.g., Subbu Allamaraju's otherwise excellent RESTful Web Services Cookbook talks about "opaque" URIs in recipe 4.2 but ends up recommending the use of internal entity identifiers in recipe 3.10 :-(.]

My point is simple. If the 'employee' table in my application's database looks like this

| id | first_name | last_name | dob |
| 1122 | John | Doe | 12-Jan-1960 |
| 3476 | Jane | Doe | 08-Sep-1964 |
| 6529 | Joe | Bloggs | 15-Jun-1970 |

I do not want to be exposing resources that look like these


I don't want to expose my application's local primary keys to the entire world. They may be "meaningless" but they're still coupled to my domain's internal data structures. I need something else.

My standard solution so far has been the magnificent UUID. I add a candidate key column to my table, like so

| id | first_name | last_name | dob | UUID |
| 1122 | John | Doe | 12-Jan-1960 | 4885c205-8248-4e5b-9c45-4d042e7cc992 |
| 3476 | Jane | Doe | 08-Sep-1964 | cdbf87dd-93cb-4c53-9c5d-718c596b0a00 |
| 6529 | Joe | Bloggs | 15-Jun-1970 | 73feb1bf-e687-4d58-9750-5bf98ca7b9fa |

or I maintain a separate mapping table, like so

| id | UUID |
| 1122 | 4885c205-8248-4e5b-9c45-4d042e7cc992 |
| 3476 | cdbf87dd-93cb-4c53-9c5d-718c596b0a00 |
| 6529 | 73feb1bf-e687-4d58-9750-5bf98ca7b9fa |

and I expose my resources like so


I've still got unique identifiers, they're guaranteed not to conflict with anything else in time and space, and more importantly, my domain-specific identifiers remain hidden. I can now even change my entire domain model, including identifiers, and still preserve my external contracts. That's SOA.

But the sad fact of the matter is that many legacy systems and packaged software do not readily support UUID datatypes or even char(36) columns for various reasons. I have recently heard of a far-sighted software vendor that has provided for a "Public ID" field in their database tables for this precise reason, i.e., to allow an externally visible identifier to be specified for their entities. But alas, the column is defined to be varchar(20), much too small to hold a UUID.

It occurred to me that there is nothing sacrosanct about a 128-bit UUID (expressed as a 36-character string). It's just that the larger a random number gets, the more remote the probability of conflict with another such random number. 128 bits is a nice, safe length. But smaller lengths also have the same property, only with a lower degree of confidence.

The constraints of vendor packages like the one I described above led me to postulate the concept of the LUID (Locally Unique ID). This is just a hex string shorter than a UUID's 32 hex digits (plus its 4 hyphens in between). I call this a Locally Unique ID because the smaller it gets, the lower the confidence with which we can claim it to be universally unique. But we may still be able to rely on its uniqueness within a local system. If I'm only holding details of a few thousand employees (or even a few million customers) in my database, an LUID may still be expected to provide unique identifiers with a reasonable degree of confidence.
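A back-of-the-envelope check supports this. Using the standard birthday approximation (my own numbers, purely illustrative), the chance of at least one collision among n random k-digit hex IDs is roughly n(n-1)/(2 x 16^k):

```java
public class LuidCollision {

    // Birthday approximation: the probability of at least one collision
    // among n IDs drawn uniformly from 16^hexDigits possibilities is
    // roughly n*(n-1) / (2 * 16^hexDigits), valid while the result is
    // much smaller than 1.
    public static double collisionProbability(long n, int hexDigits) {
        double space = Math.pow(16, hexDigits);
        return (double) n * (n - 1) / (2.0 * space);
    }

    public static void main(String[] args) {
        // One million customers with 20-hex-digit LUIDs:
        System.out.printf("%.2e%n", collisionProbability(1_000_000L, 20));
        // a probability in the region of 4e-13: negligible locally
    }
}
```

So even a 20-character LUID leaves an enormous safety margin for a few million rows.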

That vendor package definitely cannot hold a UUID such as "2607881a-fec1-4e5d-a7fc-f87527c93e2d" in its "Public ID" field, but a 20-character substring such as "4e5da7fcf87527c93e2d" is definitely possible.

Accordingly, I've written a Java class called LUID with static methods

String randomLUID( int _length ) and
String getLUID( String _uuidString, int _length )

The first generates a random hex string of the desired length (without hyphens). The second chops a regular UUID down to a hex string of the required length, again without hyphens.
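Here's a sketch of how such a class might look; my actual implementation may differ in detail. The decision to keep the tail end of the UUID (rather than the head) matches the 20-character example above.

```java
import java.util.UUID;

public class LUID {

    // Generate a random hex string of the desired length (no hyphens).
    public static String randomLUID(int _length) {
        return getLUID(UUID.randomUUID().toString(), _length);
    }

    // Chop a regular UUID string down to its last _length hex digits,
    // with the hyphens removed.
    public static String getLUID(String _uuidString, int _length) {
        String hex = _uuidString.replace("-", "");   // 32 hex digits
        return hex.substring(hex.length() - _length);
    }

    public static void main(String[] args) {
        System.out.println(getLUID("2607881a-fec1-4e5d-a7fc-f87527c93e2d", 20));
        // per the example above, this yields "4e5da7fcf87527c93e2d"
    }
}
```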

You can download the class from here. Just running the compiled class with "java LUID" will result in some test output which should illustrate how it works. Feel free to incorporate it into your own projects, but be warned that there is no warranty ;-).

Of course, there is a limit to how small an LUID can become before it loses its utility, but I'm not going to draw an arbitrary line in the sand over this. The class above represents mechanism, not policy. Application designers need to think about what makes sense in their context. An LUID of the appropriate length can enable them to implement SOA Best Practice by decoupling externally visible resource identifiers from internal entity identifiers (another example of the difference between Pat Helland's Data on the Outside and Data on the Inside) when a standard UUID cannot be used.

Bottom line: If you can use a standard UUID, do so. If you can't, consider using an LUID of the kind I've described. But always hide the specifics of your application domain (which include entity identifiers) when exposing interfaces to the outside. That's non-negotiable if you want to be SOA-compliant.

Update 27/06/2010:
I should perhaps make it clear what my proposal is really about, because judging from a reader comment, I think I may have created the impression that all I want is for entity identifiers to be "meaningless" in order to be opaque. That's actually not what I mean.

To be blunt, I believe that any entity/resource that is to be uniquely identified from the outside needs _two_ identifiers (i.e., two candidate keys). One of them is the "natural" primary key of the entity within the domain application. The other is a new and independent identifier that is required to support the exposure of this entity through a service interface (whether SOAP or REST). There should be no way to derive this new key from the old one. The two keys should be independently unique, and the only way to derive one from the other should be through column-to-column mapping, either within the same table or through a separate mapping table as I showed above.

To repeat what I wrote in the comments section in reply to this reader:

There was a content management system that generated (meaningless) IDs for all documents stored in it, and returned the document ID to a client application that requested storage, as part of a URI. At one stage, it became necessary to move some of the documents (grouped by a certain logical category) to another instance of the CMS, and all the document IDs obviously changed when reloaded onto the other instance. The client application unfortunately had references to the old IDs. Even if we had managed to switch the host name through some smart content-based routing (there was enough metadata to know which set of documents was being referred to), the actual document ids were all mixed up.

If we had instead maintained two sets of IDs and _mapped_ the automatically generated internal ID of each document to a special externally-visible ID and returned the latter to the client, we could have simply changed the mapping table when moving documents to the new CMS instance and the clients would have been insulated from the change. As it turned out, the operations team had to sweat a lot to update references on the calling system's databases also.

I hope it's clear now.

1. Entity only seen within the domain => single primary key is sufficient
2. Entity visible outside the domain => two candidate keys are required, as well as a mapping (not an automated translation) between the two.
3. UUID feasible for the new candidate key => use a UUID
4. UUID not possible for some reason => use a Locally Unique ID or LUID of appropriate length (code included)

Friday, June 18, 2010

Annotations and the Servlet 3 Specification

Now I'm seriously beginning to wonder about the authors of the Java Servlet 3 specification. This time, it's not their architectural wisdom (or the lack of it) regarding session state. It's about something even more basic to the Java language - the nature of annotations.

Chapter 8 deals with annotations that developers may use to mark their classes. Anything about the following strike you as crazy?

Classes annotated with @WebServlet class (sic) MUST extend the javax.servlet.http.HttpServlet class.
Classes annotated with @WebFilter MUST implement javax.servlet.Filter.

If we must extend a base class anyway, I wonder what the annotation is for. Just to avoid putting a few lines of config code into the web.xml file?

I would have thought an annotation like @WebServlet would be capable of turning any POJO into a servlet class, not just subclasses of HttpServlet! And we could have annotations like @GetMethod, @PostMethod, @PutMethod and @DeleteMethod to annotate any arbitrary methods in the class. We shouldn't have to rely on overriding the doGet(), doPost(), doPut() and doDelete() methods.

The same applies with @WebFilter. It could be used to annotate any arbitrary class, and @FilterMethod could then annotate any arbitrary method in the class.
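To make the complaint concrete, here is a sketch of the POJO-and-annotations style I have in mind. The @GetMethod annotation (and the idea of @WebServlet working on an arbitrary POJO) is hypothetical, so I've defined stand-in annotations locally to keep the sketch self-contained; the little reflective dispatcher stands in for what a container would do.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class PojoServletSketch {

    // Hypothetical annotations -- not part of the Servlet 3 spec.
    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
    public @interface WebServlet { String value(); }

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
    public @interface GetMethod { }

    // Any POJO could then be a servlet -- no HttpServlet superclass,
    // no doGet() override.
    @WebServlet("/hello")
    public static class HelloResource {
        @GetMethod
        public String hello() { return "Hello, world"; }
    }

    // A container would dispatch a GET by reflection, roughly like this:
    public static Object dispatchGet(Object target) {
        for (Method m : target.getClass().getMethods()) {
            if (m.isAnnotationPresent(GetMethod.class)) {
                try {
                    return m.invoke(target);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        throw new IllegalStateException("no @GetMethod on " + target.getClass());
    }

    public static void main(String[] args) {
        System.out.println(dispatchGet(new HelloResource()));
    }
}
```

This is essentially how JSR 311 resource classes work, which is why the comparison stings.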

Look at the way JSR 311 and Spring REST work.

I'm disappointed in the Servlet spec committee. If you're going to use annotations, then use them smartly.

It wouldn't be out of place here to comment on the horrific class hierarchy of the Servlet spec. It certainly shows the era from which it came, an era when interfaces were underappreciated and inheritance hierarchies were built from concrete classes. Naming conventions hadn't matured yet, either.

E.g., my application's concrete class "MyServlet" must either extend abstract class "GenericServlet", which in turn partially implements interface "Servlet", or implement "Servlet" directly. This by itself isn't so bad, but read on a bit.

My application's concrete class "MyHttpServlet" must only extend abstract class "HttpServlet" which extends abstract class "GenericServlet", which in turn partially implements interface "Servlet". There is no interface to implement.

And why GenericServlet should also implement ServletConfig is something I don't understand. There's a HAS-A relationship between a servlet and its configuration. It's not an IS-A relationship.

HttpServlet should have been an interface extending Servlet.

The abstract class GenericServlet (that partially implements the Servlet interface) should have been called AbstractServlet instead, and there could have been a concrete convenience class called SimpleServlet or BasicServlet that extended AbstractServlet and provided a default implementation that subclasses could override.

Similarly, there should have been an abstract class called AbstractHttpServlet that partially implemented the HttpServlet interface and only provided a concrete service() method, dispatching requests to doXXX() methods that remained unimplemented. There could have been a concrete convenience class called SimpleHttpServlet or BasicHttpServlet that extended the AbstractHttpServlet class and provided a default implementation that subclasses could override.

My application's concrete classes should have had the option to implement one of the interfaces directly or to extend one of the abstract or convenience classes.
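The hierarchy I'm proposing can be sketched as follows. Signatures are simplified to no-arg methods for brevity; the real ones would take request/response parameters.

```java
public class ServletHierarchySketch {

    public interface Servlet { void service(); }

    // HttpServlet as an interface extending Servlet, as it should have been.
    public interface HttpServlet extends Servlet {
        void doGet();
        void doPost();
    }

    // Partial implementation of Servlet: lifecycle plumbing would live here.
    public static abstract class AbstractServlet implements Servlet { }

    // Concrete convenience class with a default implementation to override.
    public static class SimpleServlet extends AbstractServlet {
        public void service() { }
    }

    // Partial implementation of HttpServlet: a concrete service() that
    // dispatches to the doXXX() methods (here, simplistically, to doGet()),
    // which themselves remain unimplemented.
    public static abstract class AbstractHttpServlet implements HttpServlet {
        public void service() { doGet(); }
    }

    // Concrete convenience class with overridable defaults.
    public static class SimpleHttpServlet extends AbstractHttpServlet {
        public void doGet() { }
        public void doPost() { }
    }

    public static void main(String[] args) {
        Servlet s = new SimpleHttpServlet();
        s.service();
        System.out.println(s instanceof HttpServlet); // true
    }
}
```

With this shape, an application class can implement HttpServlet directly, extend AbstractHttpServlet, or extend SimpleHttpServlet, whichever suits it.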

Oh well, too late now.

Thursday, June 17, 2010

REST and the Servlet 3 Specification

I've been going through the Java Servlet 3 specification, and I just came across this gem at the start of the chapter on Sessions:

The Hypertext Transfer Protocol (HTTP) is by design a stateless protocol. To build effective Web applications, it is imperative that requests from a particular client be associated with each other. [...] This specification defines a simple HttpSession interface that allows a servlet container to use any of several approaches to track a user’s session [...]

I don't think the spec authors have been adequately exposed to the REST philosophy, or they wouldn't be talking so casually about how "imperative" sessions are to build "effective" Web applications. A few years ago, I would have read this without batting an eyelid. Now, I had to keep myself from falling off my chair in shock. One would think spec writers of advanced technology would know a bit better. At the very least, they could have written something like this:

The Hypertext Transfer Protocol (HTTP) is by design a stateless protocol, and it is strongly recommended that Web applications be built in a stateless manner to be effective, with all state management delegated to the persistence tier. If, for legacy or other reasons, it is unavoidable to maintain in-memory state in a web application, the servlet specification defines a simple HttpSession interface that provides a relatively painless way to manage it. Application developers should however be aware of the severe scalability and recoverability issues that will accompany the use of this feature.

There! Now I feel much better.

Saturday, June 12, 2010

Does REST Need Versioning?

In my ongoing conversations with JJ Dubray, he has often made the point that "REST couples identity and access together in a terrible way". When pressed to explain, he provided the following example.

Assume that there is a Resource identified by "/customers/1234". Updating the state of this customer requires a PUT. JJ asks how REST can handle a change to the business logic implied by the PUT.

Since we cannot say

PUTv2 /customers/1234

implying a change to the logic of PUT, he believes we have no option but to say

PUT /customers/v2/1234

but this is different from the identity of the customer, which remains at

/customers/1234
Hence REST "couples identity with access".

Well, I disagree. First of all, it's a mistake to think there are only two places where the version of business logic can be exposed - the Verb and the Resource. The data submitted is an implicit third, which I'll come to in a moment. But this example only makes me question the whole basis for versioning.

Does REST need versioning? For that matter, does any service need versioning? What is versioning in the context of SOA?

I would say service versioning is a mechanism that allows us to simultaneously maintain two or more sets of business logic in a consumer-visible way.

Why does it have to be consumer-visible as opposed to just consumer-specific? After all, if the service implementation can distinguish between two classes of consumer, it can apply two different business rules to them in a completely opaque manner. The consumer doesn't even have to know that two (or more) different sets of business rules are being weighed and applied under the covers.

Let's ask the more interesting question: Why do we need to maintain two or more sets of business logic simultaneously? The interesting (and circular) answer is often that business logic happens to be consumer-visible, hence a new version of business logic also needs to be distinguished from the old in a consumer-visible way. This is often stated as the need to support legacy consumers, i.e., consumers dependent in some way upon the previous version of business logic. But why do we have to support legacy consumers? Because existing contracts break when services are silently upgraded.

This argument leads to an interesting train of thought. Perhaps the answer lies in the opposite direction to what JJ believes, i.e., not versioning of services but abstraction of detail. Are our service contracts too specific and therefore too brittle? Service versioning is perhaps a "smell" that says we are going about SOA all wrong. Let us see.

I want to take up a more real-world example than the customer access example that JJ talked about. After all, that's more of a "data service" than a business service. Let's look at a real "business service".

Let's take the case of the insurance industry where a customer asks for a quote for an insurance product. The client-side application has to submit a set of data to the service and get a quote (a dollar value for the premium) in return.

In REST, here's how it could work.


POST /quotes


201 Created
Location: /quotes/06fb633b-fec4-4fb6-ae32-f298b8f499c1

The client is referred to the location of the newly-created quote Resource, which is at /quotes/06fb633b-fec4-4fb6-ae32-f298b8f499c1. When the client does a GET on this URI, the quote details are transferred.

So far, so good. Now let's say the business logic changes. Premiums are now calculated using very different logic. The first question is, can this new business logic be applied to all customers, or do we need to keep track of "old" customers and keep applying the old business logic to them? If we can "upgrade" all customers to the new business logic, there is, of course, no problem at all. The interface remains the same. The client application POSTs data to the same URI, and they are redirected in the same way to the location of the newly-created quote Resource. The business logic applied is all new, but customers don't see the change in their interface (only in the dollar values they are quoted!)

However, if we do need to maintain two sets of business logic, it could be for three reasons. One, the data that the client app needs to submit has changed, so the change is unavoidably visible to the customer and has to be communicated as a new and distinct contract. Two, there is another business reason to tell two types of customers apart, perhaps to reward longstanding customers with better rates, and this difference between customers is not obvious from the data they submit. Three, the client app somehow "knows" the behaviour of the old version and is dependent on it. In this case, we need a new version just to keep legacy clients from breaking.

We can readily see that the third reason is an artificial case for versioning. It's in fact a case to break implicit dependencies that have crept in.

In contrast, the first and second reasons provide their own resolution. If the type of data submitted by the client changes, that is itself a way to distinguish new clients from old ones and apply different business logic to them. In other words, we only need to tell newer customers about the change in the data they need to POST. Older customers don't need to do a thing. Also, if we can somehow derive that the customer is an existing one, even if this is not explicit in the data submitted, we can still apply different business logic transparently.
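A sketch of what that content-based selection might look like. The class and field names ("age", "postcode") and the premium formulas are invented purely for illustration; the point is that the shape of the submitted data, not a version number in the verb or URI, selects the business logic.

```java
import java.util.Map;

public class QuoteService {

    // Old clients submit only "age"; new clients also submit "postcode".
    // The presence of the new field is enough to tell the two apart, so
    // no consumer-visible version number is needed anywhere.
    public static double quote(Map<String, String> form) {
        if (form.containsKey("postcode")) {
            return newPremium(form);   // new business logic
        }
        return oldPremium(form);       // legacy business logic, unchanged
    }

    private static double oldPremium(Map<String, String> form) {
        return 100.0 + Integer.parseInt(form.get("age"));
    }

    private static double newPremium(Map<String, String> form) {
        return 80.0 + Integer.parseInt(form.get("age"))
                    + (form.get("postcode").startsWith("2") ? 5.0 : 10.0);
    }

    public static void main(String[] args) {
        System.out.println(quote(Map.of("age", "40")));                      // old logic
        System.out.println(quote(Map.of("age", "40", "postcode", "2000"))); // new logic
    }
}
```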

JJ may consider this a messy and unstructured approach to versioning. Business stakeholders may have the opposite view. It's less disruptive. The less clients are exposed to the way services are implemented, the better.

Service versions are not really an interface detail. They're an implementation detail that often leaks into the interface.

That means version numbers are a problem, not a solution.

None of these arguments may satisfy someone like JJ. In that case, if service versioning is absolutely essential, there is a simple way to include it after all: include the version number in the message body accompanying a POST or PUT request. In fact, message bodies are allowed even for GET and DELETE requests (anything except a TRACE), so versioning of any type of service is possible. REST does not enforce versioning (that would be a bad thing, considering that versions are often a smell), but it doesn't impede it either.

With this approach, neither Verbs (e.g., POST) nor URIs (e.g., /quotes) are affected by versions and the "terrible" coupling of identity and access is avoided.

It seems to me that the problem is not with REST, it's with looking at REST through WS-* eyes.

Friday, June 11, 2010

Is REST Another Variant of Distributed Objects?

In my discussions with JJ on REST, he's often made the observation that REST is nothing but Distributed Objects and is therefore a bad style. But is it really?

I have two different arguments why I believe it isn't.

1. Let me first talk about my experience as a migrant from India to Australia many years ago. Working in Indian companies had acclimatised me to a rather direct management style. If my manager ever wanted me to do something, he or she would say so in so many words, "Do this!"

When I arrived in Australia and began working, I was struck by the very different style of Australian managers. I would hear things like, "You may want to do this," or "You may like to do this." It took me a while to realise that they were essentially saying the same thing, only less directly. Let me coin a term for this style, because I'm going to use it to explain a concept with Distributed Objects. Let me call this a polite command.

Let's now turn to methods that set an object's attributes. We may see setter methods like these:

widget.setSomeAttribute( someValue );
widget.setAnotherAttribute( anotherValue );

Now consider another style of doing the same thing.

widget.updateSelf( widgetDTO );

where widgetDTO is a structure that holds the new values of someAttribute and anotherAttribute.
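The contrast is easier to see in full. Here is a minimal, self-contained sketch of the "polite command" style (class names are mine, for illustration):

```java
public class PoliteCommandSketch {

    // A representation of the new state: a simple Data Transfer Object.
    public static class WidgetDTO {
        public String someAttribute;
        public String anotherAttribute;
    }

    public static class Widget {
        private String someAttribute;
        private String anotherAttribute;

        // The "polite command": the widget is asked to update itself from
        // a representation, instead of being driven field-by-field through
        // remote setter calls.
        public void updateSelf(WidgetDTO dto) {
            this.someAttribute = dto.someAttribute;
            this.anotherAttribute = dto.anotherAttribute;
        }

        public String getSomeAttribute() { return someAttribute; }
        public String getAnotherAttribute() { return anotherAttribute; }
    }

    public static void main(String[] args) {
        WidgetDTO dto = new WidgetDTO();
        dto.someAttribute = "some value";
        dto.anotherAttribute = "another value";

        Widget widget = new Widget();
        widget.updateSelf(dto);   // one coarse-grained transfer, like a PUT
        System.out.println(widget.getSomeAttribute());
    }
}
```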

Let's call the direct setter methods commands. "Remoting" these commands leads to a tightly-coupled, RPC mechanism. This is the Distributed Objects style. I would be the first to agree with JJ that this is a bad approach.

But the second style is a polite command. It's requesting the object to update itself based on values held in a Data Transfer Object (i.e., a representation). Now this is a style that can be remoted without problems, because it's not really RPC.

The REST style of updating resources follows the latter model.

PUT /widgets/1234
<some-attribute>some value</some-attribute>
<another-attribute>another value</another-attribute>

In other words, this is a polite command. It can be safely remoted. It's not Distributed Objects.

2. JJ laughs at the approach of annotating object methods to turn them into REST resources, the way JAX-RS does. This is another reason why he considers REST to be Distributed Objects. The annotations seem to be doing nothing more than remoting object methods. Therefore, REST = Distributed Objects and consequently a horrible way to design systems.

Not so fast.

Let's not forget the concept of Service Objects, which are not really Domain Objects.

Let's look at a simplistic domain model for a banking system. The major Domain Object in this model is an Account. An Account object has the following methods:

class Account {
    public double getBalance() {...}
    public void credit( double amount ) {...}
    public void debit( double amount ) {...}
}

The Domain Model is internally-focused. Nothing here understands the concept of a transfer. Indeed, a transfer cannot be elegantly modelled using domain objects.

That's because a transfer is an aspect of a Service. Service verbs are really free-standing verbs. They don't belong to classes the way domain methods do. The methods getBalance(), credit() and debit() don't make any sense by themselves. It always has to be account.getBalance(), account.credit() and account.debit().

In contrast, transfer() can be a free-standing verb. In fact, it seems downright clumsy to push transfer() into a class, because it really doesn't belong inside any class. It's just that languages like Java are unyieldingly object-oriented and don't tolerate free-standing verbs. So in practice, designers create an awkwardly-named class like AccountService and stick transfer() inside it, like this:

class AccountService {
    public void transfer( Account fromAccount, Account toAccount, double amount )
            throws InsufficientFundsException {
        if ( fromAccount.getBalance() >= amount ) {
            fromAccount.debit( amount );
            toAccount.credit( amount );
        } else {
            throw new InsufficientFundsException();
        }
    }
}

[If J2EE designers feel a sense of déjà vu on seeing this, it's the old Session Façade pattern all over again, with Stateless Session Beans acting as a service façade for domain objects represented by Entity Beans. Stateless Session Beans were not part of the Domain Model at all. They were an explicit service layer that lent itself to being remoted through (what else?) a Remote Interface.]

Now, if we tried to remote a method like credit() or debit(), it would be a classic case of RPC (RMI, strictly speaking) and therefore Distributed Objects hell.

But a "method" like transfer() readily lends itself to being remoted! That's because an instance of AccountService isn't a Domain Object but a Service Object.

If designers took care to annotate only the methods of Service Objects and avoided doing so with methods of Domain Objects, then they neatly avoid being trapped into the Distributed Objects paradigm.

All that leads to a certain insight. And that is that although REST is an architectural style, it isn't prescriptive enough. We need to tell designers what types of domain objects can be modelled as resources and what cannot.

With a nod to the term polite command, perhaps it's not enough for systems to be RESTful. They should also be RESPECTful :-).

I realise I blogged extensively on this more than 2 years ago, here and here.

Wednesday, June 09, 2010

The Real and Imagined Limitations of REST

One of the things that struck me about my discussion with JJ Dubray on a previous blog posting was how closely we agreed on fundamental architectural principles (decoupling of interface from implementation, avoiding a hub-and-spokes architecture, etc.), yet how diametrically opposed our views were on REST.

For example, I think REST does a great job of decoupling interface from implementation. JJ feels the exact opposite. Why?

Analysing the problem more closely, I guess the common examples of RESTful interface design are partly to blame.

1. A URI like
/customers/1234
where 1234 is the actual id of the customer in my database, would be a horrible example of tight coupling, in my opinion. I believe URIs should be opaque and meaningless. I think designers should take care to create a mapping layer that decouples their implementation data model from their externally exposed service data model.

For example, I would prefer this to be exposed:

http://bank.example.com/customers/cb77380-7425-11df-93f2-0800200c9a66

There should be a mapping within the service implementation that relates the opaque identifier "cb77380-7425-11df-93f2-0800200c9a66" to the customer's id within the domain data model, i.e., 1234. That would be true decoupling.
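
A minimal sketch of such a mapping layer (class and method names are my own, purely illustrative): the UUID handed out in URIs is the only identifier the outside world ever sees, while the numeric primary key stays on the inside.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical mapping layer between the service data model (opaque UUIDs)
// and the domain data model (internal numeric ids).
public class CustomerIdMapper {
    private final Map<UUID, Long> externalToInternal = new HashMap<>();
    private final Map<Long, UUID> internalToExternal = new HashMap<>();

    // Called when a customer is first exposed as a resource; the same
    // internal id always maps to the same external UUID.
    public UUID expose(long internalId) {
        return internalToExternal.computeIfAbsent(internalId, id -> {
            UUID external = UUID.randomUUID();
            externalToInternal.put(external, id);
            return external;
        });
    }

    // Called when a request arrives on an opaque URI; null if unknown.
    public Long resolve(UUID external) {
        return externalToInternal.get(external);
    }
}
```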

Mind you, the earlier example of tight coupling is not a limitation of REST, merely bad application design.

[Incidentally, I think UUIDs have given the world a wonderful gift in the form of an inexhaustible, federated, opaque identification scheme for resource representations. Decoupling identity is a key part of decoupling domain data models (Pat Helland's Data on the Inside) from service data models (Data on the Outside).]

2. Another bad example is the use of "meaningful" URIs. Even though the following two URIs may seem obviously related, client applications must not make any assumptions about their relationship:

http://bank.example.com/customers
http://bank.example.com/customers/1234

In other words, a client application must not assume that it can derive the resource representation for the set of customers by merely stripping off the last token "/1234" from the URI of an individual customer.

And this is not a limitation of REST either. The HATEOAS (Hypermedia as the Engine of Application State) principle says that client applications must only rely on fully-formed URIs provided by the service to perform further manipulations of resources. In other words, URIs are to be treated as opaque, and client applications must not attempt to reverse-engineer them by making assumptions about their structure.
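
As a sketch of this constraint (the class name and link-relation keys are invented): a representation hands the client fully-formed URIs keyed by link relation, and a well-behaved client follows the "collection" link rather than chopping "/1234" off the "self" URI.

```java
import java.util.Map;

// Illustrative only: a resource representation carries fully-formed
// link URIs. The client treats every URI as an opaque string.
public class Representation {
    private final Map<String, String> links;

    public Representation(Map<String, String> links) { this.links = links; }

    // The HATEOAS-friendly way to navigate: follow a named link.
    public String link(String rel) { return links.get(rel); }
}
```

A client that instead computed the collection URI by stripping the last path segment would break the moment the server re-mapped its URI space.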

Examples like these look bad to architects who understand the evils of tight coupling (and the people who produce them understand those evils too), but such naive examples have the virtue of being easy to understand.

A RESTful system would work perfectly well with URIs that looked like these:

http://bank.example.com/1c9a2f40-7425-11df-93f2-0800200c9a66
http://bank.example.com/82e5b650-7425-11df-93f2-0800200c9a66
http://bank.example.com/d4f0c9b0-7425-11df-93f2-0800200c9a66

The first is a URI for a customer, the second is a URI for a bank account and the third is a URI for an insurance policy. How about that? This would be architecturally clean, but most REST newbies would go, "Huh? Weird!" and turn away with a shudder.

Sometimes, the desire for understandability of a design by humans introduces semantic coupling that is anti-SOA. I guess there's a trade-off that we need to be aware of, but it's not a limitation of REST itself. It's an application designer's choice.

3. In another context, JJ has expressed his opinion that SOA is about asynchronous peer-to-peer interaction, and I completely agree. Where I think he has misunderstood REST is in its superficial characteristic of synchronous request-response. There are several well-known design patterns for asynchronous interaction using REST, so in practice, this is not a limitation at all. The HTTP status code of "202 Accepted" is meant for precisely this condition - "I've received your message but have not yet acted on it". At one stroke, it's also a model for reliable message delivery. Combine a POST with a one-time token to ensure idempotence, then keep (blindly) trying until you get a 202 Accepted response. Voila! Reliable message delivery.
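
A sketch of that retry-until-202 pattern (all names are invented, and the lossy network is simulated rather than real HTTP): the server remembers which one-time tokens it has already acted on, so a retried POST is acknowledged with 202 but never re-processed, and the client simply keeps retrying until it sees the 202.

```java
import java.util.HashSet;
import java.util.Set;

// Reliable delivery over an unreliable channel via an idempotent POST.
public class ReliablePost {

    // Server side: acts on each token exactly once, but always acknowledges.
    static class Server {
        private final Set<String> seenTokens = new HashSet<>();
        private int processedCount = 0;

        // Returns an HTTP-style status code.
        int post(String token) {
            if (seenTokens.add(token)) {
                processedCount++;   // act on it exactly once
            }
            return 202;             // "Accepted", even on a retry
        }

        int processedCount() { return processedCount; }
    }

    // Client side: keep (blindly) retrying until a 202 comes back.
    // Returns the number of attempts it took.
    static int deliver(Server server, String token, int lostResponses) {
        int attempts = 0;
        while (true) {
            attempts++;
            int status = server.post(token);         // the request arrives...
            if (attempts <= lostResponses) continue; // ...but the response is lost
            if (status == 202) return attempts;
        }
    }
}
```

Even with the first two responses lost, the operation is applied exactly once, which is the whole point of pairing the retry loop with a one-time token.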

I find I am able to look beyond the superficial characteristics of REST that seem like showstoppers to JJ. I can see simple workarounds and Best Practice guidelines that can make a RESTful application fully compliant with SOA principles. JJ stops at the characteristics as given and concludes that REST is horribly antithetical to SOA.

The Need for SOA in the Real World

To those who think SOA is just hype, this should be a sobering piece of news from the real world:

ANZ Bank hits a wall on financial software rollout

Sources have told The Australian that the bank is having second thoughts about going ahead with installing a Teradata solution made up of enterprise data-warehousing and database software into a predominantly Oracle-run house.
"It just doesn't marry well with the rest of the organisation as it's an Oracle house," one source said.
Anyone see the problem here?

Why should the brand of software being introduced have anything to do with a decision to implement? We in IT tend to treat this sort of argument as perfectly reasonable. Of course software from one vendor won't play nicely with software from another! We've internalised the obscenity of the situation. That's what makes this more horror than farce. We don't see the lack of interoperability between vendors as a problem, merely as a constraint.

Well, SOA thinking would have challenged this right off the bat. The brand of existing software and new software just doesn't matter. What matters is business functionality. As long as it's all wrapped up in contract-based services, they should play well together.

But we obviously don't live in that world. We live in a world where a strange kind of Stockholm Syndrome grips customers, preventing them from asserting their rights and causing them to acquiesce in a patently unfair seller's market. The "pragmatic" decisions that then follow create vendor lock-in and much higher outflows in the long run. So much for pragmatism.

What this news item tells me is that not only is the software vendor world not SOA-friendly, the business user community doesn't think in SOA terms either.

More's the pity.

Thursday, June 03, 2010

SOAP, REST and the "Uniform Interface"

When REST folk compare their approach with SOAP-based Web Services (which they often mistakenly call "SOA", not realising that REST is an equally valid way to do SOA), they often refer to a "uniform interface" that REST provides that SOAP does not.

The response of the SOAP crowd (when there is one), is "What's a uniform interface?" (They'd be surprised to hear that they have one too, which is the topic of this post.)

The uniform interface refers to a standard way to do something regardless of what the specific activity is.

As an example, if I asked what the SOAP interface would be like for a service, and I refuse to say what the actual business function is, the SOAP people would have to stop after describing the service at a very high level. There's a SOAP message with a header and a body, and what goes into them depends entirely on what I want to do. The general semantics of a SOAP message are "Process this". That's actually too general to be useful. Any further detail they could give me would depend on what I tell them about the actual operation I'm trying to perform.

In contrast, the REST guys can actually tell me much more about what my service is going to look like based only on the rough contours of my business function.

They can tell me that what I operate on is a resource that will look like this:

http://{host}/{resource collection}/{resource id}

They can tell me that I will be using one of four verbs (GET, POST, PUT or DELETE).

They can tell me that my response status codes will be one of some 50-odd pre-defined status codes.

Plus they can give me other general tips that will apply: a GET operation will have no side-effects; a deferred processing request will return a "202 Accepted" status when successful; a POST request will be made to a resource collection, its response status code will be "201 Created" when successful, and the URI of the newly created resource will be found in a response header called "Location"; and so on.
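
Those conventions can be sketched for a hypothetical collection resource (all names and the base URI are invented; the "response" is just a map standing in for an HTTP response): POST to the collection creates a member and answers 201 Created with the new URI in a Location header, while GET is side-effect free.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative in-memory collection resource following the uniform interface.
public class CustomerCollection {
    public static final String BASE = "http://bank.example.com/customers/";

    private final Map<String, String> members = new HashMap<>();

    // POST /customers -> 201 Created, with the member's URI in "Location".
    public Map<String, String> post(String representation) {
        String id = UUID.randomUUID().toString();
        members.put(id, representation);
        Map<String, String> response = new HashMap<>();
        response.put("status", "201 Created");
        response.put("Location", BASE + id);
        return response;
    }

    // GET /customers/{id} -> returns the representation, changes nothing.
    public String get(String uri) {
        return members.get(uri.substring(BASE.length()));
    }
}
```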

That's actually quite a bit of information considering I haven't said anything yet about the business function I'm trying to implement.

That's what is called a "uniform interface". It's a structured framework within which application designers have to work. Far from being restrictive and limiting, these constraints actually make things easier for the designer because they provide standard patterns that can be reused in similar situations and deliver predictable results.

So far, so good. What the REST side doesn't understand is that SOAP-based Web Services offer a uniform interface too. Obviously not for the application itself, but for qualities of service.

Ask a SOAP service designer how they plan to implement reliable messaging, and they can show you very detailed WS-ReliableMessaging headers that are virtually the same regardless of the business service. Ask them about security (encryption and signing/authentication) and they can show you the detailed WS-Security/SecureConversation/Trust headers required, and these too are standard and unchanging.

Security and Reliability are "Qualities of Services". SOAP-based Web Services have standardised these, and are also on track to standardising Transactions through WS-AtomicTransaction and WS-BusinessActivity (the latter deals with real-world transactions that require compensating actions to "roll back"). [Some may argue that these have in fact been standardised, but there isn't yet a WS-I Basic XXX Profile for these, like there is for Security and Reliability, which means there is no benchmark yet for interoperability in these areas.]

This isn't something to be sneezed at, because a uniform interface leads to improved interoperability, which in turn reduces the cost of integration. From my own experience in IT, I know that integration cost forms a very high proportion of the cost of most projects. Indeed, Project Tango proved the interoperability of Java and .NET-based Web Services using declarative statements in policy files to have these QoS headers automatically generated. Now that's cool.

REST promises to reduce integration cost and thereby overall cost, mainly through the mechanism of the "uniform interface". However, REST does not have a standard model for Security or Reliability, let alone Transactions or Process Coordination. SSL is a rigid security mechanism that is adequate for most, but not all, situations. And while it's true that Reliability is probably better implemented through patterns like Idempotence than through the TCP-like model followed by WS-ReliableMessaging, it requires an application designer to consciously build it. Transactions and Process Coordination are also missing from the REST lineup.

That's the challenge. Both SOAP and REST have standardised a part of the service interface, but not all of it. REST has standardised the interface of the application's core logic. SOAP has standardised the interface for Qualities of Service. What we need is a model that provides us with both.

Wednesday, June 02, 2010

Spring Integration - Enabling the Enterprise Service Cloud

Inversion of Control is a great concept, and the Spring framework excels at it.

Little did I think that Spring could also effect an Inversion of Architecture! I just recently blogged about the need for organisations to move from a hub-and-spokes ESB (Enterprise Service Bus) to its architectural opposite, a federated ESC (Enterprise Service Cloud).

In the hub-and-spokes model, you run an application (a Service) on the ESB. To 'invert' that architecture and make it federated, you need to embed an ESB into your application! Spring Integration allows us to do just that.

This manual provides a number of examples that explain how the tiny, deconstructed pieces of ESB functionality can be injected into an application as required.
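
As a sketch of the idea (bean names are mine, and the exact schema locations depend on the Spring Integration version in use), a fragment like this injects a message channel and a service activator straight into an ordinary Spring application context, with no ESB server in sight:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd">

    <!-- Two tiny, deconstructed pieces of "ESB" functionality
         living inside the application itself. -->
    <int:channel id="transferRequests"/>

    <int:service-activator input-channel="transferRequests"
                           ref="accountService" method="transfer"/>

    <bean id="accountService" class="com.example.AccountService"/>
</beans>
```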

That neatly solves the problem of having to deploy a heavyweight ESB server at every endpoint to front-end each application, which would have been the only way to achieve federation with the heavyweight approach (and a clearly unacceptable solution even if one ignores licence costs).

I used to think that a federated ESB was the answer and that JBI (Java Business Integration) provided the most flexibility to deploy tailored endpoints. (JBI allows one to plug in specific ESB modules into a common "backplane", but Spring Integration goes a step further and uses the application itself as the backplane.)

The future seems to be here already.

Wanted: An ESC, not an ESB

Here's my fundamental beef with an ESB: It follows a hub-and-spokes model that is more suited to an EAI initiative of the 1990s. In the year 2010, we should be doing SOA, not EAI.

SOA is federated where EAI is brokered. It's the architectural antithesis of EAI. The smarts are pushed out from the middle to the endpoints until there's no "middle" anymore. I wrote about the difference between "federated" and bad old "point-to-point" before. Federated is a standardised smartening of the endpoints.

It doesn't matter whether we do SOAP or REST. Either way, it's SOA, and it's inherently federated.

Both SOAP's Web Service Endpoints and REST's Resources are represented by URIs. URIs offer a "flat" address space. They're completely opaque. DNS can hide where the services physically sit, so any node can talk to any other without the need for an intermediary.

Look at a SOAP message. Look at the headers - WS-Addressing, WS-Security, WS-ReliableMessaging. They're all end-to-end protocols. The SOAP engines at the endpoints negotiate and handshake every Quality of Service they need without depending on anything in between.

WS-ReliableMessaging uses endpoint-based logic such as acknowledgements, timeouts and retries to provide a measure of reliability even over unreliable transports, much like TCP provides connection-oriented communications over a plain packet routing protocol like IP. There is no need for a "guaranteed delivery" transport like a message queue.

WS-Trust does key exchange, WS-SecureConversation sets up a session key and WS-Security encrypts messages, all in an end-to-end fashion. There is no need for a secure transport.

Any node that talks SOAP and WS-* and conforms to the WS-I Basic Profile (and the Basic Security Profile and Reliable Secure Profile) can participate. That's federation.

This isn't an academic discussion. The main drawbacks of a hub-and-spokes model are Availability and Scalability: the ESB is a single point of failure and a performance bottleneck. The normal "solution" is to beef up the ESB with redundancy and "High Availability", but this costs a fair bit and only postpones the inevitable. There are serious dollars involved. Organisations would do well to think about the implications of an ESB approach and take early steps to avoid the costs that lie along that path. The correct solution is to do away with the centralised ESB altogether and embrace the inherently federated model of SOAP (or REST).

Otherwise, we're forcing inherently federated protocols into a hub-and-spokes model (a needless throwback to an earlier era) and inviting problems that need to be "solved" at great expense. (Of course, it's a different matter if we use multiple instances of the ESB as brokers close to the endpoints, but economics militates against such a topology, so I suspect federated ESBs will not be feasible.)

In the year 2010, when clouds are all the rage, we should recognise that what SOAP and REST give us are "service clouds".

So that's my buzzword for today - ESC (Enterprise Service Cloud)!

The ESB is dead, long live the ESC!