Monday, January 21, 2008

Roy Fielding's Fundamental Omission

I think there is a fundamental omission at the heart of the argument that leads to the REST architectural style. It doesn't invalidate the style itself, but it does lead one to question REST's pretensions to being "the only way".

To understand this omission, it's important to study Roy Fielding's doctoral thesis, because that is the Bible of REST. The architectural principles of the Web were enunciated there for the first time, as an architecture. Before this, for many people, the Web just was. After Fielding's work, the Web began to be seen as something consciously designed and built according to well thought-out principles.

In Chapter 5, Fielding develops the REST model systematically and step-by-step, starting with a "null style" architecture with no constraints and then layering constraints upon it one by one in a controlled and logical manner. Indeed, this seems to be one of the core RESTian philosophies, that "constraints empower". With each set of restrictions that you place on an architecture, you actually get new benefits. When you're finished with all the constraints (capabilities) you need, the theory goes, you end up with the ideal architecture.

Indeed, on first walking through the analysis, the REST architecture seems to evolve naturally, inexorably and inevitably. With each new constraint (e.g., statelessness, caching, a uniform interface), the model moves closer to REST. The diagram towards the end of section 5.1 (which I'll call "All Roads Lead to REST" :-) reinforces this impression. No matter in what order we choose to layer our constraints (capabilities), we will always end up with REST.

Diagram: "All Roads Lead to REST"

At this point, I will invite my readers to pause and pop over to this page in Fielding's thesis to see for themselves if they're convinced, or if they think there's something fishy there. Don't read my post any further until you've formed your own opinion.

My fundamental issue with Fielding's argument is how he leaps straight from the null style architecture to the client-server model. When I first read this, I went, "Whoa, not so fast!" Because isn't there something between the null style and the client-server style? How about peer-to-peer?

I would argue that peer-to-peer is a more basic model than client-server, because we can always layer an additional constraint on the peer-to-peer model to convert it into client-server. The constraint is simply that only one peer can initiate an interaction and the other peer can only respond. In other words, request-response semantics.
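To make this concrete, here is a minimal sketch of the claim (all class and method names are mine, purely illustrative): client-server is just peer-to-peer with one capability removed.

```python
class Peer:
    """Peer-to-peer style: every node can both initiate and receive."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, message):          # any peer may initiate
        other.receive(self.name, message)

    def receive(self, sender, message):
        self.inbox.append((sender, message))


class Server(Peer):
    """Layer on the constraint: a server may never initiate, only respond."""
    def send(self, other, message):
        raise RuntimeError("servers may not initiate interactions")

    def respond(self, request):              # request-response semantics
        return f"{self.name} handled: {request}"


class Client(Peer):
    def request(self, server, message):      # only the client initiates
        return server.respond(message)


a, b = Peer("a"), Peer("b")
a.send(b, "event!")                          # peers can notify each other
c, s = Client("c"), Server("s")
print(c.request(s, "GET /resource"))         # request-response still works
# s.send(c, "event!")                        # raises: the capability we lost
```

Note the last commented line: the only thing the extra constraint removes is the server's ability to speak first.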

Now here's my second objection to the REST philosophy. We don't have to glorify all constraints as unqualified positives for an architecture. Sometimes constraints are nothing more than just irritating, chafing restrictions. What does the request-response constraint buy us except the immediate loss of event notification capability (including callbacks)? (Using firewalls as a justification to ban unsolicited communications is circular reasoning. Firewall behaviour is informed by the web's request-response model.)

Besides, even if request-response wasn't a crippling limitation, client-server doesn't model the way things work in the real world. The real world is not client-server, it's peer-to-peer. Between organisations, there is a relationship of equals ("peers"). No organisation likes to see itself as controlled by another. All organisations are proudly autonomous. We need a peer-to-peer model to govern interactions between autonomous organisations, because that most closely models reality.

Even within a corporate setting, the various departments and product systems behave like autonomous units. My employer's HR system isn't sitting there passively, just waiting to answer queries and accept updates to employee data. It must actively remind me to fill in my weekly timesheet and remind my boss that it's time for my annual performance appraisal. But with a web architecture, that's needlessly hard to implement. With only browser-based interfaces to the HR system, users must log in before they can receive any events, and even then events only arrive on login or when they actually do something; they can't receive notifications while just sitting there staring at the screen. So we resort to polling the system through page refreshes, or we go right around the limitations of the web architecture and get the HR system to send us e-mails instead. (Or we use AJAX clients, which hide the polling. AJAX is clever lipstick on the request-response pig.)
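In code, the workaround looks something like this naive polling loop (the endpoint URL and payload shape are hypothetical, just to illustrate the pattern):

```python
import json
import time
import urllib.request

def poll_for_reminders(url, interval_seconds=60):
    """Repeatedly ask the HR system whether anything has happened.
    Wasteful when the answer is almost always no, and events arrive
    up to a full polling interval late."""
    while True:
        with urllib.request.urlopen(url) as resp:
            for reminder in json.load(resp):
                print("reminder:", reminder)
        time.sleep(interval_seconds)

# poll_for_reminders("http://hr.example.com/api/reminders?user=ganesh")
```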

[I know that at this point, many REST advocates will point out that REST is not about one-way client-server. Each node can be both a client and a server. But two pairs of client-server nodes facing in opposite directions are not the same as a simple peer-to-peer system. That's trying to put Humpty-Dumpty together again.]

So now go back and read Roy Fielding's thesis again. Why, pray why, did Roy curse us with client-server with one perfunctory stroke of his pen, when he could have spent a few minutes exploring the peer-to-peer model? Mind you, there are enough constraints built into the peer-to-peer model when you assume autonomous peers. In fact, the autonomous peer-to-peer model itself can lead to the specification of important concepts like high cohesion (within a peer's domain) and loose coupling (between peers), the messaging paradigm, stateless interactions, discovery, logical addressing (which in turn leads to late binding, substitutability, routability, proxyability), metadata (generic policies as well as domain models), etc. Proxyability in turn leads to the notion of intermediaries (which can do caching, encryption, authentication and a whole heap of domain-aware functions as well). And then there's the whole hierarchy of capabilities that comes out of defining context and refining it:

Context -> Correlation -> Coordination -> Transactions
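For instance, the first two steps fall out almost immediately once peers exchange messages asynchronously. Here's a minimal sketch (the message shape is my own invention): a correlation ID establishes a shared context, and echoing it back lets a peer match replies to earlier messages without any shared session state.

```python
import uuid

def new_message(body):
    # Context: each conversation carries an identifier both peers can see.
    return {"correlation_id": str(uuid.uuid4()), "body": body}

def reply_to(message, body):
    # Correlation: the reply echoes the ID, so interactions stay stateless
    # between autonomous peers -- no shared session is needed.
    return {"correlation_id": message["correlation_id"], "body": body}

request = new_message("please ship order 42")
response = reply_to(request, "order 42 shipped")
assert response["correlation_id"] == request["correlation_id"]
```

Coordination and transactions then build on the same idea: multiple correlated exchanges governed by an agreed protocol.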

My, my, so many of the concepts we're wrestling with today, and we haven't had to move a step away from peer-to-peer! It appears that we can build a complete architectural style for SOA based on messaging between autonomous peers without taking on the needless additional constraints of client-server (which, as we argued before, buys us nothing but a loss of event notification capability). What was Roy thinking?

I can't help having mischievous thoughts at this stage. If Roy Fielding's analysis had been more rigorous, and he had properly explored the peer-to-peer model instead of jumping straight into the client-server model, would he have been known today as the father of SOAP messaging?

5 comments:

Integral ):( Reporting said...

Ganesh:

Yes, your analysis is correct. I think the RESTian community gets it now.

The RESTian style is important, but as you point out so eloquently it is not enough for the "enterprise".

Great post. I don't think we will see much more WS-* bashing anyway; they have too much work to do...

JJ-

stu said...

"Roy curse us with client-server with one perfunctory stroke of his pen, when he could have spent a few minutes exploring the peer-to-peer model? "

What are the constraints on a peer-to-peer model? As far as I can tell, there are none. It's the same as the null style.

On the other hand, Roy does speak about a couple of constrained peer-to-peer models: EBI and C2.

But, basically, I assume your issue is: "why doesn't HTTP have WATCH/NOTIFY semantics?" And the answer is: the people building the web didn't think it was necessary at the time.

There will come a day where a successor to REST becomes implemented and popularized and relaxes the client/server constraint. Maybe you could build it. ;-)

"Now here's my second objection to the REST philosophy. We don't have to glorify all constraints as unqualified positives for an architecture."

Firstly, I don't think anyone is glorifying all constraints. Look at the section again: Roy goes through both positives and negatives of a variety of architectural styles and their various constraints.

Secondly, the study of constraints isn't a REST philosophy, this is the _definition_ of software architecture styles, at least as defined by the academic community, such as the SEI or the IEEE.

"What does the request-response constraint buy us except the immediate loss of event notification capability (including callbacks)?"

Separation of concerns. This leads to enhanced scalability (if you can simplify the server, you can enhance the overall system's scalability. Most servers don't have to render a UI, for example.) It also helps implementation evolvability by splitting responsibilities, so long as the interface doesn't change.

"Besides, even if request-response wasn't a crippling limitation, client-server doesn't model the way things work in the real world. The real world is not client-server, it's peer-to-peer."

Orly? The real world isn't binary, I guess we should all stop using computers.

Look, arguments like the above get us nowhere -- you're basically saying "it doesn't feel right", or "client/server doesn't cover every possible use case". No doubt it doesn't. No architecture does! You have to fit the domain to your desirable properties.

I find many in the WS-* world suffer under some kind of belief that there is one universal systems architecture, and it is peer-to-peer asynchronous messages. Sorry, that's only universal because it doesn't actually constrain anything. It's just a better toolkit from which you can build anything. Some may find that interesting; I think it just makes for more re-invention and less progress on the stuff that matters. The argument is not very different from saying we should build all of our applications on top of UDP, since TCP is unduly constrained.

I can't name a successful, scalable, multi-organizational software system that truly works as a full-stack peer-to-peer based architecture. Even constrained event-based systems, like TIBCO/Rendezvous, are very localized due to their scalability constraints as you add members to the peer group, and require client/server bridges to scale up.

"All organisations are proudly autonomous. We need a peer-to-peer model to govern interactions between autonomous organisations, because that most closely models reality."

Interestingly enough, the Web, and networked hypermedia, enables exactly the above. Yet it sits on top of client-server. Again, this is why I mentioned Galloway. Control at one level of the stack can lead to freedom at higher levels of the network stack.

"AJAX is clever lipstick on the request-response pig."

Sure, but it's often more advantageous to fit within a deployed & successful architecture than to create your own non-interoperable one.

Again, I think the day will come where the web standards will become less client-server constrained and adopt a successor to REST -- we're already seeing it with Comet, AJAX, etc. But the point is not that it should be unconstrained peer-to-peer, like SOAP. Any successor should take the lessons of the web to date and apply them when relaxing the client/server constraint.

"In fact, the autonomous peer-to-peer model itself can lead to the specification of important concepts like high cohesion (within a peer's domain) and loose coupling (between peers), the messaging paradigm, stateless interactions, discovery, logical addressing (which in turn leads to late binding, substitutability, routability, proxyability), metadata (generic policies as well as domain models), etc."

You do realize that REST already has constraints to enable almost all of the above -- the only exception is the client/server constraint. So, in your systems, why not relax it, but keep the rest? The main trick is to fit it within the uniform interface.

Rohit Khare's ARRESTED thesis shows an example of how this might be done (and emulated within the current confines of HTTP, foreshadowing AJAX/Comet).
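To give a flavour of the emulation (this is not ARRESTED's actual design, just the general Comet-style trick it foreshadowed, sketched with the Python standard library): hold the client's request open until an event occurs, so the response doubles as a notification.

```python
import queue
from http.server import BaseHTTPRequestHandler, HTTPServer

events = queue.Queue()

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The client asks; the server blocks until it has something to say,
        # so server-to-client "push" rides on an ordinary request-response.
        event = events.get()              # waits for an event to be raised
        self.send_response(200)
        self.end_headers()
        self.wfile.write(event.encode())

# events.put("timesheet due")            # some part of the app raises an event
# HTTPServer(("", 8000), LongPollHandler).serve_forever()
```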

prasadgc said...

Stu,

Your comments confirm many of my thoughts.

> What are the constraints on a peer-to-peer model? As far as I can tell, there are none. It's the same as the null style.

I don't agree. The null style corresponds to the monolithic application, where everything can be assumed local.

Peer-to-peer introduces a very important constraint - autonomous domains. Why is autonomy a constraint? Because one peer's autonomy is another peer's lack of control. The inability to assume control forces designers towards good architectural practice, namely high cohesion/low coupling. It encourages the design of more modular systems in the large, which is what SOA is about.

If you introduce peer-to-peer between the null style and client-server and explore it rigorously, you will find you can surface many of the issues that Roy Fielding does after (hastily) moving on to client-server. These are not client-server issues, they are issues that emerge even with peer-to-peer. No, I still don't think Roy did a good job of this analysis.

> On the other hand, Roy does speak about a couple of constrained peer-to-peer models: EBI and C2.

Yes, but disappointingly, he doesn't explore other ways to develop the peer-to-peer model.

> But, basically, I assume your issue is: "why doesn't HTTP have WATCH/NOTIFY semantics?" And the answer is: the people building the web didn't think it was necessary at the time.

> There will come a day where a successor to REST becomes implemented and popularized and relaxes the client/server constraint.


Well, it would be good to have this more generally recognised as a REST limitation which is on the roadmap to fix. It's limitations like this I'm looking to understand, but it's not something the REST community volunteers by itself unless someone asks hard questions.

> Maybe you could build it. ;-)

I was never a systems programmer. Application development is my space. I just criticise the platforms I have to work with. I don't fix them ;-).

> Firstly, I don't think anyone is glorifying all constraints. Look at the section again: Roy goes through both positives and negatives of a variety of architectural styles and their various constraints.

I'll elaborate on this in my next comment.

> Secondly, the study of constraints isn't a REST philosophy, this is the _definition_ of software architecture styles, at least as defined by the academic community, such as the SEI or the IEEE.

I stand corrected. But the spirit of my comment is that I've noticed that REST proponents make a big deal out of constraints, as though something becomes a virtue simply because it is a constraint.

>"What does the request-response constraint buy us except the immediate loss of event notification capability (including callbacks)?"

> Separation of concerns. This leads to enhanced scalability (if you can simplify the server, you can enhance the overall system's scalability. Most servers don't have to render a UI, for example.) It also helps implementation evolvability by splitting responsibilities, so long as the interface doesn't change.

I have a number of orthogonal objections to these statements.

1. "Separation of concerns" is an overloaded term. I suspect that the main two concerns that people think of are user interface and business logic. I think that is a second-order issue. I think "separation of domains" is a far more important issue for architecture because it leads directly to modular distributed systems.
2. Separation of concerns can be applied at any level. It's not a feature of distributed systems alone. The MVC model does this on the client, separating model, view and controller, all different concerns. I think the bigger issue between the "client" and the "server" peers is separation of domains. I think in the new world of mashups (client-side as well as server-side), we will see business logic (esp. composition) beginning to reside on the "client" peer, as part of the controller.
3. The big issue with traditional client-server has always been event notification (server-to-client). Every client-server system I know has its own (non-standard, proprietary) mechanisms to achieve this. To my mind, it's a smell that indicates that some architectural boundaries were hastily and incorrectly drawn. To paraphrase Don Knuth, premature separation of concerns is the root of all evil.

>Besides, even if request-response wasn't a crippling limitation, client-server doesn't model the way things work in the real world. The real world is not client-server, it's peer-to-peer.

> Orly? The real world isn't binary, I guess we should all stop using computers.

That's a strawman argument. Think seriously about what I'm saying. The world consists of autonomous peers. It's the separation of domains that peer-to-peer models.

> Look, arguments like the above get us nowhere -- you're basically saying "it doesn't feel right", or "client/server doesn't cover every possible use case". No doubt it doesn't. No architecture does! You have to fit the domain to your desirable properties.

By definition, models simplify and abstract out details - no argument there. Good models focus on the essence of what they model. I believe the essence of the "real world" is collaboration between autonomous peers. "Peer-to-peer" models that reality more faithfully than "client-server". Unlike you, I think "autonomous peer-to-peer" is rich with meaningful constraints. It's not a null style at all.

> I find many in the WS-* world suffer under some kind of belief that there is one universal systems architecture, and it is peer-to-peer asynchronous messages. Sorry, that's only universal because it doesn't actually constrain anything.

Well, apart from nudging designers towards more modular distributed systems design with high cohesion and low coupling (the goal of SOA), it contributes nothing at all!

"All right... all right... but apart from better sanitation and medicine and education and irrigation and public health and roads and a freshwater system and baths and public order... what have the Romans done for us?" - Monty Python's Life of Brian

> It's just a better toolkit from which you can build anything. Some may find that interesting; I think it just makes for more re-invention and less progress on the stuff that matters. The argument is not very different from saying we should build all of our applications on top of UDP, since TCP is unduly constrained.

Obviously, I wasn't talking about plain peer-to-peer. Autonomous peer-to-peer with additional constraints such as statelessness, logical addressing, etc., is what does the trick. Basically REST, but without the (unjustified) client-server constraint.

> I can't name a successful, scalable, multi-organizational software system that truly works as a full-stack peer-to-peer based architecture.

Until REST came along, I'll bet no one could name a successful, scalable, multi-organizational software system that worked according to client-server principles either. That's hardly an argument. There's interesting stuff happening with SIP and OSGi that may do for peer-to-peer what REST did for client-server.

> Even constrained event-based systems, like TIBCO/Rendezvous, are very localized due to their scalability constraints as you add members to the peer group, and require client/server bridges to scale up.

You know where I stand on centralised (even "federatedly" centralised) message brokers.

>"All organisations are proudly autonomous. We need a peer-to-peer model to govern interactions between autonomous organisations, because that most closely models reality."

> Interestingly enough, the Web, and networked hypermedia, enables exactly the above. Yet it sits on top of client-server. Again, this is why I mentioned Galloway. Control at one level of the stack can lead to freedom at higher levels of the network stack.

I wasn't arguing that REST doesn't work. Clearly it does. I'm saying Roy Fielding may have arrived at a very different model (based on peer-to-peer) if he had explored that rigorously enough. If the REST folk are working to incorporate WATCH/NOTIFY semantics into HTTP, then I assume my criticism is valid.

Roy missed out on the advantages of peer-to-peer by leaping ahead to client-server, and now REST needs enhancements to put back those capabilities.

>AJAX is clever lipstick on the request-response pig.

> Sure, but it's often more advantageous to fit within a deployed & successful architecture than to create your own non-interoperable one.

That's probably the strongest argument for REST, i.e., that the infrastructure is already there. But my criticism in this blog entry is about the architectural basis for REST. To my mind, that is still questionable.

> Again, I think the day will come where the web standards will become less client-server constrained and adopt a successor to REST -- we're already seeing it with Comet, AJAX, etc. But the point is not that it should be unconstrained peer-to-peer, like SOAP. Any successor should take the lessons of the web to date and apply them when relaxing the client/server constraint.

Oh, absolutely. I hate the client-server aspect of SOAP-RPC as much as the next guy, but I think the concept of SOAP Messaging (however inadequately supported today) is already making that point. Conceptually at least, it looks like SOAP Messaging is closer to that vision than REST is. And SOAP Messaging is not "unconstrained peer-to-peer", as you claim. It is autonomous peer-to-peer with a bunch of other constraints as well (e.g., statelessness). Of course, a lot of this can be negated by poor application design, but that's the case with any platform, even REST.

>In fact, the autonomous peer-to-peer model itself can lead to the specification of important concepts like high cohesion (within a peer's domain) and loose coupling (between peers), the messaging paradigm, stateless interactions, discovery, logical addressing (which in turn leads to late binding, substitutability, routability, proxyability), metadata (generic policies as well as domain models), etc.

> You do realize that REST already has constraints to enable almost all of the above -- the only exception is the client/server constraint. So, in your systems, why not relax it, but keep the rest? The main trick is to fit it within the uniform interface.

Sure, but again, I'm not arguing that REST doesn't do these things. This post was a critique of the architectural basis for REST. Rather than leap ahead to client-server and then backtrack to pick up the event notification capability you dropped, why not start the right way and get to where you want by developing the peer-to-peer model logically and rigorously?

> Rohit Khare's ARRESTED thesis shows an example of how this might be done (and emulated within the current confines of HTTP, foreshadowing AJAX/Comet).

I'll have a look, but from your description, I don't think it will negate any of my points.

Thanks for a very interesting discussion. My mental models of distributed systems architecture have become far more sophisticated than they were a couple of months ago, thanks to such lively debate.

Regards,
Ganesh

stu said...

> I don't agree. The null style corresponds to the monolithic application, where everything can be assumed local.

All that "null" means, in Roy's thesis anyway, is the absence of distinguished constraints.

I'm not sure I would read it as a "monolithic & local application" so much as an architecture without clearly identified concerns, or without clearly identified constraints that address those concerns.

> Peer-to-peer introduces a very important constraint - autonomous domains. Why is autonomy a constraint? Because one peer's autonomy is another peer's lack of control.

Firstly, that's not at all implicit in the term "peer-to-peer".

Secondly, autonomy can be a nebulous concept.

Autonomy has many different levels and scopes. There's deployment autonomy. Data definition autonomy. Interface autonomy. All three imply different power structures and require different architectural constraints.

For example, consider a set of peers interacting via a shared message queue. Are they autonomous in their deployments? Not if they share a queue manager! Move that queue manager, and dependencies break. With MQ that's a common scenario, as a single queue manager is pretty easy to manage.

A more autonomous approach would be for each peer to have its own queue and cross-subscriptions. That way, any peer can join or leave the collaboration at will -- a clear example of autonomy. But now we've got some complexity to deal with in managing these peers.
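Roughly, in code, the autonomous end of that contrast looks like this (all names are hypothetical):

```python
class AutonomousPeer:
    """Each peer owns and deploys its own queue, and subscribes to the
    peers it cares about, so any peer can join or leave at will."""
    def __init__(self, name):
        self.name = name
        self.queue = []                  # managed by this peer alone
        self.subscribers = []

    def subscribe(self, other):
        other.subscribers.append(self)   # register interest in 'other'

    def publish(self, message):
        for peer in self.subscribers:
            peer.queue.append((self.name, message))

hr, payroll = AutonomousPeer("hr"), AutonomousPeer("payroll")
payroll.subscribe(hr)                    # payroll chooses to listen to hr
hr.publish("new employee: 1234")
print(payroll.queue)                     # [('hr', 'new employee: 1234')]
```

Contrast that with a single shared queue manager, where moving or retiring the one central queue breaks every peer at once.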

But, let's say they are autonomous. How about data definitions? Does each peer determine its own data format? That certainly is "autonomous", but it means each peer will also need to be able to translate every other peer's data format. So, a common approach is to use a "canonical data model" that is managed independently of all the peers.

Yet now we have to ask: are these peers under the governing control of some higher (federal) authority that imposes its "canonical version" of data on us? Could a peer break out of this arrangement if it wanted to? If not, it's being compelled by a higher authority, and not really autonomous.

But if so, perhaps the peers are simply part of a (confederated) arrangement wherein they adopt this canonical model out of their own self-interest. That would preserve autonomy, but it recognizes that only the most general concepts will be "canonical", and specific data elements would require a more complex set of constructs to deal with how semantics are mapped between organizations.

These sorts of questions have a major impact on the scale of the system. A federal model can only scale so far, for example. Confederation (which is a polite synonym for "anarchy") seems to be all that works at a huge scale for data interoperability; say, across the entire U.S. Department of Defense.

>>On the other hand, Roy does speak about a couple of constrained peer-to-peer models: EBI and C2.

> Yes, but disappointingly, he doesn't explore other ways to develop the peer-to-peer model.


Actually, Roy spoke quite admiringly about C2. REST and C2 have a lot in common. Roy's presentation on Waka (his some-day-maybe successor to HTTP) added WATCH/NOTIFY semantics to the uniform interface.

> Well, it would be good to have this more generally recognised as a REST limitation which is on the roadmap to fix. It's limitations like this I'm looking to understand, but it's not something the REST community volunteers by itself unless someone asks hard questions.

Eh, it's pretty hard to upgrade the Web. People are taking their time because it's not a huge pain (yet), and there are workable alternatives. We don't need a single architecture to rule them all.

> The big issue with traditional client-server has always been event notification (server-to-client). Every client-server system I know has its own (non-standard, proprietary) mechanisms to achieve this. To my mind, it's a smell that indicates that some architectural boundaries were hastily and incorrectly drawn. To paraphrase Don Knuth, premature separation of concerns is the root of all evil.

I don't follow your logic.

Firstly, event notification is a completely separate architectural style from client/server. You are free to integrate your system with it (i.e. have both events and client/server), but one should be disciplined about understanding when and where this should occur. For a mostly reactive service, you gain savings in latency and resource consumption vs. polling. Sometimes that's a necessary thing, particularly if a piece of data changes often and resources are scarce. But I would suggest that saying one "must" have events is more of an example in practice of Knuth's original quote (which originally came from Hoare, and is often misunderstood).

Perhaps the most common client/server system is Remote Data Access (i.e. SQL databases). They generally do not have server-to-client communication. I admit that recent drivers have changed this by enabling change notifications for cache invalidation. But for 95% of uses, there is no such need for events. Want to check if a row changed? Issue a new SELECT statement.
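For example, using sqlite3 from the Python standard library (table and column names invented for illustration), "detecting" a change is just issuing the query again:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, appraisal_due TEXT)")
conn.execute("INSERT INTO employees VALUES (1, '2008-02-01')")

def row_changed(conn, emp_id, last_seen):
    # No server-to-client events here: to notice a change, just SELECT again.
    row = conn.execute(
        "SELECT appraisal_due FROM employees WHERE id = ?", (emp_id,)
    ).fetchone()
    return row[0] != last_seen

print(row_changed(conn, 1, "2008-02-01"))   # False: nothing changed yet
```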

> Until REST came along, I'll bet no one could name a successful, scalable, multi-organizational software system that worked according to client-server principles either.

EDI.

SMTP.

Both of these have application-layer asynchrony layered on top of rather synchronous client/server protocols.


> You know where I stand on centralised (even "federatedly" centralised) message brokers.


Ahh, but TIBCO/Rendezvous isn't a centralized broker. It's true peer-to-peer multicast. (Which is why it's so fast.) Still doesn't scale easily beyond a well-managed LAN. ;-)

> I'm saying Roy Fielding may have arrived at a very different model (based on peer-to-peer) if he had explored that rigorously enough. If the REST folk are working to incorporate WATCH/NOTIFY semantics into HTTP, then I assume my criticism is valid.

Peer-to-peer hypermedia would have been rather far-fetched at the time, no? I mean, most modern work on Internet-scale peer-to-peer didn't start until the late 1990s. Most so-called P2P technologies are client/server hybrids (e.g. SMTP, NNTP, Napster, BitTorrent).

The examples we do have of pure peer-to-peer models working on the Internet leave a lot to be desired (e.g. Gnutella, Freenet).

Do I think the criticism is valid? That depends. I agree it's a limitation of the style, and that other styles are better suited to event notification (though we're doing it with blogs). On the other hand, I don't agree that it was an oversight or a mistake. It was a well understood trade-off that was made.


> Roy missed out on the advantages of peer-to-peer by leaping ahead to client-server, and now REST needs enhancements to put back those capabilities.


The Web was invented by TBL. I don't think Roy was trying to re-invent the Web, he was trying to codify why it worked, and ensure that improvements to it kept those constraints in mind.

> Good models focus on the essence of what they model. I believe the essence of the "real world" is collaboration between autonomous peers. "Peer-to-peer" models that reality more faithfully than "client-server".

If you're talking about conceptual modeling, I'm in complete agreement.

If you're talking about software architecture, we have a problem.

First, let's consider architecture as something that needs to occur at various levels of a stack. The OSI model has 7 layers, for example, though I like to include layer 8 (economics) and layer 9 (politics), which I learned from Rohit Khare.

Given this, I find it bemusing that we're talking about the Web being some kind of client-server authoritarian dystopia that doesn't mimic reality. The Web seems very clearly a peer-to-peer network at layers 8 and 9 -- far more than most corporate information systems, which are client/server all the way up to those economic and political layers of the stack. The same holds for email (SMTP, POP, and IMAP).

Being client/server at the application layer is immaterial to its peer-to-peer nature at higher levels. Control at the lower layer enabled freedom at the higher level. Check out this old excerpt on the politics of Web networking from a favorite author of mine.

My point is that the digital world is not congruent with the real world, and it's a fallacy to assume that it should be at every layer of abstraction.

Yes, we should work towards building abstractions that mirror the world, but this isn't appropriate at every level. Event-based styles have some fundamental trade-offs (scalability and complexity being two), so we need to be judicious in where they are used and why.

Thanks for your time.

Unknown said...

@Ganesh: in addition to everything Stu said (which is amazingly insightful), if there's anything I learned from working on SOA- and RPC-oriented standards like CORBA and WS-* for all those years, it's that such "architectures" tend to consist largely of a lot of design-by-committee stuff that's never even been tried in the real world. Some vendor decides they need it in the standard, they propose some half-assed specification for it that's never even been implemented, and then they pull whatever behind-the-scenes political maneuvers they need to in order to get it voted through.

REST, thankfully, does not suffer from this malady at all. Fielding did exactly the right thing by sticking to YAGNI principles, thereby avoiding the premature standardization and design-by-committee mistakes that continually plague the enterprise standards crowd.