Tuesday, November 25, 2008

JSON Schema is a Game-Changer

I have just become aware of a proposal that could change my opinion of JSON, of XML, and a number of other positions I have held.

In the paper I co-authored on SOFEA, we were emphatic that JSON could not cut it as a format for data interchange because it lacked sufficient rigour to enforce service contracts. One of the main points behind a *Service-Oriented* Front-End Architecture was the ability to connect seamlessly to services, and services (by definition) need to have formal contracts. A front-end that doesn't respect data becomes a weak link in the end-to-end chain of data integrity and defeats a major goal of SOA.

With regard to data, we need to be able to specify three things: data types, data structures and data constraints (rules). JSON has only loose data types; it supports hierarchical data structures but cannot enforce data constraints. XML, in contrast, supplies all three, making it a superior choice.

I will freely admit that our choice of XML over JSON was not made without regret. JSON is far simpler to work with than XML, and one of our goals with SOFEA has been simplicity. We had to give a reluctant thumbs-down to JSON only because of its lack of rigour.

But now at last, it appears that our requirement for rigorous contract definition and enforcement is being addressed with JSON. This is the JSON Schema proposal from Kris Zyp.
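To make the idea concrete, here's a toy validator sketched in Python. This is emphatically not Kris Zyp's actual proposal, just an illustration of how a schema could pin down all three things at once - types, structure and constraints - for JSON-shaped data (the rule names `required` and `minimum` are my own illustrative choices):

```python
# Toy JSON-schema-style validator: types, structure and constraints.
# Illustrative only -- not the actual JSON Schema proposal.

def validate(instance, schema):
    """Return a list of violation messages (an empty list means valid)."""
    errors = []
    expected = schema.get("type")
    type_map = {"object": dict, "array": list, "string": str,
                "number": (int, float), "boolean": bool}
    # 1. Data types
    if expected and not isinstance(instance, type_map[expected]):
        errors.append("expected %s, got %s" % (expected, type(instance).__name__))
        return errors
    # 2. Data structures (recurse into object properties)
    if expected == "object":
        for key, subschema in schema.get("properties", {}).items():
            if key in instance:
                errors.extend(validate(instance[key], subschema))
            elif subschema.get("required"):
                errors.append("missing required property: %s" % key)
    # 3. Data constraints (rules)
    if expected == "number" and "minimum" in schema:
        if instance < schema["minimum"]:
            errors.append("value %s below minimum %s" % (instance, schema["minimum"]))
    return errors

# A hypothetical contract for a credit-check request
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "required": True},
        "age": {"type": "number", "required": True, "minimum": 18},
    },
}

print(validate({"name": "Alice", "age": 30}, schema))  # []
print(validate({"name": "Bob", "age": 15}, schema))    # minimum violation reported
```

The point is that a service contract expressed this way travels with the data format itself, which is exactly the rigour we found missing from plain JSON.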

I used to make a distinction between SOFEA and other similar approaches such as SOUI and TSA (Thin Server Architecture) based on this one aspect of rigorous contracts around data. I said at the time that better XML tooling would blunt JSON's edge in ease of use, but the opposite has happened. Better schema definition in JSON has instead blunted XML's edge in rigour. If JSON Schema becomes a reality, the distinction between SOFEA and its various cousins dissolves, and SOFEA will no longer be an XML-only architecture. All these architectures will be essentially the same.

Looking beyond SOFEA, I see JSON Schema as having very big implications for SOA itself. In an extreme scenario, the need for XML itself goes away! If we can define data rigorously and move it around in a structure that verifiably conforms to that definition, then our requirement is satisfied. XML may end up being seen as the EJB of data structures - clunky, unwieldy, intrusive, and ultimately replaced by a Spring-like lightweight rival that sacrifices nothing by way of rigour.

This is a development that definitely bears watching. There is a JSON Schema Google Group that is fairly active, and anyone with an interest in contributing should probably join this group.

Wednesday, November 12, 2008

Google Flu Trends

Now here's a development that's both heartening and disturbing.

Google Flu Trends is a new tool from the philanthropic foundation Google.org.

The idea is simple but revolutionary. Most statistics about epidemics are trailing indicators, i.e., they collect and organise data after events have happened. Google Flu Trends is about collecting and organising data as searches take place. The idea is that people will do Google searches on terms that affect them at the moment. So searches on "flu" will tend to rise when influenza is doing the rounds, "hayfever" searches will rise when hayfever season hits, and so on. By tracking where the searches are coming from, Google can provide a real-time (as opposed to a lagging) indicator of where official responses need to be targeted.

This is a heartening development because it promises a more rapid response to future pandemics like the Asian Bird Flu virus outbreak. Earlier warning and more precise pinpointing of affected areas can speed up intervention, save lives and reduce wasted resources.

This is also a profoundly disquieting development in spite of Google's reminders about its privacy policy. What is being used in Google Flu Trends is aggregate data, but it shows that detailed per-user data with location-specificity is available to Google and can conceivably be used for less philanthropic purposes as well.

Tuesday, October 14, 2008

Australia's Digital Education Revolution is Truly Revolutionary

It's been less than a year since I wrote an Open Letter to the Australian Prime Minister (Kevin Rudd, then newly-elected) suggesting that when he delivered on his election promise to equip every schoolchild with a computer, he should favour Linux and Open Source software.

Now I don't know whether he read that letter or not, but the New South Wales state government (not to be confused with the Australian federal government) may be independently doing just that, using federal government funding provided to the states under the banner of Rudd's digital education revolution.

From the news item, it looks like NSW students are about to be introduced to Edubuntu, a variant of the extremely friendly Ubuntu distribution that I use, specially designed for schoolchildren and packed with educational software. My son has been using an Edubuntu desktop for a year now, and has recently been introduced to the pleasurable power of the GIMP and XBMC as an aid to doing school projects.

Revolutionary indeed!

Thursday, October 02, 2008

How SOFEA fits into a SOA Ecosystem

How do user interface front-ends fit into a landscape that is increasingly service-oriented? A brief discussion yesterday with Peter Svensson sparked off a flurry of activity, and this diagram is the result. Thanks, Peter. Thanks also to Justin Meyer for his feedback and suggestions.

I won't waste words here, except to say that this diagram and its accompanying text should be sufficient to show how a SOFEA-based client application fits into a Service-Oriented ecosystem and works seamlessly with services (both SOAP and REST) and with processes (both the orchestrated and the choreographed variety).

[For those who are wondering what SOFEA is, read this gentle introduction. The original detailed paper is here and my original blog posting on it is here. There is also the original ServerSide article, but be warned that it points to an earlier version of the paper. Then there's the InfoQ article co-authored with Peter Svensson, which is not just about SOFEA but a family of similar architectures that rationalise the Presentation Tier. And finally, a Google Groups community that any interested party can join.]

Saturday, September 27, 2008

Google's Android - The Promise

So Google's Android mobile platform is finally here.

I'm not going to add to the cacophony of voices discussing the technology itself. There are far more competent people doing that already.

Stepping back from the technobabble for a moment, I have a sense of déjà vu. I feel this is a moment rather like the one back in 1981 when IBM launched the PC.

The personal computer era can be neatly classified into a BC-AD kind of history with the launch of the IBM PC corresponding to the birth of Christ. Before, it was all about proprietary hardware and proprietary software. After, it was all about open hardware and, uh, proprietary software.

Even with its partial openness, the IBM PC proved to be a neutron bomb that eliminated all its rivals in short order. Only Apple survived (although it suffered a near-death experience in the mid-nineties).

The echoes of the IBM PC's debut are still reverberating through the IT industry today, its megatonnage a direct result of its comparative openness. [Of course, the fact that the IBM PC was from IBM didn't hurt, but I wager it wouldn't have done anywhere near as well if it hadn't been for its openness. After all, the IBM PS/2 (which was meant to be the PC's successor) bombed badly because of its proprietary MCA (Micro Channel Architecture) bus, which lost out to the open ISA (Industry Standard Architecture) bus. The PC moved on, leaving IBM behind, proving that it was openness rather than pedigree that was responsible for its success.]

With Android, Google is out-IBMing IBM. It's an open platform all the way. Android itself is released under an Apache license, like its underlying Java Virtual Machine (more correctly, the Dalvik Virtual Machine based on Apache Harmony). The Linux kernel underlying both of them is covered by the GNU GPL. Unlike Apple, Google will not attempt to control the application ecosystem on top of the platform either. This is true openness.

Where will this lead us? Let me make a prediction. This obviously sounds audacious in late 2008, but I'm betting it will appear obvious in hindsight by 2010. I'm predicting Blackberry/iPhone capabilities in devices costing $10. That's right, ten dollars. The hardware's made in China, the platform's Open Source, the applications are downloadable free of charge. How much does a cheap plastic LCD wristwatch cost? That's how much a mobile device will cost within 2 years, thanks to openness.

Update 30/09/2008: The first comment on this entry was a very well-informed one warning of the very different economics and market constraints in the mobile market compared to the PC market. But you know what? These are mobile devices we're talking about, not just mobile phones. They're basically wireless-capable PCs in a smaller form factor. If the existing mobile players are exploiting their natural monopoly to essentially charge rents, I think they're about to have an unpleasant deflationary surprise when, for example, Google moves the action away from the telco-dominated mobile phone network onto the wireless Internet. Google has a habit of introducing game-changing features into their products. Think Suggest for Google Search, Street View for Google Maps (itself a game-changing product), and Google Documents, the online office suite. Google also has an interesting way of slipping products rather unobtrusively into the market with ultimately unpleasant consequences for their competitors.

How could Google fight the established players in the mobile phone market? Let's think. The bulk of mobile subscribers are in cities. The 80-20 rule means that investing in relatively few Internet Wireless Access Points in major cities will cover the bulk of subscribers, and the rest can be switched through the regular mobile network. ISPs, not telcos, could be the main carrier partners of the device manufacturers who go with Android. Google can use its huge additional advertising revenue to provide cross-subsidies and hide any differences in economics arising from having to use different networks. And the subsidy required will keep decreasing as WAP coverage increases. This is something that just occurred to me as a casual afterthought, so I'm sure the Googleheads have even better ideas.

The bottom line for the consumer is that massive reductions in cost-per-MIPS began to occur with PCs as soon as an open platform appeared. Can this happen again with mobile devices? I think it can.

But even if Google breaks the telco oligopoly and creates an Internet-based mobile device ecosystem, it still faces a formidable challenge because it can't stop other players from exploiting this more open network.

This time, Apple is a far stronger incumbent than it was in 1981. The challenger (Google) is just as new to this market segment as IBM was to PCs back then, and just as deep-pocketed and determined.

Will Android's openness be sufficient to overcome Apple's brand, technical sophistication, polish and level of entrenchment in the market?

That's not the only challenge before Android. There is an open competitor as well. Will LiMo queer the pitch for Android?

Well, I'm both pro-openness and pro-competition, so the more the merrier :-). And I'll save my money till the price of a mobile device drops to about $10. That's me not putting my money where my mouth is!

Go, Android!

Thursday, September 11, 2008

A Tool for Generating Data Flow Diagrams

Many months ago, I posted an entry called Context Diagrams as a Tool for SOA. Today I learnt that Douglas Barry has created a tool to help software designers visualise data flows. Just specify your inputs and outputs and "wire" them up, then the tool generates visual representations of the relationships. You can get what Barry calls "Business Process Diagrams", "Data Flow Diagrams for Services" and "Data Flow Diagrams". [I couldn't tell the difference between the plain DFDs and the DFDs for Services. They seemed to be identical in this version.]

Anyway, it's quite an interesting application, and I had lots of fun playing with it.

My suggestions to further develop the tool:

1. Allow flow chaining, i.e., allow the outputs of one step to become inputs to the next. [Or perhaps the tool is already doing this and I haven't understood the notation yet.]
2. Provide support for localised and progressive decomposition of modules, i.e., allow the designer to explode a node within its own context without having to go back to the top-level listing of all inputs and outputs, because beyond a point, this tends to become confusing. [Barry explains that this is already catered for, but perhaps my initial examples were too simple and didn't support a second level of decomposition that would have tested this feature.]
3. Refine the "DFDs for Services" view to be truly service-oriented, by allowing for a different paradigm of construction below level 1. Then the tool can help to implement the Viewpoint Flip that I've argued is the deciding characteristic of SOA. [Perhaps we need to support UML class or object diagrams once we explode Services into their components. I'll have to think about this some more.]
4. Aesthetics - create some conventions to lay out the diagram in a more predictable manner, so that it is aesthetically appealing and more intuitive at first glance. I had to drag the components and arrows around a fair bit to see the flows more clearly, once the diagram got beyond a certain size (i.e., more than 2 inputs and 2 outputs). (I know this is just quibbling, but the tool has a lot of potential, and some smarts around the generation of graphics would make it very powerful.)

It's a free site (for now, at least) and you're not asked to register with your personal details either, so I'd recommend that you give the tool a try. If you missed the link earlier, you can find it here.

Wednesday, September 03, 2008

Google Chrome - The Promise

I'm excited about Google Chrome, the newest browser on the block, even though I haven't used it yet.

How come? Well, I use Linux at home, and Google hasn't seen fit to release a Linux version of Chrome yet...

Still, I am excited.

But do we really need yet another browser? Well yes, I think Chrome is important for at least a couple of reasons.

1. It's high time Netscape's original vision of the browser as an application platform was realised. Our current generation of browsers is architecturally hobbled. Chrome implements the behind-the-scenes improvements required for a browser that has ambitions of being a true application platform. See this tech-friendly explanation of what these improvements are.

2. It's high time Microsoft's browser market share received another blow. For those who think Microsoft has begun to play nice in the web standards space, think again. They're up to their old dirty tricks once more. Anyone whose employer has deployed Sharepoint will know what I mean. There are heaps of Sharepoint features that don't work in Firefox. You're forced to use IE if you have Sharepoint. We learn once again that it's dangerous to give Microsoft control of both ends of the Web. Apache is slipping. And Firefox isn't enough. I hope Chrome provides enough buzz to draw users away from IE.

For all the coolness of Chrome, there are still important features missing. The obvious one for me is E4X support. How can an application platform not have native support for XML manipulation? We live in a SOA world, don't we? We have to deal with business logic in the form of services, right? So what's with the lack of XML support? I certainly hope this is a temporary issue, because I believe the application platform of the future must support SOFEA, and without XML support, it's just not going to cut it.

Of course, these are early days, so the glass is really half-full. Good on ya, Google!

Saturday, June 21, 2008

Apple Mac - A Great Way To Get To The Wrong Answer

Scott McNealy was wrong. It isn't Linux that's a great way to get to the wrong answer. It's the Mac. But try telling it to the crowds thronging the new Apple retail outlet on Sydney's George Street at its opening.

No one argues with the design excellence and "insanely great" user experience afforded by the Mac. But that only justifies the adjective "great". It's still the wrong answer.

Please explain? Gladly.

Can someone explain to me why we should abandon a closed operating system on open hardware (Wintel) to go to a closed operating system on closed hardware (the Mac)? I would think the right direction is towards an open operating system on open hardware (Lintel). [Aside: Of course, the hardware side of the Lintel platform is "open" thanks only to the presence of AMD. One shudders to think of an untrammeled Intel monopoly.]

Folks, here's the right way to get to the right answer - Ubuntu Linux. With just one important missing feature (slated to be remedied in the next version), it is poised to be the best desktop operating system, bar none.

Don't agree? Watch this space.

Saturday, June 07, 2008

The Best Windows Ever

This is a rather odd topic for me to be writing on. I rarely write anything about Windows or Microsoft unless it's to say something nasty :-). But all the recent talk about Windows 7 made me think back over my long career in IT (21 years now, I realise with unending surprise) to try and remember the "good" versions of Windows.

I can think of just three:

Windows NT 3.51
Windows 95
Windows XP

Why these three and not any other?

Windows NT 3.51: This was Microsoft's first real challenge to Unix, and it was a good one. For those whose first experience of "enterprise" Windows was NT 4.0, let me remind you that later isn't always better. Microsoft actually ruined Windows NT when it moved from version 3.51 to 4.0.

NT 3.51 was solid and stable, just like Unix, but with two terrific advantages that promised to win the enterprise market for Microsoft in a very short time. The first was the ability to run on commodity Intel hardware instead of the more expensive RISC chips favoured by the Unix vendors. The second was a graphical interface, admittedly modelled after Windows 3.1 (the first viable desktop version, with a somewhat crude interface) but miles ahead of the Motif and OpenLook interfaces that were the best Unix could muster.

If Microsoft had merely upgraded the UI in Windows NT 4.0 (to make it resemble Windows 95), they would have been onto a winner. But no, they got greedy and changed something fundamental in the Windows NT architecture. One of the things limiting NT 3.51's performance was the restriction that forced device drivers to run in non-privileged (user) mode. Microsoft changed that in NT 4.0 to let device drivers run in privileged mode. Bad idea. It certainly made NT faster, but also far less stable, because a badly-written device driver running in privileged mode could bring the whole system down, something that was just not possible in NT 3.51. This was a problem Microsoft couldn't simply fix with a patch, because device drivers are generally written by third parties (usually the makers of hardware), and their quality is always uneven.

I think Microsoft shot itself badly in the foot with NT 4.0, and much of its lingering reputation for instability in enterprise circles (the "Blue Screen of Death") is because of NT 4.0. People have completely forgotten how stable NT 3.51 was.

Windows 95: This was the first real competition to Apple in the user interface area, and Microsoft blew Apple away on the numbers (i.e., the number of units shipped). Same friendly interface, more mainstream hardware. Apple didn't recover for a decade. Microsoft really did its homework on this one. Windows 3.1 received its share of criticism for a poorly-designed UI (remember the Interface Hall of Shame? Pity they've removed the references to Windows 3.1 now, but they were pretty funny), and Windows 95 addressed all those criticisms comprehensively. I would say Apple's bragging rights on sensible user interface design ended in 1995 (and Ubuntu Linux made it a three-horse race in 2005).

Also, unseen by users but very visible to developers, Windows 95 unified the API between the 16-bit and 32-bit versions of the product. The API became 32-bit externally (the famous "Win32" API), even though Windows 95 was internally 16-bit (prompting the famous OS Beer joke about opening a 32 oz. can of beer and finding only 16 oz. inside). API unification was a big deal for Microsoft: it eliminated the needless porting effort when building software for the home and enterprise markets. So Windows 95 not only succeeded as a product in its own right, it laid the foundation for the success of many other products, including Microsoft Office.

Windows XP: I never thought I'd be calling XP one of the best Windows ever. I remember the reactions to XP when it came out: no substantial improvements over Windows 2000, much higher-end hardware requirements and, most seriously, privacy concerns (it took a snapshot of all software installed on your machine and sent it back to Microsoft as part of its installation). In fact, the latter concern caused some country's navy to decide not to upgrade (I can't find a reference to this now, although I clearly remember reading about it at the time).

But time changes perspective. Looking back at XP after Vista makes me realise that XP was a pretty good OS after all. What I remember about Windows 2000 is that its device driver support was spotty, so not every hardware device was supported. XP has been far better than Windows 2000 in that regard, which is why I don't have the latter in my list.

As a postscript, my vote for the worst Windows ever goes to Windows Me, followed closely by Windows 98. These were two absolutely unnecessary OS releases, and I believe Microsoft released them purely to raise revenue in the years following the phenomenally successful Windows 95. Windows 95 was so good that it comprehensively met home user needs for at least 6 years. There was no real demand from the market for an upgrade, but Microsoft needed the revenue. Hence Windows 98 and then Windows Me. (Boo, hiss!)

Thursday, May 22, 2008

Orwellian Truths of SOA - 2

OK, so this second post may not have the immediate shock value of a "War is Peace" kind of statement that I had in my first, but it still talks about things that people don't normally acknowledge.

Here's today's Orwellian quote:

"SOA subsumes BPM. SOA subsumes ECM."

First of all, why should this be a shocking statement? Because in many organisations, SOA, BPM and ECM are viewed as different things altogether. SOA is considered to be new-age stuff. It's supposed to be all about SOAP, Web Services, BPEL, ESB and other new-fangled technologies. BPM and ECM, on the other hand, having evolved from earlier workflow and document management products, are seen as more "legacy".

Is BPM (Business Process Management) part of SOA? Many people I talk to make a distinction between "workflow" (by which they mean business processes involving people) and process orchestration (by which they mean the automated invocation of computer-based services). The latter is typically implemented as BPEL-defined processes coordinating SOAP-based Web Services. And while BPM products have begun to support BPEL today, they're still used only for workflow tasks. Even more interestingly, large vendors who sell both BPM and process orchestration tools discourage customer organisations from using BPM tools for the latter task even if they're BPEL-capable.

I'd like to challenge this distinction. And I'm going to do this by going back to SOA's first principles, and not falling into the trap of thinking that SOA means Web Services technology.

The SOA approach requires us to analyse business processes as first-class entities. What is the business function that is being performed? The analyst breaks down the process into logical steps, perhaps rationalises it as well in a BPR kind of exercise, then defines the boundaries between process steps through contracts that define the information flow that occurs across those boundaries.

That's why SOA is often referred to as "Contract-Oriented Architecture".

Take the example of a loan approval process. One of the initial steps is a credit check. The process needs to call upon a "service" that performs this function. It needs to pass this service the necessary identifying data about the applicant so that the service can return a credit rating. At a later stage in the process, especially if the loan application is a borderline case, a loan officer may be required to assess the application and provide an approval/rejection decision.

From a SOA perspective, I would argue that these two examples are no different. In each case, the process involves a step where some data needs to be sent off to some specialised service provider in order to receive a specialised piece of information, which is then used in subsequent steps of the process. If we define the boundaries of the process steps in terms of contracts, there is nothing whatever at the logical level to distinguish one from the other on the basis that the first is automated and the second is manual. They're both identical from an interaction perspective. The first may take mere seconds while the second may take minutes, hours or days, but logically, the interaction model is the same.

This ties back to the earlier point about synchronous being asynchronous and real-time being batch. There is a standard interaction model within SOA. The process interacts with an automated credit check service asynchronously by placing applicant data (a batch of one) in a queue, and picks up credit rating data from another queue when triggered by the arrival of that data. In exactly analogous fashion, the process places application data in a queue (the loan officer's inbox) and picks up the approval/rejection decision from another queue (the loan officer's outbox).
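That symmetry can be sketched in a few lines of Python, with in-memory queues standing in for real message queues and two interchangeable workers behind the same two-queue contract. The names, data and "rating" below are all made up; in a real system the loan officer's queues would be an inbox and an outbox:

```python
# Two implementations of the same contract: data in on one queue,
# a decision out on another. The invoking process cannot tell them apart.
import queue
import threading

def credit_check_service(inbox, outbox):
    # Automated step: applicant data in, rating out, within seconds.
    applicant = inbox.get()
    outbox.put({"applicant": applicant["name"], "result": "AA"})

def loan_officer(inbox, outbox):
    # "Manual" step: same contract, just a (much) slower turnaround in reality.
    application = inbox.get()
    outbox.put({"applicant": application["name"], "result": "approved"})

def invoke(step):
    # The process only ever sees two queues -- the contract boundary.
    inbox, outbox = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=step, args=(inbox, outbox))
    worker.start()
    inbox.put({"name": "Alice", "income": 80000})  # place a "batch of one"
    reply = outbox.get()                           # triggered by arrival of data
    worker.join()
    return reply

print(invoke(credit_check_service))
print(invoke(loan_officer))
```

The `invoke` harness is identical for both workers, which is precisely the point: nothing in the interaction model betrays whether the step is automated or human.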

It's a sort of Turing test within SOA. How can a process tell that a particular service step is being performed by a human and not an automated system? The answer is that it can't. In fact, it shouldn't, or else it's no longer SOA, because we violate the contract if we use implied knowledge! In that sense, it's an unfair Turing test, because we're not allowed to use any information outside of the contract to guess at the implementation. Hiding implementation behind contracts is the beauty of SOA, because it allows replacement of manual systems by automated ones, or the outsourcing of certain process steps, without breaking the process itself and impacting business continuity.

So that proves my point. SOA subsumes BPM.

ECM (Enterprise Content Management) seems to be another beast altogether. We're not talking services here. We're talking data. Surely that doesn't have much to do with SOA unless services are built on top of it?

Surprise. If we adopt the REST approach to SOA, we find that what we're doing is perfunctorily dismissing the "service" or "verb" side of the design by employing a uniform, polymorphic CRUDish interface and concentrating mainly on the modelling of the "resources" involved. The distinctive "services" that make up the application from a service consumer's point of view automatically follow from the type of resource and the polymorphic nature of the RESTian verbs. REST is effectively based on a very audacious premise that no matter what your application, you can always model it as a set of resources in such a way that the standard four RESTian verbs will let you perform all the kinds of manipulation that you could possibly hope to perform on them.
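That premise can be illustrated with a sketch in Python: one polymorphic, CRUD-shaped interface over an arbitrary store of resources. The resource paths and fields below are hypothetical; the point is that the same four verbs serve a "service-ish" resource (a loan) and a "content-ish" resource (a document) without any per-type interface:

```python
# A uniform interface: four verbs, any resource type.
class ResourceStore:
    def __init__(self):
        self._data = {}      # path -> current representation
        self._next_id = 1

    def post(self, collection, representation):
        # Create: the store assigns the new resource's path.
        path = "%s/%d" % (collection, self._next_id)
        self._next_id += 1
        self._data[path] = representation
        return path

    def get(self, path):
        # Read: return the current representation (a copy travels in REST).
        return self._data.get(path)

    def put(self, path, representation):
        # Update: replace the representation wholesale.
        self._data[path] = representation

    def delete(self, path):
        # Delete: remove the resource.
        self._data.pop(path, None)

store = ResourceStore()
loan = store.post("/loans", {"applicant": "Alice", "amount": 5000})
doc = store.post("/documents", {"title": "Policy", "body": "..."})

# The same four verbs work whether the resource is a loan or a content item:
print(store.get(loan))
store.put(doc, {"title": "Policy v2", "body": "..."})
store.delete(loan)
```

All the application-specific "services" (approve a loan, publish a document) fall out of *which resources exist* and what their representations mean, not out of bespoke verbs.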

I haven't yet seen a convincing refutation of that approach. The REST approach seems to scale to arbitrary levels of application complexity. It's almost as if service-orientation appears as a byproduct of the organisation of the application domain as a structured collection of "content". An approach to Content Management is therefore a valid approach to SOA! And that proves my second point. SOA subsumes ECM.

I'm sure these views will gradually gain currency, and that bolder organisations will start to bring all of these functions under a common umbrella. But for the near future, I'm afraid we will continue to see a rather fragmented approach to SOA, BPM and ECM.

Tuesday, May 20, 2008

Orwellian Truths of SOA - 1

"War is Peace. Freedom is Slavery. Ignorance is Strength."

George Orwell made those words famous in his novel Nineteen Eighty-Four.

In a discussion with a colleague today, I realised that to attain the Zen-like wisdom of SOA, we need to accept similarly ridiculous-sounding Orwellian propositions. By Orwellian, I don't imply something sinister or Big Brother-like. I'm just talking about opposites being considered the same.

Try my version:

"Local is Remote. Synchronous is Asynchronous. Real-time is Batch."

Sounds ridiculous, right? But it's all true, believe me.

Take "Local is Remote". I've already blogged here and here about why the older concept of "distributed objects" and remote procedure calls (RPC) was wrong-headed. In a nutshell, it's because copies of objects can be passed either locally or across a network, but memory references can only be passed around locally. Working with copies is therefore the more universally applicable paradigm: treating everything as a remote object and passing around copies always works (even in the local case), whereas treating everything as a local object and passing around references to its memory location breaks down in the remote case. The SOA approach, whether using the SOAP-based messaging model or the REST-based representation-transfer model, passes copies of objects around rather than passing (or pretending to pass!) memory references. In other words, SOA treats local objects as if they were remote. Local is Remote.
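Here's a minimal Python sketch of the distinction, using a made-up order object. Pass-by-copy works in both the local and the remote case; pass-by-reference only ever works in-process:

```python
import copy

def update_by_reference(order):
    # Mutates the caller's object via a memory reference.
    # This can only ever work when caller and callee share an address space.
    order["status"] = "APPROVED"

def update_by_copy(order):
    # Works on a copy and hands back a new representation -- exactly what
    # a remote service must do, and it works just as well locally.
    order = copy.deepcopy(order)
    order["status"] = "APPROVED"
    return order

local = {"id": 1, "status": "PENDING"}
update_by_reference(local)              # in-process only

remote = {"id": 2, "status": "PENDING"}
result = update_by_copy(remote)         # the caller's object is untouched
print(remote["status"], result["status"])  # PENDING APPROVED
```

The copy-based function is the one whose calling convention survives being moved across a network boundary unchanged, which is why "Local is Remote" is the safer default.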

What about "Synchronous is Asynchronous"? When we delve into the specifics of inter-system communication, we realise that all systems fundamentally communicate through messages, and messages are asynchronous. Then what causes "synchronous" behaviour? Simply put, synchronous behaviour arises when two systems agree on certain conventions when they exchange messages. Conceptually, we can layer the capability to correlate messages on top of a pure messaging system (give every message a unique ID that travels with it; then we can say that this message is in response to that one).

Once we have the ability to correlate messages, we have a foundation on which we can then layer the capability to coordinate actions ("I'll do an action when you send me a response to my message, and I'll then respond to your message telling you I've done it, based on which you can perform some action"). Synchronous behaviour is a kind of coordination ("I won't do anything else until you respond to my message"). Another kind of coordination is the notion of transactions, which can either be of the classic all-or-nothing variety or a looser style based on compensating actions. Both are types of coordination between systems.

So, peeling back the layers, synchronous interactions (even highly transactional ones) arise from nothing more than a layered set of capabilities on top of a basic messaging paradigm. This is a universal architecture regardless of the technology chosen. It is as true of TCP/IP as it is of SOAP/WS-*. Synchronous is Asynchronous.
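The correlation-ID layering can be sketched in a few lines of Python, with in-memory queues standing in for the messaging substrate. The function names are my own, and the `service()` call is inlined where in reality it would be running on another system entirely:

```python
# Synchronous behaviour built on pure asynchronous messaging:
# layer 1 is messages, layer 2 is correlation IDs, layer 3 is
# "block until the correlated reply arrives".
import itertools
import queue

_ids = itertools.count(1)
requests, responses = queue.Queue(), queue.Queue()

def send_request(payload):
    # Layer 2: every message carries a unique ID.
    msg_id = next(_ids)
    requests.put({"id": msg_id, "body": payload})
    return msg_id

def service():
    # The other party: consumes a message, emits a correlated reply.
    msg = requests.get()
    responses.put({"correlation_id": msg["id"], "body": msg["body"].upper()})

def call_synchronously(payload):
    # Layer 3: a "synchronous call" is just send-then-wait-for-correlation.
    msg_id = send_request(payload)
    service()  # inlined for the sketch; normally runs elsewhere
    while True:
        reply = responses.get()
        if reply["correlation_id"] == msg_id:
            return reply["body"]

print(call_synchronously("hello"))  # HELLO
```

Nothing here is synchronous at the bottom: the caller merely refuses to do anything else until the reply bearing its correlation ID turns up.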

But surely "Real-time is Batch" is a bit of a stretch?

I remember learning about the phenomenon of Diffraction in Optics many years ago. If you close a door or window so that just a tiny crack is visible, you may find that the light from the other side forms a regular, fan-shaped pattern on the floor or windowsill, with alternate dark and bright areas. My Physics textbook said this happens when the crack is "small", then asked the thought-provoking question, "Small in relation to what?"

The answer, of course, is the wavelength of light. If the width of the crack in the door or window is of a similar order of magnitude to the wavelength of light, you see a nice diffraction pattern.

Now what is "real-time"? Think of the mainframe-based batch payroll process in an organisation that must complete by 3 a.m. on Monday. How is this any less time-critical than the ABS (Anti-lock Braking System) on a car that must kick in within milliseconds whenever the wheels start to skid? Like with the diffraction example, we have to ask, "Real-time compared to what?" Again, the answer is the degree of precision with which we measure the deadline in question.

Strictly speaking, "real-time" is a requirement or a constraint, while "batch" is a processing model. But there's nothing to prevent us from using a batch processing model as long as it meets the required "real-time" deadline within the degree of precision with which that deadline is measured. Developers who work with message queues will know what I'm talking about. In large organisations, interfaces to many systems are through message queues: place a request message on one queue and pick up the response message from another. And these queues are fast, fast enough to handle critical, "real-time" applications like Internet Banking. The customer interacting with their bank through a browser and obtaining satisfactory response times is unaware that their messages are being put on queues and retrieved from queues, effectively in batches of one. In many implementations, therefore, Real-time is Batch.
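As a hedged illustration (all names invented for this sketch), the same batch-draining loop can serve both worlds; set the batch size to one and the deadline to something interactive, and "batch" becomes "real-time":

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: one batch-processing loop for both the overnight
// payroll run and the "real-time" case; only the batch size and the
// deadline change.
public class BatchProcessor {
    // Drain up to batchSize messages from the queue and process them together.
    static List<String> processBatch(BlockingQueue<String> in, int batchSize)
            throws InterruptedException {
        List<String> batch = new ArrayList<>();
        String first = in.poll(100, TimeUnit.MILLISECONDS);
        if (first != null) {
            batch.add(first);
            in.drainTo(batch, batchSize - 1);
        }
        List<String> results = new ArrayList<>();
        for (String msg : batch) {
            results.add(msg.toUpperCase());  // stand-in for the real work
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("pay alice");
        // Batch size 1: each request is a batch of one, processed well within
        // an interactive deadline. "Real-time" is just a constraint that this
        // batch model happens to meet.
        System.out.println(processBatch(queue, 1));
    }
}
```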

What has all this to do with SOA?

Well, the SOA mindset is based upon such Inversion of Conventional Wisdom. Indeed, that's what makes SOA unique and that's why it works.

In a later entry, I'll expand on these concepts and present some more Orwellian truths.

Wednesday, April 30, 2008

SpringSource App Server Released

Ben Alex of SpringSource announced this last night at the Sydney Java Users Group meeting.

As big as the Spring framework has been, I think this product is going to have a pretty profound impact on the Java world. I have always wondered why we couldn't use OSGi for server-side deployment when it seemed such a natural model for dealing with dynamic modules. Apparently, while the model has been working for client-side apps for years, there have been issues with connections and threads on the server side, and these issues have only just been solved by the SpringSource "platform", which enhances the basic OSGi container.

OSGi bundles have long been a good way to build applications with flexibly-definable dependencies. Multiple versions of the same library can be deployed into a container without conflicts, allowing other modules that depend on them to run without problems. One can also deploy urgent patches to production servers without having to bring them down. All this is huge, because it allows application modules to be developed in a loosely-coupled manner, by specifying dependency ranges and following standards for version numbering. This is a Service-Oriented approach to deployment.
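For illustration only (the bundle and package names here are made up), an OSGi bundle manifest expresses such a dependency range declaratively:

```text
Bundle-SymbolicName: com.example.billing
Bundle-Version: 2.5.0
Import-Package: com.example.tax;version="[1.2,2.0)"
```

The range [1.2,2.0) accepts any release of the tax package from 1.2 up to, but excluding, 2.0, so a patched 1.3 can be deployed into the container without touching this bundle.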

Going one step further is the ability to build the App Server itself in the form of bundles rather than as a monolithic executable. (JBoss does this too, but not with OSGi). The SpringSource App Server (S2AP) is the first of this new breed of beast. By way of analogy, S2AP is as different from earlier generations of app servers as the T-1000 of Terminator 2 was different from the earlier T-800 (think liquid metal). It means app servers need be no larger than necessary for the task at hand.

Well, the S2AP product is still in beta, so a few more months are going to pass while the industry kicks its tyres. But ultimately, I think there will be a shift to this more modular, version-friendly style of application deployment.

Monday, April 07, 2008

Is the Domain an Aspect?

We know about aspects - those cross-cutting concerns that we can layer on top of an application, the functions like logging that need to be done in a number of different places.

A curious thought struck me for the first time since I was introduced to Grails. Here's a complete framework for any generic application. It has persistence, a web interface, a testing framework, etc., etc. All you need to do to build your own custom application is to add your domain classes and a set of constraints governing them, and the system essentially generates whatever is required for the application to run.

It's almost as if the developer applied a "domain aspect" to a completely generic application and turned it into a bespoke one.

It's a fascinating thought, and I need to explore this idea some more. By Jove, this could be as profound an idea as Inversion of Control! Before Martin Fowler can jump in to do his mischief, let me lay claim to a number of possible pattern names:

1. Inversion of Aspect
2. Domain as Aspect
3. Domain Injection

There, that should do it until I gain more insights on this.

Wednesday, March 19, 2008

New Home for SOFEA, Thin Server Architecture

Peter Svensson has set up a website where like-minded people can discuss the brave new world of applications whose common characteristic is that no aspect of presentation logic resides on the server side. I admit that's an overly broad-brush generalisation, and it will be necessary to read what the various authors of this camp have to say.

In addition to the SOFEA paper that Rajat Taneja, Vikrant Todankar and I co-wrote last year, the site has Peter's articles on The End of Web Frameworks I and II and Mario Valente's four articles 1, 2, 3 and 4.

The site is admittedly a bit rough round the edges at the moment, but we will be polishing it as the days go by, and we would welcome community feedback. For the moment, since the site itself has no comment facility, you may comment here, or send me a mail (g dot c dot prasad at gmail dot com).

Saturday, March 15, 2008

Context Diagrams as a Tool for SOA

Pursuing the theme of reverting to a form of procedural thinking to implement SOA, I remember how we used to draw Context Diagrams and Data Flow Diagrams (DFDs) as part of system design in the eighties. Context Diagrams were also called DFD Level 0. We used to draw the system being designed as a single bubble, and show all interactions (dataflows) with external parties (drawn as rectangles) as a set of arrows going back and forth between these external parties and the system.

To draw the Level 1 and Level 2 DFDs, we used to "explode" the single bubble into "modules". The dataflows shown in the Context Diagram would still need to be shown, but this time, (1) the arrows would terminate at one of the smaller bubbles representing a particular module of the system, (2) the external parties themselves would no longer be shown and (3) there would be new arrows showing dataflows between modules of the system and not involving external parties.

I now think the Context Diagram is the modelling tool for SOA. The dataflows going back and forth between the system and external parties that deal with it form the "service contract". We can express this in SOAP or REST terms. That really doesn't matter.

When we begin to "explode" the single bubble representing the system as a whole into smaller "modules", we are entering the world of domain design. This is no longer SOA. In the old days, the inter-module dataflows were also procedural. Today, they're method calls on objects.

I wrote earlier that when a student embarks on the study of Zen, the mountains are nothing more than mountains. Partway through the training, the mountains are no longer mountains. But finally, when the student has mastered Zen, the mountains are once again mountains, but they're somehow not the same as what they were.

Here's the SOA analogy:

Back in the eighties and early nineties, when systems and modules were purely procedural, we used to draw Context Diagrams and consider them to be Level 0 Data Flow Diagrams. Then we used to use the same paradigm to further explode the Level 0 DFD into Level 1 and Level 2 DFDs.

In the mid-to-late nineties, when OO thinking predominated, we stopped using Context Diagrams and DFDs altogether. We started using UML's Class Diagrams and Sequence/Interaction diagrams and began to look down upon procedural thinking.

In the new millennium, with SOA thinking in vogue, we need to start using Context Diagrams once again to define our service contracts, but we must no longer use the same paradigm to "explode" the single system bubble into modules. Leaving the Context Diagram in place, we must dive below that level and independently model the underlying domain using OO thinking. Alternatively, we build the domain model first, and then independently draw a Context Diagram around it, with a single bubble completely encompassing the domain model and terminating dataflows to and from external parties.

Either way, we must then reconcile the two layers by creating a layer of "services" that sit within the Context Diagram's bubble and outside the domain model, and translate between the external-facing procedural view and the internal-facing OO view. This service layer would be responsible for the specific transformation that I call The Viewpoint Flip.

A diagram would illustrate this better.

So you see, grasshopper, the mountains are once again mountains, are they not? But they're not the same as when you started on this journey. The Buddha-ness pervades all things, but the SOA-ness and Object-ness live in different worlds.

Thursday, March 06, 2008

Sun Tech Days Sydney - Day 3 (Community Day, 6 March 2008)

(I've already blogged about the first and second days.) There were 5 tracks on this (third) day, called "Community Day": GlassFish, NetBeans, Mobility, Solaris and Sun University. I attended 3 sessions each from the GlassFish and Mobility tracks.

Alexis Moussine-Pouchkine provided an introduction to the GlassFish project. GlassFish is almost exactly the same product as the Sun Java Application Server. The latter has a couple of extra proprietary bits added, and also provides more frequent updates.

A few tidbits that I noted:

The admin console looks pretty professional, unlike Tomcat's minimal one, and unlike the ugly raw JMX that I've seen in the past with JBoss (I haven't used JBoss in a while, thanks to Spring and Tomcat). GlassFish's admin console is even better than Geronimo's, which I thought was pretty good at the time. It's task-oriented, so users will find it easier to find what they need.

GlassFish logs to a bundled JavaDB database (Cloudscape/Derby).

Apparently, GlassFish has a third container apart from the web and EJB containers. It's called the ACC (Application Client Container). Developers can build a Swing client and bundle it with the ear file. Java Web Start is used to deliver it. Dependency Injection can be used even with the client.

In cluster mode, GlassFish keeps Applications, Configurations and Resources consistent across the cluster. GlassFish uses a dynamic clustering feature called Shoal that is built on top of JXTA.

There's a project called SailFin, which is meant to bridge HTTP and SIP. This is useful not only for the telecom industry but also more generally, to provide rich media capabilities to enterprise apps.

I haven't explored Sun's Portal offering before (OpenPortal). That's another thing on my growing to-do list.

GlassFish has a number of components:

The GlassFish App Server itself
OpenPortal (portal server)
OpenESB (Enterprise Service Bus, of which more later)
OpenSSO (single sign-on solution)
OpenDS (directory server)
Jersey (for RESTful services)
Hudson (continuous build)
jMaki (aggregates Ajax widgets and provides a common API wrapper)
OpenMQ (message queue)
WoodStock (JavaServer Faces components)
JavaDB (Apache Derby pure Java database)
Apache Roller (blog software, one of the "Enterprise 2.0" offerings)
Slynkr (social tagging software, another of the "Enterprise 2.0" offerings)
SailFin (SIP-HTTP bridging software)

Here's the GlassFish project wiki that provides ongoing detail about the project.

David Pulkrabek from Sun's labs in Prague spoke about the future of JavaME development tools, and his demos were very impressive.

He showed a demo of how to record and replay a set of actions that could be used in automated testing. Then he showed a demo of how a Sun SPOT device could be used to capture sensory inputs like orientation and 3-dimensional acceleration to control a game on a mobile device. Could this power the next Wii?

Next he showed how real devices and emulators are treated the same by toolkits. A mobile phone plugged into the laptop with a USB cable is detected by the tools, and the debugger can execute applications on the mobile phone, with breakpoints and variable editing, just as in a local emulator. The output of the mobile phone can be redirected back into the toolkit's console.

Very impressive. I think JavaME developers have a pretty good set of tools at their disposal, thanks to the work done at Sun's Prague labs.

But even JavaME pales into insignificance when compared to JavaFX Mobile. This session was from Noel Poore. JavaFX Mobile is a rather unfortunately named piece of technology, and will continue to cause Sun grief because of the needless confusion with JavaFX Script. Let me clarify. JavaFX Mobile is the name of a mobile device platform that Sun acquired when it took over SavaJe in May 2007. JavaFX Script is a new (as yet unreleased) scripting language for the UI, which essentially reduces Swing programming to a declarative style. Yes, JavaFX Mobile supports JavaFX Script, but that's the only (tenuous) connection. Sun should stop coining these silly and confusing names (The Java Desktop System was another one: that was just a Linux (or Solaris) desktop that could run Java).

JavaFX Mobile is based on the Linux kernel and Java SE. Note that it's Java Standard Edition, not JavaME, the Mobile Edition that the previous session dealt with. This has two major implications. One is that the full range of Java features is now available to mobile devices, not just the subset that JavaME used to provide. Second, from a market viewpoint, it removes the fragmentation that the various JavaME implementations necessarily suffered on account of their having to be tailored to devices. JavaFX Mobile is now a full-fledged platform that can take on the iPhone or Google's Android. It can still run MIDP Midlets, but Sun would encourage developers to build full-featured JavaFX Script-based UIs and applets rather than the comparatively anaemic Midlets, now that the full capabilities of a JavaSE platform are available.

The platform has three components: Core, Frameworks and User Experience (Applications).

Core: Kernel abstraction, Linux kernel 2.6, Core graphics, Core networking, Media engine, Device drivers, Filesystems and a C library (not glibc)

Frameworks: Content management, Java, Web content, Provisioning, Graphics, Connectivity, Multimedia, Networking, Messaging, Telephony

User Experience (Applications): Calendar, Browser, Phone, Camera, App Selector, Personal Utilities, Media Player, Content Viewer, System Utilities, Contacts, Messaging Apps, Home Screen, Themes and third party apps

Messaging includes IM and Presence (OMA IMPS and Jabber).

Jim Weaver (who has written many books on Java, including J2EE 1.4 and JEE 5) spoke next about JavaFX Script and showed a few demos. I'll just sum up my impressions on JavaFX Script.

First, JavaFX Script is to Swing as Groovy is to Java. It's a lightweight scripting approach to building GUIs, and it uses a declarative style.

Second, although JavaFX script is far less verbose than the equivalent Swing code, it still unfortunately looks like the dog's breakfast (compared to Flex, for example). I think it will only become popular when a graphical editor is layered on top of it.

I always like systems that are based on textual files that can be edited by hand if required, compared to systems that store data in binary format. But that doesn't mean I want to use the text editing approach exclusively. "Build graphically and tweak textually" is my motto for GUI development.

Michael Czapski from Sun's Sydney office spoke about JBI (Java Business Integration) and OpenESB.

Historically, approaches to integration have ranged from file transfer through shared databases and EAI to messaging. Messaging is the model currently in vogue. Models have progressed from point-to-point, through hub-and-spokes (brokered) to the "bus", which is a distributed, logically decoupled, anywhere-to-anywhere model.

ESBs facilitate a messaging-based bus model of integration.

JSR 208 deals with Java Business Integration, another unfortunately-named standard that doesn't immediately explain what it does.

This is a standard that governs how ESB components plug into the bus, allowing integrators to mix and match components from different sources to build up a "bus". I think it's more relevant to those who write components like adapters, BPEL engines, XSL transformation engines and the like rather than end-user organisations (like mine) that want to use a bus to integrate their applications. The latter don't care how the "bus" was put together, whether it is a monolithic piece of technology or a pluggable architecture that has components from different sources. [Well, we should care from a vendor lock-in perspective, but functionally, it makes no difference to us.]

JBI standardises component interfaces, component lifecycle (start, stop, deploy, undeploy, etc.), deployment model, monitoring and management, among other things.

It fosters loose coupling through WSDL interfaces, and implements WSDL 2.0 MEPs (Message Exchange Patterns).

The components that plug into a bus are either Service Engines or Binding Components. Examples of Service Engines are Aspect engines, BPEL engines, mashup processors, encoding, ETL, scripting, XSL transformation engines, etc. Examples of Binding Components are BCs for SAP, Siebel, CICS, CORBA, FTP, HTTP, Queues, etc.

OpenESB is Sun's JBI implementation. It uses NetBeans 6 for development and deploys naturally to GlassFish.

Michael demonstrated a BPEL process that used two Binding Components (an input queue and an output file), with a Service Engine written on the spot using an EJB with Web Service annotations. The process read in a string from a queue (which was manually input), called the web service to turn it to uppercase, then wrote it into the file. Sure enough, when the file was opened, the string was there, in uppercase. Though it was a simple enough demo, it was fairly impressive.

NetBeans 6.0 is shaping up to be a good development platform for SOA. It has editors for BPEL and XSLT, among other things.

The last session I attended was by Alexis Moussine-Pouchkine again. This was on "Practical GlassFish". He talked about how GlassFish is built, the release cycles, etc.

There's reportedly a graphical installer from a third party (Izpack) for those who don't want to wrestle with the command line.

Alexis also demonstrated the Update Center, which works like Eclipse's.

He showed us some of the asadmin commands for command-line administration. These can obviously be used in scripts as well, which administrators would like.

From a competitive perspective, Sun has a migration tool that helps users automatically (to a large extent) migrate their current deployments on Tomcat, JBoss, WebLogic, WebSphere and older versions of Sun App Server onto GlassFish/Sun App Server. It's reportedly not foolproof, but it could help.

Documentation can be obtained by typing 'asadmin --help' on the command line or online.

That was the end of the three-day Sun Tech Days event. Very useful and informative.

My main takeaway, ironically, was nothing from Sun. It was Grails. I've started doing the InfoQ tutorial, and I'm mightily impressed. I'll blog more about Grails as I learn more.

Wednesday, March 05, 2008

Sun Tech Days Sydney - Day 2 (5 March 2008)

I blogged about the first day here. A lot more happened today, or so it appeared to me. I saw the most jaw-dropping piece of technology demonstrated today, but you'll have to wait a bit to hear more about it.

Sang Shin spoke about AJAX and Web 2.0 technologies. Sang has a wonderful blog at http://www.javapassion.com. This site is chock full of detailed courses on major Java technologies, all generously made available free of charge. His presentation today was based on some of those courses. It was hard for him to do justice to the depth of the topic in just one hour, but I'm sure his blog and the courses there will provide hours of useful instruction to anyone interested in exploring them in detail.

One of the useful tips I picked up while watching his demo was how to use the Firefox extension Firebug. This lets you monitor AJAX traffic, among other things.

The next session I attended was by a person from Atlassian. This session was on Grails. Now, I'd heard of Grails, but didn't quite know what it was all about. I assumed it was a very different technology from Java, something like Ruby on Rails. Only half right, as it turned out. Yes, the technology was inspired by Ruby on Rails, but it's based on Java technology. To be precise, it's a software stack consisting of Java, Groovy, Spring, Hibernate and HSQLDB.

It's a little hard to describe what Grails is. Those who attended Ben Alex's talk early last year on the coming Spring project tentatively called ROO (Real Object Oriented) will have a fair idea of what I'm talking about. Grails seems to have everything that ROO promised (but hasn't yet delivered on). It's a toolset that lets designers build applications based on the principles of Eric Evans's Domain-Driven Design, and it works very hard behind the scenes to build all the infrastructure required to make the domain objects just work. Persistence, a RESTful web interface, test cases: all get automatically generated. An increasingly common theme pervades Grails: "Convention over configuration." In other words, if you stick to standard naming conventions for the things you create, the system will make it very easy for you to build a working application with a minimum of coding.

That's the jaw-dropping technology I talked about at the beginning of this post. Grails is a developer's dream. One can knock together working code in minutes. InfoQ has a Grails guide and example code which I intend to work through as soon as I possibly can. I would recommend Grails to all Java developers looking for the next burst of productivity. Best of all, Grails applications can be exported into standard war files and run on standard app servers, so there is no need for any new run-time technology.

Funny how one can be searching for months for a productive development environment that lets you build applications rapidly yet correctly, and then one fine day, it comes up and hits you between the eyes. Praise the Lord! I intend to use Grails to illustrate concepts with working code from now on. I'll blog about my experiences with Grails as I get some experience with it.

(I also liked Atlassian's byline: "Where VB is only in the fridge.")

Lee Chuk-Munn presented next on JavaDB. This is, of course, the much-renamed Cloudscape/Derby pure-Java database that takes up all of 2 MB of space and supports database sizes of up to 700 GB. It is thread-safe and therefore OK for multi-user use, but is positioned as best for use as an embedded database. Since new owner Sun has to juggle other databases like MySQL and PostgreSQL as well, I suspect that the recommendation to use JavaDB for single-user or embedded use has more to do with positioning than with the actual limitations of JavaDB. Too many capable technologies (Linux, MySQL and Tomcat, anyone?) have been unfairly treated as toys in the past for me to swallow such a recommendation.

JavaDB has stored procedures, triggers, functions, built-in XA (distributed transactions), encrypted databases, crash recovery, etc. It also supports JDBC 4.0, with autoloaded drivers, scrollable and updatable ResultSets, better support for CLOB, BLOB, ARRAY and STRUCT, chained exception handling and XML datatype support. The only enterprisey thing JavaDB lacks at the moment is replication, but a basic version is said to be coming in version 10.4.

A very neat feature of JavaDB is the ability to place an entire database inside a jar file on a USB stick for use as a read-only database.

Another neat feature is the ability to define stored procedures in Java rather than in a dialect of SQL. The actual command to create the database procedure merely references the Java static method.
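Strictly, Derby distinguishes procedures from functions (the latter return a value), but both merely reference a Java static method in their DDL. Here is a sketch, with the method and DDL names invented for illustration; only the plain Java method is exercised below, since no database is needed to show the idea:

```java
// Sketch of JavaDB/Derby's Java-based stored routines. The SQL shown in
// the comment is the Derby DDL form; the method itself is ordinary Java.
public class PayrollProcs {
    // An ordinary Java static method...
    public static String normaliseName(String name) {
        return name.trim().toUpperCase();
    }

    // ...becomes a database routine via DDL that merely references it:
    //
    //   CREATE FUNCTION NORMALISE_NAME(NAME VARCHAR(100))
    //   RETURNS VARCHAR(100)
    //   PARAMETER STYLE JAVA NO SQL LANGUAGE JAVA
    //   EXTERNAL NAME 'PayrollProcs.normaliseName'

    public static void main(String[] args) {
        System.out.println(normaliseName("  ada lovelace "));
    }
}
```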

Lee showed us examples of how JavaScript and Java may enjoy bidirectional communication using the long-presumed-dead applet technology. He also showed us Java/JavaScript code to demonstrate client-side persistence using embedded JavaDB.

JavaDB is pretty cool, when you stop to think about it. Relational database technology has been a big yawn for a few years now, but things like this make me sit up and take notice.

Carol McDonald spoke about JSF, AJAX, Woodstock, etc. I'm not terribly interested in JSF and would much rather it simply died. I'm not a big fan of web frameworks because I think all thin-client architectures follow a common design anti-pattern. My money is on SOFEA. Still, JSF is something that refuses to go away. Among the goals for the next version of JSF (2.0) is the now-common refrain "Convention over configuration".

After that were two sessions on security. They covered the usual ground, with nothing very spectacular that caught my attention.

A few tidbits from Raghavan "Rags" Srinivasan's session:
1. C14N stands for "canonicalization", the process of rendering XML documents into a common format (i.e., especially differing use of whitespace, which could otherwise play havoc with cryptographic techniques to detect tampering).
2. SAML can be used to make 3 kinds of assertions:
a) Authentication assertions ("This person (or program) is so-and-so.")
b) Attribute assertions ("This entity has the following properties.")
c) Authorisation assertion ("This principal may read data but not update it.")
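On the C14N point, here is a tiny sketch (my own example, not from the talk) of why canonicalisation must precede signing: two XML fragments that differ only in whitespace produce different digests, so a signature computed over the raw bytes would spuriously break.

```java
import java.security.MessageDigest;

// Illustrative sketch: logically equivalent XML, different raw bytes,
// hence different digests. Canonicalisation fixes the bytes into one
// agreed form before any cryptographic operation.
public class WhyC14N {
    static String sha256(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(s.getBytes());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String a = "<msg attr=\"1\"><body/></msg>";
        String b = "<msg  attr=\"1\" ><body /></msg>";  // same content, extra spaces
        System.out.println(sha256(a).equals(sha256(b)));
    }
}
```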

He also mentioned Sun's identity management solution, which you may download and try.

There was another talk on SOA security from a person from a company called Layer7. This was more high-level with no code.

The contents of all lectures will be put up on the Sun Java Tech Days website any day now.

There was an OpenSolaris Installfest going on in the registration/catering hall, and they were giving out CDs and guidebooks. I'm definitely going to try and install OpenSolaris and report on the experience.

Today marks the end of the official Sun Tech Days conference, but tomorrow is a set of community events, which also I plan to attend.

I've been invited to drop by the GlassFish booth tomorrow to discuss my difficulties with the server.

Tuesday, March 04, 2008

Sun Tech Days Sydney - Day 1 (4 March 2008)

Today I attended the first day of Sun Tech Days in Sydney, and here are a few of my impressions.

Some major takeaways for me:

1. The Java and Open Source ecosystem has expanded so much in the last decade that Sun can no longer organise a conference to talk about just its own technology. This year, for example, topics included Ruby, JRuby, Rails, the Spring framework, JBoss Seam, MySQL and PostgreSQL, Grails, AJAX, REST, etc.

2. Sun is now irrevocably committed to Open Source. Open Source has become too powerful a force for any company to withstand. It gave me deliciously wicked pleasure to see James Gosling, the "father of Java" and a traditional Open Source skeptic, take the floor and defend Sun's pro-Open Source direction. I know from reading past interviews of Gosling's that he is no admirer of Richard Stallman or the GNU GPL. It must have hurt for him to see his brainchild released under the GPL! ;-D

3. There seems to be some competition hotting up between three players in the Java/Open Source world - SpringSource, Red Hat/JBoss and Sun. I got the feeling Sun was actually aligning with their old enemy Red Hat, because they seem to view SpringSource as their common enemy. SpringSource have taken over Covalent, the Apache services company, so there's a perception that they're pushing Tomcat over all other app servers, GlassFish included. That seems to irk Sun. Later, watching the JBoss Seam demo and learning that it is going to form the basis of a new JavaEE technology called Web Beans, I got the distinct impression that JBoss is proposing a rival to the Spring framework. However, both Sun and SpringSource favour TopLink over Hibernate as the persistence engine for their products, perhaps since Hibernate belongs to JBoss. This whole area bears watching in the near future. It has implications for organisations looking to deploy Open Source Java stacks, because sourcing pieces of the stack from mutually antagonistic parties may create integration problems, even if all the components are nominally Open Source.

4. There is a lot of "low-level" knowledge in the industry about how to use tools, but not enough "high-level" knowledge about why things are the way they are or even what is wrong with the way things are. This is as true of the presenters as of the audience (if audience questions at the end of each talk are any indication). Many presentations focused on syntax and mechanics of how to do stuff ("If you use this annotation, the system will automatically take care of such-and-such for you"), but did not provide more insights into architecture and design rationale. Worse, presenters did not do a good job of answering audience questions at an appropriately high conceptual level. [I felt I could have answered some of those questions better. E.g., "If REST makes the client manage its own state, then how do you secure a service against unauthorised access? Anyone can access the service." Answer: "Consider a simple system in which the userid and password have to be sent with each request. That solves the problem you have described. Now consider more sophisticated authentication tokens that can be sent (and checked) with each request." E.g., "Doesn't the service need to authenticate the client by looking up LDAP?" Answer: "A trusted identity provider has already looked up LDAP or a similar store to authenticate the client, and has provided a token to vouch for the client's identity, which the service now trusts."]
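To flesh out the token answer with a hedged sketch (the scheme and all names here are mine, purely illustrative): an identity provider signs a token once, and the stateless service then verifies the signature on every request, with no session state and no LDAP lookup of its own.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

// Illustrative sketch of per-request token authentication for a
// stateless (REST-style) service. Not production-grade: no expiry,
// and the key is hard-coded for demonstration only.
public class TokenCheck {
    private static final byte[] SHARED_KEY = "demo-key-not-for-production".getBytes();

    // The identity provider issues a token: the user id plus an HMAC over it.
    static String issueToken(String userId) throws Exception {
        return userId + ":" + sign(userId);
    }

    // The service re-computes and compares the signature on each request.
    static boolean verify(String token) throws Exception {
        int sep = token.lastIndexOf(':');
        if (sep < 0) return false;
        String userId = token.substring(0, sep);
        return sign(userId).equals(token.substring(sep + 1));
    }

    private static String sign(String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_KEY, "HmacSHA256"));
        return Base64.getEncoder().encodeToString(mac.doFinal(data.getBytes()));
    }

    public static void main(String[] args) throws Exception {
        String token = issueToken("alice");
        System.out.println(verify(token));               // a genuine token passes
        System.out.println(verify("alice:forgedsig"));   // a forged one fails
    }
}
```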

Mind you, the above critique takes nothing away from the fact that I learnt a fair bit at this conference.

1. For example, I had never before heard of the wireless device called a Sun SPOT, one of which was demoed at the conference. This tiny device fits in the palm of your hand, and has various kinds of sensors (e.g., light sensor, temperature sensor, 3-axis accelerometer, etc.) The devices can detect each other and a gateway wirelessly, and the whole system can be used to collect and collate various kinds of real-world data very simply and display it graphically using a JavaFX Rich Internet client. Very cool. One of the Sun managers said the company is selling these below cost to encourage take-up, so if someone is interested in this sort of device, the Sun SPOT may be worth looking at.

2. I hadn't heard of Mercurial before either. This is a revision control system. Sun has changed their preferred revision control system twice, once from CVS to Subversion, and again from Subversion to Mercurial. From the Wikipedia description and its favourable rating in this comparison of revision control systems, Mercurial seems to be pretty good.

3. I was only vaguely aware that JDK 6 bundles a lightweight HTTP server that can run on a client workstation. I must try out this example they showed today:

import javax.jws.*;
import javax.xml.ws.*;

// Define a web service
@WebService
public class SayHello
{
    public String sayHello()
    {
        return "Hello, World";
    }

    public static void main( String[] _args )
    {
        // Publish the service using the bundled HTTP server in the JDK.
        // This is a "transient" service hosted temporarily by the client workstation.
        SayHello sayHello = new SayHello();
        Endpoint endpoint = Endpoint.create( sayHello );
        endpoint.publish( "http://localhost:8080/hello" );
    }
}

I made a quick attempt to run this example but it didn't work. I'll need to do more research.

4. NetBeans 6.0 is not bad. It may be worth waiting for 6.1, though, because there's at least one issue with 6.0 that will be fixed in the next release. That's the ability to generate a simple CRUDish web interface to exercise JPA persistence logic, which was apparently working in version 5.5 but has mysteriously stopped working in 6.0.

5. Carol McDonald's two sessions (one on WSIT, JAX-WS and REST and one on Application development using plain JavaEE, Spring and JBoss Seam) were packed with technical content, but alas, I felt she couldn't add much in the way of design rationale or insight. Her blog entries here and here have more details of her presentation and should be of interest to many.

6. Apparently, SOAP's transport-independence is not just theory. Sun's Metro Web Services stack can use any of these transports - HTTP, JMS, SMTP or TCP. The TCP bit is the most interesting, as I don't think SOAP requires a more heavyweight transport. All of the qualities of service required by SOAP messaging are provided by the WS-* headers anyway, so it's best for SOAP to choose the lightest transport protocol that can possibly work. What about IP itself?

7. OpenSolaris makes for a pretty cool desktop. It boasts all the Compiz magic that Linux offers, so Unix in general is now no desktop slouch.

Some of my pet peeves:

1. A statement was made by one of the presenters that DTOs are an anti-pattern, and that detached entities are the way to go. I have railed against detached entities from the time I first heard about them. Mind you, I have nothing against detached entities as long as they stay within the application tier. It's when they are proposed as a means to export data to other tiers that I object. That immediately exposes the domain model to external parties, resulting in tight coupling. Change your domain model and your clients break. [Incidentally, if you consider the web tier to be part of "your" application and not an external party, you are guilty of "web application thinking" and not "SOA thinking". The web tier of your application is an external consumer of your application's services, as much as any external client.] One may hate DTOs and criticise them as an "anaemic domain model", but DTOs were never meant to model anything. They are mere structures that act as an envelope for a bunch of data items that are exported to other tiers. To the extent that they hide the actual domain model from external parties and decouple them, DTOs are your friend. [If you find it a pain to marshal and unmarshal DTOs, you perhaps haven't been using Dozer. Also, if you consider the XML documents within SOAP requests/responses to be DTOs in XML format, then you may similarly benefit from using TopLink's meet-in-the-middle mapping approach or JiBX's mapped data binding to marshal/unmarshal them. You certainly shouldn't be using JAXB to generate one from the other.]
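To make the distinction concrete, here is a minimal sketch of a DTO acting as an envelope that hides the domain model from external consumers. The class names are hypothetical (not from Dozer or any other framework), and the mapping is done by hand where a tool could do it declaratively:

```java
// Hypothetical sketch: a DTO shields the domain model from external parties.
public class CustomerAssembler {

    // Rich domain entity: stays inside the application tier.
    static class Customer {
        String firstName = "Ada";
        String lastName = "Lovelace";
        String internalCreditScore = "A+"; // not for external consumption
    }

    // DTO: a mere structure, an envelope for exported data items.
    static class CustomerDTO {
        String displayName;
    }

    // Manual mapping; a tool like Dozer could do this declaratively.
    static CustomerDTO toDTO( Customer customer ) {
        CustomerDTO dto = new CustomerDTO();
        dto.displayName = customer.firstName + " " + customer.lastName;
        return dto;
    }

    public static void main( String[] _args ) {
        System.out.println( toDTO( new Customer() ).displayName ); // Ada Lovelace
    }
}
```

The point is that the domain model can now change (rename `firstName`, drop `internalCreditScore`) without breaking any consumer, because consumers only ever see the DTO.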

2. Speaking of which, there was a lot of code generation demonstrated today of the sort I simply hate. I have explained before why, if we want to implement SOA, we cannot simply generate service implementations from domain models. I have also tried to mitigate the evils of code generation by proposing a set of explicit "service" classes that decouple the domain model from external parties. Annotating these intermediate service classes and automatically generating web service interfaces and WSDL files does no harm, because the required decoupling has already taken place. But as I complained before, this kind of architectural insight did not accompany the demonstrations. They were training technicians out there today.
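As a sketch of what I mean (hypothetical names throughout, not from any product demoed today), an explicit service class exposes a consumer-oriented verb and delegates to the domain model; generation only becomes harmless when it starts from a class like this, rather than from the domain objects themselves:

```java
import java.math.BigDecimal;

// Hypothetical sketch: an explicit service class decoupling the domain model.
public class OrderService {

    // Inward-facing domain object (would normally live in the domain layer).
    static class Order {
        BigDecimal total = new BigDecimal( "99.95" );
    }

    // A DTO-style envelope for the result; no domain internals leak out.
    static class QuoteDTO {
        String formattedTotal;
    }

    // The consumer-oriented verb. A @WebService annotation could safely go
    // on this class, because the decoupling has already taken place here.
    public QuoteDTO quoteOrder() {
        Order order = new Order(); // a real system would look the order up
        QuoteDTO dto = new QuoteDTO();
        dto.formattedTotal = "$" + order.total;
        return dto;
    }

    public static void main( String[] _args ) {
        System.out.println( new OrderService().quoteOrder().formattedTotal ); // $99.95
    }
}
```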

Nevertheless, on the whole, I thought it was an entirely rewarding day (and the food was great!). I'll be there again tomorrow and will report on my impressions.

Friday, February 29, 2008

Was the Y2K Bug a Hoax?

This is one of those questions that may never be answered. There was no worldwide disaster at the turn of the millennium. In fact, there were no reported cases of a Y2K-caused system failure anywhere. So was the entire issue a mere fear-mongering hoax perpetrated by the IT industry to make money? Or did the IT industry do such a good job of identifying the threat early and fixing it that they deserve our collective thanks and appreciation?

Advocates of the former view (that it was a hoax) have had the benefit of the doubt all along, but yesterday's events lend strength to the opposite argument - that we escaped by the skin of our teeth on 1st January 2000 only thanks to alertness and good crisis management.

I just heard from a friend of mine that his company suffered a major outage of its entire phone system yesterday. The time the problem started gives a clue as to what the issue was.

When the phones went dead in their Sydney office, it was 10:59 am, and the date was 29 Feb. If that doesn't signify anything, think about what time it was then in Greenwich.

11:59 pm, 28 Feb.

In a leap year.

That's what they apparently found out during the investigation. A firmware bug meant the phone system's processors couldn't handle the leap-year date change correctly, leading to a crash.

OK, so it was a relatively isolated bug restricted to one system (the phone network). Other computer systems reportedly chugged along without missing a beat. They had been designed to handle leap years because leap years are a well-known and frequent event.
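The leap-year rule is indeed well known; any programmer can write it down in a few lines. This is just the standard Gregorian rule, not the phone system's actual firmware:

```java
public class LeapYear {
    // The Gregorian rule: divisible by 4, except centuries,
    // except centuries divisible by 400.
    static boolean isLeapYear( int year ) {
        return ( year % 4 == 0 && year % 100 != 0 ) || year % 400 == 0;
    }

    public static void main( String[] _args ) {
        System.out.println( isLeapYear( 2008 ) ); // true
        System.out.println( isLeapYear( 1900 ) ); // false
        System.out.println( isLeapYear( 2000 ) ); // true
    }
}
```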

Y2K was neither. It started as an obscure issue with no public awareness, and represented (literally) a once-in-a-lifetime event. How many systems would have been designed for it? Even farsighted designers were up against the high cost of computing in the sixties, seventies and even eighties. Saving 2 bytes with every date stored meant a lot of money, so there is a real argument that the huge sums later spent fixing the Y2K problem were worth it, because the design shortcut had saved even larger sums in 60s, 70s and 80s dollars.
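The shortcut itself is easy to illustrate. Here is a hypothetical sketch of the classic two-digit-year arithmetic, which works fine right up until the century rolls over:

```java
// Hypothetical sketch of the two-digit-year shortcut many old systems used.
public class TwoDigitYear {
    // Age computed with two-digit years, as 1970s-era systems often did.
    static int age( int birthYY, int currentYY ) {
        return currentYY - birthYY;
    }

    public static void main( String[] _args ) {
        System.out.println( age( 65, 99 ) ); // 34  -- correct in 1999
        System.out.println( age( 65, 0 ) );  // -65 -- the Y2K bug in miniature
    }
}
```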

I have always believed (perhaps because I'm in the IT industry) that Y2K was a real problem, and that it was effectively fixed. Perhaps it was fixed too well. Perhaps there should have been some systems that were neglected and then allowed to fail on 1 Jan 2000, just to prove to the skeptics that there was a serious problem.

After all, if everyone in town takes a flu shot, and no one gets the flu that winter, is the flu shot a hoax, or did it just work as it was designed to?

Modelling Service Verbs as Java Classes

OK, so this is probably just an intellectual exercise for now, but someone may find a way to make it practical. Read on.

I've been going on in the recent past about how Services need to be verbs that make sense from a service consumer's point of view. Let's say we can articulate such verbs that reflect our domain from the outside looking in (the Viewpoint Flip of service-orientation). In other words, we manage to model the Service Interface quite independently of the Domain Objects.

How do we implement the verbs in this Service Interface? If we use a language like Java which is noun-oriented, that's going to be hard, as this hilarious article illustrates so well.

I propose we use the features of Java 5 to promote our service methods to the level of first class Java entities, i.e., classes. How do we do that?

Look at this example:

import java.text.DecimalFormat;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutionException;

// It's a noun! It's a verb! It's a Callable!
public class GetStockQuote implements Callable<String>
{
    private String stockCode;

    // Constructor
    public GetStockQuote( String _stockCode ) { stockCode = _stockCode; }

    // Actual method, but it's effectively hidden and only implicitly called.
    public String call()
    {
        DecimalFormat decimalFormat = new DecimalFormat( "0.00" );

        // In a real system, use the stock code to look up the stock price.
        // Here, just generate a random price.
        double stockPrice = Math.random() * 100;
        return decimalFormat.format( stockPrice );
    }

    // Test the "service".
    public static void main( String[] _args )
    {
        String stockCode = "JAVA";
        String stockPrice = null;

        ExecutorService executorService = Executors.newSingleThreadExecutor();
        Future<String> future = executorService.submit( new GetStockQuote( stockCode ) );

        try
        {
            stockPrice = future.get();
        }
        catch ( InterruptedException ie )
        {
            System.out.println( ie.toString() );
        }
        catch ( ExecutionException ee )
        {
            System.out.println( ee.toString() );
        }

        executorService.shutdown();
        System.out.println( "The price of " + stockCode + " is " + stockPrice );
    }
}

As we can see, the Future interface makes the call asynchronous. We could have performed the service in one line:

stockPrice = executorService.submit( new GetStockQuote( stockCode ) ).get();

That would have been a synchronous call.

It's a bit strange to see a verb like GetStockQuote strutting about as a class. In fact, it seems as wrong as those horrible C# method names with initial capitals ;-). But hey, that's the only way King Java will allow verbs to walk around by themselves without a noun chaperone. They've got to become nouns themselves, and the set of classes and interfaces in the java.util.concurrent package makes this passably easy to do.

Now that we've done this, we don't mind too much if someone annotates this verb "class" and generates a SOAP interface and a WSDL description from it. We have already performed the viewpoint flip (in Java), and since we believe that flip is the distinguishing feature of SOA, we are confident that even auto-generating a Web Services interface from this viewpoint-flipped class will not curse it with RPC semantics. It will be a true service.

The reason why this still isn't going to be easy to turn into a SOAP service is this: GetStockQuote will be the WSDL name for the service, all right, but what will the operation be called? (The "service" in WSDL is just the wrapper for a number of "operations", which correspond to object methods.) I think "call()" is a rather weak name for the operation, but I guess it corresponds to "processThis()", which is the "uniform interface" of the SOAP messaging model (MEST).

More practically, the Callable mechanism requires the call to be made in two passes. The first pass involves calling the constructor with the parameters to the service. The second pass involves calling the "call()" method (indirectly) by submitting the class to an executor. How would that mechanism translate to a WSDL definition? I must confess I'm stumped there.

Anyway, this is my rough idea for a way to accommodate the automatic interface generation approach provided by SOA tools today without compromising the spirit of SOA by embracing RPC semantics. I'm sure some refinements to this idea will be proposed by wiser heads than mine.

Thursday, February 28, 2008

So What Gives SOAP its RPC Flavour?

(This is a summing-up of several threads from the past.)

When do we say that SOAP usage is RPC-style?

1. When we pretend to pass domain objects by reference but pass them by value?
2. When we use the "rpc/encoded" style instead of "document/literal"?
3. When we use WSDL (which forces request/response semantics) instead of SSDL?
4. When we expose domain methods as service operations without a viewpoint flip?

Not surprisingly, although all these criteria have merit, I'm now leaning towards the last view, because that represents my latest thinking. I believe that what makes SOA service-oriented is the flip in viewpoint from the inward-facing domain model to the way an external consumer would view the domain.


How we humans view our world (our "domain"):
Our behaviours - eat(), sleep(), makeMillionDollars(), payOffMortgage(), etc.

How aliens may view our world:
"Services" they can use - explore(), trade(), conquer(), etc.

I now think that if we achieve a semantic flipping of viewpoint from the methods within domain objects to the verbs that a "service interface" must expose, then we have decisively broken the RPC link.
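A sketch in Java (with hypothetical names) of what the flip looks like in code: the service verb is not a method on any one domain object, so there is nothing local to pretend to invoke remotely:

```java
import java.math.BigDecimal;

// Hypothetical sketch contrasting the two viewpoints.
public class ViewpointFlip {

    // Inward-facing domain object: methods make sense to the domain modeller.
    static class Account {
        BigDecimal balance = new BigDecimal( "100.00" );
        void debit( BigDecimal amount )  { balance = balance.subtract( amount ); }
        void credit( BigDecimal amount ) { balance = balance.add( amount ); }
    }

    // Outward-facing service verb: what a consumer actually wants done.
    // Note that it is not a method on any single domain object.
    static String transferFunds( Account from, Account to, BigDecimal amount ) {
        from.debit( amount );
        to.credit( amount );
        return "Transferred " + amount;
    }

    public static void main( String[] _args ) {
        Account a = new Account(), b = new Account();
        System.out.println( transferFunds( a, b, new BigDecimal( "25.00" ) ) );
        System.out.println( a.balance + " / " + b.balance ); // 75.00 / 125.00
    }
}
```

A consumer of `transferFunds` never sees `debit()` or `credit()`; it sees only the verb that makes sense from its own point of view.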

1. There is no way we can pretend to be passing objects by reference. There is a very tenuous link between a method inside a domain object and a service verb, so the illusion of calling a local method on an object simply cannot exist.

2. The "rpc/encoded" style really doesn't hurt after the viewpoint flip is achieved. Does it really matter if we say


at the top level instead of having a single document root like this:


I don't think so. I think that's a technicality.

3. And even if WSDL forces request/response semantics on SOAP, a suitably viewpoint-flipped service is still not a remote procedure call because the service consumer still sees things their way.

On the other hand, if you take a domain method and (without a Viewpoint Flip) turn it into a SOAP operation in the doc/lit style, and also describe it in SSDL, I would say it's still RPC, because we're merely making domain methods remotely invocable.

So if that's all there is to it, let's call RPC "domain-oriented" in contrast to "service-oriented".