Tuesday, June 23, 2009

What's Wrong With Vendor Lock-In?

A colleague asked me this question today, and I'm really glad it came out in the open, because it's so much easier to deal with ideas when they're plainly stated.

I believe there are at least two strong reasons to actively avoid vendor lock-in.

The first reason, paradoxically, lies in the very justification for agreeing to be locked in - "ease of integration". If an organisation already has a product from a certain vendor and is looking for another product that needs to integrate with this one, then conventional thinking is to go back to the same vendor and buy their offering rather than a competitor's. After all, they're far more likely to be "well integrated". There are typically so many problems integrating products from different vendors that it doesn't seem worth the effort. The best-of-breed approach leads to integration problems, so customer organisations often throw in the towel and go for an "integrated stack" of products from one company.

This approach is antithetical to SOA thinking, however. What such organisations are really saying is that they don't mind implicit contracts between systems as long as they work. But implicit contracts are a form of tight coupling, and as we know, tight coupling is brittle. Upgrade one product and you'll very likely need to upgrade the other as well. In production systems, we have seen product upgrades delayed for years because the implicit dependencies between "well-integrated" components could cause some of them to break on an upgrade, which is unacceptable in mission-critical systems. As a result, many useful features of later versions are forfeited. That's one of the unseen, downstream costs of the tight coupling that comes from vendor lock-in.

SOA shows us the solution. For robust integration between systems, we need loose coupling between them, which seems a bit paradoxical. Shouldn't tight coupling be more robust? Hard experience has taught us otherwise. But what is loose coupling? It's a hard concept to visualise, so let's define it in terms of tight coupling, which is easier to understand. In practical terms, loose coupling between two systems can be thought of as tight coupling to a mutually agreed, rigid contract rather than to each other. Such contracts are very often nothing more than open standards.
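To make that definition concrete, here's a minimal Java sketch (the interface and vendor class names are hypothetical, purely for illustration). Both vendors' implementations couple tightly to the shared contract, and the consuming code compiles against the contract alone, so either vendor can be swapped in without touching it:

```java
public class LooseCoupling {

    // The mutually agreed, rigid contract. Both sides couple
    // tightly to this interface, not to each other.
    interface DirectoryService {
        String lookupEmail(String userId);
    }

    // One (hypothetical) vendor's implementation of the contract.
    static class VendorADirectory implements DirectoryService {
        public String lookupEmail(String userId) {
            return userId + "@vendor-a.example";
        }
    }

    // A competing (equally hypothetical) implementation.
    static class VendorBDirectory implements DirectoryService {
        public String lookupEmail(String userId) {
            return userId + "@vendor-b.example";
        }
    }

    // The consuming system knows only the contract; replacing
    // one vendor with the other requires no change here.
    public static String notifyAddress(DirectoryService dir, String userId) {
        return "mailto:" + dir.lookupEmail(userId);
    }

    public static void main(String[] args) {
        System.out.println(notifyAddress(new VendorADirectory(), "alice"));
        System.out.println(notifyAddress(new VendorBDirectory(), "alice"));
    }
}
```

The point of the sketch is that `notifyAddress` never names a vendor: the lock-in surface has been reduced to the contract itself.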

Even though people generally nod their heads when asked about their preference for open standards, the contradiction between that stated preference and their practical choices in favour of "integrated stacks" is hard to understand. If pressed, they might say that creating contracts external to both systems and insisting on adherence to them seems like a waste of time. Why not something that just works "out of the box"? The answer is that this is not an either-or choice. A browser and a web server work together "out of the box", but they do so not because they come from the same company but because they both adhere to the same rigid contract, which is the set of IETF and W3C standards (HTTP, HTML, CSS and JavaScript).
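The browser/web server example can be shrunk to a toy demonstration. In the self-contained Java sketch below, the "server" and the "client" know nothing about each other's internals; they interoperate purely because both honour the HTTP message format. (The class and method names are mine, invented for the example.)

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ContractDemo {

    // A tiny "vendor A" server: all it knows is the HTTP contract.
    public static ServerSocket startServer() throws IOException {
        ServerSocket server = new ServerSocket(0); // any free port
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 OutputStream out = s.getOutputStream()) {
                // Consume the request line and headers up to the blank line.
                String line;
                while ((line = in.readLine()) != null && !line.isEmpty()) {
                    // headers ignored in this toy example
                }
                String body = "hello";
                out.write(("HTTP/1.1 200 OK\r\n"
                        + "Content-Length: " + body.length() + "\r\n"
                        + "Connection: close\r\n\r\n" + body).getBytes());
            } catch (IOException ignored) {
            }
        });
        t.setDaemon(true);
        t.start();
        return server;
    }

    // A "vendor B" client: speaks only the same HTTP contract.
    public static String fetchStatusLine(int port) throws IOException {
        try (Socket s = new Socket("localhost", port);
             OutputStream out = s.getOutputStream();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.write(("GET / HTTP/1.1\r\nHost: localhost\r\n"
                    + "Connection: close\r\n\r\n").getBytes());
            out.flush();
            return in.readLine(); // e.g. "HTTP/1.1 200 OK"
        }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = startServer();
        System.out.println(fetchStatusLine(server.getLocalPort()));
    }
}
```

Neither side "integrates" with the other; both integrate with the standard, which is the whole argument in miniature.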

The key is to hold vendors' feet to the fire on adherence to open standards. This isn't that hard. With Open Source products available in the market to keep vendors honest, customers have only themselves to blame if they let themselves get locked in.

The second reason why vendor lock-in is a bad idea is because it often implies vendor lock-out. Very often, customers are held hostage to the politics of vendor competition. Once you start buying into a particular vendor's stack, it will become increasingly hard to look elsewhere. Any third-party component you procure will be less likely to play well with the ones you already have, causing you more integration headaches. My favourite example from just a few years ago is Oracle Portal Server, which claimed to support authentication against LDAP. It turned out later that "LDAP" meant not any LDAP server but only Oracle's own OID (Oracle Internet Directory). The corporate directory server, which happened to be IBM Tivoli Directory Server, therefore couldn't be used. The data in it had to be painfully replicated to OID to allow Oracle Portal Server to work.

My solution to incipient vendor lock-in would be to aggressively seek standard interfaces even between products from the same company. I remember asking an IBM rep why they weren't enabling SMTP, IMAP and vCalendar as the mail protocols between Notes client and server. The rep sneered at me as if I was mad. Why would you use these protocols, he wanted to know, when Notes client and server are so "tightly integrated"? Others on my side of the fence agreed with him. Well, the answer came back to bite the company many years later, when they wanted to migrate to Outlook on the client side. Their "tight integration" had resulted in vendor lock-out, preventing them from connecting to Notes server using Outlook (which standard protocols would have allowed) and they were stuck with Notes client indefinitely. By that time, there were too many other dependencies that had been allowed to work their way in, so enabling open protocols at that stage was no longer an option. That was a great outcome for IBM, of course, but to this day, there are people in the customer organisation who don't see that what happened to them was a direct consequence of their neglect of open interfaces in favour of closed, "tight" integration.

Ultimately, vendor lock-in has implications of cost, time and effort, which basically boils down to cost, cost and cost.

As users of technology, we simply cannot afford it.

Monday, June 22, 2009

Microsoft May Lose Some Fair-Weather Friends

I read the news of the release of Microsoft Security Essentials with some amusement.

One reason for my amusement is the notion that an operating system should require a separate product to ensure its security instead of having security built into its design.

The other reason is the anticipation of the impact this will have on that group of parasites in the Windows ecosystem. I'm talking about the makers of anti-virus software.

For a long time now, these companies have been less-than-honest players in the industry. They revel in the fact that the inherent vulnerabilities in Windows give them a steady income stream, and they act as if they have the best interests of the customer at heart, when in fact they have always fought true advances in computer security that would have put them out of business.

The FUD from these players against Linux has been astounding. A common refrain is that "Linux is only secure because no one uses it. When its profile rises, hackers and malware writers will turn their attention to it." Really? How come IIS used to attract a disproportionate share of web server attacks in spite of Open Source Apache having twice its market share at the time? Surely it's badly-designed systems that invite attack.

These folk even try to sell anti-virus software for Linux! This would certainly fool people who don't realise that Linux itself is the best anti-virus software you can install on your computer. I haven't been hit by malware since 1997, when I first installed Slackware Linux on my PC.

So what will Microsoft's announcement of free anti-virus protection do to the likes of McAfee and Symantec? While users will probably be going, "It's about time," I can imagine a very different reaction at these companies.

I'm not shedding any tears, though.

Monday, June 08, 2009

JavaOne 2009 Day Five (05/06/2009)

The final day of JavaOne 2009 began with a General Session titled "James Gosling's Toy Show". This featured the creator of Java playing host to a long line of people representing organisations that were using Java technology in highly innovative and useful ways. Many of them got Duke's Choice awards.

First up was Ari Zilka (CEO, Terracotta) who was given the award for a tool that makes distributed JVMs appear like one, thereby providing a new and different model of clustering and scalability. Terracotta also allows new servers and JVMs to be added on the fly to scale running Java applications.

Brendan Humphreys ("Chief Code Poet", Atlassian) received the award for Atlassian's tool Clover. [Brendan is from Atlassian's Sydney office, and I've met him there on occasion when Atlassian hosts the Sydney Java User Group meetings.] Clover is about making testing easier by identifying which tests apply to which part of an application. When changing part of an application, Clover helps to run only the tests that apply to that part of the code.

Ian Utting, Poul Henriksen and Darin McCall of BlueJ were recognised (though not with a Duke's Choice award) for their work on Greenfoot, a teaching tool for children. Most of their young users are in India and China, but it's not clear how many there are, because they only interact with the Greenfoot team through their teachers.

Mark Gerhard (CEO of Jagex) was called up to talk about RuneScape. [Mark Gerhard received the Duke's Choice Award on Day 2, as diligent readers of this blog would remember.] This time, the focus was not on the game itself, but on the infrastructure it took to run it. According to Gerhard, Jagex runs the world's second-biggest online forum. There is no firewall in front of the RuneScape servers (!), so they get the full brunt of their user load. The servers haven't been rebooted in years. Jagex had to build their own toolchain, because a toolchain needs to understand how to package an application for streaming, which off-the-shelf equivalents don't know how to do. Jagex runs commodity servers (about 20?) and their support team has just 3 people. Considering that their user base numbers 175 million (10 million of whom are active at any time), this is a stupendous ratio. Of course, Jagex has about 400 other staff, mainly the game developers. Jagex builds their libraries and frameworks in-house, and these tools maintain feature parity with commercial equivalents such as Maya. I found it curious that Gerhard was cagey when asked which version of Java they used. Why would that need to be a secret? All he would say was that we could guess the version from the fact that their servers hadn't been rebooted in 5 years.

By way of vision, Gerhard said that OpenGL would be standard on cell phones in a year, and Jagex's philosophy is that "There's no place a game shouldn't be".

The next people on stage were two researchers from Sun itself: Simon Ritter and Angela Caicedo. [Caicedo had been at the Sydney Developer Day only a couple of weeks earlier.] Ritter demonstrated a cool modification of the Wii remote control, especially its infra-red camera. The remote talks Bluetooth to a controller. I didn't grasp the details of how the system was built, although I heard the terms WiiRemoteJ and JavaFX being mentioned. Ritter held a screen in front of him, and a playing card was projected onto it. Nothing hi-tech there. When he rotated the screen by 90 degrees, the projected image rotated as well, which was interesting. But what brought applause was when he flipped the screen around, and the projected image switched to the back of the card! He also showed how a new card could be projected onto the screen by just shaking the screen a bit (shades of the iPod Shuffle there).

Caicedo demonstrated a cool technology that she thought might be useful to people like herself who had young children with a fondness for writing on walls. With a special glove that had an embedded infra-red chip, she could "draw" on a screen with her finger, because a projector would trace out whatever she was drawing based on a detection of the position of her finger at any given time. The application was a regular paint application, allowing the user to select colours from a toolbar and even mix them on a palette.

Tor Norbye (Principal Researcher, Sun) then gave the audience a sneak preview of the JavaFX authoring tool that has not yet been released. Very neat animations can be designed. It's possible to drag an image to various positions and map them to points in time. Then the tool interpolates all positions between them and shows an animation that creates an effect of smooth motion, bouncing images, etc. There are several controls available, like buttons and sliders, and it's possible to visually map between the actions of controls and behaviours of objects. It reminded me of the BeanBox that came with Java 1.1, which showed how JavaBeans could be designed to map events and controls. The lists of events and actions appear in dropdowns through introspection.

There's no edit-compile cycle, which speeds up development. Norbye showed how the same animation could be repurposed to different devices and form factors. There's a master-slave relationship between the main screen and the screens for various devices, such that any change made to the main screen is reflected in the device-specific screens, but any specific overrides made on a device-specific screen remain restricted to that screen alone.

Fritjof Boger-Engelhardtsen of Telenor gave us a demo of a technology I don't pretend to understand. In the mobile world, the SIM card platform is very interesting to operators. The next generation of SIM cards will be TCP/IP connected nodes, with motion sensors, WiFi, etc., embedded within the card. It will be a JavaCard 3 platform. It's possible to use one's own SIM card to authenticate to the network. Boger-Engelhardtsen gave us a demo of a SunSpot sensor device connected to a mobile phone and being able to control the phone's menu by moving the SunSpot. The phone network itself is oblivious to this manipulation. More details are available at http://playsim.dev.java.net.

Brad Miller (Associate Professor, Worcester Polytechnic Institute Robotics Research Center) and Derek White (Sun Labs) showed some videos of the work done by High School students. Given a kit of parts, the students have to put together robots to participate in the "US First", an annual robotics competition. A large part of the code has been ported across from C/C++ to Java, and the project is always on the lookout for volunteer programmers. Interested people can go to www.usfirst.org. WPI got a Duke's Choice award for this.

Sven Reimers (System Engineer and Software Architect, ND Satcom) received a Duke's Choice award for the use of Java in analysing the input of satellites.

Christopher Boone (President and CEO, Visuvi, Inc) showed off the Visuvi visual search engine. Upload an image and the software analyses it on a variety of aspects and can provide deep insights. Simple examples are uploading images of paintings and finding who the painter was. More useful and complex uses are in the area of cancer diagnosis. Visuvi improves the quality and reduces the cost of diagnosis of prostate cancer. The concordance rate (probability of agreement) between pathologists is only about 60%, and the software is expected to achieve much better results. The Visuvi software performs colour analysis, feature detection and derives spatial relationships. There's some relationship to Moscow State University that I didn't quite get. At any rate, Visuvi is busy scanning in 400 images a second (at 3000 megapixels and 10 MB each)!

Sam Birney and Van Mikkel-Henkel spoke about Mifos, a web application for institutions that provide microfinance to poor communities. Microfinance is inspired by the work done by Muhammad Yunus of Grameen Bank. This is an Open Source application meant to reduce the barriers to operation of cash-strapped NGOs. The challenge is to scale. Once again, volunteers are wanted: http://www.mifos.org/developer/ , and not just for development but also for translation into many relatively unknown languages. Mifos won a Duke's Choice Award.

Manuel Tijerino (CEO, Check1TWO) told of how many of his musician friends were struggling to find work at diners. So he created a JavaFX based application that allows artistes to upload their work to the Check1TWO site, and it's automatically available on any Check1TWO "jukebox" at any bar or disco. Regular jukeboxes are normally tied up in studio contracts, so the Check1TWO jukeboxes provide a means for struggling artistes to reach over the heads of the studios and connect directly with their potential audiences.

Zoltan Szabo and Balazs Lajer (students at the University of Pannonia, Hungary) showed off their project that won the first prize at RICOH's student competition. Theirs is a Java application that runs on a printer/scanner and is capable of scoring answer sheets.

Marciel Hernandez (Senior Engineer, Volkswagen Electronics Research Lab and Stanford University) and Greg Bollella (Distinguished Engineer, Sun) talked about Project Bixby. This is about automating the testing of high-speed vehicles through "drive-by-wire". The core of the system is Java RTS (Java Real-Time System). The primary focus is improving safety. Stanford University is building the control algorithms. It should be possible to control the car when unexpected things happen, which is especially likely on dirt racetracks. There's no need to put a test driver at risk. Project Bixby leads to next-generation systems that are faster, such as more advanced ABS (Anti-lock Braking System) and newer stability control systems.

Finally, there was a video clip of the LincVolt car, which turns a classic Lincoln Continental into a green car like the Prius, but with some differences. The Prius has parallel electrical and petrol engines. The LincVolt has batteries driving the wheels all the time, with the petrol engine only serving to top up the battery pack when it starts to run down. What's the connection with Java? The control systems and visual dashboard are all Java.

This concluded the General Session.

I then attended a session titled "Real-World Processes with WS-BPEL" by Murali Pottlapelli and Ron Ten-Hove. The thrust of the whole session was that WS-BPEL as a standard was incomplete and that real-world applications need more capabilities than WS-BPEL 2.0 delivers. A secondary theme of the session was that an extension called BPEL-SE developed for OpenESB is able to address the weaknesses of WS-BPEL.

[My cynical take on this is that every vendor uses the excuse of an anaemic spec to push their own proprietary extensions. If there is consensus that WS-BPEL 2.0 doesn't cut the mustard, why don't the vendors sit down together and produce (say) a WS-BPEL 3.0 spec that addresses the areas missing in 2.0? I'm not holding my breath.]

The structure was fairly innovative. Ten-Hove would talk about the failings of the standard and Pottlapelli would then describe the solution implemented in BPEL-SE. [The session would have been far better if Pottlapelli had been able to communicate more effectively. I'm forced to the painful realisation that many Indians, while being technically competent, fail to impress because their communication skills are lacking.]

These are the shortcomings of WS-BPEL 2.0 that are addressed by BPEL-SE:

- Correlation is hard to use, with multiple artifacts to define and edit. BPEL-SE provides a correlation wizard that hides much of this complexity.
- Reliability is not defined in the context of long-running business processes susceptible to crashes, maintenance downtime, etc. BPEL-SE defines a state machine where the state of an instance's execution is persisted and can be recovered on restart without duplication.
- High Availability is not defined. BPEL-SE on GlassFish ESB can be configured to take advantage of the GlassFish cluster, where instances migrate to available nodes.
- Scalability is an inherent limitation when dealing with long-running processes that hold state even when idle. BPEL-SE defines "dehydrate" and "hydrate" operations to persist and restore state. As the dehydrate/hydrate operation is expensive, two thresholds are defined (the first to dehydrate variables alone, and the next to dehydrate entire instances).
- Retry is clumsy in WS-BPEL, because the <invoke> operation doesn't support retries. Building retry logic into the code obscures the real business logic. BPEL-SE features a configurable number of retries and a configurable delay between retries. It also supports numerous actions on error.
- Throttling is difficult to achieve in WS-BPEL, which makes the system vulnerable to denial of service attacks and bugs that result in runaway processes. BPEL-SE can limit the number of accesses to a "partnerlink" service node, even across processes. This helps to achieve required SLAs.
- Protocol headers are ignored in WS-BPEL by design, as it is meant to be protocol-agnostic. However, many applications place business information in protocol headers such as HTTP and JMS. BPEL-SE provides access to protocol-specific headers as an extension to <from> and <to> elements, covering protocols such as HTTP, SOAP, JMS (strictly speaking an API rather than a protocol) and file headers.
- Attachments are non-standard in WS-BPEL, with some partners handling document routing inline and some as an attachment. BPEL-SE adds the extensions "inlined" and "attachment" to the <copy> element.
- BPEL expressions suffer because the standard relies on XPath, and XPath 1.0 converts any non-empty node-set to true, so a boolVar holding false is interpreted as true. BPEL-SE uses JXPath, enhanced to be schema-aware: false is not interpreted as true, and integers do not end in ".0".
- XPath limitations hamper WS-BPEL, because XPath has a limited type system, it is impossible to set a unique id on the payload, it's a challenge to extract expensive items in a purchase order document if it spans multiple activities, etc. Any kind of iteration across an XML structure is difficult, and this is partly due to the limitations of XPath and partly due to those of BPEL. With BPEL-SE, Java calls can be embedded in any XPath expression, with syntax adapted from Xalan. It also supports embedded JavaScript (Rhino with E4X), which makes XML parsing much easier.
- Fault handling is problematic in WS-BPEL because standard faults have no error messages, which makes them hard to debug. Standard and non-standard faults can be hard to distinguish. This complicates fault handlers, requiring multiple catches for standard faults. The WSDL 1.1 fault model has been carried too far. BPEL-SE fault handler extensions propagate errors and faults not defined in WSDL as system faults. Associated errors are wrapped in the fault message. Standard faults are associated with a fault message and failure details like activity name are carried along. BPEL-SE catches all standard and system faults.
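The retry point in particular is easy to appreciate once you've had to hand-roll it. Since WS-BPEL's <invoke> offers no retry support, the pattern BPEL-SE makes configurable must otherwise be written as wrapper logic that obscures the business process. A generic sketch of that pattern in plain Java (the class and method names are mine, not BPEL-SE's): a configurable number of retries, a configurable delay between attempts, and the last failure rethrown when attempts are exhausted.

```java
import java.util.concurrent.Callable;

public class Retry {

    /**
     * Run the task up to (1 + maxRetries) times, pausing
     * delayMillis between attempts. Rethrows the most recent
     * failure once all attempts are exhausted.
     */
    public static <T> T withRetries(Callable<T> task,
                                    int maxRetries,
                                    long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e; // remember the most recent failure
                if (attempt < maxRetries) {
                    Thread.sleep(delayMillis);
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        // A flaky task that fails twice before succeeding.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient fault");
            return "ok";
        }, 5, 100);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Putting this boilerplate into the engine, as BPEL-SE does, is precisely what keeps the process definition readable.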

I asked the presenters if they were making the case that BPEL-SE had addressed all the limitations of WS-BPEL or if they felt there were some shortcomings that still remained. Ten-Hove's answer was that subprocesses were a lacuna that hadn't yet been addressed.

My next session was "Cleaning with Ajax: Building Great Apps that Users Will Love" by Clint Oram of SugarCRM. I regret to say that this session was short on subject matter and ended in just half an hour. The talk seemed to be more a sales pitch for SugarCRM than a discussion of Ajax principles. Even the few design principles that were discussed were fairly commonsensical and did not add much insight.

For what it is worth, let me relate the points that sort of made sense.

Quite often, designers are asked to "make the UI Ajax". This doesn't say what is really expected. Fewer page refreshes? Cool drag-and-drop? Animation?

There are also some downsides of Ajax:
- Users get lost (browser back and forward buttons and bookmarks are broken, the user doesn't always see what has changed on the page, and screen elements sometimes keep resizing constantly, irritating the user)
- There are connection issues (it can be slow, the data can't be viewed offline, and applications often break in IE6)
- There are developer headaches (mysterious server errors (500), inconsistent error handling, PHP errors break Ajax responses)

Some design questions need to be asked before embarking on an Ajax application:
- Does the user really want to see the info?
- Will loading this info be heavy on the server?
- Does the info change rapidly?
- Is there too much info?

The "lessons learned", according to Oram, are:

- Use a library like YUI, Dojo or Prototype instead of working with raw JavaScript.
- Handle your errors, use progress bars to indicate what is happening, don't let the application seem like it's working when there has been an error.
- Ajax is a hammer, and not every problem is a nail.
- Load what you need, when you need it.

As one can see, there was nothing particularly insightful in this talk, and it was a waste of half-an-hour.

One interesting piece of information from this session was that IBM WebSphere sMash (previously Project Zero) is a PHP runtime that runs on the JVM and provides full access to Java libraries.

SugarCRM 5.5 Beta is out now, and adds a REST API to the existing SOAP service API.

I'm not a fan of SugarCRM. The company's "Open Source" strategy is basically to provide crippleware as an Open Source download, and to sell three commercial versions that do the really useful stuff. I don't know how many people are still fooled by that strategy.

The next session I attended was quite possibly the best one I have attended over these five days. It was called "Resource-Oriented Architecture and REST" by Scott Davis (DavisWorld Consulting, Inc).

Unfortunately, the talk was so engrossing that I forgot to take notes in many places, so I cannot do full justice to it here :-(. Plus, Davis whizzed through examples so fast I didn't have time to transcribe them fully, and lost crucial pieces of code.

Davis is the author of the columns "Mastering Grails" and "Practically Groovy" on IBM developerWorks. He's also a fan of David Weinberger's book "Small Pieces Loosely Joined", which describes the architecture of the web. Although several years old, the book is still relevant today. [David Weinberger is also a co-author of The Cluetrain Manifesto.]

I knew Groovy was quite powerful and cool, but in Davis's hands it was pure magic. In a couple of unassuming lines, he was addressing URLs on the web and extracting information that would have taken tens of lines with SOAP-based Web Services. I must have transcribed them incorrectly, because I can't get them to work on my computer. I'll post those examples when I manage to get them working.

A really neat example was when he found a website that provided rhyming words to any word entered on a form. He defined a "metaclass" method on the String class to enable it to provide rhymes by invoking the website. Then a simple statement like "print 'java'.rhyme()" resulted in "lava".

As Davis said, REST is the SQL of the Web.

Davis then talked about syndication, with the examples of RSS and Atom. Three things draw one in: chronology (newest first), syndication (the data comes to you) and permalinks (which allow you to "pass it on"). He also mentioned the Rome API and called it the Hibernate of syndication.

What's the difference between RSS and Atom? I hadn't heard it put this way before. Davis called RSS "GETful" and Atom "RESTful".

He then did a live grab of a Twitter feed and identified the person in the room who had sent it.

In sum, this session was more a magic show than a talk. It made me determined to learn Groovy and how to use it with RESTful services.

The last session I attended at JavaOne 2009 was "Building Enterprise Java Technology-based Web Apps with Google Open Source Technology" by Dhanji Prasanna of Google.

Prasanna covered Google Guice, GWT (pronounced "gwit") and Web Driver, and provided a sneak preview of SiteBricks.

I'm somewhat familiar with GWT but none of the others, so my descriptions below may make no sense. It's a verbatim transcription of what Prasanna talked about.

Guice helps with testing and testability. Its advantages are:
- Simple, idiomatic AOP
- Modularity
- Separation of Concerns
- Reduction of state-aware code
- Reduction of boilerplate code

It enables applications to horizontally scale. Important precepts are:

- Type safety, leading to the ability to reason about programs
- Good citizenship (modules behave well)
- Focus on core competency
- Modularity
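The testability and separation-of-concerns claims are easiest to see with plain constructor injection, which is the pattern Guice automates. In this hand-rolled sketch (not the Guice API; all class names are made up for illustration), the service depends only on an interface, so a test can hand it a fake collaborator instead of the real one:

```java
public class InjectionDemo {

    // The dependency's contract.
    interface Mailer {
        String send(String to, String body);
    }

    // "Production" implementation (stands in for real infrastructure).
    static class SmtpMailer implements Mailer {
        public String send(String to, String body) {
            return "sent to " + to;
        }
    }

    // Depends only on the Mailer contract, injected via the constructor.
    // This is the wiring Guice performs automatically, e.g. via
    // bind(Mailer.class).to(SmtpMailer.class) in a module.
    static class SignupService {
        private final Mailer mailer;

        SignupService(Mailer mailer) {
            this.mailer = mailer;
        }

        String register(String email) {
            return mailer.send(email, "welcome") + " / registered " + email;
        }
    }

    public static void main(String[] args) {
        // Wiring done by hand here; an injector would do it for us.
        SignupService svc = new SignupService(new SmtpMailer());
        System.out.println(svc.register("dev@example.com"));
    }
}
```

In a test, `SignupService` can be constructed with a lambda standing in for `Mailer`, with no mail server anywhere in sight, which is the "reduction of state-aware code" benefit in practice.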

GWT is a Java-to-JavaScript compiler. It supports a hosted mode and a compiled mode. Core Java libraries are emulated. It's also typesafe.

The iGoogle page is an example of what look like "portlet windows", but which are independent modules that are "good citizens".

Unfortunately, social sites like OpenSocial and Google Wave have different contracts, so modules may not be portable across them.

Google Gin is Guice for GWT. It runs as Guice in hosted mode and compiles directly to JavaScript. There is no additional overhead of reflection.

Types are Java's natural currency. Guice and GWT catch errors early, facilitate broad refactorings, prevent unsafe API usage and make it easier to reason about programs. They're essential to projects with upwards of ten developers, because these features are impossible with raw JavaScript.

Web Driver is an alternative to the Selenium acceptance testing tool, and apparently the codebases are now merging. Web Driver has a simpler blocking API that is pure Java. It uses a browser plugin instead of JavaScript, features fast DOM interaction and a flexible API and supports native keyboard and mouse emulation. Web Driver supports clustering.

Then Prasanna provided a preview of SiteBricks, which is a RESTful web framework. The focus is on HTTP and lessons have been learned from JAX-RS. There are statically typed templates with rigorous error checking. It's concise and uses type inference algorithms. It's also fast and raises compilation errors if anything is wrong.

SiteBricks modules are Guice servlet modules. One can ship any module as a widget library. Any page is injectable. It supports the standard web scopes (request, session) and also a "conversation" scope.

SiteBricks has planned Comet ("reverse Ajax") support. The preview release is available on Google Code Blog.

That concludes my notes on JavaOne 2009. Admittedly, a lot of this has been mindless regurgitation in the interests of reporting. If these make sense to readers (even if they make no sense to me), well and good.

In the days to come, I'll ruminate on what I've learned and post my thoughts.

Thursday, June 04, 2009

JavaOne 2009 Day Four (04/06/2009)

The third day of JavaOne proper (the fourth day if including CommunityOne on Monday) started with a General Session on interoperability hosted by (of all companies) Microsoft. It shouldn't be too surprising, actually, because Sun and Microsoft buried the hatchet about 5 years ago and started to work on interoperability. Time will tell if that was an unholy alliance or not.

Dan'l Lewin (Corporate VP, Strategy and Emerging Business Development, Microsoft) took the stage for some opening remarks. What he said resonated quite well, i.e., that users expect things to just work. Data belongs to users and should move freely across systems. The key themes are interoperability, customer choice and collaboration.

Lewin pointed to TCP/IP as the quintessential standard. Antennae and wall plugs may change from country to country, but TCP/IP is the universal standard for connectivity, which is why the Internet just works. [I could add many other standards to this list, which would also have "just worked" but for Microsoft!]

Lewin added that the significant partners that Microsoft is working with in the Java world are Sun, the Eclipse Foundation and the Apache Software Foundation. The key areas where Sun and Microsoft work together are:

- Identity Management, Web SSO and Security Services
- Centralised Systems Management
- Sun/Microsoft Interoperability Center
- Desktop and System Virtualisation
- Java, Windows, .NET

Identity Management interoperability has progressed a great deal with the near-universal adoption of SAML. On virtualisation, where host and guest systems are involved, Lewin put it very well when he said Sun and Microsoft control each other's environment "in a respectful way."

A website on his slide pack was www.interoperabilitybridges.com.

Steven Martin (Senior Director, Developer Platform Productivity Management, Microsoft) took over from Lewin and started off with "we come in peace and want to talk about interoperability".

He introduced Project Stonehenge, a project under Apache, with code available under the Apache Licence. This uses IBM's stock trading application to demonstrate component-level interoperability between the Microsoft and Sun stacks.

Greg Leake of Microsoft and Harold Carr of Sun then provided a live demo of this interoperability.

The stock trading application has four tiers: the UI, a business logic tier, a further business logic tier housing the order processing service, and the database tier. The reason for splitting the business logic tier into two was to demonstrate not just UI-tier-to-business-logic-tier connectivity but also service-to-service interop. The Microsoft stack was built on .NET 3.5, with ASP.NET as the UI technology and WCF as the services platform. The Sun stack was based on JSP for the UI and the Metro stack for services, running on Glassfish. Both stacks pointed back to a SQL Server database.

The first phase of the demo showed the .NET stack running alone, with Visual Studio breakpoints to show the progress of a transaction through the various tiers. Then the ASP.NET tier was reconfigured to talk to the Metro business logic layer, and the control remained with the Java stack thereafter. In the third phase of the demo, the Metro business service layer called the order processing service in the .NET stack. The application worked identically in all three cases, effectively demonstrating bidirectional service interoperability between .NET and Metro Web Services.

Martin also mentioned a useful principle for interoperability, "Assume that all applications are blended, and that all client requests are non-native". This is analogous, I guess, to that other principle, "Be conservative in what you send, and liberal in what you accept".

He also referred to "the power of choice with the freedom to change your mind", which I thought was a neat summarisation of user benefits.

Aisling MacRunnels (Senior VP, Software Marketing, Sun) joined Steven Martin on stage to talk about the Sun-Microsoft collaboration, which isn't just limited to getting the JRE to run on Windows. Microsoft also cooperates with Sun to get other "Sun products" like MySQL, VirtualBox and OpenOffice to work on the Microsoft platform. The last item must be particularly galling to the monopoly. Microsoft is also working to get Sharepoint authentication happening against OpenSSO using SAML2. Likewise, WebDAV is being supported in Sun's Storage Cloud. In other words, when both parties support open standards, their interoperability improves.

I think it speaks more of the quality and tightness of a standard than of vendor cooperation when systems work together. Sun and Microsoft shouldn't need to talk to each other or have a cozy relationship. Their systems need to just work together in the first place.

The next session I attended was "Metro Web Services Security Usage Scenarios" by Harold Carr and Jiandong Guo of Sun. Carr is Metro Architect and Guo is Metro Security Architect, so we pretty much had the very brains of the project talking to us.

There wasn't very much in the lecture that was specific to Metro. Most of the security usage patterns were general PKI knowledge, but I must say the diagrams that illustrated the logic flow in each pattern were top class. I have seen many documents on PKI, but these are some of the best. My only quibble is that they use the single term "encrypt/decrypt" for two distinct operations that deserve separate labels - "encrypt/decrypt" and "sign/verify".

Some of the interesting points they made were:

- The list of security profiles bundled with Metro will be refactored soon. Some will be dropped and new ones will be added.
- SSL performance is still better than WS-Security, even with optimisations such as derived keys and WS-SecureConversation. [WS-Security uses a fresh ephemeral key for every message, while WS-SecureConversation caches and reuses the same ephemeral key for the whole session.]
- Metro 2.0 is due in September 2009.

I then attended a session called "Pragmatic Identity 2.0: Simple, Open Identity Services using REST" by Pat Patterson and Ron Ten-Hove of Sun. Ten-Hove is also known for his work on Integration and JBI. He was the spec lead for JSR 208.

[As part of their demo, I realised that NetBeans has at least a couple of useful features I didn't know about earlier. There's an option to "create entity classes from tables" (using JPA, I presume), and another one to "create RESTful web services from entity classes".]

It's a bit difficult to describe the demo that Patterson gave. On the surface, it was a standard application that implemented user-based access control. One user saw more functions than another. The trick was in making it RESTful. Nothing had to be explicitly coded for, which was the cool part. The accesses were intercepted and authenticated/authorised without the business application being aware of it. As I said, it's hard to describe without the accompanying code.

The next session was on "The Web on OSGi: Here's How" by Don Brown of Atlassian. Brown is a very polished and articulate speaker and he kept his audience chuckling.

OSGi is something that has fascinated me for a while but I haven't got my head around it completely yet. At a high level, OSGi is a framework to allow applications to be composed dynamically from components and services that may appear and disappear at run-time. Dependencies are reduced or eliminated by having each component "bundle" use its own classloader, so version incompatibilities can be avoided. Different bundles within the same application can use different versions of libraries without conflicts, because they don't share classloaders.

OSGi is cool but complex. As Brown repeatedly pointed out, while it can solve many integration and dependency problems, it is not trivial to learn. Those who want to use OSGi must be prepared to learn many fundamental concepts, especially around classloaders. Also, components may appear and disappear at will in a dynamic manner. How must the application behave when a dependency suddenly disappears?
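To make the service dynamics a little more concrete, here is a minimal sketch of a bundle activator that registers a service and looks one up, coping with the possibility that the dependency is absent. (The `GreeterService` interface and messages are my own invention; only the `org.osgi.framework` and `org.osgi.util.tracker` types come from the OSGi spec, and the generic signatures shown are from later releases of that spec.)

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// Hypothetical service interface, for illustration only.
interface GreeterService {
    String greet(String name);
}

public class Activator implements BundleActivator {
    private ServiceTracker<GreeterService, GreeterService> tracker;

    public void start(BundleContext context) {
        // Register our implementation so other bundles can discover it.
        context.registerService(GreeterService.class, new GreeterService() {
            public String greet(String name) { return "Hello, " + name; }
        }, null);

        // Track the service: OSGi notifies the tracker as instances
        // appear and disappear at run-time.
        tracker = new ServiceTracker<>(context, GreeterService.class, null);
        tracker.open();

        GreeterService svc = tracker.getService();
        if (svc != null) {              // the dependency may be absent!
            System.out.println(svc.greet("OSGi"));
        }
    }

    public void stop(BundleContext context) {
        tracker.close();   // our registration is torn down with the bundle
    }
}
```

The null check is the whole point: in an OSGi world, code must be written to survive a dependency vanishing mid-flight.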

There are 3 basic architectures that one can follow with OSGi:

1. OSGi Server
Examples: Spring DM Server, Apache Felix, Equinox
Advantages: Fewer deployment hassles, consistent OSGi environment
Disadvantages: Can't use any existing JEE server

2. Embedded OSGi server via bridge (OSGi container runs within a JEE container using a servlet bridge)
Examples: Equinox servlet bridge
Advantages: Can use JEE server, application is still OSGi
Disadvantages: (I didn't have time to note this down)

3. Embedded OSGi via plugin
Example: Atlassian plugin
Advantages: Can use JEE server, easier migration, fewer library hassles
Disadvantages: More complicated, susceptible to deployment issues

I learnt a number of new terms and names of software products in this session.
- Spring DM (Dynamic Modules)
- Peaberry using Guice
- Declarative Services (part of the OSGi specification)
- BND and Spring Bundlor are tools used to create bundles
- Felix OSGi is embedded as part of Atlassian's existing plugin framework.
- Spring DM is used to manage services.
- Automatic bundle transformation is a cool feature that Brown mentioned but did not describe.

There are three types of plugins:
- Simple - no OSGi
- Moderate - OSGi via a plugin descriptor
- Complex - OSGi via Spring XML directly

Brown gave us a demo using JForum and showed that even if a legacy application isn't sophisticated enough to know about new features, modules with such features can be incorporated into it.

I had been under the impression that OSGi was only used by rich client applications on the desktop. This session showed me that it's perhaps even more useful for web applications on the server side.

My last session of the day was a hands-on lab (Bring Your Own Laptop) called "Java Technology Strikes Back on the Client: Easier Development and Deployment", conducted by Joey Shen and a number of others who cruised around and lent a helping hand whenever one got stuck. It looks like Linux support for JavaFX has just landed (Nigel Eke, if you're reading this, you were right after all), and a very quiet landing it has been, too. But it's still only for NetBeans 6.5.1, not NetBeans 6.7 beta. At any rate, I was more interested in just checking to see if it worked. It turned out that there were a couple of syntax errors in the sample app which had to be corrected before the application could run. I was very keen to try the drag-and-drop feature with which one could pull an applet out of the browser window and install it as a desktop icon (Java Web Start application). Unfortunately, this feature requires a workaround on all X11 systems (Unix systems using the X Window System), because the window manager intercepts drag commands and applies them to the window as a whole. There was a workaround described for applets but none for a JavaFX application. As time was up, I had to leave without being able to see drag-and-drop in action. Never mind, I'm sure samples and documentation will only become more widely available as time goes on, and Sun will undoubtedly make JavaFX even easier to use in future.

Wednesday, June 03, 2009

JavaOne 2009 Day Three (03/06/2009)

The second day of JavaOne proper (the third day if including CommunityOne) started with a General Session on mobility conducted by Sony Ericsson.

The main host was Christopher David (Head of Developer and Partner Engagement). At about the same time he started his talk, Erik Hellman (Senior Java Developer) got started on a challenge - to develop a mobile application by the end of the session that would display all Tweets originating within a certain radius of the Moscone Center that contained the word 'Java'.

Rikko Sakaguchi (Corporate VP, Head of Creation and Development) and Patrik Olsson (VP, Head of Software Creation and Development) joined David on stage, and between the three of them, kept the Sony Ericsson story going.

One of the demos they attempted failed (controlling a Playstation 3 with a mobile phone), but then it isn't a demo if it doesn't fail.

One of the points made was about the difference between a mobile application and a traditional web application. A traditional web application has its UI on the client device, with business logic and data on a server across the network. A mobile application has the UI, parts of business logic and parts of data and platform access on the device, and the remaining data and business logic across the network. I don't quite buy this distinction. I don't necessarily see a difference between traditional distributed applications and mobile applications. So the device form factor is a bit different and the network is wireless, but that's hardly a paradigm shift. Application architectures like SOFEA are meant to unify all such environments.

The history of Sony Ericsson's technology journey is somewhat familiar. In 2005, they switched from C/C++ to Java. Java became an integral part of the Sony Ericsson platform rather than an add-on. In 2007, they created a unique API on top of the base Java platform. In 2009, the focus is on reducing fragmentation of platforms. The bulk of the APIs are standard, while a few (especially at the top of the stack) are proprietary to SE.

As expected from a company that boasts 200 million customers worldwide and 200 million downloads a year, SE has a marketplace called PlayNow Arena. SE has been selling ringtones, games, music, wallpapers, themes, movies and lately, applications. I'm frankly surprised that it's taken them so long to get to selling applications.

Since time-to-market is important, SE promises software developers a turnaround time of 30 days from submission to appearance in the virtual marketplace, with assistance provided throughout the process.

And yes, Erik Hellman had completed his application with 10 minutes to spare by the time the session ended.

The next session I attended was something completely new to me. This was called "Taking a SIP of Java Technology: Building voice mashups with SIP servlets" by RJ Auburn of Voxeo Corp. The Session Initiation Protocol (SIP) is mainly used in telephony, but can apparently be used to bootstrap any other interaction between systems. SIP has more handshaking than HTTP, with many more exception cases, so it's a chattier protocol than HTTP. It's also commonly run over UDP rather than TCP, so SIP itself needs to do much more of its own reliability and exception handling than HTTP.

RFC 3261 that describes SIP is reportedly a rather dry document to read. Auburn recommended The Hitchhiker's Guide to SIP, and also some Open Source info at www.voip-info.org and some industry sites (www.sipforum.org and www.sipfoundry.org).

There seem to be two main ways to develop applications with SIP. One uses XML, the other uses programming APIs. The XML approach is a 90% solution, while the API approach provides more options but is more complex.

There are two sister specifications in the XML approach - VoiceXML and CCXML (Call Control XML). VoiceXML supports speech and touchtone input and a form-filling model called the FIA (Form Interpretation Algorithm), but very limited call control. CCXML, in contrast, manages things like call switching, teleconferencing, etc. The two work in a complementary fashion, with CCXML defining the overall "flow logic", and VoiceXML defining the parameters of a particular "message" (for want of better terms).

The Java API is based on the SIP Servlet API (www.sipservlet.com). JSR 116 was SIP Servlet 1.0, and JSR 289 is SIP Servlet 1.1 (just released). JSR 309 (the Java Media Server API) is based on the CCXML model, but is still in draft.
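As a taste of the API approach, here's a minimal sketch of a JSR 289 SIP servlet that answers an incoming call. (The class name and behaviour are mine; `SipServlet`, its `do*` callbacks and `createResponse` are from the javax.servlet.sip API.)

```java
import java.io.IOException;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;

// Sketch only: a SIP servlet mirrors the HTTP servlet model, but with
// SIP's richer set of methods (INVITE, BYE, ACK, ...) as callbacks.
public class HelloSipServlet extends SipServlet {

    @Override
    protected void doInvite(SipServletRequest req) throws IOException {
        // 180 Ringing, then 200 OK - the kind of multi-step handshake
        // that makes SIP chattier than HTTP's single request/response.
        req.createResponse(180).send();
        req.createResponse(200).send();
    }

    @Override
    protected void doBye(SipServletRequest req) throws IOException {
        req.createResponse(200).send();  // acknowledge call teardown
    }
}
```

Deploying this requires a SIP servlet container (such as the one bundled with SailFin/Glassfish), so treat it as an illustration of the programming model rather than a runnable sample.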

SIP is complex, so Voxeo has a simpler API called Tropo, available in a number of scripting languages. This is not Open Source, but is free for developers, and the hosting costs about 3 cents a minute. There are also traditional software licensing models available.

There are some phone-enabled games available, and Vooices is a good example.

More information is available on tropo.com and www.voxeo.com/free.

The next session I attended was "What's New in Groovy 1.6?" by Guillaume Laforge himself. The talk was based on an InfoQ article written by him earlier.

In brief, the improvements in Groovy 1.6 are:

- Performance - the compiler is 3x to 5x faster than in 1.5, and the runtime is between 150% and 460% faster.
- Syntax changes - multiple assignment (the ability to set more than one variable in a statement), optional return statements
- Ability to use Java 5 annotations
- Ability to specify dependencies
- Swing support (griffon.codehaus.org)
- JSR 223 (scripting engine) support is built-in and can invoke scripting in any language
- OSGi readiness

My next session was on providing REST security. The topic was called "Designing and Building Security into REST Applications" by Paul Bryan, Sean Brydon and Aravindan Ranganathan of Sun. The bulk of the talk focused on OpenSSO and its REST-like interface. But as the presenters confessed, the OpenSSO "REST" API is just a facade over a SOAP API. In OpenSSO 2.0, they visualise a truer RESTian API.

The other term I had never heard of before was the OAuth protocol. Apparently, OAuth is a protocol for delegated authorisation that plugs into HTTP in much the same way as Basic and Digest Authentication do.

The last session that I attended today was on "Coding REST and SOAP together" by Martin Grebac and Jakub Podlesak of Sun. Although the topic was entirely serious, it felt like cheating at a certain level.

The premise is that we implement SOAP and REST on top of POJOs using annotations defined by JAX-WS and JAX-RS, respectively. So can't we just add both sets of annotations to the same POJO and enable SOAP and REST simultaneously?

I can see one very obvious problem with this approach. REST forces one to think Service-Oriented because the verbs refer to what the service consumer does to resources. It's what I've earlier called the Viewpoint Flip, and I believe it's an essential part of Service-Oriented thinking. But SOAP doesn't enforce any such viewpoint. It's possible to have an RMI flavour to the JAX-WS SOAP methods. So there's no substitute for proper design.

JavaOne 2009 Day Two (02/06/2009)

The first day of JavaOne proper (or the second day if you include CommunityOne on Monday) started with a two hour General Session.

A brief documentary showing the parallels between a 14 year old boy named Justin and 14 year old Java set the tone for the session. Each year from 1996, a different facet of Java has come to the fore. And in 2009, just as Java turned 14, the 14 year old boy was introduced as a gamer and Java developer. Sun is appealing to a new generation of developers now, and the emphasis on JavaFX reflects a generational shift in more ways than one.

Jonathan Schwartz, Sun's CEO, came up on stage next, and was the MC for almost the rest of the session. I have always been impressed by Schwartz for the direction he has given Sun as CEO, and I was impressed by his poise, confidence and easy manner as he conducted the proceedings. He did goof up a bit, though. He announced the release of Java 7 as of today, but it turned out later that it was just a milestone release. The final release of Java 7 isn't due till early 2010.

The theme for the morning was the power of a simple idea - to wit, "Write Once, Run Anywhere (WORA)".

An impressive array of customers and business partners trooped up to join Schwartz one after the other, all testifying to the wonderful goodies that Java technology had delivered to their organisations.

First up was James Baresse (VP Architecture, Platforms and Systems, eBay). eBay uses the integrated Java stack to run their business - the application framework, content management, batch systems and SOA.

Next in the witness box was Alan Brenner (Senior VP for the BlackBerry platform at RIM). For those who don't know already, the BlackBerry is a Java phone. RIM uses Java end-to-end - the core apps, the development platform, the works. Brenner showed off a third party app called Zavnee that integrates with the BlackBerry's email and phonebook APIs as well as community sites to provide a more insightful address book. According to Brenner, the open APIs of BlackBerry allow ISVs to build applications that integrate tightly with the device.

Don Eklund (Executive VP of Advanced Technologies at Sony Pictures Home Entertainment) came up to crow about the victory of Blu-Ray over its competition, or so it seemed to me. I'm not a great fan of Sony, which represents evil media. Eklund sang the praises of Java's openness. I couldn't help mentally completing his statement. Every company loves it when its building blocks are open and free, and when their own products are closed and proprietary. How about opening up the Blu-Ray format to everyone, Sony?

Lowell McAdam (President and CEO, Verizon Wireless) spoke about Verizon's Open Development Initiative last year that dealt with hardware, and their Open Development for Applications this year that deals with software. Verizon is opening up its APIs to enable third party developers to exploit functions such as presence, location, friends and family, etc.

Diane Bryant (Executive VP and CIO at Intel) talked about the collaboration between Intel and Sun to deliver high Java performance on the Intel platforms (Atom, Core and Xeon). They claim to have achieved an 8x raw performance improvement since 2006.

The final partner to come up on stage was Paul Ciciore (Chief Technologist at the Chicago Board Options Exchange). The CBOE is the world's largest options exchange, but it's a relatively new company. The Java-based implementation was conceived in 1998 and launched in 2001. From 5000 transactions per second in 2001, CBOE has grown to 300,000 transactions per second in 2009. They're also driving latency down lower and lower.

The next phase of the General Session talked about what I would characterise as an attempt by Sun to shift gears and start to provide the technology tools for applications that deal with media content in addition to those for the traditional enterprise applications that have been its mainstay so far. It's a risky gambit for Sun, and I'm not very optimistic about their chances of success. As an example, if Adobe trivially reengineers Flex to generate JVM bytecode, it's bye-bye, JavaFX.

There were some nifty demos of JavaFX applications, including one that ran on an HDTV set. There was another demo by Sun's Director of Engineering for the JavaFX platform, Nandini Ramani (wrongly pronounced (what else?) Ramaani). She showed off a nifty development environment for JavaFX that allowed content to be edited and targeted simultaneously for different devices. There's no compile-build cycle for JavaFX, this being a scripting language.

James Gosling, the inventor of Java, then came up on stage. This is one person I put in the same category as Bob Metcalfe, the inventor of Ethernet. Both are technology geniuses who have an astonishing blind spot about Open Source. [Metcalfe once famously referred to it as "Open Sores".] Gosling made a snide remark about how the Linux platform doesn't let developers build applications for it unless they're willing to make it a labour of love. His self-styled mission is to help developers convert a labour of love into a day job that puts food on the table. That's all very well, but someone should tell him that Open Source works very well as it is, thank you very much. It's a benign pyramid scheme, where a new generation of programmers comes along to build the next layer of an open platform on top of the last. The commoditisation continues up the stack. Too bad for people wanting to make money along the way. But that's not a problem for Open Source, nor is it a problem for the world. It's only in Gosling's (and Metcalfe's) world that a failure to monetise something equates to failure, period.

Gosling presented the Duke's Choice award to Mark Gerhard (CEO of Jagex, maker of the popular RuneScape game). About 20% of Jagex's users are paying customers. RuneScape will be available on the Java Store that I wrote about earlier. The Java Store provides tools to "ingest" and "distribute" applications, and Sun is still working on a suitable cash register implementation to enable effective commercialisation. Suggestions from the community are welcome...

Randy Bryant (Dean of Computer Science at Carnegie Mellon University) received another Duke's Choice award from Gosling on behalf of Randy Pausch, the inventor of the Alice Project. Alice, like MIT's Scratch, is a means of teaching computer programming to kids. Alice 3 goes a step further by introducing kids to Java programming, which Scratch doesn't do. [I'm planning to introduce Alice to my twelve year old.]

Somewhere along the way, Scott McNealy, Sun's chairman and former CEO, came up on stage. I have previously referred to McNealy as a dinosaur, and I must say that he literally looked and sounded old. Gone was the youthful "puckish humour" that magazines used to refer to. He seemed resigned to an imminent retirement. Reading between the lines, though he made some condescending remarks to Jonathan Schwartz about the wonderful way he had led the company even though he was a relative newcomer, I sensed some resentment and disapproval. I guess it's an open secret that Schwartz preferred an IBM takeover of Sun, and McNealy won out with the Oracle deal. I wonder how long Schwartz will last with Oracle CEO Larry Ellison as his boss. Open sourcing Java was the best thing Schwartz has done for the world, and must have turned Ellison's dreams of monetisation to ashes.

The piece de resistance of the morning's session was then the surprise introduction of Ellison himself.

Ellison said many nice things about Java (e.g., "Except for the database, everything at Oracle is Java-based"). Again, I was tempted to complete his sentence that "Java was attractive to us because it was open" with the clincher "and because it has allowed us to close everything we build above it". I hope the emerging Open Source enterprise stack eats Oracle's lunch in the coming recession.

Ellison said a few things that I found very significant. He wanted to see OpenOffice being redeveloped using JavaFX. He's now in a position to drive that initiative through investment, and something tells me he'll do it. It's more than just his traditional hatred of Microsoft. There's probably a big support subscription-based revenue stream that will come from the corporate market when OpenOffice supplants MS-Office.

Ellison also pledged not to make changes to the Java model but to "expand investment" in it, a commitment that drew relieved applause from the mostly geek audience. I guess there's indirect benefit to Oracle from Java's success, largely from its ability to prevent competitors from succeeding with their proprietary equivalents.

In another very significant statement, Ellison also hinted that Sun and Oracle would jointly introduce a mobile device to compete with devices running Google's Android. I think JavaFX is the weapon that Oracle-Sun are betting on.

Someone (I forget whether it was Gosling or someone else) also made a snide remark about Ajax that made me prick up my ears. I have previously wondered on this blog why people even bother with RIA tools like Flash, Silverlight and JavaFX when Ajax/DHTML is getting to be so much more capable. I realise now that I've been looking at it from the viewpoint of the community at large. From the viewpoint of Sun (and now Oracle), there is a silent and desperate struggle for survival going on. If Ajax succeeds, it will further reduce the relevance of these vendors. JavaFX is critically important to Sun. It's irrelevant to the world.

The second session I attended was "Deploying Java Technology to the Masses: How Sun deploys the JavaFX runtime" by Thomas Ng from Sun.

My major grouse against JavaFX is that it still isn't available for the Linux platform. Nigel Eke commented on my earlier blog post that Sun was going to announce Linux support for JavaFX at JavaOne, but it hasn't happened yet.

Very briefly, Ng's talk covered the following points - the deployment mechanism is JNLP, pioneered by Java Web Start almost a decade ago. There's a tool called the JavaFX Packager that is bundled with the JavaFX SDK. This helps to automate the creation of JNLP files, which is otherwise an error-prone undertaking. In future, the same tool will be used to create JNLP-based launchers for applets and Java Web Start applications, but as of today, that's still on the wishlist.

A potentially very useful feature (albeit a potential security headache) is the ability of the JavaFX runtime to permit cross-domain XML traffic. This removes the crippling constraint that prevents current-generation browser applications from reaching back to any servers but their own to make service calls.

A future version of JNLP will also remove the current requirement for an absolute "codebase" URI in the jnlp tag. This relaxation will make applications more readily portable between servers.

The next session was another General Technical Session hosted by Robert Brewin (Distinguished Engineer, VP and CTO for Application Platform Software at Sun). It was called "Intelligent Design: The Pervasive Java Platform".

[My "Press/Analyst" badge entitled me to special seating at the front of the hall, but since the perk didn't come with a table for my laptop, I declined the honour and chose to sit in the bleachers with a mortal friend.]

Items in brief:

Project Kenai allows developers to collaborate, share and engage. It seems to be a richer version of CVS or Subversion as a repository, because it has some of the characteristics of a social networking site. It will soon include continuous integration capabilities through its incorporation of the Hudson continuous build tool.

JDK 7 (prematurely announced by Jonathan Schwartz) will feature 3 things when it debuts - a modular platform, a multilingual VM and productivity enhancements for developers.

A new dependency information file format for classes means that "the classpath is dead", a statement that drew cheers. A diagram showed the new packages in JDK 7 and their dependencies.

It will also make it easier to create deployment packages for various target platforms (such as .deb files for Ubuntu Linux).

There is a project called the "Da Vinci Machine" to enable the Java VM to be a true multi-language VM. For the first time in a decade, there will be new bytecode that will effectively upgrade the JVM architecture.

Then there's the cheekily-named Project Coin, which deals with small change(s) to Java.

Many minutes were wasted on a description of JEE 6. I have never understood the rationale for some of the components of JEE, and why Sun insists on throwing good money after bad. There's JSF 2.0 and EJB 3.1. The EJB cancer has reached Java's lymph nodes and is now threatening to infect healthy web server tissue through "Lite EJB", which can be bundled inside .war files and does not require .ear files or a heavyweight container. Why, why, why??? I thought Spring framework chemotherapy had forced EJB into remission, but this seems much more virulent than I thought.

Bean validation (JSR 303) seems to be a way to ensure consistent data validation across tiers (through JSF in the presentation tier and JPA in the persistence tier). I can't help thinking this is just the Java version of XForms. XForms carries an XML document payload that is defined through a schema that can then be used to validate this instance data in any tier.
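For example, JSR 303 constraints are declared once on the domain class and can then be enforced by any tier. (The class and the particular constraints below are my own illustration; the annotations are from the javax.validation.constraints package.)

```java
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

// Constraints live on the model, so JSF can check them in the
// presentation tier and JPA can check them before persisting -
// one definition, validated in every tier.
public class Customer {

    @NotNull
    @Size(min = 2, max = 50)
    private String name;

    @Min(18)
    private int age;

    // getters/setters omitted for brevity
}
```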

There's a lot of emphasis on the Glassfish server that I just can't understand. Again, it's probably critical for Sun to own an application server that isn't just Tomcat+Spring. It just isn't that relevant to the rest of the world. Glassfish has ambitions of being scalable, "from embedded to carrier-grade".

One corollary of a more modular Java that I deduced and that Sun acknowledged is that in the future, there will be just one Java. No more ME, SE and EE variants.

I next attended a rather boring and depressing session that was optimistically called "Tips and Tricks for Ajax Push and Comet Apps". This was conducted by Jean-François Arcand of Sun and Ted Goddard of ICEsoft Technologies. It was depressing because at the end of the day, there seems to be no satisfactory method to implement server push that will scale well and work on all browsers. The lecture was a tour of various unsatisfactory compromises. The three standard techniques (Polling, Ajax push (long polling) and HTTP Streaming) all have their drawbacks. I personally like HTTP Streaming with its chunked response (multiple as-and-when partial responses to a single request). However, this doesn't seem to work very well in practice because of the lack of a client-side API for reading such an input stream, and the tendency for proxies to cache responses instead of passing them on, incomplete though they may be.

My conclusion is that push is impossible with HTTP. We need to implement AMQP over TCP for true bidirectional messaging. It will take time, but it will happen within a decade. And then these problems (and their kludgy solutions) will seem laughable in retrospect.

The last session that I attended was "Spring Framework 3.0: New and Notable" by Rod Johnson himself. The hall was huge but absolutely packed.

Points in brief:

Spring 3.0 will use Java 5+. Anyone wishing to use Java 1.4 must remain with Spring 2.5. Three cheers for courage. Backward-compatibility is a two-edged sword, and I'm glad Spring has made the leap (no pun intended).

There is a new expression language (EL) that allows developers to navigate bean properties, invoke methods and construct value objects. It also allows us to parameterise annotated code, e.g.:

@Value("#{systemProperties['favorite.color']}")
private String favoriteColor;

where the term within "#{}" is an EL expression.
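
Under the hood, the bean-property navigation such an EL performs boils down to reflective getter traversal over a dotted path. Here is a minimal framework-free sketch of that idea; the class and property names are my own illustration, not Spring's API:

```java
import java.lang.reflect.Method;

public class PropertyPath {
    // Walk a dotted path like "address.city" through JavaBean getters
    static Object navigate(Object bean, String path) throws Exception {
        for (String prop : path.split("\\.")) {
            String getter = "get" + Character.toUpperCase(prop.charAt(0))
                          + prop.substring(1);
            Method m = bean.getClass().getMethod(getter);
            bean = m.invoke(bean);
        }
        return bean;
    }

    // Illustrative beans
    public static class Address {
        public String getCity() { return "Sydney"; }
    }
    public static class Person {
        public Address getAddress() { return new Address(); }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(navigate(new Person(), "address.city"));
        // prints "Sydney"
    }
}
```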

There is an elegant implementation of REST based on Spring MVC (and what was unsaid was that it was not based on JAX-RS). The Spring MVC ViewResolver is a natural point to implement the various resource representations that REST provides a common interface to.

There are several MVC enhancements. @RequestHeader provides access to HTTP request headers. @CookieValue provides access to cookies. It's also possible to have application-specific annotations.

There is a new Java configuration mechanism. Annotations can now be placed in dedicated "configuration classes" rather than in POJOs themselves. In other words, these are XML files all over again, but in Java 5 syntax rather than using angle brackets. I like this, because I was uneasy about annotations. One of the major benefits of externalising configuration is the ability to decouple wiring from the application's own components and make it possible to substitute implementations without recompiling code. Annotations in code are a step backwards, from this viewpoint. Annotations in dedicated configuration classes are three-quarters of a step forward again. We still need a recompile, but only of the config classes.
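
The idea is easy to demonstrate even without Spring itself: all wiring decisions live in one dedicated class, so substituting an implementation means recompiling only that class. The names below are my own illustration, not Spring's configuration API:

```java
public class ConfigDemo {
    interface GreetingService { String greet(String name); }

    static class PoliteGreeting implements GreetingService {
        public String greet(String name) { return "Good day, " + name; }
    }
    static class CasualGreeting implements GreetingService {
        public String greet(String name) { return "Hi " + name; }
    }

    // The "configuration class": all wiring lives here. Swapping
    // PoliteGreeting for CasualGreeting means recompiling only this
    // class, never the application components that consume the service.
    static class AppConfig {
        GreetingService greetingService() { return new PoliteGreeting(); }
    }

    public static void main(String[] args) {
        GreetingService svc = new AppConfig().greetingService();
        System.out.println(svc.greet("Rod"));
        // prints "Good day, Rod"
    }
}
```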

Rod Johnson pointed out that previous versions of Spring had a rather ad hoc approach to the web side of an application, but that Spring 3.0 now features a consistent stack of components. The base is Spring MVC. Above this layer are Spring Web Flow, Spring BlazeDS (for Flex) and Spring JavaScript. At the top layer is Spring Faces (ugh!). Spring BlazeDS will make it very easy to build Flex apps with Spring. The diagram made no mention of Spring Portlet MVC, which is perhaps just as well. [The portlet specification is another monstrosity that continues to give organisations grief. As an "integration" technology, it belongs more in the problem space than in the solution space.]

The Spring Java config mechanism is type-safe and does not depend on string names. It therefore has more robust bean-to-bean dependencies. It allows inheritance of configuration. And it allows object creation using arbitrary method calls.

Johnson also introduced Spring Roo, the Domain-Driven Design tool and Ben Alex's brainchild. There is a major functional overlap between Roo and Grails, which Johnson did point out, but his logical flowchart to decide which one to use was a bit contrived, in my opinion. Roo and Grails are competitors, and the fact that they are uncomfortable co-components in the Spring portfolio does not make them part of a seamless product line with an implied "horses for courses" decision-making logic. To put it bluntly, if Roo had appeared a couple of years earlier, Grails would probably have been stillborn. The delay in Roo's release gave Grails its break, and now they're competing for developer mindshare. Well, may the best framework win.

Monday, June 01, 2009

JavaOne 2009 Day One (01/06/2009)

I got an invite to the 2009 edition of JavaOne as a blogger, with a Press Pass and everything ;-).

The conference is at the Moscone Center in San Francisco.

June 1 saw the CommunityOne prelude to the actual JavaOne conference that would follow from the 2nd to the 4th.

The General Session opened with Dave Douglas (Senior VP, Cloud Computing and Chief Sustainability Officer at Sun) anchoring the event and introducing other speakers. The major focus of the morning's General Session was Cloud Computing, with a secondary emphasis on OpenSolaris.

Lew Tucker, Sun's Cloud Computing CTO (yes Virginia, there is such a title), came up on stage with Dave and the two had a conversation in which they described the "Sun Cloud" (a meteorological contradiction in terms, but never mind).

I was glad to see that they actually called out an important point that usually tends to be glossed over at events like this - Open Source is not enough; it's open APIs and interoperability that are really important. Hear, hear! Obviously, we're still a long way from having interoperable clouds, because most first movers seem to want to lock in users even as they tout the Open Source underpinnings of their solutions.

Lew Tucker demonstrated what I thought was a very cool tool - the Sun Cloud Compute Service, or at least the GUI tool that's used to configure it. Using a drag-and-drop interface, it's possible to set up a complete network, with firewalls, switches and all manner of servers, then have that configuration created in the cloud.

The virtual servers can even have the MAC addresses of their VNICs set during this configuration.

I was glad to see that besides OpenSolaris, some of the other server platforms available were Fedora, Ubuntu and CentOS. I've observed that the Linux support subscriptions from Red Hat and Novell/Suse are generally priced in such a way as to make mainframes look cheap. Since I don't believe that mainframes can be cheap, I'm happy about the CentOS, Fedora and Ubuntu server options. Depending upon the Sun Cloud pricing, we may actually see lower costs associated with cloud computing. [But with Oracle's acquisition of Sun, I'm not holding my breath.]

The Sun Cloud Compute Service doesn't seem to be in General Availability mode yet. Perhaps it's just released to a few partners right now. Some of those partners came up on stage to briefly discuss their wares. These were companies like Moonwalk, Vertica and WebAppVM. I didn't have time to digest what these people were doing with the Sun Cloud, but they did seem to derive some benefit from smooth virtualisation.

Someone mentioned an emerging standard for specifying virtual configuration topologies called OVF or some such. I'll need to keep an eye out for it.

Sun is also reportedly working with the Center for Internet Security to develop standards for securing virtual machines. The servers that Sun Cloud will provide on its drag-and-drop palette will all be hardened by default.

Some of the tools being offered for cloud computing are NetBeans (for development), Hudson (for continuous integration) and Kenai (for repositories).

There was a panel hosted by John Fowler that delved into what's new and cool in OpenSolaris. There does seem to be a fair bit there. Sun Open Storage is based on ZFS. DTrace is another component that's built into the OS core, so everything that runs on OpenSolaris can be dynamically instrumented with very low performance impact. Project Crossbow is another initiative that deals with network virtualisation. Compared to Linux, OpenSolaris appears to have a few advantages, such as networking efficiency (OpenSolaris can continue to process network traffic in polling mode even when the CPU is maxed out, for example). There was also some discussion about the ground-up support for flash memory. Flash storage ranks between DRAM and disk in both speed and cost and represents a good compromise for certain functions (caching of certain types of data). ZFS can integrate DRAM, flash storage and disk.

My question remains the same as after Sun Developer Day in Sydney - why can't they port these great features to Linux and dump Solaris? We need to move ahead with a single great OS, and Linux is it. Petty sibling rivalry doesn't help.

I then attended two very similar sessions back to back, both dealing with Metro-based Web Services. The first, by Jonathan Scudder of Identitas, dealt with securing Web Services using Metro and OpenSSO. The second, by Harold Carr of Sun, covered a few other topics, such as connecting to Amazon's web services.

My takeaway from Scudder's talk was that there are still a number of non-intuitive steps that need to be followed to make Web Services work using Metro and OpenSSO. There were some cool aspects to NetBeans, though. If the developer drags a Web Service operation into the code of a service consumer, it actually expands into a try-catch block with the actual call inside. OpenSSO is configured as an STS (Secure Token Service). I wonder if CAS can be similarly configured; I think I remember hearing that CAS 4 has a lot of such goodies. Also from Scudder's demo, I realise that Wireshark is a neat tool for analysing network traffic.

Harold Carr's talk convinced me that the time is not yet ripe to integrate Amazon's Web Services into one's application. It appears that Amazon does not follow the WS-* standards with respect to specifying security: there's no use of WS-SecurityPolicy in Amazon's WSDL, and that sucks, in my opinion. I'll wait for Amazon to fall in line with industry best practice before I go anywhere near their services.

I then attended a talk on "Developing RESTful Web Services with JAX-RS and Jersey" by Marc Hadley and Paul Sandoz of Sun. I had attended a similar talk before, so there wasn't much that was new. I was just struck by how glacial the pace of development has been. I have been following JSR 311 for the past two years and I'm disappointed that we aren't much further ahead.

A few things Hadley and Sandoz said did strike me as significant, though.

One is that the venerable "web.xml" file will become optional in the Servlet 3.0 spec.

Another is that content negotiation, which is a big part of REST, is coded for very simply with @Produces and @Consumes annotations on methods. Jersey (the reference JAX-RS implementation) takes care of the gory details.
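
Stripped of those gory details, content negotiation is just matching the request's Accept header against the representations a resource can produce. Here is a deliberately simplified, framework-free sketch of that matching step; it ignores quality factors and partial wildcards, which real JAX-RS handles:

```java
import java.util.*;

public class ContentNegotiation {
    // Pick the first acceptable type, honouring the client's listed order.
    // A bare "*/*" falls back to the server's first offering; no match
    // at all would mean a 406 Not Acceptable response.
    static String negotiate(String acceptHeader, List<String> produces) {
        for (String candidate : acceptHeader.split(",")) {
            String type = candidate.split(";")[0].trim();  // drop ";q=..."
            if (type.equals("*/*")) return produces.get(0);
            if (produces.contains(type)) return type;
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> produces = Arrays.asList("application/xml",
                                              "application/json");
        System.out.println(negotiate("application/json, text/html", produces));
        // prints "application/json"
        System.out.println(negotiate("*/*", produces));
        // prints "application/xml"
    }
}
```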

There's a difference between "JAX-RS aware" and "non JAX-RS aware" containers. With the latter, the URI must point to a servlet, and an init parameter must point to an Application class. In the former, the URI can directly point to an Application class.
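
Concretely, for a non-JAX-RS-aware container the wiring is a plain servlet declaration whose init parameter names the Application subclass. A sketch of the web.xml fragment, from memory of Jersey's servlet class (the application class name here is purely illustrative; check the Jersey documentation for the exact details):

```xml
<servlet>
  <servlet-name>jersey</servlet-name>
  <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
  <init-param>
    <!-- Points the servlet at the JAX-RS Application subclass -->
    <param-name>javax.ws.rs.Application</param-name>
    <param-value>com.example.MyApplication</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>jersey</servlet-name>
  <url-pattern>/resources/*</url-pattern>
</servlet-mapping>
```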

No new technology is complete without its share of horror, and with JAX-RS, the horror seems to be Web Beans (JSR 299). This is of course the hideous combination of JSF and EJB (two monstrosities that have no right to exist independently, and certainly no right to exist as a combination). I'm not quite sure where exactly in JAX-RS these Web Beans are going to raise their ugly heads, but I'm already shuddering.

There are apparently many implementations of JAX-RS in the works: Jersey, Restlet, JBoss RESTEasy, Apache CXF, Triaxrs and Apache Wink.

In JAX-RS 2.0, which is currently underway, the roadmap includes a client API, "quality of source" (some content formats such as XML are superior to others and should be used if the client is indifferent between them), form data and MIME multipart, declarative hyperlinking and representation templating.

The final session I attended on the first day was a hands-on workshop that dealt with the use of an Eclipse toolset for Glassfish. This was a fairly good session with well thought-out materials (a DVD with all required software and examples, and a printout of the assignments) and helpful instructors. It was a "Bring Your Own Laptop" workshop, and fortunately the lab had power outlets and Ethernet cables for all. I had whinged about the lack of Eclipse tools for Glassfish on some earlier occasion, and my prayers were evidently heard. I didn't find the Eclipse toolkit for Glassfish the most intuitive, but that's perhaps a function of learning: I shouldn't criticise it before I learn to use it well.

I look forward to tomorrow, when JavaOne proper begins.