Friday, June 18, 2010

Annotations and the Servlet 3 Specification


Now I'm seriously beginning to wonder about the authors of the Java Servlet 3 specification. This time, it's not their architectural wisdom (or the lack of it) regarding session state. It's about something even more basic to the Java language - the nature of annotations.

Chapter 8 deals with annotations that developers may use to mark their classes. Anything about the following strike you as crazy?

Classes annotated with @WebServlet class (sic) MUST extend the javax.servlet.http.HttpServlet class.
[...]
Classes annotated with @WebFilter MUST implement javax.servlet.Filter.

If we must extend a base class anyway, I wonder what the annotation is for. Just to avoid putting a few lines of config code into the web.xml file?
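
For concreteness, here's roughly what the annotation buys you under the spec as written - a small sketch (the servlet name and URL mapping are my own example):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The annotation replaces the <servlet> and <servlet-mapping> entries in web.xml,
    // but the class must still extend HttpServlet and override the doXXX() methods.
    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.getWriter().println("Hello from an annotated servlet");
        }
    }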

I would have thought an annotation like @WebServlet would be capable of turning any POJO into a servlet class, not just subclasses of HttpServlet! And we could have annotations like @GetMethod, @PostMethod, @PutMethod and @DeleteMethod to annotate any arbitrary methods in the class. We shouldn't have to rely on overriding the doGet(), doPost(), doPut() and doDelete() methods.
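
To make the wish concrete, something like this purely hypothetical sketch - @GetMethod and @PostMethod don't exist in the spec, and the class is a plain POJO:

    // Hypothetical annotations, NOT part of the Servlet 3 specification.
    @WebServlet("/orders")
    public class OrderResource {

        @GetMethod
        public String listOrders(HttpServletRequest req) {
            return "all orders";
        }

        @PostMethod
        public void createOrder(HttpServletRequest req) {
            // create an order from the request body
        }
    }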

The same applies with @WebFilter. It could be used to annotate any arbitrary class, and @FilterMethod could then annotate any arbitrary method in the class.

Look at the way JSR 311 and Spring REST work.

I'm disappointed in the Servlet spec committee. If you're going to use annotations, then use them smartly.

It wouldn't be out of place here to comment on the horrific class hierarchy of the Servlet spec. It certainly shows its age: it dates from an era when interfaces were underappreciated and inheritance hierarchies were built out of concrete and abstract classes rather than interfaces. Naming conventions hadn't matured yet, either.

E.g., my application's concrete class "MyServlet" must either extend abstract class "GenericServlet", which in turn partially implements interface "Servlet", or implement "Servlet" directly. This by itself isn't so bad, but read on.

My application's concrete class "MyHttpServlet" must only extend abstract class "HttpServlet" which extends abstract class "GenericServlet", which in turn partially implements interface "Servlet". There is no interface to implement.

And why GenericServlet should also implement ServletConfig is something I don't understand. There's a HAS-A relationship between a servlet and its configuration. It's not an IS-A relationship.

HttpServlet should have been an interface extending Servlet.

The abstract class GenericServlet (that partially implements the Servlet interface) should have been called AbstractServlet instead, and there could have been a concrete convenience class called SimpleServlet or BasicServlet that extended AbstractServlet and provided a default implementation that subclasses could override.

Similarly, there should have been an abstract class called AbstractHttpServlet that partially implemented the HttpServlet interface and only provided a concrete service() method, dispatching requests to doXXX() methods that remained unimplemented. There could have been a concrete convenience class called SimpleHttpServlet or BasicHttpServlet that extended the AbstractHttpServlet class and provided a default implementation that subclasses could override.

My application's concrete classes should have had the option to implement one of the interfaces directly or to extend one of the abstract or convenience classes.
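
In outline, the hierarchy I have in mind would look something like this - a hypothetical sketch, not the actual javax.servlet API, with minimal request/response types standing in for the real ones:

    // Hypothetical redesign of the servlet class hierarchy.
    interface Request  { String getMethod(); }
    interface Response { void setStatus(int code); }

    interface Servlet { void service(Request req, Response res); }

    interface HttpServlet extends Servlet { }

    // Partial implementation: only service() is concrete, dispatching to doXXX().
    abstract class AbstractHttpServlet implements HttpServlet {
        public final void service(Request req, Response res) {
            if ("GET".equals(req.getMethod()))       doGet(req, res);
            else if ("POST".equals(req.getMethod())) doPost(req, res);
            else                                     res.setStatus(405);
        }
        protected abstract void doGet(Request req, Response res);
        protected abstract void doPost(Request req, Response res);
    }

    // Concrete convenience class whose defaults subclasses may override.
    class BasicHttpServlet extends AbstractHttpServlet {
        protected void doGet(Request req, Response res)  { res.setStatus(405); }
        protected void doPost(Request req, Response res) { res.setStatus(405); }
    }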

Oh well, too late now.

Thursday, June 17, 2010

REST and the Servlet 3 Specification


I've been going through the Java Servlet 3 specification, and I just came across this gem at the start of the chapter on Sessions:

The Hypertext Transfer Protocol (HTTP) is by design a stateless protocol. To build effective Web applications, it is imperative that requests from a particular client be associated with each other. [...] This specification defines a simple HttpSession interface that allows a servlet container to use any of several approaches to track a user’s session [...]

I don't think the spec authors have been adequately exposed to the REST philosophy, or they wouldn't be talking so casually about how "imperative" sessions are to build "effective" Web applications. A few years ago, I would have read this without batting an eyelid. Now, I had to keep myself from falling off my chair in shock. One would think spec writers of advanced technology would know a bit better. At the very least, they could have written something like this:

The Hypertext Transfer Protocol (HTTP) is by design a stateless protocol, and it is strongly recommended that Web applications be built in a stateless manner to be effective, with all state management delegated to the persistence tier. If, for legacy or other reasons, it is unavoidable to maintain in-memory state in a web application, the servlet specification defines a simple HttpSession interface that provides a relatively painless way to manage it. Application developers should however be aware of the severe scalability and recoverability issues that will accompany the use of this feature.

There! Now I feel much better.

Monday, June 08, 2009

JavaOne 2009 Day Five (05/06/2009)

The final day of JavaOne 2009 began with a General Session titled "James Gosling's Toy Show". This featured the creator of Java playing host to a long line of people representing organisations that were using Java technology in highly innovative and useful ways. Many of them got Duke's Choice awards.

First up was Ari Zilka (CEO, Terracotta) who was given the award for a tool that makes distributed JVMs appear like one, thereby providing a new and different model of clustering and scalability. Terracotta also allows new servers and JVMs to be added on the fly to scale running Java applications.

Brendan Humphreys ("Chief Code Poet", Atlassian) received the award for Atlassian's tool Clover. [Brendan is from Atlassian's Sydney office, and I've met him there on occasion when Atlassian hosts the Sydney Java User Group meetings.] Clover is about making testing easier by identifying which tests apply to which part of an application. When changing part of an application, Clover helps to run only the tests that apply to that part of the code.

Ian Utting, Poul Henriksen and Darin McCall of BlueJ were recognised (though not with a Duke's Choice award) for their work on Greenfoot, a teaching tool for children. Most of their young users are in India and China, but it's not clear how many there are, because they only interact with the Greenfoot team through their teachers.

Mark Gerhard (CEO of Jagex) was called up to talk about RuneScape. [Mark Gerhard received the Duke's Choice Award on Day 2, as diligent readers of this blog would remember.] This time, the focus was not on the game itself, but on the infrastructure it took to run it. According to Gerhard, Jagex runs the world's second-biggest online forum. There is no firewall in front of the RuneScape servers (!), so they get the full brunt of their user load. The servers haven't been rebooted in years. Jagex had to build their own toolchain, because a toolchain needs to understand how to package an application for streaming, which off-the-shelf equivalents don't know how to do. Jagex runs commodity servers (about 20?) and their support team has just 3 people. Considering that their user base numbers 175 million (10 million of whom are active at any time), this is a stupendous ratio. Of course, Jagex has about 400 other staff, mainly the game developers. Jagex builds their libraries and frameworks in-house, and they maintain feature parity with their commercial competitor, Maya. I found it curious that Gerhard was cagey when asked which version of Java they used. Why would that need to be a secret? All he would say was that we could guess the version from the fact that their servers hadn't been rebooted in 5 years.

By way of vision, Gerhard said that OpenGL would be standard on cell phones in a year, and Jagex's philosophy is that "There's no place a game shouldn't be".

The next people on stage were two researchers from Sun itself - Simon Ritter and Angela Caicedo. [Caicedo had been at the Sydney Developer Day only a couple of weeks earlier.] Ritter demonstrated a cool modification of the Wii remote control, especially its infra-red camera. The remote talks Bluetooth to a controller. I didn't grasp the details of how the system was built, although I heard the terms Wii RemoteJ and JavaFX being mentioned. Ritter held a screen in front of him, and a playing card was projected onto it. Nothing hi-tech there. When he rotated the screen by 90 degrees, the projected image rotated as well, which was interesting. But what brought applause was when he flipped the screen around, and the projected image switched to the back of the card! He also showed how a new card could be projected onto the screen by just shaking the screen a bit (shades of the iPod Shuffle there).

Caicedo demonstrated a cool technology that she thought might be useful to people like herself who had young children with a fondness for writing on walls. With a special glove that had an embedded infra-red chip, she could "draw" on a screen with her finger, because a projector would trace out whatever she was drawing based on the detected position of her finger at any given time. The application was a regular paint application, allowing the user to select colours from a toolbar and even mix them on a palette.

Tor Norbye (Principal Researcher, Sun) then gave the audience a sneak preview of the JavaFX authoring tool that has not yet been released. Very neat animations can be designed. It's possible to drag an image to various positions and map them to points in time. Then the tool interpolates all positions between them and shows an animation that creates an effect of smooth motion, bouncing images, etc. There are several controls available, like buttons and sliders, and it's possible to visually map between the actions of controls and behaviours of objects. It reminded me of the BeanBox that came with Java 1.1, which showed how JavaBeans could be designed to map events and controls. The lists of events and actions appear in dropdowns through introspection.

There's no edit-compile cycle, which speeds up development. Norbye showed how the same animation could be repurposed to different devices and form factors. There's a master-slave relationship between the main screen and the screens for various devices, such that any change made to the main screen is reflected in the device-specific screens, but any specific overrides made on a device-specific screen remain restricted to that screen alone.

Fritjof Boger-Engelhardtsen of Telenor gave us a demo of a technology I don't pretend to understand. In the mobile world, the SIM card platform is very interesting to operators. The next generation of SIM cards will be TCP/IP connected nodes, with motion sensors, WiFi, etc., embedded within the card. It will be a JavaCard 3 platform. It's possible to use one's own SIM card to authenticate to the network. He gave us a demo of a SunSpot sensor device connected to a mobile phone, controlling the phone's menu by moving the SunSpot. The phone network itself is oblivious to this manipulation. More details are available at http://playsim.dev.java.net.

Brad Miller (Associate Professor, Worcester Polytechnic Institute Robotics Research Center) and Derek White (Sun Labs) showed some videos of the work done by High School students. Given a kit of parts, the students have to put together robots to participate in the "US First", an annual robotics competition. A large part of the code has been ported across from C/C++ to Java, and the project is always on the lookout for volunteer programmers. Interested people can go to www.usfirst.org. WPI got a Duke's Choice award for this.

Sven Reimers (System Engineer and Software Architect, ND Satcom) received a Duke's Choice award for the use of Java in analysing input from satellites.

Christopher Boone (President and CEO, Visuvi, Inc) showed off the Visuvi visual search engine. Upload an image and the software analyses it on a variety of aspects and can provide deep insights. Simple examples are uploading images of paintings and finding who the painter was. More useful and complex uses are in the area of cancer diagnosis. Visuvi improves the quality and reduces the cost of diagnosis of prostate cancer. The concordance rate (probability of agreement) between pathologists is only about 60%, and the software is expected to achieve much better results. The Visuvi software performs colour analysis, feature detection and derives spatial relationships. There's some relationship to Moscow State University that I didn't quite get. At any rate, Visuvi is busy scanning in 400 images a second (at 3000 megapixels and 10 MB each)!

Sam Birney and Van Mikkel-Henkel spoke about Mifos, a web application for institutions that provide microfinance to poor communities. Microfinance is inspired by the work done by Mohammed Yunus of Grameen Bank. This is an Open Source application meant to reduce the barriers to operation of cash-strapped NGOs. The challenge is to scale. Once again, volunteers are wanted: http://www.mifos.org/developer/, and not just for development but also for translation into many less widely known languages. Mifos won a Duke's Choice Award.

Manuel Tijerino (CEO, Check1TWO) told of how many of his musician friends were struggling to find work at diners. So he created a JavaFX based application that allows artistes to upload their work to the Check1TWO site, and it's automatically available on any Check1TWO "jukebox" at any bar or disco. Regular jukeboxes are normally tied up in studio contracts, so the Check1TWO jukeboxes provide a means for struggling artistes to reach over the heads of the studios and connect directly with their potential audiences.

Zoltan Szabo and Balazs Lajer (students at the University of Pannonia, Hungary) showed off their project that won the first prize at RICOH's student competition. Theirs is a Java application that runs on a printer/scanner and is capable of scoring answer sheets.

Marciel Hernandez (Senior Engineer, Volkswagen Electronics Research Lab and Stanford University) and Greg Follella (Distinguished Engineer, Sun) talked about Project Bixby. This is about automating the testing of high-speed vehicles through "drive-by-wire". The core of the system is Java RTS (Real-Time System). The primary focus is improving safety. Stanford University is building the control algorithms. It should be possible to control the car when unexpected things happen, which is especially common on dirt racetracks. There's no need to put a test driver at risk. Project Bixby leads to next-generation systems that are faster, such as more advanced ABS (Anti-lock Braking System) and newer stability control systems.

Finally, there was a video clip of the LincVolt car, which turns a classic Lincoln Continental into a green car like the Prius, but with some differences. The Prius has parallel electrical and petrol engines. The LincVolt has batteries driving the wheels all the time, with the petrol engine only serving to top up the battery pack when it starts to run down. What's the connection with Java? The control systems and visual dashboard are all Java.

This concluded the General Session.

I then attended a session titled "Real-World Processes with WS-BPEL" by Murali Pottlapelli and Ron Ten-Hove. The thrust of the whole session was that WS-BPEL as a standard was incomplete and that real-world applications need more capabilities than WS-BPEL 2.0 delivers. A secondary theme of the session was that an extension called BPEL-SE developed for OpenESB is able to address the weaknesses of WS-BPEL.

[My cynical take on this is that every vendor uses the excuse of an anaemic spec to push their own proprietary extensions. If there is consensus that WS-BPEL 2.0 doesn't cut the mustard, why don't the vendors sit down together and produce (say) a WS-BPEL 3.0 spec that addresses the areas missing in 2.0? I'm not holding my breath.]

The structure was fairly innovative. Ten-Hove would talk about the failings of the standard and Pottlapelli would then describe the solution implemented in BPEL-SE. [The session would have been far better if Pottlapelli had been able to communicate more effectively. I'm forced to the painful realisation that many Indians, while being technically competent, fail to impress because their communication skills are lacking.]

These are the shortcomings of WS-BPEL 2.0 that are addressed by BPEL-SE:

- Correlation is hard to use, with multiple artifacts to define and edit. BPEL-SE provides a correlation wizard that hides much of this complexity.
- Reliability is not defined in the context of long-running business processes susceptible to crashes, maintenance downtime, etc. BPEL-SE defines a state machine where the state of an instance's execution is persisted and can be recovered on restart without duplication.
- High Availability is not defined. BPEL-SE on GlassFish ESB can be configured to take advantage of the GlassFish cluster, where instances migrate to available nodes.
- Scalability is an inherent limitation when dealing with long-running processes that hold state even when idle. BPEL-SE defines "dehydrate" and "hydrate" operations to persist and restore state. As the dehydrate/hydrate operation is expensive, two thresholds are defined (the first to dehydrate variables alone, and the next to dehydrate entire instances).
- Retry is clumsy in WS-BPEL, because the <invoke> operation doesn't support retries. Building retry logic into the code obscures the real business logic. BPEL-SE features a configurable number of retries and a configurable delay between retries. It also supports numerous actions on error.
- Throttling is difficult to achieve in WS-BPEL, which makes the system vulnerable to denial of service attacks and bugs that result in runaway processes. BPEL-SE can limit the number of accesses to a "partnerlink" service node, even across processes. This helps to achieve required SLAs.
- Protocol headers are ignored in WS-BPEL by design, as it is meant to be protocol-agnostic. However, many applications place business information in protocol headers such as those of HTTP and JMS. BPEL-SE provides access to protocol-specific headers as an extension to <from> and <to> elements, covering HTTP, SOAP and JMS headers (strictly speaking, JMS isn't a protocol but an API) as well as file headers.
- Attachments are non-standard in WS-BPEL, with some partners handling document routing inline and some as an attachment. BPEL-SE adds the extensions "inlined" and "attachment" to the <copy> element.
- BPEL expressions suffer because the language relies on XPath, and XPath 1.0 has the limitation that a boolean false held in a non-empty node is interpreted as true. BPEL-SE uses JXPath and is enhanced to be schema-aware: false is not interpreted as true, and integers do not end in ".0".
- XPath limitations hamper WS-BPEL, because XPath has a limited type system, it is impossible to set a unique id on the payload, it's a challenge to extract expensive items in a purchase order document if it spans multiple activities, etc. Any kind of iteration across an XML structure is difficult, and this is partly due to the limitations of XPath and partly due to those of BPEL. With BPEL-SE, Java calls can be embedded in any XPath expression, with syntax adapted from Xalan. It also supports embedded JavaScript (Rhino with E4X), which makes XML parsing much easier.
- Fault handling is problematic in WS-BPEL because standard faults have no error messages, which makes them hard to debug. Standard and non-standard faults can be hard to distinguish. This complicates fault handlers, requiring multiple catches for standard faults. The WSDL 1.1 fault model has been carried too far. BPEL-SE fault handler extensions propagate errors and faults not defined in WSDL as system faults. Associated errors are wrapped in the fault message. Standard faults are associated with a fault message and failure details like activity name are carried along. BPEL-SE catches all standard and system faults.

I asked the presenters if they were making the case that BPEL-SE had addressed all the limitations of WS-BPEL or if they felt there were some shortcomings that still remained. Ten-Hove's answer was that subprocesses were a lacuna that hadn't yet been addressed.

My next session was "Cleaning with Ajax: Building Great Apps that Users Will Love" by Clint Oram of SugarCRM. I regret to say that this session was short on subject matter and ended in just half an hour. The talk seemed to be more a sales pitch for SugarCRM than a discussion of Ajax principles. Even the few design principles that were discussed were fairly commonsensical and did not add much insight.

For what it is worth, let me relate the points that sort of made sense.

Quite often, designers are asked to "make the UI Ajax". This doesn't say what is really expected. Fewer page refreshes? Cool drag-and-drop? Animation?

There are also some downsides of Ajax:
- Users get lost (browser back and forward buttons and bookmarks are broken, the user doesn't always see what has changed on the page, and screen elements sometimes keep resizing constantly, irritating the user)
- There are connection issues (it can be slow, the data can't be viewed offline, and applications often break in IE6)
- There are developer headaches (mysterious server errors (500), inconsistent error handling, PHP errors break Ajax responses)

Some design questions need to be asked before embarking on an Ajax application:
- Does the user really want to see the info?
- Will loading this info be heavy on the server?
- Does the info change rapidly?
- Is there too much info?

The "lessons learned", according to Oram, are:

- Use a library like YUI, Dojo or Prototype instead of working with raw JavaScript.
- Handle your errors, use progress bars to indicate what is happening, don't let the application seem like it's working when there has been an error.
- Ajax is a hammer, and not every problem is a nail.
- Load what you need, when you need it.

As one can see, there was nothing particularly insightful in this talk, and it was a waste of half-an-hour.

One interesting piece of information from this session was that IBM WebSphere sMash (previously Project Zero) is a PHP runtime that runs on the JVM and provides full access to Java libraries.

SugarCRM 5.5 Beta is out now, and adds a REST API to the existing SOAP service API.

I'm not a fan of SugarCRM. The company's "Open Source" strategy is basically to provide crippleware as an Open Source download, and to sell three commercial versions that do the really useful stuff. I don't know how many people are still fooled by that strategy.

The next session I attended was quite possibly the best one I have attended over these five days. It was called "Resource-Oriented Architecture and REST" by Scott Davis (DavisWorld Consulting, Inc).

Unfortunately, the talk was so engrossing that I forgot to take notes in many places, so I cannot do full justice to it here :-(. Plus, Davis whizzed through examples so fast I didn't have time to transcribe them fully, and lost crucial pieces of code.

Davis is the author of the columns "Mastering Grails" and "Practically Groovy" on IBM Developerworks. He's also a fan of David Weinberger's book "Small Pieces Loosely Joined", which describes the architecture of the web. Although ten years old, the book is still relevant today. [David Weinberger is also the author of The Cluetrain Manifesto.]

I knew Groovy was quite powerful and cool, but in Davis's hands it was pure magic. In a couple of unassuming lines, he was addressing URLs on the web and extracting information that would have taken tens of lines with SOAP-based Web Services. I must have transcribed them incorrectly, because I can't get them to work on my computer. I'll post those examples when I manage to get them working.

A really neat example was when he found a website that provided rhyming words to any word entered on a form. He defined a "metaclass" method on the String class to enable it to provide rhymes by invoking the website. Then a simple statement like "print 'java'.rhyme()" resulted in "lava".

As Davis said, REST is the SQL of the Web.

Davis then talked about syndication with the examples of RSS and Atom. Three things draw one in (chronology (newest first), syndication (the data comes to you) and permalinks (which allow you to "pass it on")). He also mentioned the Rome API and called it the Hibernate of syndication.

What's the difference between RSS and Atom? I hadn't heard it put this way before. Davis called RSS "GETful" and Atom "RESTful".

He then did a live grab of a Twitter feed and identified the person in the room who had sent it.

In sum, this session was more a magic show than a talk. It made me determined to learn Groovy and how to use it with RESTful services.

The last session I attended at JavaOne 2009 was "Building Enterprise Java Technology-based Web Apps with Google Open Source Technology" by Dhanji Prasanna of Google.

Prasanna covered Google Guice, GWT (pronounced "gwit") and Web Driver, and provided a sneak preview of SiteBricks.

I'm somewhat familiar with GWT but none of the others, so my descriptions below may make no sense. It's a verbatim transcription of what Prasanna talked about.

Guice helps with testing and testability. Its advantages are:
- Simple, idiomatic AOP
- Modularity
- Separation of Concerns
- Reduction of state-aware code
- Reduction of boilerplate code

It enables applications to horizontally scale. Important precepts are:

- Type safety, leading to the ability to reason about programs
- Good citizenship (modules behave well)
- Focus on core competency
- Modularity
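
As a reminder of what this looks like in practice, here is a minimal Guice sketch (my own example, not code from the talk):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;

    interface Greeter { String greet(String name); }

    class SimpleGreeter implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // A module declares bindings; wiring stays out of the business classes.
    class GreetingModule extends AbstractModule {
        @Override protected void configure() {
            bind(Greeter.class).to(SimpleGreeter.class);
        }
    }

    class GreetingClient {
        private final Greeter greeter;
        @Inject GreetingClient(Greeter greeter) { this.greeter = greeter; }
        String run() { return greeter.greet("JavaOne"); }
    }

    public class GuiceDemo {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new GreetingModule());
            System.out.println(injector.getInstance(GreetingClient.class).run());
        }
    }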

GWT is a Java-to-JavaScript compiler. It supports a hosted mode and a compiled mode. Core Java libraries are emulated. It's also typesafe.
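
For context, the client side of a GWT module is ordinary Java implementing EntryPoint - a minimal sketch (my own, not from the talk):

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.event.dom.client.ClickEvent;
    import com.google.gwt.event.dom.client.ClickHandler;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.ui.Button;
    import com.google.gwt.user.client.ui.RootPanel;

    // Compiled to JavaScript by the GWT compiler; runs as plain Java in hosted mode.
    public class HelloModule implements EntryPoint {
        public void onModuleLoad() {
            Button button = new Button("Say hello");
            button.addClickHandler(new ClickHandler() {
                public void onClick(ClickEvent event) {
                    Window.alert("Hello from GWT");
                }
            });
            RootPanel.get().add(button);
        }
    }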

The iGoogle page is an example of what look like "portlet windows", but which are independent modules that are "good citizens".

Unfortunately, social sites like OpenSocial and Google Wave have different contracts, so modules may not be portable across them.

Google Gin is Guice for GWT. It runs as Guice in hosted mode and compiles directly to JavaScript. There is no additional overhead of reflection.

Types are Java's natural currency. Guice and GWT catch errors early, facilitate broad refactorings, prevent unsafe API usage and make it easier to reason about programs. They're essential to projects with upwards of ten developers, because these benefits are impossible to get with raw JavaScript.

Web Driver is an alternative to the Selenium acceptance testing tool, and apparently the two codebases are now merging. Web Driver has a simpler, blocking API that is pure Java. It uses a browser plugin instead of injected JavaScript, offers fast DOM interaction and a flexible API, and supports native keyboard and mouse emulation. Web Driver also supports clustering.
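
A flavour of that API, as I understand it (a sketch, not code shown in the session):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class SearchTest {
        public static void main(String[] args) {
            // Drives a real browser; calls block until the action completes.
            WebDriver driver = new FirefoxDriver();
            driver.get("http://www.google.com");
            WebElement query = driver.findElement(By.name("q"));
            query.sendKeys("JavaOne 2009");
            query.submit();
            System.out.println("Page title: " + driver.getTitle());
            driver.quit();
        }
    }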

Then Prasanna provided a preview of SiteBricks, which is a RESTful web framework. The focus is on HTTP, and lessons have been learned from JAX-RS. There are statically typed templates with rigorous error checking. It's concise and uses type inference algorithms. It's also fast and raises compilation errors if anything is wrong.

SiteBricks modules are Guice servlet modules. One can ship any module as a widget library. Any page is injectable. It supports the standard web scopes (request, session) and also a "conversation" scope.

SiteBricks has planned Comet ("reverse Ajax") support. The preview release is available on Google Code Blog.

That concludes my notes on JavaOne 2009. Admittedly, a lot of this has been mindless regurgitation in the interests of reporting. If these make sense to readers (even if they make no sense to me), well and good.

In the days to come, I'll ruminate on what I've learned and post my thoughts.

Thursday, June 04, 2009

JavaOne 2009 Day Four (04/06/2009)

The third day of JavaOne proper (the fourth day if including CommunityOne on Monday) started with a General Session on interoperability hosted by (of all companies) Microsoft. It shouldn't be too surprising, actually, because Sun and Microsoft buried the hatchet about 5 years ago and started to work on interoperability. Time will tell if that was an unholy alliance or not.

Dan'l Lewin (Corporate VP, Strategy and Emerging Business Development, Microsoft) took the stage for some opening remarks. What he said resonated quite well, i.e., that users expect things to just work. Data belongs to users and should move freely across systems. The key themes are interoperability, customer choice and collaboration.

Lewin pointed to TCP/IP as the quintessential standard. Antennae and wall plugs may change from country to country, but TCP/IP is the universal standard for connectivity, which is why the Internet just works. [I could add many other standards to this list, which would also have "just worked" but for Microsoft!]

Lewin added that the significant partners that Microsoft is working with in the Java world are Sun, the Eclipse Foundation and the Apache Software Foundation. The key areas where Sun and Microsoft work together are:

- Identity Management, Web SSO and Security Services
- Centralised Systems Management
- Sun/Microsoft Interoperability Center
- Desktop and System Virtualisation
- Java, Windows, .NET

Identity Management interoperability has progressed a great deal with the near-universal adoption of SAML. On virtualisation, where host and guest systems are involved, Lewin put it very well when he said Sun and Microsoft control each other's environment "in a respectful way."

A website mentioned in his slide pack was www.interoperabilitybridges.com.

Steven Martin (Senior Director, Developer Platform Productivity Management, Microsoft) took over from Lewin and started off with "we come in peace and want to talk about interoperability".

He introduced Project Stonehenge, a project under Apache, with code available under the Apache Licence. This uses IBM's stock trading application to demonstrate component-level interoperability between the Microsoft and Sun stacks.

Greg Leake of Microsoft and Harold Carr of Sun then provided a live demo of this interoperability.

The stock trading application has four tiers, - the UI, a business logic tier, a further business logic tier housing the order processing service, and the database tier. The reason for splitting the business logic tier into two was to demonstrate not just UI tier to business logic tier connectivity but also service-to-service interop. The Microsoft stack was built on .NET 3.5, with ASP.NET as the UI technology and WCF the services platform. The Sun stack was based on JSP for the UI and the Metro stack for services, running on Glassfish. Both stacks pointed back to a SQLServer database.

The first phase of the demo showed the .NET stack running alone, with Visual Studio breakpoints to show the progress of a transaction through the various tiers. Then the ASP.NET tier was reconfigured to talk to the Metro business logic layer, and the control remained with the Java stack thereafter. In the third phase of the demo, the Metro business service layer called the order processing service in the .NET stack. The application worked identically in all three cases, effectively demonstrating bidirectional service interoperability between .NET and Metro Web Services.

Martin also mentioned a useful principle for interoperability, "Assume that all applications are blended, and that all client requests are non-native". This is analogous, I guess, to that other principle, "Be conservative in what you send, and liberal in what you accept".

He also referred to "the power of choice with the freedom to change your mind", which I thought was a neat summarisation of user benefits.

Aisling MacRunnels (Senior VP, Software Marketing, Sun) joined Steven Martin on stage to talk about the Sun-Microsoft collaboration, which isn't just limited to getting the JRE to run on Windows. Microsoft also cooperates with Sun to get other "Sun products" like MySQL, VirtualBox and OpenOffice to work on the Microsoft platform. The last item must be particularly galling to the monopoly. Microsoft is also working to get Sharepoint authentication happening against OpenSSO using SAML2. Likewise, WebDAV is being supported in Sun's Storage Cloud. In other words, when both parties support open standards, their interoperability improves.

I think it speaks more of the quality and tightness of a standard than of vendor cooperation when systems work together. Sun and Microsoft shouldn't need to talk to each other or have a cozy relationship. Their systems need to just work together in the first place.

The next session I attended was "Metro Web Services Security Usage Scenarios" by Harold Carr and Jiandong Guo of Sun. Carr is Metro Architect and Guo is Metro Security Architect, so we pretty much had the very brains of the project talking to us.

There wasn't very much in the lecture that was specific to Metro. Most of the security usage patterns were general PKI knowledge, but I must say the diagrams that illustrated the logic flow in each pattern were top class. I have seen many documents on PKI, but these are some of the best. My only quibble with them is that they tend to use the single term "encrypt/decrypt" for two separate operations - encrypting/decrypting and signing/verifying.

Some of the interesting points they made were:

- The list of security profiles bundled with Metro will be refactored soon. Some will be dropped and new ones will be added.
- SSL performance is still better than WS-Security, even with optimisations such as derived keys and WS-SecureConversation. [WS-Security uses a fresh ephemeral key for every message, while WS-SecureConversation caches and reuses the same ephemeral key for the whole session.]

- Metro 2.0 is due in September 2009.

I then attended a session called "Pragmatic Identity 2.0: Simple, Open Identity Services using REST" by Pat Patterson and Ron Ten-Hove of Sun. Ten-Hove is also known for his work on Integration and JBI. He was the spec lead for JSR 208.

[As part of their demo, I realised that NetBeans has at least a couple of useful features I didn't know about earlier. There's an option to "create entity classes from tables" (using JPA, I presume), and another one to "create RESTful web services from entity classes".]

It's a bit difficult to describe the demo that Patterson gave. On the surface, it was a standard application that implemented user-based access control. One user saw more functions than another. The trick was in making it RESTful. Nothing had to be explicitly coded for, which was the cool part. The accesses were intercepted and authenticated/authorised without the business application being aware of it. As I said, it's hard to describe without the accompanying code.

The next session was on "The Web on OSGi: Here's How" by Don Brown of Atlassian. Brown is a very polished and articulate speaker and he kept his audience chuckling.

OSGi is something that has fascinated me for a while but I haven't got my head around it completely yet. At a high level, OSGi is a framework to allow applications to be composed dynamically from components and services that may appear and disappear at run-time. Dependencies are reduced or eliminated by having each component "bundle" use its own classloader, so version incompatibilities can be avoided. Different bundles within the same application can use different versions of libraries without conflicts, because they don't share classloaders.
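
At the code level, a bundle is an ordinary jar with extra manifest headers and, optionally, an activator - a minimal sketch (my own illustration, hypothetical names):

    // MANIFEST.MF headers in the bundle's jar declare identity and dependencies:
    //   Bundle-SymbolicName: com.example.greeter
    //   Bundle-Version: 1.0.0
    //   Bundle-Activator: com.example.greeter.Activator
    //   Import-Package: org.osgi.framework
    //   Export-Package: com.example.greeter.api

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class Activator implements BundleActivator {
        public void start(BundleContext context) {
            // Called when the bundle is started; register services here.
            System.out.println("Greeter bundle started");
        }

        public void stop(BundleContext context) {
            // Called when the bundle is stopped; clean up and unregister services.
            System.out.println("Greeter bundle stopped");
        }
    }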

OSGi is cool but complex. As Brown repeatedly pointed out, while it can solve many integration and dependency problems, it is not trivial to learn. Those who want to use OSGi must be prepared to learn many fundamental concepts, especially around the classloader. Also, components may appear and disappear at will in a dynamic manner. How must the application behave when a dependency suddenly disappears?

There are 3 basic architectures that one can follow with OSGi:

1. OSGi Server
Examples: Spring DM Server, Apache Felix, Equinox
Advantages: Fewer deployment hassles, consistent OSGi environment
Disadvantages: Can't use any existing JEE server

2. Embedded OSGi server via bridge (OSGi container runs within a JEE container using a servlet bridge)
Examples: Equinox servlet bridge
Advantages: Can use JEE server, application is still OSGi
Disadvantages: (I didn't have time to note this down)

3. Embedded OSGi via plugin
Example: Atlassian plugin
Advantages: Can use JEE server, easier migration, fewer library hassles
Disadvantages: More complicated, more susceptible to deployment issues

I learnt a number of new terms and names of software products in this session.
- Spring DM (Dynamic Modules)
- Peaberry using Guice
- Declarative Services (part of the OSGi specification)
- iPOJO
- BND and Spring Bundlor are tools used to create bundles
- Felix OSGi is embedded as part of Atlassian's existing plugin framework.
- Spring DM is used to manage services.
- Automatic bundle transformation is a cool feature that Brown mentioned but did not describe.

There are three types of plugins:
Simple - no OSGi
Moderate - OSGi via a plugin descriptor
Complex - OSGi via Spring XML directly

Brown gave us a demo using JForum and showed that even if a legacy application isn't sophisticated enough to know about new features, modules with such features can be incorporated into it.

I had been under the impression that OSGi was only used by rich client applications on the desktop. This session showed me that it's perhaps even more useful for web applications on the server side.

My last session of the day was a hands-on lab (Bring Your Own Laptop) called "Java Technology Strikes Back on the Client: Easier Development and Deployment", conducted by Joey Shen and a number of others who cruised around and lent a helping hand whenever one got stuck. It looks like Linux support for JavaFX has just landed (Nigel Eke, if you're reading this, you were right after all), and a very quiet landing it has been, too. But it's still only for NetBeans 6.5.1, not NetBeans 6.7 beta. At any rate, I was more interested in just checking to see if it worked. It turned out that there were a couple of syntax errors in the sample app which had to be corrected before the application could run. I was very keen to try the drag-and-drop feature with which one could pull an applet out of the browser window and install it as a desktop icon (Java Web Start application). Unfortunately, this feature requires a workaround in all X11 systems (Unix systems using the X Window System), because the window manager intercepts drag commands and applies it to the window as a whole. There was a workaround described for applets but none for a JavaFX application. As time was up, I had to leave without being able to see drag-and-drop in action. Never mind, I'm sure samples and documentation will only become more widely available as time goes on, and Sun will undoubtedly make JavaFX even easier to use in future.

Wednesday, June 03, 2009

JavaOne 2009 Day Three (03/06/2009)

The second day of JavaOne proper (the third day if including CommunityOne) started with a General Session on mobility conducted by Sony Ericsson.

The main host was Christopher David (Head of Developer and Partner Engagement). At about the same time he started his talk, Erik Hellman (Senior Java Developer) got started on a challenge - to develop a mobile application by the end of the session that would display all Tweets originating within a certain radius of the Moscone Center that contained the word 'Java'.

Rikko Sakaguchi (Corporate VP, Head of Creation and Development) and Patrik Olsson (VP, Head of Software Creation and Development) joined David on stage, and between the three of them, kept the Sony Ericsson story going.

One of the demos they attempted failed (controlling a Playstation 3 with a mobile phone), but then it isn't a demo if it doesn't fail.

One of the points made was about the difference between a mobile application and a traditional web application. A traditional web application has its UI on the client device, with business logic and data on a server across the network. A mobile application has the UI, parts of business logic and parts of data and platform access on the device, and the remaining data and business logic across the network. I don't quite buy this distinction. I don't necessarily see a difference between traditional distributed applications and mobile applications. So the device form factor is a bit different and the network is wireless, but that's hardly a paradigm shift. Application architectures like SOFEA are meant to unify all such environments.

The history of Sony Ericsson's technology journey is somewhat familiar. In 2005, they switched from C/C++ to Java. Java became an integral part of the Sony Ericsson platform rather than an add-on. In 2007, they created a unique API on top of the base Java platform. In 2009, the focus is on reducing fragmentation of platforms. The bulk of the APIs are standard, while a few (especially at the top of the stack) are proprietary to SE.

As expected from a company that boasts 200 million customers worldwide and 200 million downloads a year, SE has a marketplace called PlayNow Arena. SE has been selling ringtones, games, music, wallpapers, themes, movies and lately, applications. I'm frankly surprised that it's taken them so long to get to selling applications.

Since time-to-market is important, SE promises software developers a turnaround time of 30 days from submission to appearance in the virtual marketplace, with assistance provided throughout the process.

And yes, Erik Hellman had completed his application with 10 minutes to spare by the time the session ended.

The next session I attended was something completely new to me. This was called "Taking a SIP of Java Technology: Building voice mashups with SIP servlets" by RJ Auburn of Voxeo Corp. The Session Initiation Protocol (SIP) is mainly used in telephony, but can apparently be used to bootstrap any other interaction between systems. SIP has more handshaking than HTTP, with many more exception cases, so it's a more chatty protocol than HTTP. It's also typically carried over UDP rather than TCP, so SIP itself needs to do much more exception handling than HTTP.

RFC 3261 that describes SIP is reportedly a rather dry document to read. Auburn recommended The Hitchhiker's Guide to SIP, and also some Open Source info at www.voip-info.org and some industry sites (www.sipforum.org and www.sipfoundry.org).

There seem to be two main ways to develop applications with SIP. One uses XML, the other uses programming APIs. The XML approach is a 90% solution, while the API approach provides more options but is more complex.

There are two sister specifications in the XML approach - VoiceXML and CCXML (Call Control XML). VoiceXML supports speech and touchtone and a form-filling model called the FIA (Form Interpretation Algorithm), but offers very limited call control. CCXML, in contrast, manages things like call switching, teleconferencing, etc. The two work in a complementary fashion, with CCXML defining the overall "flow logic", and VoiceXML defining the parameters of a particular "message" (for want of better terms).

The Java API is based on the SIP Servlet API (www.sipservlet.com). JSR 116 was SIP Servlet 1.0, and JSR 289 is SIP Servlet 1.1 (just released). JSR 309 (the Java Media Server API) is based on the CCXML model, but is still in draft.
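
From what I gathered, a SIP servlet looks much like an HTTP servlet, with SIP-specific doXXX() methods - a rough sketch based on the SIP Servlet API (my own illustration, not the presenter's code):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipServletRequest;
    import javax.servlet.sip.SipServletResponse;

    public class AnswerPhoneServlet extends SipServlet {
        @Override
        protected void doInvite(SipServletRequest req)
                throws ServletException, IOException {
            // Accept every incoming call by sending a 200 OK.
            req.createResponse(SipServletResponse.SC_OK).send();
        }

        @Override
        protected void doBye(SipServletRequest req)
                throws ServletException, IOException {
            // Acknowledge call termination.
            req.createResponse(SipServletResponse.SC_OK).send();
        }
    }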

SIP is complex, so Voxeo has a simpler API called Tropo, available in a number of scripting languages. This is not Open Source, but is free for developers, and the hosting costs about 3 cents a minute. There are also traditional software licensing models available.

There are some phone-enabled games available, and Vooices is a good example.

More information is available on tropo.com and www.voxeo.com/free.

The next session I attended was "What's New in Groovy 1.6?" by Guillaume Laforge himself. The talk was based on an InfoQ article written by him earlier.

In brief, the improvements in Groovy 1.6 are:

- Performance - the compiler is 3x to 5x faster than in 1.5, and the runtime is between 150% and 460% faster.
- Syntax changes - multiple assignment (the ability to set more than one variable in a statement), optional return statements
- Ability to use Java 5 annotations
- Ability to specify dependencies
- Swing support (griffon.codehaus.org)
- JSR 223 (scripting engine) support is built-in and can invoke scripting in any language
- OSGi readiness

My next session was on providing REST security. The topic was called "Designing and Building Security into REST Applications" by Paul Bryan, Sean Brydon and Aravindan Ranganathan of Sun. The bulk of the talk focused on OpenSSO and its REST-like interface. But as the presenters confessed, the OpenSSO "REST" API is just a facade over a SOAP API. In OpenSSO 2.0, they envisage a truer RESTian API.

The other term I had never heard of before was the OAuth protocol. Apparently, OAuth is a style of authentication just like HTTP's Basic Authentication and Digest Authentication.

The last session that I attended today was on "Coding REST and SOAP together" by Martin Grebac and Jakub Podlesak of Sun. Although the topic was entirely serious, it felt like cheating at a certain level.

The premise is that we implement SOAP and REST on top of POJOs using annotations defined by JAX-WS and JAX-RS, respectively. So can't we just add both sets of annotations to the same POJO and enable SOAP and REST simultaneously?
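
In other words, something along these lines (a sketch of the premise, not the presenters' code):

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // One POJO exposed both as a SOAP endpoint (JAX-WS) and a REST resource (JAX-RS).
    @WebService
    @Path("/quotes")
    public class QuoteService {

        @WebMethod                    // SOAP operation
        @GET                          // and HTTP GET on /quotes/{symbol}
        @Path("{symbol}")
        @Produces("text/plain")
        public String getQuote(@PathParam("symbol") String symbol) {
            return symbol + ": 42.00";   // dummy quote
        }
    }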

I can see one very obvious problem with this approach. REST forces one to think Service-Oriented because the verbs refer to what the service consumer does to resources. It's what I've earlier called the Viewpoint Flip, and I believe it's an essential part of Service-Oriented thinking. But SOAP doesn't enforce any such viewpoint. It's possible to have an RMI flavour to the JAX-WS SOAP methods. So there's no substitute for proper design.

Tuesday, June 02, 2009

JavaOne 2009 Day Two (02/06/2009)

The first day of JavaOne proper (or the second day if you include CommunityOne on Monday) started with a two hour General Session.

A brief documentary showing the parallels between a 14 year old boy named Justin and 14 year old Java set the tone for the session. Each year from 1996, a different facet of Java has come to the fore. And in 2009, just as Java turned 14, the 14 year old boy was introduced as a gamer and Java developer. Sun is appealing to a new generation of developers now, and the emphasis on JavaFX reflects a generational shift in more ways than one.

Jonathan Schwartz, Sun's CEO, came up on stage next, and was the MC for almost the rest of the session. I have always been impressed by Schwartz for the direction he has given Sun as CEO, and I was impressed by his poise, confidence and easy manner as he conducted the proceedings. He did goof up a bit, though. He announced the release of Java 7 as of today, but it turned out later that it was just a milestone release. The final release of Java 7 isn't due till early 2010.

The theme for the morning was the power of a simple idea - to wit, "Write Once, Run Anywhere (WORA)".

An impressive array of customers and business partners trooped up to join Schwartz one after the other, all testifying to the wonderful goodies that Java technology had delivered to their organisations.

First up was James Baresse (VP Architecture, Platforms and Systems, eBay). eBay uses the integrated Java stack to run their business - the application framework, content management, batch systems and SOA.

Next in the witness box was Alan Brenner (Senior VP for the BlackBerry platform at RIM). For those who don't know already, the BlackBerry is a Java phone. RIM uses Java end-to-end - the core apps, the development platform, the works. Brenner showed off a third party app called Zavnee that integrates with the BlackBerry's email and phonebook APIs as well as community sites to provide a more insightful address book. According to Brenner, the open APIs of BlackBerry allow ISVs to build applications that integrate tightly with the device.

Don Eklund (Executive VP of Advanced Technologies at Sony Pictures Home Entertainment) came up to crow about the victory of Blu-Ray over its competition, or so it seemed to me. I'm not a great fan of Sony, which represents evil media. Eklund sang the praises of Java's openness. I couldn't help mentally completing his statement. Every company loves it when its building blocks are open and free, and when their own products are closed and proprietary. How about opening up the Blu-Ray format to everyone, Sony?

Lowell McAdam (President and CEO, Verizon Wireless) spoke about Verizon's Open Development Initiative last year that dealt with hardware, and their Open Development for Applications this year that deals with software. Verizon is opening up its APIs to enable third party developers to exploit functions such as presence, location, friends and family, etc.

Diane Bryant (Executive VP and CIO at Intel) talked about the collaboration between Intel and Sun to deliver high Java performance on the Intel platforms (Atom, Core and Xeon). They claim to have achieved an 8x raw performance improvement since 2006.

The final partner to come up on stage was Paul Ciciore (Chief Technologist at the Chicago Board Options Exchange). The CBOE is the world's largest options exchange, but it's a relatively new company. The Java-based implementation was conceived in 1998 and launched in 2001. From 5000 transactions per second in 2001, CBOE has grown to 300,000 transactions per second in 2009. They're also driving latency down lower and lower.

The next phase of the General Session talked about what I would characterise as an attempt by Sun to shift gears and start to provide the technology tools for applications that deal with media content in addition to those for the traditional enterprise applications that have been its mainstay so far. It's a risky gambit for Sun, and I'm not very optimistic about their chances of success. As an example, if Adobe trivially reengineers Flex to generate JVM bytecode, it's bye-bye, JavaFX.

There were some nifty demos of JavaFX applications, including one that ran on an HDTV set. There was another demo by Sun's Director of Engineering for the JavaFX platform, Nandini Ramani (wrongly pronounced (what else?) Ramaani). She showed off a nifty development environment for JavaFX that allowed content to be edited and targeted simultaneously for different devices. There's no compile-build cycle for JavaFX, this being a scripting language.

James Gosling, the inventor of Java, then came up on stage. This is one person I put in the same category as Bob Metcalfe, the inventor of Ethernet. Both are technology geniuses who have an astonishing blind spot about Open Source. [Metcalfe once famously referred to it as "Open Sores".] Gosling made a snide remark about how the Linux platform doesn't let developers build applications for it unless they're willing to make it a labour of love. His self-styled mission is to help developers convert a labour of love into a day job that puts food on the table. That's all very well, but someone should tell him that Open Source works very well as it is, thank you very much. It's a benign pyramid scheme, where a new generation of programmers comes along to build the next layer of an open platform on top of the last. The commoditisation continues up the stack. Too bad for people wanting to make money along the way. But that's not a problem for Open Source, nor is it a problem for the world. It's only in Gosling's (and Metcalfe's) world that a failure to monetise something equates to failure, period.

Gosling presented the Duke's Choice award to Mark Gerhard (CEO of Jagex, maker of the popular RuneScape game). About 20% of Jagex's users are paying customers. RuneScape will be available on the Java Store that I wrote about earlier. The Java Store provides tools to "ingest" and "distribute" applications, and Sun is still working on a suitable cash register implementation to enable effective commercialisation. Suggestions from the community are welcome...

Randy Bryant (Dean of Computer Science at Carnegie Mellon University) received another Duke's Choice award from Gosling on behalf of Randy Pausch, the inventor of the Alice Project. Alice, like MIT's Scratch, is a means of teaching computer programming to kids. Alice 3 goes a step further by introducing kids to Java programming, which Scratch doesn't do. [I'm planning to introduce Alice to my twelve year old.]

Somewhere along the way, Scott McNealy, Sun's chairman and former CEO, came up on stage. I have previously referred to McNealy as a dinosaur, and I must say that he literally looked and sounded old. Gone was the youthful "puckish humour" that magazines used to refer to. He seemed resigned to an imminent retirement. Reading between the lines, though he made some condescending remarks to Jonathan Schwartz about the wonderful way he had led the company even though he was a relative newcomer, I sensed some resentment and disapproval. I guess it's an open secret that Schwartz preferred an IBM takeover of Sun, and McNealy won out with the Oracle deal. I wonder how long Schwartz will last with Oracle CEO Larry Ellison as his boss. Open sourcing Java was the best thing Schwartz has done for the world, and must have turned Ellison's dreams of monetisation to ashes.

The piece de resistance of the morning's session was then the surprise introduction of Ellison himself.

Ellison said many nice things about Java (e.g., "Except for the database, everything at Oracle is Java-based"). Again, I was tempted to complete his sentence that "Java was attractive to us because it was open" with the clincher "and because it has allowed us to close everything we build above it". I hope the emerging Open Source enterprise stack eats Oracle's lunch in the coming recession.

Ellison said a few things that I found very significant. He wanted to see OpenOffice being redeveloped using JavaFX. He's now in a position to drive that initiative through investment, and something tells me he'll do it. It's more than just his traditional hatred of Microsoft. There's probably a big support subscription-based revenue stream that will come from the corporate market when OpenOffice supplants MS-Office.

Ellison also pledged not to make changes to the Java model but to "expand investment" in it, a commitment that drew relieved applause from the mostly geek audience. I guess there's indirect benefit to Oracle from Java's success, largely from its ability to prevent competitors from succeeding with their proprietary equivalents.

In another very significant statement, Ellison also hinted that Sun and Oracle would jointly introduce a mobile device to compete with devices running Google's Android. I think JavaFX is the weapon that Oracle-Sun are betting on.

Someone (I forget whether it was Gosling or someone else) also made a snide remark about Ajax that made me prick up my ears. I have previously wondered on this blog why people even bother with RIA tools like Flash, Silverlight and JavaFX when Ajax/DHTML is getting to be so much more capable. I realise now that I've been looking at it from the viewpoint of the community at large. From the viewpoint of Sun (and now Oracle), there is a silent and desperate struggle for survival going on. If Ajax succeeds, it will further reduce the relevance of these vendors. JavaFX is critically important to Sun. It's irrelevant to the world.

The second session I attended was "Deploying Java Technology to the Masses: How Sun deploys the JavaFX runtime" by Thomas Ng from Sun.

My major grouse against JavaFX is that it still isn't available for the Linux platform. Nigel Eke commented on my earlier blog post that Sun was going to announce Linux support for JavaFX at JavaOne, but it hasn't happened yet.

Very briefly, Ng's talk covered the following points - the deployment mechanism is JNLP, pioneered by Java Web Start almost a decade ago. There's a tool called the JavaFX Packager that is bundled with the JavaFX SDK. This helps to automate the creation of JNLP files, which is otherwise an error-prone undertaking. In future, the same tool will be used to create JNLP-based launchers for applets and Java Web Start applications, but as of today, that's still on the wishlist.

A potentially very useful feature (albeit a potential security headache) is the ability of the JavaFX runtime to permit cross-domain XML traffic. This removes the crippling constraint that prevents current-generation browser applications from reaching back to any servers but their own to make service calls.

A future version of JNLP will also remove the current requirement for an absolute "codebase" URI in the jnlp tag. This relaxation will make applications more readily portable between servers.

The next session was another General Technical Session hosted by Robert Brewin (Distinguished Engineer, VP and CTO for Application Platform Software at Sun). It was called "Intelligent Design: The Pervasive Java Platform".

[My "Press/Analyst" badge entitled me to special seating at the front of the hall, but since the perk didn't come with a table for my laptop, I declined the honour and chose to sit in the bleachers with a mortal friend.]



Items in brief:

Project Kenai allows developers to collaborate, share and engage. It seems to be a richer version of CVS or Subversion as a repository, because it has some of the characteristics of a social networking site. It will soon include continuous integration capabilities through its incorporation of the Hudson continuous build tool.

JDK 7 (prematurely announced by Jonathan Schwartz) will feature three things when it debuts - a modular platform, a multilingual VM and productivity enhancements for developers.

A new dependency information file format for classes means that "the classpath is dead", a statement that drew cheers. A diagram shown during the session illustrated the new packages in Java 7 and their dependencies.



It will also make it easier to create deployment packages for various target platforms (such as .deb files for Ubuntu Linux).



There is a project called the "Da Vinci Machine" to enable the Java VM to be a true multi-language VM. For the first time in a decade, there will be a new bytecode instruction (invokedynamic) that will effectively upgrade the JVM architecture.

Then there's the cheekily-named Project Coin, which deals with small change(s) to Java.

Many minutes were wasted on a description of JEE 6. I have never understood the rationale for some of the components of JEE, and why Sun insists on throwing good money after bad. There's JSF 2.0 and EJB 3.1. The EJB cancer has reached Java's lymph nodes and is now threatening to infect healthy web server tissue through "Lite EJB", which can be bundled inside .war files and does not require .ear files or a heavyweight container. Why, why, why??? I thought Spring framework chemotherapy had forced EJB into remission, but this seems much more virulent than I thought.

Bean validation (JSR 303) seems to be a way to ensure consistent data validation across tiers (through JSF in the presentation tier and JPA in the persistence tier). I can't help thinking this is just the Java version of XForms. XForms carries an XML document payload that is defined through a schema that can then be used to validate this instance data in any tier.
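
To make this concrete, here is a rough sketch (my own, not from the session) of what JSR 303 constraints look like on a domain class; the same annotated class can then be validated by the presentation tier and the persistence tier alike:

import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Customer {

    @NotNull
    @Size(min = 2, max = 50)   // checked identically in the web tier and before persistence
    private String name;

    @Min(18)                   // constraint values here are purely illustrative
    private int age;

    // getters and setters omitted
}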

There's a lot of emphasis on the GlassFish server that I just can't understand. Again, it's probably critical for Sun to own an application server that isn't just Tomcat+Spring. It just isn't that relevant to the rest of the world. GlassFish has ambitions of being scalable, "from embedded to carrier-grade".

One corollary of a more modular Java that I deduced and that Sun acknowledged is that in the future, there will be just one Java. No more ME, SE and EE variants.

I next attended a rather boring and depressing session that was optimistically called "Tips and Tricks for Ajax Push and Comet Apps". This was conducted by Jean-François Arcand of Sun and Ted Goddard of ICEsoft Technologies. It was depressing because at the end of the day, there seems to be no satisfactory method to implement server push that will scale well and work on all browsers. The lecture was a tour of various unsatisfactory compromises. The three standard techniques (Polling, Ajax push (long polling) and HTTP Streaming) all have their drawbacks. I personally like HTTP Streaming with its chunked response (multiple as-and-when partial responses to a single request). However, this doesn't seem to work very well in practice because of the lack of a client-side API for reading such an input stream, and the tendency for proxies to cache responses instead of passing them on, incomplete though they may be.
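
To make the long-polling variant concrete, here is a minimal sketch using the (then still emerging) Servlet 3.0 asynchronous API. It is my own illustration, not the speakers' code; the event publisher that would call publish() is assumed to exist elsewhere in the application:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/events", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // Parked requests waiting for the next event; a publisher elsewhere in the
    // application (not shown) calls publish() when something happens.
    static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<AsyncContext>();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // hold the request open instead of replying
        ctx.setTimeout(30000);                 // the client simply re-polls on timeout
        WAITING.add(ctx);
    }

    public static void publish(String event) throws IOException {
        for (AsyncContext ctx; (ctx = WAITING.poll()) != null; ) {
            ctx.getResponse().getWriter().write(event);
            ctx.complete();                    // release the parked request
        }
    }
}

The client side is then just an XMLHttpRequest GET to /events that is reissued each time a response (or a timeout) comes back.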

My conclusion is that push is impossible with HTTP. We need to implement AMQP over TCP for true bidirectional messaging. It will take time, but it will happen within a decade. And then these problems (and their kludgy solutions) will seem laughable in retrospect.

The last session that I attended was "Spring Framework 3.0: New and Notable" by Rod Johnson himself. The hall was huge but absolutely packed.

Points in brief:

Spring 3.0 will use Java 5+. Anyone wishing to use Java 1.4 must remain with Spring 2.5. Three cheers for courage. Backward-compatibility is a two-edged sword, and I'm glad Spring has made the leap (no pun intended).

There is a new expression language (EL) that allows developers to navigate bean properties, invoke methods and construct value objects. It also allows us to parameterise annotated code:

@Value("#{systemProperties.favoriteColor}")
private String favoriteColor;

where the expression inside "#{}" is written in the EL.
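
The same expression language can also be driven programmatically, which is a quick way to get a feel for it. A tiny sketch using the SpelExpressionParser that ships with Spring 3.0:

import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class ElDemo {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // Navigate properties and invoke methods on literals or on supplied root objects.
        Expression exp = parser.parseExpression("'hello world'.toUpperCase()");
        System.out.println(exp.getValue());   // prints HELLO WORLD
    }
}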

There is an elegant implementation of REST based on Spring MVC (and what was unsaid was that it was not based on JAX-RS). The Spring MVC ViewResolver is a natural point to implement the various resource representations that REST provides a common interface to.

There are several MVC enhancements. @RequestHeader provides access to HTTP request headers. @CookieValue provides access to cookies. It's also possible to have application-specific annotations.
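
A minimal sketch (my own, with invented class and view names) of how these annotations combine with URI templates in a Spring 3.0 controller:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.CookieValue;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class OrderController {

    // The URI template plus @PathVariable gives the RESTful addressing; the configured
    // ViewResolver then decides how the logical view name is rendered.
    @RequestMapping(value = "/orders/{id}", method = RequestMethod.GET)
    public String show(@PathVariable("id") long id,
                       @RequestHeader("Accept") String accept,
                       @CookieValue(value = "theme", required = false) String theme,
                       Model model) {
        model.addAttribute("orderId", id);   // loading the actual order is omitted
        return "orderView";                  // logical view name
    }
}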

There is a new Java configuration mechanism. Annotations can now be placed in dedicated "configuration classes" rather than in POJOs themselves. In other words, these are XML files all over again, but in Java 5 syntax rather than using angle brackets. I like this, because I was uneasy about annotations. One of the major benefits of externalising configuration is the ability to decouple wiring from the application's own components and make it possible to substitute implementations without recompiling code. Annotations in code are a step backwards, from this viewpoint. Annotations in dedicated configuration classes are three-quarters of a step forward again. We still need a recompile, but only of the config classes.

Rod Johnson pointed out that previous versions of Spring had a rather ad hoc approach to the web side of an application, but that Spring 3.0 now features a consistent stack of components. The base is Spring MVC. Above this layer are Spring Web Flow, Spring BlazeDS (for Flex) and Spring JavaScript. At the top layer is Spring Faces (ugh!). Spring BlazeDS will make it very easy to build Flex apps with Spring. The diagram he showed made no mention of Spring Portlet MVC, which is perhaps just as well. [The portlet specification is another monstrosity that continues to give organisations grief. As an "integration" technology, it belongs more in the problem space than in the solution space.]

The Spring Java config mechanism is type-safe and does not depend on string names. It therefore has more robust bean-to-bean dependencies. It allows inheritance of configuration. And it allows object creation using arbitrary method calls.
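
A minimal sketch of what such a configuration class looks like (OrderService is a hypothetical application class, and the JDBC URL is illustrative only):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class AppConfig {

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:hsqldb:mem:orders");        // illustrative URL only
        return ds;
    }

    @Bean
    public OrderService orderService() {
        // Bean-to-bean wiring is an ordinary, compiler-checked method call; no string ids.
        return new OrderService(dataSource());      // OrderService is a hypothetical class
    }
}

The context is then bootstrapped with new AnnotationConfigApplicationContext(AppConfig.class), and when the wiring changes, only the configuration class needs recompiling.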

Johnson also introduced Spring Roo, the Domain-Driven Design tool and Ben Alex's brainchild. There is a major functional overlap between Roo and Grails, which Johnson did point out, but his logical flowchart to decide which one to use was a bit contrived, in my opinion. Roo and Grails are competitors, and the fact that they are uncomfortable co-components in the Spring portfolio does not make them part of a seamless product line with an implied "horses for courses" decision-making logic. To put it bluntly, if Roo had appeared a couple of years earlier, Grails would probably have been stillborn. The delay in Roo's release gave Grails its break, and now they're competing for developer mindshare. Well, may the best framework win.

Tuesday, May 19, 2009

Will Windows Become a Drain on Microsoft?

A friend just pointed me to the latest blog posting of Sun's CEO Jonathan Schwartz. Titled "Will the Java Platform Create the World's Largest App Store?", the post reveals a side of the Java platform I hadn't much thought about. I suspect not many people are aware of the revenue model that the Java runtime has created for Sun. Google and Yahoo! (I reckon they're the companies Schwartz refers to) obviously find it worthwhile to pay Sun for the opportunity to reach out to the billion users of the Java runtime.

Three observations I can make immediately:

1. Microsoft ironically did Sun a favour by trying to corrupt Java in the mid-nineties. This caused Sun to bypass Microsoft and go direct to the user's computer. That's what has now resulted in the happy situation of Sun being able to negotiate effectively with the search giants without having to cut Microsoft in on the deal.

2. Oracle probably has a financial reason to buy Sun after all :-).

3. I can now understand another motivation for Google to develop Android. With Android, Google can be in the position that Sun now occupies as gatekeeper to a billion users' eyeballs. Just as Sun cut Microsoft out of the negotiation, Google can cut Sun out with a simpler licensing deal (for the Java VM) and lock onto a growing revenue stream instead. But they may have to share their profits with the owners of the hardware platform.

This line of thinking leads me to a conclusion that is very bad news for Microsoft. If I were an executive at Nokia, I would be talking to Google about getting a share of the ad revenues that Google will surely get through widespread penetration of Android. Armed with a likely deal of that nature, I would then approach Microsoft to work out something similar. Microsoft will probably be in for a shock. Rather than be able to charge hardware vendors for the privilege of licensing Windows, they would be asked to pay rents to those vendors (and their telco partners) for the privilege of reaching millions of customer eyeballs. If Microsoft doesn't play ball, the phone vendors can simply switch to Android. It's not like the PC platform where users have been dog-trained to demand Windows. Even a zero-licence-fee Windows won't be good enough in the mobile device market.

I read an article recently that speculated Microsoft was cutting Windows licence fees to the bone to make it viable on Netbooks, and the article then went on to say it was no wonder Microsoft was shedding staff. Now, if OEMs start expecting Microsoft to pay for them to use Windows, the job losses at Redmond will only mount.

All because of a little operating system called Linux, and an open platform called Java (that together go to make up the base platform for Android).

It's wonderful what a bit of competition will do.

Sun Developer Day Sydney (19 May 2009)

I attended Sun Developer Day in Sydney today. Compared to the three-day extravaganza of last year, this was a much simpler affair, with just eight short sessions over a single day.

The sessions were as follows:

1. Keynote (Reginald Hutcherson)
2. JavaFX (Angela Caicedo)
3. Java, Dynamic Languages and Domain-Specific Languages (Lee Chuk Munn)
4. MySQL (Peter Karlsson)
5. OpenESB (Lee Chuk Munn)
6. Virtualization (Peter Karlsson)
7. Developing and Deploying apps with Java SE 6 Update 10 (Angela Caicedo)
8. DTrace (Peter Karlsson)

I'll organise my feedback by grouping the topics by speaker, since each speaker specialises in related fields.

Reginald Hutcherson is "Director, Technology Outreach" at Sun and seems to have been with the company forever. I remember hearing him speak at a Sun event more than 10 years ago. He provided an overview of the topics that would be covered over the rest of the day. The key takeaway from his speech was the central role of the Java Virtual Machine. On one side, there are multiple languages (including the increasingly popular scripting languages) that are capable of running on the JVM. On the other side, the JVM has been ported to "all the screens of our life" - computers, TVs, mobile phones, GPS devices, etc. Java the platform, as has always been emphasised by Sun, is more important than Java the language.

Angela Caicedo spoke about JavaFX and later about new options to develop and deploy applications after the innovations in Java 6 update 10. I didn't get the impression that anything very major had occurred in the JavaFX world since last year. My reaction to all these RIA tools (Flash/Flex, Silverlight and JavaFX) is, why bother? JavaScript-based tools are becoming so much more powerful these days that we can build astoundingly rich applications using nothing more heavyweight than JavaScript and HTML. So I don't know where JavaFX will go. I commented last year after Jim Weaver's demo that the declarative JavaFX code quickly ends up looking like the dog's breakfast. I haven't seen anything this year to change my opinion of JavaFX.

[Out of curiosity, I tried downloading NetBeans 6.5 later in the evening and was more than a little disappointed to see that it had no JavaFX support on Linux, only on Windows. Why not? As an Ubuntu user, I'm offended that this excellent platform isn't deemed a first-class development environment.]

Angela's second talk (on the latest options for developing and deploying applications since Java 6 update 10) was, in my opinion, the best session of the day.

Minor revisions (called updates) don't break APIs, but update 10 is still quite revolutionary.

Where do I begin? First, my personal background - unlike most Java developers in the industry today (who build web applications), I have actually built Swing, Applet and Java WebStart-based applications for a number of organisations. And I've been personally disappointed in the fact that these technologies did not become more popular.

With Java 6u10, perhaps some of the reasons for their relative unpopularity have gone away (but it may be too late to get the developers back).

In a nutshell, Sun has now unified the development and deployment of applets and Java WebStart applications, cleaned up the browser-JVM architecture and improved the bidirectional interoperability between Java and JavaScript. I can only ask Sun, where have you been these ten long years?

JVMs no longer have to share process space with the browser, and different applets can run in different JVMs if required. This reduces the impact of an applet crash on other applets and on the browser. Note this: the crash of an applet no longer results in the browser crashing, but more importantly, one can close the browser and have an applet continue to run. This is big stuff. When one then tries to close the applet, one is prompted with the choice to save the applet as a desktop icon. In other words, it's no different from Java WebStart. This is really powerful and cool.

There are a number of other improvements and optimisations. The JRE is now broken into a number of smaller components, and a more lazy download strategy is employed under the covers to reduce the startup time of applications. There is a more modern look and feel to replace the dated Metal appearance. Components are now rendered using scalable vector graphics rather than bitmaps.

I'm most excited about the three major improvements I listed earlier. I can see a very elegant way to implement SOFEA using these innovations. The MVC controller on the client should be an applet, running in a persistent JVM outside the browser process. The Application Download will start the applet and configure it with the wiring logic for the application's Presentation Flow. The controller will handle not only Presentation Flow but also Data Interchange with services hosted by remote servers. Perhaps a variant of Spring Web Flow suitably modified for the client side will be a good model for Presentation Flow. Server-side UI resources in the form of Freemarker or other templates could be pulled by the controller and populated using appropriate models. With the seamless interaction between Java and JavaScript, many things are now possible.
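
To give a flavour of the Java-to-JavaScript direction of that interaction, here is a minimal LiveConnect sketch (my own; showStatus is a hypothetical function defined in the hosting page, and plugin.jar must be on the compile-time classpath):

import java.applet.Applet;
import netscape.javascript.JSObject;

public class StatusApplet extends Applet {

    @Override
    public void start() {
        // Obtain a handle to the browser window hosting this applet...
        JSObject window = JSObject.getWindow(this);
        // ...and call a JavaScript function defined in the page (hypothetical name).
        window.call("showStatus", new Object[] { "Applet started" });
    }
}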

Lee Chuk Munn spoke about Java, Dynamic Languages and Domain-Specific Languages and in a later session, about OpenESB.

There wasn't anything very new in the session on scripting languages, but it may be worth repeating the benefits of scripting.

Scripting helps build an ecosystem for an application. Applications become platforms. Witness plugins for Firefox. Firefox is no longer just an application. It is a platform by itself, because of the JavaScript-based plugins that it supports. Scripting also enables domain-specific applications, such as Mathematica.

[I hadn't heard of Lua before. It's a scripting language that is widely used in gaming.]

In his second session, Lee worked through an example to demonstrate the features of OpenESB. I must confess I'm a bit of an ESB skeptic. To be fair, the ESB feature that was demonstrated was plain BPEL functionality, so it wasn't so much an ESB as a BPEL engine. I have no objections to BPEL engines as long as they don't pretend to be "SOA fabric in a box", which is my grouse against ESB products.

The main feature of OpenESB as a BPEL engine seems to be its support for JBI (Java Business Integration). Put bluntly, JBI seems to mean a WSDL-defined interface on both the service consumption side and the service provision side. Lee demonstrated how a process could be graphically mapped using NetBeans. The WSDL on the service provision side made the entire process look like a standard SOAP service. On the other side, the actual implementation of business logic was encapsulated in a Stateless Session EJB (ugh!). The interface to the EJB was once again a WSDL. The demo worked quite well, showing how logic could be placed not only in components but also in the process definition itself. However, I fear that powerful tools that simplify service and process development, when placed in the hands of developers without a sufficiently "SOAphisticated" understanding of concepts, will lead to horribly designed applications. (And why are we still wasting our time with EJBs?)

Peter Karlsson gave three talks, one on MySQL, one on virtualisation and one on DTrace. Unfortunately, his areas of specialisation (other than databases) are not of great interest to me personally, so I'll just give a brief run-down of what he covered.

My biggest takeaway from the MySQL talk was that MySQL supports multiple "storage engines", each optimised for a different task. In the same database, one can allocate different storage engines to different tables depending on usage (see the small JDBC sketch after the examples below). Examples:

MyISAM is useful for tables that don't require transactional integrity. This is the default, and is most useful for web and data warehouse applications.
InnoDB should be used for tables that do require transactional integrity, but because of the extra performance overhead, should not be used for other tables. Slashdot, Google, Yahoo! and Facebook are InnoDB users.
Archive should be used for log and audit trail type tables, because of its native compression and high insert performance.
In-memory should be used for scratch tables (working storage).
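
A small JDBC sketch of per-table engine selection (my own; connection details and table names are invented). The engine is simply a clause on CREATE TABLE:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EngineDemo {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/demo", "demo", "demo");   // illustrative only
        Statement st = con.createStatement();
        // Transactional data lives in InnoDB tables...
        st.executeUpdate("CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2)) ENGINE=InnoDB");
        // ...while an append-only audit trail can use the compressed Archive engine.
        st.executeUpdate("CREATE TABLE audit_log (logged_at DATETIME, msg TEXT) ENGINE=ARCHIVE");
        con.close();
    }
}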

I was a bit confused during the virtualisation talk because there were some generic virtualisation techniques being described, but also some Sun products, and I wasn't always able to tell them apart.

There are four broad categories of virtualisation:
1. Hard partitions (e.g., Dynamic System Domains).
2. Virtual Machines (e.g., Logical Domains, Solaris xVM x86 hypervisors, VirtualBox).
3. OS Virtualisation (Solaris Containers, Solaris Trusted Extensions, Brand Z Solaris Containers, Trusted VirtualBox).
4. Resource Management (Solaris Resource Manager).

My takeaway from this was that VirtualBox was something worth looking into as a developer. This is a Sun product that is Open Source.

Peter Karlsson's last talk was on DTrace, in which I have not the slightest interest. This is a Solaris-based system tool, I understand, and provides relatively low-overhead diagnostic capability. It is reportedly safe for use in production environments because it is supposedly zero-overhead (though Peter did warn about some specific settings that had high performance impact), because it cannot change data, because it cannot cause crashes and because it eliminates the need for post-processing of data.

There are Probes and Providers and a scripting language called D. Traces can be set on activities using a predicate-based expression language. The approach reminded me of AOP (Aspect-Oriented Programming).

Overall, I was satisfied with the coverage of the day's topics, but I have a lingering feeling that the action in the Java world has moved away from Sun.

NetBeans => Eclipse
GlassFish => Tomcat
EJB => Spring POJOs
ESB, SOAP => REST
JavaFX => JavaScript/Ajax libraries (AJAX is now so common that it's spelt Ajax)

It'd be good if Sun could drag itself closer to where the action is, instead of seeming to play in its own little sandpit.

I'll be at JavaOne in two weeks, and I'll report on my impressions there.

Thursday, May 07, 2009

Open Source's Assault on the Enterprise Data Centre

This week's announcement of SpringSource's acquisition of Hyperic, a maker of Open Source enterprise monitoring software, made me think about a number of things.

1. What does this mean for the prospects of Open Source within enterprises, especially in data centres?
2. What are the prospects for SpringSource as a company? Crucially, what are SpringSource's strategic intentions?
3. What do these trends mean for customers, users and the community as a whole?

I keep reading about how Open Source's prospects have paradoxically brightened thanks to the deepening economic gloom. For all the wrong reasons, enterprises are now looking to Open Source. I say their reasons are wrong because short-term cost savings in the form of lower licence fees are not the major advantage of using Open Source. The opportunity to control their own destiny, free from coercion by vendor agendas, is the greatest benefit, and this can deliver savings far in excess of the measly amounts that licence fees alone would suggest.

I believe that the increasing profile of Open Source in the enterprise software stack is a good thing for user organisations, with some qualifications that I will come to later. I have believed for some time, based on my experiences in large commercial organisations, that proprietary software (and hardware) companies manage to earn a hefty premium from corporate customers on the basis of benefits that are more mythical than real. There is a certain type of corporate decision-making animal that wants to be convinced and comforted by the brand shorthand of large vendors that spell safety, reliability, and a general "enterprise classness". Thus a WebSphere "has to be" better than a Tomcat, an Enterprise Service Bus "has to be" better than simple REST, and an AIX, HP-UX or Solaris "has to be" better than Linux. They cost so much more, so they've got to be better. Cheaper alternatives damn themselves by being cheaper.

How can this mentality be overcome?

It's instructive to consider the hardware market. The most dangerous stealth assault weapon in hardware is the Intel platform. I made the mistake of underestimating the potential of Intel in the early nineties. Those who worked in the industry in the nineties would remember a product category called the Unix workstation. Unix workstations were desktop machines used by the elite in the IT industry. Compared to Unix workstations, Intel PCs were woefully underpowered apologies for computing platforms. Lowly, second-rate developers wrote applications on PCs using pathetic tools like FoxPro. Real developers (like me) built high-end applications using real tools like Oracle Forms on workstations.

Well, the bottom line is that today's generation of developers has probably never heard of a Unix workstation. The puny, pathetic PC platform, like an imperceptibly rising tide, completely swamped its high-end cousin. The tables turned swiftly. "Underpowered" no longer applied, and the alternative was quickly repositioned as "overpriced". End of story.

Today, the Intel assault continues, this time against big iron in the data centre. I'm frankly skeptical about the value proposition of big iron. Intel power and capability are constantly rising, and its price-performance trend continues unabated. The ability of Intel to virtualise is changing the game rather rapidly. The argument in the data centre, like on the desktop, will shift from "underpowered" to "overpriced" in short order, and big iron will be out. Big Unix will go first, followed by mainframes when organisations can wean themselves off their legacy mainframe applications.

But as on the desktop, the Intel platform cannot prevail against the incumbent without the help of software. And that's where the SpringSource acquisition drops a piece of the jigsaw puzzle into place. Hyperic now challenges enterprise monitoring tools like IBM Tivoli, HP OpenView and BMC Patrol, providing similar capability at a fraction of the price. The pattern is familiar.

For a while, corporate decision-makers will hesitate, reluctant to abandon the comfort blanket of familiar brand names from the big end of town. But I suspect the growing pressure to find cost reduction opportunities in the current climate will force many of them out of their comfort zone. Many of them will sample Open Source. Most, I suspect, will like what they find, once they overcome their own mental roadblocks. [I have never ceased to be amazed at the reluctance of customers of software to choose options that will give them control of their own destiny. No wonder Marx had to exhort workers to unite with the reminder that they had nothing to lose but their chains. Humans seem to prefer the comfort of familiar bondage to unfamiliar freedom. It's the Matrix's choice of red pill (harsh reality) and blue pill (comfortable illusion). Many actually prefer the blue pill. It's only when the blue pill is no longer an option that they will try the red pill.]

But that brings me to the qualification I expressed earlier. I support the notion of Commercial Open Source as long as certain principles are respected, the main one being the ability of users to control their own destiny. SpringSource and its increasingly powerful stack of products (Tomcat or tc Server, the Spring framework and now Hyperic) are most definitely liberators when compared to the gouging data centre vendors that we all know and (don't) love. But will SpringSource stay faithful to the Open Source social contract?

I believe that software must be collaboratively developed to amortise the costs of development among a number of developers and thereby reduce the requirement to show a return on investment on the core product. That is the fundamental difference between the Open Source development model and the proprietary one. Proprietary software is forced to show a return in the form of per-copy licence fees because it is funded by an investment model. Open Source is not, because its funding can follow an expense model with development costs written off, or amortised. Thus Open Source appears "fairer" because it doesn't charge "rents" on copies of software whose marginal cost of production is actually zero. Commercial Open Source carries this model to its logical conclusion by only charging for non-repeatable services that cannot be stored and replicated, such as training, consultancy and support.

So far, SpringSource as a company has followed the book on Commercial Open Source. Their products tend to be freely downloadable (with some exceptions such as their "enterprise" versions), and organisations can run them without having to pay either licence fees or support fees if they can also maintain them in-house. There is sufficient documentation available to give users control of their own destiny, which is the crucial benefit of Open Source.

My fear is about the future. Will SpringSource continue to maintain a benign pay-for-services model on top of genuinely Open Source products, or will they go the way of so many others before them (like Compiere), and be tempted to convert their stewardship of a powerful stack of enterprise software into a model with tighter control and less user freedom? I sincerely hope they can avoid the temptation of greed, because the long-term benefits to them as well as the rest of the industry are much higher if they retain a light touch. A tighter grip will make their users want to wriggle free. Witness the challenges of Adempiere to Compiere, MariaDB to MySQL and CentOS to Red Hat Linux. If a commercial entity seems to wield an unhealthy degree of influence on an Open Source product, the tendency to fork the project will increase, to the detriment of all (the company more than the community, ultimately). It's a bit worrisome that only SpringSource employees have commit rights to their software. A more inclusive and meritocratic development community around products under SpringSource's stewardship would allay user apprehensions and pre-empt such forking.

This month's Indian elections underline my point. How could an incredibly diverse country of over a billion people have stayed united and largely peaceful for over sixty years? Only by giving those people genuine control over their own destiny through democracy. Much smaller and more ethnically homogeneous countries have been less successful at this.

In future, SpringSource could attain sufficient gravitational mass to pull the stewardship of the Java platform itself into their orbit. If they can avoid the temptation to tighten their grip, "monetise" their influence or otherwise break faith with the community, they can look forward to a (gradual) rise to a position of pre-eminence in the enterprise software market.

The current big names in the enterprise data centre have the most to fear from this development. If SpringSource is wise, the enterprise market is theirs. Customers will gladly give them their business in return for guaranteed control of their destiny.