Archive

Archive for the ‘JavaOne’ Category

JavaOne day five

11 May 2008 Leave a comment

The keynote

The keynote started with a DJ playing some house music. Quite good house music, I'd say, since San Francisco has a great house scene (although the DJ was from L.A.).
The session started with James Gosling presenting several people who have implemented projects in Java. First, a video about John Gage was shown, and he was given a gold Duke and a painting of the new Duke design by James Gosling.

Then a demo of VisualVM followed. VisualVM is a profiler (it reminds me a lot of the NetBeans profiler). VisualVM is not for testing; it is meant to be used in production to evaluate the running application.

Tor Norbye went on stage and talked about JavaScript. He built a simple diary using JavaScript and the Yahoo! libraries. To me it seemed like he was trying to show off the JavaScript editor in NetBeans, which was quite good, to be honest.

Ken Russell and Sven Gothel talked about Java on the NVIDIA APX 2500 cell phone platform. They also showed a 3D demo model of a city on a mobile phone. Considering the mobile device's capabilities, this demo was quite impressive, I'd say.

Chris Melissinos and Joshua Slack talked about jMonkeyEngine and Project Darkstar, and also showed us a demo (unfortunately I didn't take any video of the demo).

Next, Laurent Logasanto went on stage and talked about extreme Java Card innovation, and also presented a Java Card demo: a Robocode game.

Joe Polastre talked about pervasive Java: they have some of the world's smallest computers, running Sentilla, which fits in 10KB of RAM and 48KB of ROM.

Jim Marggraff did an excellent demo of Livescribe, a new mobile platform product. Basically, Livescribe is a pen plus a specially constructed notepad that contains micro cells; it can record everything you write on the notepad, as well as voice. It also provides musical capabilities and translation. And all this is written in Java. The only drawback currently is that if you run out of notebooks you will need to order them from Pulse (the company behind Livescribe), but they don't ship outside the USA. The good news is that in a few months you will be able to print your own notebooks on certified LaserJet printers.

Dr. Greg Bollella spoke about industrial strength Java and showed the Blue Wonder project.

Paul Perrone from Perrone Robotics presented an autonomous car demo. If you were born in the '70s it's more than likely that you remember a TV series called "Knight Rider", where KITT, a super car, followed orders and did things.

This demo did exactly the same (although not as sophisticatedly). A car was fitted with sensors and a navigation system. All the car's intelligence, the AI and the decision making, runs on a microprocessor running Solaris and Java. Paul gave verbal instructions to the car and it moved! He commanded it to go forward and it went forward; he commanded it to turn left and it turned left! Impressive.

Dr Phil Christensen talked about, and showed a demo of, how Java rocks on Mars. He presented JMars (which targets, distributes and analyses data from Mars), which is open source.

He talked about the Opportunity rover, which NASA sent to Mars to gather information. In about two weeks from now, the next NASA lander will use JMars to find a safe place to go. In two weeks they will find out if they did the job right.

Then a video was shown about CERN's Grid, and Derek Mathieson talked about accelerating Java: how CERN builds huge projects with hundreds of Java engineers working on them, including the TIM (Technical Infrastructure Monitoring) project.

Then he presented a few more demos, namely

  • 3D model of Atlas with GraXML
  • EGEE
  • GridPP

Using Java technology at the world’s largest web site by Joshua Blatt and Dean Yu.

Yahoo! has more than 500 million registered users, and the traffic generated every day amounts to more than 20TB of data. To cope with this volume it has hundreds of thousands of servers and dozens of data centres around the world.

Although the core technology of Yahoo! is C and C++, Java comes into play through acquisitions. They also use Apache and LAMP applications, but over the years Yahoo! has made several Java acquisitions.

All these run on Tomcat and the JBoss application server. They also use a JNI and IPC bridge for C/C++ and Java communication. Although JNI was a few times slower than direct Java access, they found this was mainly due to character conversion, and they could work on that.

In order to have proper web security they use

  • jsvc (the Apache Commons Daemon)
  • input validation against cross-site scripting (XSS)
  • HTML input validation (all entities are encoded or stripped)

For scalability and reliability they have

  • load balancers
  • DNS round robin algorithm
  • HTTP redirectors

They also use Maven to do the builds and to pin the versions and dependencies of the libraries they are using.

In order to deploy all the Java applications they face several challenges

  • all the software and hardware configurations have to be consistently reproducible
  • all the changes have to be auditable
  • J2EE containers need to be configured differently for different products
  • manual deployment and configuration is impossible for thousands of machines

Package management systems helped with

  • dependency resolution and conflict detection
  • detecting modifications made after installation
  • repairing manual, uncontrolled modifications

Additional Y! requirements were

  • a clearly defined package lifecycle
  • consistent start/stop/restart directives
  • settings as part of package state
  • automatic journaling of the system's package state
  • making it easy to change a system to match any point in the journal
  • state that can be cloned to other machines
  • a centralized deployment console

In summary

  • The bulk of JNI API performance overhead is charset conversion
  • Synchronizing JNI API calls is insufficient when dealing with many native libraries
  • Use process privilege levels to protect sensitive files
  • Employ opt-out input validation strategies
  • Multi-process Tomcat improves performance and availability
  • Transitive dependency conflicts need to be treated like source merge conflicts
  • Finer-grained packaging of Java EE platform containers enables easier reconfiguration

Advanced Java NIO technology based application using the Grizzly framework by Jean-Francois Arcand and Oleksiy Stashok

This talk was about how easy it can be to write scalable client-server applications with the Grizzly framework.

Grizzly is an open source framework that

  • uses Java NIO primitives and hides the complexity of programming with Java NIO
  • provides easy-to-use, high performance APIs for TCP, UDP and SSL communications
  • brings non-blocking sockets to the protocol processing layer
  • utilizes high performance buffers and buffer management
  • offers a choice of several different high performance thread pools
  • ships with an HTTP module which is really easy to extend
  • lets you use the framework but STILL have a way to control the I/O layer

Grizzly uses a SelectorHandler for each protocol implementation. Each handler is added to a Controller, which is the main entry point of the Grizzly framework and manages several handlers. There are TCPSelectorHandler, UDPSelectorHandler and TLSSelectorHandler implementations, and of course you can write your own selector handlers, since everything in Grizzly is customizable.

A controller by default is composed of

  • SelectorHandler
  • ConnectorHandler
  • SelectionKeyHandler
  • ProtocolChainInstanceHandler
  • ProtocolChain
  • Pipeline

(all these interfaces have ready to use implementations by default)

Grizzly also provides the other classes you will need.

A ProtocolFilter encapsulates a unit of processing work to be performed; its purpose is to examine and/or modify the state of a transaction, which is represented by a Context. Most Grizzly based applications only have to write one ProtocolFilter implementation, and you can assemble many ProtocolFilters into a ProtocolChain.
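
To make this concrete, here is a minimal sketch of a custom filter, assuming the Grizzly 1.x ProtocolFilter interface with execute/postExecute methods (the exact signatures are an assumption on my part):

public class LoggingFilter implements ProtocolFilter {
    // called as the chain executes; examine or modify the transaction
    // state carried by the Context here
    public boolean execute(Context ctx) throws IOException {
        System.out.println("processing an I/O event");
        return true;   // true: continue with the next filter in the chain
    }

    // called in reverse order once the chain has been executed
    public boolean postExecute(Context ctx) throws IOException {
        return true;
    }
}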

A ProtocolParser is a specialized ProtocolFilter that knows how to parse bytes into protocol units of data. Grizzly implements it by splitting parsing into steps:

  • start processing a buffer
  • enumerate the message in the buffer
  • and end processing the buffer.

The buffer can contain 0 or more complete messages.

A ProtocolChain implements the "Chain of Responsibility" pattern. The ProtocolChain API models a computation as a series of "protocol filters" that can be combined into a "protocol chain".

A ProtocolChainInstanceHandler is where one or several ProtocolChains are created and cached, and it decides whether a stateless (one ProtocolChain instance shared amongst threads) or stateful (one ProtocolChain instance per thread) ProtocolChain needs to be created.

Pipeline is an interface used as a wrapper around any kind of thread pool. You can have one Pipeline per SelectorHandler, or one shared amongst them.

On the client side, Grizzly provides the following components.

A ConnectorHandler is a connection unit that implements basic I/O functionality (connect, read, write, close). It provides the possibility of working in both synchronous and asynchronous modes (i.e. blocking or non-blocking read/write), and you can control the asynchronous mode either through a CallbackHandler with custom user code, or automatically by the framework, using asynchronous read and write queues.

 
Controller ctrl = new Controller();
ctrl.addSelectorHandler(new TCPSelectorHandler(true));
startGrizzly(ctrl);

// acquire a TCP connection, do blocking I/O, then release it
ConnectorHandler connection = ctrl.acquireConnectorHandler(Protocol.TCP);
connection.connect(new InetSocketAddress(host, port));
connection.write(outputByteBuffer, true);   // true: blocking write
connection.read(inputByteBuffer, true);     // true: blocking read
connection.close();
ctrl.releaseConnectorHandler(connection);

A CallbackHandler handles client side asynchronous I/O operations (connect, read, write). When the NIO channel is ready to perform the operation, the corresponding CallbackHandler method will be called. All asynchronous events can be either processed inside the CallbackHandler or propagated to a ProtocolChain. When processing asynchronous events inside a CallbackHandler, be careful with SelectionKey interest registration.

 
Controller ctrl = new Controller();
ctrl.addSelectorHandler(new TCPSelectorHandler(true));
startGrizzly(ctrl);

ConnectorHandler connection = ctrl.acquireConnectorHandler(Protocol.TCP);
// register a CallbackHandler for asynchronous events; the buffer is
// passed in so the handler can resume the write
connection.connect(new InetSocketAddress(host, port),
                   new CustomCallbackHandler(connection, giantByteBuffer));
connection.write(giantByteBuffer, false);   // false: non-blocking write

public class CustomCallbackHandler implements CallbackHandler<Context> {
    private final ConnectorHandler connection;
    private final ByteBuffer giantByteBuffer;

    CustomCallbackHandler(ConnectorHandler connection, ByteBuffer giantByteBuffer) {
        this.connection = connection;
        this.giantByteBuffer = giantByteBuffer;
    }

    public void onWrite(IOEvent<Context> ioEvent) {
        // keep writing until the buffer is drained
        connection.write(giantByteBuffer, false);
        if (!giantByteBuffer.hasRemaining()) {
            notifyAsyncWriteCompleted();   // application-defined notification
        }
    }
}
 

ThreadAttachment

  • Between OP_READ and OP_WRITE, some protocols need to keep state (remaining bytes inside a byte buffer, attributes, etc.).
  • In Grizzly, the default is to associate one byte buffer per thread. That means a byte buffer cannot be cached, as its thread can always be re-used for another transaction.
  • To persist the byte buffer across transactions, a ThreadAttachment can be used.
  • Internally, all attributes associated with the WorkerThread are 'detached', and new instances are recreated (warning: a new ByteBuffer will be created!).
  • The ThreadAttachment can be attached to a SelectionKey, and the next time an OP_READ/OP_WRITE happens, you are guaranteed the value will be re-used.

HTTP modules

  • The Grizzly framework also has an HTTP framework that can be used to build web servers.
  • This is what GlassFish v1|2|3 is built on top of.
  • More specialized modules are also available, like Comet (async HTTP).
  • There is a simple interface to allow customization of the HTTP protocol:
  • GrizzlyRequest: a utility class to manipulate the HTTP request.
  • GrizzlyResponse: a utility class to manipulate the HTTP response.
  • GrizzlyAdapter: a utility class to manipulate the HTTP request/response objects.
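
As a sketch of how the HTTP module can be extended, a trivial adapter might look like the following. This assumes the Grizzly 1.x GrizzlyAdapter API with a service(GrizzlyRequest, GrizzlyResponse) method; the exact method names are an assumption on my part:

public class HelloAdapter extends GrizzlyAdapter {
    public void service(GrizzlyRequest request, GrizzlyResponse response) {
        try {
            // write a trivial response using the response utility class
            response.getWriter().println("Hello from Grizzly");
        } catch (IOException e) {
            e.printStackTrace();   // log and give up on this request
        }
    }
}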

NIO.2: Asynchronous I/O (JSR 203), JDK 7.0

  • An API for asynchronous (as opposed to polled, non-blocking) I/O operations on both sockets and files.
  • The completion of the socket-channel functionality defined in JSR 51, including the addition of support for binding, option configuration, and multicast datagrams.
  • Starting with Grizzly 1.8.0, asynchronous I/O is supported if you are running JDK 7.
  • You can easily mix non-blocking with asynchronous I/O.
  • Grizzly supports NIO.2, and switching an application from NIO.1 to NIO.2 is quite simple.
  • Bytes are delivered the same way, independently of NIO.1 or NIO.2.

Categories: Java, JavaOne

JavaOne day four

9 May 2008 Leave a comment

I have to admit that I only made it to the sessions at midday. All the late nights, the constant attention the sessions demand, and the club we went to last night took a toll on me, and I wasn't there on time. So I only managed to attend four sessions, and unfortunately I missed the session on the Swing extensions :(

Distributed client-server persistence with JPA  by Alexander Snaps

The session started with a few words and code examples on how to use JavaDB. Then the speaker went on to explain what an object relational mapping is and what the benefits of using one are.

An object relational mapping

  • eliminates the need for JDBC
  • provides object identity management
  • provides inheritance strategies (class hierarchies to single or multiple tables)
  • provides associations and compositions (lazy navigation, fetching strategies)
  • provides transparency

JPA is a vendor-independent ORM solution and is easily configurable: configuration can be done directly in code using Java 5 annotations, or the annotations can be overridden with XML. JPA implementations are also available outside a Java EE container, and as of JPA 2.0 there is a dedicated JSR (JSR 317).

It provides (almost) transparent POJO persistence, with a few constraints:

  • classes and methods must be non-final
  • a constructor with no arguments is required
  • collections must be typed to interfaces
  • associations aren't managed for you
  • a database identifier field is needed

Entity sample code 

@Entity
public class Person
{
    @Id
    private Long id;

    @ManyToOne
    private Company company;

    @OneToMany(mappedBy="resident")
    private Set<Address> addresses;
}

Persistence is managed by an EntityManagerFactory, which is created by the Persistence class based on a persistence unit name. The EntityManagerFactory in turn creates EntityManager instances, and the EntityManager instances know how to handle the persistence of the entities. A Query object is used to query the entities back from the database, and an EntityTransaction is used for transaction demarcation.
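
A minimal sketch of that flow, assuming the persistence unit is named "persistenceUnitName" as in the XML example below:

EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("persistenceUnitName");
EntityManager em = emf.createEntityManager();

EntityTransaction tx = em.getTransaction();
tx.begin();
Person person = new Person();   // set its id and fields first
em.persist(person);             // the entity becomes managed
tx.commit();

Query query = em.createQuery("select p from Person p");
List persons = query.getResultList();   // query the entities back

em.close();
emf.close();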

The persistence unit can also be set up by using an XML file

<persistence-unit name="persistenceUnitName"
        transaction-type="RESOURCE_LOCAL">
  <provider>
    oracle.toplink.essentials.PersistenceProvider
  </provider>
  <class>some.domain.Class</class>
  ...
</persistence-unit>

There are managed and detached entities. Managed entities have their state synchronised back to the database transparently (there is no need to call a persist operation on managed instances; their state is flushed to the database automatically). Detached entities, on the other hand, are no longer managed, for example once the persistence context that loaded them has been closed. If their state needs to be saved back to the database, a merge operation is required.
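
A sketch of that distinction, assuming the Person entity from earlier has ordinary setters and that emf is the EntityManagerFactory created above:

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
Person p = em.find(Person.class, 1L);   // p is managed here
p.setCompany(acme);                     // acme: a managed Company (assumed); flushed on commit
em.getTransaction().commit();
em.close();                             // p is now detached

// later, in a new persistence context
EntityManager em2 = emf.createEntityManager();
em2.getTransaction().begin();
Person merged = em2.merge(p);           // merge copies the detached state back
em2.getTransaction().commit();
em2.close();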

Of course there are a few pitfalls

  • transaction management (avoid long-lived transactions)
  • exception handling (as entities become detached)
  • transaction rollbacks

JPA can be used nicely with Swing. Model View Controller (MVC) is still a good way to design the user interface, and layered MVC makes an even better match.

Each MVC layer can 

  • have its own persistence context
  • inherit the one from its parent controller

events can 

  • be MVC local
  • propagate down to children
  • or to the entire tree

At this point they presented a nice Swing and JPA demo.

The data can be distributed by

  • client/server with off-line mode
  • distributed data sets
  • sharing data (meaning everyone can update the data)

But the catch with distributed data is that the server and the client might see things differently. What can differ includes

  • the database identifiers
  • maybe the database itself
  • the JPA implementation used
  • which entities can be altered on each side

You can use the Holchoko framework. In order to use it you have to define a client side, where

  • the client keeps track of "pairs"
  • a pair is the local and remote ids for an entity
  • the id can be represented by any Serializable type
  • the client communicates with the server through a "filter"

How do we synchronise the state to clients?

  • the server simply sends entities to the client
  • sending entities over the wire makes them detached automatically
  • the client changes the id field to the local value matching the remote identity
  • the client merges the detached entities with the current local persistence context

How do we synchronise the state back to the server?

  • the client sends entities to the server (after having replaced local ids with remote ones)
  • entities with remote ids are referenced in value holder objects (which also hold the local id value)
  • the value holders are sent back to the client (holding the identity matches)
  • the entity matches are persisted locally for future reference
  • all of this currently happens over HTTP

In summary, JPA and JavaDB really ease persistence, even on the desktop; all this abstraction pays off and enables distributed persistence, and it lets you DRY and KISS.

For more info

javadb website

 

Design patterns reconsidered  by Alex Miller

The session started by explaining what a design pattern is. In a few words, a design pattern is a common solution to a recurring problem. The epitome of design patterns is considered to be "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma et al., published in 1994.

There are three types of design patterns

  • creational
  • structural
  • behavioural

The programming language you use to solve a problem affects how you think about it.

But there is also the patterns backlash. Some people claim that patterns stop people from thinking; that they encourage

  • copy/paste
  • design by template
  • cookbook/recipe approach

People might not understand what they are doing. "The design pattern solution is to turn the programmer into a fancy macro processor" – M. J. Dominus

Other people claim that design patterns are not patterns at all; they are just workarounds for missing language features. "At code level, most design patterns are code smell" – Stuart Halloway

Overuse is another complaint you see thrown around here and there. "Beginning developers never met a pattern or an object they didn't like. Encouraging them to experiment with patterns is like throwing gasoline on a fire" – Jeff Atwood, Coding Horror

Practical patterns are not just code you throw around; they address real design issues and help us compare alternatives, find the best solution, and understand a particular problem.

The singleton pattern is simple: I want one instance of a particular class in a system. We create a static instance, provide an accessor method, and then hide the ability to create more.
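
The classic shape looks something like this:

public class ClassicSingleton
{
    private static final ClassicSingleton INSTANCE = new ClassicSingleton();

    private ClassicSingleton() { }   // hide the constructor

    public static ClassicSingleton getInstance()
    {
        return INSTANCE;
    }
}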

Things can go wrong, though. If you need to test code that uses the singleton, how do you mock it? You might not even know that a piece of code uses a singleton. This kind of singleton creates hidden coupling, which means it's hard to pick up a piece of code in your design and test it in isolation. It's hard to test, maintain and evolve.

Singleton has issues.

  • hidden coupling
  • testing
  • possible memory leaks
  • subclassing (possible, but pretty ugly)
  • initialisation order and dependencies if you have a lot of singletons

We address these issues by having an interface and an implementation. Dependency injection comes to the rescue

public class InnocentBystander
{
    private final Singleton singleton;

    public InnocentBystander(Singleton singleton)
    {
        this.singleton = singleton;
    }
}

Testing

public class TestInnocentBystander
{
    public void testSomething()
    {
        // inject a mock instead of the real singleton
        Singleton s = new MockSingleton();
        InnocentBystander bystander = new InnocentBystander(s);
        // ...exercise and assert on bystander...
    }
}

So if you only need one instance of an object, you can control that by configuration, not by pattern. You can use Guice, Spring, or injection by hand.
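
With Guice, for example, the "one instance" decision becomes a binding. A sketch, where SingletonImpl is a hypothetical implementation of the Singleton interface above:

public class MyModule extends AbstractModule
{
    @Override
    protected void configure()
    {
        // one instance per injector, decided by configuration
        bind(Singleton.class).to(SingletonImpl.class).in(Scopes.SINGLETON);
    }
}

// at startup
Injector injector = Guice.createInjector(new MyModule());
Singleton s = injector.getInstance(Singleton.class);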

The template method pattern deals with a template method, which is essentially a pluggable algorithm: when you subclass, you define the details of what happens inside the algorithm. Take for example the Controller class in Spring. You end up creating a very complex API to get the functionality you need. You have an algorithm and a couple of subclasses, and then you find out that one of the subclasses needs more implementation, and this can go on and on. You have to subclass; you have no other choice. This makes things complicated.

With the template method you are fighting with inheritance. Think about a Map and its subclasses, all of which have different implementations. If you need functionality from more than one subclass, you need to subclass even more.

The alternative to inheritance is composition. When you subclass, you bring in all the protected and public methods from the parent class, and it's often unclear which ones you need to implement. With composition you split the algorithm into different steps and then inject them into the class that needs them. This is the naïve strategy (most likely it's more fine grained than what we need, and therefore there might be a better model).
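
A sketch of what that looks like: the varying step is pulled out behind an interface and injected instead of overridden in a subclass (the names here are made up for illustration):

public interface Step
{
    void execute();
}

public class Algorithm
{
    private final Step step;

    public Algorithm(Step step)
    {
        this.step = step;   // composition: the varying part is injected
    }

    public void run()
    {
        // ...fixed part of the algorithm...
        step.execute();     // pluggable part, no subclassing required
        // ...fixed part of the algorithm...
    }
}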

What about use of context class in template methods? You can create a context class in the algorithm and pass it into the different steps.

Can closures help us with this? If you have them you can use them to replace the strategy pattern.

What have we learned so far? We should prefer composition to inheritance. Composition

  • allows better reuse of pieces of functionality
  • communicates better
  • is easier to understand
  • is easier to maintain and more robust as it evolves

Inheritance, by contrast, is a very strong form of coupling, a dependency in your code.

The visitor pattern comes up when you use a composite hierarchy, for example when you have a Node class and a CompositeNode class. The visitor addresses this: instead of having a bunch of operations in every node, you have one generic method (which takes a visitor) in each of them. Then you can define visitors at will without changing your data structure.

public interface Visitable
{
    void acceptVisitor(Visitor visitor);
}

public interface Visitor
{
    void visit(ConcreteNode1 node1);
}

public class ConcreteNode1 implements Visitable
{
    public void acceptVisitor(Visitor visitor)
    {
        visitor.visit(this);
    }
}

The visitor pattern allows you to add new operations easily, so we can easily add new visitor types.

There are some common visitor types

  • collector visitor (collect and accumulate for return)
  • finder visitor (return immediately when match found)
  • event visitor (stateless, fire events for subset of nodes)
  • transform visitor
  • validation visitor
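
For instance, a finder visitor built on the interfaces shown above might look like this (a sketch; the match condition is a placeholder):

public class FinderVisitor implements Visitor
{
    private ConcreteNode1 match;

    public void visit(ConcreteNode1 node)
    {
        if (match == null && isMatch(node))
        {
            match = node;   // remember the first match only
        }
    }

    public ConcreteNode1 getMatch()
    {
        return match;
    }

    private boolean isMatch(ConcreteNode1 node)
    {
        return true;        // hypothetical match condition
    }
}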

You can simplify visitors that have many similar methods by dynamically assembling them with closures.

Some design principles are

  • use interfaces and dependency injection to reduce coupling
  • favour composition over inheritance
  • separate logic that will evolve at different rates
  • rely on object identity as little as possible
  • leverage static typing and generics

In summary

  • design patterns are a valuable tool for describing real programming problems
  • solutions to all design problems are contextual (dependent on the language, the code base and the developers themselves)
  • use design patterns as a starting point to discuss alternatives

 

Taming the Leopard: Extending OS X the Java™ Technology Way with Tim Gleason and Jonathan Maron

The session started by presenting how you can extend Mac OS X with Java.

There are two different types of plugins

  • quicklook
  • spotlight importers

And also two different types of events

  • spotlights
  • fsevents (monitoring a file system for changes)

The QuickLook plugin (you get a preview of the file without opening it)

  • is written in Objective-C
  • implements a number of standard interface methods
  • renders a representation of the file type (it can be a heavyweight GUI; typically developers seem to leverage native platform display capabilities)

The interface for the QuickLook plugin is as follows

public interface QuickLookGenerator
{
    public String generateHTMLForPath(String path);
}
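
A trivial generator might look like this (a sketch; what HTML you render is up to you):

public class TextFilePreviewGenerator implements QuickLookGenerator
{
    public String generateHTMLForPath(String path)
    {
        // render a minimal HTML preview for the given file
        return "<html><body><pre>Preview of " + path + "</pre></body></html>";
    }
}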

In order to build the plugin one needs to 

  • implement the QuickLookGenerator interface
  • create a jar file
  • place the jar file and its dependencies in a directory
  • update the plugin properties file
  • run "ant generate plugin"
  • copy the generated plugin directory to the /Library/… plugin folder

Then a demo of building a plugin followed.

The Spotlight plugin

  • local box/network wide search/metadata feature
  • extensible
  • Xcode Spotlight plugin template
  • written in Objective-C

The importer interface is as follows

public interface SpotlightImporter
{
    public Map<String, Object> importFile(String path);
}

In order to build the Spotlight plugin one needs to implement the SpotlightImporter interface and follow the same steps as for the QuickLookGenerator plugin.
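
A minimal importer might look like this (a sketch; kMDItemTitle is a standard Spotlight metadata key, but which keys you populate depends on your file type):

import java.io.File;
import java.util.HashMap;
import java.util.Map;

public class SimpleSpotlightImporter implements SpotlightImporter
{
    public Map<String, Object> importFile(String path)
    {
        // expose searchable metadata for the given file
        Map<String, Object> attributes = new HashMap<String, Object>();
        attributes.put("kMDItemTitle", new File(path).getName());
        return attributes;
    }
}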

A similar demo followed of how to build a Spotlight extension.

Spotlight can use a query language to do a search. The query language

  • has an SQL "where clause"-like syntax (wildcards, scoped to certain directories)
  • can query over any file metadata attributes and can be scoped to directories

The Java FSEvent API can receive notifications of file system modifications.

 

The future of Guice by Bob Lee

This was the coolest session of all I have attended. It wasn’t exactly like a session but more of a friendly chat with beers and laughs. Google bought several beers and everyone at the session had the chance to have a relaxed conversation with the Guice guys over a few bottles of beer. 

The chat started with Bob going through the philosophy behind Guice, similar to what he had said at JavaPolis last year. All in all, the philosophy of Guice is

  • back to basics
  • @Inject is the new "new": the brevity of "new", the flexibility of a factory
  • fail early, but not too early (try all possible ways)
  • make it easy to do the right thing
  • types are the natural currency of Java
  • prefer annotations to convention
  • singletons aren't bad – the typical implementation is
  • focus on readability over write-ability
  • maximise the power-to-weight ratio of the API
  • balance – just because you can doesn't necessarily mean you should

The goals for Guice 2.0 are

  • relieving pain points
  • compatibility with 1.0
  • extensibility
  • tool support (last summer they had an intern develop an Eclipse plugin)
  • shooting for a Summer 2008 release

They also announced that Jesse Wilson (from the Glazed Lists project) is the latest member of the team.

The core features of Guice 2.0 will be:

  • mirror API (it’s the Software Service Provider, for people who want to have fairly deep integration with Guice, like writing Guice extensions, or you want to hook up Guice to a 3rd party tools)
  • tool stage (when you call Guice.createInjector() it does a lot of work, this is a mechanism to create injector that is indented for tools to use)
  • binder.getProvider() (you are writing a module and you want another module to provide some other binding. This is an API where in your module you can get a provider and when the application starts up all dependencies are available)
  • class/constructor listeners (the idea is that if you build arbitrarily complex framework like EJBs you can use Guice to be your hosting environment. Guice can give you the opportunity to have a hook and use all the cool ELP stuff with reflection. This is a way to write very powerful aspect code)
  • multi-bindings (it lets you bind the same type as many times as you want. It’s fairly often requested feature)
  • constant type converters
  • @nullable (you can inject null, so Guice knows that null is a valid value)
  • enhanced error reporting
  • specified exception handling (whenever you throw from a provider get method you catch a ProvisionException)
  • module overrides (someone else gave you a module and that module has a dependency you don’t really want, this lets you say “take this module but whenever a certain binding happens I want another binding to happen instead)

Provider methods have been improved 

Old code:

bind(new TypeLiteral<Set<String>>(){})
    .annotatedWith(new HostNameImpl("servers"))
    .toProvider(new Provider<Set<String>>()
    {
        @Inject Provider<User> userProvider;
        public Set<String> get()
        {
            User user = userProvider.get();
            return user.isImpatient()
                ? ImmutableSet.of("azul")
                : ImmutableSet.of("tandy", "commodore");
        }
    });

New code:

@Provides @HostName("servers")
public Set<String> provideServerNames(User user)
{
    return user.isImpatient()
                ? ImmutableSet.of("azul")
                : ImmutableSet.of("tandy", "commodore");
}

(At this point the Guice team complained that there were still so many beers left and we all helped ourselves to some more)

The Eclipse rich tool is almost ready. It lets you search for bindings, and when you give it a type it lets you go to its binding. IntelliJ and NetBeans tools will follow shortly after the Eclipse one is released.

Thoughts about improving Guice beyond the 2.0 version. 

  • compile-time Guice (if you don't like reflection, this project will let you take a Guice module and convert it to Java code that is equivalent but has no reflection)
  • third-party extensions like Warp and GuiceBerry (which uses Guice to inject unit, functional and integration tests), and Peaberry (which extends Guice to support dependency injection of dynamic services and provides OSGi integration out of the box)

(as a side note it was nice to see Joshua Bloch in the Guice session)

Q&As

Can I support custom annotations? Yes, this is where the class/constructor listeners come into play.

Are there any plans to support things like XML configuration? They are going to provide the hooks so we can build our own stuff with them.

Do you take advantage of any class loader hooks? They don't need to, because they don't do anything magical. It's simply reflection.

Do they look favourably on releases happening more often or less often? Frequent releases could be a bad or a good thing. The Guice team does not necessarily push out releases often, and they need to be careful when they release, because they have to be convinced that something is good.

Categories: Java, JavaOne

JavaOne day three

8 May 2008 3 comments

Beans binding good for the heart

The session was presented by Shannon Hickey and Jan Stola, and started with the caveat that the Beans Binding Framework's JSR is not final. It's pretty solid, although not final, but they don't see any fundamental changes being made until it's finalised.

The Beans Binding Framework is all about properties. A property is not just a string; it's an interface that represents the abstract concept of a property, and it allows a property to be defined through various means, for example through standard Java methodology or through an XPath expression.

A property is readable and writable. It's also immutable, it can be reused, and properties themselves are neat objects.

The basic class of the framework is BeanProperty, which resolves properties by using reflection. There is also another class that represents a property, called ELProperty, which represents expression language properties and adds EL capabilities to them. Both of these classes work similarly: if a property or the value of a property changes, they are notified. And they both support maps, via get("key").

There are also binding providers that provide all the functionality needed. Swing has already registered default providers for several of its components, like JTextComponent.text, JSlider.value, AbstractButton.selected and so on.

The Binding describes and maintains a binding between two properties. It is abstract and contains a Converter (the ability to convert values between source and target) and a Validator (which validates changes from the target). The AutoBinding subclass provides automatic updates for the binding.

Then they showed a demo of how to use NetBeans 6.1 (NetBeans 6.1 includes the binding framework by default) and the binding framework.

Then they talked about BindingGroup. A BindingGroup manages a group of bindings, with methods and listeners to track the state of the group, and is a single point to bind and unbind a set of bindings.

What about when we want to bind multiple pieces of data, like a JList or a JTable? Well, the binding framework provides the relevant implementations, like JListBinding, JTableBinding and so on. Example code for JTableBinding (as targeted for Java 7) is the following


Property fnp = BeanProperty.create("firstName");
Property lnp = BeanProperty.create("lastName");

JTableBinding tb = SwingBindings.createJTableBinding(READ, list, jtable);
tb.addColumnBinding(fnp)
    .setColumnName("First name")
    .setColumnClass(String.class);
tb.addColumnBinding(lnp)
    .setColumnName("Last name")
    .setColumnClass(String.class);
tb.bind();

and for a list you will be able to do


Property nameProperty = ELProperty.create("${firstName} ${lastName}");
JListBinding lb = SwingBindings.createJListBinding(READ, list, jList);
lb.setDetailBinding(nameProperty);
lb.bind();

Then they showed a JTable binding example (where the check box binding didn't quite work), then another demo with a car and pictures that correspond to properties, and a final demo with a picture of a train and properties read from the database (this reminded me of a similar demo I saw at Javoxx by Hans Muller).

For more information you can look at Shannon’s blog, at the project’s home page or at the JSR

 

Q&As

Refactoring does not work with the framework yet; they are working on it.

What happens if a property cannot be bound? The system will throw an exception.

 

NLJUG and James Gosling meeting

This is so far the most interesting session I have attended. Well, it wasn't actually a scheduled session, but something that the NLJUG organised. This is actually the third time the NLJUG has met with James Gosling in person and had the chance to ask him questions. What can I say: respect to the NLJUG and Klassjan for giving us such a great opportunity to meet James and ask him questions. These guys are amazing and have a model JUG which I think 99% of the JUGs out there should follow. Well done guys.

Off to the questions:

Question: What do you think about closures? 

Answer: James said that he does not know how to answer that. He does not know what is going on in the JCP. Loads of people were originally afraid of closures, mainly because of performance. But now we have better hardware, and the language progresses, and since closures solve a few problems it could be a good idea.

Question: What about Scala in the JVM?

Answer: Scala is a pretty good language, but used correctly it feels unnatural to several people, because it's a functional programming language. He wouldn't make huge changes to Java to accommodate functional programming languages, since this would turn Java into a brand new language. He closed this question by saying that there are loads of different languages for different things.

Question: Isn’t it time for Java3? 

Answer: It might be; this is what things like Scala could be. It would depend on what kind of compatibility you would want to break if you implemented all the new changes. You have massive APIs that people are addicted to, and these would be massive changes.

Question: What do you think about deprecated methods? 

Answer: There were loads in Java 1.1 (mainly bean patterns), and many APIs like AWT use deprecated methods. There was actually a debate with the AWT team: should we break AWT and build up a new pattern, or leave AWT as it is? Another interesting deprecated method is Thread's stop() method, which is deprecated because almost every time you use it, it's the wrong thing. 99% of the people who have the urge to use the stop method are wrong. He also mentioned that he wanted to rename the stop method with a long name like stop_yes_I_understand_what_this_is_and_I_know_what_i_am_doing().

 Question: What about delivering different versions of Java without the deprecated methods? 

Answer: You could do it at different levels. One is to have the compiler fail the compilation (with something like an "undefined method" error message). But there are so many libraries out there that it's unlikely you will find that you use no library that uses some deprecated method somewhere, even if it's hidden pretty well. It's also depressing how often people use older APIs; in the enterprise world they use old versions because of licensing reasons. Some of these enterprise application contracts could have been avoided with Solaris, because Solaris can pretend to be some other system (like Solaris eight or nine, or even Linux), so vendors could use it to "fool" the application servers about the version used.

Question: Why isn’t aspect oriented programming endorsed by sun? 

Answer: Sometimes it is endorsed and sometimes it isn't. It mostly works pretty well. But then when you look at some of the tools, they make you do things way outside the real nature of the problem. It's so easy to use AspectJ inappropriately; it's completely overshadowed by use cases which are bad. Too many people were abusing it. There are places where people use that kind of technology (code injection) and it works really well, without aspect oriented programming. James mentioned that he likes aspect oriented technology, but you should use it wisely and carefully. He would like to use it, but not to have someone else use it on him.

Question: Do you feel the same about closures? 

Answer: Closures are a lot harder to use. Like operator overloading, there are some really compelling use cases that would matter a lot to 5% of the developers, but not to the rest of them. In C++ they used shift operators for I/O and this was bad. But it feels like these wounds are now healed.

Question: Did you really fail chemistry? 

Answer: Not really but I never did well.

Question: Java is getting complex, it’s a risk, are you worried? 

Answer: Not really. The language hasn't got terribly more complicated; they tend to put a few more features out there for the world to see. If you look at generics or closures, people do not have the same depth when discussing all the corner cases. Look at Ruby: it does not have tight specs. But with generics, most of the scary bits are corner cases. Yes it works, but it was really pretty tough. Still, most people do not have to go there.

All the weirdness, like wildcards, is pretty straightforward. But most of the complexity is in the libraries these days. The only way to make it simple is to do less. One thing people mention is: why do you not delete CORBA? No one is using it. But the truth is that no one admits to using it; they use CORBA all over the place, and if you try to take it out, it's a really bad idea. It's always a very difficult decision how to redo some of the complexities.

Question: What new things in open source community are you waiting closely that might end up in next releases of Java? 

Answer: I don’t really have any kind of list in mind. I wish someone would do a decent HTML rendering engine. Currently all available are either broken or outrageous complicated. The problem with HTML is that there is no specification. There is a document that people call it a specification but it’s not. The open source project he works with mainly the most is open solaris. 

Question: When is it safe to buy an iPhone with a decent Java implementation? 

Answer: There are Java implementations on the iPhone, but not with the GUI. This is Apple's issue. Look at the SDK license; it's a nightmare. It seems they have a compiler, but it will take a stack and compile it into Apple bytecode that's indistinguishable from other bytecode. But there is a code linking issue, and introspection is a bit of a problem. One day in the future. Nothing more to say, other than to look at the Wireless magazine article from a couple of months ago. Go back to Linux.

Question: What about Android?

Answer: They are telling nobody anything. They have written their own VM. At various times Sun had discussions with them about putting a real VM in there. Google has not announced any plans or business models. What are they going to do about it?

Answer: Why can’t you inherit annotations? 

Answer: I don’t have the answer for this. Probably the expert group couldn’t find out how to resolve several semantic issues, but I don’t know. There is a huge redesign in the annotation group that will be available in Java 7. But this is more for people to annotate more.

Question: What about progress on FX? 

Answer: The FX team is very happy with the progress on FX. They now have a real compiler. They are currently expecting to resolve the few remaining naming issues.

Question: Is it stable enough to use in projects? 

Answer: It’s stable enough to play around with it, but in a few months people will be able to use it seriously in production code.

Question: What about the adoption of FX? 

Answer: People have been very enthusiastic so far, but you just have to wait and see. There is real computing underneath, real rocket science below: you have to do really hard rendering with excellent integration of all the accelerators. The new thing about the FX compiler is that it can adapt to different implementations; you can compile it to target different devices like Android, mobile phones and so on. In Blu-ray you really care about gorgeous results, so they have pre-computed images rather than using the graphics images. If you look at the chip sets you can get today (like 3D rendering chips for cell phones), they cost nothing and do gorgeous 3D, so it's easy to explore FX's capabilities.

Question: What about visual editors for FX? 

Answer: We had hoped to have stuff to show at J1 this year (there is a thing called "the distiller" which works well with NetBeans) but it didn't work out. They are working on a visual editor. FX becomes a big deal for more folks, as these days they really care that things look very good.

Question: What about the future of flex, FX etc. Will they replace traditional web? 

Answer: Yes. Ajax had a few issues. You could make interesting stuff, but it was unbelievably hard. And yet, despite the thousands of man hours you put into creating a text editor, for example, it was a pretty lame text editor. The Ajax experience has been very interesting: it demonstrated that people cared about a dynamic, interactive experience, but they cared so much that they made simple things complicated.

Question: What do you think about SunSPOT?

Answer: They are really cool. And there is a really huge business case around them. There are a variety of companies that build things with SunSPOTs. They might be less known, but they are leading the industrial revolution. They get a lot more deployments than you would think, but they are not very visible, so you don't know they exist. The kind of stuff people are doing with embedded devices is very, very different from the traditional use of Java.

Question: How is it with Schwartz being CEO?

Answer: I like Jonathan a lot. He is a cool guy. When you are CEO you have to look responsible in front of Wall Street analysts. He is a bit edgy for them, but he works pretty well and he is really sharp. I like him a lot. By training he is a mathematician. He asked the question "what do you do with a maths degree?"; well, software. Before coming to Sun he ran a small software company. It's nice to have someone who thinks making software beautiful is a good thing.

 

Transactional memory in Java technology based systems by Vyacheslav Shakin and Suresh Srinivas

The main goal of the session was to learn what transactional memory is and how Java can utilise it.

The multi-core era is upon us. In April 2005 Intel introduced its first dual core processor, and in 2007 the trend started accelerating, so the hardware folks have thrown down the gauntlet. Now it's up to the software people to utilise it.

In order to utilise the cores, you have to have concurrency control vectors. Concurrency control vectors provide

  • granularity – coarse grain/fine grain concurrency
  • scalability- scale up/down
  • partitioning- task level parallelism, data parallelism, main core/accelerator

Granularity deals with 

  • coarse grain concurrency, which is used in J2EE transactions (every connection is processed concurrently) and application level concurrency (virtualisation, multiple JVM instances)
  • fine grain concurrency, which uses transactional memory and java.util.concurrent data structures

Transactional memory is a sequence of memory operations that execute completely or not at all. There is

  • software transactional memory – a software-only implementation with new language constructs
  • hardware transactional memory – a hardware-only or HW/SW implementation

Software transactional memory is implemented completely in software, using language extensions (like "atomic" or "retry"). The software system

  • guarantees atomicity
  • discovers and guarantees concurrency
  • coarse grain programming and fine grain performance

An example of how the atomic extension could be used (instead of a synchronized block) is shown below.


public Object get(Object k)
{
    atomic {
        return map.get(k);
    }
}

Hardware transactional memory works as follows

  • the hardware views individual threads as executing a series of reads/writes to memory
  • in a transaction we want the series of reads/writes to occur at a single instant in time

In order to design software transactional memory we need to think about

  • isolation (weak/strong atomicity)
  • data management (in place/buffered)
  • conflict detection (eager, lazy, granularity, policies)
  • language integration (nesting, native code, I/O, exceptions, library/language extensions)

Isolation means that each transaction must appear to execute in complete isolation. We have weak atomic systems (isolation only relative to other transactions) and strong atomic systems (isolation relative to both transactional and non-transactional accesses). Weak is not a strict subset of strong.

There are a number of software transactional memory implementations.

Transactions on the Java platform would define the following extensions, built on prior research in other systems:

  • atomic – execute a block atomically
  • retry – block until an alternative path is available
  • orelse – compose alternate atomic blocks
  • tryatomic – atomic with an escape hatch
  • when – conditionally atomic region

In order to design hardware transactional memory we need to think about existing software usage: the industry has invested a lot in the existing programming model for Java, and changing the programming model is hard.

Sun wants to implement a Java based system that utilises hardware transactional memory (isolation and atomicity) with no changes to software. They use speculative lock elision, which improves concurrency by speculatively executing locking code and rolling back on actual data contention, and speculative execution, which improves single thread performance by speculatively executing hot paths and rolling back on assumption violations.

Software transactional memory results:

  • smaller workloads are easier to transactionalise (larger workloads are harder due to I/O, native code, or weak atomicity)
  • it has subtle semantic challenges, but also good scalability under low contention and for micro benchmarks

Both the speculative lock elision and speculative execution prototypes run realistic workloads using future proposed instructions and yield 10% better performance.

In summary

  • transactional memory is a tool for concurrency control in multi core systems
  • software transactional memory is a software technology for developing new parallel software
  • hardware transactional memory is a hardware technology that both existing and new software can utilise to improve performance as well as concurrency

Java Management Extensions technology update by Jean-François Denise and Eamonn McManus

The session started with a slide explaining what is new in JMX 2.0

  • namespaces
  • event services
  • miscellaneous changes

JMX has been a standard part of Java since version 5 and can be used to manage and monitor running applications.

At this point they showed us a demo of how to use JConsole and its functionalities.

After the demo they talked about MBeans (managed objects), which can link to other objects or even manage themselves. An MBean represents a resource. An MBean can have (let's say in a poker game)

  • attributes (player count, maximum connected players)
  • operations (ejectPlayer, addPlayer)
  • notifications (com.example.player.joined – a notification every time a player joins) 

There are several kinds of MBeans

  • standard
  • MXBean
  • dynamic
  • model
  • open

The simplest are standard MBeans and their cousins, MXBeans.

At this point they went through some code examples of how to create a managed bean.
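
The gist of those examples, for the poker resource above, is something like this sketch (a standard MBean's management interface name must end in "MBean"):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// the management interface: its name must be <ClassName>MBean
public interface PokerGameMBean
{
    int getPlayerCount();            // attribute "PlayerCount"
    void ejectPlayer(String name);   // operation
}

public class PokerGame implements PokerGameMBean
{
    public int getPlayerCount() { return 0; }
    public void ejectPlayer(String name) { /* ... */ }
}

// registering it with the platform MBean server
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
mbs.registerMBean(new PokerGame(),
                  new ObjectName("com.example:type=PokerGame"));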

In order to make an application JMX-connectable in Java 5 you have to run it with -Dcom.sun.management.jmxremote. In Java 6 this has changed, and it just works (if you are connecting from the same machine).

There are two relevant JSRs, JSR 255 and JSR 262, and they expect that Java 7 will include both.

The namespaces mentioned in the beginning mean that you can structure the MBeanServer hierarchically. You can have several applications running in the same JVM, and you don't want them interfering with each other, so you want each to have a separate MBean server assigned to it. With namespaces this is possible, and we can gather these MBean servers together and control them from a higher-level MBeanServer.

Another feature of namespaces is cascading, which means that the MBeanServer for a namespace can be remote, using "subagents" from the "master agent". This would be useful for clustered application servers: each clustered instance can have its own MBean server, but also a parent MBean server controlling the subordinate servers.

But there are some cascading issues, like security. If I connect to a master agent, what permissions do I need to connect to subagents? This can be solved by the master propagating the credentials to the subagents. And what happens on network failure? What do we do if the connection to the remote MBeanServer fails? Do I still see it in the master server? Can I reconnect when the connection comes up again?

We can have virtual MBeans. The standard implementation has one Java object for every MBean, so we do whatever is necessary to simulate the existence of an object. This is very appropriate for large numbers of objects that come and go very quickly.

We can have client contexts. We can use locales for clients to communicate with the server in the jmx.context// namespace. A call to getAttribute("jmx.context//locale=uk//com.example…", "bar") becomes a call to getAttribute("com.example…", "bar"). An MBean can then return localised strings in attributes.

The event service mentioned in the beginning is defined in JSR 160 and allows notifications to be received remotely.

Several times we have very loose coupling between client and server, and when there are many notifications a client can miss some (since an MBean does not know who is listening to it). The event service fixes these problems without changing the client/server protocol, and also allows listening to a set of MBeans in one operation.

The event service also allows you to use a custom transport; it allows you to change the transport on which you receive notifications.

Miscellaneous

  • you can define MBeans with annotations
  • you don't have to define the interface; you just annotate the implementation class with the @MBean annotation
  • there are annotations for descriptions, with @Description
  • there is a query language
  • queries were inspired by SQL but previously had to be constructed with code
  • there is a new SQL-like language (QueryExp query = Query.fromString("connectTime > 60"), for instance)
  • the code is much easier to read/write and provides a simple way for clients to input user queries

Web services connectors

  • integrate JMX with HTTP/web architectures
  • allow JMX to interoperate outside of the Java world

Java clients can use the JMX Remote API. The protocol choice is nearly transparent, and it fits into the JMX Remote API open architecture. All you have to do is change the service URL: service:jmx:ws://…

This fits into web architecture and is firewall friendly. It also provides interoperability.

The web service management stack is SOAP based and relies on an existing set of web service specifications. Adoption is growing (it's supported in Windows Vista, XP SP2 and 2003 Server), and it is the building block for the management of virtualisation (Sun, MS, VMware, Novell).

JSR 262 is about mappings. It defines an XML representation for Java types exposed in MBean interfaces (primitive and boxed types, other Java types like URL, JMX open types, and some collections such as List, Map and Vector). It defines a mapping from exceptions into web service management faults. Security relies on the same security model, using HTTP basic authentication and HTTPS encryption.

Finally there was a poker demo written with JMX and JavaFX as the front end.

If you want to learn more visit java.sun.com/jmx

 

Q&As

Is there any facility to persist JMX state? No, you will have to program it yourself.

 

Creating Games on the Java™ Platform with the jMonkeyEngine with Rikard Herlitz and Joshua Slack

This was mainly a friendly discussion and a presentation of games that are using the jMonkeyEngine and are written in Java.

People compare Java games to top quality games, and they don't believe that games that perform as well as non-Java games are actually written in Java. The evolution is best embodied in an example: jMonkeyEngine.

jMonkeyEngine is an open source graphics API. It is the bedrock technology for building 3D games in Java, and you can embed it into Swing and AWT if you want.

There were presentations of several games and projects built with the engine.

In any language it’s the hardware that does all the work. The language just sets up the hardware and instructs it what to do.

Project Wonderland (not really a game, but the platform could be applied to gaming technology) is an open source project. They are moving to jMonkeyEngine for rendering: they want to get all the capabilities that jMonkeyEngine supports, and the jMonkeyEngine community and tools are in abundance. The jMonkeyEngine edition will be released in the Fall this year.

Then Doug (the person at Sun responsible for 3D) said that they are re-writing the Java 3D components. They are doing some interesting things that have not been addressed yet in the 3D industry. Client technologies are becoming multi-core, and they will build a games engine that people have never seen before. It would be a platform that Wonderland could be plugged into. They are sort of finalising the architecture before it goes public.

 

Java class loader rearchitected with Iris Clark and Karen Kinnear

The current problem with class loaders is that class loader deadlock can occur. The JVM spec does not specify that class loaders have to ask the parent before they load a child class; technically speaking, a class loader implementation can delegate to anyone. But the class loader spec does specify parent delegation, and this is what has been implemented in the SDK.

A deadlock may occur when a multi-threaded custom class loader uses a non-hierarchical delegation model. It frequently occurs in app servers or code that is trying to run multiple versions of the same application classes. Examples of class loader deadlock on recent versions of the JDK are scarce (which might be because users have written their own workarounds, or because the JVM changes already in production have reduced their frequency).

JSR 277 specifies the module system, which solves the namespace and versioning problems associated with distributing complex applications. Each instance of java.module.Module provides its own class loader: a) when the application's main class is in a Java Module, b) the system class loader is the corresponding Module's, and c) module class loader authors must extend java.module.Module.

The requirements for the re-architecture are

  • they must allow non-hierarchical class delegation topologies without deadlock
  • they must be backwards compatible
  • they must meet the JSR 277 needs
  • it must be possible to define guidance for migrating custom class loaders to take advantage of unsynchronised behaviour

VM changes that are already in production

  • bug 4699981: ClassCircularityError incorrectly thrown (occurs when two threads attempt to load the same class even if there is no circular dependency; fixed in JDK 6 and JDK 5u8)
  • finer-granularity locking not dependent on the ClassLoader object (the VM was updated to ensure class loading is thread safe; fixed in JDK 7 and JDK 6u4)
  • bug 4470034: additional diagnostic information for NoClassDefFoundError (an exception chain mechanism to provide root cause data; fixed in JDK 7 and JDK 6u4)

ClassLoader.loadClassInternal() is called by the JVM after acquiring a lock on the ClassLoader object. As of JDK 6u4 the VM has a "parallel" mode for the bootstrap class loader, in which the ClassLoader object is no longer used as a locking mechanism by the VM. The "default" mode continues to lock as before and invokes loadClassInternal().

Sun asked if they can delete ClassLoader.loadClassInternal(). If yes, it will help resolve the number one SDN bug, 4670071 "loadClassInternal(String) is too restrictive" (817 SDN votes as of 28 April 2008; originally filed in 2002 against JDK 1.3.1_01, largely in response to problems running JBoss; potential workaround in JBoss 3.2.3?). The last constructive SDN comment was posted in late 2004 and Sun thinks that removing loadClassInternal() would be the most effective solution.

The proposed API change is the addition of a ParallelClassLoader interface, which can be used as shown below


loadClass
{
  if (this instanceof ParallelClassLoader)
    loadClassUnsynch(name, false);
  else
    // load the class as you normally do
}

loadClassUnsynch… is the same as loadClass but it is not synchronised.

If you have implemented a custom class loader you wouldn't have to change much: just implement ParallelClassLoader and update your existing findClass() method to be thread safe. The second solution would be to convert to the module system (the module class loader is necessarily a ParallelClassLoader; custom code will still need to be thread safe).
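
As a rough sketch of the first migration path (my example; ParallelClassLoader is the interface proposed in the session, so the final API may well differ), a parallel-capable loader just needs a thread-safe findClass():

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MyLoader extends ClassLoader implements ParallelClassLoader
{
  // thread-safe store of class bytes; no shared mutable state elsewhere
  private final Map<String, byte[]> classBytes = new ConcurrentHashMap<String, byte[]>();

  protected Class<?> findClass(String name) throws ClassNotFoundException
  {
    byte[] bytes = classBytes.get(name);
    if (bytes == null)
      throw new ClassNotFoundException(name);
    return defineClass(name, bytes, 0, bytes.length);
  }
}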

Additional library changes

  • synchronisation schemes in some library classes
  • remove usages of exceptions for message passing
  • provide APIs for additional diagnostic info
  • update documentation
  • modernise language use in the code (generics, varargs)
  • other bugs and RFEs


Categories: Java, JavaOne

JavaOne day two

7 May 2008 4 comments

General Session

The general session started with a hip-hop/rap group singing and dancing on the stage. I am not much into rap but I guess most people found it amusing.

Next James Gosling went on the stage throwing some t-shirts around using a sling!

Unfortunately I didn’t manage to get any, hopefully next time.

The real session started by announcing rule number one of J1: "don't be shy". Even if you think a question is stupid, there are several other people in the same room who want to ask the same thing.

Then they started talking about the Java chips installed in our passes. Each pass is actually a JavaCard. There are sensors installed everywhere in the Moscone centre, so every time we enter a room it senses where the attendees are going. All our badges are RFID (Radio Frequency Identification) enabled, and Sun has built a very smart system around this to control the sessions. (Note: you have to pre-register for sessions, otherwise you run the risk of missing them.) Devices read information from the JavaCard before you enter a session room. If the information matches the room you can enter; otherwise you have to wait in the queue for an available seat.

The Java runtime and the JavaCard are installed in many systems around the world: they are in London's Oyster card and they even run Istanbul's transport system.

At this point they showed the JavaOne Open Possibilities video

After that they presented a JavaFX demo that takes data from sensors and displays it on the screen. They measured all the energy used in the general session room (they actually measured the carbon dioxide generated by everybody). The same devices are also used in Tokyo to measure all the energy needed to operate elevators.

Richard Green went on stage and explained how the JavaFX technology will prevail. Traditionally there used to be monolithic and cumbersome systems; think, for example, of an airline application with old green-screen terminals. Only a few people knew how to operate them. Today the world has changed and things are completely different. You have better GUIs, more reusable and user friendly, with dynamic actions. People can actually choose their seats, select their type of meal and so on. Sun has realised that people need simple, intuitive and compelling interfaces and is working in this direction.


Ian Freed from Amazon went onto the stage and started presenting the Kindle. Years ago the printing press was an amazing revolution and the sense of using a book was unique; now things have changed and you can get interactive books off the internet. Kindle lets you do that. It does whatever a real book does and also goes beyond the book. You can have all the books you need, newspapers, articles, journals, even blogs inside a Kindle. Kindle also provides you with recommendations and everything else you can do with the Amazon web site. And it uses Java to do all this.

According to Ian, Java was selected for the Kindle for two main reasons: it has a huge developer base/community, so it was easy to implement and install the software, and it was easy to emulate the application on a desktop PC before the Kindle hardware came out and then deploy and keep developing on the device once it was released.

Next on the stage was Rikko Sakaguchi, SVP at Sony Ericsson, who talked about how Java helped them quadruple their growth in the past six years. They run Java on their mobile devices. Java is actually their core strategy, since it is the key factor behind their success. They chose Java because of the rich user experience and its portability; it runs everywhere.

Richard Green talked again about RIAs and how you can create all these applications. In the traditional model each one of us used to be and do everything: the designer, the developer, everything. But we have moved from the green-screen model to compelling designs. We want agility and the right tools to make all the small parts come together. Everything is about the number of iterations, implementations, tests and evaluations per unit of time. This is critical because creating all these used to take too long; now it's minimised. Small teams and small groups of people are coming together, building and extending on the work of others.

Next it was the turn of Nandini Ramani to get on the stage. She showed a demo of Connected Life, which unifies all parts of your favourite applications in one application. She tried to show how to use Connected Life but unfortunately the demo crashed. After a quick restart the demo went smoothly and she showed us how you can integrate and embed applications (Twitter, Flickr, Facebook etc.) into Connected Life. And every single bit of Connected Life is written in JavaFX.

A very interesting feature is that the application can be dragged out of the browser and onto the desktop. She tried to do that but unfortunately the demo crashed one more time. Then there was a demonstration of the same application on a mobile phone. It looked and behaved exactly the same, as long as the phone had the capability of supporting it. Then Nandini and Richard tried to take a picture and use it with the demo but unfortunately it crashed yet again. A pity, because I really liked the demo.

A Flickr demo written in JavaFX followed, about how you can write an application that fetches pictures from Flickr based on given tags. It showed how easy it is to implement applications that utilise high-performance audio and 3D movement rendering. The interesting thing is that when the tag "jamesgosling" was entered, Flickr returned the picture I had taken with Paris and James Gosling! So Paris and I were famous for a few minutes, since twelve thousand people were looking at us. Fifteen minutes of fame, as Andy Warhol put it :)

Next in line was a JavaFX demo (the same Connected Life application) running on an Android emulator. This showed the power of JavaFX: it unifies everything that's shipping in a common runtime and brings all the devices together under a common umbrella.

JavaFX also brings a new advertising model. It gives developers a two-way conversation with their audiences.

Note also that all current video players that support Blu-ray already run Java. And since Richard thinks that Blu-ray will prevail, Java has already prevailed.

Now time for some numbers

  • First taste of JavaFX is coming in July
  • JavaFX 1.0 will ship in Fall 2008
  • JavaFX Mobile will ship in Spring 2009
  • GlassFish's kernel is 98K and loads in one second. GlassFish follows a modular design; it uses modules built around the kernel.
  • GlassFish downloads went up
  • MySQL has 65.000 downloads per day (50.000 before it was acquired by Sun)
  • NetBeans has 44% year-over-year growth in active users
  • 48.000.000 JRE downloads per month
  • Java now ships with Ubuntu and Red Hat.

Next in line was Project Hydrazine. Hydrazine is a project for content creators. It can help you find, merge, deploy and share services. It allows you to bring services together and make them available to everyone interested in using them.

Jonathan Schwartz went on the stage next. He said that there is a battle for the next development platform. Sun is ready and has four fundamental differentiating activities in mind:

  • they want to reach more devices on the planet
  • they have to make the platform compelling for developers (fast, high performance, accessible to developers and accessible to consumers)
  • they are placing a stake in the ground (Java will give developers insight into the users of their content, to understand who is using it and how; they have to understand the users if they are to build a subsidised application)
  • it's all going to be free (freely available, free of charge, free philosophically)

Next it was Neil Young (yes it was actually Neil Young himself).

Neil wanted to bring to the world everything he has done in the past years and make his music available to vast audiences. Originally digital sound wasn't good enough, but now with Blu-ray technology you can get far better sound. Neil needed a technology that was flexible, could be dropped anywhere in the timeline and have a representation of the music of that time. He chose Java for this.

At this point a very realistic demo application was shown that could play Neil's music from archives on a Blu-ray disc. The GUI was amazing and very easy to use. You could navigate to sounds, pictures and video and play all of them (you could actually see vinyl records spinning while the music was playing). It made it look very real. Neil wanted to do this back in the '90s but you couldn't do it with DVD. They were defeated by the technology back then, but now that the technology has progressed they were able to do it using Blu-ray and Java.

If you need more information about this project have a look at Neil Young archives.

Lastly there was a slide about openeco.

Designing an MMORPG with Project Darkstar

This session was presented by Jeffrey Kesselman.

As an avid gamer I wanted to attend the Project Darkstar session. Project Darkstar is a platform that helps you write Massively Multiplayer Online Games (MMOGs). Strictly speaking, online games today are not massive, but the potential audience is, as it can grow exponentially. The big advantage of MMOGs is that they allow players to engage with content from a variety of locations and devices (desktop PCs, mobile devices, set-top boxes etc.).

It takes anything between 20 and 30 million dollars to develop a good MMOG and this is what keeps the industry from growing. When you spend that much money you don't want to take many big risks; you want to use things that are already tried and tested, things that are already successful.

Project Darkstar is open source under the GPL license and is completely free. All you need to do is get a box, install Darkstar, plug it into your system and you are ready to go.

Sun Game Server (SGS) wants to solve the problems corporations encounter when developing MMO games. They want to make practical and scalable online game content in a hostable model (to enable better games with more and smaller groups of developers), and to change the current state of MMOs, which exhibit limited persistence and fault tolerance.

The purpose of SGS is to make distributable, persistent and fault-tolerant game servers which are easy and economical to administer. Corporations should not have to spend vast amounts of money to operate game servers any more.

They also want to make server side code reliable for the developer and to present a simple single-threaded event-driven programming model. The developer should never have his code fail due to interactions between code handling different events.

For the operator they aim at a single point of administration and easy load balancing across the entire data centre.

SGS is designed much like a 3-tier architecture.

Tier 1 is the communications tier that takes care of peer-to-peer and publish/subscribe channels. It defines a pluggable transport with reliable/unreliable and ordered/unordered byte packet delivery, and it's optimised for the lowest worst-case latency.

Tier 2 is the execution kernel; it executes tasks in response to events and is analogous to a J2EE application server. It is stateless and tasks appear to be single-threaded. It is also optimised for the lowest worst-case latency.

Tier 3 is the object store; it stores objects. It's lightning fast, highly scalable and analogous to a database layer. It is built to scale and all application state lives here, in an in-memory data store with secondary storage backup. It's transactional and fault tolerant but it's not relational. If a deadlock happens at tier 2 it is detected here. It's also optimised for the lowest worst-case latencies.

SGS supports two kinds of events: system and manager events. System events are generated from method calls like initialize() and loggedIn(), which come from the Project Darkstar infrastructure. Manager events are generated by Darkstar managers (managers are pluggable additions to the infrastructure; they are like drivers). All events result in a task being queued for execution, and all of them have a listener interface associated with them.

A task is a threaded execution and a transactional (ACID) context. Task execution appears to be single-threaded (but in reality it isn't). Tasks acquire read and write locks on game objects and can abort and reschedule if a conflict occurs. Tasks scale out horizontally over the back end, but this is not a detail we need to think about; just remember that fewer object access conflicts result in greater scalability.

Task execution is mostly unordered and parallel. However, relative task ordering for a single user's input is assured (actions get executed in the order they arrive at the server, and an event generated by the user won't start processing until all processing of earlier events has successfully completed). Parent-child task ordering is also assured (a task may use the TaskManager to queue child tasks, and a child task won't start processing until its parent has finished).

SGS managers present an API to the application and generate events. There are three standard (visible) managers:

  • task manager
  • data manager
  • communications manager

Managers are really just facades. They are there to provide services to other services.

Applications are made of ManagedObjects, which

  • live in the object store
  • are fetched and locked by tasks
  • are written back at the successful termination of an event
  • are seemingly single-threaded in execution
  • are referenced through ManagedReferences
  • are normal Java objects

ManagedReferences come from the DataManager.

There are two types of communication; client/server (direct communication, client sends commands, server responds) and communication channels.

At that point there was an example of how to implement a sample game (DarkMMO). Before any MMO implementation we have to consider technical challenges: set design limits and consider the results of latency spikes. Another consideration is how to deal with lag: do we respond locally immediately and check the results with the server for correctness later, or do we allow others to lag on the screen?

Consider scaling issues early in the design process. Set design limits; n-squared is the enemy (take the game world and divide it into pieces so the world is zoned; each zone is tile-based and we can limit awareness to the size of one tile).

ManagedObjects can be defined as the entities in a game. Entities that can be identified are players (representing a user's account), characters (which can interact with the environment), monsters, NPCs, placeable objects (they don't move around but can be moved by players or monsters), walkways etc. In order to map these entities to events, classes such as ManagedObject, ClientSessionListener or Task should be used.
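
As a minimal sketch (my example, written against the Project Darkstar server API in com.sun.sgs.app; the class and field names are hypothetical), a game entity is just a serializable managed object:

import java.io.Serializable;

import com.sun.sgs.app.ManagedObject;

// ManagedObject is a marker interface; managed objects must also be
// serializable so the object store can persist them.
public class GameCharacter implements ManagedObject, Serializable
{
  private static final long serialVersionUID = 1L;

  private final String name;
  private int x, y; // current tile position

  public GameCharacter(String name)
  {
    this.name = name;
  }

  public void moveTo(int newX, int newY)
  {
    // the task invoking this holds a write lock on the object;
    // the change is written back when the task commits
    this.x = newX;
    this.y = newY;
  }
}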

When the space is partitioned and the users/monsters move, large structures must be searched on every position change and must be frequently modified.

In order to grid the zone, the cell size should be equal to the maximum view distance (what can't be seen does not matter) and the character should see at most 4 surrounding cells at a time.

Other issues we might encounter are position differences in combat. The exact position doesn't really matter but the range does (the distance between players can appear different on different screens). During a fight we have two possible cases: hand-to-hand and ranged combat.

With hand-to-hand combat we have a fixed close range: if the server decides that the attacker is at close range, it can lock the defender in place, move the attacker to within melee range and start the fight. But we might encounter issues here if we have obstacles or traps (a possible solution would be to auto-detect them and abort the attack).

In ranged combat viewers can't tell the exact distance, but as long as the server thinks the target is in range it can allow the attack. The server should be able to check blocking.

DarkMud (a demo with artwork from Neverwinter Nights) followed. The quality is not very good but this is the best I could do.

Project SunSPOT: a Java technology enabled platform for ubiquitous programming

This session was presented mainly by Vipul Gupta and Arshan Poursohi.

SunSPOTs are devices that have sensors attached and can be used to measure several things. A distinct characteristic of SunSPOTs is that they are self-healing when they are in a network: if a node is lost, the system recognises it and recalculates the shortest path.

They are flexible (an all-Java software stack) and there is also access to the internet through the base SunSPOT or through an HTTP proxy.

An integral part is the strong security. Since SunSPOTs communicate wirelessly, data can be intercepted by third parties. SunSPOTs use efficient cryptography with simple, user-transparent key management (they use elliptic curve cryptography).

The code is deployed securely. A Java project is compiled into a jar file and then deployed as a suite. Due to limited capabilities, the SunSPOT only verifies the digital signature of the jar file, not the bytecode itself.

Each user's SDK has a public/private key pair. The SunSPOT stores the "trusted" key of its owner and compares it with its own key. It sends the public key back to the user, the user signs the key and sends it back to the SunSPOT, and the SunSPOT matches it against the private key (this all happens transparently to the user). SunSPOTs use SSL with HTTPS support for web communication.

Then Randy Smith presented Solarium and demonstrated how it can be used with SunSPOTs by having two SunSPOTs communicate with each other.

Ron Goldman talked about how one can emulate SunSPOTs. SunSPOTs are not cheap, and since Sun cannot give them away for free it has developed an emulator that behaves exactly like a SunSPOT. For example, virtual SunSPOTs can send radio messages to each other or use a sensor panel. One can use Solarium to create a virtual SunSPOT and control it from there.

Arshan Poursohi talked about SunSPOT advantages: the capable hardware makes prototyping very easy. Then he talked about Sun's Blackbox project.

Sun has also developed an outreach programme granting SunSPOTs to students and educators worldwide, and it participates in larger programmes to reach out to universities and professors.

Yggdrasil, another project in Brazil

For more information visit sunspotworld.

Q&A

How reliable is the emulator? Can I be 100% sure that an application running on the emulator will work the same on the real device? Yes.

Can I get an internet connection? Yes, as long as there is a base SPOT connected to the station; there is communication through the USB connection.

Who owns the SunSPOT API? It's under a GPL license; if you modify it you have to redistribute it. Any new applications developed are owned by the developers.

How is Sun making money from SunSPOTs? By promoting Java, and through the data generated by these devices; somebody needs to process all this data and Sun is very good at that.

JavaScript programming language: the language everybody loves to hate

In the past I have tried hard, very hard, to like JavaScript, but to no avail. This time I said to myself that I would attend a JavaScript session; maybe I was wrong and I needed to give it an n-th chance. I also thought I could get ideas for using a scripting language at work. Unfortunately, not even this time did I manage to like it.

The session started with Roberto Chinnici announcing that a Google search for "(x) sucks" returns 625000 results for JavaScript, the highest of all programming/scripting languages. But I want to be fair and objective, so I will describe the session from the notes I took.

Roberto said that JavaScript was frozen too soon and came with a puny standard library. This gave JavaScript a bad name that persists until now.

JavaScript in reality is a functional programming language. Functions are first-class objects and they are closures. Functions can be anonymous and can be higher-order, meaning that it's quite normal to pass functions around and return functions from within other functions. For example


function compose(f, g)
{
  return function (z)
  {
    return f(g(z));
  }
}

Closures capture variables in their environment


function createCounter()
{
  var i = 0;
  return function()
  {
    return ++i;
  };
}

Although JavaScript provides several features from other modern languages, there is a catch with variable declaration. There is no warning for multiple variable definitions. So if you define a variable and then redefine it later in the script (it does not matter in what scope), JavaScript does not warn you and silently ignores the earlier definition.

If you need a good library to use with JavaScript, have a look at the Functional library (http://osteele.com/sources/javascript/functional/)

In JavaScript there is no tail recursion, and this runs a very serious risk of blowing the stack. Also, the runtime is not particularly optimised for heavy use of HOFs (higher-order functions). But more sophisticated compilers are on their way.

In OO JavaScript, objects

  • exist on their own account
  • have a bunch of named properties
  • (almost) every one of them has a prototype
  • prototypes form chains
  • property lookups follow prototype chain
  • Object.prototype is the default “root” (Object.prototype.prototype = undefined)
  • this is a special variable, like arguments

The secret to not going wrong with objects is to forget about constructors and classical inheritance and use Douglas Crockford's object function instead. Object by all accounts should have been a primitive but it isn't, and you need to build chains of objects using supporting creation functions.

Defective Java code: turning WTF code into a learning experience

William Pugh started by showing some code from The Daily WTF.


char c = (char) inputStream.read();
if (c == -1)
  break;

But the exact same code can be found in several other implementations, in sun.net.httpserver.ChunkedInputStream, in org.eclipse.jdt.core.dom.CharacterLiteral, in org.eclipse.pde.internal.core.content.BundleManifestDescriber and many more.

The above code will fail because char is the only unsigned primitive data type, and therefore its value will never be -1. Also, casting an int to a char only works for ASCII characters, not for all of Unicode. So the moral of the story is

  • methods that return -1 for EOF can be tricky (you need to check for -1 before doing anything with the result; see the sketch after this list)
  • code is rarely tested for unexpected EOF (you may need to use a mocking framework)
  • don't assume that any mistake is so unique that no one else could have made it.
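
The conventional fix (my sketch, not from the session) is to keep the result in an int until after the EOF check:

int b = inputStream.read(); // read() returns an int precisely so that -1 can signal EOF
if (b == -1)
  break;
char c = (char) b; // still only valid for single-byte data; use a Reader for Unicode text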

Second WTF was the following code (found in Jetty-6.1.3 BoundedThreadPool)


private final String lock = "LOCK";
...
synchronized (lock)
{
...
}

The catch here is that String constants are interned and shared across the JVM, so completely unrelated code could end up locking on the very same object. If you want to synchronise, do it on a private raw object.
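
Something like this (my sketch) avoids the problem:

public class BoundedWorker
{
  // a private lock object that no other code can accidentally share,
  // unlike an interned String constant
  private final Object lock = new Object();

  public void doWork()
  {
    synchronized (lock)
    {
      // critical section
    }
  }
}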

Another WTF was the following code


private Long myLong = new Long(0);

synchronized(myLong)
{
  Long l = myLong.longValue();
  myLong = new Long(l.longValue());
}

The above code does not provide mutual exclusion. One thread can synchronise on an old value, another thread on a newer value, and both can be in the critical region at the same time. This could result in duplicated values being handed out. The Long objects created with new here aren't shared, but be careful with a "remove explicit autoboxing" quick fix, which would make small values shared.

The key mistake here is trying to synchronise on a field (the Long in our example). That cannot work: you synchronise on the object the field currently references, and if the field is reassigned, different threads may lock different objects.
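
A safe alternative (my sketch, using java.util.concurrent) side-steps locking altogether:

import java.util.concurrent.atomic.AtomicLong;

public class Counter
{
  private final AtomicLong value = new AtomicLong(0);

  public long next()
  {
    // atomic increment; no locking on a mutable field required
    return value.incrementAndGet();
  }
}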

The next example was about synchronisation on a getClass() method call (example taken from java.awt.Panel in JDK version 1.6.0).


String constructComponentName()
{
  synchronized (getClass())
  {
    return base + nameCounter++;
  }
}

This is a mistake since subclasses will synchronise on something else (their own Class object). The best way to ensure synchronisation is to use an AtomicInteger or to synchronise on Panel.class; never synchronise on getClass(). In general it's good to follow these guidelines:

  • do not think about what to synchronise on
  • instead, think about who to synchronise with
  • avoid synchronising on something that unrelated code can synchronise on

Next example was the following code


com.sun.corba.se.impl.io.IIOPInputStream:

protected final Class resolveClass(ObjectStreamClass v) throws IOException, ClassNotFoundException
{
  throw new IOException("method resolveClass not supported");
}

The class above extends java.io.ObjectInputStream, and the intention was for resolveClass to override the method of the same name in ObjectInputStream.

The explanation is to be found in the following code


java.io.ObjectInputStream:

protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {...}

com.sun.corba.se.impl.io.IIOPInputStream:

protected final Class resolveClass(ObjectStreamClass v) throws IOException, ClassNotFoundException {...}

Notice that


com.sun.corba.se.impl.io.IIOPInputStream:

import com.sun.corba.se.impl.io.ObjectStreamClass;

protected final Class resolveClass(ObjectStreamClass v) throws IOException, ClassNotFoundException {...}

The parameter types are different: they have the same simple name but belong to different packages (the import pulls in the CORBA ObjectStreamClass, not the java.io one). The protected final Class method therefore does not override the method in the superclass.

The moral is

  • @Override is your friend: do an autofix to apply it uniformly throughout your code base, and if it doesn't appear somewhere it should, something is wrong (a demonstration follows this list)
  • minimise overloading of simple names; autocompletion makes it far too easy to pick the wrong one
  • avoid reusing the simple name of a superclass: if you have a class alpha.FooBar, do not define a subclass beta.FooBar, as there are too many possibilities for collisions.
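
A tiny demonstration (hypothetical classes, my example) of why: with @Override the compiler rejects an accidental overload that was meant to be an override, so the code below deliberately fails to compile.

class Base
{
  protected void handle(java.util.List<String> items) { }
}

class Derived extends Base
{
  @Override // javac: method does not override or implement a method from a supertype
  protected void handle(java.util.ArrayList<String> items) { }
}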

Formatting a date


org.jfree.data.time.Day:

protected static final DateFormat DATE_FORMAT...

return new Day(Day.DATE_FORMAT.parse(s));

The catch here is that DateFormat is not thread safe: it stores the date being formatted/parsed in an internal field. If two threads simultaneously parse or format, you may get runtime exceptions or incorrectly formatted dates. This is not just a theoretical possibility; it's easy to replicate with test code and has happened in the past, causing failures in the field (this was actually reported to FindBugs by someone who was bitten by this bug).

The moral is

  • DateFormat (and its subtypes) is not thread safe (a workaround sketch follows this list)
  • immutability is your friend
  • when designing an API, understand your use cases: if the class feels like an immutable constant, people will use it like an immutable constant, even if the Javadoc says not to.
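
One common workaround (my sketch, not from the session) is to give each thread its own DateFormat instance:

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

public class SafeDateFormatting
{
  // each thread gets a private DateFormat, so the internal mutable
  // state is never shared between threads
  private static final ThreadLocal<DateFormat> FORMAT = new ThreadLocal<DateFormat>()
  {
    protected DateFormat initialValue()
    {
      return new SimpleDateFormat("yyyy-MM-dd");
    }
  };

  public static String format(Date date)
  {
    return FORMAT.get().format(date);
  }
}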

Last code sample had to do with equality


java.sql.Timestamp:

public boolean equals(java.lang.Object ts)
{
  if (ts instanceof Timestamp)
    return this.equals((Timestamp)ts);
  else
    return false;
}

The requirements for equals are

  • equals(null) returns false
  • if x.equals(y) then x.hashCode() == y.hashCode()
  • equals is reflexive, symmetric and transitive

Date d = new Date();
Timestamp ts = new Timestamp(d.getTime());
System.out.println(d.equals(ts)); // true
System.out.println(ts.equals(d)); // false

The difference above arises because equals() here is not symmetric. If you ask a Date whether it's equal to a Timestamp, it will return true: Date compares only the millisecond value, which is the same, so Date considers them equal. If you ask the Timestamp whether it is equal to the Date, it returns false, since the object being compared to the Timestamp is not a Timestamp.

Symmetry is important

  • a primary use of equals() methods is in containers (e.g. Sets or Maps)
  • various methods might invoke equals as a.equals(b) or b.equals(a)
  • a non-symmetric equals can produce confusing and hard-to-reproduce results

There is also the equals debate; should equals use instanceof or getClass() to check for compatible arguments?

If you use instanceof and override the equals method, you may break symmetry. If you use getClass(), it is impossible to define a subclass such that instances could be equal to an instance of the base class.

This leads to the conclusion that it is hard to design a subtype, and the equals issue only matters when implementing a class that is designed to be subclassed. Doing so is hard in general, even more so when you anticipate third parties extending the class. Specifying how equals should behave is only one of many tricky decisions you will have to make.

As Doug Lea put it more than 20 years ago, "the problem with defining equals as a class method is that there are too many senses of equals for one method to support, and the author of a class won't have all of them in mind".

We have, for instance, object equality (no two distinct objects are equal; this is the definition inherited from the Object class, and it's useful and sufficient more often than you would expect). We also have value equality (any two objects that represent the same value should be equal; for example a LinkedList and an ArrayList that represent the values 1, 2, 3 should be equal). And we have behavioural equality, where objects are equal if you can't distinguish them based on their behaviour.

Using getClass() generally implies behavioural equality. But behavioural equality is the most subtle, and to some, confusing or just wrong. How, for example, can B be a subclass of A if it's impossible for a B to be equal to an A? With getClass(), any extension of a class splits the equality relation, even an extension that merely adds some performance monitoring.

Hibernate uses getClass() equality a lot because it creates proxy classes for persistent object model classes. The use of getClass() causes errors, and there are lots of discussions about it in the Hibernate forums.

On the other hand, getClass() is sometimes used in an abstract base class to compare the common fields of the class. An example of such an implementation is java.security.BasicPermission.

If you use instanceof and override equals in a subclass

  • you must not change the semantics of equals
  • the only things you are allowed to do are to use a more efficient algorithm, or to instrument for debugging or performance monitoring

The moral is

  • document your equals semantics
  • if you use instanceof, consider declaring the equals method final (see the sketch after this list)
  • using getClass() avoids the finality of the equals method but limits the behaviours you can implement
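
For reference, a minimal sketch (a hypothetical Point class, my example) of the instanceof style; the class itself is final here, which also keeps equals effectively final:

public final class Point
{
  private final int x, y;

  public Point(int x, int y)
  {
    this.x = x;
    this.y = y;
  }

  @Override
  public boolean equals(Object o)
  {
    if (!(o instanceof Point)) // instanceof also returns false for null
      return false;
    Point p = (Point) o;
    return x == p.x && y == p.y;
  }

  @Override
  public int hashCode() // equal objects must produce equal hash codes
  {
    return 31 * x + y;
  }
}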

Most of the learning opportunities come from students, as they tend to be very good mistake generators. And more often than you would expect, the mistakes made by students produce detectors that find mistakes in production code (although often manifested in different ways and for different reasons).

What’s new for concurrency on the Java platform standard edition

All features discussed in this session by Brian Goetz will be available in Java 7. Brian started by saying that as of about 2003 we stopped seeing increases in CPU clock rate. But Moore's law still holds strong: we now get more cores per chip rather than faster cores. As a result many more programmers are becoming concurrent programmers (perhaps reluctantly). We should expect core counts to increase exponentially for at least the next ten years.

Hardware trends drive software trends. Hardware shapes language, library and framework design. Java has had support since day one, but it was mainly asynchrony, not concurrency (which was right for the hardware of the day). The revolution came with JDK 5, which offers coarse-grained concurrency.

Let's say we have a database query. Fine-grained parallelism will parse and analyse the query, plan a selection and try to find ways to minimise processing in the shortest time. CPU-intensive jobs are sped up by parallelism.

Select-maximum example (find the largest number in a list). In Java 5 we can use a thread pool and divide the array, then assign each division to a thread in the pool: put them in a collection of tasks, start the tasks and wait until they are complete, then iterate through all of them and pick the largest. But this is undesirable, because some of the sub-tasks might complete earlier than the others (in which case you have a processor that does not do any work). The bigger problem, though, is that the code can be ugly and clunky (as the find-maximum code is duplicated).

Solution: divide and conquer. Bring the problem down to tiny sub-problems, then solve the sub-problems and combine the results. The invoke-in-parallel step waits for both halves to complete before providing the combined result.

For fork-join, the RecursiveAction class in the fork-join framework is ideal for representing divide-and-conquer solutions. This can run on a single-CPU or a thousand-CPU system; it won't make any difference. Fork-join offers a portable way to express many parallel algorithms since the code is independent of the execution topology. But we still need to set the number of fork-join threads in the pool.
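
Here is a select-maximum sketch (my example, written against the fork-join API as it eventually shipped in Java 7; the jsr166y preview package presented in the session differs slightly in naming):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class MaxTask extends RecursiveTask<Long>
{
  private static final int THRESHOLD = 1000;
  private final long[] data;
  private final int from, to;

  public MaxTask(long[] data, int from, int to)
  {
    this.data = data;
    this.from = from;
    this.to = to;
  }

  protected Long compute()
  {
    if (to - from <= THRESHOLD)
    {
      // small enough: solve sequentially
      long max = Long.MIN_VALUE;
      for (int i = from; i < to; i++)
        max = Math.max(max, data[i]);
      return max;
    }
    int mid = (from + to) >>> 1;
    MaxTask left = new MaxTask(data, from, mid);
    left.fork();                                       // run the left half asynchronously
    long right = new MaxTask(data, mid, to).compute(); // right half in this thread
    return Math.max(left.join(), right);               // wait for the left half and combine
  }

  public static void main(String[] args)
  {
    long[] data = new long[1000000];
    for (int i = 0; i < data.length; i++)
      data[i] = (long) (Math.random() * 1000000000);
    System.out.println(new ForkJoinPool().invoke(new MaxTask(data, 0, data.length)));
  }
}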

Under the hood we are not using an ordinary thread pool, nor an Executor; we are looking to minimise contention and overhead. The technique fork-join uses is called work stealing. You have a pool of worker threads; each worker has its own double-ended work queue, pushes new tasks onto its own queue and pops them from the head. If its queue is empty, a worker tries to take a task from the tail of another worker's queue.

Stealing is infrequent, since workers put and take tasks in their queue in LIFO order and the size of work items gets smaller as the problem is divided. Therefore you get very nice load balancing with no central co-ordination: you start by giving the tasks to a central worker and it keeps dividing them; eventually all the workers have something to do, with minimal synchronisation cost.

Extend LinkedAsyncAction instead of RecursiveAction when a parent-child relationship needs to be managed and maintained; its finish methods mean "wait for all my children to be finished".

Fork-join can be used for several things like matrix operations, numerical integration, finite-element modelling, game playing (move generation, move evaluation, alpha-beta pruning) etc.

ParallelArray classes let you declaratively specify aggregate operations on data arrays. There are versions for primitives as well as objects (ParallelArray<T>). Coding select-maximum with ParallelArray is trivial; all you have to do is

ParallelLongArray pa…
pa.max();

The ParallelArray framework automates fork-join decomposition for operations on arrays, and it supports filtering and mapping. There are a few restrictions though: filtering has to precede mapping, and mapping has to precede aggregation.

Basic operations supported by ParallelArray are

  • filtering (selecting subsets of the elements)
  • mapping (convert selected elements to another form)
  • replacement (create a new PA derived from the original)
  • aggregation (combine all values into a single value)
  • application (perform an action for each selected element)

You can combine multiple ParallelArrays. ParallelArray has combiners for arithmetic operations like max, min etc.

There is also a connection with closures: we can specify the work to do with a closure passed to the methods of the ParallelArrays. With closures the API can be rewritten in terms of functional types instead of named types; Ops.Predicate<T> becomes {T => boolean}.

JUG community BOF; JUG leaders from around the world interact with Sun

Three JUG leaders discussed their concerns with people from Sun and other JUG leaders.

Frank Greco set out three main concerns

  • Java's buzz factor is declining. What is Sun doing to increase its sex appeal?
  • JEE and app servers are legacy. What is the next step for Java on the server that will scale with less complexity?
  • Jini/JavaSpaces are enjoying a resurgence in financial applications and with scale-hungry web 2.0. Why does Sun ignore these technologies?

Antonio Goncalves

  • Java is too complex and is becoming legacy. Many other, simpler languages run on the JVM. Is the JVM the only feature of Java?
  • The JCP should be seen as a community, not as an opaque ruling organisation. Open up the JCP.
  • More exposure should be given to the JUGs. What about a booth at JavaOne?

Paul Webber

  • New features vs backwards compatibility
  • Desktop vs web programming (complexity, tooling, APIs). What progress is being made with the Java media API?
  • JUG sustainability. What is important to the local JUG community?

There were several opinions and thoughts about these concerns. Sun replied that whatever a JUG leader needs in order to start a JUG and get it going can be found by asking for Sun's help or visiting the java.net web site.

Some answers to the questions above were

  • Java hype is not declining. If you look at J1, Javoxx, javaBin and J-Fall you will see that Java hype is far from declining.
  • Java 7 will address the issues with multi-core threading.

More questions and concerns raised

  • One issue we face is trying to educate people that Java has changed. Many people learned Java years ago, but it has changed since.
  • Which language should we be using? It's a matter of preference and of how each language solves a particular problem.
  • What is missing is intercommunication among JUG leaders.
  • We need to get the message across that Java is not slow, because people who do not know better consider it slow, and this has its drawbacks.
  • How does Java handle middle age? How do we increase the value of the Java community?
  • What can we do in order to make Java more attractive?

We know in general what developers are doing with the technology and have a view of how things are working around the world. If you move to South Asia or Africa, Java is used as a fits-all solution. Most companies that do not use Java (like Facebook) use the language they grew up with as students; if they don't use Java now, they never will.

Categories: Java, JavaOne

JavaOne day 1

6 May 2008 8 comments

Woke up early in the morning (7.00 am), ate something quickly and off to Moscone centre for the JavaOne registration. Good thing: Moscone centre is very close to where I am staying. Bad thing: most of the rooms are not theatrical, all seats are on the same level (although I have to admit that you have a clear view wherever you sit). Anyway, I got my pass, got my J1 bag and waited in the lobby until the CommunityOne general session started (around 9.30 am). So while I was waiting I thought I'd blog about it.

Moscone centre is huge, and when I say huge I really mean huge. There are three wings (north, south and west) and all of them are in different blocks. The J1 event takes place in the north and south wings. So far it's been very well organised: the staff are very polite and always eager to help, and there are signs and guides everywhere in the centre so at any given time you know where to find your room. It's full of security people constantly checking your pass, so I'd guess it's very tough to get in here without a valid one.

CommunityOne started at 9.45 am. Ian Murdock started by giving us information about J1. This is the second year that CommunityOne has taken place, and the overall J1 conference has 50% more attendees (a total of 15.000) than last year. The computing world has evolved a lot in the past decade and the next natural stop is open source.

Jonathan Schwartz was called up and talked about the role of the community and how it has changed the face of computing forever. He went on to say that the purpose of the community is to create markets and opportunities.

Ian started talking again, mentioning that developers and managers should try to learn new things and make connections. The best and most important thing that the community has to offer is innovation. A community, the Java community, is about people, about us. And when people are involved and get passionate, there are disagreements. We should take the best out of these disagreements and turn them into something positive. Sun has evolved a lot in the past years; it has progressed, made mistakes and learned from them.

Another integral part of the community is a common set of interests and how we progress from one application to the other: how do we go from Eclipse to NetBeans or from Linux to Solaris? The answer is open standards. Open standards have enabled us to move from closed to open, from proprietary to free.

Ian said that back in 1993 he saw people from all around the world, with different cultures and languages, getting together and talking about the same things. This was truly remarkable. But you had to get all the different pieces and assemble them together: the Linux kernel, Linux drivers, Linux applications etc. Therefore the idea was simple: get all these things, put them together and deliver them to the world. Have open source standards and trends; move from monolithic to fine-grained applications. This increases flexibility and competition, and increased competition lowers prices.

Linux distributions changed everything. The biggest innovation of Debian was the way development took place. It showed the community how to maintain and distribute the technology with the package installation system. Smaller independent developers could deliver their innovation to the market by using the Debian installer package.

The "wad of stuff" (speaking about Solaris) is a move from a monolithic to a modular architecture in OpenSolaris. Sun embraces the same model through the full product line: open source. They provide free and open source, tried and tested, production-ready solutions. You do not have to pay Sun anything; all is free. But if you need help to upgrade, scale or get support, then that's how they make money. It's a win-win situation.

And how does open source relate to the new computing world? What does it mean? Ian mentioned that it hides complexity, so developers can focus on the actual application and produce ready-to-market applications. OpenSolaris is a platform that enables developers to assemble the small pieces (IDE, compilers, drivers, tools etc.) they need in order to develop the application they want.

After that, Marten Mickos (head of Sun's database group), Jim Zemlin (Linux Foundation), Stormy Peters, Ted Leung, Jeremy Allison and Mike Evans went onto the stage and there was a conversation about the products they represent. Marten declared explicitly that MySQL is and will be open source software forever, with no exception to that rule.

Other interesting thoughts: the open source community needs all the developers, technical writers, testers etc. it can get. Originally open source asked people for either code contributions or cash; now it also asks for documentation and blogging, to make the community known. The only enemy of the community is obscurity. Committing code benefits everybody, and companies realised that early enough, so several of them assign people to work on open source projects full time.

A point made clear was about Samba. Samba is "just a bunch of guys"; the existence of a commercial Samba company does not break the open source model, and they refuse to have corporate contributions in their code base.

To a question about what a community needs in order to be successful, the answer was straightforward: you need a strong leader, you need a model for the community, and you have to evaluate the community. Open source developers are a different thing from the users. This takes an extra special step because you need someone to articulate the vision. Having a charismatic leader who can absorb all the feedback and make the right decisions helps a lot, because many of these projects are going to change the world.

At some point certain community members will be valued more than others. These decisions are based on the installation base and on their contribution and dedication to the community.

Sun expects to get excitement and participation from the community; that's all they ask for. You want the enterprises to use your software, and community participation can help with that.

Somebody asked if Google is a good or bad example in the open source community. The answer is that Google does open source on its own terms and delivers software through the web. The good thing about Google is that it does not try to own the particular open source projects it works on, and this is the right way to engage with the open source world.

To a question about the top three things Sun can learn from Apache and Python, the answer wasn't clear (or I wasn't able to understand it). Between the lines I understood that having the right person in the right place makes a big difference.

The panel closed by saying that there is tension between corporations and organic communities.

Richard Green was the next one on the stage. He said that at Sun everything they do is about the rock stars, and the rock stars are the community. They made bits of Solaris available to the community and that was a great start, but open source is not about the bits, it's about the whole picture. Before you go on, you have to think about how the model works and what amendments and changes you need to make to the model for the whole thing to work. They made a lot of progress with the help of many people around the world. They added new features and made the network a computer in order to support the whole ecosystem, so more people can contribute.

They announced the first fully supported release of OpenSolaris. It's the centre of gravity of a whole ecosystem and includes features such as iSCSI, ZFS, containers, FMA, virtualisation, DTrace, CIFS, Clearview, a hypervisor, the device detection tool, D-Light, IPS, LiveUSB, MySQL, Ruby, PHP, Apache, GNOME, and other OSS projects.

Then they demonstrated OpenSolaris from a live CD as well as DTrace (which was first introduced in Solaris 10 and has now spread to other projects as well, like Java, Ruby and Firefox 3).

Jim (CTO of the Solaris organisation) did a demo of a system running OpenSolaris with several hard disks plugged in. They literally smashed one of the hard drives (using an anvil and a hammer) and the system kept on as if nothing had happened. They destroyed a second hard drive and, again, the system kept on as if nothing had changed. OpenSolaris was able to identify the failed disks and continued using the rest of them. The failed disks were replaced on the spot with brand new ones: they were just plugged in, the operating system recognised them, used ZFS to replicate the missing data from the remaining disks onto the new ones and continued as normal. Seamless integration, and very useful if you depend on critical data.

Next, David Stewart (engineering manager, Intel Corporation) went on stage and talked about how they make sure that Intel chips are well suited to OpenSolaris.

Open source tools for optimising your development process

Many developers think that the goal of the software development team is to build software within time, scope and budget. But the real goal is to build the best possible application within the time and budget constraints. We need to build higher-quality, more flexible and more useful software that corresponds to what the users want.

The traditional approach results in poorly tested and inflexible (difficult to maintain and extend) code, difficult integration phases, and bad coding standards and programming habits. In most cases the documentation is also out of date, as developers tend to forget to update it when they update the project.

An improved approach is to use newer techniques such as better build scripts, better dependency management and good testing practices. Automating the build process and continuous integration always help, since code quality is checked automatically; we end up with a tighter issue-tracking system and, to some extent, automatic technical documentation.

Build scripts are the cornerstone of good software development, since they make builds reproducible and portable and they automate the building process. The two best-known tools for this job are Ant and Maven 2.

Ant has several advantages: it is known and widely used, powerful and flexible. On the other hand it needs loads of low-level code.

Maven 2 uses a declarative build script framework which allows the developer to describe the application (what we want to do) and Maven figures out how to do it. It offers higher-level scripting, strong use of standards and conventions, loads of plugins, "convention over configuration" and good reporting features. But Maven can be more rigid than Ant (if a project does weird or complicated things, Ant will be better).

Maven has a standard directory structure and standard life cycle (you start from declaring resources, then compile, then test-compile etc), has declarative transitive dependency management and good support for multi-module projects.

Maven also has better dependency management. It's very common for a Java application to need jar files and libraries in order to work. These jar files in turn need jar files themselves: for every library you use, it's more than likely that you need a whole set of other libraries.

The traditional approach is to store these jar files locally. If each project has its own jar files, it's very hard to keep track of which version each application is using. Duplication of jar files/libraries is also very likely, you might get errors due to incompatible jars, and you might overload the source code repository.

All these issues are solved by declarative dependency management: versioned jar files are stored on a central server, each project declares the versions of the jars it needs, and it gets the relevant jars from the central server.

Maven 2 has built-in declarative dependency management and rich public repositories, and if you want better performance you can install a local enterprise-level repository. Of course, what you can do with Maven you can also do with Ivy for Ant. Ivy provides Maven-style dependencies for Ant; it's a bit more powerful than Maven but also a bit more complicated to set up.

The cornerstone of development is unit testing. It ensures that the code behaves as expected and makes the code more flexible and easier to maintain. It also helps detect regressions early in the development life cycle and documents the code. The drawback is that you have more code to write and maintain, but in exchange you get more reliable code with fewer bugs which is also easier to maintain.

The latest version of JUnit, 4.4, provides many features that make writing tests easier and more productive: annotations, annotations for testing timeouts and exceptions, parameterised tests, and theories for better test coverage. There are a few differences between JUnit 3 and JUnit 4.

JUnit 3

  • you needed to extend the TestCase class
  • you use the setUp() and tearDown() methods
  • your method should start with “test”

JUnit 4

  • any class can contain tests
  • you have annotations like @Before, @After etc.
  • you mark tests with the @Test annotation
  • you can test timeouts and exceptions, and have parameterised tests

With JUnit 4.4 you also get Hamcrest asserts, which are a more readable way of writing assertions. Traditionally with JUnit you use the assertTrue or assertEquals methods, but with Hamcrest you use assertThat, making the code more readable. You also get more readable and informative error messages, and you can combine constraints by using the not() method.
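
A minimal JUnit 4.4 sketch (my example, using a plain List as the class under test) showing the annotations and Hamcrest asserts together:

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.not;
import static org.junit.Assert.assertThat;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class ListTest
{
  private List<String> list;

  @Before
  public void setUp()
  {
    list = new ArrayList<String>();
  }

  @Test
  public void addThenGet()
  {
    list.add("a");
    assertThat(list.get(0), is("a"));          // reads better than assertEquals
    assertThat(list.isEmpty(), is(not(true))); // constraints combined with not()
  }

  @Test(expected = IndexOutOfBoundsException.class)
  public void getFromEmptyListFails()
  {
    list.get(0);
  }

  @Test(timeout = 1000)
  public void addIsFast()
  {
    list.add("b");
  }
}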

There are also several test coverage tools that help you write better unit tests and show how much of your code is being executed. These tools can be integrated into the build process (like Cobertura, which can run with every build) or into the IDE (like EclEmma for Eclipse; NetBeans 6.1 already has a test coverage plugin integrated; there is also crap4j and many more). These tools are far more convenient for the developer than an HTML report.

Another technique developers can use is continuous integration, which needs to be done alongside testing. Continuous integration integrates and compiles code from different developers on a central build server. Having several developers commit code to a central server is the most common and widely used practice in modern programming; therefore continuous integration is a core best practice of modern software development.

In order to do continuous integration we need an automated build process (Ant, Maven, make files), an automated test process (JUnit, TestNG), a source code repository (CVS, SVN, StarTeam etc.) and a continuous integration build tool (CruiseControl, Continuum, Hudson, LuntBuild etc.). If there are any issues with the integration, the server will notify the developers (mainly via e-mail) of the problem. With continuous integration we get better and more flexible code because of

  • regular commits (at least once a day)
  • automatic builds and reporting
  • regular releases and more regular testing
  • fewer bugs
  • faster bug fixes
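
As a concrete sketch of the automated build-and-test step the CI server kicks off on every commit, an Ant target along these lines runs the whole JUnit suite (the directory layout and the test.classpath reference are assumptions, and junit.jar must be on Ant's classpath):

    <target name="test" depends="compile-tests">
        <mkdir dir="build/test-reports"/>
        <junit printsummary="yes" haltonfailure="yes">
            <classpath refid="test.classpath"/>
            <formatter type="xml"/>
            <!-- picks up every compiled *Test class -->
            <batchtest todir="build/test-reports">
                <fileset dir="build/test-classes" includes="**/*Test.class"/>
            </batchtest>
        </junit>
    </target>

The CI tool simply invokes this target after every checkout and mails the developers if it fails.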

Another way to improve code quality is to enforce coding standards. By enforcing coding standards we get better-quality code that is easier to maintain, and potential bugs become easier to spot. It is also easier to train new staff, since everyone follows the same guidelines.

Code quality can also be enforced by doing manual code reviews, although these tend not to be done very systematically and can be slow and time-consuming. Automatic code audits, on the other hand, are easier to perform and can run on a regular basis. There are several tools that help (an example of the kind of code they flag follows the list):

  • Checkstyle (coding standards: naming conventions, indentation, javadocs)
  • PMD (best practices: empty try/catch/finally blocks, null pointer checks, overly complex methods, etc.; a bit harder to use than Checkstyle but very informative)
  • FindBugs (potential defects: possible null pointer exceptions, fields that could be modified when they shouldn’t be, infinite loops, etc.)
  • Crap4j (overly complex and poorly tested classes; combines code complexity and test coverage metrics)
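
To illustrate the kind of thing these tools catch, both issues in this contrived example would be reported:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ConfigReader {

        private final Properties props = new Properties();

        public void readConfig(File file) {
            try {
                props.load(new FileInputStream(file));
            } catch (IOException e) {
                // PMD flags this empty catch block: the failure is silently swallowed
            }

            String adminUser = props.getProperty("admin.user"); // may return null
            if (adminUser.equals("admin")) { // FindBugs warns: possible NullPointerException
                System.out.println("running with admin settings");
            }
        }
    }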

Automated documentation can also help, as opposed to manual documentation, which is written once and then forgotten. Automatic documentation is complete, always up to date and cheap to produce, but it lacks “higher vision” (it tends to be a bit dry and not very usable). The simplest way to get automatic documentation going is to get developers to write Javadoc comments, for example as shown below.
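
A Javadoc comment like the following is picked up automatically every time the documentation is generated (the Order class and its method are invented for illustration):

    public class Order {

        private final long netPriceInCents;

        public Order(long netPriceInCents) {
            this.netPriceInCents = netPriceInCents;
        }

        /**
         * Calculates the gross price of the order, including VAT.
         *
         * @param vatRate the VAT rate as a fraction, e.g. 0.19 for 19%
         * @return the gross price in cents, rounded to the nearest cent
         */
        public long grossPriceInCents(double vatRate) {
            return Math.round(netPriceInCents * (1 + vatRate));
        }
    }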

Q&A

Are there any open source tools for testing web applications? There are Selenium, Canoo WebTest and HttpUnit. Selenium is very useful since it drives a real web browser.
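
For a flavour of that, a test with the Selenium RC Java client looks roughly like this (a sketch only: the host, port, URL and element locators are made up, and it assumes a Selenium RC server running locally):

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class LoginPageCheck {
        public static void main(String[] args) {
            // connects to a local Selenium RC server and launches Firefox
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://localhost:8080/");
            selenium.start();
            selenium.open("/login");
            selenium.type("username", "duke");  // "username" is a hypothetical field locator
            selenium.click("submit");           // "submit" is a hypothetical button locator
            selenium.waitForPageToLoad("30000");
            selenium.stop();
        }
    }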

Will JUnit 4 run JUnit 3 tests? Yes.

What about EJB testing? The problem is that you have to deploy the EJBs before you can run the tests. Write the business logic as POJOs first, so the functionality can be tested outside the container; alternatively, use tools like MockEJB.
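
A minimal sketch of the “POJOs first” idea (all names invented): keep the business logic in a plain class that the session bean only delegates to, so the logic can be unit tested in a plain JVM:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // plain class holding the business logic; no container needed to test it
    class DiscountCalculator {
        double discountFor(double orderTotal) {
            return orderTotal >= 1000 ? orderTotal * 0.05 : 0;
        }
    }

    // the session bean (not shown) would simply delegate to DiscountCalculator;
    // this unit test exercises the logic directly
    public class DiscountCalculatorTest {

        @Test
        public void ordersOfAThousandOrMoreGetFivePercent() {
            assertEquals(50.0, new DiscountCalculator().discountFor(1000.0), 0.001);
        }

        @Test
        public void smallOrdersGetNoDiscount() {
            assertEquals(0.0, new DiscountCalculator().discountFor(100.0), 0.001);
        }
    }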

How do you get new people to write unit tests? New people often don’t like unit testing; the best way to get them into it is to pair them with experienced programmers.

JavaFX - Mezzanine room 236

This was a session I didn’t have in my schedule. We got into Mezzanine room 236 and there were announcements and discussions about where JavaFX is going.

The session started by explaining what exactly JavaFX is: a RIA (Rich Internet Application) technology, a way to enhance the Java ecosystem and to expand the universe of people who work with Java. JavaFX’s aim is to extend what Java is and what it can do. JavaFX is Java everywhere. It’s happening: you can actually build Java applications that run anywhere and everywhere, be it the desktop, the browser or mobile devices. And it is still part of Java, built on Java and run on Java. The whole JavaFX investment is in Java.

With JavaFX one gets better browser and desktop support. To the question “how do we meet the needs of what people want?”, the answer was that systems will eventually be built in JavaFX using different modules, and they will therefore run literally anywhere a JVM is installed, since JavaFX is compiled into bytecode.

What about JavaFX on the Mac? Apple is very focused on not saying or doing anything until a product is released. Sun is having conversations with Apple at multiple levels, and those conversations are happening. Support for Java on the Mac is of the highest priority for Sun, but we have to bear in mind that, although Sun understands developers, this is not entirely in Sun’s hands.

To the question “What are the biggest risks with JavaFX right now?”, the answer was “delivery”. Sun has to hit the timeline and the deadlines it has set for JavaFX, and it is working hard on this, since it wants to deliver a JavaFX implementation that lives up to expectations.

With JavaFX Sun is trying to create a common solution for the billions of devices running Java: one common, unified solution that fits everywhere Java is found.

To the question “What developer problem are you solving with JavaFX?”, the answer was that JavaFX saves time when developing applications. It is easy to extend and, being a Java solution, you can actually reuse existing multimedia libraries and applications.

An interesting point was that Sun will deliver different layers of JavaFX on mobile phones. This means there will be different profiles (a profile can be thought of as a level of capability). This should not affect JavaFX applications, at least not most of them, but it will affect devices with weaker processing power: if, for example, you have a cheap phone that does not fully support the JavaFX capabilities, you won’t get the full potential of JavaFX.

JavaFX can be thought of as an engine that drives all the capabilities exposed to the Java runtime. It is different from Flex: with Flex you have to create a Flash component (so you need a Flash designer) and then call it from Flex, whereas with JavaFX you create and call the objects directly.

JUG Leaders – Think Globally, Act Locally

Due to the JavaFX session and some misleading information on the Java SunSPOT site I was a bit late for the JUG leader session, so I got there half an hour after it had started. Sorry I couldn’t make it on time, guys. As a result I didn’t even keep a log of what was said during the session.

NetBeans platform success stories

This was a session about the NetBeans platform and how it can be used to build RIAs.

The NetBeans platform is a modular system; modules are much easier to understand than a monolithic system full of spaghetti code. It is a platform for Java applications that is open source, written in pure Java (so you can reuse all the code you have written before), stable and mature (it has been around for a long time, more than seven years), and you can also call it an RCP (Rich Client Platform).

The platform offers several advantages, including a windowing system, built-in build scripts, declarative configuration (an XML file where you can include and exclude features), auto-update, the ability to reuse any IDE features, and modules that make common tasks easy (dialogs, file I/O, threading and progress notifications, support for custom projects/files, etc.).
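
As a taste of the declarative style, each module can also contribute UI to the platform through an XML layer file rather than code. A rough sketch that registers a menu item declaratively (the action class name is hypothetical):

    <filesystem>
        <folder name="Menu">
            <folder name="File">
                <!-- registers an action instance under the File menu;
                     com.example.myapp.OpenReportAction is an invented class -->
                <file name="com-example-myapp-OpenReportAction.instance"/>
            </folder>
        </folder>
    </filesystem>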

Then a nice demo of creating a simple platform application in NetBeans followed.

Another demo of an RCP application followed.

Another demo followed, this time the Blue Marine application, which, I have to admit, was quite impressive. The overall look and feel reminded me of the Azureus Vuze layer: same colors, same effects, etc., but this one was written on the NetBeans platform.

Blue Marine started in 2003 as a Swing application, but in 2005 it was completely rewritten from scratch on the NetBeans platform. In this demo application one could choose several photos from a library and manipulate them (brighten, scale, etc.). Then these photos could be added as pins on a global map, so you could actually visit places and pin down the photos you had taken there. As I said, this was a very impressive demo and it shows the full potential and capabilities of the NetBeans platform. It also reminded me a lot of the Parleys demo I saw last year at javoxx.

If you want to learn more about the platform you can visit the following resources

  • platform.netbeans.org
  • Rich client programming book
  • Fabrizio’s JavaOne presentation (TS 5483)
  • Tom’s JavaOne presentation (TS 5541)

If you are looking for training, professional training by the NetBeans experts is available. It covers

  • developing on the NetBeans platform from the ground up
  • several levels of NetBeans Platform Certification
  • community trainings around the world
  • a customised version of the course, also available through the Sun Learning Centre
For more information visit http://edu.netbeans.org, or send an e-mail to users@edu.netbeans.org if you want to become a certified NetBeans engineer.

Q&A

JGoodies with the NetBeans platform – the platform integrates well with JGoodies, although the speakers have not used all of JGoodies’ features.

NetBeans IDE, lightning talks. Cool stuff and more with James Gosling

This was a session with James Gosling and guests, each guest talking for approximately ten minutes about the NetBeans platform.

Adam Myatt is the author of the Pro NetBeans IDE 6 book. He said that his favourite NetBeans features are the out-of-the-box functionality and the NetBeans profiler, and how easy it is to measure and profile an application.

Dr B.V. Kumar is the author of Delivering SOA using the Java Enterprise Edition Platform (with co-authors Prakash Narayan and Tony Ng). He likes the ease with which you can create SOA services and clients in NetBeans and hook them up to different, disparate services.

Bruno Souza and Tom (sorry, I didn’t catch the full name) spoke about Sun’s NetBeans development programme. Sun gives money to six communities to do open source projects. The total amount given is $175,000, divided among twenty different projects (ten big and ten small). They have hundreds of submissions to go through to choose the best, and only twenty make it in the end. In order to get the money, the winners have to finish and deliver their projects. You can visit netbeans.org/grant for the winners.

Chris Palmer from Oracle developed Learning 360, described as “ERP for education”, which is essentially a NetBeans learning application that uses the Visual Library. It can support more than 100,000 concurrent users, mainly tutors, students and parents, and it provides the same sort of environment for students and teachers alike. After five minutes Chris showed us a demo of the application, and he said he chose the Visual Library because they were running out of time and it gave them a stable container for all the things they needed to do (dragging, drawing, etc.).

Mark from dotFX Inc talked about secure rich internet applications based on Java. Their software enables deployment of secure RIAs using ordinary Java. By using transparent runtime services they have managed to solve long-standing software problems such as changes in the software life cycle (versioning/update problems, vendor lock-in, etc.). dotFX comes as a free NetBeans plugin.

When you run an application using dotFX it actually runs in a fully functional sandbox, and the user doesn’t have to do anything.

Categories: Java, JavaOne

JavaOne warm up

5 May 2008 Leave a comment

With less than a day to go before JavaOne I went to a small warm-up party that the Netherlands JUG threw at the Carnelian Room. I met many new interesting people and saw old acquaintances: Aaron Houston, the JUG coordinator from Sun; Stephan Jansen, the man behind javoxx (former JavaPolis) and BeJUG (the Belgian JUG); Kirk Peperdine; and Klaasjan Tukker, JUG leader of the Netherlands JUG.

Many topics were discussed over a few beers: GlassFish, RESTful web services, Java on the Mac, SunSPOTs and many more. JavaOne has certainly started off nicely, and it seems it will be a huge success if you take into account how much effort Sun has put into it

(yes, it’s an ad on a bus stop).

Categories: Java, JavaOne