
Devoxx – Day 4

11 December 2008

The day started with a second keynote in room 8. Stephan Janssen talked for a few minutes and asked everyone who hadn’t voted yet to vote on the whiteboards for Java 7 features. Then Joshua Bloch started his talk by showing some optical illusions, and he said that, like optical illusions, things in Java are sometimes not what they seem to be. He then explained what is new in his Effective Java book.

What’s new in Effective Java

  • chapter 5: generics
  • chapter 6: enums and annotations
  • one or more changes on all other java 5 language features
  • threads chapter renamed concurrency

Generics are invariant; this means that a List<String> is not a subtype of a List<Object>. It’s good for compile-time safety, but inflexible. That’s why they added wildcards. It’s easy to use wildcards if you remember PECS – Producer Extends, Consumer Super:

  • For a T producer, use Foo<? extends T>
  • For a T consumer, use Foo<? super T>

This only applies to input parameters. Don’t use wildcards for return types. Of course there are rare exceptions to this rule.
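The PECS rule can be sketched with the classic copy method (a minimal example of mine in the spirit of the talk, not code from the slides):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class PecsDemo {
    // src only *produces* T values for us, so "? extends T";
    // dst only *consumes* T values, so "? super T".
    public static <T> void copy(Collection<? extends T> src, Collection<? super T> dst) {
        for (T t : src) {
            dst.add(t);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        List<Number> nums = new ArrayList<Number>();
        copy(ints, nums);          // a List<Integer> produces, a List<Number> consumes
        System.out.println(nums);  // [1, 2, 3]
    }
}
```

Note that with invariant parameters (Collection<T>, Collection<T>) the call above would not compile, since List<Integer> and List<Number> can never match the same T.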

For the rest of his talk Joshua went through some examples from his Effective Java book and explained the gotchas of the seemingly easy-to-understand code.

Next up was Mark Reinhold, who talked about the modular Java platform. He started by explaining why a “Hello World” programme in Python starts faster than a “Hello World” programme in Java. The answer is simple: Java needs to load 332 classes (it needs to resolve all references, do the verification, etc.) in order to run “Hello World”. Modularising the JDK will force the separate components to identify themselves and reduce the number of classes that need to be loaded, thus reducing loading and run time.

Then he talked about JSR 294 (and also mentioned that JSR 277 is not dead, it’s just on hold). The rest of the time was spent talking about Project Jigsaw.

The requirements of a platform module system

  • integrate with the JVM
  • integrate with the language
  • integrate with native packaging
  • support multi-module packages
  • support “friend” modules

New features from Sun and other parties will be added to the JDK.

Big features from Sun

  • JSR 294 + jigsaw
  • JSR 292 (VM support for dynamic languages)
  • JSR 203 (more new IO APIs)
  • JSR TBD: small language changes.
  • forward-port 6u10 features
  • java kernel, quickstarter, new plug-in, etc
  • safe re-throw
  • null-dereference expressions (this one he thinks is already in)

Small features from Sun

  • SCTP (Stream Control Transmission Protocol)
  • Sockets Direct Protocol
  • upgrade class-loader architecture
  • method to close a URLClassLoader
  • Unicode 5.0 support
  • XRender pipeline for Java 2D
  • Swing updates
  • JXLayer, DatePicker, CSS styling – maybe

Fast features from Sun

  • Yet more HotSpot run-time compiler enhancements
  • G1 garbage collector
  • compressed-pointer 64-bit VM
  • MVM-lite – maybe (MVM – Multiple Virtual Machines)

Features from others:

  • JSR 308: annotations on Java types (allows you to put annotations in more places than today) (see photo for example)
    • Prof. Michael Ernst, Mahmood Ali
  • concurrency and collections updates
    • Doug Lea, Josh Bloch et al
    • Fork/Join framework
    • Phasers – generalized barriers
    • LinkedTransferQueue – generalized queue
    • ConcurrentReferenceHashMap
    • Fences – fine-grained read/write ordering

Features not in 7 (at least some of them)

  • closures
  • other language features
    • reified generic types
    • operator overloading
    • BigDecimal syntax
    • First-class properties
  • JSR 295: beans binding

JDK 7 will be released in early 2010.

Towards a Dynamic VM by Brian Goetz and Alex Buckley

The talk started with Brian Goetz explaining what a virtual machine is: a software implementation of a specific computer architecture. This computer architecture could be a real hardware architecture or a fictitious one.

There are several system virtual machines that emulate a complete computer system (VMware, VirtualBox, Virtual PC, Parallels).

Virtual machines isolate the hosted application from the host system (a virtual machine appears as an ordinary process in the host system)

Virtual machines isolate the host system from the hosted application (a virtual machine acts as an intermediary between hosted application and host system)

Virtual machines provide a higher level of abstraction

  • sensible layer for portability across underlying platforms
  • abstracts away low-level architectural considerations
    • size of register set, hardware word size
  • 1990s buzzword: ANDF

Nowadays virtual machines win as compilation targets

  • Today it is silly for a compiler to target actual hardware
    • much more effective to target a VM
    • writing a native compiler is a lot more work
  • languages need runtime support
    • C runtime is tiny and portable (and wimpy)
    • more sophisticated language runtimes need
      • memory management
      • security
      • reflection
      • tools

If a virtual machine doesn’t provide the features you need, you have to either write them yourself or do without them. If the virtual machine does provide them, you will use them, which is less work for you and makes your programming language better (e.g. GC makes programming easier and better).

Targeting existing virtual machines also reuses libraries and tools: debuggers, IDEs, profilers, management tools, etc.

Virtual machine-based facilities become common across languages (Java code can call JRuby code; Java objects and Jython objects are garbage-collected together).

The best reason to target a virtual machine as a compilation target is the HotSpot JIT compiler. A compiler can generate bytecode and feed it to HotSpot. The dynamic compiler can do a lot of optimisations that are hard for a standard compiler, since it has access to loads of information that isn’t available to standard compilers. A dynamic compiler can use adaptive and speculative techniques (compile optimistically, deoptimise when proven necessary). Targeting a VM allows compilers to generate “dumb” code and let the dynamic compiler optimise it (the VM will optimise it better at runtime anyway).

There are loads of VMs out there (Java VM, .NET CLR, Smalltalk, Perl, Python, YARV, Valgrind, Lua, Dalvik, Flash, Zend, etc.). There are so many because each one was designed to solve a specific problem.

You have to make loads of choices when you design a VM: the instruction set, where you store data (stacks, like Java does, or registers), what data types you care about, “is everything an object”, choices about the instruction format, what kinds of instructions to include (primitives, implementation flexibility), the object model (class-based like Java or object-based like JavaScript), strongly typed or weakly typed, whether you trust your compilers (there is loads of bytecode that the JVM will accept but that would never be produced by the JDK), how errors are handled, and whether native code can be called.

JVM architecture

  • stack-based programme representation and execution
  • core instructions
  • data types: objects, arrays, eight primitive types
  • object model: single inheritance with interfaces
  • dynamic linking
    • untrusted code from the web motivates static typechecking (at load time)
    • symbolic resolution happens dynamically (base classes are not fragile in the JVM)

We see some common patterns in a JVM, like objects, signed integers, single inheritance, static typechecking etc. These features can actually form a VM for many programming languages, many of them unknown to most people (Phobos, Piccola, SALSA, ObjectScript, FScript, Anvil, Smalltalk etc). Early on (1997) the JVM specification stated that the JVM does not know anything about the Java programming language, only about the bytecode.

Some features are easy to implement on a universal VM (like checked exceptions in Java) but some others are very difficult to implement effectively (open classes in Ruby, alternate numeric towers à la Scheme).

JSR 292 is often called the “invokedynamic” JSR because it originally proposed a specific bytecode for method invocation, but the scope has widened since then. The work currently going into JSR 292 includes the invokedynamic bytecode (allows the language runtime to work hand in hand with the JVM on method selection), method handles (many languages have constructs like closures; classes are too heavy as containers for a single block of code), and interface injection (add new methods and types to existing classes).

Virtual method invocation in Java

  • the only dynamism in the method invocation is for the receiver
    • different implementations of size() for ArrayList vs LinkedList
    • this is called single dispatch. Java’s method selection algorithm doesn’t (and can’t) consider the runtime types of arguments given
    • invokevirtual Foo.bar: (int)int
  • the JVM looks for bar: (int)int in the class of the receiver (the receiver is referenced from the stack)
  • if the receiver’s class doesn’t have this method, the JVM recurses up to its superclass…
  • repeated recursive method lookup makes invocation slow
    • fortunately, this can often be heavily optimized
  • devirtualize monomorphic methods
    • if the VM can prove there is only one target method body, then invocation turns into a single jump
    • can then inline the method call, avoiding invocation overhead
    • bigger basic blocks enable further optimizations
  • inline caching
    • figure out the most likely receiver type for a call site and cache it
    • optimizes for the most likely case(s)
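The single-dispatch point above can be sketched with a small example of my own (not from the slides): the receiver’s runtime type is consulted at invocation time, while overload resolution on arguments happens statically.

```java
public class SingleDispatch {
    public static class Shape { }
    public static class Circle extends Shape { }

    public static String describe(Shape s)  { return "shape"; }
    public static String describe(Circle c) { return "circle"; }

    public static void main(String[] args) {
        // The receiver is dispatched dynamically: the runtime type wins.
        Object o = "hi";
        System.out.println(o.toString()); // String.toString runs, not Object's

        // Arguments are not: the overload is picked from the *static* type,
        // so describe(Shape) is chosen even though s is really a Circle.
        Shape s = new Circle();
        System.out.println(describe(s)); // prints "shape"
    }
}
```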

 

But compiling dynamic languages directly to the JVM is tricky. Many dynamic languages have no receiver type and no static argument types, and maybe the return type isn’t even fixed (in the code below it might be the type of x or the type of y).

function max(x, y)
{
   if x > y then x else y;
}

Dynamically typed method invocation

  • Dynamic is a magic type
  • no such type in the JVM today
  • but if the JVM had Dynamic, invokeinterface would be almost flexible enough

How can a language runtime manage dynamic invocation?

  • creative solutions have been proposed
    • could define an interface for each possible method signature
      • complex, fragile, expensive
    • could use reflection for everything
      • use the “inline caching” trick to cache method objects for specific combinations of argument types
      • but heavyweight and slow if you use it for every method call
  • it’s easy to conclude “the JVM isn’t a match for dynamic languages”

A little help goes a long way

  • it turns out that the static type checking is closer to the surface than it first appears
  • the big need: first-class, language-specific method resolution
    • so the language can identify the call target
    • but then get out of the VM’s way
  • this is the rationale behind invokedynamic

 
The first time the JVM sees an invokedynamic instruction it calls a bootstrap method which does all the work. The bootstrap method chooses the ultimate method to be called. The VM associates that method with the invokedynamic instruction. The next time the JVM sees the instruction, it jumps to the previously chosen method immediately.
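JSR 292 was still in flux at the time of this talk; in the java.lang.invoke API that eventually shipped with Java 7, a bootstrap method looks roughly like the sketch below. The linkage is simulated by hand here, since a real invokedynamic instruction has to be emitted by a compiler.

```java
import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class IndyDemo {
    // The JVM calls the bootstrap method once per invokedynamic site;
    // the CallSite it returns stays linked to that instruction.
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name,
                                     MethodType type) throws Exception {
        MethodHandle target = lookup.findStatic(IndyDemo.class, name, type);
        return new ConstantCallSite(target);
    }

    public static int plus(int a, int b) { return a + b; }

    public static void main(String[] args) throws Throwable {
        // Simulate what the JVM does on first execution of the instruction:
        CallSite site = bootstrap(MethodHandles.lookup(), "plus",
                MethodType.methodType(int.class, int.class, int.class));
        int r = (int) site.dynamicInvoker().invokeExact(2, 3);
        System.out.println(r); // 5
    }
}
```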

Putting it all together

  • JVM method invocation is still statically typed
  • the ultimate method invoked is arbitrary
    • depends on the language rules
    • could even have a different name than in the instruction

Method handles are composable

  • an adapter method handle takes another method handle and executes code before and after invoking it.
  • endless applications!
    • coercing types of individual arguments
    • java.lang.String -> org.jruby.RubyString (different encoding)
    • boxing all arguments into an array
    • pre-allocating stack frames
    • preparing thread-specific context
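As a sketch of the adapter idea (again using the java.lang.invoke API that shipped later with Java 7; the example is mine, not from the talk): MethodHandles.filterArguments wraps one handle so that extra code runs on an argument before the underlying handle sees it.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class AdapterDemo {
    public static String greet(String name) { return "Hello, " + name; }
    public static String upper(String s)    { return s.toUpperCase(); }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle greet = lookup.findStatic(AdapterDemo.class, "greet",
                MethodType.methodType(String.class, String.class));
        MethodHandle upper = lookup.findStatic(AdapterDemo.class, "upper",
                MethodType.methodType(String.class, String.class));

        // Adapter: run `upper` on argument 0 before `greet` sees it --
        // the "execute code before invoking it" pattern from the talk.
        MethodHandle adapted = MethodHandles.filterArguments(greet, 0, upper);
        System.out.println((String) adapted.invokeExact("devoxx")); // Hello, DEVOXX
    }
}
```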

Interface injection

  • dynamically typed programs look like self-modifying code
  • generally, self-modifying code is dangerous and hard to optimize
  • idea: don’t restructure classes, just relabel them
  • interface injection: the ability to modify old classes just enough for them to implement new interfaces
    • superinterfaces are cheap for JVM objects
    • invokeinterface is fast these days
  • if an interface-dependent operation is about to fail, call a static injector method to bind an interface to the object and provide MethodHandles for the interface’s methods.
    • one chance only for the injector to say yes!

Don’t do it! – Common Performance Antipatterns by Alois Reitbauer

In this session the room was literally full. There were people sitting in the corridor and on the floor near the speaker. Unfortunately I didn’t find a place to sit and I was standing (at some point I sat down), therefore I didn’t take any notes. Things I remember from the session:

Don’t do premature optimization, even if it’s very tempting. Never do it. Don’t take care of performance at an early stage; only do it at a later stage.

Good programmers write good-performing code. There might be functional bugs, which is acceptable, but there shouldn’t be many performance problems.

Performance management is difficult because it’s difficult to find performance problems. Performance is a moving target: it works today but it might not work tomorrow. Even testing does not prevent you from having performance problems.

How do we test this stuff?? by Frank Cohen

I met Frank at the jug leaders and speakers dinner on Tuesday and I really wanted to see his talk. 

Frank talked about his PushToTest tool, which is an open source tool that helps you test web services and web applications. With PushToTest you can surface issues quickly, you can create automated functional tests, you can have SLA compliance monitoring, and it provides an integrated environment.

The reasons behind having an integrated environment are simple

  • organizations require test and operational management
    • Ajax commercial testing tools are not keeping up
    • where is the test tool for GWT, YUI, Dojo, Appcelerator?
  • organisations benefit from integrating test and operational management
    • repurpose tests among developers, QA, ops
  • makes build + test-first possible
    • very agile, very rapid, very inexpensive

Then a demo followed with some screenshots and a walkthrough of some source code.

Preventing bugs with pluggable type checking for Java by Mahmood Ali

Some notes I wrote down:

benefits of type qualifiers

  • improve documentation
  • find bugs in programmes
  • guarantee the absence of errors

Checkers:

  • @NonNull: null dereference
  • @Interned: incorrect equality tests
  • @ReadOnly: incorrect mutation and side-effects
  • Many other simple checkers
    • security, encryption, access control
    • format/encoding, SQL
    • checkers designed as compiler plugins and use familiar error messages
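To make the idea concrete, here is a hypothetical sketch of how such a qualifier is used (the @NonNull annotation is declared inline so the example is self-contained; the real qualifiers ship with the JSR 308 checkers distribution, and the checking is done by a javac plugin at compile time, not at runtime):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NonNullDemo {
    // Stand-in qualifier for illustration; the real one comes with
    // the JSR 308 / Checker Framework distribution.
    @Target({ElementType.TYPE_USE, ElementType.PARAMETER})
    @Retention(RetentionPolicy.CLASS)
    public @interface NonNull { }

    // With the nullness checker plugged into javac, passing a possibly-null
    // value here would be a *compile-time* error instead of a runtime NPE.
    public static int length(@NonNull String s) {
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("devoxx")); // 6
    }
}
```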

 
A nullness and mutation demo followed

Checkers are featureful:

  • full type systems: assignment, overriding
  • polymorphic (Java generics)
  • flow-sensitive type qualifier inference

Checkers are effective

  • scales to > 200,000 LOC
  • each checker found errors in each code base it ran on

Checkers are usable:

  • tool support: javac, Ant, Eclipse, NetBeans
  • not too verbose
    • @NonNull: 1 annotation per 75 lines
    • @Interned: 124 annotations in 220 KLOC revealed 11 errors
  • fewer annotations in new code
  • inference tools: nullness, mutability

Another demo followed at this point.

Summary

  • pluggable type checkers
    • featureful, effective and usable
  • programmers can
    • find and prevent bugs
    • obtain guarantees that a program is free of certain errors
    • create custom qualifiers and type checkers

 
If you want to learn more have a look here: http://pag.csail.mit.edu/jsr308


Devoxx – Day 3

10 December 2008

First day of the conference today; the keynote started at around 9.45. The whole room was packed and a guy called roxorloops did some beatboxing to entertain the attendees. Quite impressive, I’d say.

Next Stephan Janssen welcomed everyone and announced a few facts and updates about Devoxx. This is the first edition of Devoxx (after it was renamed from JavaPolis), and it is sold out with 3,200 attendees (from 35 countries all over the world), 160 speakers and more rooms (six rooms this time, as opposed to five last year). There are 40 partners on the exhibition floor and 40 affiliated Java User Groups, and 400 students also had the opportunity to come in for free on the first two days of the university.

Stephan said a BIG thank you to the Devoxx programme committee and to the Devoxx administration team. As a side note he asked us to be gentle with the venue and take care of it: do not litter it, do not abuse it, and keep the place clean and quiet. Also be gentle with the wireless network and do not download big files (today, for some reason, the network is very fast compared with previous years and the first two days. Well done). Stephan also announced that this afternoon they will be serving free beer (hopefully not as strong as the one I had at the jug leaders and speakers dinner last night, 10% alcohol!) and free fries.

He also said that people complain that the Belgian JUG, apart from Devoxx, is not organising anything else. Stephan explained that he didn’t have enough time, since the preparations for Devoxx start 8 to 9 months before December. But he has quit his current job and has more time to dedicate now. As a result, BeJUG will organise bi-weekly evening sessions for a maximum of 150 people throughout Belgium. That is 18 meetings in total (no meetings during December and the Summer holidays). For more info visit the BeJUG site.

After Stephan stepped down, Danny Coward from Sun Microsystems stepped up and talked about JavaFX. He also announced that Sun is dedicated to always shipping final production software only.

Next he talked about the 10 things we need to know about JavaFX

1) JavaFX is here; it was released on the 4th of December. Sun released a preview of the SDK in July this year. The JavaFX SDK includes the runtime for the desktop and the emulator, which allows you to deploy to the desktop, the browser or mobile phones. NetBeans 6.5 has support for JavaFX. He also mentioned the JavaFX production suite, which is a collection of tools that lets the JavaFX developer work with graphics to create RIAs. JavaFX also ships with 75+ sample applications.

2) JavaFX defines a cool new language. Why do we need a new language? Languages are evolving rapidly. People at Sun learn from their experience, and they put in the best features of the languages they have worked with so far. JavaFX Script was purpose-built with only RIAs in mind, nothing else. It’s declarative, has a Java-like syntax, and supports data binding and event triggers.

3) JavaFX supports beautiful graphics. It takes advantage of the Java layer for graphics. JavaFX provides support for graphics acceleration, the JavaFX scene graph, animations and lighting. At this point Richard Bair presented a demo of a video puzzle: the video was playing, it became a jigsaw, and they had to put the pieces together. After this small video they explained the relevant source code.

4) JavaFX has a rich API set. They have created a simple-to-use JavaFX script. The API also supports the scene graph, media, web services (RESTful) and … any Java API.

5) Great developer tools. The NetBeans plugin for JavaFX includes first-class projects, JavaFX Script editing, code completion, compile and save, debugging, graphics preview, integrated documentation, and deployment to desktop/browser/mobile.

6) JavaFX integrates with graphics design tools. The JavaFX production suite includes tools for the developer/designer workflow: export designs from Adobe tools, then import and integrate them into JavaFX. Then they showed a demo of the JavaFX production suite.

7) JavaFX runs on multiple devices. Then one more demo followed, of a JavaFX application running on a mobile phone.

8) It is built on Java. Great advantage since you can rely on 13 years of JVM implementation. You can rely on this robustness and scalability of the underlying technology.

9) Encode once, play anywhere media. Developers have been asking for years for better media support. They support the native media frameworks (Mac native and Windows native). They added a new cross-platform format (FXM), which means that if they use this format the media will play on any JavaFX-enabled device. Another demo by Joshua Marinacci followed, the Fox Box, which was basically a movie website with several movies playing at the same time; Joshua could play with the video properties, and a video could be dragged outside the browser to watch it as a standalone application.

10) JavaFX deploys itself. Anywhere there is a JRE, the JavaFX runtime will deploy. The JRE is installed on 9 out of 10 new PCs, with more than 30-50 million downloads per month. The full JavaFX mobile release will be in March.

At this point there was another break with the beatboxing guy again doing some amazing sounds.

Next were Bart Donn, Christophe De Marlie and Robin Mulkers from IBM. They talked about RFID @ Devoxx 2008 (I think this is the same technology used at JavaOne last year).

RFID is a new project installed at Devoxx. But why do we need a project during Devoxx? Instead of giving out goodies during Devoxx they decided to spend this money to start a project and benefit everyone. The partners of this project are IBM, Intermec and SkillTeam.

Then they showed the following video that gives an introduction to the RFID concept. Nice ad.

The rest of the talk was spent by talking about the RFID technology and how IBM has developed it.

From Concurrent to Parallel (by Brian Goetz)

This was actually the same talk Brian Goetz gave at JavaOne in May. I won’t go into details since I have written about that in this post.

Effective pairing: the good, the bad and the ugly (by Dave Nicolette)

This was an interactive session again, where people paired in front of the audience and played out different pair-programming scenarios. The session talked about pair programming, its problems, and how we can overcome them.

We started with a pair-programming scenario where one person impersonated a senior developer and the other a junior developer. The problem demonstrated was that the senior developer didn’t want to let the junior do anything: the senior always had the upper hand and was always picking on the junior guy.

Teams are most effective when everyone can learn about the technologies and the problem. If the senior guy just holds the keyboard and does his own stuff, it’s not a good thing. Junior developers learn by typing and practising. The junior should do the typing and the senior the driving. But the senior thinks that sometimes the project goes a little slow and wants to take over things. The slowing of the project is a normal thing, though, if you want the junior developers to learn. You lose a bit in project time but you gain later in the project. The tip is to have the less experienced person on the keyboard.

Another scenario is the soloist: people who want to do things themselves because they think they know the problem, and thus the solution, and can work better on their own. Everyone else in the team then has difficulty learning the problem they’re dealing with. The solution is for everyone to know the problem, know how the system works and know how to deal with it. This is the bus and team problem: if the lead developer is hit by a bus, how many people can take over the project? Everyone in the team should try to have equal knowledge of the problem and the system in use.

Another scenario: one doesn’t follow the other while they’re discussing ideas. This can be because one has far more knowledge than the other, or because one always changes his mind about ideas and software patterns. Problems arise because one might feel stupid. Another problem is that when someone always changes his mind, they might step away from the problem the customer needs solved. People need to learn how to work with different types of personalities. In pair programming one should make the other stay in touch with the original problem, and both of them can cancel out each other’s problems and/or expose their abilities. Also, we should put emphasis on the simplest design (this is agile development). Sometimes when we have many ideas and we change our mind all the time, we make the solution more complex than necessary.

Fourth scenario: one of the persons in the pair has additional responsibilities, which can make pairing difficult because he can be interrupted all the time. The team should be dedicated to the project; it should not be interrupted because one of the members of the team is assigned to other things. This usually happens when the testers are also the business analysts. There is another form of interruption too: when someone outside the team comes in and starts talking about things that are not related to the project. This disrupts the pair when they try to work.

Pairing is really a kind of disciplined art. It’s not just sitting there talking with your friends. It really is work.

Fifth scenario: physical working conditions in the team room. Pairing is usually done in an agile manner; the team is located in the same room, and pairing where the team is spread across different locations is not a good idea. Or consider the way the office is laid out: desks and chairs for pairing might be laid out correctly, but there might only be one monitor. In that case the pair is losing time because the second person cannot follow the code. A solution is to ask the manager to buy more monitors (monitors are less expensive than people). As a logical conclusion, the working environment should be set up so that it is easy for people to pair.

Scenario six: how to maintain the system and fix the bugs if the original application was not developed in an agile manner. If the application was developed using agile methods there are probably test suites. The pair can check out the application and its tests (the first step to fixing the bug). Then they have to reproduce the bug. If the bar is green, it means that someone forgot to write the tests, or that someone put code into production without testing it. They can use the same techniques that the development team used. But if the bug refers to some kind of legacy application where there are no test cases, the approach is different. In this situation you have to send someone in who knows the system and can fix the bug. This is not really a pairing scenario, but it is good to mention in the pairing context.

Scenario seven: the Fearful Freddie, someone who’s afraid to change the code or can’t be bothered (too much of a hassle). This comes from old legacy systems where there were no test cases, and if you changed something you had most certainly broken something else as well. Now things have changed. Even if you break something you can always reverse it by using the version control system. You don’t have to be afraid to change things. It’s better to change small things at a time rather than do one big change. Like bill payments: you don’t pay the bills all at once but in small installments. Don’t let the complexity of the code build up over time, because then you have problems maintaining the code and fixing it, and you shorten the application’s life. Because of the complexity of the code you actually need to implement a new one. And all this because people don’t want to modify the code and are afraid to touch it.

Eighth scenario: the disengaged. One person does the work and the other person is disengaged. The engaged person tries to get the other person interested in the code they are working on. In this situation you have to remove the option from the other guy: just put the keyboard in front of him and ask him to do the job. What if the partners decide to do a major refactoring that will take 20 minutes and only one can use the keyboard? You don’t both have to use the keyboard; you just have to put your mind into the work. Only one can type at a time, but both of them can think. What if they want to do refactoring but they both have different ideas of how to refactor? A good idea is to ask the other team members for their opinions. You might disrupt them a bit but the benefits you gain are greater.

Ninth scenario: the stubborn pair, when both want to follow their own ideas and won’t change their mind. Another aspect of agile development is self-organisation. Every time there is a little dispute in the team you cannot run up to the manager to solve it, because pretty soon the manager is going to take control. That might not be desirable; it might not be what you want. You take it to the manager and one person wins and the other loses (or both lose). The best thing to do when you have a problem like that is to take a break and clear your mind. A second solution is to change partners. Don’t let things become personal.

The Siamese twins scenario. Part of pair programming is that you change pairs (the original authors of pair programming called it promiscuous pairing). Different pairs work differently: when one pair is finished, another pair may still be working, so they have to take advantage of the spare time. There is the Pomodoro technique: a pair works together for a specific period of time and then they stop, take a break, and start another time period with different partners.

The Siamese twins scenario kicks in when two people are really engaged in the story they’re working on and they don’t want to separate. They work together well and they don’t want to switch. People should be able to switch; if they can’t, they are probably stuck and need a fresh pair of eyes to look at the problem. Sometimes people know they have a problem, they know that the project is falling behind, but they don’t want to give up. The manager should not ask “how much longer will it take to solve the problem”, but assign new people to look into the problem.

The ping-pong scenario. One person writes the unit test and the other person writes the code to make the test pass. The person who writes the test leaves the design to the other guy; they are not talking about design, he just pushes the burden of design onto the other person. This approach encourages solo programming, but sometimes you can do it for a while in order to make the experience of programming a little bit different.

Q&A:

What’s the best way to learn pair programming? If you’ve never done it before, get mentors: outside people who have done it before.

Should people pair all the time? No, there are some tasks that don’t really benefit from it. Sometimes the best way for the team to solve a problem is to have one person go and think about it and figure out how to do it. In an 8-hour day the pair should pair for around five to five and a half hours.

Behaviour driven development in Java with easyb (by John Ferguson Smart)

This was a talk about the easyb framework and behaviour driven development.

The talk started by explaining that TDD is not about tests, but about writing good software. In the same manner, behaviour-driven development is not about behaviour; it’s about delivering software that helps the end user. TDD in general tends to produce better code: the application is more flexible, better designed and more maintainable.

BDD is a recent evolution of TDD. The idea is to help determine what to test. In order to test use cases it uses words like “should” to describe the desired behaviour of the class, e.g. “should verify that client can repay before approving loan”, “should transfer money from account a to account b”. As with TDD, you should also focus on requirements, not on implementation.

The framework to do BDD is easyb, which is an open source testing framework for Java (but written in Groovy). It makes tests clearer and easier to write, makes them self-documenting, and enhances communication between the development team and the end user. There is another BDD test framework for Java called JBehave, but in the speaker’s personal opinion it’s cumbersome to use. easyb is based on Groovy but has a Java-like syntax; it’s quick to write and you have full access to Java classes and APIs.

In action, easyb tests requirements through easyb stories, which:

  • use a narrative approach
  • describe a precise requirement
  • can be understood by a stakeholder
  • are usually made up of a set of scenarios
  • use an easy-to-understand structure

Let’s look at an example user story: opening a bank account. “As a customer I want to open a bank account so that I can put my money into a safe place”. We come up with a list of tasks: open account, make initial deposit etc. Let’s concentrate on the initial-deposit requirement:

Make initial deposit:

  • given a newly created account
  • when a deposit is made
  • then the account balance should be equal to the money deposited.

You implement the scenario in a test case written in Groovy, which can use all Java APIs.

If we had to compare easyb to JUnit:

  • more boilerplate code is required in JUnit
  • JUnit tests are not very self-explanatory
  • the intention is less clear.

With easyb you can have multiple post and pre conditions.

In easyb you have shouldBe syntax instead of assert. Variations of the shouldBe syntax include shouldBeEqualTo, shouldNotBe, shouldHave etc. Also there is another way to verify outcome, the ensure syntax, which is much like Java assert.

Fixtures in easyb: you can use before and before_each (similar to @Before and @BeforeClass in JUnit). Very useful for setting up databases and test servers. You can also use after and after_each (similar to @After and @AfterClass).

Fixtures are good at

  • keeping infrastructure code out of the test cases.
  • making test cases more readable and understandable.

As for easyb plugins, only one is available: dbunit. But more are to come, for Grails and Excel.

easyb produces test specifications in a user-friendly format and flags pending (unimplemented) stories. It also provides readable error messages: when tests fail, easyb tells you why in a more readable manner than JUnit.

As for IDE support for easyb, there are three options: IntelliJ, Eclipse and NetBeans, but only IntelliJ has very good support for Groovy.

Some upcoming easyb features

  • html reports
  • grails plugin
  • CI integration
  • Easiness – a FitNesse-style web application (stakeholders create stories in normal text)

The talk closed with an easyb demo and stepping through the source code of the test cases.

What’s new in Spring Framework 3.0 (by Arjen Poutsma and Alef Arendsen)

New features in the upcoming 3.0 release and also some that already exist in 2.5 release.

@Controller for Spring MVC.

@RequestMapping methods.

@RequestMapping("/vets")
public List<Vet> vets() {
    return clinic.getVets();
}

Constant simplification: the lines of code for the sample PetClinic application dropped significantly from Spring 2.0 to Spring 2.5.

Spring Integration is now at version 1.0, released last week at SpringOne.

@PathVariable

@RequestMapping("/pets/{petId}")
public Visit visit(@PathVariable long petId) {
    ..
}

New Views with new MIME types:

  • application/xml: use MarshallingView (3.0M2/SWS 1.5)
  • application/atom+xml: use AtomFeedView (3.0M1)
  • application/rss+xml: use RssFeedView (3.0M1)
  • application/json: use JsonView (Spring-JS)

ShallowEtagHeaderFilter

  • introduced in Spring 3.0M1
  • creates ETag header based on MD5 of rendered view
  • saves bandwidth only
  • Deep ETag support comes in M2 (through @RequestHeader)
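
The bandwidth-saving idea behind the shallow ETag approach can be sketched in plain Java: hash the rendered body, send the hex digest as the ETag header, and let the client skip the download when it matches. This is only a stdlib sketch of the concept, not Spring's actual implementation; the class and method names are made up.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class EtagDemo {
    // Compute an ETag the way the text describes: an MD5 digest of the
    // rendered response body, formatted as a quoted hex string.
    static String etagFor(String renderedView) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(renderedView.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder("\"");
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.append('"').toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        String body = "<html><body>pet clinic</body></html>";
        // The same body always yields the same ETag, so the client can
        // answer a conditional GET with 304 and skip the transfer.
        System.out.println(etagFor(body));
    }
}
```

Note that a filter doing this still renders the view on every request, which is why it saves bandwidth only; deep ETag support avoids the rendering itself.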

At this point they presented a demo with URI support and ATOM feed by using Spring MVC.

Introducing expressions: Spring 3.0 will include full support for expressions.

Spring 3.0 will require Java 5. It will support the Portlet 2.0 specification. And, depending on the specs being finalised on time, it will support Java EE 6: Servlet 3.0, JSF 2.0, JAX-RS and JPA 2.0. It will also support Web Beans annotations.

Spring 3.0 will deprecate or remove several things:

  • the traditional Spring MVC controller hierarchy
  • Commons Attributes support
  • traditional TopLink support
  • the traditional JUnit 3.8 class hierarchy.

but it will still be

  • 95% backwards compatible with regards to APIs
  • 99% backwards compatible in the programming model.

Spring 3.0M1 released last week.
Spring 3.0 Milestones January/February 2009
Spring 3.0 Release Candidates March/April 2009

Categories: devoxx Tags:

Devoxx – Day 2

9 December 2008 Leave a comment

This was a tough day for me, I got a very bad cold (the weather here is far worse than London) and it took loads of strength to go to the conference (as a side note I noticed that the sponsors’ booths are already there, one day earlier than usual). Anyway, I started my day with three hours of Java tuning by attending the

Java Performance by Kirk Pepperdine and Holly Cummins

I was a bit late for this talk since I arrived there ten minutes after it had started. I got there right after Kirk had explained the reasons why a Java application might be performing badly. After that he showed a demo of a web based application connecting to a server that was taking long to reply.

Each software system has dynamic and static aspects. Dynamic aspects include components such as actors (usage patterns), and static aspects include components that do not change or that manage/provide resources. All these factors put load on our system. Each Java system can be further divided into:

  • the application, which does all the processing (it provides locks and includes external systems)
  • the virtual machine, which manages the memory and the hardware
  • the hardware, which is not shareable and requires exclusive access; it manages the CPU, the disk I/O, the memory and the network connections.

In multithreaded systems a lot of time is spent waiting: every time we want to use a shared resource and someone else is using it, we have to wait. So we might experience poor response times, with possible reasons including:

  • overflow is queued on every level
  • hardware lacks capacity
  • bad JVM implementation

At the beginning we don’t really know anything about the cause of the bad performance. All we know is that the users are experiencing poor performance. This poor performance can be either at the operating system level, virtual machine level or application level (or on all simultaneously)

The operating system induced latency

  • hardware management
  • symptoms include:
    • relatively high CPU utilisation
    • high rates of context switching

The virtual machine induced latency

  • Java heap memory management
    • object creation
    • garbage collection
  • symptoms include
    • high rates of object creation
    • low garbage collection throughput
    • likely to see high CPU utilisation

The application induced latency

  • locks block thread from making forward progress (queuing)
  • synchronous calls to external systems park threads

This latency is always expressed by inflated user response times.

If we have high CPU consumption, the candidate consumers of CPU are the application (in which case we need to do execution profiling), the JVM (in which case we need to do memory profiling) or the OS.

There is a wrinkle in all these:

  • JVM and application run in the same user process.
  • Differentiate by monitoring the garbage collection.
  • The operating system needs to be reported separately
  • The operating system will prevent CPUs from being fully utilised
  • Kernel utilisation is a significant portion of overall utilisation
  • High rates of interrupt handling or context switching.

The implications of the above are:

  • applications are asking too much of the operating system
    • thread scheduling
    • system calls

So how do we start our tests? We need to set up a test environment and do a benchmarking.

In order to expose kernel counters we can use the following command line tools: vmstat, mpstat, corestat. In order to monitor the garbage collection we can use:

  • the JVM switch -verbose:gc
  • the GCHisto analysis tool (new)
  • HPjmeter

At this point the speakers presented a demo with a tool called Health Center, by IBM.

In summary: when garbage collection throughput is very high we need to do object-creation profiling; when application execution time is very high we need to do execution profiling.

We can diagnose a CPU-bound application when:

  • code is being invoked more than it needs to be (easily done with event-driven models)
  • an algorithm is not the most efficient (easily done without algorithms research)

Fixing a CPU-bound application requires knowledge of what code is being run:

  • identify methods suitable for optimisation (optimising methods the application doesn’t spend time in is a waste of time)
  • identify methods where more time is being spent than you expect
  • “why is so much of my profile in calls to this trivial little method?”

There are two ways to work out what code the application is doing: trace and profiling.

Trace

  • does not require specialist tools (but is better with them)
  • records every invocation of a subset of methods
  • gives insight into sequence of events
  • in the simplest case System.out.println

profiling

  • samples all methods and provides statistics

Method profiling

  • validates algorithm performance
    • where is the application spending its time
    • am I getting benefits from that?
  • identify methods where application is spending lots of time
  • why are they being called? can calls be reduced?
  • identify branches where time is being spent

The IBM Health Center is designed for:

  • live monitoring of applications
  • capabilities in a number of areas:
    • method profiling
    • garbage collection
    • locking
    • configuration
  • visualisation and recommendations

At this point the speakers benchmarked (benchmarking being the process by which we measure and investigate performance) their demo application using Apache JMeter and The Grinder.

In order to configure the benchmarking environment

  • mirror production (change a layer and you change the problem)
  • ensure adequate hardware for test harness (harness should not be the bottleneck)

preliminary steps:

  • review performance targets
  • plan how to bring the test to a steady state
  • enable an appropriate level of monitoring
    • too much will affect baseline results
    • system counters
    • GC
    • external system performance

To establish a workload

  • a fixed amount of work: measure the time
  • a fixed amount of time: measure the work
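
The two ways of establishing a workload can be sketched as a pair of tiny harness methods. The names and the unit of work below are invented for illustration; a real harness would drive the system under test instead.

```java
public class WorkloadDemo {
    // A stand-in for one unit of work against the system under test
    // (an assumption for this sketch; normally this would be a request).
    static long doUnitOfWork(long seed) {
        long h = seed;
        for (int i = 0; i < 1_000; i++) {
            h = h * 31 + i;
        }
        return h;
    }

    // Fixed amount of work: run N units and measure how long it takes.
    static long timeForFixedWork(int units) {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < units; i++) {
            sink += doUnitOfWork(i);
        }
        if (sink == 42) System.out.print(""); // keep the work from being optimised away
        return System.nanoTime() - start;
    }

    // Fixed amount of time: run for the given duration and count the work done.
    static long workForFixedTime(long millis) {
        long deadline = System.nanoTime() + millis * 1_000_000L;
        long units = 0;
        long sink = 0;
        while (System.nanoTime() < deadline) {
            sink += doUnitOfWork(units++);
        }
        if (sink == 42) System.out.print("");
        return units;
    }

    public static void main(String[] args) {
        System.out.println("ns for 10k units: " + timeForFixedWork(10_000));
        System.out.println("units in 50 ms:   " + workForFixedTime(50));
    }
}
```

Either way, the measurement only means something once the system has reached the steady state the preliminary steps aim for.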

Stumbling points when we set up the environment

  • noise
  • randomisation and caching (systems are very good at caching, we cache everywhere)
  • randomisation and access patterns
  • complex usage patterns (really hard to cope with these things)
  • complex system interactions
  • stubbing out external systems
  • use mocks

At this point they showed an Apache JMeter demo. JMeter is a closed system, which means:

  • it supports a limited number of users
  • a user re-joins the queue as soon as it is finished
    • a source of artificial latency
  • it is self-throttling
  • it is difficult to regulate request arrival rates (thread starvation)

On the other hand, open systems have:

  • an unlimited number of users
    • users arrive according to some schedule
  • the possibility of flooding the system (this is desirable: if your system performs poorly, you want the test to flood it)

Use a closed harness when the number of users is fixed, and an open harness when it is not.

Garbage collection is more than collecting objects; it is memory management. It can provide performance benefits through faster freeing of memory, faster memory allocation and faster memory access. Even in C, freeing memory can be expensive with malloc/free.

Allocating memory also takes time and is particularly slow when you have loads of threads and one heap (the threads will be fighting for the heap).

Finally, not all memory access is equally fast. Garbage collection can speed up memory access by rearranging objects in memory.

Memory access is slow compared to instruction processing. To prevent memory access from being a bottleneck, memory caches are added since access to objects that are already in the cache is faster. On top of that most modern systems have a hierarchy of caches of increasing speed and decreasing size. When an object is loaded into the cache its neighbours are also loaded into the cache. This makes relative position of objects very important to memory access.

The garbage collector can hinder or help interaction with the cache:

  • cache pollution
    • depending on the algorithm, the GC may visit quite a lot of memory during a collection
    • afterwards the cache won’t be right for the application, because it will be full of the stuff the collection just visited
  • compaction
    • objects end up closer to their neighbours and are more likely to be in the cache
  • re-arrangement
    • objects end up closer to their friends and are more likely to be in the cache at the right time

Most JVMs provide more than one garbage collection algorithm. None of the policies are bad, but the default is not necessarily the best in every circumstance. All the different garbage collectors differ in the following

  • when and how the work is done?
  • what happens to garbage?
  • how is the heap laid out?

There are three types of garbage collectors:

  • stop the world garbage collector
    • all application threads stop
  • incremental
    • divided into smaller portions
  • concurrent
    • happens at the same time with the application.

Even when the garbage collector spends a lot of time pausing the application, overall performance might still be better. Why? Because garbage collection is about more than just collecting garbage.

At this point they presented another demo, of the Memory Visualizer tool, which focuses purely on garbage collection. This tool works with all VMs (the Health Center works only with the IBM VM).

And yet another demo with Eclipse Memory Analyzer tool this time.

Pro Spring 2.5 (with Joris Kuipers and Arjen Poutsma)

Another three-hour session, an overview of the new Spring 2.5 features.

Spring 2.5 is one of the first major frameworks with dedicated support for Java 6. All the new JDK 1.6 APIs are supported (JDBC 4, JMX, the JDK ServiceLoader API). JDK 1.4 and 1.5 are still supported, but there is no JDK 1.3 support.

There is improved JDBC support:

  • JDBC 4.0
    • native connections (java.sql.Wrapper)
    • LOB handling (setBlob/setClob)
    • new SQLException subclasses.
  • other JDBC improvements
    • SimpleJdbcTemplate
    • SimpleJdbcCall and SimpleJdbcInsert

They have added support for named parameters when we use the JdbcTemplate.

They have added support for JMX MXBeans. MXBeans are a new addition to JMX and they provide better support for bundling related values (standard MBeans require custom classes). MXBeans can be registered by the MBeanExporter (the JMX spec does not allow dynamic creation).

The new JDK ServiceLoader API (java.util.ServiceLoader) is used to register service providers for services. The file META-INF/services/my.service lists the implementation classes for my.service. This is used by Service(List)FactoryBean.
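
As a sketch of how that lookup works, here is the ServiceLoader API called directly. The GreetingService interface is hypothetical; a real provider would be listed in a META-INF/services file named after the interface's fully qualified name.

```java
import java.util.ServiceLoader;

public class ServiceLoaderDemo {
    // Hypothetical service interface. Providers would be registered on the
    // classpath in META-INF/services/ServiceLoaderDemo$GreetingService,
    // one implementation class name per line.
    public interface GreetingService {
        String greet(String name);
    }

    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services for registered providers.
        ServiceLoader<GreetingService> loader = ServiceLoader.load(GreetingService.class);
        int found = 0;
        for (GreetingService service : loader) {
            System.out.println(service.greet("Devoxx"));
            found++;
        }
        // With no provider file on the classpath, nothing is discovered.
        System.out.println("providers found: " + found);
    }
}
```

The provider-configuration file is the whole registration mechanism, which is what makes the API a natural fit for Spring's Service(List)FactoryBean mentioned above.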

Spring 2.5 also supports the Java 6 built-in HTTP server. It also supports HTTP-based remoting using SimpleHttpInvokerServiceExporter and SimpleHessian/BurlapServiceExporter. You can set up a JRE 1.6 HttpServer by using SimpleHttpServerFactoryBean.

Java EE 5 support

  • integrates seamlessly
  • new Java EE 5 APIs supported
    • Servlet 2.5, JSP 2.1 and JSF 1.2
    • JTA 1.1, JAX-WS 2.0 and JavaMail 1.4
  • J2EE 1.4 and 1.3 still fully supported
    • e.g. BEA WebLogic 8.1 and higher
    • e.g. IBM WebSphere 1.5 and above.

    But in Spring 3 they will drop J2EE 1.3 compatibility.

    Java EE 5 APIs:

    • Support for the unified expression language
      • JSF 1.2: SpringBeanFacesELResolver
    • Consistent use of JSR-250 annotations
    • JTA 1.1: support for the new TransactionSynchronizationRegistry
    • new JTA, JavaMail and JAX-WS support also available for stand-alone usage

    Spring 2.5 also supports the Java EE Connector Architecture (JCA).

    Other J2EE enhancements that come with Spring 2.5 include:

    • Spring 2.5 officially supported on IBM WAS 6.x
    • WebSphereUowTransactionManager
      • a WebSphereTransactionManagerFactoryBean replacement
      • no new features, but it uses a supported IBM API

    Spring 2.5 also works well with OSGi, which provides a dynamic module system with the bundle as its central packaging unit.

    It also supports new configuration features such as annotations-driven configuration, JMS and JCA support, enhanced AspectJ support and annotations-driven MVC controllers.

    Spring 2.5 embraces annotations. It supports JSR-250 annotations and Spring-specific annotations (which make the code Spring-dependent). This doesn’t mean they prefer annotations to XML; XML is in no way deprecated. Spring-specific annotations can be used if you need more power than the standard annotations offer, and if you don’t care about migrating your application to another framework:

    • New @Autowired annotation
      • autowiring by type
      • of fields, methods and constructors
      • AutowiredAnnotationBeanPostProcessor
    • Autowiring by type might have too many candidates
      • provide hints using qualifiers
      • through the new @Qualifier annotation
      • on fields or parameters

    Annotations-based autowiring pros and cons

    • pros:
      • self-contained: no XML configuration needed
      • works in many more cases than generic autowiring (any method or field)
      • JSR 250 or custom annotations keep your code from depending on Spring
    • cons:
      • requires classes to be annotated
      • configuration only per class not per instance
      • changes require recompilation

    At this point they presented a demo of the PetClinic application and showed the capabilities of Spring.

    Spring 2.5 also provides extra support for AspectJ

    • new bean(name) pointcut element
      • for use in AspectJ pointcuts
      • matches beans by name
      • supports wildcards
    • no more need for BeanNameAutoProxyCreator
    • support for AspectJ load-time weaving through Spring’s LoadTimeWeaver
      • driven by META-INF/aop.xml files
      • for any supported platform
        • generic Spring VM agent
        • various app servers: Tomcat, GlassFish, OC4J

    You can use AspectJ to inject instances of objects that have not been constructed by Spring.

    Since we are no longer using proxies but AspectJ, we can apply aspects in places we could not before. We don’t have to force everyone to go through a proxy instead of the real object; the programme becomes more natural.

    Spring has its own web framework, Spring MVC, which in Spring 2.5 gains:

    • Java5 variant of MultiActionController
      • including form handling capabilities
    • POJO based
      • just annotate your class
      • works in Servlet and Portlet container
    • Several annotations
      • @Controller
      • @RequestParam
      • @RequestMapping/@RequestMethod
      • @ModelAttribute
      • @SessionAttributes
      • @InitBinder

    The test context framework

    • revised annotation-based test framework
    • supports JUnit 4.4 and TestNG as well as JUnit 3.8
    • supersedes the older JUnit 3.8 base classes
      • AbstractDependencyInjectionSpringContextTests and friends
      • they’re still there for 1.4
      • will be deprecated in Spring 3.0
    • convention over configuration
      • use only annotations
      • reasonable defaults that can be overridden
    • consistent support for spring’s core annotations
    • spring-specific integration testing functions
      • context management and caching

    Another pet clinic demo of spring followed at this point.

    Profiler: the better debugger (by Heiko Rupp)

    This was a very similar talk to the one Kirk and Holly gave in the morning. Nothing really new here (I shouldn’t have gone, but since I was already in the room I thought I’d stay).

    Main points I wrote down:

    A debugger is a tool that steps through code, can look at variables, and can stop/pause the programme on exceptions or breakpoints. But there are issues with time-outs in larger applications.

    A profiler analyses CPU and memory usage. The application runs while being profiled and there is no view of the contents of variables. Free profilers: the NetBeans profiler, Eclipse TPTP; commercial profilers: JProfiler, JProbe.

    Why use a profiler for debugging?

    • start of call chain is unknown
    • call stack uses reflection
    • use of big complex frameworks
    • transaction timeouts render values invalid in the debugger

    And then a NetBeans profiler demo followed.

    Categories: devoxx

    Devoxx – University Day 1

    8 December 2008 5 comments

    Back in Antwerp for yet one more Devoxx event, second only to J1. Five days of Java overdose; let’s see what we have. The first talk I attended today was

    Kick start JPA with Alex Snaps and Max Rydahl Andersen.

    This was a talk I wanted to attend since we are using JPA at work. The talk started by Alex Snaps giving an overview about JPA and why we need an ORM framework. Traditional CRUD code using JDBC tends to be ugly and hard to maintain. JPA eliminates the need for JDBC (CRUD and Querying), provides inheritance strategies (class hierarchy to single or multiple tables), associations and compositions (lazy navigation and fetching strategies).

    It is vendor independent and easy to configure using annotations (which you can override with XML), and JPA is available outside Java EE containers. And as of JPA 2.0 there is a dedicated JSR for it (JSR 317).

    One of the goals of JPA is that it should be transparent, but, according to Alex, it’s not there yet and he doesn’t think it will ever be.

    After this small introduction the speaker moved to explaining what an entity class is. In JPA

    • entity classes should not be final or have final methods
    • entity classes have a no argument constructor
    • collections in entity classes should be typed to interfaces.
    • associations in entity classes aren’t mapped for you.
    • and there must be an id field in the entity class

    Entity classes support simple types:

    • primitive & wrapper classes
    • String
    • BigInteger & BigDecimal
    • Byte & Char arrays
    • Java & JDBC temporal types
    • Enumeration
    • Serialisable types

    In JPA we can have class-hierarchy features like inheritance, entity support and polymorphic associations, and we can also map concrete and abstract classes by using the @Entity or @MappedSuperclass annotations.

    There are a few ways to implement polymorphism in JPA. We can have one table per class hierarchy (using a discriminator column; this is a viable solution but the table can get very big), a joined subclass strategy (a table per class, with subclasses joined in from their own tables) and one table per concrete class (optional for providers to support).

    Many-to-one association is supported by using the @ManyToOne annotation:

    public class Person
    {
        @ManyToOne
        private Customer customer;
    }
    

    When you load the person, the customer is loaded as well.

    Similarly one-to-one association is supported by using the @OneToOne annotation:

    public class Person
    {
        @OneToOne
        private Address address;
    }
    

    There is a unique constraint here: only one person can have this address. In a one-to-one bi-directional association both a person belongs to one address and an address belongs to one person.

    A uni-directional one-to-many association is the same as the bi-directional one but without the mappedBy attribute. In this case, without an owning side with a cardinality of one, a join table is required.

    Of course we can use generics for the mapping. If we do not use generics we will need to tell the container what entity it should be mapped to.

    In order to manage persistence we use javax.persistence.Persistence to create an EntityManagerFactory for a named persistence unit. With this factory we can create an EntityManager instance, which handles the persistence of the entities, and use Query to query them back from the database.

    In order to set up the persistence unit we need to write a bit of XML (the persistence.xml file). We have to give the persistence provider all the database properties (driver etc; the provider will use these properties for initialisation) and let JPA know which classes will be persistent.

    The entity manager is the main object that takes care of the JPA stuff. It manages object identity and it manages CRUD operations.

    The life cycle of an entity can have four states:

    • new (it’s new and not yet associated to a persistence context)
    • managed (is associated to a persistence context and the persistence context is active)
    • detached (has a persistent identity -i.e. it is associated to a persistence context- but this persistence context is not active)
    • removed (it is removed from the database).

    The persist method of the entity manager persists a new entity to the database. The remove method removes the entity from the database. If the entity is already scheduled for removal the operation is ignored. In all the above cases the operation might be cascaded if there are associated objects with the entities.

    If an entity is managed and something changes in its state, that state will be automatically synchronised with the database. This is called flushing, and we can have automatic flushing or manual commits.

    When an entity’s persistence context is closed, the entity goes into the detached state. The context can be closed when: a) the transaction commits in a JEE environment, or the developer manually manages the persistence context’s life cycle in JSE; b) the entity is serialised; or c) an exception occurs.

    An entity can also be merged using the entity manager’s merge method. When an entity is detached and a merge is applied to it then a new managed instance is returned, with the detached entity copied into it. If the entity is new then a new one is returned. Merge is also a cascading operation.

    If we want optimistic locking with JPA we should annotate a field of our entity with the @Version annotation. This field will be updated by the entity manager every time the entity’s state is written to the database. The @Version field can be of type int, Integer, short, Short, long, Long or Timestamp. As advice: do not modify this field yourself (unless you really know what you are doing).
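
The check the provider performs for a @Version field can be illustrated in plain Java. This is a simulation of the idea, not JPA itself; the AccountRow type and update method are made up for the sketch.

```java
public class VersionCheckDemo {
    // A stand-in for a database row with a column mapped by @Version.
    static class AccountRow {
        long balance;
        int version;
    }

    // Mimics "UPDATE ... WHERE id = ? AND version = ?": the write only
    // succeeds if the version read earlier still matches, and a successful
    // write bumps the version, invalidating other stale readers.
    static boolean update(AccountRow row, long newBalance, int expectedVersion) {
        if (row.version != expectedVersion) {
            return false; // another transaction committed first
        }
        row.balance = newBalance;
        row.version++;
        return true;
    }

    public static void main(String[] args) {
        AccountRow row = new AccountRow();
        int readByTx1 = row.version; // both "transactions" read version 0
        int readByTx2 = row.version;
        System.out.println(update(row, 100, readByTx1)); // first writer wins: true
        System.out.println(update(row, 200, readByTx2)); // stale version, rejected: false
    }
}
```

In real JPA the provider issues exactly this kind of conditional UPDATE and typically throws an OptimisticLockException when no row matches.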

    JPA uses its own query language to query for entities. The Query API is used for named queries via the @NamedQuery annotation (in TopLink you can cache all the queries/prepared statements) and for dynamic queries, and it supports polymorphism and pagination among other features.

    You can also do bulk operations with JPA by using the Query API (caution! bulk operations will not affect the entity manager).

    How do we deal with object identity in JPA? Do we use the database identifier as part of the equals operation? There is a reminder in Object’s equals documentation that we should take into consideration: “Note that it is generally necessary to override the hashCode method whenever this method is overridden, so as to maintain the general contract for the hashCode method, which states that equal objects must have equal hash codes.”

    If we use a database identifier then we always need to have it assigned before we use the object. First persist the object, then flush it and then use the object (for example as part of a bigger collection).

    Another solution is to use a business key, some sort of GUID/UUID. It is recommended not to override the equals or hashCode methods unless you really need to and know what you are doing. Yet overriding equals and hashCode is okay if your object is going to be used as a composite identifier.
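
For the cases where overriding really is needed, a business-key-based equals/hashCode might look like the following sketch (the Customer class and its fields are hypothetical):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class BusinessKeyDemo {
    // An entity whose equality is based on an immutable business key
    // (a GUID assigned at construction) rather than the database id,
    // so the object behaves correctly in collections before it is persisted.
    static class Customer {
        Long databaseId; // null until persisted
        final String guid;

        Customer(String guid) {
            this.guid = guid;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Customer)) return false;
            return guid.equals(((Customer) o).guid);
        }

        @Override
        public int hashCode() {
            return Objects.hash(guid); // equal objects -> equal hash codes
        }
    }

    public static void main(String[] args) {
        Set<Customer> customers = new HashSet<>();
        Customer beforePersist = new Customer("abc-123");
        customers.add(beforePersist);
        beforePersist.databaseId = 42L; // "persisting" assigns the id
        // The set still finds the object: its hash code never changed.
        System.out.println(customers.contains(new Customer("abc-123")));
    }
}
```

Because the GUID never changes, the object hashes to the same bucket before and after the database id is assigned, which is exactly the problem the database-identifier approach runs into.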

    Then the talk moved to the listeners and callback methods of JPA. Listeners are called before the callback methods on entity classes and are registered using the @EntityListeners annotation on the entity class. The order is preserved and the container starts with the top of the hierarchy. Each event (PrePersist, PostPersist etc) is registered by adding the relevant annotation to a method with signature: void method(Object).

    Callbacks are called after the listener classes and they are registered by using the same annotations to a method on the entity class. In case of a runtime exception the transaction will rollback.

    In a JEE environment, if we have a stateful session bean the persistence context will be created when the stateful bean is created, and will be closed when that stateful bean and all other stateful beans that inherit the persistence context are removed. If the stateful bean uses container-managed transaction demarcation, the persistence context will join the transaction.

    In summary JPA makes dealing with RDBMS much simpler… once you understand how JPA works. It is available in JEE as well as in JSE, multiple vendors support it, it has a dedicated JSR and there is great tools support.

    Time for Q&A. What are the differences between the vendor implementations of JPA, and what is the speaker’s personal preference? He said he prefers Hibernate, but he is biased since he’s been using it for years. He has never used EclipseLink; he has used TopLink and saw that it sometimes doesn’t implement the specification properly. As advice: whatever ORM framework you choose, make sure you know its flaws very well, especially with respect to the specification.

    Second part of the talk was about two JPA tools we can use: the Dali tool and hibernate tools. This talk was delivered by Max.

    The Dali tool supports the definition, editing and deployment of JPA entities and makes mapping simple. The Hibernate Tools (they are Hibernate-centric but can be used for other entities as well) offer a unique feature set (wizards, .hbm and .xml editors, JPA query prototyping etc) to make writing JPA much simpler.

    The goals of both the tools is simplicity (mapping assistance & automatic generation), intuitiveness (use existing modelling), compliance and extensibility.

    At this point the speaker presented these tools to us for the rest of his speaking time.

    Test driven development with Dave Nicolette.

    This was mainly a three-hour development of a test-driven application from scratch. Several people came up to the speaker’s computer and pair-programmed an application (using Eclipse and JUnit 4) from an initial test to completion, starting with a user story: a simple statement about who the user is, what the action is, what the person is going to do with the software, and the expected output. This development took most of the time of the talk, and while the application was being developed we discussed the different steps and approaches to the problems being solved.

    Key things I wrote down:

    Test-driven development means writing tests that drive the code. We keep these tests as we write more code and make sure that new code doesn’t break anything in the existing code. These regression tests have lasting value throughout the life of the application.

    TDD works better with small specifications where we can break the problem down to smaller bits.

    In TDD there is a 3-step cycle: red (the test fails initially), green (we make the test pass with the simplest, quickest code) and refactor (we make the code better without breaking the existing tests).
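
A hypothetical miniature of that cycle (the countWords example is invented, not from the session):

```java
public class RedGreenRefactorDemo {
    // Red:      with no implementation, a test asserting
    //           countWords("one two three") == 3 fails to compile/pass.
    // Green:    the quickest passing implementation is simply "return 3;".
    // Refactor: generalise without breaking the test, giving this version.
    static int countWords(String text) {
        String trimmed = text.trim();
        if (trimmed.isEmpty()) {
            return 0;
        }
        return trimmed.split("\\s+").length;
    }

    public static void main(String[] args) {
        // The original test case, now passing against the refactored code.
        System.out.println(countWords("one two three"));
    }
}
```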

    Should we test getters/setters? If we write them manually we should; if we let the IDE generate them for us, we can leave those tests out. It is also good practice to write tests for mutators that guard the invariants of a class, for example a field that has to make sure that the balance of an account is never set to zero.
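
A mutator guarding such an invariant, and therefore worth a test, might look like this sketch (the Account class is illustrative, echoing the balance example above):

```java
public class InvariantMutatorDemo {
    // Unlike a trivial IDE-generated setter, this mutator enforces a
    // class invariant: the balance must never be set to zero.
    static class Account {
        private long balance = 1;

        void setBalance(long newBalance) {
            if (newBalance == 0) {
                throw new IllegalArgumentException("balance must never be set to zero");
            }
            this.balance = newBalance;
        }

        long getBalance() {
            return balance;
        }
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.setBalance(50); // valid mutation
        try {
            account.setBalance(0); // violates the invariant
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
        System.out.println(account.getBalance()); // unchanged by the bad call
    }
}
```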

    It’s not wrong to have more test code than production code, sometimes you might need to have even ten times more test code than production code.

    VisualVM – New extensible monitoring platform (by Kirk Pepperdine)

    This was a short talk by Kirk Pepperdine (he actually replaced the original speaker from Sun) about how to use VisualVM. It was more of a demonstration than a talk.

    The current VisualVM version is 1.0.1. This includes

    • visual tool
    • combines several tools
      • command line
      • jconsole
      • profiler
    • targeted for production and development
    • bundled with Sun’s JDK 1.6_07

    For monitoring and discovering local JVMs, use jps. For remote monitoring use jstatd which defaults to the following:

    • com.sun.management.jmxremote.port=3333
    • com.sun.management.jmxremote.ssl = false
    • com.sun.management.jmxremote.authenticate = false

    For explicit JMX agent connection use: -J-Djconsole.plugin.path=<path to the jconsole plugin>

    It does core dumps only on Linux and Solaris.

    You start VisualVM with: visualvm --jdkhome $JAVA_HOME --userdir <path to dir>

    When we use the visual vm to monitor an application we have to make sure that we have turned clustering off.

    Sometimes when we get an OutOfMemoryError while the heap is not full, it is most likely a problem with the perm generation.

    Objects that stay in the new (eden) space are easier to discard/GC than objects that have been promoted to the old heap space.

    Making full use of hibernate tools (by Max Andersen, JBoss Tools Lead)

    This again was similar to the talk in the morning, but with more in-depth demonstration of how to use hibernate tools. In short

    • Hibernate tools support Hibernate 3 and EJB3/JPA.
    • supports code completion in .hbm and .xml (class, properties, types etc).
    • Usually code generation is not as sophisticated or smart as manual code… but sometimes you have to bite the bullet.

    It can export templates (like .ftl templates) into entity objects.

    It provides custom JDBC binding (when we read stuff from the db how do we understand the meta-data in there).

    It provides reverse engineering:

    • use reveng.xml for basic control over:
      • included/excluded tables/columns
      • naming
      • meta-attributes (<meta attribute=...> elements)
    • use programmatic reverse engineering for complete control (extend DelegatingReverseEngineeringStrategy)
    • you can also implement customizable reverse engineering

    Concluding statement: use the right tool for the right job. Even if you have a tool that does (almost) everything for you, you still have to think.

    Q&A: Are these tools available for NetBeans? The short answer is no, but you can use the Ant tasks from NetBeans. The source code is there but someone needs to integrate it with NetBeans.

    Categories: devoxx Tags:

    JavaPolis changes its name to Javoxx

    11 May 2008 Leave a comment

    With JavaOne and all the travelling and the sessions I had to attend I forgot to say that former JavaPolis has now become Javoxx. Nothing at all has changed (apart of course from the name and the logo) about the conference, it’s still the same people, same place and same concept.

    Hope to see every one of you there in December (8 to 12). Don’t forget to book tickets and hotels early enough; last year it was sold out.

    Categories: devoxx

    JavaPolis 2007 – Day five

    16 December 2007 Leave a comment

    Last day of JavaPolis today and it finishes at two o’clock. For my first presentation today I attended Java for high performance 3D and 2D graphical applications by Frank Suykens.

    He first showed us a few demo 2D and 3D applications written in Java which were quite fast and responsive. Some tips to write applications with great performance are

    • Do benchmarking and profiling in order to measure performance.
    • Use good algorithms (BSP-Tree, R-Tree).
    • Always use caching.
    • Always use the latest JRE if possible.

    He then showed us a nice air traffic demo from Luciad with 3D rendering and some really nice and smart features. They used heavyweight OpenGL in order to do all these and then Frank went through several tips of how to achieve high performance in critical applications using Swing.

    • Use GLCanvas
    • Use GLJPanel
    • Make sure all components are heavyweight. For example, for a popup menu use setDefaultLightweightPopupEnabled(false); a lightweight popup would otherwise be hidden behind the heavyweight components.
    • Use Vertex Buffer Objects (VBO) when drawing triangles.
    • Reuse Swing components.
    • Use incremental gc if the time really matters.
    • Use tools such as JConsole, JProfiler, VisualGC in order to profile the memory.
    • Use -XX:+PrintGCDetails.

    A nice tip at the end of the presentation is that Luciad is hiring, so if you are interested in 2D and 3D programming with Java drop them an e-mail.

    Next talk was about Real Options in a Nutshell by Olav Maasen and Chris Matts. This session was mainly about changing people’s minds and behaviour when there is really a need to do so. Decisions have to be based on logic and facts rather than emotions.

    It’s all about options and the three things we should have in mind are

    • Options have value
    • Options expire
    • Never commit early unless you know why.

    Another good point is that people don’t like uncertainty; they always want to hear specific dates. For example, when you postpone a project deadline it’s better to give the manager the exact date of the next release than to say “in a few days”. Making a decision then is better than avoiding a decision now, because nobody likes uncertainty.

    Always take time to evaluate a product, don’t rush things even if the managers push for the quickest way to market.

    Always create options, then when you have to decide you have several options.

    Good design upfront helps you identify good options in the future.

    And always be prepared to run fast when you make a decision, but make sure that you have rested beforehand.

    Last session I attended was TDD beyond the acronyms by Lasse Koskela. Lasse Koskela spoke about test driven development and design and how this can help us build better code faster. The principles of TDD are three easy steps

    • Write a test and see it failing.
    • Make the test pass.
    • Refactor it and improve the design in the safety of the test.

    A question arises when we do TDD. What should we test? The answer is rather simple; think about the design and what kind of behaviour is missing from the application. In the beginning we want to test the application as soon as possible and therefore we need to make the test pass asap. We don’t really care about what the code looks like in this stage as long as it works (even hard coded values are fine). Then as soon as the test passes we can refactor it and improve the design. We should restructure the code without changing its behaviour.

    There are three ways to use test-doubles (terminology below according to Martin Fowler)

    • Stubs – they implement only a subset of the methods of the real object and return hard-coded values.
    • Fakes – working replacements, for example an in-memory database standing in for a real one.
    • Mocks – self-verifying objects that record how they were called so the test can check the interactions afterwards.
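    A minimal plain-Java sketch of the three kinds of test double, using a hypothetical UserRepository interface:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface the doubles stand in for.
interface UserRepository {
    String findName(int id);
}

// Stub: implements just enough behaviour, returning a hard-coded value.
class StubUserRepository implements UserRepository {
    public String findName(int id) { return "alice"; }
}

// Fake: a working in-memory replacement for the real database.
class FakeUserRepository implements UserRepository {
    private final Map<Integer, String> rows = new HashMap<Integer, String>();
    void insert(int id, String name) { rows.put(id, name); }
    public String findName(int id) { return rows.get(id); }
}

// Mock: records calls so the test can verify the interaction afterwards.
class MockUserRepository implements UserRepository {
    int calls = 0;
    public String findName(int id) { calls++; return "ignored"; }
    void verifyCalledOnce() {
        if (calls != 1) throw new AssertionError("expected exactly one call");
    }
}

public class DoublesDemo {
    public static void main(String[] args) {
        if (!"alice".equals(new StubUserRepository().findName(1))) throw new AssertionError();
        FakeUserRepository fake = new FakeUserRepository();
        fake.insert(1, "bob");
        if (!"bob".equals(fake.findName(1))) throw new AssertionError();
        MockUserRepository mock = new MockUserRepository();
        mock.findName(1);
        mock.verifyCalledOnce();
        System.out.println("all doubles behaved");
    }
}
```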
    Categories: devoxx

    JavaPolis 2007 – Day four

    16 December 2007 Leave a comment

    Second day of the conference and I only now have time to write about it (so much beer and socializing at this year’s JavaPolis). To be honest I didn’t attend the Flex talk, I only went there for the new Parleys site. It is built using Flex and it has several advanced Web 2.0 features. I won’t spoil the secret, wait till it’s released (it’s in beta now). Around January they will announce that they need some beta testers, so if you’re interested keep an eye on the Parleys web site.

    Second talk I went into was the Java Persistence 2.0 API (JSR-317) by Linda Demichiel. JPA was originally introduced as part of EJB 3.0 (JSR-220). The current JPA is still at the 1.0 release, which has several issues and ambiguities (like optional functionality that was left as vendor-specific to implement). The purpose of the new JSR is to solidify the standard and clarify all open issues.

    The JSR 317 is still work in progress.

    Among the things they will introduce are

    • a more flexible modeling and mapping with
    • ordered lists
    • collections of basic types
    • support for embedded types.
    • Multiple levels of embeddables.
    • @OrderBy and @OrderColumn annotations.
    • New map functionality in the 2.0 version
      • a map key can be a basic type, an embeddable or an entity
      • the same applies for a map value.
    • a new @Access annotation that will specify non-default behaviour. Classes in a hierarchy can have different access types.
    • It’s still under discussion whether to have
      • a table per concrete class
      • an inheritance mapping
      • and orphan deletion (it’s optional in the JPA 1.0 version).
    • Expanded query capabilities in order to improve the query language.
    • Specify what happens in un-fetched entities/relationships
    • Extended persistence context.
    • Bean Validation (JSR-303)
    • Other proposed functionality
      • More flexible modeling.
      • Expanded O/R mapping.
      • QL extensions.
    • Better portability, aligning with emerging JSRs.

    Next presentation was the “Closures Controversy” by Joshua Bloch. This talk was given instead of the Effective Java Reloaded talk and was purely the speaker’s opinions, not Google’s. To be honest I wasn’t familiar with the closures syntax at all and didn’t understand several of the features Joshua was talking about, but closures sure make the Java syntax ugly. I cannot comment on any other parts since the presentation consisted mainly of closures examples taken from the BGGA proposal. So better study the BGGA spec and find out for yourself.

    Next session was JavaPosse live by Dick Wall and Carl Quinn (wearing those weird hats, lol) with the remote help (via Skype) of Tor Norbye and Joe Nuxoll. They presented the usual JavaPosse news straight from JavaPolis.

    Next talk was JSR-310, Date and Time API by Stephen Colebourne (also author of Joda Time). The idea is simple: the current date and time API has several misleading features (months start from zero, years start from 1900 etc) and in this JSR Stephen suggests a way to overcome these issues.

    The JSR-310 is a very open process. It has public mailing lists, a public wiki and SVN repository and a public bug & features request database. Therefore anyone can participate.

    The JSR suggests a few things:

    • The date should be immutable. This gives several advantages: it cannot be changed after it’s created, it is thread-safe and it can be a singleton.
    • Should use the builder pattern in order to create a date.
    • We should have fewer sub/superclasses.
    • Have only one class of date, which will be called Instant.
    • The interval of time can be represented as an Interval class.
    • The duration of time as a Duration class.
    • The format of the time should follow the ISO-8601 format: {date}T{time}{offset}. For this we can have
      • LocalDate
      • LocalTime
      • LocalOffset
    • With regards to the bullet point above we can have one class for each combination, like LocalDateTime or OffsetDate.
    • The date can be split down into year, month of year and day of month. Each one of them can be represented as a different class.
    • There should be a Resolver class that can resolve invalid dates such as 30th of February as well as DST changes.
    • We should use the strategy pattern for the resolver.
    • Integration with existing classes should be done via interfaces. We can change the current date & time classes to implement the same interfaces like ReadableDate (java.sql.Date implements ReadableDate).
    • All new classes do not reference any of the old JDK classes.
    • In XML we define similar connections, for instance:
      • xs:date corresponds to ReadableDate
      • xs:gYearMonth to ReadableYearMonth
    • We implement several classes to handle specific dates. Each class should implement ReadableDate.
      • HebrewDate
      • JapaneseDate
      • BrazilianDate
      • etc.
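    The immutability point above can be illustrated with a stripped-down value class; SimpleDate is a hypothetical name for illustration, not part of the JSR:

```java
// Minimal sketch of the immutability idea behind JSR-310: a value class
// with final fields, factory creation and no mutators. SimpleDate is a
// made-up name, not an actual JSR-310 class.
public final class SimpleDate {
    private final int year;
    private final int monthOfYear;
    private final int dayOfMonth;

    private SimpleDate(int year, int monthOfYear, int dayOfMonth) {
        this.year = year;
        this.monthOfYear = monthOfYear;
        this.dayOfMonth = dayOfMonth;
    }

    // Factory method instead of a public constructor, so invalid dates
    // can be rejected (or handed to a resolver) in one place.
    public static SimpleDate of(int year, int monthOfYear, int dayOfMonth) {
        if (monthOfYear < 1 || monthOfYear > 12 || dayOfMonth < 1 || dayOfMonth > 31) {
            throw new IllegalArgumentException("invalid date");
        }
        return new SimpleDate(year, monthOfYear, dayOfMonth);
    }

    // "Mutation" returns a new instance; the original never changes,
    // which is what makes the class thread-safe.
    public SimpleDate withYear(int newYear) {
        return of(newYear, monthOfYear, dayOfMonth);
    }

    public int getYear() { return year; }

    public static void main(String[] args) {
        SimpleDate release = SimpleDate.of(2007, 12, 16);
        SimpleDate next = release.withYear(2008);
        System.out.println(release.getYear() + " -> " + next.getYear()); // prints 2007 -> 2008
    }
}
```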

    Those who use Joda Time and want to move from an existing Joda implementation will need to make a few changes.

    Last talk for today was The Java Puzzlers with Joshua Bloch and Neal Gafter. They ran through several Java gotchas and demonstrated how easy it is to fool the Java programmer. I don’t have the code slides with me so I will go quickly through the dos and don’ts:

    • The remove method in the Set interface does not benefit from generics since it still takes an Object as a parameter.
    • Hash code and equals methods of URL are broken. Use URI instead.
    • JUnit does not support concurrency. It can throw exceptions that are never seen. If you are testing threads always pass the exception to the test framework*.
    • Add try... catch around assert methods.
    • Auto-boxing happens when you least expect. Try to avoid it.
    • Order of executing static statement does matter.
    • Use primitive boolean instead of object Boolean.
    • Never use a Boolean to return true, false or null (I don’t know).
    • InputStream‘s skip() method does not guarantee to skip all data.
    • In general if an API is broken, wrap it in something that hides the brokenness and behaves better.
    • Math.abs() does not guarantee a positive result since Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE.
    • Do not mix data types.
    • Silent “widening” is lossy and dangerous.
    • Almost never use float. Only use it if you have an array with huge amounts of floats.
    • Overloading is dangerous.
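    Two of these gotchas are easy to reproduce in a few lines:

```java
import java.util.HashSet;
import java.util.Set;

// Two of the puzzlers, runnable. Math.abs cannot return a positive value
// for Integer.MIN_VALUE because +2147483648 does not fit in an int, and
// Set.remove(Object) quietly accepts an argument of the wrong type.
public class PuzzlerDemo {
    public static void main(String[] args) {
        // Math.abs overflow: the result is still negative.
        System.out.println(Math.abs(Integer.MIN_VALUE)); // prints -2147483648

        // remove(Object) compiles even though the set holds Integers,
        // so this call silently removes nothing.
        Set<Integer> numbers = new HashSet<Integer>();
        numbers.add(1);
        numbers.remove("1");                // wrong type, no compile error
        System.out.println(numbers.size()); // prints 1
    }
}
```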

    * Define some error handling in the tearDown method

    volatile Exception ex;
    volatile Error error;
    ...

    protected void tearDown() throws Exception {
        if (error != null) throw error;
        if (ex != null) throw ex;
    }

    Categories: devoxx

    JavaPolis 2007 – Day three

    16 December 2007 Leave a comment

    First day of the conference today and we all gathered in the big room number 8 to see James Gosling. The session started with Stephan Janssen announcing some facts about JavaPolis 2007: it’s actually the first JavaPolis that has sold out, and people from as far away as Korea, New Zealand and Brazil came to attend. It was a short but concise fifteen-minute talk.

    Bruce Eckel followed and he opened his presentation with the notion of the “Unconference”. The unconference idea is based on the concept of open spaces. Research has shown that people enjoy two things most at conferences: the BOFs (Birds of a Feather sessions) and the “hallway talk”, i.e. when a session finishes and people meet in the hallway outside the rooms and have a chat. So the idea is simple. There is a whiteboard on the ground floor with a grid on it; rows are the room numbers and columns are the times. Anyone who wants to have a chat about an idea can go there and put a note on the grid, and if people are interested they meet and talk about the subject. There is also the “law of two feet” in this kind of talk: if you don’t like the conversation you use your two feet and go to another one. This is not considered bad or rude, since by going away you take with you the bad energy and feeling you radiate, and you leave behind only the people who really want to talk about the subject. If you stay just to be polite you are actually pulling everyone else’s energy down, so it’s better to leave than to stay and say nothing. Brilliant idea.

    After Eckel, James Gosling started presenting his talk. By this time three huge rooms were full of people and the talks were projected to the other rooms as well. Gosling started with some Java statistics. There are more than 5 billion devices that support Java (2.1 billion of them are mobile devices), there are more than 6 million Java developers, and Java is behind some of the most impressive systems in the world, including the robots sent to Mars, the Brazilian health system, eBay, CERN and so on. Even the Oyster card in the London Underground (and some overground) is a Java card (I didn’t know that) and the whole back end is written in Java. Gosling is currently involved in JavaRTS, the real time specification for Java. He addressed the myth that the Java language is slow: the truth is that in some benchmarks it’s faster than C++ (the performance varies between -2% and 4% faster) and it is highly optimized. He then went on about Java 6, which has several improvements over Java 5. There is a tremendous speed improvement on the server side but not so impressive on the client side; the reason is that speed on the client depends on how fast you draw the pixels on the screen. More slides followed about JSR 248 - MSA (Mobile Services Architecture), JavaFX, and a few slides about applets (he mentioned that applets are a victim of several years of litigation, as well as Windows 98).

    Then it was the turn of Angela to show us a nice JavaFX demo, and Simon Ritter gave a demo with two robots running Sun SPOTs. Simon was moving forwards and backwards and the robots were doing the same. At the end it was time for the “Ask James” session, where everyone could ask James Gosling questions. Briefly, the questions asked were the following:

    • Q: How does the JCP work?
    • A: The JCP is an independent organization. Better ask the JCP people.
    • Q: Where is Swing going?
    • A: It’s strong, he doesn’t see it vanishing.
    • Q: What about Android?
    • A: He doesn’t have enough information to comment on Android yet. We need more data in order to have a clear idea about it.
    • Q: What about JavaFX on the web?
    • A: It is there in the form of Applets.
    • Q: Why there are so many web frameworks out there?
    • A: Because there are so many smart people out there. They think that each one of them can offer something unique. It would be nice to have less but again it’s good to innovate.
    • Q: Java 6 on the mac?
    • A: That’s Steve Jobs’ job. Apple has become an iPod company; that is their focus. Better ask Apple this question.
    • Q: Any ideas about Java 8, 9 or 10?
    • A: No
    • Q: Why use JavaFX instead of Flex?
    • A: Because JavaFX is integrated with all existing Java code and libraries out there. There is better performance and also you have all the Java 2D libraries you need.
    • Q: Java on multicore & Moore’s law?
    • A: Moore’s law was originally written for transistors, not clock cycles. Java is doing quite well in the multicore and it’s mainly the games industry that drives the computing today.

    Second talk I attended was Google’s Guice by Bob Lee. Guice is a framework that helps you do dependency injection using annotations, which are easier to maintain and understand than XML. It uses @Inject to do the injection (think of @Inject as the new “new”: instead of calling new you have the type injected). Bob also mentioned a few things about injection:

    • It is preferable to use @Inject Foo foo instead of using injector.getInstance(Foo.class) since the latter is using a service locator to locate the class.
    • It is better to prefer constructor injection rather than method injection since the developer might forget to call the method while (s)he will never forget to call the constructor.
    • You should use method injection in parent classes.
    • You can inject static methods.
    • Guice can invoke private methods. The @Inject annotation tells Guice that it’s ok to override accessibility restrictions.
    • There is a Guice plugin for Eclipse.
    • You can hide packages with package-private annotations, @Inject @MyPackage MyClass
    • It’s good to avoid binding using toInstance() because
      • It forces instantiation inside of modules.
      • Precludes construction injection.
      • Inhibits scoping.
    • Use the @Singleton annotation.
    • Use eager singleton for startup logic.

    Handling exceptions with Guice is pretty simple; all you need to do is pass the exception to a Binder.addError() call:

    try {
        ...
    } catch (Exception e) {
        binder.addError(e);
    }

    Another piece of advice is to prefer creating a Provider and using the getProvider() method call instead of getInstance(), and also to inject the provider instead of the injector, since if we inject a provider Guice can see what is injected and whether a dependency is missing. It’s also better to use Binder.getProvider() instead of injector.getProvider(), since with the former Guice fails at startup if something is missing.

    What happens when you want to create something manually but you have already provided the @Inject annotation? Guice has the solution in the @New annotation, which lets the developer create something manually. In a few words, Guice will treat it exactly as if we were using the new keyword.

    A few more tips I managed to write down:

    • Do not use Guice in unit testing, prefer mock objects instead.
    • There is a Guice Enterprise Edition (GEE) of guice
    • Prefer injection to JNDI lookup since with injection we have information about the type of the object we are injecting.
    • When using Guice avoid HTTPSession scope since there are concurrency and fail-over issues.
    • Let Guice convert values since it also does up front checking of the type of the value.
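    Guice aside, the preference for constructor injection over method injection can be shown in plain Java: with a constructor, the dependency simply cannot be forgotten. The service and processor names below are made up for illustration:

```java
// Hand-rolled sketch of constructor injection, no Guice required.
// BillingService and CreditCardProcessor are hypothetical names.
interface CreditCardProcessor {
    boolean charge(int amountInCents);
}

class FakeProcessor implements CreditCardProcessor {
    public boolean charge(int amountInCents) { return amountInCents > 0; }
}

class BillingService {
    private final CreditCardProcessor processor;

    // With Guice this constructor would carry @Inject; the point stands
    // either way: you cannot construct the service without its dependency.
    BillingService(CreditCardProcessor processor) {
        this.processor = processor;
    }

    boolean bill(int amountInCents) {
        return processor.charge(amountInCents);
    }
}

public class InjectionDemo {
    public static void main(String[] args) {
        BillingService service = new BillingService(new FakeProcessor());
        System.out.println(service.bill(100)); // prints true
    }
}
```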

    Next session I attended was Filthy Makeover by Chet Haase (co-author of the book “Filthy Rich Clients”). Chet’s talk was about how to create compelling and dynamic client applications using Swing. He went through some very nice tricks you can do with Swing: gradients (an interpolation between two colours), composites, animations and a few other techniques, and showed us an e-mail client that used all of them to become richer and more user-friendly. Some tips about the cool features he used:

    • Gradients
      • If you want to have lighting effects it’s a good idea to use radial gradient.
      • Cache gradient in images.
      • Use stretched images.
      • Use cyclic gradients.
    • Composites
      • Use translucency.
      • Graphics2D is the class that understands composites.
      • You can have nice effects if you increment/decrement the alpha value of an image.
    • GlassPane
      • It’s a component that sits on top of everything else in a JFrame.
      • Allows painting over the entire window area.
      • It’s not visible by default.
    • Blur
      • Use ConvolveOp to do blurring.
      • Perform a convolution from source to destination.
      • A box blur is a blur that is done on a kernel of equal pixel weight.
      • A Gaussian blur is one that is done in normal distribution of weight.
      • Use upscale for cheap blur effects.
      • It might be cheaper to use smaller/separate filters than one larger one.
    • Shadows
      • They simulate lighting.
      • They make GUI more realistic.
      • Realistic shadows require blur.
      • Use SwingLabs classes DropShadowBorder, DropShadowPanel, ShadowFactory.
    • Animation
      • It’s all about varying properties over time.
      • Use the Timing framework
        • It has a simple declaration model.
        • It’s an easy specification of timing behaviours.
        • To do fade out effects simply decrement the alpha value of an image. Start from alpha 1 and go to alpha 0 while repainting.
          (at this point Chet showed a very cool demo of an image with some moving cogs painting slowly when an exception occurs in the application)
    • Animated transitions
      • They can be used in order to show the user how (s)he got from one part of the application to the other.
      • Show short animations on when changing components.
        • Use fade in/out
        • Move/resize
      • Use the Animated Transitions framework.
        • It simplifies transition.
        • It figures out the component deltas and runs the default animation effects (fades, moves, resizes).
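    The box blur described above can be reproduced with the standard java.awt.image classes; the 8x8 test image is of course made up:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

// A 3x3 box blur with ConvolveOp, as described above: every pixel in the
// kernel carries equal weight (1/9).
public class BlurDemo {
    public static void main(String[] args) {
        float[] boxKernel = new float[9];
        java.util.Arrays.fill(boxKernel, 1f / 9f);
        ConvolveOp blur = new ConvolveOp(new Kernel(3, 3, boxKernel),
                                         ConvolveOp.EDGE_NO_OP, null);

        BufferedImage source = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
        source.setRGB(4, 4, 0xFFFFFF); // a single white pixel on black

        // Convolution from source to a fresh destination image.
        BufferedImage blurred = blur.filter(source, null);
        // The white pixel's energy is now spread over its 3x3 neighbourhood.
        System.out.println(Integer.toHexString(blurred.getRGB(4, 4)));
    }
}
```

    A Gaussian blur works the same way, only with the kernel weights following a normal distribution instead of being equal.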

    Next talk was IRIS – a RIA Swing Applet by Richard Bair. The talk was about how to mix applets and HTML, and about the update “N” which provides several new features regarding applets (and not only applets). The biggest are the consistent cross-browser plug-in and the fact that there are no more browser lockups or crashes. Prior to update “N” a Java applet ran in the same process as the browser, but now it runs in its own process. Furthermore, the Java 6 plug-in has been completely rewritten in order to provide the new features.

    A few more features coming soon are JNLP support, client side pack200 (which reduces jar size considerably: at first the plug-in asks for a pack200 file, and if it is not found it loads the normal jar file) and auto-detection and installation of the Java plug-in. It seems there will be a revival of applets since they provide advanced features that other client side technologies do not, such as security, multi-threading, full screen mode, 3D graphics and so on.

    Then the IRIS applet itself was demonstrated, which connects to flickr and loads up photos. The applet communicates with JavaScript: JavaScript calls the applet, the applet talks to flickr and then calls back into JavaScript to update the HTML. The flickr photos are displayed in a 3D view using JOGL.

    Again some tips when using applets (with or without JavaScript)

    • Make calls to JavaScript in a separate thread.
    • You can treat the applet element like any other element in the DOM tree using JavaScript.
    • You cannot set its z-order.
    • Use invokeLater() when doing drag and drop in order to avoid deadlocks.
    • Use DataFlavor to determine the data type of the dnd object.
    • You will probably need to create a custom dnd component

    Next presentation was Practical JRuby on Rails by Ola Bini and Charles Oliver Nutter. They went through a demo of the Ruby language and explained a few facts about it. They explained that Ruby is good because it has blocks, modules and meta-programming.

    Blocks are blocks of code that can be attached to any method invocations. By using blocks you avoid iteration since there is no need for repetition.

    Modules provide name spaces and allows behaviour to be mixed in. There are several modules such as the Enumerable module, the Comparable module etc. It seems to me that modules in Ruby are used like classes in Java, you define them and then you use them (like you import Java classes).

    Meta programming allows the use of dynamic classes, for example we can tell a string during run time to define a new method in itself (this sounds similar to cglib I guess).

    Ruby is using a malleable syntax which means that there are several ways to do things and you can also override most operators.

    Then the speakers showed us some slides about JRuby. JRuby is basically Ruby for the JVM. It is based on version 1.8.5 of Ruby and was started in 2001. JRuby supports native threads and Unicode, has better performance than Ruby, uses the Java garbage collector, and has different execution modes, interpreted and compiled.

    Rails is an MVC web framework. It is an agile framework since you can be up and running from day zero; it supports REST services, “convention over configuration”, AJAX and Web 2.0, and has changed how people view web development. In a few words it uses the strengths of the Ruby language (code generation, built-in testing, reasonable defaults etc).

    A Rails web application can be deployed to any server that supports Java (it’s just a WAR file -using Goldspike- that you copy paste to the server). It can use any Java library and integrate with existing infrastructure (like EJB, JMS, legacy systems).

    The presentation ended with a demo of a Rails application.

    Last talk was about the Future of Computing with James Gosling, Neal Gafter, Joshua Bloch and Martin Odersky. The session opened with the question of what the computer of the future will be. In a few words, it will literally be everywhere: anything you see that does something will be a computer.

    Question about where Java will be. Again, James Gosling said that it would be everywhere, any device (small or big) will have a JVM installed on it and be able to run a Java application.

    Will there be any problems implementing software on computers with multiple cores? Thinking with today’s data there will be, but computing will evolve to overcome these problems.

    How much future does the Java platform have? If the platform stagnates then we have no future. They will make sure that the platform will not stagnate.

    How can we evolve the platform without breaking existing stuff? Good analysis, good design and good testing. It’s the cycle of birth-grow-death.

    Is there a chance of making the current APIs optional? No. The APIs are there, but you only use them if you want to. There are people who think that if you are not in rt.jar then you don’t count, but the fact that J2EE is not in rt.jar is actually a victory.

    What about the software development process? Now it’s more community based; do you see it changing in the future? It won’t change. Software developers are citizens of the world: they can work from wherever they are for whatever company they want. A doctor or a lawyer cannot do this. Also, doctors and lawyers cannot learn their trade from the internet; developers can. The developer’s profession is not fractured by geography or national boundaries.

    What kind of new problems will computing bring? It will be harder and harder to test applications 100%.

    What language features are missing in order to make programming easier? In their opinion mutability should be minimized as much as possible. Java is the first language that gave immutability to the masses. But a key point in order to have immutable objects is to have a garbage collector.

    Categories: devoxx

    JavaPolis 2007 – The speakers and jug leaders dinner

    13 December 2007 Leave a comment

    After the second day finished, several speakers as well as jug leaders and jug members went for a dinner. Actually it wasn’t a proper dinner (I mean we weren’t sitting at a table having dinner) but more of a bar/club. The booze started to flow in big quantities, loads of food followed, and dessert as well. It wasn’t long until Paris got hammered, but he hadn’t realised it (it always hits you when you go out into the fresh air).

    We got to meet some very interesting people: speakers, other jug leaders and the man himself, the man behind the Java language, James Gosling (a huge thanks to Stephan -founder of BeJUG and JavaPolis- for introducing us to James Gosling). Of course we wouldn’t let this chance go by and we took a photo together 🙂

    Me (on the right), James and Paris.

    We got back to the hotel at around one o’clock and we were pretty wasted; that’s why I’m updating this blog two days after the events.

    Categories: devoxx

    JavaPolis 2007 – Day two

    12 December 2007 Leave a comment

    Second day at JavaPolis 2007; the first talk in the morning was about Flex, specifically Thinking in Flex with Bruce Eckel (when did he join Adobe, really?) and James Ward. In general, Flex was introduced because they wanted to improve the web and the user experience of interacting with it. As soon as the presentation started we had a problem with the internet connection and the speakers couldn’t get any demos running, so we skipped the demos and went right to the next slides.

    The basic idea behind Flex is to build web applications that don’t make the user wait and that make the interaction smoother, with a better UI experience. At this point the internet came back and we delved straight into the web demo consisting of form validation (very impressive, I have to admit). After that we saw an advanced application connecting to a database and reading/writing data from/to it. Since Flex was designed with the user interface in mind, both demos were impressive: cool graphics, nice transitions from one state to the other, smooth colours… I now understand what all the hype is about.

    Next was a desktop demo (as opposed to the previous web demos): an application that connected to eBay and interacted with it. Again I don’t have any bad comments here; it seems that Adobe did a tremendous job on the client, not only graphically but with regard to responsiveness as well. No surprise if you consider that Flex has native support for several facilities (events, for example). I am really thinking of trying Flex, even though I am a server-side developer myself.

    All these demos were built using the Flex Builder plug-in for Eclipse (unfortunately it’s not free, although there is a two-month trial version) and the free SDK. All SWF files produced can be freely distributed with no license restrictions.

    Before the break the speakers showed us two more applications built with Flex: Buzzword and Flow (a very addictive game).

    In the second part of the presentation the presenters went straight into explaining that Flex can interact with Java web services and the web using either the HTTP or the SOAP protocol. Then they went through some code and showed us an example of an MVC application and one more demo application; this one was a photo application, designed for the desktop this time. Near the end there was time for Q&A. Some of the information I managed to write down:

    • There are Flex plug-ins for UI testing.
    • The main difference between Flash and Flex is that the former is targeted at designers while the latter is targeted at developers.
    • There is a Flash Lite version for mobile devices, but they are still working on that.
    • The Flex SDK is free but not open source yet; we are to expect it to be open-sourced next year.
    • There is officially no Maven support, but some people have managed to get Maven working with Flex.

    The second talk was JavaFX in Action by Jim Weaver. I didn’t want to miss this one, since I had been in the Flex session and wanted to compare (or contrast) these two babies. So, JavaFX it was for my second session. Jim was very thorough in his presentation, although many people asked to see more of the code behind the scenes. The presentation started with some slides explaining the family of JavaFX products (Mobile, Desktop, Web -applets- and others) and that the core of JavaFX is JavaFX Script, which can be either interpreted or compiled. The interpreted version of JavaFX is very stable and reliable, while the compiled version has some issues that Sun is working on (the compiled version doesn’t seem to be fully working yet).

    Then we went through the Freebase demo, which is written exclusively in JavaFX and is basically a desktop application that connects to Wikipedia and reads data from it. In a few words, you can bring Wikipedia to your desktop. This application uses JSON to talk to the web.

    JavaFX uses layout widgets and binds the UI to a model. It can use triggers declaratively and uses sequences (arrays) as the main data structure. It does not support localisation yet, but they are working on a spec for it. There is also exception handling, much like Java’s. It supports three kinds of data: primitives, objects and sequences (arrays). It also has several features of the Java language, like the foreach loop, and in general the syntax is very familiar. It supports block expressions (code enclosed in curly brackets, with statements separated by semicolons; the value of the block is its last expression).
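
    As an illustration only, the sequence and block-expression features described above look roughly like this in JavaFX Script (a hedged sketch; the exact syntax differed a bit between the interpreted and the early compiled versions):

    ```
    // sequences (arrays) are the main data structure
    var nums = [1, 2, 3, 4, 5];

    // foreach-style iteration over a sequence, very Java-like
    for (n in nums) {
        java.lang.System.out.println(n);
    }

    // a block expression: statements separated by semicolons,
    // and the value of the block is its last expression
    var answer = {
        var x = 10;
        var y = 32;
        x + y
    };
    ```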

    So what is the best thing about JavaFX? It’s all about Java. At the end of the day, what you get as a compiled executable is essentially Java bytecode, so JavaFX can work with any of the existing Java libraries out there, on any JVM. If your computer supports Java then it also supports JavaFX. Think of JavaFX as an easier way to write Swing applications: you abstract away the Swing source code, write JavaFX Script instead, and compile it into bytecode. Nice one (although I have to admit it’s not as impressive as Flex yet; it needs more time to mature).

    The next talk was about Apache Ivy by Xavier Hanin. In a few words, Ivy is a dependency management tool. It was created in 2004 and is currently at version 2.0 beta (just released, in December 2007). It provides several features for a project, like recording dependencies, resolving them, reporting, publishing etc. It integrates tightly with Apache Ant and is also compatible with Maven 2 repositories. There are also several integrations for Ivy, and not only for Java. Some more things about Ivy:

    • There is an Ivy plug-in for Eclipse.
    • Ivy can be used with CruiseControl, but it needs customisation.
    • Ivy is not to be considered a light version of Maven: Maven is a build tool, while Ivy is a dependency management tool.
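
    To give an idea of what “recording the dependencies” looks like in practice, a project declares them in an ivy.xml file which Ivy’s Ant tasks then resolve against a repository (a minimal sketch; the organisation, module and revision values are made up for illustration):

    ```xml
    <!-- ivy.xml: module metadata plus the dependencies Ivy will resolve -->
    <ivy-module version="2.0">
        <info organisation="com.example" module="myapp"/>
        <dependencies>
            <!-- fetched from an Ivy or Maven 2 repository at resolve time -->
            <dependency org="commons-lang" name="commons-lang" rev="2.3"/>
        </dependencies>
    </ivy-module>
    ```

    From Ant, an `<ivy:resolve/>` task reads this file and downloads the artifacts, which is what makes Ivy a dependency manager rather than a full build tool.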

    The third presentation was Task-Focused Programming with Mylyn by Wayne Beaton. Mylyn is a task-oriented plug-in for Eclipse that reduces information overload, making multi-tasking easier. It was written to make it easier to read code (the presenter said that developers only spend 10% of their time actually writing code). There is one task list in the Mylyn plug-in, which manages all tasks in a single personalised view. It integrates well with web-based task repositories and also provides offline editing and access. By using Mylyn you don’t waste time looking through files you don’t need, since it monitors your interaction with the files and “remembers” what you are doing, creating a degree-of-interest model. All changes are automatically grouped by task, and commit messages are automatic.

    There is also the option to filter tasks and only show them on specific dates (very handy for TODO comments). Mylyn 2.0 is included when you download Eclipse Europa, as well as with the Eclipse IDE for Java/J2EE/RCP and Plug-in Developers (unfortunately it is not included with Eclipse for C/C++). The creator of Mylyn has a company called Tasktop, which has extended Mylyn to work with Windows-specific files.

    Categories: devoxx