
Devoxx – Day 4

11 December 2008

The day started with a second keynote in room 8. Stephan Janssen talked for a few minutes and asked everyone who hadn’t voted yet to vote on the whiteboards for Java 7 features. Then Joshua Bloch started his talk by showing some optical illusions, saying that, like optical illusions, things in Java are sometimes not what they seem to be. He then explained what is new in his Effective Java book.

What’s new in Effective Java

  • chapter 5: generics
  • chapter 6: enums and annotations
  • one or more changes covering all the other Java 5 language features
  • the threads chapter renamed to concurrency

Generics are invariant: a List<String> is not a subtype of List<Object>. This is good for compile-time safety, but inflexible. That’s why they added wildcards. Wildcards are easy to use if you remember the mnemonic PECS – Producer Extends, Consumer Super:

  • For a T producer, use Foo<? extends T>
  • For a T consumer, use Foo<? super T>

This only applies to input parameters. Don’t use wildcards for return types. Of course there are rare exceptions to this rule.
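
As a quick illustration of PECS (my own sketch, not an example from the talk), here is a copy method modelled on Collections.copy:

```java
import java.util.*;

public class Pecs {
    // src produces T values, so it is declared ? extends T;
    // dst consumes T values, so it is declared ? super T (the PECS rule)
    static <T> void copy(List<? super T> dst, List<? extends T> src) {
        for (T t : src) {
            dst.add(t);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        List<Number> nums = new ArrayList<Number>();
        copy(nums, ints); // works: the Integer list produces, the Number list consumes
        System.out.println(nums); // [1, 2, 3]
    }
}
```

With plain List<T> parameters the call above would not compile, because List<Integer> is not a List<Number>.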

For the rest of his talk Joshua went through some examples from his Effective Java book and explained the gotchas of the seemingly easy-to-understand code.

Next up was Mark Reinhold, who talked about the modular Java platform. He started by explaining why a “Hello World” programme in Python starts faster than a “Hello World” programme in Java. The answer is simple: Java needs to load 332 classes (it needs to resolve all references, do the verification etc.) in order to run “Hello World”. Modularising the JDK will force the separate components to identify themselves and reduce the number of classes that need to be loaded, thus reducing loading and run time.

Then he talked about the JSR 294 (and also mentioned that JSR 277 is not dead, it’s just on hold). The rest of the time was spent on talking about the project jigsaw.

The requirements of a platform module system

  • integrate with jvm
  • integrate with the language
  • integrate with native packaging
  • support multi-module packages
  • support “friend” modules

Sun and other parties will contribute new features to the JDK.

Big features from Sun

  • JSR 294 + jigsaw
  • JSR 292 (VM support for dynamic languages)
  • JSR 203  (more new IO APIs)
  • JSR TBD: small language changes.
  • forward-port 6u10 features
  • java kernel, quickstarter, new plug-in, etc
  • safe re-throw
  • null-dereference expressions (he thinks this one is already in)

Small features from Sun

  • SCTP (Stream Control Transmission Protocol)
  • Sockets Direct Protocol
  • upgrade class-loader architecture
  • method to close a UrlClassLoader
  • unicode 5.0 support
  • XRender pipeline for Java 2D
  • swing updates
  • JXLayer, DatePicker, CSS styling – maybe

Fast features from Sun

  • Yet more HotSpot run-time compiler enhancements
  • G1 garbage collector
  • compressed-pointer 64-bit VM
  • MVM-lite – maybe (MVM – Multiple Virtual Machines)

Features from others:

  • JSR 308: annotations on Java types (allows you to put annotations in more places than today) (see photo for example)
    • Prof. Michael Ernst, Mahmood Ali
  • concurrency and collections updates
    • Doug Lea, Joshua Bloch et al
    • Fork/Join framework
    • Phasers – generalized barriers
    • LinkedTransferQueue – generalized queue
    • ConcurrentReferenceHashMap
    • Fences – fine-grained read/write ordering
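
As a rough sketch of what the Fork/Join framework looks like (based on the API as it eventually shipped in Java 7, not on code shown at the talk), here is a recursive parallel array sum:

```java
import java.util.concurrent.*;

// A divide-and-conquer sum: split the range, fork one half, compute the other
public class SumTask extends RecursiveTask<Long> {
    private final long[] a;
    private final int lo, hi;

    SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    protected Long compute() {
        if (hi - lo <= 1000) {            // small enough: sum sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(a, lo, mid);
        left.fork();                      // run the left half asynchronously
        long right = new SumTask(a, mid, hi).compute();
        return right + left.join();       // wait for the left half and combine
    }

    public static void main(String[] args) {
        long[] a = new long[10000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(a, 0, a.length));
        System.out.println(sum); // 49995000
    }
}
```

The work-stealing pool keeps idle worker threads busy with forked subtasks, which is what makes this scale on multicore machines.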

Features not in 7 (at least some of them)

  • closures
  • other language features
    • reified generic types
    • operator overloading
    • BigDecimal syntax
    • First-class properties
  • JSR 295: beans binding

JDK 7 will be released in early 2010.

Towards a Dynamic VM by Brian Goetz and Alex Buckley

The talk started with Brian Goetz explaining what a virtual machine is; a software implementation of a specific computer architecture. This computer architecture could be a real hardware architecture or a fictitious one.

There are several system virtual machines that emulate a complete computer system (VMware, VirtualBox, Virtual PC, Parallels).

Virtual machines isolate the hosted application from the host system (a virtual machine appears as an ordinary process in the host system)

Virtual machines isolate the host system from the hosted application (a virtual machine acts as an intermediary between hosted application and host system)

Virtual machines provide a higher level of abstraction

  • sensible layer for portability across underlying platforms
  • abstracts away low-level architectural considerations
    • size of register set, hardware word size
  • 1990s buzzword ANDF

Nowadays virtual machines win as compilation targets

  • Today it is silly for a compiler to target actual hardware
    • much more effective to target a vm
    • writing a native compiler is a lot more work
  • languages need runtime support
    • C runtime is tiny and portable (and wimpy)
    • more sophisticated language runtimes need
      • memory management
      • security
      • reflection
      • tools

If a virtual machine doesn’t provide the features you need, you have to either write them yourself or do without them. If the virtual machine does provide them, you will use them, which is less work for you and makes your programming language better (e.g. garbage collection makes programming easier and better).

Targeting an existing virtual machine also lets you reuse libraries and tools: debuggers, IDEs, profilers, management tools etc.

Virtual machine-based facilities become common across languages (Java code can call JRuby code; Java objects and Jython objects are garbage-collected together).

The best reason to target a virtual machine as a compilation target is the HotSpot JIT compiler. A compiler can generate bytecode and feed it to HotSpot. The dynamic compiler can do optimisations that are hard for a standard compiler, since it has access to loads of information that is not available statically. A dynamic compiler can use adaptive and speculative techniques (compile optimistically, deoptimise when proven necessary). Targeting a VM allows compilers to generate “dumb” code and let the dynamic compiler optimise it (the VM will optimise it better at runtime anyway).

There are loads of VMs out there (Java VM, .NET CLR, Smalltalk, Perl, Python, YARV, Valgrind, Lua, Dalvik, Flash, Zend etc.). There are so many because each one was designed to solve a specific problem.

You have to make loads of choices when you design a VM: the instruction set; where you store data (stacks, like Java does, or registers); what data types you care about (“is everything an object?”); the instruction format; what kinds of instructions to include (primitives, implementation flexibility); the object model (class-based like Java or object-based like JavaScript); strong or weak typing; whether you trust your compilers (there is loads of bytecode that the JVM will accept but that would never be produced by the JDK); how errors are handled; and whether you can call native code.

JVM architecture

  • stack-based program representation and execution
  • core instructions
  • data types: objects, arrays, eight primitive types
  • object model: single inheritance with interfaces
  • dynamic linking
    • untrusted code from the web motivates static typechecking (at load time)
    • symbolic resolution is done dynamically (base classes are not fragile in the JVM)

We see some common patterns in a JVM: objects, signed integers, single inheritance, static typechecking etc. These features can actually form a VM for many programming languages, many of them unknown to most people (Phobos, Piccola, SALSA, ObjectScript, FScript, Anvil, Smalltalk etc). As early as 1997 the JVM specification stated that the JVM does not know anything about the Java programming language, only about the bytecode.

Some features are easy to implement on a universal VM (like checked exceptions in Java) but some others are very difficult to implement efficiently (open classes in Ruby, alternate numeric towers à la Scheme).

JSR 292 is often called the “invokedynamic” JSR because it originally proposed a specific bytecode for method invocation, but the scope has widened since then. The work that currently goes into JSR 292 includes the invokedynamic bytecode (allows the language runtime to work hand in hand with the JVM on method selection), method handles (many languages have constructs like closures; classes are too heavy as a container for a single block of code) and interface injection (add new methods and types to existing classes).

Virtual method invocation in Java

  • the only dynamism in method invocation is in the receiver
    • different implementations of size() for ArrayList vs LinkedList
    • this is called single dispatch. Java’s method selection algorithm doesn’t (and can’t) consider the runtime types of the arguments given
    • invokevirtual Foo.bar: (int)int
  • the JVM looks for bar: (int)int in the class of the receiver (the receiver is referenced from the stack)
  • if the receiver’s class doesn’t have this method, the JVM recurses up to its superclass…
  • repeated recursive method lookup makes invocation slow
    • fortunately, this can often be heavily optimized
  • devirtualize monomorphic methods
    • if the VM can prove there is only one target method body, then invocation turns into a single jump
    • it can then inline the method call, avoiding invocation overhead
    • bigger basic blocks enable further optimizations
  • inline caching
    • figure out the most likely receiver type for a call site and cache it
    • optimizes for the most likely case(s)
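
A small example of my own to make single dispatch concrete: the method body is chosen from the runtime type of the receiver, but the overload is chosen from the static type of the argument.

```java
public class Dispatch {
    static class Shape {
        String describe(Shape other) { return "shape/shape"; }
    }

    static class Circle extends Shape {
        String describe(Shape other)  { return "circle/shape"; }   // override
        String describe(Circle other) { return "circle/circle"; }  // overload
    }

    public static void main(String[] args) {
        Shape s = new Circle();
        Shape arg = new Circle();
        // Receiver dispatch is dynamic, so Circle's override runs...
        // ...but the overload is picked from the STATIC type of arg (Shape)
        System.out.println(s.describe(arg)); // circle/shape
    }
}
```

Getting "circle/circle" here would require double dispatch on the argument's runtime type, which the JVM's invokevirtual does not do.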

 

But compiling dynamic languages directly to the JVM is tricky. Many dynamic languages have no receiver type and no static argument types, and maybe the return type isn’t even boolean (in the code below); maybe it’s the type of x or y.

function max(x, y)
{
   if x > y then x else y;
}

Dynamically typed method invocation

  • Dynamic is a magic type
  • there is no such type in the JVM today
  • but if the JVM had Dynamic, invokeinterface would be almost flexible enough

How can a language runtime manage dynamic invocation?

  • creative solutions have been proposed
    • could define an interface for each possible method signature
      • complex, fragile, expensive
    • could use reflection for everything
      • use the “inline caching” trick to cache method objects for specific combinations of argument types
      • but heavyweight and slow if you use it for every method call
  • it’s easy to conclude “the JVM isn’t a match for dynamic languages”

A little help goes a long way

  • it turns out that static type checking is closer to the surface than it first appears
  • the big need: first-class language-specific method resolution
    • so the language can identify the call target
    • but then get out of the VM’s way
  • this is the rationale behind invokedynamic

 
The first time the JVM sees an invokedynamic instruction it calls a bootstrap method, which does all the work. The bootstrap method chooses the ultimate method to be called, and the VM associates that method with the invokedynamic instruction. The next time the JVM sees the instruction it jumps to the previously chosen method immediately.
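
Java source can’t emit an invokedynamic instruction itself, but the shape of a bootstrap method can be sketched with the method-handle API as it eventually shipped in Java 7 (the names here are my own, not from the talk):

```java
import java.lang.invoke.*;

public class IndyBootstrap {
    public static String greet(String name) {
        return "hello " + name;
    }

    // A bootstrap method: called once per invokedynamic site, it picks the
    // ultimate target method and wraps it in a CallSite the JVM can cache
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name, MethodType type) throws Exception {
        MethodHandle target = lookup.findStatic(IndyBootstrap.class, name, type);
        return new ConstantCallSite(target);
    }

    public static void main(String[] args) throws Throwable {
        // Simulate what the JVM does the first time it hits the site
        CallSite site = bootstrap(MethodHandles.lookup(), "greet",
                                  MethodType.methodType(String.class, String.class));
        // Subsequent "calls" go straight through the cached target
        System.out.println(site.getTarget().invoke("Devoxx"));
    }
}
```

A ConstantCallSite never changes its target, which is what lets the JIT treat the call as an ordinary direct invocation afterwards.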

Putting it all together

  • JVM method invocation is still statically typed
  • the ultimate method invoked is arbitrary
  • depends on the language rules
  • could even have a different name than in the instructions

Method handles are composable

  • an adapter method handle takes another method handle and executes code before and after invoking it
  • endless applications!
    • coercing the types of individual arguments
      • java.lang.String -> org.jruby.RubyString (different encoding)
    • boxing all arguments into an array
    • pre-allocating stack frames
    • preparing thread-specific context
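
A sketch of an adapter handle using the API that grew out of this work (Java 7’s java.lang.invoke; the example itself is mine): filterArguments runs one handle on an argument before the target handle sees it.

```java
import java.lang.invoke.*;

public class Adapters {
    public static int length(String s) {
        return s.length();
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle length = lookup.findStatic(Adapters.class, "length",
                MethodType.methodType(int.class, String.class));
        MethodHandle trim = lookup.findVirtual(String.class, "trim",
                MethodType.methodType(String.class));

        // Adapter: trim argument 0 before passing it on to length
        MethodHandle trimmedLength = MethodHandles.filterArguments(length, 0, trim);

        System.out.println((int) trimmedLength.invokeExact("  hi  ")); // 2
    }
}
```

The same composition style (filters, binders, type adapters) is what language runtimes use to coerce arguments without writing per-call-site glue classes.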

Interface injection

  • dynamically typed programs look like self-modifying code
  • generally, self-modifying code is dangerous and hard to optimize
  • idea: don’t restructure classes, just relabel them
  • interface injection: the ability to modify old classes just enough for them to implement new interfaces
    • superinterfaces are cheap for a JVM object
    • invokeinterface is fast these days
  • if an interface-dependent operation is about to fail, call a static injector method to bind an interface to the object and provide MethodHandles for the interface’s methods
    • one chance only for the injector to say yes!

Don’t do it! – Common Performance Antipatterns by Alois Reitbauer

In this session the room was literally full. There were people sitting in the corridor and on the floor near the speaker. Unfortunately I didn’t find a place to sit and was standing (at some point I sat down), so I didn’t take any notes. Things I remember from the session:

Don’t do premature optimization, even if it’s very tempting. Never do it. Don’t take care of performance at an early stage; only do it at a later stage.

Good programmers write well-performing code. There might be functional bugs, which is acceptable, but there shouldn’t be many performance problems.

Performance management is difficult because it’s difficult to find performance problems. Performance is a moving target: it works today but it might not work tomorrow. Even testing does not prevent you from having performance problems.

How do we test this stuff?? by Frank Cohen

I met Frank at the jug leaders and speakers dinner on Tuesday and I really wanted to see his talk. 

Frank talked about his PushToTest tool, an open source tool that helps you test web services and web applications. With PushToTest you can surface issues quickly, create automated functional tests, monitor SLA compliance, and work in an integrated environment.

The reasons behind having an integrated environment are simple

  • organizations require test and operational management
    • Ajax commercial testing tools are not keeping up
    • where is the test tool for GWT, YUI, Dojo, Appcelerator?
  • organisations benefit from integrating test and operational management
    • repurpose tests among developers, QA, ops
  • makes build + test-first possible
    • very agile, very rapid, very inexpensive

Then a demo followed, with some screenshots and a walkthrough of some source code.

Preventing bugs with pluggable type checking for Java by Mahmood Ali

Some notes I wrote down:

benefits of type qualifiers

  • improve documentation
  • find bugs in programmes
  • guarantee the absence of errors

Checkers:

  • @NonNull: null dereferences
  • @Interned: incorrect equality tests
  • @ReadOnly: incorrect mutation and side-effects
  • many other simple checkers
    • security, encryption, access control
    • format/encoding, SQL
  • checkers are designed as compiler plugins and use familiar error messages
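
A tiny sketch of what qualifier annotations look like in source. The @NonNull here is a simplified stand-in I’ve defined locally for illustration; the real JSR 308 checkers ship their own qualifier annotations and enforce them as compiler plugins:

```java
import java.lang.annotation.*;

public class NullnessDemo {
    // Simplified local stand-in for a checker-provided qualifier
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.PARAMETER, ElementType.METHOD})
    @interface NonNull {}

    @NonNull static String greet(@NonNull String name) {
        // a nullness checker proves name is never null here,
        // so no defensive null check is needed
        return "hello " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("Devoxx"));
        // greet(null); // a pluggable checker would reject this call at compile time
    }
}
```

The point of the qualifier is that the guarantee is checked where the call is made, not discovered as a NullPointerException at runtime.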

 
A nullness and mutation demo followed

Checkers are featureful:

  • full type systems, assignment, overriding
  • polymorphic (Java generics)
  • flow sensitive type qualifier inference

Checkers are effective

  • scales to > 200,000 LOC
  • each checker found errors in each code base it ran on

Checkers are usable:

  • tool support: javac, Ant, Eclipse, NetBeans
  • not too verbose
    • @NonNull: 1 annotation per 75 lines
    • @Interned: 124 annotations in 220 KLOC revealed 11 errors
  • fewer annotations in new code
  • inference tools: nullness, mutability

Another demo followed at this point.

Summary

  • pluggable type checkers
    • featureful, effective and usable
  • programmers can
    • find and prevent bugs
    • obtain guarantees that a program is free of certain errors
    • create custom qualifiers and type checkers

 
If you want to learn more have a look here: http://pag.csail.mit.edu/jsr308


Devoxx – Day 3

10 December 2008

First day of the conference today; the keynote started at around 9.45. The whole room was packed and a guy called roxorloops did some beatboxing to entertain the attendees. Quite impressive, I’d say.

Next, Stephan Janssen welcomed everyone and announced a few facts and updates about Devoxx. This is the first edition of Devoxx (after it was renamed from JavaPolis), and it is sold out with 3,200 attendees (from 35 countries all over the world), 160 speakers and more rooms (six rooms this time, as opposed to five last year). There are 40 partners on the exhibition floor and 40 affiliated Java User Groups, and 400 students had the opportunity to come in for free during the first two days of the university.

Stephan said a BIG thank you to the Devoxx programme committee and to the Devoxx administration team. As a side note he asked us to be gentle with the venue and take care of it: do not litter it, do not abuse it, and keep the place clean and quiet. Also be gentle with the wireless network and do not download big files (today, for some reason, the network is very fast compared with the previous years and the first two days. Well done). Stephan also announced that this afternoon they will be serving free beer (hopefully not as strong as the one I had at the JUG leaders and speakers dinner last night, 10% alcohol!) and free fries.

He also said that people complain that the Belgian JUG doesn’t organise anything apart from Devoxx. Stephan explained that he didn’t have enough time, since the preparations for Devoxx start 8 to 9 months before December. But he has quit his current job and has more time to dedicate now. As a result BeJUG will organise bi-weekly evening sessions for a maximum of 150 people throughout Belgium. That is 18 meetings in total (no meetings during December and the Summer holidays). For more info visit the BeJUG site.

After Stephan stepped down, Danny Coward from Sun Microsystems stepped up and talked about JavaFX. He also announced that Sun is dedicated to always shipping final production software only.

Next he talked about the 10 things we need to know about JavaFX

1) JavaFX is here; it was released on the 4th of December. Sun released a preview of the SDK in July this year. The JavaFX SDK includes the desktop runtime and the emulator, which allows you to deploy on the desktop, in the browser or on mobile phones. NetBeans 6.5 has support for JavaFX. He also mentioned the JavaFX Production Suite, a collection of tools that lets the JavaFX developer work with graphics to create RIAs. JavaFX also ships with 75+ sample applications.

2) JavaFX defines a cool new language. Why do we need a new language? Languages are evolving rapidly. People at Sun learn from their experience and they put in the best features of the languages they have worked on so far. JavaFX Script was purpose-built with only RIAs in mind, nothing else. It’s declarative, has a Java-like syntax, and supports data binding and event triggers.

3) JavaFX supports beautiful graphics. It takes advantage of the Java layer for graphics. JavaFX provides support for graphics acceleration, the JavaFX scene graph, animations and lighting. At this point Richard Bair presented a demo of a video puzzle: a playing video became a jigsaw and they had to put the pieces back together. After this small video they explained the relevant source code.

4) JavaFX has a rich API set. They have created a simple-to-use JavaFX Script API. The API also supports the scene graph, media, web services (RESTful) and… any Java API.

5) Great developer tools. The NetBeans plugin for JavaFX includes first-class projects, JavaFX Script editing, code completion, compile on save, debugging, graphics preview, integrated documentation and deployment to desktop/browser/mobile.

6) JavaFX integrates with graphics design tools. The JavaFX Production Suite includes tools for the developer/designer workflow: export a design from the Adobe tools, then import and integrate it into JavaFX. Then they showed a demo of the JavaFX Production Suite.

7) JavaFX runs on multiple devices. One more demo followed, of a JavaFX application running on a mobile phone.

8) It is built on Java. A great advantage, since you can rely on 13 years of JVM implementation, and on the robustness and scalability of the underlying technology.

9) Encode once, play anywhere media. Developers have been asking for better media support for years. JavaFX supports the native media frameworks (Mac native and Windows native). They added a new cross-platform format (FXM), which means that media in this format will play on any JavaFX-enabled device. Another demo by Joshua Marinacci followed, the Fox Box, which was basically a movie website with several movies playing at the same time; Joshua could play with video properties, and the video could be dragged outside the browser to watch it as a standalone application.

10) JavaFX deploys itself. Anywhere there is a JRE, the JavaFX runtime will deploy. The JRE is installed on 9 out of 10 new PCs and sees 30-50 million downloads per month. The full JavaFX Mobile release will be in March.

At this point there was another break with the beatboxing guy again doing some amazing sounds.

Next were Bart Donn, Christophe De Marlie and Robin Mulkers from IBM. They talked about RFID @ Devoxx 2008 (this is, I think, the same technology used at JavaOne last year).

RFID is a new project installed at Devoxx. But why run a project during Devoxx? Instead of giving out goodies they decided to spend this money to start a project that benefits everyone. The partners of this project are IBM, Intermec and SkillTeam.

Then they showed the following video, which gives an introduction to the RFID concept. Nice ad.

The rest of the talk was spent talking about the RFID technology and how IBM has developed it.

From Concurrent to Parallel (by Brian Goetz)

This was actually the same talk Brian Goetz gave at JavaOne in May. I won’t go into details since I have written about that in this post.

Effective pairing: the good the bad and the ugly (by Dave Nicolette)

This was an interactive session again, where people paired in front of the audience and played out different pair-programming scenarios. The session talked about pair programming, its problems and how we can overcome them.

We started with a pair-programming scenario where one person impersonated a senior developer and the other a junior developer. The problem demonstrated was that the senior developer didn’t want to let the junior do anything. The senior always had the upper hand and was constantly picking on the junior guy.

Teams are most effective when everyone can learn about the technologies and the problem. If the senior guy just holds the keyboard and does his own stuff, it’s not a good thing. Junior developers learn by typing and practising. The junior should do the typing and the senior the driving. But the senior sometimes thinks that the project is going a little slow and wants to take over. The slowing of the project is a normal thing if you want the junior developers to learn: you lose a bit of project time now but gain it back later in the project. The tip is to have the less experienced person on the keyboard.

Another scenario is the soloist: people who want to do things themselves because they think they know the problem, and thus the solution, and can work better on their own. The result is that everyone else in the team has difficulty learning the problem they’re dealing with. The solution is for everyone to know the problem, how the system works and how to deal with it. This is the bus problem: if the lead developer is hit by a bus, how many people can take over the project? Everyone in the team should try to have equal knowledge of the problem and the system in use.

Another scenario: one person doesn’t follow the other while they’re discussing ideas. This can be because one has far more knowledge than the other, or because one keeps changing his mind about ideas and software patterns. Problems arise because one of them might feel stupid. Another problem is that someone who always changes his mind might step away from the problem the customer needs solved. People need to learn how to work with different types of personalities. In pair programming each one should keep the other in touch with the original problem, and the two can cancel out each other’s problems and/or expose each other’s abilities. Also, we should put emphasis on the simplest design (this is agile development): sometimes, when we have many ideas and change our mind all the time, we make the solution more complex than necessary.

Fourth scenario: one of the persons in the pair has additional responsibilities, which can make pairing difficult because he can be interrupted all the time. The team should be dedicated to the project; it should not be interrupted because one of its members is assigned to other things. This usually happens when the testers are also the business analysts. There is another form of interruption too: when someone outside the team comes in and starts talking about things unrelated to the project. This disrupts the pair while they try to work.

Pairing is really a kind of disciplined art. It’s not just sitting there talking with your friends. It really is work.

Fifth scenario: physical working conditions in the team room. Pairing is usually done in an agile setting where the team is co-located in the same room; pairing where the team members are in different locations is not a good idea. The way the office is laid out also matters. For instance, desks and chairs might be laid out correctly for pairing but there might be only one monitor. The pair then loses time because the second person cannot follow the code. A solution is to ask the manager to buy more monitors (monitors are less expensive than people). The logical conclusion is that the working environment should be set up to make it easy for people to pair.

Scenario six: how to maintain the system and fix bugs when the original application was not developed in an agile manner. If the application was developed using agile methods there are probably test suites. The pair can check out the application and its tests (the first step to fixing the bug) and then try to reproduce the bug. If the bar is green it means that someone forgot to write a test, or that someone put code into production without testing it. They can use the same techniques the development team used. But if the bug is in a legacy application with no test cases, the approach is different: you have to send in someone who knows the system and can fix the bug. This is not really a pairing scenario, but it is good to mention in the pairing context.

Scenario seven: Fearful Freddie, someone who’s afraid to change the code or can’t be bothered (too much of a hassle). This attitude comes from old legacy systems with no test cases, where if you changed something you most certainly had broken something as well. Now things have changed: even if you break something you can always revert it using the version control system, so you don’t have to be afraid to change things. It’s better to make small changes often rather than one big change – like bill payments, where you don’t pay everything at once but in small installments. Don’t let the complexity of the code build up over time, because then you have problems maintaining and fixing the code and you shorten the application’s life; eventually the complexity forces you to implement a new application, all because people didn’t want to modify the code and were afraid to touch it.

Eighth scenario: the disengaged. One person does the work while the other is disengaged. The engaged person tries to get the other interested in the code they are working on. In this situation you have to remove the option from the other guy: just put the keyboard in front of him and ask him to do the job. What if the partners decide to do a major refactoring that will take 20 minutes and only one can use the keyboard? You don’t both have to use the keyboard; you just have to put your minds to work. Only one can type at a time, but both of them can think. What if they want to do the refactoring but have different ideas of how to do it? A good idea is to ask the other team members for their opinions. You might disrupt them a bit, but the benefits outweigh the cost.

Ninth scenario: the stubborn pair, where both want to follow their own ideas and won’t change their minds. An aspect of agile development is self-organisation. Every time there is a little dispute in the team you cannot run to the manager to solve it, because pretty soon the manager is going to take control, and that might not be what you want. If you take it to the manager, one person wins and the other loses (or both lose). The best thing to do when you have a problem like that is to take a break and clear your mind. A second solution is to change partners. Don’t let things become personal.

The Siamese twins scenario. Part of pair programming is that you change pairs (the original authors of pair programming called it promiscuous pairing). Different pairs work at different speeds: when one pair is finished the other pair may still be working, so they have to take advantage of the spare time. There is the Pomodoro technique: a pair works together for a specific period of time and then stops, takes a break, and starts another time period with different partners.

The Siamese twins scenario kicks in when two people are really engaged in the story they’re working on and don’t want to separate. They work well together and don’t want to switch. People should be able to switch; if they can’t, they are probably stuck and need a fresh pair of eyes to look into the problem. Sometimes people know they have a problem, they know the project is falling behind, but they don’t want to give up. The manager should not ask “how much longer will it take to solve the problem?” but assign new people to look into it.

The ping-pong scenario. One person writes the unit test and the other writes the code to make the test pass. The person who writes the test then leaves the other guy to write the code. They are not talking about design; one just pushes the burden of design onto the other person. This approach encourages solo programming, but you can do it for a while to make the programming experience a little bit different.

Q&A:

What’s the best way to learn pair programming? If you have never done it before, get mentors: outside people who have done it before.

Should people pair all the time? No, there are some tasks that don’t really benefit from it. Sometimes the best way for the team to solve a problem is to have one person go away, think about it and figure out how to do it. In an 8-hour day the pair should pair for around five to five and a half hours.

Behaviour driven development in Java with easyb (by John Ferguson Smart)

This was a talk about the easyb framework and behaviour driven development.

The talk started by explaining that TDD is not about tests, but about writing good software. In the same manner, behaviour-driven development is not about behaviour; it’s about delivering software that helps the end user. TDD in general tends to produce better code: the application becomes more flexible, better designed and more maintainable.

BDD is a recent evolution of TDD. The idea is to help determine what to test. In order to test use cases it uses words like “should” to describe the desired behaviour of a class, e.g. “should verify that the client can repay before approving a loan”, “should transfer money from account a to account b”. As with TDD, you should focus on requirements, not on implementation.

The framework to do BDD with is easyb, an open source testing framework for Java (but written in Groovy). It makes tests clearer and easier to write, makes them self-documenting, and enhances communication between the development team and the end user. There is another BDD test framework for Java called JBehave, but in the speaker’s personal opinion it’s cumbersome to use. easyb is based on Groovy but has a Java-like syntax; it’s quick to write and you have full access to Java classes and APIs.

easyb in action: you test requirements by writing easyb stories, which:

  • use a narrative approach
  • describe a precise requirement
  • can be understood by a stakeholder
  • usually made up of a set of scenarios
  • use an easy to understand structure

Let’s look at an example user story: opening a bank account. “As a customer I want to open a bank account so that I can put my money into a safe place”. We come up with a list of tasks: open account, make initial deposit etc. Let’s concentrate on the initial deposit requirement:

Make initial deposit:

  • given a newly created account
  • when a deposit is made
  • then the account balance should be equal to money deposited.
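The given/when/then steps translate almost mechanically into test code. Here is a plain-Java sketch of this scenario (the Account class is hypothetical, and plain assertions stand in for easyb’s shouldBe):

```java
// Hypothetical Account class, just enough for the scenario
class Account {
    private long balance = 0;   // a newly created account starts empty

    void deposit(long amount) {
        balance += amount;
    }

    long getBalance() {
        return balance;
    }
}

class MakeInitialDepositScenario {
    public static void main(String[] args) {
        // given a newly created account
        Account account = new Account();

        // when a deposit is made
        account.deposit(100);

        // then the account balance should equal the money deposited
        if (account.getBalance() != 100) {
            throw new AssertionError("balance should equal the deposit");
        }
        System.out.println("scenario passed");
    }
}
```

In easyb the same three steps would be written as given/when/then blocks in a Groovy story file, readable by a non-programmer stakeholder.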

You implement the scenario in a test case written in Groovy, which can use all Java APIs.

If we had to compare easyb to JUnit:

  • more boilerplate code required in JUnit
  • not very self-explanatory
  • the intention is less clear.

With easyb you can have multiple post and pre conditions.

In easyb you have shouldBe syntax instead of assert. Variations of the shouldBe syntax include shouldBeEqualTo, shouldNotBe, shouldHave etc. Also there is another way to verify outcome, the ensure syntax, which is much like Java assert.

Fixtures in easyb: you can use before and before_each (similar to @BeforeClass and @Before in JUnit). They are very useful for setting up databases and test servers. You can also use after and after_each (similar to @AfterClass and @After).

Fixtures are good at

  • keeping infrastructure code out of the test cases.
  • making test cases more readable and understandable.

As for easyb plugins, only one is available so far, the DbUnit one, but more are coming for Grails and Excel.

easyb produces test specifications in a user-friendly format and flags pending (unimplemented) stories. It also provides readable error messages: when tests fail, easyb tells you why in a more readable manner than JUnit.

As for IDE support for easyb, there are three options: IntelliJ, Eclipse and NetBeans, but only IntelliJ has very good support for Groovy.

Some upcoming easyb features

  • html reports
  • grails plugin
  • CI integration
  • Easiness – a fitnesse style web application (stake-holders create stories in normal text)

The talk closed with an easyb demo and stepping through the source code of the test cases.

What’s new in Spring Framework 3.0 (by Arjen Poutsma and Alef Arendsen)

New features in the upcoming 3.0 release and also some that already exist in 2.5 release.

@Controller for Spring MVC.

@RequestMapping methods.

@RequestMapping("/vets")
public List<Vet> vets() {
    return clinic.getVets();
}

The team is constantly simplifying: the lines of code for the sample PetClinic application dropped significantly from Spring 2.0 to Spring 2.5.

Spring Integration is now at version 1.0, released last week at SpringOne.

@PathVariable

@RequestMapping("/pets/{petId}")
public Visit visit(@PathVariable long petId) {
    ..
}

New Views with new MIME types:

  • application/xml use MarshallingView in 3.0M2/SWS 1.5
  • application/atom+xml use AtomFeedView 3.0M1
  • application/rss+xml use RssFeedView 3.0M1
  • application/json use JsonView Spring-JS

ShallowEtagHeaderFilter

  • introduced in Spring 3.0M1
  • creates ETag header based on MD5 of rendered view
  • saves bandwidth only
  • Deep ETag support comes in M2 (through @RequestHeader)
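The idea behind the filter can be sketched with nothing but the JDK: hash the rendered body, compare it to the client’s If-None-Match header, and answer 304 on a match (the names here are illustrative, not Spring’s internals):

```java
import java.security.MessageDigest;

class EtagSketch {
    // MD5 of the rendered view body, formatted as a quoted ETag value
    static String etagFor(byte[] body) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder("\"");
        for (byte b : md5.digest(body)) {
            sb.append(String.format("%02x", b));
        }
        return sb.append('"').toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] rendered = "<html>vets</html>".getBytes("UTF-8");
        String etag = etagFor(rendered);

        // the client echoes the ETag back in If-None-Match; a match
        // means we can answer 304 and skip resending the body
        String ifNoneMatch = etag;
        System.out.println(ifNoneMatch.equals(etag)
                ? "304 Not Modified" : "200 OK");
    }
}
```

Note why this saves bandwidth only: the view is still fully rendered on the server before the hash comparison happens.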

At this point they presented a demo with URI support and ATOM feed by using Spring MVC.

Introducing expressions: Spring 3.0 will include full support for expressions.

Spring 3.0 will require Java 5. It will support the Portlet 2.0 specification. And, depending on the specs being finalised on time, it will support Java EE 6: Servlet 3.0, JSF 2.0, JAX-RS and JPA 2.0. It will also support Web Beans annotations.

Spring 3.0 will deprecate/remove several things:

  • traditional Spring MVC controller hierarchy
  • Commons Attributes support
  • Traditional TopLink support
  • Traditional JUnit 3.8 class hierarchy.

but it will still be

  • 95% backwards compatible with regards to APIs
  • 99% backwards compatible in the programming model.

Spring 3.0M1 released last week.
Spring 3.0 Milestones January/February 2009
Spring 3.0 Release Candidates March/April 2009

Categories: devoxx Tags:

Devoxx – University Day 1

8 December 2008 5 comments

Back in Antwerp for yet another Devoxx event, second only to J1. Five days of Java overdose; let’s see what we have. Today the first talk I attended was

Kick start JPA with Alex Snaps and Max Rydahl Andersen.

This was a talk I wanted to attend since we are using JPA at work. The talk started with Alex Snaps giving an overview of JPA and why we need an ORM framework. Traditional CRUD code using JDBC tends to be ugly and hard to maintain. JPA eliminates the need for JDBC (CRUD and querying), provides inheritance strategies (mapping a class hierarchy to a single table or to multiple tables), and handles associations and compositions (lazy navigation and fetching strategies).

It is vendor independent and easy to configure using annotations (which you can override with XML), and JPA is available outside JEE containers. As of JPA 2.0 there is a dedicated JSR for it (JSR 317).

One of the goals of JPA is that it should be transparent, but, according to Alex, it’s not there yet and he doesn’t think it will ever be.

After this small introduction the speaker moved on to explain what an entity class is. In JPA:

  • entity classes should not be final or have final methods
  • entity classes must have a no-argument constructor
  • collections in entity classes should be typed to interfaces
  • associations in entity classes aren’t mapped for you
  • there must be an id field in the entity class

Entity classes support simple types:

  • primitive & wrapper classes
  • String
  • BigInteger & BigDecimal
  • Byte & Char arrays
  • Java & JDBC temporal types
  • Enumeration
  • Serialisable types

In JPA we can have class hierarchies with inheritance, entity support and polymorphic associations, and we can also map concrete and abstract classes by using the @Entity or @MappedSuperclass annotations.

There are a few ways to implement polymorphism in JPA. We can have one table per class hierarchy (using a discriminator column; this is a viable solution but the table can get very big), we can have a joined subclass strategy (one table per class, with subclass tables joined to the parent’s) and one table per concrete class (optional for implementations).
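As a sketch of the first strategy, assuming hypothetical Person/Customer entities:

```java
import javax.persistence.*;

// Single table per class hierarchy: all subclasses share one table,
// with rows told apart by a discriminator column
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "PERSON_TYPE")
abstract class Person {
    @Id @GeneratedValue
    private Long id;
}

@Entity
@DiscriminatorValue("CUSTOMER")
class Customer extends Person {
    private String accountNumber;
}
```

Switching the strategy to InheritanceType.JOINED gives the joined-subclass mapping, and TABLE_PER_CLASS the table-per-concrete-class one.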

Many-to-one association is supported by using the @ManyToOne annotation:

@Entity
public class Person
{
    @ManyToOne
    private Customer customer;
}

when you load the person, the customer is loaded as well.

Similarly one-to-one association is supported by using the @OneToOne annotation:

@Entity
public class Person
{
    @OneToOne
    private Address address;
}

There is a unique constraint here: only one person can have this address. In a one-to-one bi-directional association both a person belongs to one address and an address belongs to one person.

A one-to-many uni-directional association is the same as the bi-directional association but without the mappedBy attribute. In this case, without an owning side with cardinality of one, a join table is required.
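A sketch of that case, with hypothetical Customer and Invoice entities:

```java
import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
class Customer {
    @Id @GeneratedValue
    private Long id;

    // uni-directional one-to-many: no mappedBy on the collection, so
    // the provider needs a join table (e.g. CUSTOMER_INVOICE) to record
    // which Invoice rows belong to which Customer
    @OneToMany
    private List<Invoice> invoices = new ArrayList<Invoice>();
}

@Entity
class Invoice {
    @Id @GeneratedValue
    private Long id;
}
```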

Of course we can use generics for the mapping. If we do not use generics we will need to tell the container what entity it should be mapped to.

In order to manage persistence we need to use javax.persistence.Persistence and create an EntityManagerFactory based on a named persistence unit. With this entity manager factory instance we can create an EntityManager instance, which handles the persistence of the entities, and use Query to query them back from the database.
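A minimal bootstrap sketch, assuming a persistence unit named "demo" declared in META-INF/persistence.xml and a mapped Person entity:

```java
import javax.persistence.*;
import java.util.List;

class PersistenceBootstrap {
    public static void main(String[] args) {
        // the name must match a persistence unit in persistence.xml
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("demo");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Person person = new Person();
        em.persist(person);               // new -> managed
        em.getTransaction().commit();     // changes are flushed here

        // query the entities back from the database
        List<Person> all = em.createQuery("select p from Person p")
                             .getResultList();

        em.close();
        emf.close();
    }
}
```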

In order to set up the persistence unit we need to write a bit of XML (the persistence.xml file). We have to give the persistence provider all the database properties (driver etc; the provider uses these properties for initialisation) and let JPA know the classes that will be persisted.

The entity manager is the main object that takes care of the JPA stuff. It manages object identity and it manages CRUD operations.

The life cycle of an entity can have four states:

  • new (it’s new and not yet associated to a persistence context)
  • managed (is associated to a persistence context and the persistence context is active)
  • detached (has a persistent identity -i.e. it is associated to a persistence context- but this persistence context is not active)
  • removed (it is removed from the database).

The persist method of the entity manager persists a new entity to the database. The remove method removes the entity from the database. If the entity is already scheduled for removal the operation is ignored. In all the above cases the operation might be cascaded if there are objects associated with the entity.

If an entity is managed and something changes in its state, its state will automatically be synchronised with the database. This is called flushing, and we can have automatic flushing or a manual commit.

When an entity’s persistence context is closed, the entity goes into the detached state. This can happen when a) the transaction commits in a JEE environment, or the developer manually manages the persistence context’s life cycle in JSE, b) the entity is serialised, or c) an exception occurs.

An entity can also be merged using the entity manager’s merge method. When an entity is detached and a merge is applied to it then a new managed instance is returned, with the detached entity copied into it. If the entity is new then a new one is returned. Merge is also a cascading operation.

If we want to have optimistic locking with JPA we should annotate our entity with the @Version annotation. The field annotated with @Version is updated by the entity manager every time the entity’s state is written to the database. The @Version field can be of type int, Integer, short, Short, long, Long or Timestamp. As a rule, do not modify this field yourself (unless you really know what you are doing).
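A sketch with a hypothetical Account entity:

```java
import javax.persistence.*;

@Entity
class Account {
    @Id @GeneratedValue
    private Long id;

    private long balance;

    // bumped by the entity manager on every write; a commit with a
    // stale value fails, which is how optimistic locking detects that
    // another transaction got there first. Don't touch it yourself.
    @Version
    private Long version;
}
```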

JPA uses its own query language to query for entities. The Query API is used for named queries via the @NamedQuery annotation (in TopLink you can cache all the queries/prepared statements), for dynamic queries, and it supports polymorphism and pagination among other features.

You can also do bulk operations with JPA by using the Query API (caution! bulk operations will not affect the entity manager).

How do we deal with object identity in JPA? Do we use the database identifier as part of equals and hashCode? There is a reminder from Object’s equals method that we should take into consideration: “Note that it is generally necessary to override the hashCode method whenever this method is overridden, so as to maintain the general contract for the hashCode method, which states that equal objects must have equal hash codes.”

If we use a database identifier then we always need to have it assigned before we use the object. First persist the object, then flush it and then use the object (for example as part of a bigger collection).

Another solution is to use a business key, some sort of GUID/UUID. It is recommended not to override the equals or hashCode methods, unless you really need to and know what you are doing. Yet, overriding equals and hashCode is okay if your object is going to be used as a composite identifier.

Then the talk moved to the listeners and callback methods of JPA. Listeners are called before callback methods on entity classes and they are registered by using the @EntityListeners annotation on the entity class. The order is preserved and the container starts at the top of the hierarchy. Each event (PrePersist, PostPersist etc) is registered by adding the relevant annotation to a method with signature: void method(Object).
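A sketch of the registration, with a hypothetical audit listener:

```java
import javax.persistence.*;

// one void method per life-cycle event, taking the entity as Object
class AuditListener {
    @PrePersist
    void beforeInsert(Object entity) {
        // e.g. stamp a creation date on the entity here
    }
}

@Entity
@EntityListeners(AuditListener.class)
class Person {
    @Id @GeneratedValue
    private Long id;
}
```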

Callbacks are called after the listener classes and they are registered by using the same annotations to a method on the entity class. In case of a runtime exception the transaction will rollback.

In a JEE environment, if we have a stateful session bean, the persistence context will be created when the stateful bean is created and will be closed when the stateful bean and all other stateful beans that inherit the persistence context are removed. If the stateful bean uses container-managed transaction demarcation, the persistence context will join the transaction.

In summary JPA makes dealing with RDBMS much simpler… once you understand how JPA works. It is available in JEE as well as in JSE, multiple vendors support it, it has a dedicated JSR and there is great tools support.

Time for Q&A. What are the differences between the vendor implementations of JPA, and what is the speaker’s personal preference? He said he prefers Hibernate, but he is biased since he’s been using it for years. He’s never used EclipseLink; he’s used TopLink and saw that it sometimes doesn’t implement the specification properly. As advice: whatever ORM framework you choose, make sure you know its flaws very well, especially where it deviates from the specification.

The second part of the talk, delivered by Max, was about two JPA tools we can use: the Dali tools and Hibernate Tools.

The Dali tools support the definition, editing and deployment of JPA entities and make mapping simple. Hibernate Tools (Hibernate-centric, but usable for other entities as well) offer a unique feature set (wizards, .hbm & .xml editors, JPA query prototyping etc) to make writing JPA much simpler.

The goals of both tools are simplicity (mapping assistance & automatic generation), intuitiveness (using existing modelling), compliance and extensibility.

At this point the speaker presented these tools to us for the rest of his speaking time.

Test driven development with Dave Nicolette.

This was mainly a 3-hour development of a test-driven application from scratch. Several people went to the speaker’s computer and pair-programmed an application (using Eclipse and JUnit 4), from an initial test (they started with a user story: a simple statement about who the user is, what the action is, what the person is going to do with the software, and the expected output) to completion. This development took most of the talk’s time. While the application was being developed we discussed the different steps and approaches to the problems solved.

Key things I wrote down:

Test-driven development means tests we write to drive the code. We keep these tests as we write more code and we make sure that our new code doesn’t break anything in the existing code. These regression tests have lasting value throughout the application.

TDD works better with small specifications where we can break the problem down to smaller bits.


In TDD there is a 3-step cycle: red (the test fails initially), green (we make the test pass with the simplest and quickest code) and refactor (we make the code better without breaking the existing tests).

Should we test getters/setters? If we write them manually we should; otherwise, if we let the IDE generate them for us, we can leave the tests out. Also, it’s good practice to write tests for mutators that mutate invariants of a class, for example a field that has to make sure the balance of an account is never set to zero.

It’s not wrong to have more test code than production code, sometimes you might need to have even ten times more test code than production code.

VisualVM – New extensible monitoring platform (by Kirk Pepperdine)

This was a short talk by Kirk Pepperdine (who actually replaced the original speaker from Sun) about how to use VisualVM. It was more of a demonstration than a talk.

The current VisualVM version is 1.0.1. This includes

  • visual tool
  • combines several tools
    • command line
    • jconsole
    • profiler
  • targeted for production and development
  • bundled with Sun’s JDK 1.6_07

For monitoring and discovering local JVMs, use jps. For remote monitoring use jstatd which defaults to the following:

  • com.sun.management.jmxremote.port=3333
  • com.sun.management.jmxremote.ssl = false
  • com.sun.management.jmxremote.authenticate = false

For explicit jmx agent connection use: -J -Djconsole.plugin.path=path to the jconsole plugin

It does core dumps only on Linux and Solaris.

You start VisualVM with: visualvm --jdkhome $JAVA_HOME --userdir <path to dir>

When we use the visual vm to monitor an application we have to make sure that we have turned clustering off.

Sometimes when we get an OutOfMemoryError and the heap is empty it’s most likely a problem with the perm generation.

Objects that stay in the new (eden) space are easier to discard/GC than objects that stay in the old heap space.

Making full use of hibernate tools (by Max Andersen, JBoss Tools Lead)

This again was similar to the talk in the morning, but with a more in-depth demonstration of how to use Hibernate Tools. In short:

  • Hibernate tools support Hibernate 3 and EJB3/JPA.
  • supports code completion in .hbm and .xml (class, properties, types etc).
  • Usually code generation is not as sophisticated or smart as manual code… but sometimes you have to bite the bullet.

It can run exporters based on templates (like .ftl templates) to generate entity objects.

It provides custom JDBC binding (when we read stuff from the db how do we understand the meta-data in there).

It provides reverse engineering:

  • use reveng.xml for basic control over:
    • included/excluded tables/columns
    • naming
    • meta-attributes (<meta attribute..
  • use programmatic reverse engineering for complete control (extend DelegatingReverseEngineeringStrategy)
  • you can also implement customizable reverse engineering

Concluding statement: use the right tool for the right job. Even if you have a tool that does (almost) everything for you, you still have to think.

Q&A: are these tools available for NetBeans? The short answer is no, but you can use the Ant tasks from NetBeans. The source code is there, but someone needs to integrate it with NetBeans.

Categories: devoxx Tags: