
Devoxx – University Day 1

Back in Antwerp for one more Devoxx, an event second only to JavaOne. Five days of Java overdose; let's see what we have. The first talk I attended today was

Kick start JPA with Alex Snaps and Max Rydahl Andersen.

This was a talk I wanted to attend since we are using JPA at work. The talk started with Alex Snaps giving an overview of JPA and why we need an ORM framework. Traditional CRUD code using JDBC tends to be ugly and hard to maintain. JPA eliminates the need for JDBC (CRUD and querying), provides inheritance strategies (class hierarchy to single or multiple tables), and supports associations and compositions (lazy navigation and fetching strategies).

It is vendor independent and easy to configure using annotations (which you can override with XML), and JPA is available outside JEE containers. As of JPA 2.0 there is a dedicated JSR for it (JSR 317).

One of the goals of JPA is that it should be transparent, but, according to Alex, it’s not there yet and he doesn’t think it will ever be.

After this small introduction the speaker moved to explaining what an entity class is. In JPA

  • entity classes should not be final or have final methods
  • entity classes must have a no-argument constructor
  • collections in entity classes should be typed to interfaces
  • associations in entity classes aren't mapped for you
  • there must be an id field in the entity class

Entity classes support simple types:

  • primitive & wrapper classes
  • String
  • BigInteger & BigDecimal
  • Byte & Char arrays
  • Java & JDBC temporal types
  • Enumeration
  • Serializable types

In JPA we can have class hierarchies: inheritance, entity support and polymorphic associations. We can also map concrete and abstract classes using the @Entity or @MappedSuperclass annotations.

There are a few ways to map polymorphism in JPA. We can have one table per class hierarchy (using a discriminator value; this is a viable solution but the table can get very big), a joined subclass (one table per class, with the subclasses joined in from their own tables) and one table per concrete class (support for this one is optional).

Many-to-one association is supported by using the @ManyToOne annotation:

@Entity
public class Person
{
    @ManyToOne
    private Customer customer;
}

When you load the person, the customer is loaded as well.

Similarly one-to-one association is supported by using the @OneToOne annotation:

@Entity
public class Person
{
    @OneToOne
    private Address address;
}

There is a unique constraint here: only one person can have this address. In a one-to-one bi-directional association a person belongs to one address and that address belongs to one person.

A one-to-many uni-directional association is the same as the bi-directional association but without the mappedBy attribute. In this case, without an owning side with a cardinality of one, a join table is required.

Of course we can use generics for the mapping. If we do not use generics we will need to tell the provider which entity the collection maps to (via the targetEntity attribute).

In order to manage persistence we use javax.persistence.Persistence to create an EntityManagerFactory for a named persistence unit. With this factory we can create an EntityManager instance, which handles the persistence of the entities, and use Query to query them back from the database.

In order to set up the persistence unit we need to write a bit of XML (the persistence.xml file). We have to give the persistence provider all the database properties (driver etc. – the provider will use these properties for initialisation) and let JPA know which classes will be persisted.
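As a sketch, a minimal persistence.xml could look like the following. This assumes Hibernate as the provider and an in-memory H2 database; the unit name, entity class and property values are made up for illustration:

```xml
<!-- META-INF/persistence.xml - illustrative sketch only -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="demoUnit" transaction-type="RESOURCE_LOCAL">
    <!-- the class that will be persisted -->
    <class>com.example.Person</class>
    <properties>
      <!-- database properties the provider uses for initialisation -->
      <property name="hibernate.connection.driver_class" value="org.h2.Driver"/>
      <property name="hibernate.connection.url" value="jdbc:h2:mem:demo"/>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
    </properties>
  </persistence-unit>
</persistence>
```

The property names are Hibernate-specific; each provider defines its own set.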

The entity manager is the main object that takes care of the JPA stuff. It manages object identity and it manages CRUD operations.

The life cycle of an entity can have four states:

  • new (it’s new and not yet associated to a persistence context)
  • managed (is associated to a persistence context and the persistence context is active)
  • detached (has a persistent identity -i.e. it is associated to a persistence context- but this persistence context is not active)
  • removed (it is scheduled for removal from the database).

The persist method of the entity manager persists a new entity to the database. The remove method removes the entity from the database; if the entity is already scheduled for removal the operation is ignored. In all the above cases the operation might cascade to objects associated with the entity.

If an entity is managed and something changes in its state, its state will be automatically synchronised with the database. This is called flushing, and we can have automatic flushing or a manual commit.

An entity goes into the detached state when its persistence context is closed. This can happen when a) the transaction commits in a JEE environment (or, in JSE, when the developer manually manages the persistence context's life cycle), b) the entity is serialised, or c) an exception occurs.

An entity can also be merged using the entity manager’s merge method. When an entity is detached and a merge is applied to it then a new managed instance is returned, with the detached entity copied into it. If the entity is new then a new one is returned. Merge is also a cascading operation.

If we want optimistic locking with JPA we should annotate a field of our entity with the @Version annotation. The field annotated with @Version will be updated by the entity manager every time the entity's state is written to the database. The @Version field can be of type int, Integer, short, Short, long, Long or Timestamp. As a rule, do not modify this field yourself (unless you really know what you are doing).

JPA uses its own query language to query for an entity. This query API is used for named queries via the @NamedQuery annotation (in TopLink you can cache all the queries/prepared statements) as well as for dynamic queries, and it supports polymorphism and pagination among other features.

You can also do bulk operations with JPA using the Query API (caution! bulk operations bypass the entity manager, so in-memory managed entities are not affected).

How do we deal with object identity in JPA? Do we use the database identifier as part of the equals operation? There is a reminder in Object's equals method documentation that we should take into consideration: "Note that it is generally necessary to override the hashCode method whenever this method is overridden, so as to maintain the general contract for the hashCode method, which states that equal objects must have equal hash codes."

If we use the database identifier then we always need to have it assigned before we use the object: first persist the object, then flush, and only then use it (for example as part of a bigger collection).

Another solution is to use a business key, some sort of GUID/UUID. It is recommended not to override the equals or hashCode methods unless you really need to and know what you are doing. Overriding equals and hashCode is required, though, if your object is going to be used as a composite identifier.
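A minimal sketch of the business-key approach (the Person class and its guid field are hypothetical, not from the talk): identity is based on an immutable UUID assigned at construction, so it stays stable across the new, managed and detached states:

```java
import java.util.UUID;

// Hypothetical entity using a GUID business key for equals/hashCode
class Person {
    private Long id;            // database identifier, may be null until persisted
    private final String guid;  // business key, assigned once at construction
    private String name;        // mutable state, deliberately not part of identity

    Person(String name) {
        this.guid = UUID.randomUUID().toString();
        this.name = name;
    }

    String getGuid() { return guid; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        return guid.equals(((Person) o).guid);
    }

    @Override
    public int hashCode() {
        // equal objects must have equal hash codes, so hash the same field
        return guid.hashCode();
    }
}
```

Because the key never changes, a Person placed in a HashSet before persisting is still found after the database id is assigned.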

Then the talk moved to the listeners and callback methods of JPA. Listeners are called before callback methods on entity classes and are registered with the @EntityListeners annotation on the entity class. The order is preserved and the container starts at the top of the hierarchy. Each event (PrePersist, PostPersist etc.) is registered by adding the relevant annotation to a method with signature: void method(Object).

Callbacks are called after the listener classes and they are registered by using the same annotations to a method on the entity class. In case of a runtime exception the transaction will rollback.

In a JEE environment if we have a stateful session bean, the persistence context will be created when the stateful bean is created and will be closed when the stateful bean and all other stateful beans that inherit the persistence context are removed. If the stateful bean uses container managed transaction demarcation the persistence context will join the transaction.

In summary JPA makes dealing with RDBMS much simpler… once you understand how JPA works. It is available in JEE as well as in JSE, multiple vendors support it, it has a dedicated JSR and there is great tool support.

Time for Q&A. What are the differences between vendor implementations of JPA, and what is the speaker's personal preference? He said he prefers Hibernate, but he is biased since he's been using it for years. He's never used EclipseLink; he's used TopLink and saw that it sometimes doesn't implement the specification properly. His advice: whatever ORM framework you choose, make sure you know its flaws very well, especially with respect to the specification.

The second part of the talk, delivered by Max, was about two JPA tools we can use: the Dali tool and the Hibernate tools.

The Dali tool supports the definition, editing and deployment of JPA entities and makes mapping simple. The Hibernate tools (they are Hibernate-centric but can be used beyond Hibernate as well) offer a unique feature set (wizards, .hbm & .xml editors, JPA query prototyping etc.) to make writing JPA much simpler.

The goals of both tools are simplicity (mapping assistance & automatic generation), intuitiveness (use of existing modelling), compliance and extensibility.

At this point the speaker presented these tools to us for the rest of his speaking time.

Test driven development with Dave Nicolette.

This was mainly a 3-hour development of a test driven application from scratch. Several people went to the speaker’s computer and pair-programmed an application (using Eclipse and JUnit 4), from an initial test (they started with a user story -a simple statement about the user, what the action is, what the person is going to do with the software, and the expected output-) to completion. This development took most of the time of the talk. While the application was being developed we were talking about the different steps and approaches to the problems solved.

Key things I wrote down:

Test driven development means tests we write to drive the code. We keep these tests as we write more code, making sure new code doesn't break any of the existing code. These regression tests have lasting value throughout the life of the application.

TDD works better with small specifications where we can break the problem down to smaller bits.


In TDD there is a 3-step cycle: red (the test fails initially), green (we make the test pass with the simplest and quickest code) and refactor (we make the code better without breaking the existing tests).
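As an illustration of the cycle (a made-up Calculator example, not the application built in the session, and using plain assertions where the session used JUnit 4): the check below was written first and failed (red), then the simplest implementation made it pass (green), leaving the code safe to refactor:

```java
// Hypothetical red-green-refactor example with a hand-rolled check
class Calculator {

    // green: the simplest code that makes the failing test pass
    int add(int a, int b) {
        return a + b;
    }

    // red: this check was written before add() existed and could not even compile
    public static void main(String[] args) {
        Calculator calc = new Calculator();
        if (calc.add(2, 3) != 5) {
            throw new AssertionError("expected add(2, 3) to be 5");
        }
        System.out.println("green: test passes");
        // refactor: with the test in place, the implementation can now be
        // cleaned up without fear of breaking behaviour
    }
}
```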

Should we test getters/setters? If we write them manually we should; if we let the IDE generate them for us, we can leave those tests out. It's also good practice to write tests for mutators that guard the invariants of a class, for example a mutator that has to make sure that the balance of an account is never set to zero.
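A sketch of such a test (the Account class and its rule are hypothetical; here the invariant is that the balance must never go negative): the mutator enforces the invariant, and a test pins that behaviour down:

```java
// Hypothetical account whose withdraw() guards an invariant:
// the balance must never go negative
class Account {
    private int balance;

    void deposit(int amount) {
        balance += amount;
    }

    void withdraw(int amount) {
        if (amount > balance) {
            throw new IllegalStateException("balance must never go negative");
        }
        balance -= amount;
    }

    int getBalance() { return balance; }
}
```

A test would exercise both the happy path and the rejected mutation (expecting the IllegalStateException).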

It's not wrong to have more test code than production code; sometimes you might even need ten times more test code than production code.

VisualVM – New extensible monitoring platform (by Kirk Pepperdine)

This was a short talk by Kirk Pepperdine (he actually replaced the original speaker from Sun) about how to use VisualVM. It was more of a demonstration than a talk.

The current VisualVM version is 1.0.1. This includes

  • visual tool
  • combines several tools
    • command line
    • jconsole
    • profiler
  • targeted for production and development
  • bundled with Sun's JDK 1.6.0_07

For monitoring and discovering local JVMs, use jps. For remote monitoring use jstatd which defaults to the following:

  • com.sun.management.jmxremote.port=3333
  • com.sun.management.jmxremote.ssl=false
  • com.sun.management.jmxremote.authenticate=false
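As a side note (not covered in this form in the talk; the policy file name is arbitrary): jstatd refuses to start without a security policy, so a commonly used, deliberately permissive setup on the remote host looks like this:

```shell
# create a policy file granting the tools.jar code all permissions (JDK 6 layout)
cat > jstatd.policy <<'EOF'
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
EOF

# start the daemon with that policy
jstatd -J-Djava.security.policy=jstatd.policy
```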

For explicit jmx agent connection use: -J -Djconsole.plugin.path=path to the jconsole plugin

It does core dumps only on Linux and Solaris.

You start VisualVM with: visualvm --jdkhome $JAVA_HOME --userdir <path to dir>

When we use the visual vm to monitor an application we have to make sure that we have turned clustering off.

Sometimes when we get an OutOfMemoryError although the heap is nearly empty, it's most likely a problem with the permanent generation.

Objects that stay in the new (eden) space are easier to discard/GC than objects that stay in the old heap space.

Making full use of hibernate tools (by Max Andersen, JBoss Tools Lead)

This again was similar to the talk in the morning, but with more in-depth demonstration of how to use hibernate tools. In short

  • Hibernate tools support Hibernate 3 and EJB3/JPA.
  • supports code completion in .hbm and .xml (class, properties, types etc).
  • Usually code generation is not as sophisticated or smart as manual code… but sometimes you have to bite the bullet.

It can run export templates (.ftl templates, for example) to generate entity objects.

It provides custom JDBC binding (when we read from the database, how do we interpret the metadata in there).

It provides reverse engineering:

  • use reveng.xml for basic control over:
    • included/excluded tables/columns
    • naming
    • meta-attributes (<meta attribute..
  • use programmatic reverse engineering for complete control (extend DelegatingReverseEngineeringStrategy)
  • you can also implement customizable reverse engineering

Concluding statement: use the right tool for the right job. Even if you have a tool that does (almost) everything for you, you still have to think.

Q&A: are these tools available for NetBeans? The short answer is no, but you can use the Ant tasks from NetBeans. The source code is there, but someone needs to integrate it with NetBeans.

  1. 8 December 2008 at 7:00 pm

    I find Alex Snaps’ response to your question about comparing Hibernate with TopLink or EclipseLink a little funny.

    >He’s never used Eclipse link, he’s used TopLink and he saw that it
    >doesn’t implement the specification properly sometimes.

    TopLink Essentials is the JPA 1.0 reference implementation and EclipseLink is the JPA 2.0 reference implementation. We’ve had developers migrating from Hibernate to EclipseLink discover their code didn’t run because EclipseLink was very spec compliant. Perhaps Alex misunderstood our lack of support for Hibernate-isms as non-spec compliance? 😉

    –Shaun

  2. 9 December 2008 at 10:11 am

    Hello Shaun, actually I didn’t ask the question, it was someone else from the audience.

    Alex said that he never used EclipseLink so he cannot comment on that. When he referred to non-compliance he did so with regards to TopLink.

  3. Max
    10 December 2008 at 9:56 am

    Shaun,

    Just to be clear, we stated that the spec has areas that can result in different behavior in corner cases, but both Hibernate and TopLink Essentials pass the TCK and are thus both spec compliant. And if you move between providers don't just do it blindly – make sure you test it 😉

  4. 10 December 2008 at 3:08 pm

    Hi,

    Nice writeup, thanks for giving us a taste of what’s happening at devoxx !

    On the subject of VisualVM I think there’s a typo:

    “For explicit jmx agent connection use: -J -Djconsole.plugin.path=path to the jconsole plugin”

    I believe the jconsole.plugin.path is here to specify the path to the legacy jconsole plugin that you would like to import into VisualVM. You would also need to have downloaded & installed the JConsole plugin container from the VisualVM update center – see
    https://visualvm.dev.java.net/plugins.html

    Cheers,

    — daniel

  5. 11 December 2008 at 8:41 am

    > we stated that the spec has areas that can result in different behavior in corner cases but both Hibernate and Toplink Essentials passes the TCK thus are both spec compliant.

    Max, yes, you are right, you clarified that at the end.

