Archive for the ‘Coherence’ Category
The Live Object Pattern – Coming to a SIG Near You
In spite of this blog being dormant for over a year, things have been busier than ever over in Coherence land! We just released version 3.7.1 which is packed with new features. Some of the highlights:
- REST support in the proxy. In addition to Java, C# and C++, you can now access the grid from any language that can make REST (HTTP) calls.
- POF enhancements, including annotations and the ability to eliminate Java key classes (for pure C# and C++ applications)
- Query Explain Plan. If you’ve ever had a production application missing an index (something you won’t notice in development and may not notice during testing), you’ll be interested in this feature. It analyzes queries (filter-based or CohQL) and indicates where indexes are used and where they are not; see the sketch below.
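If you’re curious what this looks like in code, here’s a minimal sketch using the QueryRecorder aggregator that ships with 3.7.1 (the cache name "orders" and the filter are illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.aggregator.QueryRecorder;
import com.tangosol.util.filter.EqualsFilter;

public class ExplainPlanExample {
    public static void main(String[] args) {
        // Cache name and filter are illustrative.
        NamedCache cache = CacheFactory.getCache("orders");
        Filter filter = new EqualsFilter("getZip", "12345");

        // Run the query through the QueryRecorder aggregator to get an
        // explain plan instead of the query results.
        Object plan = cache.aggregate(filter,
                new QueryRecorder(QueryRecorder.RecordType.EXPLAIN));
        System.out.println(plan);

        CacheFactory.shutdown();
    }
}

Passing RecordType.TRACE instead of EXPLAIN will actually execute the query and record how it behaved.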
We’ve been adding screencasts to the Coherence YouTube channel going over these features – in most cases the screencasts are delivered by the architect/developer responsible for the feature.
We also just wrapped up JavaOne and Oracle Open World a few weeks ago. I had the privilege of co-presenting the Live Object Pattern with Brian Oliver. The description and PDF can be downloaded from the JavaOne site.
If you didn’t get to see the talk, you’re in luck if you’re in New York or London. Noah Arliss, who worked with Brian on the Live Objects concept and the abstract of the JavaOne talk, will be presenting on this topic at the New York SIG on October 28th and the London SIG on November 4th.
All of the SIGs this fall (including the Bay Area SIG on November 3rd) will cover the new features in Coherence 3.7.1. Come out and join us!
Oracle Enterprise Pack for Eclipse now supports Coherence
The latest release of Oracle Enterprise Pack for Eclipse (OEPE) now includes a Coherence Facet. This makes it convenient to quickly start up a Coherence project and launch nodes right in the IDE. Recently I took it for a test drive and made some notes to help users of Eclipse and Coherence get started.
You have the option of downloading Eclipse with the OEPE plugins pre-installed, but I already had a copy of Eclipse Helios 3.6, so I went for the plugin install. The install is straightforward, similar to any other Eclipse plugin. These instructions are lifted straight from the doc:
- Select Help > Install New Software.
- Click Add to add a new update site.
- In the Add Repository dialog, enter the location as http://download.oracle.com/otn_software/oepe/helios, and then click OK.
- Select Oracle Enterprise Pack for Eclipse, verify that all of the subcomponents are selected, and then click Next.
- Confirm information presented in the Install Details, and then click Finish.
Once the install is complete and you’ve restarted Eclipse, the next step is to install Coherence as a User Library. I’ve got the latest 3.6 bits from our source code repository, so I’ll install that as a library. Note that the plugin also supports Coherence 3.5.2.
Now, let’s create a new Java project. As part of the project creation, add Coherence as a project library.
After the project is created, we’ll have to enable Facets to use the Coherence plugin. Bring up the project properties window and search for “Facets” in the search field.
Once Facets are enabled, select Oracle Coherence.
Upon selection, a link indicating that further configuration is required will appear. Click on the link. Select the "Oracle Coherence 3.6" library. Note how it provides the option to generate configuration files. Let’s leave all of the checkboxes selected.
Now we are ready to start a cache server. Select File | Run Configurations to bring up the Run Configurations dialog. First select “Oracle Coherence” under the list of run configurations. Next, select the “New” button on the upper left portion of the dialog to create a new run configuration.
Under the “Main” tab, enter com.tangosol.net.DefaultCacheServer as the main class. Of course you are free to create configurations with your own classes; however this example will focus on starting up a cache server.
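As an aside, the same server can also be started programmatically, which is handy for in-IDE experiments and tests. A minimal sketch, assuming the DefaultCacheServer.startDaemon() helper:

import com.tangosol.net.DefaultCacheServer;

public class EmbeddedCacheServer {
    public static void main(String[] args) {
        // Start the cache server services on a background daemon thread;
        // equivalent in spirit to running DefaultCacheServer as the main class.
        DefaultCacheServer.startDaemon();

        // ... application/test code can now use the cluster ...
    }
}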
Note the presence of a “Coherence” tab. This tab allows for operational configuration (items typically found in tangosol-coherence-override.xml) such as the cache configuration file name, multicast address configuration, management/JMX, and so on. Here I decided to leave all of the defaults as is.
After clicking on “Run”, here’s what I get:
We can see that the node started up and formed a cluster, but there are no services listed. This is because the OEPE plugin generated a cache configuration file that defaults to all caches being local. Next, let’s examine the cache configuration file (located under src) and add a distributed/partitioned cache to the configuration.
One of the nice features the plugin provides is pre-configured auto complete for Coherence configuration files via the DTD.
Here’s the cache configuration file I used:
<?xml version="1.0"?>

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>partitioned</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>partitioned</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
With the modified cache configuration, we now see the partitioned cache service start up.
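As a sanity check, a quick snippet like this (the cache name is illustrative; any name works thanks to the "*" mapping) should report the partitioned service type:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class VerifyPartitioned {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("test");
        cache.put("hello", "world");

        // Should print "DistributedCache" for a partitioned cache.
        System.out.println(cache.getCacheService().getInfo().getServiceType());

        CacheFactory.shutdown();
    }
}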
I can see the Coherence plugin for OEPE being quite useful for Coherence developers on Eclipse not only for quickly starting up a Coherence project (since config files are generated) but also for enabling configuration validation out of the box.
Partitions, Backing Maps, and Services…Oh My!
Recently I was asked the following question:
What is the relationship between a partition, cache, and a backing map?
To answer this, first we should go through some definitions:
Cache: An object that maps keys to values. In Coherence, caches are referred to by name (thus the interface NamedCache.) Caches are also typically clustered, i.e. they can be accessed and modified from any member of a Coherence cluster.
Partition: A unit of transfer for a partitioned/distributed cache. (In Coherence terminology, partitioned and distributed are used interchangeably, with preference towards the former.)
Backing Map: Data structure used by a storage node to hold contents of a cache. For partitioned caches, this data is in binary (serialized) form.
I’ll add one more term for completeness:
Service: Set of threads dedicated to handling requests; a cache service handles requests such as put, get, etc. The DistributedCache service is present on all members of the cluster that read/write/store cache data, even if the node is storage disabled.
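In API terms, the first and last of these concepts surface directly. A minimal sketch (the cache name is illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.CacheService;
import com.tangosol.net.NamedCache;

public class Definitions {
    public static void main(String[] args) {
        // A named, clustered cache...
        NamedCache cache = CacheFactory.getCache("example");
        cache.put("key", "value");

        // ...managed by a cache service (a set of threads handling requests).
        CacheService service = cache.getCacheService();
        System.out.println(service.getInfo().getServiceName());
    }
}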
Now let’s put these concepts together to see how a clustered partitioned cache works.
First we’ll start with a backing map. It is a straightforward data structure, usually an instance of LocalCache.
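In fact, LocalCache can be used directly as a plain map outside of Coherence; a quick sketch to show it really is just a data structure (in an actual partitioned cache, the backing map would hold serialized Binary keys and values):

import com.tangosol.net.cache.LocalCache;
import java.util.Map;

public class BackingMapSketch {
    public static void main(String[] args) {
        // LocalCache is an ordinary Map implementation (with size-limiting
        // and expiry support); Coherence uses it on storage nodes to hold
        // cache contents.
        Map map = new LocalCache();
        map.put("key", "value");
        System.out.println(map.get("key"));
    }
}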
The contents of this map are managed by a partitioned cache service. In a typical thread dump, this would be the “DistributedCache” thread:
"DistributedCache" daemon prio=5 tid=101a91800 nid=0x118445000 in Object.wait() [118444000] |
A single cache service can manage multiple backing maps. This is the default behavior if multiple services are not specified in the cache configuration file. (Multiple partitioned cache services are beyond the scope of this post; this topic will be addressed in a future posting.)
So how do partitions come into play? Each backing map is logically split into partitions.
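The partition a key belongs to is derived from the key’s serialized form. Here is a rough sketch of the idea only (the real logic lives in Coherence’s DefaultKeyPartitioningStrategy, and the key value is illustrative):

import com.tangosol.util.Binary;
import com.tangosol.util.ExternalizableHelper;

public class PartitionSketch {
    public static void main(String[] args) {
        int partitionCount = 257; // the default partition count

        // Serialize the key, then map the hash of its binary form
        // onto a partition id.
        Binary binKey = ExternalizableHelper.toBinary("customer-42");
        int partition = (binKey.hashCode() & Integer.MAX_VALUE) % partitionCount;

        System.out.println("Key maps to partition " + partition);
    }
}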
This splitting is used for transferring data between storage enabled members. Here are log file snippets of data transfer between two nodes:
Member 1:
(thread=DistributedCache, member=1): 3> Transferring 128 out of 257 primary partitions to member 2 requesting 128
Member 2:
(thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions
As members are added to the cluster, the partitions are split up amongst the members. Note that this includes backup copies of each partition.
As of Coherence 3.5, it is possible to configure a backing map so that each partition is physically stored in its own backing map instance. This configuration may be desirable for backing maps that are stored off heap (i.e. NIO direct memory or disk), as it makes data transfer between members more efficient.
Florida JUGs Next Week
I will be in central Florida next week presenting at the following user groups:
An Introduction to Data Grids for Database developers (GatorJUG June 23rd)
This talk will introduce the concept of data grids to developers who have experience with Java EE and relational databases such as Oracle. The programming model will be explored (including caching patterns and similarities to NoSQL), as well as the performance and scalability improvements a data grid offers.
The second talk, at OJUG, covers taking a distributed system from development into a working production environment, a challenge that many developers take for granted. It will explore these challenges, especially scenarios that are not typically seen in a development setting.
I’m especially excited about the OJUG talk as I think it will cover many topics of interest to developers and ops folks alike. It is a set of general guidelines that came about from seeing dozens of Coherence applications in production. We will cover such things as:
- What to look for when using vmstat
- Must-have production level JVM settings/flags
- Developer Do’s and Don’ts
- Crash course on thread dumps and heap dumps
We will also be giving away a copy of the Oracle Coherence 3.5 book at each event! If you are coming, please follow the links to the events above and RSVP (you need to be a member of CodeTown to sign up, but registration is free and painless.)
Coherence Key HOWTO
On occasion I am asked about best practices for creating classes to be used as keys in Coherence. This usually comes about due to unexpected behavior that can be explained by incorrect key implementations.
First and foremost, equals and hashCode need to be implemented correctly for any type used as a key. I won’t describe how to do this; instead I’ll defer to Josh Bloch, who has written the definitive guide on this topic.
There is an additional requirement that needs to be addressed: all serializable (non-transient) fields in the key class must be used in the equals implementation. To understand this requirement, let’s explore how Coherence works behind the scenes.
First, let’s try the following experiment:
import java.io.Serializable;

public class Key implements Serializable {
    public Key(int id, String zip) {
        m_id = id;
        m_zip = zip;
    }

    //...

    @Override
    public boolean equals(Object o) {
        // print stack trace
        new Throwable("equals debug").printStackTrace();

        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        Key key = (Key) o;

        if (m_id != key.m_id) {
            return false;
        }
        if (m_zip != null ? !m_zip.equals(key.m_zip) : key.m_zip != null) {
            return false;
        }

        return true;
    }

    @Override
    public int hashCode() {
        // print stack trace
        new Throwable("hashCode debug").printStackTrace();

        int result = m_id;
        result = 31 * result + (m_zip != null ? m_zip.hashCode() : 0);
        return result;
    }

    private int m_id;
    private String m_zip;
}
This key prints out stack traces in equals and hashCode. Now use this key with a HashMap:
public static void testKey(Map m) {
    Key key = new Key(1, "12345");
    m.put(key, "value");
    m.get(key);
}

//...

testKey(new HashMap());
Output is as follows:
java.lang.Throwable: hashCode debug
    at oracle.coherence.idedc.Key.hashCode(Key.java:60)
    at java.util.HashMap.put(HashMap.java:372)
    at oracle.coherence.idedc.KeyTest.testKey(KeyTest.java:46)
    at oracle.coherence.idedc.KeyTest.testKey(KeyTest.java:52)
    at oracle.coherence.idedc.KeyTest.main(KeyTest.java:18)
java.lang.Throwable: hashCode debug
    at oracle.coherence.idedc.Key.hashCode(Key.java:60)
    at java.util.HashMap.get(HashMap.java:300)
    at oracle.coherence.idedc.KeyTest.testKey(KeyTest.java:47)
    at oracle.coherence.idedc.KeyTest.testKey(KeyTest.java:52)
    at oracle.coherence.idedc.KeyTest.main(KeyTest.java:18)
Try it again with a partitioned cache this time:
testKey(CacheFactory.getCache("dist-test"));
Note the absence of stack traces this time. Does this mean Coherence is not using the key’s equals and hashCode? The short answer (for now) is yes. Here is the flow of events that occurs when executing a put against a partitioned cache:
- Invoke NamedCache.put
- Key and value are serialized (see the sketch after this list)
- Hash is executed on serialized key to determine which partition the key belongs to
- Key and value are transferred to the storage node (likely over the network)
- Cache entry is placed into backing map in binary form
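Steps 2 and 3 can be made concrete with Coherence’s ExternalizableHelper (a sketch; Key is the class from the example above):

import com.tangosol.util.Binary;
import com.tangosol.util.ExternalizableHelper;

public class BinaryFormDemo {
    public static void main(String[] args) {
        // Step 2: the key is serialized into its Binary form.
        Binary binKey = ExternalizableHelper.toBinary(new Key(1, "12345"));

        // Step 3 (conceptually): the hash of the *binary* form -- not
        // Key.hashCode() -- is what determines the owning partition.
        System.out.println("Serialized key is " + binKey.length()
                + " bytes, binary hash " + binKey.hashCode());
    }
}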
Note that objects are not deserialized before placement into the backing map; objects are stored in their serialized binary format. As a result, two keys that are equal to each other in object form must also be equal to each other in binary form, so that the keys can later be used to retrieve entries from the backing map. The most common way to violate this principle is to exclude non-transient fields from equals. For example:
import java.io.Serializable;

public class BrokenKey implements Serializable {
    public BrokenKey(int id, String zip) {
        m_id = id;
        m_zip = zip;
    }

    //...

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        BrokenKey brokenKey = (BrokenKey) o;

        if (m_id != brokenKey.m_id) {
            return false;
        }

        return true;
    }

    @Override
    public int hashCode() {
        int result = m_id;
        result = 31 * result;
        return result;
    }

    private int m_id;
    private String m_zip;
}
Note that this key has two fields (id and zip) but only uses id in the equals/hashCode implementation. I have the following method to test this key:
public static void testBrokenKey(Map m) {
    BrokenKey keyPut = new BrokenKey(1, "11111");
    BrokenKey keyGet = new BrokenKey(1, "22222");

    m.get(keyPut);
    m.put(keyPut, "value");

    System.out.println(m.get(keyPut));
    System.out.println(m.get(keyGet));
}
Output using HashMap:
value
value
Output using partitioned cache:
value
null
This makes sense, since keyPut and keyGet will serialize to different binaries. However, things get really interesting when combining a partitioned cache with a near cache. Running the example using a near cache gives the following results:
value
value
What happened? In this case, the first get resulted in a near cache miss, which triggered a read-through to the backing partitioned cache. The second get resulted in a near cache hit: since near caches store data in object form, the object’s equals/hashCode was used.
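For reference, the near cache run is just the same test method pointed at a near-scheme-backed cache, something along these lines (the cache name is illustrative and assumes a matching near scheme in the cache configuration):

testBrokenKey(CacheFactory.getCache("near-test"));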
In addition to equals/hashCode, keep the following in mind:
- Keys should be immutable. Modifying a key while it is in a map generally isn’t a good idea, and it certainly won’t work in a distributed/partitioned cache.
- Keys should be as small as possible. Many operations performed by Coherence assume that keys are very lightweight (such as the key-based listeners used for near cache invalidation).
- Built-in types (String, Integer, Long, etc.) fit all of these criteria. If possible, consider using one of these existing classes. If you do need a custom key, see the sketch below.
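Putting these guidelines together, here is a minimal sketch of a well-behaved custom key (OrderKey is purely illustrative):

import java.io.Serializable;

// Immutable, small, and equals/hashCode cover every serialized field.
public final class OrderKey implements Serializable {
    private final long m_id; // final field: the key cannot change after construction

    public OrderKey(long id) {
        m_id = id;
    }

    @Override
    public boolean equals(Object o) {
        return this == o
                || (o instanceof OrderKey && m_id == ((OrderKey) o).m_id);
    }

    @Override
    public int hashCode() {
        return (int) (m_id ^ (m_id >>> 32));
    }
}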