Partitions, Backing Maps, and Services…Oh My!
Recently I was asked the following question:
What is the relationship between a partition, cache, and a backing map?
To answer this, first we should go through some definitions:
Cache: An object that maps keys to values. In Coherence, caches are referred to by name (hence the NamedCache interface). Caches are also typically clustered, i.e. they can be accessed and modified from any member of a Coherence cluster.
Partition: A unit of transfer for a partitioned/distributed cache. (In Coherence terminology, partitioned and distributed are used interchangeably, with preference towards the former.)
Backing Map: Data structure used by a storage node to hold contents of a cache. For partitioned caches, this data is in binary (serialized) form.
I’ll add one more term for completeness:
Service: A set of threads dedicated to handling requests; a cache service handles requests such as put, get, etc. The DistributedCache service is present on every member of the cluster that reads, writes, or stores cache data, even if that member is storage disabled.
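To make the cache and service definitions a bit more concrete, here is a minimal sketch (the cache name “example” and the data are made up) of a client using the NamedCache API; each put and get below becomes a request handled by the partitioned cache service on whichever storage-enabled member owns the key:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NamedCacheExample
{
    public static void main(String[] args)
    {
        // join (or start) a cluster and obtain a clustered cache by name;
        // the cache name "example" is hypothetical
        NamedCache cache = CacheFactory.getCache("example");

        // these calls are turned into requests that are executed by the
        // cache service on the storage-enabled member that owns the key
        cache.put("hello", "world");
        System.out.println(cache.get("hello"));

        CacheFactory.shutdown();
    }
}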
Now let’s put these concepts together to see how a clustered partitioned cache works.
First we’ll start with a backing map. It is a straightforward data structure, usually an instance of LocalCache.
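To illustrate (this is only a sketch, not what the service literally executes): the backing map is just a java.util.Map, and for a partitioned cache the service puts serialized (Binary) keys and values into it. ExternalizableHelper is used below purely to produce Binary values for illustration; in reality the service performs that conversion with its configured serializer.

import com.tangosol.net.cache.LocalCache;
import com.tangosol.util.Binary;
import com.tangosol.util.ExternalizableHelper;

public class BackingMapSketch
{
    public static void main(String[] args)
    {
        // a backing map is an ordinary Map; LocalCache is the typical implementation
        LocalCache backingMap = new LocalCache();

        // a partitioned cache stores its data in serialized (Binary) form
        Binary binKey   = ExternalizableHelper.toBinary("key-1");
        Binary binValue = ExternalizableHelper.toBinary("value-1");
        backingMap.put(binKey, binValue);

        // deserialize on the way back out
        Object value = ExternalizableHelper.fromBinary((Binary) backingMap.get(binKey));
        System.out.println(value); // value-1
    }
}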
The contents of this map are managed by a partitioned cache service. In a typical thread dump, this would be the “DistributedCache” thread:
"DistributedCache" daemon prio=5 tid=101a91800 nid=0x118445000 in Object.wait() [118444000] |
A single cache service can manage multiple backing maps. This is the default behavior if multiple services are not specified in the cache configuration file. (Multiple partitioned cache services are beyond the scope of this post; this topic will be addressed in a future posting.)
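One quick way to see this relationship from code (the cache names here are hypothetical): two caches that use the default distributed scheme have separate backing maps on the storage members, yet getCacheService() hands back the same service, and its name matches the thread in the dump above.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.CacheService;
import com.tangosol.net.NamedCache;

public class SharedServiceSketch
{
    public static void main(String[] args)
    {
        NamedCache accounts = CacheFactory.getCache("accounts");
        NamedCache orders   = CacheFactory.getCache("orders");

        // each cache gets its own backing map on the storage members, but with
        // the default configuration both are managed by one partitioned cache service
        CacheService service = accounts.getCacheService();
        System.out.println(service == orders.getCacheService());  // true
        System.out.println(service.getInfo().getServiceName());   // DistributedCache

        CacheFactory.shutdown();
    }
}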
So how do partitions come into play? Each backing map is logically split into partitions.
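The key-to-partition mapping can be inspected at runtime. Here is a sketch, assuming a made-up cache name and key, using the PartitionedService interface exposed by a distributed cache:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.Member;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

public class PartitionSketch
{
    public static void main(String[] args)
    {
        NamedCache cache = CacheFactory.getCache("example");

        // the service behind a distributed cache is a PartitionedService
        PartitionedService service = (PartitionedService) cache.getCacheService();

        // 257 partitions by default (the same number that shows up in the
        // transfer log snippets below)
        System.out.println(service.getPartitionCount());

        // every key hashes to exactly one partition, and each partition is
        // owned by exactly one storage-enabled member at any given time
        int nPartition = service.getKeyPartitioningStrategy().getKeyPartition("some-key");
        Member owner   = service.getPartitionOwner(nPartition);
        System.out.println("partition " + nPartition + " is owned by " + owner);

        CacheFactory.shutdown();
    }
}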
This partitioning is used when transferring data between storage-enabled members. Here are log file snippets showing data transfer between two nodes:
Member 1:
(thread=DistributedCache, member=1): 3> Transferring 128 out of 257 primary partitions to member 2 requesting 128
Member 2:
(thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions
As members are added to the cluster, the partitions are redistributed amongst them. Note that this includes the backup copies of each partition.
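The same PartitionedService interface also shows how many backup copies are maintained and which members the partitions (and their backups) are balanced across; a small sketch, again with a made-up cache name:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

public class OwnershipSketch
{
    public static void main(String[] args)
    {
        NamedCache cache = CacheFactory.getCache("example");
        PartitionedService service = (PartitionedService) cache.getCacheService();

        // number of backup copies kept for each partition (1 by default)
        System.out.println("backup count: " + service.getBackupCount());

        // the storage-enabled members across which primary partitions and
        // their backups are distributed
        System.out.println("storage members: " + service.getOwnershipEnabledMembers());

        CacheFactory.shutdown();
    }
}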
As of Coherence 3.5, it is possible to configure a partitioned backing map, which physically splits each partition into its own backing map instance. This configuration may be desirable for backing maps that are stored off-heap (i.e. NIO direct memory or disk) as it makes data transfer more efficient.