Notes on Oracle Coherence

Akademily
Sep 30, 2020

Oracle Coherence is a distributed cache that is functionally comparable to Memcached. In addition to the basic cache API, it offers a number of features that make it attractive for building large-scale enterprise applications.

The API is based on the Java Map (Hashtable) interface and follows key/value store semantics, where any Java Serializable object can be used as a value. Coherence supports multiple caches, each identified by a unique name (a "named cache").

A common usage pattern is to look up a cache by its name and then operate on it.

BASIC CACHE FUNCTIONS (MAP, JCACHE)

  • Get data by key
  • Update data by key
  • Delete data by key

NamedCache nc = CacheFactory.getCache("mine");
Object previous = nc.put("key", "hello world");  // insert or update; returns the previous value, if any
Object current = nc.get("key");                  // read by key
int size = nc.size();
Object value = nc.remove("key");                 // delete; returns the removed value
Set keys = nc.keySet();
Set entries = nc.entrySet();
boolean exists = nc.containsKey("key");

CACHE MODIFICATION EVENT LISTENERS (OBSERVABLEMAP)

You can register an event listener on a cache so that your listener code is invoked when certain changes occur in the cache:

  • A new element is inserted into the cache
  • An existing cache element is removed
  • An existing cache element is updated

NamedCache nc = CacheFactory.getCache("stocks");
nc.addMapListener(new MapListener() {
    public void entryInserted(MapEvent mapEvent) {
        ...
    }
    public void entryUpdated(MapEvent mapEvent) {
        ...
    }
    public void entryDeleted(MapEvent mapEvent) {
        ...
    }
});

FILTERED VIEWS OF THE CACHE (QUERYMAP)

You can also define a "view" of the cache by providing a "filter", which is essentially a boolean function; the view contains only the elements for which that function evaluates to true.

NamedCache nc = CacheFactory.getCache("people");

Set keys = nc.keySet(new LikeFilter("getLastName", "%Stone%"));

Set entries = nc.entrySet(new EqualsFilter("getAge", 35));

CONTINUOUS QUERY SUPPORT (CONTINUOUSQUERYCACHE)

The view can also be used as a "continuous query": any new incoming data that meets the filter criteria is automatically included in the view.

NamedCache nc = CacheFactory.getCache("stocks");

NamedCache expensiveItems =
    new ContinuousQueryCache(nc, new GreaterFilter("getPrice", 1000));
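
Because the resulting view is itself observable, it can be combined with the event-listener mechanism shown earlier. A small sketch, assuming the same "stocks" cache and a getPrice accessor on the cached objects:

NamedCache nc = CacheFactory.getCache("stocks");

ContinuousQueryCache expensiveItems =
    new ContinuousQueryCache(nc, new GreaterFilter("getPrice", 1000));

// React as entries enter or leave the view when prices change.
expensiveItems.addMapListener(new MapListener() {
    public void entryInserted(MapEvent evt) {
        System.out.println("Became expensive: " + evt.getKey());
    }
    public void entryUpdated(MapEvent evt) {
    }
    public void entryDeleted(MapEvent evt) {
        System.out.println("No longer expensive: " + evt.getKey());
    }
});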

SUPPORT FOR PARALLEL QUERIES AND AGGREGATION (INVOCABLEMAP)

We can also execute a query and partial aggregation on all cluster nodes in parallel, followed by a final aggregation step.

NamedCache nc = CacheFactory.getCache("stocks");

Double total =
    (Double) nc.aggregate(AlwaysFilter.INSTANCE,
                          new DoubleSum("getQuantity"));

Set symbols =
    (Set) nc.aggregate(new EqualsFilter("getOwner", "Larry"),
                       new DistinctValues("getSymbol"));

SUPPORT FOR PARALLEL PROCESSING (INVOCABLEMAP)

We can also execute processing logic on all cluster nodes in parallel.

NamedCache nc = CacheFactory.getCache("stocks");

nc.invokeAll(new EqualsFilter("getSymbol", "ORCL"),
             new StockSplitProcessor());

class StockSplitProcessor extends AbstractProcessor {
    public Object process(InvocableMap.Entry entry) {
        // Double the quantity of the locally owned entry; this runs on
        // the node that holds the data rather than in the client.
        Stock stock = (Stock) entry.getValue();
        stock.quantity *= 2;
        entry.setValue(stock);
        return null;
    }
}

IMPLEMENTATION ARCHITECTURE

Oracle Coherence runs on a cluster of identical server machines connected via a network. Each server carries several layers of software that together provide a unified abstraction for storing and processing data in a distributed environment. The application typically runs within the cluster as well. The cache is implemented as a set of smart data proxies that know, based on the key, where the primary and backup copies of each piece of data are located.

When a client "reads" data through the proxy, the proxy first looks for the data in a local cache (also called the "near cache") on the same machine. If it is not found there, the smart proxy looks up the corresponding copy in the distributed cache (also called the L2 cache). Since this is a read, either the primary or the backup copy will do. If the smart proxy does not find the data in the distributed cache either, it loads it from the backend database. The returned data is then propagated back to the client, and the caches along the way are populated.
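
To make that lookup order concrete, here is a minimal, self-contained sketch of the read path. The three ConcurrentHashMap fields are hypothetical stand-ins for the near cache, the distributed cache and the backend database; it illustrates the flow described above, not Coherence's actual implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: plain maps stand in for the near cache,
// the distributed (L2) cache and the backend database.
public class ReadPathSketch {
    private final Map<Object, Object> nearCache = new ConcurrentHashMap<>();
    private final Map<Object, Object> distributedCache = new ConcurrentHashMap<>();
    private final Map<Object, Object> database = new ConcurrentHashMap<>();

    public Object read(Object key) {
        // 1. Try the near cache on the local machine first.
        Object value = nearCache.get(key);
        if (value != null) {
            return value;
        }

        // 2. Fall back to the distributed cache; for a read, either the
        //    primary or the backup copy of the entry will do.
        value = distributedCache.get(key);
        if (value == null) {
            // 3. Finally, load from the backend database and repopulate
            //    the distributed cache on the way back.
            value = database.get(key);
            if (value != null) {
                distributedCache.put(key, value);
            }
        }

        // Populate the near cache so that later reads stay local.
        if (value != null) {
            nearCache.put(key, value);
        }
        return value;
    }
}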

Updating data (insert, update, delete) works in the opposite direction. In the primary/backup architecture, every update is routed to the node that owns the primary copy of that piece of data. Coherence supports two update modes: "write through" and "write behind". In write-through mode the backend database is updated immediately after the primary copy is updated, but before the backup copy, so the database is always kept up to date.

In write-behind mode the backup copy is updated first and the database is updated asynchronously. Data loss is possible in write-behind mode, but it offers higher throughput because multiple updates can be coalesced into a single write, resulting in fewer database writes.
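
A common way to plug the backend database into this write path is a cache store. Below is a minimal sketch that implements Coherence's CacheStore interface, with a ConcurrentHashMap standing in for the real database; whether Coherence calls store() synchronously (write through) or asynchronously and batched (write behind) is decided by the cache configuration, not by this class.

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.tangosol.net.cache.CacheStore;

// Minimal sketch: an in-memory map stands in for the real database;
// a production implementation would issue JDBC/ORM calls instead.
public class StockCacheStore implements CacheStore {

    private final Map<Object, Object> database = new ConcurrentHashMap<>();

    public Object load(Object key) {
        return database.get(key);
    }

    public Map loadAll(Collection keys) {
        Map result = new HashMap();
        for (Object key : keys) {
            Object value = database.get(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    public void store(Object key, Object value) {
        database.put(key, value);
    }

    // In write-behind mode several pending updates may be coalesced into
    // a single storeAll() call, which is why that mode needs fewer writes.
    public void storeAll(Map entries) {
        database.putAll(entries);
    }

    public void erase(Object key) {
        database.remove(key);
    }

    public void eraseAll(Collection keys) {
        for (Object key : keys) {
            database.remove(key);
        }
    }
}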

While pulling data from the cache into the application is the typical way to process it, this does not scale well when a large amount of data needs to be processed. Instead of shipping the data to the processing logic, a far more effective approach is to ship the processing logic to where the data lives. This is why Oracle Coherence provides the InvocableMap interface, which lets the client supply a "processor" class that is sent to each node and executed against the local data.

Moving the code to data that is distributed across multiple nodes also enables parallel processing, since each node can process its local data in parallel.

The processor logic is placed on the execution queue of the owning node, where the execution thread dequeues the processor object and runs it. Note that this execution is sequential; in other words, the execution thread finishes one processor before moving on to the next one.

This means there is no need to worry about multithreading within the processor, no need to take locks, and therefore no deadlock issues.
