Labeled Interface Invocation

Meresco’s Latest Addition
Meresco has received an important upgrade that greatly improves the flexibility of DNA configuration. With Labeled Interface Invocation (LII), supporting different implementations of the same interface has become even more flexible.

Multiple Implementations For an Interface
DNA makes the well-known distinction between interfaces and implementations. Sometimes, however, there are multiple distinct implementations of the same interface. For example, two Storage components might support the same interface yet represent distinct storages, with different performance properties or different data sets. DNA already supports this: at configuration time, one branch might use one implementation while another branch uses another. For a step-by-step programmer's guide to how this works, see Component Configuration with DNA.

Components Choose Their Own Implementations
Labeled Interface Invocation allows components to choose between distinct implementations of the same interface at run-time. This is done by first labeling the implementations during configuration, and then using these labels when invoking the interface.

Suppose we have two storages supporting the interface

    addNew(data)

Normally, a message to an arbitrary implementation of this interface would look like:

    self.any.addNew(someData)

Now suppose that there are two implementations, differing in the way they store and safeguard data. Let's label these implementations “reviews” and “audits”:

    reviewStore = Storage(name="reviews")
    auditStore = Storage(name="audits")

With Labeled Interface Invocation we can now target messages at either of these implementations as follows:

   self.any["reviews"].addNew(newReview)
   self.any["audits"].addNew(auditRecord)

An interesting aspect of the lines above is that the constant labels “reviews” and “audits” can in fact be variables. Labels can thus be computed, and messages can be routed to different components on the fly. For example:

    target = ... compute or lookup a label ...
    self.any[target].addNew(someData)
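
To make the idea concrete, here is a minimal sketch of a component that routes each record to a labeled storage based on its type. The labels come from the examples above; the RecordRouter class, the record structure and the import path are assumptions that only serve to illustrate the pattern.

    from meresco.core import Observable   # assumed import path

    class RecordRouter(Observable):
        # Hypothetical component: routes each record to a labeled storage.
        def add(self, record):
            # Compute the label from the record itself; anything that
            # evaluates to "reviews" or "audits" will do.
            target = "reviews" if record["type"] == "review" else "audits"
            # Invoke the interface on the implementation carrying that label.
            self.any[target].addNew(record["data"])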

Conclusion
Meresco’s DNA has been stable for about two years, so new DNA features are rare, and we were pleasantly surprised to have found one. As usual, it took a long time and a lot of thinking, but only a few lines of code; 3, actually. Most of the code changes are in tests and in other components that now use LII. Here is the complete LII changeset.

Have any questions about this feature, or want support to use it yourself? Please contact me.

How to scale up Meresco

Recently Kennisnet asked me how to scale up Edurep with regard to:
– queries per second
– record updates per second
– total number of records

I suspect this is of broader interest, so below I describe two approaches for scaling up by adding CPUs, memory or bandwidth.

Queries per second
A single-machine Meresco system handles between 10 and 100 queries per second. Scaling this up requires adding more machines so the load can be distributed over CPUs and networks. There are two approaches.

Approach A
Replicate the entire server process and feed updates to all replicas simultaneously.

Approach B
Extract the most demanding components from the server’s configuration and put these on separate machines. Reconnect them using the Inbox component.

(Figure: the server configuration before and after extracting components onto separate machines.)

Both approaches are based on standard Meresco functionality and therefore easily configured.

Record updates per second
Meresco is able to process 1 to 10 updates per second concurrently with querying. Scaling this up requires adding machines that can share the load of processing the records using approach B. These machines can feed into one or more query processing machines, effectively enabling scaling along both axes.

The main idea is to decompose a system into subsystems which can be distributed and replicated. This analysis must be done before a system can scale up using cloud-like environments. How Meresco’s configuration supports this will be outlined in a future blog.

Total number of records
Meresco can host 10 – 100 million records on one machine, mostly limited by what its indexes can do. Scaling up requires a closer look at these indexes to see how additional resources must be allocated. In this area Lucene, BerkeleyDB and OWLIM have earned great reputations. Meresco’s architecture helps to get the most out of these.

Meresco’s homegrown Facet Index and Sorted Dictionary Index (used for auto-complete) can be scaled following approach B. However, with a single-node limit of roughly one billion records, most applications will not need more than one node.

Conclusion
I realize that I only scratched the surface of how to scale Meresco. There are many details to discuss and you probably wonder how your situation could be dealt with. I’d love to hear your responses!

Integrating Java in Python with JTool

MERESCO combines components written in various programming languages. It uses Python to tie these components together. It integrates Java using JTool.

It began with Lucene
Lucene is a well-known Java library for full-text search. MERESCO used PyLucene, which compiled Lucene to native machine code. PyLucene was unstable and did not cover all of Lucene. In 2008 PyLucene changed strategy and its performance dropped significantly. We decided to try a completely different approach, and that turned out to work very well.

What did we try?
We quickly discovered that compiling Lucene with GCJ was easy and that it resulted in robust, fast and reliable programs. Then we created a Python extension called JTool which mirrors the complete Java API in Python.

How to use?
Here is how you use it in Python:

$ python
>>> import jtool
>>> jtool.load('liblucene-core.so')  # compiled lucene-core.jar
>>> from org.apache.lucene.index import IndexReader
>>> reader = IndexReader.open("/indexdir")
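
To give an impression of what working with the mirrored API feels like, the session might continue with a simple search. The classes are standard Lucene ones from that era; that JTool converts the Python strings and integers in exactly this way is my assumption:

>>> from org.apache.lucene.search import IndexSearcher, TermQuery
>>> from org.apache.lucene.index import Term
>>> searcher = IndexSearcher(reader)
>>> topDocs = searcher.search(TermQuery(Term("title", "art")), 10)
>>> topDocs.totalHits   # number of matching documents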

This is how all of Lucene is accessed in MERESCO. It runs fast and reliably, with a low memory footprint. The code base of JTool is only 1500 lines, there is no code generation, and it is completely generic. So the next question is:

Will JTool work for other Java libraries?
In February 2010 we started looking for a more scalable Triple Store for MERESCO. Our choice was OWLIM… written in Java. While Lucene is quite a large library, OWLIM is even larger: it depends on 22 other Java projects, including the Sesame RDF Framework.

Compilation of OWLIM took a bit more effort, as we needed to gather all the required jar files and make sure some factories did not get duplicated in the final library. Then we tried to load this library in Python using JTool:

>>> import jtool
>>> jtool.load("libowlim-core.so")
>>> from org.openrdf.repository.sail import SailRepository
>>> from org.openrdf.query import QueryLanguage
>>> ...

This enabled us to insert RDF and execute SPARQL queries on the triple store. Yes, it works!
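
To sketch what that looks like in practice, a session could continue roughly as follows. The classes and calls are the usual Sesame 2.x ones; which Sail class to instantiate for OWLIM, and how exactly JTool exposes these calls, are assumptions on my part:

>>> from org.openrdf.sail.memory import MemoryStore   # stand-in; OWLIM ships its own Sail
>>> repository = SailRepository(MemoryStore())
>>> repository.initialize()
>>> connection = repository.getConnection()
>>> tupleQuery = connection.prepareTupleQuery(QueryLanguage.SPARQL,
...     "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10")
>>> result = tupleQuery.evaluate()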

Future of JTool
JTool cannot yet call methods with null parameters or Java 5 varargs. It also does not yet support callbacks into Python. We have solutions for these omissions, which we will implement this year. Meanwhile, it is easy enough to create a Java wrapper and use that via JTool. So JTool allows us to quickly integrate any Java library into MERESCO.

Availability
Sources for JTool up to version 4 are available at JTool Sources.
JTool version 5 and up is available in binary form at JTool Binaries.

Weightless

The high concurrent performance of Meresco is not achieved by deploying an army of processes and threads but by the asynchronous power of Weightless.

Server processes are either synchronous or asynchronous.

Synchronous

Synchronous servers accept a connection and wait for the whole request to be received before processing it. Every connection is handled by a single thread or process. The program flow in synchronous servers seems conceptually easier, but often gets complicated in practice by all kinds of locking issues.

Asynchronous

Asynchronous servers read when there is data available and send responses when there is something to send, all in one single-threaded process. Code that runs within an asynchronous server needs to be crafted with special care to allow for fair resource sharing between requests. This is known as cooperative scheduling. Because there are no threads that interrupt and lock each other (simplifying the software compared to the alternative), the server is able to handle a large number of concurrent connections without a noticeable speed penalty.

Weightless

Weightless was developed to bring the advantages of asynchronous I/O to Meresco. Weightless is a lightweight framework that provides the infrastructure for asynchronous servers and comes with HTTP and HTTPS server functionality. By using Python generators (co-routines) for input and output, Weightless provides an easy-to-use mechanism to read and write data.
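
The sketch below illustrates the generator style rather than the literal Weightless API: a request handler yields response chunks, and suspends at every yield so other connections can be served in between. The function name and signature are illustrative only; see weightless.io for the real interfaces.

    # Conceptual sketch, not the literal Weightless API: a handler is a
    # generator that yields response chunks.
    def handleRequest(path, **kwargs):
        yield 'HTTP/1.0 200 OK\r\n'
        yield 'Content-Type: text/plain\r\n\r\n'
        for n in range(3):
            # Each chunk is handed to the reactor and sent when the socket
            # is writable; no thread blocks while a slow client catches up.
            yield 'chunk %d of the response for %s\r\n' % (n, path)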

More information on Weightless can be found at http://weightless.io.

Storage versus Index

In a lot of search engines the data and the indices are stored together, creating a single huge entity. This approach potentially leads to a number of problems, ranging from backup problems to performance issues. Moreover, in these systems access to the data is limited to what their respective APIs offer.

Index
Meresco works differently, using an index for what it is designed to do best: returning the identifiers of documents that match a query. Just as the index of a book gives you the number of the page that covers a certain topic, the identifiers returned by a Meresco index point to documents in a separate storage. This leads to a simple index that, even for millions of documents, typically stays small enough to fit entirely in memory, which yields an obvious speed advantage.

Storage
The data is stored in a Meresco Storage. The storage is basically a well-defined directory structure: identifiers are used to pinpoint the directory in which the data is stored. This means the data can live on basically any filesystem (although e.g. the ext2/3 filesystems impose a limit on the number of subdirectories in a directory).
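
As an illustration of the idea, and not of Meresco's actual layout, an identifier could be mapped onto such a directory structure along these lines:

    from hashlib import md5
    from os.path import join

    def pathForIdentifier(root, identifier):
        # Hash the identifier and use the first characters of the digest as
        # intermediate directory levels, spreading directories evenly and
        # staying clear of per-directory limits.
        digest = md5(identifier.encode('utf-8')).hexdigest()
        return join(root, digest[:2], digest[2:4], digest)

    # pathForIdentifier('/data/storage', 'http://institute.org/5832')
    # -> '/data/storage/<xy>/<zw>/<full digest>'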

Native
Having all data in native format on disk makes it easier to control and maintain. Data can be read immediately without having to be decoded or transformed in any other way. Data enrichment tools, for example to get metadata from PDF files or digital images, can do their work in the background directly on the data files.

Caching
Many systems come with their own caching mechanisms. Meresco Storage, however, takes advantage of the disk-caching capabilities of modern Unix systems. This results in fast data lookups with no added complexity.

Conclusion
By keeping only identifiers, a Meresco index stays simple, small and fast. The accompanying storage offers fast retrieval of stored documents in their native formats.

What to do with Linked Data?

The web is moving towards Linked Data. Many data collections are available as Linked Data, including those of Dutch scientific libraries, museums and archives. What can we do with all this data? What tools do we need? The good news is that Linked Data can be adopted incrementally.

What problem does Linked Data address?

Objects in libraries, museums and archives are increasingly described by experts not affiliated with these institutions. The resulting descriptions relate to persons, places and other concepts over which none of the experts or institutions can claim authority. Institutions are no longer the only authority on specific data collections, and certainly not authoritative on all the concepts their collections relate to. Maintaining collections of authoritative information is therefore becoming increasingly difficult: life-cycle management of metadata records, possibly even maintaining different versions for different communities, becomes a major challenge. Failing to maintain a collection that is clearly authoritative, yet not isolated, undermines the position of museums, archives and libraries, and of any other (broker) service that aims to add value.

How does Linked Data solve the problem?

Linked Data allows everyone to make statements about everything [RDF Concepts]. It does not encapsulate knowledge about an object in a record, but represents the knowledge as a set of statements about the object. The record becomes a set of statements. This is simple, but fundamental. Consider the following record about a certain piece of art:

Record 5832:
    identifier = http://institute.org/5832
    title = "A true work of Art"
    creator = "V. van Gogh"

This could be represented with (at least) two statements:

http://institute.org/5832 has a title whose value is “A true work of Art”

http://institute.org/5832 has a creator whose value is “V. van Gogh”

The fundamental change here is that record 5832 no longer plays a role when exchanging data. The record was artificially created to describe an object, but the record itself is not important; only the statements it introduces are. (Linked Data can transparently introduce intermediate objects to group statements, but these are not manageable items ‘to worry about’ the way records were.) Only the sets of statements are exchanged. Maintaining an authoritative collection comes down to carefully selecting which sets of statements to join.
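
For readers who want to see this in code, the record above can be turned into triples with, for example, rdflib (a Python RDF library not mentioned in the original text; the Dublin Core predicate URIs are my assumption):

    from rdflib import Graph, URIRef, Literal, Namespace

    DC = Namespace("http://purl.org/dc/elements/1.1/")
    subject = URIRef("http://institute.org/5832")

    g = Graph()
    # The record becomes two statements about the object, nothing more.
    g.add((subject, DC.title, Literal("A true work of Art")))
    g.add((subject, DC.creator, Literal("V. van Gogh")))

    print(g.serialize(format="turtle"))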

Resolving Statements

Formally, a statement is a triple of (Subject, Predicate, Object) [RDF Concepts]. Together, triples form graphs.

Subject and Predicate are always URIs, while the Object can be a URI or a value. In the example above, the creator statement could instead have been:

http://institute.org/5832 has a creator whose URI is info:eu-repo/dai/nl/071792279

For the actual name of this person, we have to look for statements with info:eu-repo/dai/nl/071792279 as their subject. These again might point to another URI, so we repeat the process until we find a value.
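
A minimal sketch of this lookup, again with rdflib and under the assumption that the relevant statements have already been gathered into one graph and that names are expressed with the FOAF name predicate:

    from rdflib import Graph, URIRef, Literal, Namespace

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    def nameFor(graph, uri):
        # Look for statements that have this URI as their subject.
        for obj in graph.objects(uri, FOAF.name):
            if isinstance(obj, Literal):
                return str(obj)          # found an actual value
            return nameFor(graph, obj)   # another URI: repeat the process
        return None

    # e.g. nameFor(g, URIRef("info:eu-repo/dai/nl/071792279"))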

How can institutions take advantage of Linked Data?

If the world around an institution is a cloud of Linked Data sources, the center of the cloud is where the institution has most of its authority. Surrounding this authority center are the related data sources on which the institution has less authority. Together we call this the authority cloud.

With this as a reference, take the following small steps:

  1. Start seeing data collections as statements, both in your Authority Cloud and outside it. Don't worry when they are not in RDF; that is not required.
  2. Start using global persistent identifiers for all your objects. This allows you and others to make statements about the objects and to have meaningful joins.
  3. Start gathering triples from the sources within your Authority Cloud in a Triple Store. When sources are not in RDF, just use simple tools to extract triples.
  4. Populate your local services using the Triple Store to resolve others’ statements. For example, while indexing your own metadata, use the triple store to create additional search fields, facets, tag clouds, etc. (see the sketch after this list).
  5. While displaying objects, turn unresolved statements into clickable links.
  6. For advanced users: start making use of the Triple Store’s query capabilities for enhancing your services.
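
As a rough illustration of step 4, a query against the local Triple Store can supply extra search fields while indexing a record. The predicates, the field name and the use of rdflib as a stand-in for the actual Triple Store are all assumptions:

    CREATOR_NAMES = """
        SELECT ?name WHERE {
            <%s> <http://purl.org/dc/elements/1.1/creator> ?creator .
            ?creator <http://xmlns.com/foaf/0.1/name> ?name .
        }"""

    def extraSearchFields(graph, objectUri):
        # Resolve creator URIs to names so they can be indexed as an
        # additional field or facet alongside the record's own metadata.
        names = [str(row.name) for row in graph.query(CREATOR_NAMES % objectUri)]
        return {"creator_name": names}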

What tools are needed to deal with Linked Data?

Keep your tools! Unless you are dissatisfied with them, of course, there is no reason to throw away your investment. You will, however, need a scalable triple store in your own data center. Since this Triple Store contains all the statements you decided need resolving before offering your service, it must be fast and readily available.

In the next installment of this blog, we will outline how MERESCO can be used to implement Linked Data.