What would be the best way to store and query RDF data (triples or quads) using HBase?
I've just found this:
Unfortunately, the .tar.gz has been removed from the utdallas.edu website.
You basically have two options in how to store RDF data in wide-column databases like HBase and Cassandra: the resource-centric approach and the statement-centric approach.
In the statement-oriented approach, each RDF statement corresponds to a row key (for instance, a UUID) and contains columns for the subject, predicate, object, and, for quads, the context term of that statement.
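As a minimal in-memory sketch of that layout (the class and column names here are placeholders of my own, not a fixed schema):

```java
import java.util.Map;
import java.util.UUID;

public class StatementRow {
    // Statement-oriented layout: one row per statement, keyed by a
    // random UUID, with one column per RDF term. The context column
    // would be omitted (or left empty) for plain triples.
    public static String newRowKey() {
        return UUID.randomUUID().toString();
    }

    public static Map<String, String> columns(String subject, String predicate,
                                              String object, String context) {
        return Map.of("subject", subject,
                      "predicate", predicate,
                      "object", object,
                      "context", context);
    }
}
```

In HBase terms, each entry of the returned map would become a column qualifier/value pair under a single column family.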
The above is a relatively simple mapping to implement, but it suffers from a notable problem: preventing the creation of duplicate statements (an important RDF semantic) requires a read before every write, which, at least in Cassandra, quickly becomes a performance bottleneck since writes are much faster than reads.
There are ways to work around this problem, in particular by using content-addressable statement identifiers (e.g. the SHA-1 of the canonicalized N-Triples representation of each statement) as the row keys, but this in turn introduces other trade-offs such as no longer being able to version statement data: every time statement data changes, the old row needs to be deleted and a new one inserted with the new row key.
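To illustrate the content-addressable scheme, here is a sketch that hashes a statement's N-Triples form into a row key. Note this toy canonicalization only handles IRI terms; real canonical N-Triples must also deal with literals and blank nodes.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StatementKey {
    // Content-addressable row key: the SHA-1 of the statement's
    // N-Triples serialization. Identical statements always hash to the
    // same key, so a duplicate insert is an idempotent overwrite and
    // no read-before-write is needed.
    public static String rowKey(String subject, String predicate, String object) {
        String nTriples = "<" + subject + "> <" + predicate + "> <" + object + "> .";
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest(nTriples.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 unavailable", e);
        }
    }
}
```

The trade-off described above is visible here: since the key is derived from the statement's content, changing any term produces a different key, so "updating" a statement means deleting the old row and writing a new one.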
In view of the previous considerations, the resource-oriented approach is generally a better natural fit for storing RDF data in wide-column databases. In this approach, each RDF subject/resource corresponds to a row key, and each RDF predicate/property corresponds to a column or supercolumn. Keyspaces can be used to represent RDF repositories, and column families can be used to represent named graphs.
The main trade-off with the resource-oriented approach is that some statement-oriented operations become more complex and/or slower, notably counting the total number of statements or querying for predicate or object terms without specifying a subject. To support performant basic graph pattern matching, additional index structures are needed, for example column families keyed by predicate or by object terms.
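A minimal in-memory sketch of the resource-oriented layout (the class name is my own; in HBase the outer map corresponds to rows, the inner map to column qualifiers within a column family representing a named graph):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ResourceRows {
    // row key (subject IRI) -> column (predicate IRI) -> cell values (objects).
    // A subject may have several values for the same predicate.
    private final Map<String, Map<String, List<String>>> rows = new HashMap<>();

    public void addStatement(String subject, String predicate, String object) {
        rows.computeIfAbsent(subject, s -> new HashMap<>())
            .computeIfAbsent(predicate, p -> new ArrayList<>())
            .add(object);
    }

    // Every property of a resource comes back in one row read, which is
    // what makes this layout a natural fit for subject-centric access.
    public Map<String, List<String>> getResource(String subject) {
        return rows.getOrDefault(subject, Collections.emptyMap());
    }
}
```

The weakness noted above is equally visible: answering "which subjects have predicate P?" requires scanning every row unless a predicate-keyed index is maintained alongside.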
Given that HBase is designed for row-based storage of data, the simplest thing would be to have a GSPO layout, i.e. composite row keys of the form graph|subject|predicate|object.
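A sketch of such composite keys (the NUL separator is an assumption for illustration; a real implementation would typically escape or length-prefix terms so that the encoding is unambiguous):

```java
public class GspoKey {
    // Separator assumed never to occur inside a term.
    private static final char SEP = '\u0000';

    public static String key(String graph, String subject,
                             String predicate, String object) {
        return graph + SEP + subject + SEP + predicate + SEP + object;
    }

    // Because HBase keeps rows sorted by key, a prefix scan over
    // graph + subject retrieves every statement about that subject in
    // that graph with a single range read.
    public static String subjectPrefix(String graph, String subject) {
        return graph + SEP + subject + SEP;
    }
}
```

The sorted-key property is what makes this layout useful: lookups bound on a key prefix (graph, or graph + subject, and so on) become cheap range scans, while patterns bound only on later components need either a full scan or an additional permuted index.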
In terms of actually reading and writing data, you'll most likely need to look at whatever API you want to use to manipulate the RDF and implement the necessary interfaces/classes that allow your chosen API to get data in and out of HBase.
As for querying, you'll either need to implement a SPARQL engine over your database, which could be rather complex, or use an API that evaluates SPARQL in-memory, though the latter has the overhead of first reading some or all of the data out of HBase.
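To make the in-memory option concrete, here is a toy basic-graph-pattern step over triples that have already been scanned out of the store (the class is my own sketch, not part of any SPARQL engine; a real engine would push bound terms down into HBase scans rather than filter client-side):

```java
import java.util.ArrayList;
import java.util.List;

public class TriplePatternMatch {
    public static final class Triple {
        public final String s, p, o;
        public Triple(String s, String p, String o) {
            this.s = s; this.p = p; this.o = o;
        }
    }

    // null acts as a wildcard, standing in for a SPARQL variable position.
    public static List<Triple> match(List<Triple> data,
                                     String s, String p, String o) {
        List<Triple> out = new ArrayList<>();
        for (Triple t : data) {
            if ((s == null || t.s.equals(s))
                    && (p == null || t.p.equals(p))
                    && (o == null || t.o.equals(o))) {
                out.add(t);
            }
        }
        return out;
    }
}
```

Even this trivial matcher shows the cost being described: every pattern evaluation touches the whole scanned dataset, which is why reading everything out of HBase first only scales so far.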
Personally I don't know enough about HBase to say whether such an approach is viable or sensible. If you have hardware on which you can install and run HBase, then you would most likely be better off installing and running a triple store for your RDF data.
answered 13 Apr '10, 09:41
Rob Vesse ♦
Hey Arto or Vesse, Why not use Cassandra/HBase as a secondary storage mechanism and the Jena GraphMemFaster as a Write-Through Cache? In the Cassandra case you have to use twice as much memory but it would solve your performance problems. Alternatively you could implement a Hexastore in Cassandra if you can afford the memory costs.
I just discovered this paper discussing Rya. It uses Accumulo rather than HBase, and there is no source code available, but it does seem to address the querying issue, at least for the simpler semantic relationships such as rdfs:subclassOf.
answered 28 Dec '12, 17:01