To avoid calling an external reasoner each time, I have to ask: how can we implement our own reasoning capabilities in a triple store, so that we can do deductions over it? BigOWLim does this, but how about creating our own? Is it doable? Is it hard? Will it need lots of time? Is it complicated? Will I need an ontology in OWL DL, OWL 2 RL, etc.?
And please, could someone explain what we mean by materialising inferences in a triple store?
asked 25 Jan '11, 14:17
There are a few different ways:
Tableaux approaches are good at getting a subset of inferences which (succinct) rule-based approaches can't: mainly those involving disjunctive knowledge, e.g. knowing that something is of type A or of type B without knowing which.
Next you have rule-based approaches, which are much more intuitive to get your head around. (Inference) rules are just: condition → consequent.
If the condition is met by your data, then the consequent holds in your data. An example for subclass (the rdfs9 rule) is:
(?c, rdfs:subClassOf, ?d), (?x, rdf:type, ?c) → (?x, rdf:type, ?d)
If you have two triples matching the first two patterns in your data (on the left side of →), then your data entails the third pattern (on the right side of →).
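A rule like this can be represented and applied directly over a set of triples. Here is a minimal Python sketch (the `ex:` IRIs and function names are illustrative, not from the original post), using plain 3-tuples of strings as triples:

```python
# A triple is just a 3-tuple of strings; the store is a set of triples.
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def apply_subclass_rule(triples):
    """One application of the rdfs9 subclass rule:
    (?c rdfs:subClassOf ?d), (?x rdf:type ?c) -> (?x rdf:type ?d)."""
    inferred = set()
    for (c, p1, d) in triples:
        if p1 != SUBCLASS:
            continue
        for (x, p2, c2) in triples:
            if p2 == RDF_TYPE and c2 == c:
                inferred.add((x, RDF_TYPE, d))
    return inferred

data = {
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:bob", RDF_TYPE, "ex:Student"),
}
print(apply_subclass_rule(data))
# -> {('ex:bob', 'rdf:type', 'ex:Person')}
```

A real engine would index triples by predicate rather than scanning the whole store per pattern, but the matching logic is the same.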
Rules are more than sufficient for many use-cases (see here for a nice discussion). For implementing rule-based approaches, you then have two major options.
Forward chaining means that you take the data you know and recursively add new data based on the rules. Materialisation means that you store the data the rules give you as if it were input data... you materialise (create/make concrete) the entailments. This is typically done as you load data into your index/application. For example, if we know that all Students are Persons, then when we load a triple saying that some x is a Student, we also store the inferred triple saying that x is a Person.
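A forward chainer just applies the rules until no new triples appear (a fixpoint). A minimal sketch, with illustrative data and only the single subclass rule:

```python
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def materialise(triples):
    """Forward chaining: repeatedly apply rules, storing each
    entailment as if it were input data, until a fixpoint."""
    store = set(triples)
    while True:
        new = set()
        # rdfs9: (?c subClassOf ?d), (?x type ?c) -> (?x type ?d)
        subclass = [(c, d) for (c, p, d) in store if p == SUBCLASS]
        for (x, p, c) in store:
            if p == RDF_TYPE:
                for (c2, d) in subclass:
                    if c2 == c:
                        new.add((x, RDF_TYPE, d))
        new -= store
        if not new:
            return store
        store |= new  # materialise the entailments

data = {
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:Person", SUBCLASS, "ex:Agent"),
    ("ex:bob", RDF_TYPE, "ex:Student"),
}
closed = materialise(data)
assert ("ex:bob", RDF_TYPE, "ex:Agent") in closed  # needs two rule applications
```

Note that the loop is what makes it recursive: `ex:bob rdf:type ex:Agent` only follows once `ex:bob rdf:type ex:Person` has itself been materialised.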
Backward chaining means that you (typically) leave the data as is, but when you get a query, you expand that query (according to the rules) so that it will get more answers. For example, if someone is asking for all Persons, we expand the query to also ask for all Students (and any other subclasses of Person), since the rules tell us that every Student is a Person.
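Backward chaining can be sketched as query rewriting: to answer "who has type Person?", expand the query to cover every subclass of Person, while leaving the stored triples untouched (illustrative Python, single subclass rule only):

```python
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def subclasses_of(cls, triples):
    """Reflexive transitive closure of rdfs:subClassOf, walked backwards:
    every class whose instances are entailed to be instances of cls."""
    found = {cls}
    frontier = {cls}
    while frontier:
        frontier = {c for (c, p, d) in triples
                    if p == SUBCLASS and d in frontier} - found
        found |= frontier
    return found

def instances_of(cls, triples):
    """Answer the query (?x rdf:type cls) by expanding it over subclasses."""
    classes = subclasses_of(cls, triples)
    return {x for (x, p, c) in triples if p == RDF_TYPE and c in classes}

data = {
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:bob", RDF_TYPE, "ex:Student"),
    ("ex:alice", RDF_TYPE, "ex:Person"),
}
assert instances_of("ex:Person", data) == {"ex:bob", "ex:alice"}
```

The trade-off versus materialisation: no extra storage and no stale inferences after updates, but every query pays the expansion cost at query time.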
If you want to implement rules, there are four main rulesets which are common amongst implementers:
Systems also commonly mix and match rules from the various profiles depending on their requirements.
There are also hybrid approaches... but that's further down the road.
It depends. I don't know what your use-case is, but my advice would be to start by implementing a simpler rule-based profile of reasoning, like RDFS (you can even leave out the more esoteric stuff and just do the four rules rdfs2, rdfs3, rdfs7 and rdfs9, which support the main entailments of rdfs:domain, rdfs:range, rdfs:subPropertyOf and rdfs:subClassOf).
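To give a feel for how small such a ruleset is, the four rules named above can be written as data and run by one generic pattern matcher. A sketch under illustrative assumptions (variables start with `?`; real engines index by predicate instead of scanning):

```python
# Each rule: ([premise patterns], conclusion pattern); '?x' marks a variable.
RULES = {
    "rdfs2": ([("?p", "rdfs:domain", "?c"), ("?x", "?p", "?y")],
              ("?x", "rdf:type", "?c")),
    "rdfs3": ([("?p", "rdfs:range", "?c"), ("?x", "?p", "?y")],
              ("?y", "rdf:type", "?c")),
    "rdfs7": ([("?p", "rdfs:subPropertyOf", "?q"), ("?x", "?p", "?y")],
              ("?x", "?q", "?y")),
    "rdfs9": ([("?c", "rdfs:subClassOf", "?d"), ("?x", "rdf:type", "?c")],
              ("?x", "rdf:type", "?d")),
}

def match(pattern, triple, binding):
    """Try to unify one pattern with one triple, extending binding."""
    b = dict(binding)
    for pat, term in zip(pattern, triple):
        if pat.startswith("?"):
            if b.get(pat, term) != term:
                return None
            b[pat] = term
        elif pat != term:
            return None
    return b

def apply_rules(store):
    """One round of inference: every grounding of every rule's premises."""
    inferred = set()
    for premises, conclusion in RULES.values():
        bindings = [{}]
        for pattern in premises:
            bindings = [b2 for b in bindings for t in store
                        for b2 in [match(pattern, t, b)] if b2 is not None]
        for b in bindings:
            inferred.add(tuple(b.get(term, term) for term in conclusion))
    return inferred - store

data = {
    ("ex:teaches", "rdfs:domain", "ex:Teacher"),
    ("ex:alice", "ex:teaches", "ex:bob"),
}
print(apply_rules(data))
# -> {('ex:alice', 'rdf:type', 'ex:Teacher')}
```

Looping `apply_rules` until it returns an empty set would give you forward-chaining materialisation over all four rules at once.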
If you stick to rules, it shouldn't take too much time, and it shouldn't be so complicated... but that depends on how dynamic your data is, what scale you require, what rules you want to apply, etc.
Hopefully the above covers it... it just means creating new triples which are inferred by your rules/data.
Reasoners are mostly implemented using so-called tableaux algorithms. In general I would say: yes, it is hard to do, it needs lots of time, and it is complicated.
Lots of smart people have invested quite some time into developing reasoners, so it is nothing you can do just 'en passant'; you need to dig deep into description logics and into the algorithms for doing different sorts of reasoning on these logics.
There are already related questions here:
A good introduction to description logics is Enrico Franconi's course on the topic. If you're a Java developer, you might want to take a look at the source code of Pellet, a Java-based reasoner which implements OWL DL reasoning.
That simply means that all the triples which are inferred by the reasoner are written back to the dataset. E.g. if you have a triple saying that some x is a Student,
and your ontology says that every Student is a Person,
then the reasoner will infer the triple saying that x is a Person,
and will write this triple back to your data. Depending on your scenario that can be troublesome, as you also need to take care that when entities are deleted from your dataset, all triples that were inferred from them are removed too, or at least re-checked.
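One simple (if brute-force) way to handle the deletion problem is to keep the explicit triples separate from the materialised ones and re-run materialisation after every delete. An illustrative sketch (class and IRI names are assumptions, and production systems use incremental truth maintenance instead):

```python
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def materialise(explicit):
    """Closure under the single subclass rule (rdfs9), for illustration."""
    store = set(explicit)
    changed = True
    while changed:
        changed = False
        for (c, p, d) in list(store):
            if p != SUBCLASS:
                continue
            for (x, q, c2) in list(store):
                if q == RDF_TYPE and c2 == c and (x, RDF_TYPE, d) not in store:
                    store.add((x, RDF_TYPE, d))
                    changed = True
    return store

class Store:
    """Keeps explicit triples apart from derived ones; derived triples are
    recomputed on delete, so stale inferences never survive a retraction."""
    def __init__(self, triples):
        self.explicit = set(triples)
        self.closed = materialise(self.explicit)

    def delete(self, triple):
        self.explicit.discard(triple)
        self.closed = materialise(self.explicit)  # brute-force re-derivation

s = Store({("ex:Student", SUBCLASS, "ex:Person"),
           ("ex:bob", RDF_TYPE, "ex:Student")})
assert ("ex:bob", RDF_TYPE, "ex:Person") in s.closed
s.delete(("ex:bob", RDF_TYPE, "ex:Student"))
assert ("ex:bob", RDF_TYPE, "ex:Person") not in s.closed
```

Recomputing from scratch is only viable at small scale, but it makes the correctness argument trivial, which is a reasonable place to start.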
answered 25 Jan '11, 15:47