I want to create an RDF dataset based on a historical corpus of letters. The domain is quite simple:
This simple ontology would yield a social network, to be published online and analysed to discover clusters, spatio-temporal trends, etc., and to be explored through a user-friendly web interface by general users (mostly historians).
What array of technologies would you select for this job?
FOAF seems a suitable ontology for modelling people, but it is not tailored to the historical domain.
I am particularly interested in visual modelling tools that could be used by non-programmers to enter the data in the dataset.
Thanks in advance, Mulone
Stanford's project Mapping the Republic of Letters should have an RDF data model. However, I haven't been able to find the specification of the model, only hints, such as rudimentary mentions of their schema or a photo of the data model diagram. Looking at the tools they are using, it seems they also have some software for schema maintenance. Maybe that's what you're looking for. Perhaps you can find the vocabulary they're using, or ask them whether they could provide it to you?
Finally, you can always build the schema on your own, out of existing vocabularies, using the techniques described in Ontology Dowsing.
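To make the "reuse existing vocabularies" idea concrete, here is a minimal sketch that describes a person and a letter by mixing FOAF and Dublin Core terms. The base URI, resource names, and the choice of `dcterms:creator` for the sender are illustrative assumptions; plain Python is used to emit N-Triples so the example runs without any library, though in practice you would likely use something like rdflib.

```python
# Well-known vocabulary namespaces (real URIs).
FOAF = "http://xmlns.com/foaf/0.1/"
DCTERMS = "http://purl.org/dc/terms/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
# Hypothetical base URI for the project's own resources.
BASE = "http://example.org/letters/"

def triple(s, p, o):
    """Serialize one triple in N-Triples syntax; URIs get angle brackets."""
    obj = o if o.startswith("<") or o.startswith('"') else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

lines = [
    # A person from the corpus, typed with FOAF.
    triple(BASE + "person/erasmus", RDF + "type", FOAF + "Person"),
    # A letter linked to its sender via Dublin Core.
    triple(BASE + "letter/42", DCTERMS + "creator", BASE + "person/erasmus"),
]
print("\n".join(lines))
```

The point is that nothing new had to be invented for people or authorship; only the project-specific resources live under the project's own namespace.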
answered 11 Jan '13, 12:50
If you're looking for an entirely non-technical solution to start building your vocabulary (if you can't find an existing one to use), you can use a wiki system. After all, a vocabulary that can be used with RDF only needs to make URIs available to identify the things described in triples. Take http://schema.org/ as an example: with a wiki system you can create a wiki page for each of the elements you need in your vocabulary.
A free, really basic way to start a wiki is Google Sites, so that should get you started with the vocabulary.
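To illustrate the point that a vocabulary term is nothing more than a URI, the sketch below uses a made-up wiki page address as a property in a triple. The Google Sites URL and the resource URIs are placeholders, not real pages.

```python
# A vocabulary term can simply be the URL of a wiki page documenting it.
# This address is a hypothetical placeholder, not a real site.
TERM_SENT_FROM = "https://sites.google.com/site/mylettersvocab/sentFrom"

def triple(s, p, o):
    """Serialize a triple of three URIs in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

t = triple("http://example.org/letters/letter/42",
           TERM_SENT_FROM,
           "http://example.org/places/rotterdam")
print(t)
```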
Then, of course, you need to make your data available as RDF. Depending on the format you have it in (spreadsheets or a relational database), you need to convert it into triples such as:
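As a hedged sketch of that conversion step, the following turns spreadsheet-style rows (an inline CSV standing in for an exported sheet) into N-Triples. The column names, base URI, and the use of Dublin Core's `creator` and `date` properties are assumptions for illustration, not the asker's actual schema.

```python
import csv
import io

# Inline CSV standing in for a spreadsheet export; columns are assumed.
CSV_DATA = """letter_id,sender,date
L001,Erasmus,1516-02-01
L002,More,1516-03-15
"""

BASE = "http://example.org/letters/"   # hypothetical project namespace
DCTERMS = "http://purl.org/dc/terms/"  # Dublin Core terms

def uri(s):
    """Wrap a URI in angle brackets for N-Triples."""
    return "<" + s + ">"

triples = []
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    letter = BASE + "letter/" + row["letter_id"]
    # Link each letter to its sender and its date.
    triples.append(f"{uri(letter)} {uri(DCTERMS + 'creator')} {uri(BASE + 'person/' + row['sender'])} .")
    triples.append(f'{uri(letter)} {uri(DCTERMS + "date")} "{row["date"]}" .')

print("\n".join(triples))
```

For a real relational database the loop would stay the same shape, with the CSV reader swapped for a database cursor.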