How are the principles of Linked Data and the Semantic Web related to the principles of REST?

Let's take the constraints of the REST architectural style as laid out here:

  • Resource Identification
  • Uniform Interface
  • Self-Describing Messages
  • Hypermedia Driving Application State
  • Stateless Interactions (see here for a good explanation of what stateless means in this context)

Resource Identification is clearly addressed in points 1 (URIs) and 2 (HTTP URIs) of the Linked Data principles as defined by timbl. However, the explicit suggestion to use HTTP URIs runs against the REST feature of a uniform, generic interface between components ("A REST API should not be dependent on any single communication protocol", see here). Identification is separated from interaction.

Is the common, layered Semantic Web technology stack an implementation of a Uniform Interface re. the REST principles? Or is it only HTTP as communication protocol? And what does "The same small set of operations applies to everything" then mean? Do I have to enable processing of every operation on every (information) resource? Or does it mean that I only have to provide uniform behaviour when processing operations on (information) resources, e.g., if a specific operation is not possible or allowed on a specific resource, then the component has to communicate this in a uniform way?
[edit] A verification re. the issue of how the small set of operations of the Uniform Interface has to be supported (using HTTP as implementation example):

HTTP operations are generic: they are allowed or not, per resource, but they are always valid. (see here)

This is in accord with my last statement.
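This reading can be sketched as follows; the resource paths and their enabled operations below are invented for illustration. The set of operations is uniform, each resource only decides which of them are enabled, and refusals are communicated in one uniform way, as HTTP does with "405 Method Not Allowed" plus the Allow header:

```python
# Sketch: a uniform way to reject operations that are not enabled on a
# specific resource. Resource paths and enabled methods are made up.

# Per-resource sets of enabled operations; the *vocabulary* of
# operations is fixed and shared by every resource.
ENABLED = {
    "/reports/2011": {"GET", "HEAD"},                     # read-only resource
    "/drafts/42": {"GET", "HEAD", "PUT", "DELETE"},       # writable resource
}

def handle(method, path):
    """Return (status, headers); unknown paths and disabled methods
    are reported uniformly, never with a resource-specific error."""
    if path not in ENABLED:
        return 404, {}
    if method not in ENABLED[path]:
        # The operation is still *valid* -- it is just not allowed here,
        # and the Allow header tells the client what is.
        return 405, {"Allow": ", ".join(sorted(ENABLED[path]))}
    return 200, {}

print(handle("DELETE", "/reports/2011"))  # (405, {'Allow': 'GET, HEAD'})
print(handle("GET", "/reports/2011"))     # (200, {})
```

The point of the sketch is that the error shape is identical for every resource; only the data table differs.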

As the common, layered Semantic Web technology stack uses HTTP as communication protocol, it uniformly defines/provides the small set of operations of the Uniform Interface, too. However, the media types define processing models ("Every media type defines a default processing model.", see here). Thereby, layered encoding is possible (see here), e.g., "application/rdf+turtle" (a hypothetical media type; the registered one is "text/turtle"): the RDF model as knowledge representation structure (data model) and Turtle as syntax (other knowledge representation languages, e.g., RDF Schema, are provided "in-band", via namespace references). Furthermore,

The media type identifies a specification that defines how a representation is to be processed. (see here)
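The quote above can be sketched as a client-side dispatch table: processing is keyed on the media type of the representation, not on the resource it came from. The handlers below are simplified placeholders (the Turtle "parser" in particular is only a stand-in):

```python
# Sketch: a client selects a processing model based on the media type of
# a response. Handlers and media types are illustrative only.

import json

def process_json(body):
    return json.loads(body)

def process_turtle(body):
    # Placeholder for a real Turtle parser: just count statement lines.
    return {"triples": sum(1 for line in body.splitlines()
                           if line.strip().endswith("."))}

PROCESSING_MODELS = {
    "application/json": process_json,
    "text/turtle": process_turtle,
}

def process(content_type, body):
    media_type = content_type.split(";")[0].strip()
    try:
        handler = PROCESSING_MODELS[media_type]
    except KeyError:
        # Analogous to an agent that has no way to learn an unknown
        # media type's processing model automatically.
        raise ValueError("no processing model for %r" % media_type)
    return handler(body)

print(process("application/json", '{"a": 1}'))   # {'a': 1}
print(process("text/turtle", "<s> <p> <o> ."))   # {'triples': 1}
```

The `ValueError` branch corresponds to the situation discussed below: without a machine-processable specification, an agent hitting an unknown media type can only fail.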

Side note: I know there is some progress in providing media type specifications as resources with a URI. However, as far as I know, their resource URIs lack a good machine-processable, dereferenceable specification description, e.g., there is no machine-processable HTML specification that enables a machine agent to know that "the anchor elements with an href attribute create a hypertext link that, when selected, invokes a retrieval request (GET)" (this issue is derived from a community statement and is not really verified; however, I currently would agree with it ;) ; please correct me if this assertion is wrong). All in all, an agent must be able to automatically learn the processing model of a previously unknown media type, if wanted (analogous to the HTTP Upgrade header field). I know that there is some progress (discussion) in the TAG community re. a better introduction of new media types.

To summarize, the important aspect is that the media type specifications, and the knowledge representation language specifications in general, also have to define the processing model of specific link types (e.g., href in HTML) in a machine-processable way (is this currently really the case? - I would say no!). This is addressed by the constraints "Self-Describing Messages" and "Hypermedia Driving Application State" (a.k.a. HATEOAS).
In other words, I would (currently) conclude that only the methods of the HTTP protocol are an implementation of a set of operations of a Uniform Interface, and the Semantic Web knowledge representation languages are related to the other two constraints. [/edit]

Self-Describing Messages are enforced for machine processing by using the common knowledge representation languages of the Semantic Web (i.e., the RDF model, RDF Schema, OWL, RIF) as basis, and all knowledge representation languages used (incl. further Semantic Web ontologies) are referenced in this 'message'. This is somehow generalized in the third Linked Data principle as defined by timbl ("provide useful information, using the standards").

The fourth Linked Data principle as defined by timbl ("Include links to other URIs") is somehow related to Hypermedia Driving Application State of the REST principles. This principle can again be powered for better machine processing by using the common knowledge representation languages of the Semantic Web as basis. However, I'm a bit unclear about how the links drive my application state. Nevertheless, I guess that the application state changes when navigating to a resource by dereferencing a link (HTTP URI).
[edit] This is explained in the introduction section of Principled Design of the Modern Web Architecture:

The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: a network of Web pages forms a virtual state machine allowing a user to progress through the application by selecting a link or submitting a short data-entry form, with each action resulting in a transition to the next state of the application by transferring a representation of that state to the user.
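This quote can be sketched as a tiny state machine: the client's state is the representation it retrieved last, and each transition is the dereferencing of a link found in that representation. The in-memory dictionary below stands in for real dereferenceable HTTP URIs; the structure of the representations is invented for illustration:

```python
# Sketch of "hypermedia driving application state". The "web" below is
# an in-memory stand-in for HTTP URIs and their representations.

WEB = {
    "http://example.org/": {"title": "home",
                            "links": {"next": "http://example.org/a"}},
    "http://example.org/a": {"title": "page a",
                             "links": {"next": "http://example.org/b"}},
    "http://example.org/b": {"title": "page b", "links": {}},
}

def dereference(uri):
    """Stand-in for an HTTP GET on the URI."""
    return WEB[uri]

def follow(start_uri, rel, limit=10):
    """Walk the application through its states by following `rel` links."""
    state = dereference(start_uri)
    visited = [state["title"]]
    while rel in state["links"] and len(visited) < limit:
        state = dereference(state["links"][rel])   # state transition
        visited.append(state["title"])
    return visited

print(follow("http://example.org/", "next"))  # ['home', 'page a', 'page b']
```

The client needs no out-of-band knowledge beyond the entry URI and the meaning of the link relation; everything else is discovered in the representations themselves.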


Stateless Interaction is not really covered by the Linked Data principles as defined by timbl, is it? However, when realizing "state as a resource" (cf. here), I can again use the common knowledge representation languages of the Semantic Web as basis for describing states, and use HTTP URIs to make these state resources accessible as well.
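A minimal sketch of the "state as a resource" idea, with invented URIs and structures: the state description becomes an addressable resource that later interactions reference explicitly, so the server keeps no conversational session:

```python
# Sketch of "state as a resource": instead of keeping conversational
# state in a server-side session, the state itself becomes an
# addressable resource with its own URI. URIs are made up.

import itertools

STATES = {}                    # the server's resource space, not a session
_counter = itertools.count(1)

def create_state(description):
    """POST-like: store a state description, mint a URI for it."""
    uri = "http://example.org/states/%d" % next(_counter)
    STATES[uri] = description
    return uri

def get_state(uri):
    """GET-like: any client holding the URI can retrieve the state."""
    return STATES[uri]

uri = create_state({"query": "soccer", "page": 2})
# Each later interaction carries the URI; nothing is implicit on the server.
print(uri)             # http://example.org/states/1
print(get_state(uri))  # {'query': 'soccer', 'page': 2}
```

Since the state has a URI, it could of course also be described in RDF and interlinked like any other resource.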

Would you agree with (parts of) my interpretation?
Finally, are the principles of Linked Data really intended to be read-only? I thought read and write would fit the principles of REST better, wouldn't they?

Source where this topic is also discussed:

asked 24 Jan '11, 15:25 by zazi

edited 17 Apr '11, 08:16

Two points for you to consider:

  1. Linked Data tends to be big datasets collected/curated by some institution that then publishes them for others to use. The organisation publishing the data will often be using it for its own internal purposes, so in a lot of cases allowing write access to arbitrary users is not an option, since it would affect their own use of the data (though not always).
  2. The technologies for writing the Semantic Web (and thus Linked Data) via REST are only just becoming standardised, i.e. SPARQL Update and the SPARQL Uniform HTTP Protocol for RDF Graph Management. These are still undergoing the standardisation process, which will most likely be completed later this year.

Yes, writing data can be done on the Semantic Web, but the technologies for it aren't as widely available as those for just reading the data.
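To illustrate point 2: a write via the (then draft) SPARQL Uniform HTTP Protocol for RDF Graph Management boils down to a plain HTTP PUT of a serialized graph to a graph-addressing URI. The sketch below only constructs such a request; the endpoint and graph URIs are made up, and nothing is sent over the network:

```python
# Sketch: building (not sending) a graph-store PUT request with the
# standard library. Endpoint and graph URIs are invented.

from urllib.parse import urlencode
from urllib.request import Request

graph_uri = "http://example.org/graphs/people"
store = "http://example.org/rdf-graph-store"

turtle = (
    "@prefix foaf: <http://xmlns.com/foaf/0.1/> .\n"
    "<http://example.org/people/alice> foaf:name \"Alice\" .\n"
)

req = Request(
    url="%s?%s" % (store, urlencode({"graph": graph_uri})),
    data=turtle.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
    method="PUT",            # replace the named graph's content
)

print(req.get_method())                # PUT
print(req.full_url)
print(req.get_header("Content-type"))  # text/turtle
```

Note that this stays entirely within the uniform interface of HTTP: the graph is just another resource manipulated with the generic methods.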
answered 24 Jan '11, 16:03 by Rob Vesse ♦

Well, I'm more or less aware of all these things. I "just" want to clarify my understanding of the (principles of the) REST architectural style and how it aligns with the principles of Linked Data (which are independent of Semantic Web technology) and with the Semantic Web technologies, because they can be used to apply the Linked Data principles. Generally, write access must of course be controlled by an authentication process, but this is quite natural, isn't it?

(24 Jan '11, 20:26) zazi

Yes, of course it is, and there are efforts in progress to standardise upon a Linked Data based authentication mechanism called WebID. It will be a generally applicable successor to OpenID, but based upon RDF and Semantic Web technologies.

(25 Jan '11, 09:01) Rob Vesse ♦

Sorry, again. I don't want to be harsh, but yes, I'm also aware of WebID. However, the topic of this thread should be the clarification of the relation between Linked Data/the Semantic Web and REST ;)

(25 Jan '11, 10:03) zazi

Concerning read-only: for Linked Data currently found in the wild, yes, this is the case, but people are working on solutions for a write-enabled Linked Data Web. Rob already mentioned SPARQL as one of the key components; for the big picture see Realizing a write-enabled Web of Data. In fact, we're now launching the WebID Incubator Group at W3C, another pivotal piece in the overall puzzle regarding a full-fledged write-enabled Web of Linked Data.

answered 25 Jan '11, 09:02 by Michael Hausenblas

Here is another approach, which differs from the mainstream REST vs RDF interpretation.

What if we have a domain model relying on RDF, split it into domain objects, and expose the subset of triples describing each object via a REST interface? And allow writing/updating triples back via the REST interface? And also allow generating new triples by launching calculations that create new REST resources and new triples?

And consider using SPARQL on the client side of REST :) The implementation of the server side is not restricted to being a SPARQL endpoint, or even to being implemented as a triple store. It just needs to offer an RDF serialization of its resources, instead of or in addition to XML, microformats and HTML.
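Offering an RDF serialization in addition to HTML is ordinary content negotiation on the Accept header; a minimal sketch (with q-value handling deliberately simplified and the list of available formats invented):

```python
# Sketch: choosing a representation format from the Accept header.
# Available media types and the negotiation rules are simplified.

AVAILABLE = ["text/html", "text/turtle", "application/rdf+xml"]

def negotiate(accept_header, default="text/html"):
    """Pick the first acceptable media type the server can produce."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()   # drop q-values
        if media_type in AVAILABLE:
            return media_type
        if media_type == "*/*":
            return default
    return None  # would be "406 Not Acceptable" in a real server

print(negotiate("text/turtle, */*;q=0.1"))  # text/turtle
print(negotiate("application/json, */*"))   # text/html
```

A real server would honour q-values and partial wildcards like `text/*`; the point here is only that the same resource URI can serve both the human-readable and the RDF representation.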

Then we can use conventional technologies for implementing read/write REST services, but expose objects via their RDF representation, and link to the big LOD cloud when necessary. If the LOD cloud finds this useful at a certain point, it is trivial to retrieve the RDF representations of these REST resources, in a similar way to how search engines gather their information, rather than compiling the LOD cloud manually.

That's what we have been doing at http://opentox.org, trying to develop a REST/RDF web services platform for predictive toxicology with the REST/RDF OpenTox API.

This has also been presented at last year's ACS RDF Symposium: RESTful RDF services for predictive toxicology.

This answer is marked "community wiki".

answered 24 Mar '11, 11:00 by ngn

edited 24 Mar '11, 11:38

I'm unsure whether your description really differs from the proposed REST + Linked Data comparison. Furthermore, the LOD cloud is not part of this discussion - it is rather about Linked Data in general (see, e.g., http://smiy.wordpress.com/2011/02/17/a-generalisation-of-the-linked-data-publishing-guideline/). Furthermore, I'm unsure about the RESTfulness of your web service. I don't endorse the interpretation of HATEOAS that is given in the referenced presentation (http://vedina.users.sourceforge.net/publications/2010/ACS-RDF-NJ.pdf). ...

(24 Mar '11, 11:37) zazi

... Besides, I noticed from a short look at your API documentation that you make use of parameters such as "content-type". This is a feature that is commonly handled by the HTTP protocol via the Accept header. You may have a look at http://nordsc.com/ext/classification_of_http_based_apis.html to classify your web service.

(24 Mar '11, 11:43) zazi

We do handle the content type via the Accept header.

(24 Mar '11, 11:47) ngn

Yes, it doesn't really differ, besides lifting the restriction that services should offer a SPARQL endpoint.

The hypermedia constraint basically means that the client can follow the links and figure out what it should do next without any external documentation. While this is feasible for the human Web, it is hardly true for an automatic client. For example, a user could probably find out what to paste into the forms at one of the OpenTox REST services (http://apps.ideaconsult.net:8080/ambit2/algorithm/LR), but there is no generic way for a client to achieve this without some preliminary information.

(24 Mar '11, 11:49) ngn

HATEOAS and REST classifications were extensively discussed on the rest-discuss list, and are still controversial, to the point of somebody setting up this service: http://isitrestful.com/ :)

(24 Mar '11, 11:56) ngn

Regarding HATEOAS: I think there are clear statements given by Roy T. Fielding (see http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven). From a general Linked Data publishing guideline view (which is independent of concrete Semantic Web technologies), there is no such restriction that services should offer a SPARQL endpoint. Regarding the "content-type" issue: maybe the description at, e.g., http://opentox.org/dev/apis/api-1.1/Feature is then a bit misleading (it suggests to me that it is a parameter).

(24 Mar '11, 12:12) zazi

Correct, there is no such restriction, but it is somehow generally assumed that Linked Data == a triple store solution + SPARQL (even on this page). Thank you, this part of the documentation should be clarified - this particular entry should denote that the Content-Type header sent on POST should reflect the MIME type of the content posted to the service.

(24 Mar '11, 13:54) ngn

(imho) REST is a great design from a network architecture point of view, but some aspects are a bit underspecified, which leads to all the different interpretations and classifications. Even the post cited above is not really helpful when one starts to implement a REST service. Commenting on each point in detail may need another thread, though.

(24 Mar '11, 14:04) ngn

Yes, trying to grasp the REST architectural style often causes much confusion. For the moment, I'm not aware of any service/API that completely fulfils the constraints of the REST architectural style. I'm in doubt whether this is even possible. Nevertheless, your project seems to be very interesting regarding the aspects of REST and Linked Data. I'm currently quite impressed. Well done!

(24 Mar '11, 15:15) zazi

Thank you! There is a public mailing list, feel free to join discussions.

(24 Mar '11, 15:30) ngn

Even without mentioning REST explicitly, "Toward a Basic Profile for Linked Data" essentially merges ideas from the REST and Semantic Web worlds.

answered 29 Dec '11, 04:27 by ngn

Unfortunately, there is also a problem with large queries, which has led to the SPARQL standard advocating the use of POST where you should be using GET.

answered 03 Feb '11, 19:21 by William Greenly

Following the "hypermedia as the engine of application state" constraint, I should be able to use an "advanced search" interface (e.g. similar to that of Google), which guides me in constructing the query. The content of this form can then be sent via HTTP POST to the server, which processes the SPARQL query, right?

(04 Feb '11, 00:17) zazi

An interesting discussion of this topic can be found in this thread on the rest-discuss mailing list: http://tech.groups.yahoo.com/group/rest-discuss/message/17281 (parts of the topic are also covered by the predecessor thread http://tech.groups.yahoo.com/group/rest-discuss/message/17242).

(04 Feb '11, 09:46) zazi

I think Data Wikis are exciting, but I think many Linked Data publishers have a distinct point of view, and that's a good thing.

Have you ever heard of a game called "Association Football"?

You know the game, but you probably don't know that name. I like watching the World Cup and I've coached soccer for kids, but until I became a semantician I never knew that, according to Wikipedia and Freebase, the game that many people call soccer or football (or some derivative of football) is officially called "Association Football".

Some political decision was made to call it that on Wikipedia, and I guess that's how it is, but if I were making a service aimed at ordinary people, I think I'd use a different name for that sport. I could certainly use the Freebase API to change that name, but it would get changed back. Organizations that want to maintain a stable POV won't want a write interface.

answered 21 Sep '11, 15:19 by database_animal ♦

edited 21 Sep '11, 16:29

Sorry, but this does not really have anything to do with the stated question, does it? (btw, write access can be secured by an access control mechanism)

(21 Sep '11, 17:29) zazi

To make something scalable, write access on a data wiki needs defense in depth: you need access control to make sure people are authenticated and to prevent changes that threaten the integrity of the system as a whole.

On the other hand, you need protection against spam and other things that endanger the correctness and POV of the system.

(21 Sep '11, 19:36) database_animal ♦

Yes, I know. However, the question of this thread is: How are the principles of Linked Data as a data publishing guideline (independent of Semantic Web technology) and the Semantic Web as a common, standardized technology stack for machine-processable knowledge representation and management in the Web related to the principles of REST as an architectural design guideline for distributed hypermedia systems? (I guess the majority is aware that an authentication and authorisation mechanism is a necessary requirement for a write-enabled system.)

(22 Sep '11, 03:29) zazi

Relevant references: http://openjena.org/wiki/Fuseki and http://code.google.com/p/djubby/wiki/ReadWriteLinkedData

answered 04 Feb '11, 10:08 by Wikier


Please explain in a bit more detail the relation of the referenced frameworks to the question asked. Merely posting references isn't very helpful on a Q&A site. To get more into detail: can you explain, for instance, how the "hypermedia as the engine of application state" constraint is fulfilled in Fuseki?

(04 Feb '11, 10:19) zazi
question was seen: 9,665 times

last updated: 29 Dec '11, 04:27