Posted by: John Erickson | February 4, 2010

DOIs, URIs and Cool Resolution


Responses

  1. John – as I already mentioned on Twitter, the answer is at http://www.openarchives.org/ore/1.0/http.html#RedirectCN. Apart from that I think your question is too centered on Handle System technicalities. I am not sure too many people care about that one. The core question is how you are modeling what DOIs represent on the web. My answer is that HTTP DOIs identify ORE Aggregations. I think that is the DOI’s entry point into the Linked Data and Semantic Web world.

  2. Hi John:

    This is a real nice post on a very timely issue. And thanks for both the background and the solution you propose.

    I am a little confused in one respect, though. It seems to me there are two separate (but related) points at hand: 1) a mechanism (e.g. status 303/hash URI) to relate the URI for a RWO and the URI for its associated description, and 2) a mechanism (e.g. HTTP conneg) to provide alternate machine/user descriptions. I get the feeling that these two are being conflated somehow and that more prominence is being given to #2. In fact, I’m not strictly sure that a machine-readable description is actually *required*, although it is very highly desirable.

    So, suppose one just falls back on the default (“human”) description provided by the regular HTTP dereference of an Information Resource, and focuses on the first point.

    The TAG resolution on httpRange-14 spells out very clearly what a 303 resolution is for (“any resource”), but remains quiet on other 3xx codes. It is interesting to note that RFC 2616 has this to say on 302:

    “However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method.”

    This is obviously the sense in which current DOI HTTP URIs are being used. But they still return a 302 and not a 303 (albeit intending 303). And httpRange-14 says nothing about 302 per se.
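    As a minimal sketch of the distinction (a hypothetical resolver with an invented registry and URLs – not the actual proxy code), a 303-returning resolution would look something like this:

```python
# Toy handle-proxy resolver, per the httpRange-14 reading above:
# a 303 "See Other" signals that the DOI may name any resource (including
# a non-information resource), and Location points at a description of it.

def resolve_doi(doi, registry):
    """Return (status, headers) for a DOI lookup against a toy registry."""
    target = registry.get(doi)
    if target is None:
        return 404, {}
    # 303 rather than 302: "see this other URI for a description of
    # the thing this URI names", not "the thing temporarily lives there".
    return 303, {"Location": target}

registry = {"10.1234/567": "http://example.com/articles/567"}
status, headers = resolve_doi("10.1234/567", registry)
# status == 303, headers["Location"] == "http://example.com/articles/567"
```

    The only behavioral change from today’s proxy is the status code; the redirect target is unchanged.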

    Tony

  3. Hi John,

    Thanks for this article. The clear and thorough explanation is appreciated. However the big question in my mind is: why is the handle proxy a better place for the negotiation than the endpoint URL?

    It seems like the owner of the resource would be a better negotiator.

    Thanks,
    Sean

    • Thanks for the kind words!

      RE your question, I believe having the “owner” of the “resource” do the content negotiation misses the point of what the DOI (and the DOI-based HTTP URI) represents, and more importantly what clients and services THINK HTTP URIs “mean.”

      You don’t want clients to require special knowledge about the HTTP URI! If the universe is forced to treat DOI-based HTTP URIs that name data, things or documents differently than other HTTP URIs representing the same things, then there is a problem. This is the situation today (I believe) with how the HS HTTP proxy handles content negotiation. If however the RA/administrator has the option to return different types for different Accept header field values, then the DOI-based URIs become indistinguishable from “normal” HTTP URIs.

      • I might be missing something, but I don’t see how Sean’s proposal requires any special knowledge about the HTTP URI on the part of the client.

        If a DOI 10.example/foo represents a resource which can be accessed as either HTML or RDF, then one could have http://dx.doi.org/10.example/foo redirect to http://example.com/foo which *itself* returns the appropriate type (or even another redirection) based on the Accept: header. From the client’s point of view, if you ask for HTML you get HTML, if you ask for RDF you get RDF. That’s no different, from the point of view of the client, from having http://dx.doi.org/10.example/foo redirect to http://example.com/foo.html in one case and http://example.com/foo.rdf in the other.
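        A toy sketch of that endpoint-side negotiation (the URLs and variant table here are illustrative, not example.com’s real behavior):

```python
# Endpoint-side conneg, as in the scenario above: the proxy blindly
# redirects to one URL, and that endpoint inspects Accept itself.

def negotiate(accept_header):
    """Pick a representation URL for the given Accept header (illustrative)."""
    variants = {
        "application/rdf+xml": "http://example.com/foo.rdf",
        "text/html": "http://example.com/foo.html",
    }
    # Walk the Accept header in order, ignoring quality parameters.
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()
        if media_type in variants:
            return variants[media_type]
    return variants["text/html"]  # default representation

negotiate("application/rdf+xml")  # -> "http://example.com/foo.rdf"
```

        From the client’s perspective this is indistinguishable from the proxy doing the negotiation itself, apart from the extra hop.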

        Thanks,
        Robert

      • (This is a response to Robert Tupelo-Schneck’s comment on February 8, 2010 at 5:31 pm)

        I argue that this *does* miss the point. Why should the client double-pump (to use a football analogy)? If the client “wants” the server — it doesn’t know it’s a DOI — to provide RDF/XML (for example), why shouldn’t the HTTP proxy turn around and give it the URI for the appropriate representation? You’re suggesting that the proxy first return something the client didn’t ask for, so the client must re-negotiate before getting what it wants.

        My larger point is, I believe there is an opportunity here to employ the HS (and the DOI) in a way that we’ve been talking about for (in my case) about 14 years — really being entity identifiers for multi-faceted objects and not expensive URL proxies…If all the proxy does is pass off content negotiation (when in fact it could be much smarter), in the coming Web of Objects the HS risks looking less attractive…

      • Hi John,

        I’m not (yet) arguing against the proxy being smarter about content negotiation, however I’m still not sure what makes it better than the endpoint URL doing the negotiation.

        I can see a few different ways for the Handle/DOI proxy to be in the loop when doing content negotiation:

        1) The proxy is a simple handle-to-URL resolver and the resulting URL (example.com in Robert’s example) takes care of content negotiation. This seemed appropriate to me because the URL’s owner could return either XML+RDF directly or a redirect to a separate RDF URL.

        2) The proxy looks for a requested content-type in the request and uses it to determine if the request should be redirected to a different URL in the resolved handle. This has either the same or one fewer “bounce” than method #1 above but also requires that the content owner register different URLs in their handles, as opposed to managing the negotiation locally in their own HTTP server.

        3) The proxy looks for a requested content-type in the request, extracts the RDF data directly from the handle values and returns it to the user. This doesn’t require any HTTP communication apart from the initial proxy request. But it does require pushing potentially lots of data into the handle system, where it is likely harder to manage than if it were within the content owner’s HTTP server.

        I assume that nobody is interested in option #3 and that you are suggesting option #2 because option #1 is already there. While on one hand I prefer to keep the proxy as a simple and fast tool, it is probably pretty easy to enable option #2 using the relatively new multiple-resolution handle value which already does a sort of (location, not content) negotiation.
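        As a rough sketch of option #2 (the record layout and type names here are invented for illustration, not actual handle types):

```python
# Sketch of option #2: the proxy keeps per-type URLs in the handle
# record and redirects based on the requested content type.
# The field names "URL" and "URL.rdf" are purely illustrative.

handle_record = {
    "URL": "http://example.com/foo",          # default redirect target
    "URL.rdf": "http://example.com/foo.rdf",  # RDF/XML variant, if registered
}

def proxy_redirect(record, accept):
    """Return (status, location) chosen by the proxy from the record."""
    if "application/rdf+xml" in accept and "URL.rdf" in record:
        return 303, record["URL.rdf"]
    return 303, record["URL"]

proxy_redirect(handle_record, "application/rdf+xml")
# -> (303, "http://example.com/foo.rdf")
```

        This saves the extra bounce at the cost of the content owner maintaining variant URLs in the handle record rather than in their own server configuration.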

        Thanks,
        Sean

      • Made a general response but it got published as a reply further up in the thread (before last two messages). – Tony

      • (This is a response to Sean’s post of February 8, 2010 at 10:11 pm…)

        I propose that we think about it in the following way (terms are from RFC 2295):

        1. The digital object named by the HDL (and thus the HDL-based HTTP URI) is a “transparently negotiable resource” — a resource which has multiple representations (variants) associated with it

        2. The HDL record contains as individual elements, but is not limited to, a “variant list” of the URIs of representations bound to that object

        3. The variants are distinguished in the HDL record by different TYPEs; for the purposes of this discussion, these TYPEs indicate content-type (MIME type) variants, but could also indicate snapshots in time or other bases for user agent selection. See esp.

        4. To support “conneg” the administrator (e.g. the registration agency, or RA) creates and maintains on the proxy a mapping of TYPEs to conneg criteria, especially content-types.

        5. These mappings may be highly customized; for example, some administrators may wish to support fine-grained differentiation of MIME types; some might want to support language types; some might specialize in supporting well-behaved responses to HDLs/DOIs naming OAI-ORE-style aggregations.

        6. This approach is not obviously compatible with using a HDL server as an immediate repository. A different configuration scheme would be required to return, say, (selected) contents of the HDL record serialized as RDF/XML or RDF/N3.
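        A minimal sketch of steps 1–4 (all TYPE names, URIs and the TYPE-to-content-type mapping below are illustrative assumptions, not existing conventions):

```python
# The HDL record holds a "variant list" keyed by TYPE (step 2/3), and the
# proxy holds an administrator-maintained mapping of TYPEs to content
# types (step 4). Everything here is an invented example.

hdl_record = [
    ("URL.HTML", "http://example.com/articles/567"),
    ("URL.RDFXML", "http://example.com/articles/567.rdf"),
]

type_to_mime = {  # maintained on the proxy by the RA/administrator
    "URL.HTML": "text/html",
    "URL.RDFXML": "application/rdf+xml",
}

def select_variant(record, mapping, accept):
    """Return the variant URI whose mapped content type appears in Accept."""
    for hdl_type, uri in record:
        if mapping.get(hdl_type, "") in accept:
            return uri
    return record[0][1]  # fall back to the first (default) variant

select_variant(hdl_record, type_to_mime, "application/rdf+xml")
# -> "http://example.com/articles/567.rdf"
```

        The same mechanism generalizes to language types or other selection bases simply by extending the mapping.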

        John

      • Hi:

        Not a direct response to John’s post but just to announce a related post of mine on CrossTech: ‘DOI: What Do We Got’ http://bit.ly/bnCkrn (Aiming with this post just to put something together visually as an aid to understanding.)

        Cheers,

        Tony

      • Just to be clear, there’s no technical obstacle to implementing this. I suspect that, rather than using multiple type-value pairs as John suggests, we would use a single type-value pair where the value is structured data (a bit of XML, say) which indicates how to redirect depending on the Accept: header or other factors. In fact as Sean mentioned we’d probably want to use a type we’ve already been working on which indicates how to redirect based on client geolocation, query parameters, or other factors. All that’s just implementation detail however.
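        Such a structured value might look roughly like this (an entirely hypothetical format, not an existing handle value type):

```xml
<!-- Hypothetical single type-value pair: structured redirect rules -->
<redirects default="http://example.com/foo">
  <redirect accept="application/rdf+xml" href="http://example.com/foo.rdf"/>
  <redirect accept="text/html" href="http://example.com/foo.html"/>
</redirects>
```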

        I would like some further explanation of Tony’s complaint against Sean’s #1, which, after all, works today. It’s true that the existing setup presumes one default URL for redirection, but it doesn’t have to be HTML; it can be any URL returning an entity of any content type, or even a single URL returning one of multiple content types based on content negotiation.

        I understand that there is one more redirect involved, and that from the standpoint of having the handle be the primary identifier for an object it is appealing to have the handle itself understand the various representations rather than push that off to another level. For completeness, anything else?

      • Sorry, Robert, my reply got shuttled up the thread again. See above at

        By: Tony Hammond on February 9, 2010 at 11:34 am

        Sorry again about that.

  4. I don’t see how #1 is a tenable position. An RA member should not be able to usurp the baseline semantics set by the RA community. CrossRef, for example, has always said that DOIs for journal articles are for abstract works. And that from a Linked Data perspective (and following TAG ruling on httpRange-14) would make the correct response be a 303, regardless of whether there is HTTP conneg to support parallel HTML, RDF/XML representations.

    Seems that if there were an RDF/XML pointer in the handle record that could be used to honour an Accept header request as per option #2.

    But with the existing setup there is one default URL, which is presumed (required, I think) to be an HTML version, and which a 303 could redirect to in the absence of conneg machinery or alternate URL records. That would have the merit of clearly signaling to the HTTP community the semantic intent – that there is a complex object (or RWO) at the end. Now that might be something that an RA proxy server should provide rather than a dumb HS. But it should not be something that the content publisher provides – that would mean one content provider is dealing with RWOs or Non-Information Resources, and another with bog-standard Information Resources.

    Would really like to see handle keep better aligned with HTTP. We need a more “graceful” interface between the two. And that might well mean support for HTTP conneg.

    Cheers,

    Tony

    • Readers: Tony’s comment is a response to Sean’s “February 8, 2010 at 10:11 pm” post…WordPress’s comment nesting is a bit confusing (“too clever by half…”)

  5. Hi Robert:

    “I would like some further explanation of Tony’s complaint against Sean’s #1, which, after all, works today.”

    Would question whether it does work. Yes, in terms of getting some bits from here to there – i.e. as a blind redirect. But as a semantic operation which aligns with current web architecture – maybe not.

    From my point of view the alternate (i.e. machine-readable) descriptions (accessed via HTTP conneg) are secondary to establishing a correct naming architecture (URI) and service response (HTTP status codes). What is

    http://dx.doi.org/10.1234/567

    Is it an information resource – per AWWW – or not? The TAG says httpRange-14 fixes that by allowing a 303 to indicate a “See Other”, i.e. this is a URI for a RWO (real world object – a non-information resource).

    But what do we return? A 302. Semantically we’re lost.

    Don’t see why – even without the RDF/XML (which of course we would like) – we couldn’t make progress by redirecting to HTML on 303 rather than on 302.

    Is all. 🙂

    Tony

    • Do I understand you Tony as saying that if the proxy returned a 303 instead of a 302, it wouldn’t matter to you whether any conneg was done at the proxy or one step removed at an URI returned (via 303) by the proxy?

      • I think that what Tony is saying is that the current 302 is simply wrong by “modern” standards, and that a first, minimal step should be to respond instead with a 303 to the default response URI.

        I’m not sure what the point is of the HS having multiple resolution if it can’t be leveraged esp. via conneg at the HTTP proxy. Given a DO that aggregates multiple representations (URIs persisted in the HDL record), what is the conceptual problem with enabling the agent and the server to negotiate over the available variants using the established and generally-accepted methods?

        Doing this ups the value proposition of the Handle System model significantly. If all you expect the proxy to do is hand over a URI for which the agent must further negotiate, what’s the point?

      • Indeed, I completely agree; since we are already interested in multiple resolution at the handle level it is the right thing to do to support content negotiation at that level. I apologize if I was not clear about that. I’m just interested in understanding all the reasons why the handle level is the right level, instead of at a content owner’s HTTP server.

      • I guess we agree to agree then. Probably a confusing factor here is that in the #-free-HTTP-URIs-only-name-documents school of HTTP thought, what you get back from a GET is not just the URI owner’s view of the world, but an authoritative serialization of the named thing itself. And that stronger notion of authority has leaked out a bit into dialogue with other perspectives on HTTP, and is being applied to situations where we’re dealing with annotational descriptions rather than authoritative representations-as-in-serializations.

  6. […] Update: if this topic interests you, and you want to read more about it, definitely check out John Erickson‘s blog post DOIs, URIs and Cool Resolution. […]

