Web feeds and repositories

December 10, 2008

I was invited to give a presentation on RSS and Atom as part of a SUETr workshop on interoperability yesterday. Of course I didn’t even scratch the surface of what can be achieved with feeds in terms of mash-ups, third-party sites and visualisations – but I did try to get across the breadth of ‘repository’ problems feeds can address, and the importance of feeds as easy wins that add value to your repository efforts (a theme courtesy of Les Carr on his blog).

The slides can be downloaded from either of these places: –


[I’ve only just found a good mechanism and the time to listen to podcasts, so this comes a little after the event; still worthwhile, I hope.]

Earlier this month Richard Wallis of Talis interviewed JP Rangaswami at BT, and posted a podcast of the conversation. Sterling stuff – I thoroughly recommend listening to it in full. I’ve pulled out some of the bits as quotes here.

If you work in a very vendor dominated world you can abdicate responsibility for
a lot of what you do by transferring not just the risk, but the reward to the
vendor. That doesn’t scale any more.

If a problem is generic, look to the open source community to solve it. If it’s
a narrow market for the problem … then look to the commercial environment to
solve it. If it is unique to your enterprise, you’d better solve it yourself,
because no-one else is going to solve it for you.

We’ve lived through a whole generation of mistakes when we had proprietary
architectures for the way we had information in enterprises. First you paid
money to completely drown the information in concrete, then you paid money to
dig it out to move it somewhere else. That’s what enterprise application
integration looked like: spending money sticking it into
somebody’s silo, then spending even more money taking it out of silos. Instead of
exposing data you were excavating data, and paying for the privilege of accessing
your own data. That is the danger we face if we don’t get issues to do with identity,
with authentication and permissioning, with intellectual property rights correct
in this generation. Because we will end up repeatedly wasting money digging out
stuff that should have been made available much more cheaply because the costs
of reproduction and transmission are going down.

Wishing I could self-replicate and get to Online Information (as well as to the DCC Conference) to hear JP speak there!

In my last post, I described a potential solution to some of the difficulties
in handling repository embargo, using OpenID. I outlined some of the potential
difficulties in the solution near the end, but I evidently didn’t go far enough!
Talat Chaudri responded on Twitter: –

@jimdowning trouble is that (a) lots of people don’t have OpenIDs
and (b) they’d have to maintain them to ensure that the OpenIDs stay
live

Owen Stephens also commented on the post, linking to a post that pointed out that the OpenID “user experience” leaves much to be desired.

On the subject of the user experience and the lack of adoption, I spent some time thinking of the best way to leap to OpenID’s defence, and then realized I really don’t have to. OpenID is transitioning from early-adopter to fast-follower maturity – a community that’s currently working out what this digital identity thing means – which suggests to me that OpenID is timely and relevant. I’m sure the OpenID user experience will get better – something like ClaimID.com, but with better features for FOAF-type stuff.

I thought a bit more about Talat’s second objection (“[Users] would have to maintain them to ensure that the OpenIDs stay live”) to the solution I proposed, and realized that the same characteristic that makes OpenID work as a persistable, distributed identity mechanism – the use of indirection – is also what creates the problems. What motivates people to keep their references up to date? To my mind the solution for OpenID has to involve removing personal-details form-filling completely (I still need to dig into the attribute exchange features in OpenID).

I’m going to blog some more substantial notes on last Friday’s RepoCamp as and when time permits. In the meantime, a cool idea and a plea for collaborators.

The RepoCamp involved the announcement of not one, but two developer challenges in the style of the one at Open Repositories 2008. The first is a general challenge (for which I can’t easily find a reference: help please, WoCRIG!) to do something cool involving interoperating systems. The second challenge is specific to the OAI-ORE specification, and involves creating a prototype that makes the usefulness of ORE visible to end-users.

I’ve got a cool idea for this, but I’m going to need to collaborate to get it done in time, so I’m blogging it in the hope that someone with a bit of time on their hands will get in touch.

The idea: a JavaScript library (or userscript) that follows all the links on a page; if a link points to an ORE Resource Map, or a Resource Map can be auto-discovered from it, the link is decorated with an ORE icon. Clicking the ORE icon pops up a display of the contents of the ORE aggregation, à la Stacks in OS X 10.5.

There are some fun bells and whistles in there, including making the interface super shiny and minimizing bandwidth.

Anyone want to help out? I was planning to use John Resig’s jQuery and HTML parsing libraries and possibly processing.js.
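
To make the idea concrete, here’s a minimal sketch of where I’d start. It assumes a Resource Map can be auto-discovered via a <link rel="resourcemap"> element in the fetched page (one plausible discovery route, not necessarily the only one the ORE work describes), and the icon file and showAggregation stub are placeholders.

    // Sketch only, not a working library: decorate links whose targets
    // advertise an ORE Resource Map. The rel value, icon and popup are
    // all placeholders.
    jQuery(function ($) {
      // The Stacks-style popup is the interesting part; stubbed for now.
      function showAggregation(resourceMapUrl) {
        alert('Would show the aggregation described at ' + resourceMapUrl);
      }

      $('a[href]').each(function () {
        var link = $(this);
        // As a page script, $.get is limited by the same-origin policy;
        // as a userscript, GM_xmlhttpRequest can fetch cross-domain.
        $.get(link.attr('href'), function (html) {
          // Crude auto-discovery: sniff the fetched page for a
          // <link rel="resourcemap"> element.
          var rem = $(html).filter('link[rel=resourcemap]').attr('href') ||
                    $(html).find('link[rel=resourcemap]').attr('href');
          if (rem) {
            $('<img src="ore-icon.png" alt="ORE"/>')
              .css('cursor', 'pointer')
              .click(function () { showAggregation(rem); })
              .insertAfter(link);
          }
        });
      });
    });

Fetching every linked page just to sniff for one element is where the bandwidth concern comes in; probing with HEAD requests first, or only decorating links lazily on mouseover, would be the first optimizations to try.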