Slides from the presentations I gave at OR08 are now available from DSpace@Cambridge: –

CrystalEye – from desktop to data repository.

Preview of the TheOREM Project.

They should also be appearing (possibly with video, who knows?) at the official conference repo at http://pubs.or08.ecs.soton.ac.uk/.


CrystalEye is a repository of crystallographic data. It’s built by a software system written by Nick Day that uses sections of Jumbo and CDK for functionality. It isn’t feasible for Nick to curate all this data (>100,000 structures) manually, and software bugs are a fact of life, so errors creep in.

Egon Willighagen and Antony Williams (ChemSpiderMan) have been looking at the CrystalEye data, and have used their blogs (as well as commenting on PM-R’s) to feed issues back. This is a great example of community data checking. Antony suggested that we implement a “Post a comment” feature on each page to make feedback easier. It’s a great idea, so we had a quick think about it and propose a web 2.0 alternative mechanism: Connotea.

To report a problem in CrystalEye, simply bookmark an example of the problem with the tag “crystaleyeproblem”, using the Description field to describe the problem. All the problems will appear on the tag feed.

When we fix the problem we’ll add the tag “crystaleyefixed” to the same bookmark. If you subscribe to this feed, you’ll know to remove the crystaleyeproblem tag.

In the fullness of time, we’re planning to use Connotea tags to annotate structures where full processing hasn’t been possible (bond orders or charges that couldn’t be calculated, etc.).

I had planned to co-author a number of posts on CrystalEye with Nick Day, starting with the basic functionality in the web interface and moving on to the features in the new Atom feed. As things turned out Nick is rather busy with other things, the data archiving stuff caught everyone’s attention, and my well-laid plans ganged (gang?), as aft they do, agly (as Burns might have put it). Consequently I’m going to shove out CrystalEye posts as and when.

The point of this post is simply to demonstrate that Atom’s extensibility provides a way to combine several functionalities in the same feed, with the subtext that this makes it a promising alternative to CMLRSS. I’ve already written about how the Atom feed can be used for data harvesting. That’s something of a niche feature for a minority, though. The big news about the CrystalEye Atom feed is that it looks good in normal feed readers.

As a demonstration, here’s a CrystalEye CMLRSS feed in my aggregator: –

Just text. Nice. Of course, I need a chemistry aggregator (like the one in Bioclipse) to make sense of a chemistry feed, right? Nope. Atom allows HTML content, so as well as including CML enclosures for chemistry-aware aggregators, you can include diagrams: –

To quote PM-R: “Look – chemistry!”

I’ve been in a reflective mood about CrystalEye over the last few days. In repository-land where I spend part of my time, OAI-PMH is regarded as a really simple way of getting data from repositories, and approaches like Atom are often regarded as insufficiently featured. So I’ll admit I was a bit surprised about the negative reaction provoked by the idea of CrystalEye only providing incremental data feeds.

The “give me a big bundle of your raw data” request was one I’d heard before, from Rufus Pollock at OKFN, when I was working on the DSpace@Cambridge project, a topic he returned to yesterday, arguing that data projects should put making raw data available as a higher priority than developing “Shiny Front Ends” (SFE).

I agree on the whole. In a previous life working on public sector information systems I often had extremely frustrating conversations with data providers who didn’t see anything wrong in placing access restrictions on data they claimed was publicly available (usually the restriction was that other government bodies and NGOs could see the data, but the public they served couldn’t).

When it comes to the issue with CrystalEye we’re not talking about access restriction; we’re talking about the form in which the data is made available, and the effort needed to obtain it. This is a familiar motif: –

  • The government has data that’s available if you ask in person, but that’s more effort than we’d like to expend; we’d like it to be downloadable
  • The publishers make (some) publications available as PDF, but analyzing the science requires manual effort; we’d like them to publish the science in a form that’s easier to process and analyze
  • The publishers make (some) data available from their websites, but it’s not easy to crawl the websites to get hold of it – it would be great if they gave us feeds of their latest data
  • CrystalEye makes CML data available, but potential users would prefer us to bundle it up onto DVDs and mail it to them.

Hold on, bit of a role reversal at the end there! Boot’s on the other foot. We have a reasonable reply; we’re a publicly funded research group who happen to believe in Open Data, not a publicly funded data provider. We have to prioritise our resources accordingly, but I still think the principle of providing open access to the raw data applies.

You’ll have to excuse a non-chemist stretching a metaphor: There’s an activation energy between licensing data as open, and making it easy to access and use. CrystalEye has made me wonder how much of this energy has to come from the provider, and how much from the consumer.

While I was working in the real world with Nick on the Atom feeds and harvester for CrystalEye, it seems they became an issue of some contention in the blogosphere. So I’m using this post to lay out why we implemented harvesting this way. The reasons are in strict order of when they occurred to me, and I may well be wrong about one or all of them – I haven’t run benchmarks, since getting things working is more important than being right.

This was the quickest way of offering a complete harvest

Big files would be a pain for the server. Our version of Apache uses a thread pool approach, so for the server’s sake I’m more concerned about clients occupying connections for a long time than I am about the bandwidth. The Atom docs can be compressed on the fly to reduce the bandwidth, and after the first rush as people fill their CrystalEye caches, we’ll hopefully be serving 304s most of the time.

Incremental harvest is a requirement for data repositories, and the “web-way” is to do it through the uniform interface (HTTP), and connected resources.

We don’t have the resource to provide DVDs of content for everyone who wants the data. Or turning that around – we hope more people will want the data than we have resource to provide for. This isn’t about the cost of a DVD, or the cost of postage; it’s about manpower, which costs orders of magnitude more than bits of plastic and stamps.

I’ve particularly valued Andrew Dalke’s input on this subject (and I’d love to kick off a discussion on the idea of versioning in CrystalEye, but I don’t have time right now): –

However, I would suggest that the experience with GenBank and other bioinformatics data sets, as well as PubChem, has been that some sort of bulk download is useful. As a consumer of such data I prefer fetching the bulk data for my own use. It makes more efficient bandwidth use (vs. larger numbers of GET requests, even with HTTP 1.1 pipelining), it compresses better, I’m more certain about internal integrity, and I can more quickly get up and working because I can just point an ftp or similar client at it. When I see a data provider which requires scraping or record-by-record retrieval I feel they don’t care as much about letting others play in their garden.

(Andrew Dalke)

… and earlier …

… using a system like Amazon’s S3 makes it easy to distribute the data, and cost about US $20 for the bandwidth costs of a 100GB download. (You would need to use multiple files because Amazon has a 5GB cap on file size.) Using S3 would not affect your systems at all, except for the one-shot upload time and the time it would take to put such a system into place.

(Andrew Dalke)

Completely fair points. I’ll certainly look at implementing a system to offer access through S3, although everyone might have to be even more patient than they have been for these Atom feeds. We do care about making this data available – compare the slight technical difficulties in implementing an Atom harvester with the time and effort it’s taken Nick to implement and maintain spiders to get this data from the publishers in order to make it more widely available!

One of the features of the CrystalEye Atom feeds is that they can be used for harvesting data from the system. This is not a feature of Atom syndication itself, but of a proposed standard extension (RFC5005). So what does it look like?

RFC5005 specifies three different types of historical feed; at the moment we’re only interested in “Archived feeds”. An archived feed document must include an element like this: –

<fh:archive xmlns:fh="http://purl.org/syndication/history/1.0"/>
Basic harvesting is extremely simple: get hold of the latest feed document from http://wwmm.ch.cam.ac.uk/crystaleye/feed/atom/feed.xml and iterate through the entries. Each entry contains (amongst other things) a unique identifier (a URN UUID) and a link to the CML file: –

<entry>
  ...
  <id>urn:uuid:bedc0edd-fab1-4e12-9d45-7ab23aaa02d5</id>
  <updated>2007-10-15T17:25:53Z</updated>
  <link href="..."/>
  ...
</entry>

So getting the data is just a matter of doing a little XPath or DOM descent and using the link href to GET the data. When you’ve got all the entries, you need to follow a link to the previous (next oldest) feed document in the archive, encoded like this: –

<link rel="prev-archive" href="..."/>
(This ‘prev-archive’ rel is the special sauce added by RFC5005). Incremental harvesting is done by the same mechanism, but with a couple of extra bells and whistles to minimize bandwidth and redundant downloads. There are three ways you might do this: –

  • The first way is to keep track of all the entry IDs you’ve seen, and to stop when you see an entry you’ve already seen.
  • The easiest way is to keep track of the time you last harvested, and add an If-Modified-Since header to the HTTP requests when you harvest – when you receive a 304 (Not Modified) in return, you’ve finished the increment (there’s a sketch of this approach after this list).
  • The most thorough way is to keep track of the ETag header returned with each file, and use it in the If-None-Match header in your incremental harvest. Again, this will return 304 (Not Modified) whenever your copy is good.
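
Putting those pieces together, here’s a minimal sketch of the If-Modified-Since approach in plain Java (the IncrementalHarvestSketch class is illustrative, not the released harvester). It assumes the entry’s first link element points at the CML file – the real feed may distinguish links by rel or type attributes – and it leaves out error handling, politeness delays and persisting the harvest state: –

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal incremental harvester sketch: walk back through the archive,
// stopping at the first 304 (Not Modified).
public class IncrementalHarvestSketch {

    private static final String ATOM_NS = "http://www.w3.org/2005/Atom";

    public static void main(String[] args) throws Exception {
        // Date of the previous harvest (in real code, read from a state file).
        Date lastHarvest = new SimpleDateFormat("yyyy-MM-dd").parse("2007-10-01");
        SimpleDateFormat httpDate =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
        httpDate.setTimeZone(TimeZone.getTimeZone("GMT"));

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);

        String feedUrl = "http://wwmm.ch.cam.ac.uk/crystaleye/feed/atom/feed.xml";
        while (feedUrl != null) {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(feedUrl).openConnection();
            conn.setRequestProperty("If-Modified-Since", httpDate.format(lastHarvest));
            if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
                break; // nothing older has changed, so the increment is complete
            }

            InputStream in = conn.getInputStream();
            Document feed = dbf.newDocumentBuilder().parse(in);
            in.close();

            // Each entry carries a unique id and a link to the CML file.
            NodeList entries = feed.getElementsByTagNameNS(ATOM_NS, "entry");
            for (int i = 0; i < entries.getLength(); i++) {
                Element entry = (Element) entries.item(i);
                String id = entry.getElementsByTagNameNS(ATOM_NS, "id")
                                 .item(0).getTextContent();
                Element link = (Element) entry
                    .getElementsByTagNameNS(ATOM_NS, "link").item(0);
                System.out.println(id + " -> " + link.getAttribute("href"));
                // GET link.getAttribute("href") here and store the CML locally.
            }

            // Follow the RFC5005 prev-archive link to the next-oldest document.
            feedUrl = null;
            NodeList links = feed.getElementsByTagNameNS(ATOM_NS, "link");
            for (int i = 0; i < links.getLength(); i++) {
                Element link = (Element) links.item(i);
                if ("prev-archive".equals(link.getAttribute("rel"))) {
                    feedUrl = link.getAttribute("href");
                    break;
                }
            }
        }
    }
}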

Implementing a harvester

Atom archiving is easy to code to in any language with decent HTTP and XML support. As an example, I’ve written a Java harvester (binary, source). The source builds with Maven2. The binary can be run using


java -jar crystaleye-harvester.jar [directory to stick data in]

Letting this rip for a full harvest will take a while, and will take up ~10G of space (although it will use less bandwidth than that, since the content is compressed in transit).

Being a friendly client

First and foremost, please do not multi-thread your requests.

Please put a little delay between requests. A few hundred milliseconds should be enough; the sample harvester uses 500ms, which should be plenty.

If you send an HTTP header “Accept-Encoding: gzip,deflate”, CrystalEye will send the content compressed (and you’ll need to gunzip it at the client end). This can save a lot of bandwidth, which helps.
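
For illustration, a polite, compression-aware fetch might look something like this in Java (the PoliteFetch class and its fetch method are hypothetical helpers, not part of the sample harvester): –

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public class PoliteFetch {
    // Fetch one URL politely: pause first, ask for compressed content,
    // and gunzip the response at the client end if the server obliged.
    // Assumes a single-threaded caller, as requested above.
    static InputStream fetch(String url) throws Exception {
        Thread.sleep(500); // half-second pause before each request
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip,deflate");
        InputStream in = conn.getInputStream();
        if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
            in = new GZIPInputStream(in);
        }
        return in;
    }
}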