
How to publish a journal RSS feed

This morning I posted
this tweet:
Harvesting Nuytsia RSS http://science.dec.wa.gov.a... Non-trivial as links are not to individual articles #fail
My grumpiness (on this occasion; lots of things seem to make me grumpy lately) stems from the fact that journal RSS feeds often leave a lot to be desired. As RSS feeds are a major source of biodiversity information (for a great example of their use see uBio's RSS, described in doi:10.1093/bioinformatics/btm109), it would be helpful if publishers did a few basic things. Some of these suggestions are in Lisa Rogers' RSS and Scholarly Journal Tables of Contents: the ticTOCs Project, and Good Practice Guidelines for Publishers, but some aren't.

In the spirit of being constructive, here are some dos and don'ts.

Dos

1. Validate the RSS feed

Run your feed through Feed Validator to make sure it's valid. Your feed is an XML document; if it won't validate, aggregators may struggle with it. Testing it in your favourite web browser isn't enough, but if a browser fails to display it that may be a clue that something's wrong. For example, Safari won't display the ZooKeys RSS feed, and at the time of writing this feed is not valid.
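Feed Validator is the real test, but a quick well-formedness check can be scripted with nothing but Python's standard library (this only catches broken XML, not RSS-specific problems — it's a first hurdle, not a substitute for validation):

```python
import urllib.request
import xml.etree.ElementTree as ET

def is_well_formed(xml_bytes):
    """Return True if the bytes parse as XML, False otherwise.
    Well-formedness is only the first hurdle; use Feed Validator
    for full RSS/Atom validation."""
    try:
        ET.fromstring(xml_bytes)
        return True
    except ET.ParseError:
        return False

# Example (substitute the feed you want to test):
# data = urllib.request.urlopen("http://example.org/feed.rss").read()
# print(is_well_formed(data))
```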

2. Make sure your feed is autodiscoverable

[Image: rssbar.png — the RSS icon in the browser's location bar]

When I visit your web page my browser should tell me that there is a feed available (typically with an RSS icon in the location bar). If there's no such icon, then I have to look at your page to find the feed (if it exists). The Nuytsia page is an example of a non-discoverable feed. Making your feed autodiscoverable is easy: just add a link tag inside the head tag of the page. For example, something like this:


<link rel="alternate" type="application/rss+xml"
      title="RSS Feed for Nuytsia"
      href="http://science.dec.wa.gov.au/nuytsia/nuytsia.rss.xml" />

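This is exactly the tag that aggregators look for. As a sketch of how discovery works from the client side, here's a minimal finder using only Python's standard library (the class name is mine, not from any particular aggregator):

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect href values of <link rel="alternate"> tags that
    advertise an RSS or Atom feed."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        d = dict(attrs)
        if d.get("rel") == "alternate" and d.get("type") in (
                "application/rss+xml", "application/atom+xml"):
            self.feeds.append(d.get("href"))

page = '''<html><head>
<link rel="alternate" type="application/rss+xml"
      title="RSS Feed for Nuytsia"
      href="http://science.dec.wa.gov.au/nuytsia/nuytsia.rss.xml" />
</head><body></body></html>'''

finder = FeedLinkFinder()
finder.feed(page)
print(finder.feeds)
```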

3. Use standard identifiers as the links

If your journal has DOIs, use those as the links, not the URL of the article's web page. The latter is likely to change (the DOI won't, unless you are being naughty), and given a DOI I can harvest the metadata via CrossRef.
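One way to go from a DOI to metadata is HTTP content negotiation against the DOI resolver; a sketch (the Accept header is the one CrossRef's resolver supports for CSL JSON; the live fetch is commented out so nothing here depends on the network):

```python
import urllib.request

def crossref_request(doi):
    """Build a request that asks the DOI resolver for machine-readable
    metadata via HTTP content negotiation."""
    return urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": "application/vnd.citationstyles.csl+json"})

req = crossref_request("10.1093/bioinformatics/btm109")
print(req.full_url)
# Uncomment to fetch live metadata (a JSON document):
# print(urllib.request.urlopen(req).read()[:200])
```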

4. Each item link in the feed links to ONE article

This was the reason for my grumpy tweet. The journal Nuytsia has an RSS feed (great!), but the links are not to individual articles. Instead, they are database queries that may return one or more results. For example, the link RYE, B.L., (2009). Reinstatement of the Western Australian genus Oxymyrrhine (Myrtaceae : Chamelauci... actually lists two papers, both authored by B. L. Rye. This breaks the underlying model, in which the feed lists individual articles.
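For comparison, a well-behaved feed item points at exactly one article; a minimal sketch (the title and identifier here are placeholders, not a real article):

```xml
<item>
  <title>One article, one title</title>
  <!-- a single link that resolves to exactly one article -->
  <link>http://dx.doi.org/10.xxxx/placeholder</link>
  <guid isPermaLink="true">http://dx.doi.org/10.xxxx/placeholder</guid>
</item>
```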

5. Include lots of metadata in your feed

If you don't use DOIs, then include metadata about your articles in your feed. That way I don't need to scrape your web pages; everything I need is already in the feed.

6. Make it possible to harvest metadata about your articles

If you don't use DOIs as your article identifiers, or don't use DOIs as the item links in your RSS feed, then make it easy for me to get the bibliographic details from either the RSS feed or the web page. If you use RSS 1.0, then ideally you are using PRISM and I can get the metadata from that. If not, you can embed the metadata in the HTML page describing the article using Dublin Core meta and link tags. For example, if you resolve doi:10.1076/snfe.38.2.115.15923 and view the HTML source you will see this:


<meta http-equiv="Content-Type" content="text/html;charset=iso-8859-1" />
<meta http-equiv="Content-Language" content="en-gb" />
<link rel="shortcut icon" href="/mpp/favicon.ico" />
<meta name="verify-v1" content="xKhof/of+uTbjR1pAOMT0/eOFPxG8QxB8VTJ07qNY8w=" />
<meta name="DC.publisher" content="Taylor & Francis" />
<meta name="DC.identifier" content="info:doi/10.1076/snfe.38.2.115.15923" />
<meta name="description" content="In this study we determined the effects of topography on the distribution of ground-dwelling ants in a primary terra-firme forest near Manaus, in cent..." />
<meta name="authors" content="Heraldo L. Vasconcelos ,Antônio C. C. Macedo,José M. S. Vilhena" />
<meta name="DC.creator" content="Heraldo L. Vasconcelos" />
<meta name="DC.creator" content="Antônio C. C. Macedo" />
<meta name="DC.creator" content="José M. S. Vilhena" />


Not pretty, but it enables me to get the details I want.
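Extracting those Dublin Core tags is straightforward; here's a minimal sketch with Python's standard library (the class name is mine):

```python
from html.parser import HTMLParser

class DublinCoreFinder(HTMLParser):
    """Collect Dublin Core metadata from <meta name="DC.*"> tags,
    keeping repeated tags (e.g. multiple DC.creator) as a list."""
    def __init__(self):
        super().__init__()
        self.dc = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        name, content = d.get("name", ""), d.get("content")
        if name.startswith("DC.") and content is not None:
            self.dc.setdefault(name, []).append(content)

page = '''<meta name="DC.publisher" content="Taylor &amp; Francis" />
<meta name="DC.identifier" content="info:doi/10.1076/snfe.38.2.115.15923" />
<meta name="DC.creator" content="Heraldo L. Vasconcelos" />
<meta name="DC.creator" content="Antônio C. C. Macedo" />'''

finder = DublinCoreFinder()
finder.feed(page)
print(finder.dc["DC.creator"])
```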

7. Support conditional HTTP GET

If you don't want feed readers and aggregators to hammer your service, support HTTP conditional GET (see here for details) so that feed readers only grab your feed if it has changed. Not many journal publishers do this; if they get overloaded by people grabbing RSS feeds they've only themselves to blame.
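The server-side logic is simple: compare the validators the client sends back against the feed's current ETag and Last-Modified values, and answer 304 Not Modified when they match. A sketch (real implementations parse and compare dates; string equality is enough to show the idea):

```python
def conditional_get(stored_etag, stored_last_modified, request_headers):
    """Decide whether to answer 304 (feed unchanged) or 200 (send feed).
    stored_etag / stored_last_modified describe the current feed;
    request_headers is a dict of the client's request headers."""
    if request_headers.get("If-None-Match") == stored_etag:
        return 304
    if request_headers.get("If-Modified-Since") == stored_last_modified:
        return 304
    return 200

# Client sent back a matching ETag: no need to resend the feed.
print(conditional_get('"v42"', "Mon, 01 Jun 2009 00:00:00 GMT",
                      {"If-None-Match": '"v42"'}))
```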

Don'ts

1. Sign up/log in

Don't ever ask me to sign up or log in to get the RSS feed (Cambridge University Press, I'm looking at you). If you think your content is so good/precious that I should sign up for it, you are sadly mistaken. Nature doesn't ask me to log in, nor should you.

2. Break DOIs

Another major cause of grumpiness is the frequency with which DOIs break, especially for recently published articles (i.e., precisely those that will be encountered in RSS feeds). There is quite simply no excuse for this. If your workflow results in DOIs being put on web pages before they are registered with CrossRef, then you (or CrossRef) are incompetent.

Using Google Spreadsheets and RSS feeds to edit IPNI

One thing I find myself doing a lot is creating Excel spreadsheets and filling them with lists of taxonomic names and bibliographic references, for which I then try to extract identifiers (such as DOIs). This is a tedious business, but the hope is that by doing it once I can create a useful resource. However, often I get bored and the spreadsheets lie forgotten in some deep recess of my computer's hard drive.

It occurs to me that making these spreadsheets publicly available would be useful, but how to do this? In particular, how to do this in a way that makes it easy for me to extract recent edits, and to update the data from new sources? Google Spreadsheets seems an obvious answer, but I wasn't aware of just how obvious until I started playing with the spreadsheet APIs. These enable you to add data via the API (using HTTP PUT and ATOM), which means that I can easily push new data to the spreadsheet.
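The API represents each spreadsheet row as an ATOM entry, with cell values in the gsx extension namespace of the Spreadsheets Data API. A sketch of such an entry (the column names and values here are purely illustrative):

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:gsx="http://schemas.google.com/spreadsheets/2006/extended">
  <!-- one row; each gsx: element corresponds to a column header -->
  <gsx:name>Some taxonomic name</gsx:name>
  <gsx:reference>Some Journal 1: 1-10</gsx:reference>
  <gsx:identifier>doi:10.xxxx/placeholder</gsx:identifier>
</entry>
```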

As a test, I've harvested the IPNI RSS feeds I created earlier (see http://bioguid.info/rss), extracted basic details about each name and any bibliographic identifiers my RSS generator had found, and sent these directly to a Google Spreadsheet. Some IPNI references didn't parse, so I can manually edit these, and many references lack an identifier (my tool usually finds those with DOIs). Often a bit of searching can turn up a URL or a Handle for a paper, or even simply expand the bibliographic details (which are a bit skimpy in IPNI). I'm also toying with using Zotero as an online bibliographic store for references that don't have an online presence.

So, what I've got now is a spreadsheet that can be edited, updated, and harvested, and will persist beyond any short term enthusiasm I have for trying to annotate IPNI.

NCBI RDF

Following on from the last post, I've now set up a trivial NCBI RDF service at bioguid.info/taxonomy/ (based on the ISSN resolver I released yesterday and announced on the Bibliographic Ontology Specification Group).

If you visit it in a web browser it's nothing special. However, if you choose to display XML you'll see some simple RDF. I've mapped some NCBI fields to corresponding terms in http://rs.tdwg.org/ontology/voc/TaxonConcept# (including the deprecated rankString term, which really shouldn't be deprecated, IMHO). I've also extracted what LSIDs I can from any linkouts. For example, a name that appears in Index Fungorum will have the corresponding LSID, likewise for IPNI. URLs are simply listed as rdfs:seeAlso.

Here's the RDF for NCBI taxon 101855 (you can grab this from http://bioguid.info/taxonomy/101855):


<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:tcommon="http://rs.tdwg.org/ontology/voc/Common#"
xmlns:tc="http://rs.tdwg.org/ontology/voc/TaxonConcept#">
<tc:TaxonConcept rdf:about="taxonomy:101855">
<dcterms:title>Lulworthia uniseptata</dcterms:title>
<dcterms:created>1999-08-16</dcterms:created>
<dcterms:modified>2005-01-19</dcterms:modified>
<dcterms:issued>1999-09-14</dcterms:issued>
<tc:nameString>Lulworthia uniseptata</tc:nameString><tc:rankString>species</tc:rankString>
<tcommon:taxonomicPlacementFormal>cellular organisms, Eukaryota, Fungi/Metazoa group, Fungi, Dikarya, Ascomycota, Pezizomycotina, Sordariomycetes, Sordariomycetes incertae sedis, Lulworthiales, Lulworthiaceae, Lulworthia</tcommon:taxonomicPlacementFormal>
<tc:hasName rdf:resource="urn:lsid:indexfungorum.org:names:105488"/>
<rdfs:seeAlso rdf:resource="http://www.marinespecies.org/aphia.php?p=taxdetails&amp;id=100407"/>
<rdfs:seeAlso rdf:resource="http://www.mycobank.org/MycoTaxo.aspx?Link=T&amp;Rec=105488"/>
<rdfs:seeAlso rdf:resource="http://www.indexfungorum.org/Names/namesrecord.asp?RecordId=105488"/>
<rdfs:seeAlso rdf:resource="http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&amp;search_value=194551"/>
<rdfs:seeAlso rdf:resource="http://www.mycobank.org/MycoTaxo.aspx?Link=T&amp;Rec=341143"/>
</tc:TaxonConcept>
</rdf:RDF>


Note the tc:hasName link to urn:lsid:indexfungorum.org:names:105488.

All a bit crude. The NCBI lookup is live (i.e., it's not served from a local copy of the database). I'll look at fixing this at some point, as well as caching the linkout lookups (one advantage of the live query is that you get the three dates: created, modified, and issued). But for now it's a starting point for playing with SPARQL queries across the NCBI taxonomy, Index Fungorum, and IPNI using a common vocabulary.
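As a taste of what such queries might look like, here's a sketch (assuming RDF like the example above has been loaded into a triple store; the predicates are from the TDWG TaxonConcept vocabulary used in that example):

```sparql
PREFIX tc: <http://rs.tdwg.org/ontology/voc/TaxonConcept#>

# Find taxon concepts whose name carries an Index Fungorum LSID
SELECT ?taxon ?name ?lsid
WHERE {
  ?taxon tc:nameString ?name ;
         tc:hasName ?lsid .
}
```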