Friday, July 24, 2009
Index Fungorum
I've added Index Fungorum to the list of RSS feeds that I generate at bioguid.info/rss. The feed uses the Index Fungorum web services to get the names added the previous day, and tries to extract any bibliographic identifiers from the metadata associated with each record (we get the metadata by resolving the LSID for the name). As with IPNI, bibliographic information in an Index Fungorum record lists the page the name was published on, which makes locating identifiers such as DOIs a bit of a struggle. Still, it's nice to have another feed of taxonomic names.
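For what it's worth, most of that identifier extraction boils down to pattern matching on the citation text. A minimal sketch in Python (the regex is a rough approximation of DOI syntax, not a complete grammar):

import re

# Rough DOI pattern: "10." prefix, registrant code, "/", then the suffix
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

citation = "Mycol. Res. 109(5): 562 (2005) doi:10.1017/S0953756205002716"
match = DOI_PATTERN.search(citation)
if match:
    print("Found DOI:", match.group(0))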
Tuesday, July 14, 2009
Zotero: creating bibliographies in the cloud
Lately I've become more and more interested in moving data off my machine(s) and into the cloud. I'm keen to do this partly to avoid having data in one place (e.g., a machine at work) when I need it someplace else (e.g., at home), and there are great tools for doing this (such as the wonderful Dropbox).
As a developer, the cloud appeals, not so much because of the compute power that some are salivating over, but because it may free me from having to create my own software. For example, some time ago I created an OpenURL resolver to help me find articles online. I harvest a bunch of sources, such as CrossRef, PubMed, some OAI repositories, etc., but there are always times when I find a reference online that I'd like to add, and that reference doesn't have an identifier such as a DOI.
Typically I add these manually, or by importing a file. I could write some interface code to add (and edit) a bibliographic reference (and, indeed, I did some time back), but wouldn't it be great if somebody else had done this for me?
Well, there are some tools out there for handling bibliographies online, such as Connotea, Mendeley, and Zotero (a Firefox add-on). Initially I was skeptical of Zotero (and I'm not a big Firefox user), but now that I'm looking for a place to store obscure papers it's rapidly growing on me. I like the fact that I can add references in situ, and that I can upload PDFs (which can be stored remotely on a WebDAV disk such as an iDisk). But what makes Zotero even more attractive is that it generates an RSS feed of my bibliography, which I can then harvest just as I harvest other resources.
Using a resource like Zotero saves me the hassle of having to write my own bibliographic editor, plus I benefit from using a tool that's a lot more polished than one I could make. Because of this, and my experience with the Google Spreadsheets API, I'm ultimately aiming never to have to write a user interface again. If I write services, and rely on third parties to make tools that either provide services I can use or consume my services, then my life becomes a lot simpler.
OK, perhaps I exaggerate. I like making interfaces, such as my eBio09 entry, or the experiment with SpaceTree. However, I can imagine a situation where I don't have to write a data entry interface ever again.
How to publish a journal RSS feed
This morning I posted this tweet:
Harvesting Nuytsia RSS http://science.dec.wa.gov.a... Non-trivial as links are not to individual articles #fail

My grumpiness on this occasion (lots of things seem to make me grumpy lately) is that journal RSS feeds often leave a lot to be desired. As RSS feeds are a major source of biodiversity information (for a great example of their use see uBio's RSS, described in doi:10.1093/bioinformatics/btm109), it would be helpful if publishers did a few basic things. Some of these suggestions are in Lisa Rogers's RSS and Scholarly Journal Tables of Contents: the ticTOCs Project, and Good Practice Guidelines for Publishers, but some aren't.
In the spirit of being constructive, here are some dos and don'ts.
Do
1. Validate the RSS feed
Run your feed through Feed Validator to make sure it's valid. Your feed is an XML document; if it won't validate, aggregators may struggle with it. Testing it in your favourite web browser isn't enough, but if a browser fails to display it, that may be a clue that something's wrong. For example, Safari won't display the ZooKeys RSS feed, and at the time of writing this feed is not valid.
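You can also make this check part of your own workflow. Here's a minimal sketch using the Python feedparser library, which flags malformed feeds via its bozo attribute (the URL is a placeholder):

import feedparser  # pip install feedparser

FEED_URL = "http://example.org/journal.rss"  # placeholder

feed = feedparser.parse(FEED_URL)
if feed.bozo:
    # bozo is set when the XML is not well-formed; the parser error
    # is recorded in bozo_exception
    print("Feed is not well-formed:", feed.bozo_exception)
else:
    print("Parsed OK:", len(feed.entries), "entries")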
2. Make sure your feed is autodiscoverable
When I visit your web page my browser should tell me that there is a feed available (typically with an RSS icon in the location bar). If there's no such icon, then I have to look at your page to find the feed (if it exists). The Nuytsia page is an example of a non-discoverable feed. Making your feed autodiscoverable is easy: just add a link tag inside the head tag of the page. For example, something like this:
<link rel="alternate" type="application/rss+xml"
title="RSS Feed for Nuytsia"
href="http://science.dec.wa.gov.au/nuytsia/nuytsia.rss.xml" />
3. Use standard identifiers as the links
If your journal has DOIs, use those in the links, not the URL of the article web page. The latter is likely to change (the DOI won't, unless you are being naughty), and given a DOI I can harvest the metadata via CrossRef.
4. Each item link in the feed links to ONE article
This was the reason for my grumpy tweet. The journal Nuytsia has an RSS feed (great!), but the links are not to individual articles. Instead, they are database queries that may generate one or more results. For example, this link RYE, B.L., (2009). Reinstatement of the Western Australian genus Oxymyrrhine (Myrtaceae : Chamelauci... actually lists two papers, both authored by B. L. Rye. This breaks the underlying model, where the feed lists individual articles.
5. Include lots of metadata in your feed
If you don't use DOIs, then include metadata about your articles in your feed. That way, I don't need to scrape your web pages; all I need is already in the feed.
6. Make it possible to harvest metadata about your articles
If you don't use DOIs as your article identifiers, or don't use DOIs as the item links in your RSS feed, then make it easy for me to get the bibliographic details from either the RSS feed or the web page. If you use RSS 1.0, then ideally you are using PRISM and I can get the metadata from that. If not, you can embed the metadata in the HTML page describing the article using Dublin Core meta and link tags. For example, if you resolve doi:10.1076/snfe.38.2.115.15923 and view the HTML source you will see this:
<meta http-equiv="Content-Type" content="text/html;charset=iso-8859-1" />
<meta http-equiv="Content-Language" content="en-gb" />
<link rel="shortcut icon" href="/mpp/favicon.ico" />
<meta name="verify-v1" content="xKhof/of+uTbjR1pAOMT0/eOFPxG8QxB8VTJ07qNY8w=" />
<meta name="DC.publisher" content="Taylor & Francis" />
<meta name="DC.identifier" content="info:doi/10.1076/snfe.38.2.115.15923" />
<meta name="description" content="In this study we determined the effects of topography on the distribution of ground-dwelling ants in a primary terra-firme forest near Manaus, in cent..." />
<meta name="authors" content="Heraldo L. Vasconcelos ,Antônio C. C. Macedo,José M. S. Vilhena" />
<meta name="DC.creator" content="Heraldo L. Vasconcelos" />
<meta name="DC.creator" content="Antônio C. C. Macedo" />
<meta name="DC.creator" content="José M. S. Vilhena" />
Not pretty, but it enables me to get the details I want.
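Extracting those Dublin Core fields is then straightforward. A minimal sketch using Python's standard library (the inline html_source stands in for the fetched article page; a real harvester would need to cope with messier HTML):

from html.parser import HTMLParser

class DCMetaParser(HTMLParser):
    """Collect <meta name="DC.*" content="..."> values from an HTML page."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name, content = attrs.get("name", ""), attrs.get("content")
        if name.startswith("DC.") and content:
            # fields like DC.creator can repeat, so store values in lists
            self.metadata.setdefault(name, []).append(content)

# html_source would be the fetched article page; shown inline here
html_source = """<meta name="DC.identifier"
  content="info:doi/10.1076/snfe.38.2.115.15923" />
<meta name="DC.creator" content="Heraldo L. Vasconcelos" />"""

parser = DCMetaParser()
parser.feed(html_source)
print(parser.metadata)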
7. Support conditional HTTP GET
If you don't want feed readers and aggregators to hammer your service, support HTTP conditional GET (see here for details) so that feed readers only grab your feed if it has changed. Not many journal publishers do this; if they get overloaded by people grabbing RSS feeds, they've only themselves to blame.
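From the client side, conditional GET looks something like this (a sketch using the Python requests library; the feed URL is a placeholder, and in practice the ETag and Last-Modified values would be cached between runs):

import requests  # pip install requests

FEED_URL = "http://example.org/journal.rss"  # placeholder

# First fetch: remember the validators the server sends back
r = requests.get(FEED_URL)
etag = r.headers.get("ETag")
last_modified = r.headers.get("Last-Modified")

# Later fetch: send the validators back; a well-behaved server answers
# 304 Not Modified (with an empty body) if nothing has changed
headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified
r = requests.get(FEED_URL, headers=headers)
if r.status_code == 304:
    print("Feed unchanged; nothing to do")
else:
    print("Feed changed; re-parse", len(r.content), "bytes")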
Don'ts
1. Sign up/log in
Don't ever ask me to sign up or log in to get the RSS feed (Cambridge University Press, I'm looking at you). If you think your content is so good/precious that I should sign up for it, you are sadly mistaken. Nature doesn't ask me to log in, nor should you.
2. Break DOIs
Another major cause of grumpiness is the frequency with which DOIs break, especially for recently published articles (i.e., precisely those that will be encountered in RSS feeds). There is quite simply no excuse for this. If your workflow results in DOIs being put on web pages before they are registered with CrossRef, then you (or CrossRef) are incompetent.
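Checking that a DOI actually resolves before putting it in a feed is trivial. A minimal sketch (assuming the DOIs are registered and that the DOI proxy answers HEAD requests, which not every endpoint does):

import requests  # pip install requests

def doi_resolves(doi):
    """Return True if the DOI proxy can redirect this DOI to a landing page."""
    r = requests.head("http://dx.doi.org/" + doi, allow_redirects=True)
    return r.status_code == 200

# e.g. the uBio RSS paper cited above
print(doi_resolves("10.1093/bioinformatics/btm109"))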
Using Google Spreadsheets and RSS feeds to edit IPNI
One thing I find myself doing a lot is creating Excel spreadsheets and filling them with lists of taxonomic names and bibliographic references, for which I then try to extract identifiers (such as DOIs). This is a tedious business, but the hope is that by doing it once I can create a useful resource. However, often I get bored and the spreadsheets lie forgotten in some deep recess of my computer's hard drive.
It occurs to me that making these spreadsheets publicly available would be useful, but how to do this? In particular, how to do this in a way that makes it easy for me to extract recent edits, and to update the data from new sources? Google Spreadsheets seems an obvious answer, but I wasn't aware of just how obvious until I started playing with the spreadsheet APIs. These enable you to add data via the API (using HTTP PUT and ATOM), which means that I can easily push new data to the spreadsheet.
As a test, I've harvested the IPNI RSS feeds I created earlier (see http://bioguid.info/rss), extracted basic details about each name and any bibliographic identifiers my RSS generator had found, and sent these directly to a Google Spreadsheet. Some IPNI references didn't parse, so I can manually edit these, and many references lack an identifier (my tool usually only finds those with DOIs). Often with a bit of searching one can turn up a URL or a Handle for a paper, or even simply expand on the bibliographic details (which are a bit skimpy in IPNI). I'm also toying with using Zotero as an online bibliographic store for references that don't have an online presence.
So, what I've got now is a spreadsheet that can be edited, updated, and harvested, and will persist beyond any short-term enthusiasm I have for trying to annotate IPNI.
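For the record, pushing a row looks roughly like this with the Google Spreadsheets GData list feed (a sketch only: the spreadsheet key, worksheet id, and gsx: column names are placeholders, and the required authentication headers are omitted):

import requests  # pip install requests

# Placeholders: the key and worksheet id come from the spreadsheet's feed URLs
LIST_FEED = ("http://spreadsheets.google.com/feeds/list/"
             "SPREADSHEET_KEY/WORKSHEET_ID/private/full")

# Each gsx: element maps to a column header in the worksheet
entry = """<entry xmlns="http://www.w3.org/2005/Atom"
  xmlns:gsx="http://schemas.google.com/spreadsheets/2006/extended">
  <gsx:name>Example plant name</gsx:name>
  <gsx:identifier>doi:10.1234/example</gsx:identifier><!-- placeholder -->
</entry>"""

# POST an Atom entry to the list feed to append a row
r = requests.post(LIST_FEED, data=entry.encode("utf-8"),
                  headers={"Content-Type": "application/atom+xml"})
print(r.status_code)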
Friday, July 10, 2009
NCBI RDF
Following on from the last post, I've now set up a trivial NCBI RDF service at bioguid.info/taxonomy/ (based on the ISSN resolver I released yesterday and announced on the Bibliographic Ontology Specification Group).
If you visit it in a web browser it's nothing special. However, if you choose to display XML you'll see some simple RDF. I've mapped some NCBI fields to corresponding terms in http://rs.tdwg.org/ontology/voc/TaxonConcept# (including the deprecated rankString term, which really shouldn't be deprecated, IMHO). I've also extracted what LSIDs I can from any linkouts. For example, a name that appears in Index Fungorum will have the corresponding LSID; likewise for IPNI. URLs are simply listed as rdfs:seeAlso.
Here's the RDF for NCBI taxon 101855 (you can grab this from http://bioguid.info/taxonomy/101855):
<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:tcommon="http://rs.tdwg.org/ontology/voc/Common#"
xmlns:tc="http://rs.tdwg.org/ontology/voc/TaxonConcept#">
<tc:TaxonConcept rdf:about="taxonomy:101855">
<dcterms:title>Lulworthia uniseptata</dcterms:title>
<dcterms:created>1999-08-16</dcterms:created>
<dcterms:modified>2005-01-19</dcterms:modified>
<dcterms:issued>1999-09-14</dcterms:issued>
<tc:nameString>Lulworthia uniseptata</tc:nameString>
<tc:rankString>species</tc:rankString>
<tcommon:taxonomicPlacementFormal>cellular organisms, Eukaryota, Fungi/Metazoa group, Fungi, Dikarya, Ascomycota, Pezizomycotina, Sordariomycetes, Sordariomycetes incertae sedis, Lulworthiales, Lulworthiaceae, Lulworthia</tcommon:taxonomicPlacementFormal>
<tc:hasName rdf:resource="urn:lsid:indexfungorum.org:names:105488"/>
<rdfs:seeAlso rdf:resource="http://www.marinespecies.org/aphia.php?p=taxdetails&id=100407"/>
<rdfs:seeAlso rdf:resource="http://www.mycobank.org/MycoTaxo.aspx?Link=T&Rec=105488"/>
<rdfs:seeAlso rdf:resource="http://www.indexfungorum.org/Names/namesrecord.asp?RecordId=105488"/>
<rdfs:seeAlso rdf:resource="http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=194551"/>
<rdfs:seeAlso rdf:resource="http://www.mycobank.org/MycoTaxo.aspx?Link=T&Rec=341143"/>
</tc:TaxonConcept>
</rdf:RDF>
Note the tc:hasName link to urn:lsid:indexfungorum.org:names:105488.
All a bit crude. The NCBI lookup is live (i.e., it's not served from a local copy of the database). I'll look at fixing this at some point, as well as caching the linkout lookups (one advantage of the live query is that you get the three dates: created, modified, and published). But for now it's a starting point for playing with SPARQL queries across the NCBI taxonomy, Index Fungorum, and IPNI using a common vocabulary.
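As a taste of what those queries might look like, here's a minimal sketch using Python's rdflib (it assumes the service returns RDF/XML like the example above):

from rdflib import Graph  # pip install rdflib

g = Graph()
g.parse("http://bioguid.info/taxonomy/101855", format="xml")

# Find each taxon's name string and its LSID in a name database
query = """
PREFIX tc: <http://rs.tdwg.org/ontology/voc/TaxonConcept#>
SELECT ?taxon ?name ?lsid
WHERE {
  ?taxon tc:nameString ?name .
  ?taxon tc:hasName ?lsid .
}
"""
for taxon, name, lsid in g.query(query):
    print(name, "->", lsid)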
NCBI taxonomy, TDWG vocabularies, and RDF
Lately I've been returning to playing with RDF and triple stores. This is a serious case of déjà vu, as two blogs I've now abandoned will testify (bioGUID and SemAnt). Basically, a combination of frustration with the tools, data cleaning, and the lack of identifiers got in the way of making much progress. I gave up on triple stores for a while, rolling my own Entity–Attribute–Value (EAV) database, which I used for the Elsevier Challenge (EAV databases are essentially key-value databases, CouchDB being a well-known example).
Now, I'm revisiting triple stores and SPARQL, partly because Linked Data is gaining momentum, and partly because we now have a few LSID providers, and some decent vocabularies from TDWG. Having created an LSID resolver that plays nicely with Linked Data (it also does the same thing for DOIs), it's time to dust off SPARQL and see what can be done.
One reason there's interest in having GUIDs and standard vocabularies is so that we can link different sources of information together. But more than just linking, we should be able to compute across these links and learn new things, or at least add annotations from one database to another.
To make this concrete, take NCBI taxon 101855, Lulworthia uniseptata. If we visit the NCBI page we see links to other resources, such as Index Fungorum record 105488, which tells us that Lulworthia uniseptata was published in Trans. Mycol. Soc. Japan 25(4): 382 (1984), and that the current name is Lulwoana uniseptata, which was published in Mycol. Res. 109(5): 562 (2005).
Wouldn't it be nice to be able to automatically link these things together? And wouldn't it be nice to have identifiers for the literature, rather than only human-readable text strings? Using bioGUID, we can discover that Mycol. Res. 109(5): 562 (2005) has the DOI doi:10.1017/S0953756205002716 -- I haven't found Trans. Mycol. Soc. Japan 25(4): 382 (1984) online anywhere.
Now, given that we have LSIDs for Index Fungorum, I can resolve urn:lsid:indexfungorum.org:names:369395 and discover that
urn:lsid:indexfungorum.org:names:369395 tname:hasBasionym urn:lsid:indexfungorum.org:names:105488
and, I can add the statement
urn:lsid:indexfungorum.org:names:369395 tcommon:publishedInCitation doi:10.1017/S0953756205002716
What I'd like to do is link this to the NCBI taxon, so that I can display this additional knowledge in one place (i.e., that there is an additional name for this fungus, and where it is published). To do this, I need the NCBI taxonomy in RDF. It turns out that everyone and their dog has been generating RDF versions of the NCBI taxonomy, including Uniprot. The problem is, each effort creates its own project-specific vocabulary. For example, here is the record for NCBI taxon 101855 in Uniprot RDF (http://www.uniprot.org/taxonomy/101855):
<?xml version='1.0' encoding='UTF-8'?>
<rdf:RDF xmlns="http://purl.uniprot.org/core/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<rdf:Description rdf:about="http://purl.uniprot.org/taxonomy/101855">
<rdf:type rdf:resource="http://purl.uniprot.org/core/Taxon"/>
<rank rdf:resource="http://purl.uniprot.org/core/Species"/>
<scientificName>Lulworthia uniseptata</scientificName>
<otherName>Zalerion maritimum</otherName>
<rdfs:subClassOf rdf:resource="http://purl.uniprot.org/taxonomy/45817"/>
<partOfLineage>false</partOfLineage>
</rdf:Description>
</rdf:RDF>
Uniprot has its own vocabulary, http://purl.uniprot.org/core/. So, what I'd like to do is create a version of the NCBI taxonomy using TDWG's TaxonConcept vocabulary, so that it becomes straightforward to link NCBI to name databases such as Index Fungorum, IPNI, ZooBank, and ION that are serving taxon names.
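As a rough illustration of the kind of term mapping involved, here's a minimal sketch with Python's rdflib (the scientificName-to-nameString correspondence is my guess, not an agreed crosswalk, and I'm assuming Uniprot serves RDF/XML for the record at the .rdf URL):

from rdflib import Graph, Namespace  # pip install rdflib

UNIPROT = Namespace("http://purl.uniprot.org/core/")
TC = Namespace("http://rs.tdwg.org/ontology/voc/TaxonConcept#")

src = Graph()
src.parse("http://www.uniprot.org/taxonomy/101855.rdf", format="xml")

dst = Graph()
dst.bind("tc", TC)

# Re-express Uniprot's scientificName using TDWG's tc:nameString
for taxon, _, name in src.triples((None, UNIPROT.scientificName, None)):
    dst.add((taxon, TC.nameString, name))

print(dst.serialize(format="xml"))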