Wednesday, June 19, 2024

Visualising big trees: a talk at the Systematics Association 2024

This blog post has some notes in support of a talk given to the Systematics Association meeting in Reading June 20th, 2024.

I will post a link to the slides here once I have given the talk.

Example web sites

Demos

Kew phylogeny

NCBI

Catalogue of Life

Background reading

Written with StackEdit.

Tuesday, June 18, 2024

Nanopubs, a way to create even more silos

Pensoft have recently introduced “nanopubs”, small structured publications that can be thought of as containing the minimum possible statement that could be published.

Nanopublications are the smallest units of publishable information: a scientifically meaningful assertion about anything that can be uniquely identified and attributed to its author and serve to communicate a single statement, its original source (provenance) and citation record (publication info). Nanopublications are fully expressed in a way that is both human-readable and machine-interpretable. For more, see https://nanopub.net, the Pensoft blog, this video and on our website.

Nanopubs are promoted as FAIR, that is findable, accessible, interoperable, and reusable. I like the idea of nanopubs, but the examples I have seen so far are problematic. As an aside, there are reasons not to be optimistic about nanopubs (or text-mining in general), see The Business of Extracting Knowledge from Academic Publications.

I’m going to focus on one nanopub RAXCvEZfCc, which comes from the paper Towards computable taxonomic knowledge: Leveraging nanopublications for sharing new synonyms in the Madagascan genus Helictopleurus (Coleoptera, Scarabaeinae). This nanopub says that Helictopleurus dorbignyi Montreuil, 2005 is a subjective synonym of Helictopleurus halffteri Balthasar, 1964.

This seems a fairly simple thing to say, indeed we could say it with a single triple, but the corresponding nanopub requires 33 RDF triples to say this.
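
To make that concrete, here is a sketch in Python using rdflib, reusing identifiers that appear in the nanopub itself (whether these are the right identifiers to use is another question, see below):

from rdflib import Graph, URIRef

g = Graph()

# Identifiers taken from the nanopub: the two ChecklistBank taxon records
# and the NOMEN term for "subjective synonym"
dorbignyi = URIRef("https://www.checklistbank.org/dataset/9880/taxon/3K9ST")
halffteri = URIRef("https://www.checklistbank.org/dataset/9880/taxon/3K9T4")
subjective_synonym = URIRef("http://purl.obolibrary.org/obo/NOMEN_0000285")

# One triple carries the same assertion as the 33 below
g.add((dorbignyi, subjective_synonym, halffteri))
print(g.serialize(format="nt"))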

<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://www.nanopub.org/nschema#hasAssertion> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#Head> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://www.nanopub.org/nschema#hasProvenance> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#provenance> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#Head> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://www.nanopub.org/nschema#hasPublicationInfo> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#Head> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.nanopub.org/nschema#Nanopublication> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#Head> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#association> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://w3id.org/biolink/vocab/OrganismTaxonToOrganismTaxonAssociation> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#association> <http://www.w3.org/2000/01/rdf-schema#comment> "Subjective synonymy based on morphological comparison of the type specimens of the two species names" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#association> <https://w3id.org/biolink/vocab/object> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#objtaxon> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#association> <https://w3id.org/biolink/vocab/predicate> <http://purl.obolibrary.org/obo/NOMEN_0000285> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#association> <https://w3id.org/biolink/vocab/subject> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#subjtaxon> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#objtaxon> <https://w3id.org/kpxl/biodiv/terms/hasTaxonName> <https://www.checklistbank.org/dataset/9880/taxon/3K9T4> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#subjtaxon> <https://w3id.org/kpxl/biodiv/terms/hasTaxonName> <https://www.checklistbank.org/dataset/9880/taxon/3K9ST> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> <http://rs.tdwg.org/dwc/terms/basisOfRecord> <http://rs.tdwg.org/dwc/terms/PreservedSpecimen> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#provenance> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> <http://www.w3.org/ns/prov#wasAttributedTo> <https://orcid.org/0000-0002-1938-6105> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#provenance> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#assertion> <http://www.w3.org/ns/prov#wasDerivedFrom> <https://arpha.pensoft.net/preview.php?document_id=22521> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#provenance> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#sig> <http://purl.org/nanopub/x/hasAlgorithm> "RSA" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#sig> <http://purl.org/nanopub/x/hasPublicKey> "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCnFtZQdjMpPH4duOBwDybRdPo93QCanFGN8cnpyHqZRQ+FINXypUYCNRSx3VBaWZoLVB/CYCoMY0or/oxBQwl5N7Y/8Ebj+G9ZSNsSkM9uo2DL91f26Y1y2UDE7bnajG909kXQnJS1G59cqIaKyLInjMFD5vWnptysj/ljBv3NTwIDAQAB" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#sig> <http://purl.org/nanopub/x/hasSignature> "YzTUmwGRmqHiJVyU1A6rPI1bHbAJPS+Zw6hnDPWzZ9a/7TP+yM/HAf5E9BTS3HNKaCgLAHSnsRg5Q0lPauYQyJd9tbLzR6VU/WJv399Z7/qrn4EhgCULkIhrCAkuWzRtSyHMEbuzyu51ZSQCCPgMZ3HwpVtRa+gVDgqu3nsi5x4=" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#sig> <http://purl.org/nanopub/x/hasSignatureTarget> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://purl.org/dc/terms/created> "2023-12-24T06:24:14.480Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://purl.org/dc/terms/creator> <https://orcid.org/0000-0002-1938-6105> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://purl.org/dc/terms/license> <https://creativecommons.org/licenses/by/4.0/> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://purl.org/nanopub/x/hasNanopubType> <http://purl.obolibrary.org/obo/NOMEN_0000017> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://purl.org/nanopub/x/hasNanopubType> <https://w3id.org/kpxl/biodiv/terms/BiodivNanopub> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://purl.org/nanopub/x/introduces> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#association> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://w3id.org/kpxl/biodiv/terms/BiodivNanopub> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <http://www.w3.org/2000/01/rdf-schema#label> "Helictopleurus dorbignyi Montreuil, 2005 (species) - ICZN subjective synonym - Helictopleurus halffteri Balthasar, 1964 (species)" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <https://w3id.org/np/o/ntemplate/wasCreatedFromProvenanceTemplate> <http://purl.org/np/RAYfEAP8KAu9qhBkCtyq_hshOvTAJOcdfIvGhiGwUqB-M> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <https://w3id.org/np/o/ntemplate/wasCreatedFromPubinfoTemplate> <http://purl.org/np/RAA2MfqdBCzmz9yVWjKLXNbyfBNcwsMmOqcNUxkk1maIM> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <https://w3id.org/np/o/ntemplate/wasCreatedFromPubinfoTemplate> <http://purl.org/np/RAR40PzxS9rmUC2lH2ct7IlYhyEib-3GXY5DkuR8wgHRw> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <https://w3id.org/np/o/ntemplate/wasCreatedFromPubinfoTemplate> <http://purl.org/np/RAh1gm83JiG5M6kDxXhaYT1l49nCzyrckMvTzcPn-iv90> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig> <https://w3id.org/np/o/ntemplate/wasCreatedFromTemplate> <http://purl.org/np/RAf9CyiP5zzCWN-J0Ts5k7IrZY52CagaIwM-zRSBmhrC8> <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://www.checklistbank.org/dataset/9880/taxon/3K9ST> <https://w3id.org/np/o/ntemplate/hasLabelFromApi> "Helictopleurus dorbignyi Montreuil, 2005 (species)" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .
<https://www.checklistbank.org/dataset/9880/taxon/3K9T4> <https://w3id.org/np/o/ntemplate/hasLabelFromApi> "Helictopleurus halffteri Balthasar, 1964 (species)" <https://w3id.org/np/RAXCvEZfCcjYuH5DWOIujBehGQt61y_nRHWssw9u6aYig#pubinfo> .

In part this is because it includes cryptographic signing, presumably to ensure that the statement is what you think it is. There is also a plethora of information about how the nanopublication was derived. Presumably, this is to satisfy reproducibility concerns. But none of this matters if you are producing data that people can’t easily use.

The core statement is the assertion graph. This graph is saying that there is a triple: a subject taxon (#subjtaxon) linked to an object taxon (#objtaxon) by the predicate NOMEN_0000285 (“subjective synonym”).

By itself this isn’t terribly useful because neither of the two taxa is a “thing” with an identifier; they are blank nodes. So, what is the statement about? If we follow the biodiv:hasTaxonName links, we see that there are names associated with these taxa (Helictopleurus dorbignyi and Helictopleurus halffteri), and these are linked to records in a database in ChecklistBank. This seems complicated, but I assume it is equivalent to saying “in this publication we regard the taxa with the names Helictopleurus dorbignyi and Helictopleurus halffteri as the same thing”.

Interoperability

I feel that I have been banging this drum for years now, but you cannot have interoperability unless you use the same identifiers for the same things. That means persistent identifiers, identifiers that you have some confidence will be around in 10, 20, or 50 years (at least).

Leaving aside the question of the persistence of the nanopubs themselves, I find it alarming that the link to the source of the statement that these two names are synonyms is not the DOI for the paper (10.3897/BDJ.12.e120304), but a link to the publishing platform ARPHA: https://arpha.pensoft.net/preview.php?document_id=22521. This link takes me to a login page, not the actual publication, so I can’t retrieve the source of the statement made in the nanopublication using the nanopublication itself.

The taxon names have as their identifiers https://www.checklistbank.org/dataset/9880/taxon/3K9T4 and https://www.checklistbank.org/dataset/9880/taxon/3K9ST. These identifiers are also local to a particular dataset. Why not use identifiers such as the Catalogue of Life entries for these names (e.g., https://www.catalogueoflife.org/data/taxon/3K9T4, which supports RDF via embedded JSON-LD) or even LSIDs? We have urn:lsid:organismnames.com:name:2521540 for Helictopleurus halffteri and urn:lsid:organismnames.com:name:1770738 for Helictopleurus dorbignyi.
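
For what it’s worth, the embedded JSON-LD is easy to get at. A minimal sketch (assuming the Catalogue of Life page embeds a single application/ld+json script tag):

import json, re, urllib.request

url = "https://www.catalogueoflife.org/data/taxon/3K9T4"
html = urllib.request.urlopen(url).read().decode("utf-8")

# Pull out the embedded JSON-LD block and pretty-print it
match = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL)
if match:
    data = json.loads(match.group(1))
    print(json.dumps(data, indent=2))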

Interestingly, the one well-known external identifier linked to is the ORCID for the author of the nanopub (0000-0002-1938-6105). I can’t help thinking that this suggests that authorship of the nanopublication is more important than the facts it publishes.

One can imagine that nanopublications will be registered with authors’ ORCID profiles, which helps flesh out their online CV. This is nice, but where is the equivalent for linking the publication to the nanopub via its DOI, or the taxon names to the nanopub? How do we know whether these nanopubs contradict other nanopubs, or support them, or add new information? For example, there seems to be no way to go from the DOI for the paper to the nanopub.

Vocabulary

Another aspect of interoperability is using the same terms to describe relationships. I’m struck by how many different vocabularies the nanopub requires. Some of these are specific to the administrivia of the nanopub, but others are biological.

For example, http://purl.obolibrary.org/obo/NOMEN_0000285 is used to define the relation between the two taxa. I confess it’s unclear to me why NOMEN_0000285 isn’t used to directly link the two ChecklistBank records, rather than the indirection via #subjtaxon and #objtaxon, given that it is a relationship between names (isn’t it?).

Other ontologies include the Biolink Model and biodiv, which I can’t seem to find a description of (the URL resolves to queries on the nanodash site). It amazes me how readily people create new ontologies, especially as in the wider world there is a trend towards one vocabulary to rule them all (schema.org).

Summary

I find it disheartening that the bulk of the information in a nanopub is administrivia about that nanopub. I understand the desire to establish provenance and to cryptographically sign the information, but all this is of limited use if the actual scientific information is poorly expressed.

If nanopubs are to be useful I think they need to:

  • Use persistent identifiers for every entity being referred to, ideally using existing, well-known identifiers. If you are referring to a publication that has a DOI, use that DOI. If you are referring to a taxon or a taxon name, use an appropriate identifier (e.g., an LSID for the name, a URL to a classification).

  • Use simple, existing vocabularies wherever possible. Can you model the data using schema.org (and extensions such as Bioschemas)? If not, are you sure you can’t?

Unless more care is taken, nanopubs will go the way of much of the RDF world, creating new, even more verbose, even more arcane silos of data. This is partly a consequence of the primary incentive, which is to publish minimal units of information. Given that we now have persistent identifiers for people (ORCIDs) and those identifiers are linked to an infrastructure that can automatically register publications linked to ORCIDs, can we expect to see a flood of nanopubs? What value will these have if we can’t make ready use of the “facts” they assert? How will people build tools on top of nanopubs if the only thing that reliably links to the external world is the ORCID of the person who created it?

Written with StackEdit.

Friday, April 19, 2024

Notes on transforming BHL images

How to cite: Page, R. (2024). Notes on transforming BHL images https://doi.org/10.59350/2gpbb-98a53

I’ve been down this road before, e.g. BHL, DjVu, and reading the f*cking manual and Demo of full-text indexing of BHL using CouchDB hosted by Cloudant, but I’m revisiting converting BHL page scans to black and white images, partly to clean them up, to make them closer to what a modern reader might expect, and partly to reduce the size of the image. The latter means faster loading times and smaller PDFs for articles.

The links above explored using foreground image layers from DjVu (less useful now that DjVu is almost dead as a format), and using CSS in web browsers to convert a colour image to gray scale. I’ve also experimented with the approach taken by Google Books (see https://github.com/rdmpage/google-book-images), which uses jbig2enc to compress images and reduce the number of colours.

In my latest experiments, I use jbig2enc to transform BHL page images into black and white images where each pixel is either black or white (i.e., image depth = 1), then use ImageMagick to resize the image to the Google Books width of 685 pixels and a depth of 2. Typically this gives an image around 25-30 KB in size. It looks clean and readable.
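
The resize step is a one-liner with ImageMagick (a sketch; the file names are hypothetical):

import subprocess

# Resize to the Google Books width of 685 pixels (height scales to match)
# and reduce the bit depth to 2
subprocess.run(["convert", "page-bw.png", "-resize", "685", "-depth", "2", "page-final.png"], check=True)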

This approach breaks down for photographs and especially colour plates. For example, this image looks horrible:

When compressing images that have photos or illustrations jbig2enc can extract the part of the image that includes the illustration, for example:

This isn’t perfect, but it raises the possibility that we can convert text and line drawings to black and white, and then add back photographs and plates (whether black and white, or colour). After some experimentation using tools such as ImageMagick composite I have a simple workflow (sketched in code after the list):

  • compress page image using jbig2enc
  • take the extracted illustration and set all white pixels to be transparent
  • convert the black and white image output by jbig2enc to colour (required for the next step)
  • create a composite image by overlaying the extracted illustration (now on a transparent background) on top of the black-and-white page image
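
In ImageMagick terms the last three steps look something like this (a sketch; file names are hypothetical, and the jbig2enc step that produces the bilevel page and the extracted illustration is not shown):

import subprocess

def run(args):
    subprocess.run(args, check=True)

# Inputs (hypothetical names): the 1-bit page image from jbig2enc and
# the illustration region it extracted
bilevel = "page-bw.png"
illustration = "extract.png"

# Set all white pixels in the extracted illustration to transparent
run(["convert", illustration, "-transparent", "white", "extract-t.png"])

# Promote the black and white page to colour so it can accept a colour overlay
run(["convert", bilevel, "-type", "TrueColor", "page-rgb.png"])

# Overlay the illustration (transparent background) on the page
run(["convert", "page-rgb.png", "extract-t.png", "-composite", "composite.png"])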

The result looks passable:

In this case, we still have a lot of the sepia-toned background, the illustration hasn’t been cleanly separated, but we do at least get some colour.

Still work to do, but it looks promising and suggests a way to make dramatically smaller PDFs of BHL content. There are crude code and example files in GitHub.

Update

Some Googling turned up Removing orange tint-mask from color-negatives, which gives us the following command:

convert 16281585.jpg -negate -channel all -normalize -negate -channel all 16281585-rgb.jpg

Applying this to our image results in:

This looks a lot better. Results will vary depending on the evenness of the page scan (i.e., is there a shadow on the image), but I think this gives us a way to display the plates with a higher degree of contrast.

Reading

Adam Langley, Dan S. Bloomberg, “Google Books: making the public domain universally accessible”, Proc. SPIE 6500, Document Recognition and Retrieval XIV, 65000H (2007/01/29); doi:10.1117/12.710609

Written with StackEdit.

Wednesday, March 27, 2024

Hugging Face Autotrain

How to cite: Page, R. (2024). Hugging Face Autotrain https://doi.org/10.59350/7p1n4-wdv84

These are notes to myself on using Hugging Face AutoTrain. The first version of this had a very nice interface where you could simply upload a folder of images and train a model. It was limited in the range of tasks and models, but made up for that in ease of use. Now AutoTrain has been replaced by AutoTrain Advanced, which not everyone is happy about.

Training a model

After a bit of fussing about (and paying attention to the log messages) I’ve managed to train a model to classify images in much the same way as before. The steps are as follows:

Go to AutoTrain Advanced. You should see a screen like this:

By default Docker and AutoTrain are selected. It will also show the free hardware spec (CPU basic • 2 vCPU • 16GB). I found that for image classification this hardware choice would cause AutoTrain to fail, so I selected Nvidia T4 small • 4 vCPU • 15GB.

Give your space a name and click on Create Space to create the space. You will now see something like this:

It took 3-4 minutes to build the space. Once the space is built you will then be asked to log in to Hugging Face (seems odd, but that’s what it asks you to do). You are then asked to give your space permissions to connect to your account.

Now you will see a slightly scary looking interface (this is one reason why people miss the old “easy” AutoTrain).

For Task I selected Image Classification and the default base model (google/vit-base-patch16-224). I ignored every other setting, and simply uploaded the training data. This was a zip file containing separate folders for each category of image, so that images, say of cats, would be in a folder called cats, pictures of dogs would be in dogs, etc.

I then clicked Start and, after a warning that this would cost money (I subscribe to Hugging Face), saw this:

You can track progress in the logs, which you can see using the middle button below.

Once completed, the space pauses, which is a little alarming but simply means that it has finished training. Yay, you now have a trained model!

When I first tried this, I got errors because I didn’t upload the data in the proper format (my zip file had a folder that contained the training data folders; the category folders need to be in the root of the zip archive). It also failed to train on the base (free) hardware; I only discovered this by looking at the logs and seeing error messages regarding the lack of a GPU.
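
A quick way to check the archive before uploading is to list its contents and confirm the category folders sit at the root (a sketch; file names are hypothetical):

import zipfile

with zipfile.ZipFile("train.zip") as z:
    for name in z.namelist()[:5]:
        print(name)

# Good: cats/cat1.jpg, dogs/dog1.jpg, ...
# Bad:  training-data/cats/cat1.jpg (extra enclosing folder)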

What now?

The other thing about the original AutoTrain was that it gave you an app to explore how your model worked on other data. The new AutoTrain simply pauses after training and you are left with “um, what do I do now?”

After some fussing I discovered that in my profile I now had a brand new Model appearing in my list of models.

If I click on the model I go to the model page, where there is a Deploy button; this is how you get an app. First, though, make sure your model is publicly visible (by default it is private): click on Settings and use Change model visibility to make it public. If you now click on the Deploy button you will see a list of options:

I picked Spaces. This enables you to create a simple online app. I accepted all the defaults (including the base, free hardware with no GPU) and in a couple of minutes you get an app that looks like this:

Upload an image, press Submit and you will get a classification of that image:

Apps tend to sleep, so it may be that you come back to an app, load an image, and get an error message that the model is still loading. Wait a moment, try again, and it should work.

API

Using the app is fun, but if you want to use the model to classify lots of images then you want to use the API. The Deploy button lists Inference API (serverless) as an option. Clicking on that gives you the URL you can POST images to; it will return the results in JSON. As with the app, if the model is sleeping then your first call may throw an error. Wait a moment and try again, and then you can classify images in bulk.
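
A minimal sketch of calling the serverless API (the model name and access token here are placeholders):

import requests

API_URL = "https://api-inference.huggingface.co/models/USER/MODEL"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # your Hugging Face access token

def classify(filename):
    # POST the raw image bytes; the response is a list of
    # {"label": ..., "score": ...} objects
    with open(filename, "rb") as f:
        response = requests.post(API_URL, headers=HEADERS, data=f.read())
    response.raise_for_status()
    return response.json()

print(classify("cat.jpg"))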

Summary

Hugging Face is quite an extraordinary tool, and it is a way to try and make sense of the explosion of AI techniques available. But it is clearly written by developers for developers, and that can make it intimidating, even for someone like me who writes code, uses GitHub, etc. The original AutoTrain was a joy to use in comparison, and this feels like a missed opportunity: Hugging Face could have kept the old "easy" version alongside the new, more powerful, but rather clunkier "advanced" version. Still, this is easier than dealing directly with the hellscape that is Python.

Written with StackEdit.

Tuesday, February 20, 2024

Problems with the DataCite Data Citation Corpus

How to cite: Page, R. (2024). Problems with the DataCite Data Citation Corpus https://doi.org/10.59350/t80g1-xys37

DataCite have released the Data Citation Corpus, together with a dashboard that summarises the corpus. This is billed as:

A trusted central aggregate of all data citations to further our understanding of data usage and advance meaningful data metrics

The goal is to build a citation database between scholarly articles and data, such as datasets in repositories, sequences in GenBank, protein structures in PDB, etc. Access to the corpus can be obtained by submitting a form, then having a (very pleasant) conversation with DataCite about the nature of the corpus. This process feels clunky because it introduces friction. If you want people to explore this, why not make it a simple download?

I downloaded the corpus, which is nearly 7 GB of JSON, formatted as an array(!), thankfully with one citation per line so it is reasonably easy to parse. (JSON Lines would be more convenient.)
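
Because there is one citation per line, the file can be streamed into a database without reading 7 GB into memory. A sketch of the sort of thing I mean (the file name is hypothetical, and field names other than sourceId are guesses):

import json
import sqlite3

conn = sqlite3.connect("corpus.db")
conn.execute("CREATE TABLE IF NOT EXISTS citation (id TEXT, sourceId TEXT, raw TEXT)")

with open("data-citation-corpus.json") as f:
    for line in f:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue  # skip the enclosing array brackets
        record = json.loads(line)
        # "id" is a guess at a field name; sourceId does appear in the data
        conn.execute("INSERT INTO citation VALUES (?, ?, ?)",
                     (record.get("id"), record.get("sourceId"), line))

conn.commit()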

I loaded this into a SQLite database to make it easier to query, and I have some thoughts. Before outlining why I think the corpus has serious problems, I should emphasise that I’m a big fan of what DataCite are trying to do. Being able to track data usage to give credit to researchers and repositories (citations to data as well as papers), to track provenance of data (e.g., when a GenBank sequence turns out to be wrong being able to find all the studies that used it), and to find additional links between papers beyond bibliographic links (e.g., when data is cited but not the original publication) are all good things. Obviously, lots of people have talked about this, but this is my blog so I’ll cite myself as an example 😉.

Page, R. Visualising a scientific article. Nat Prec (2008). https://doi.org/10.1038/npre.2008.2579.1

My main interest in the corpus is tracking citations of DNA sequences, which are often not linked to even the original publication in GenBank. I was hopeful the corpus could help in this work.

Ok, let’s now look at the actual corpus.

Data structure

Each citation comprises a JSON object, with a mix of external identifiers such as DOIs, and internal identifiers as UUIDs. The latter are numerous, and make the data file much bigger than it needs to be. For example, there are two sources of citation data, DataCite and the Chan Zuckerberg Initiative. These have sourceId values of 3644e65a-1696-4cdf-9868-64e7539598d2 and c66aafc0-cfd6-4bce-9235-661a4a7c6126, respectively. There are a little over 10 million citations in the corpus, so that’s a lot of bytes that could simply have been 1 or 2.

More frustrating than the wasted space is the lack of any list of what each UUID means. I figured out that 3644e65a-1696-4cdf-9868-64e7539598d2 is DataCite only by looking at the data, knowing that CZI had contributed more records than DataCite. For other entities such as repositories and publishers, one has to go spelunking in the data to make reasonable guesses as to what the repositories are. Given that most citations seem to be to biomedical entities, why not use something such as the compact identifiers from Identifiers.org for each repository?

Dashboard

DataCite provides a dashboard to summarise key features of the corpus. There are a couple of aspects of the dashboard that I find frustrating.

Firstly, the “citation counts by subject” is misleading. A quick glance suggests that law and sociology are the subjects that most actively cite data. This would be surprising, especially given that much of the data generated by CZI comes from PubMed Central. Only 50,000 citations out of 10 million are from articles with subject tags, so this chart is showing results for approximately 0.5% of the corpus. The chart includes the caveat “The visualization includes the top 20 subjects where metadata is available.” but omits to tell us that as a result the chart is irrelevant for >99% of the data.

The dashboard is interesting in what it says about the stakeholders of this project. We see counts of citations broken down by source (CZI or DataCite) and by publisher, but not by repository. This suggests that repositories are second class citizens. Surely they deserve a panel on the dashboard? I suspect researchers are going to be more interested in what kinds of data are being cited than in which academic publishers appear in the corpus. For instance, 3.75 million (37.5%) citations are to sequences in GenBank, 1.7 million (17.5%) are to the Protein Data Bank (PDB), and 0.89 million (8.9%) are to SNPs.

Chan Zuckerberg Initiative and AI

The corpus is a collaboration between DataCite and the Chan Zuckerberg Initiative (CZI), and CZI are responsible for the bulk of the data. Unfortunately there is no description of how those citations were extracted from the source papers. Perhaps CZI used something like SciBERT, which they employed in earlier work to extract citations to scientific software (https://arxiv.org/abs/2209.00693)? We don’t know. One reason this matters is that there are lots of cases where the citations are incorrect, and if we are going to figure out why, we need to know how they were obtained. At present it is simply a black box.

These are just a few of the incorrect citations I came across while pottering around with the corpus. I’ve not done any large-scale analysis, but one ZooKeys article I came across https://doi.org/10.3897/zookeys.739.21580 cites 32 entities, only four of which are correct.

I get that text mining is hard, but I would expect AI to do better than what we could achieve by simply matching dumb regular expressions. For example, surely a tool that claims any measure of intelligence would be able to recognise that this sentence lists grant numbers, not a GenBank accession number?

Funding This study was supported by Longhua Hospital Shanghai University of Traditional Chinese Medicine (grant number: Y21026), and Longhua Hospital Shanghai University of Traditional Chinese Medicine (YW.006.035)

As a fallback, we could also check that a given identifier is valid. For example, there is no sequence with the accession number Y21026. The set of possible identifiers is finite (if large), why didn’t the corpus check whether each identifier extracted actually existed?
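
For example, one could ask NCBI directly whether an accession resolves (a sketch using E-utilities; it assumes an unknown accession produces an HTTP error, and real use should respect NCBI rate limits):

import urllib.error
import urllib.request

def accession_exists(accession):
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
           "?db=nucleotide&id=" + accession + "&rettype=acc&retmode=text")
    try:
        urllib.request.urlopen(url)
        return True
    except urllib.error.HTTPError:
        return False

print(accession_exists("Y21026"))  # the "grant number" above; expect False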

Update: major errors found

I've created a GitHub repo to keep track of the errors I'm finding.

Protein Data Bank

The Protein Data Bank (PDB) is the second largest repository in the corpus with 1,729,783 citations. There are 177,220 distinct PDB identifiers cited. These identifiers should match the pattern /^[0-9][A-Za-z0-9]{3}$/, that is, a digit (0-9) followed by three alphanumeric characters. However 31,612 (18%) do not. Examples include "//osf.io/6bvcq" and "//evs.nci.nih.gov/ftp1/CTCAE/CTCAE_4.03/Archive/CTCAE_4.0_2009-05-29_QuickReference_8.5x11.pdf". So the tools for finding PDB citations do not understand what a PDB identifier should look like.
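
Checking the pattern is trivial (a sketch; 4HHB is a real PDB entry used here as a positive example):

import re

# The pattern quoted above: a digit followed by three alphanumeric characters
PDB_ID = re.compile(r"^[0-9][A-Za-z0-9]{3}$")

for identifier in ["4HHB", "//osf.io/6bvcq", "Y21026"]:
    print(identifier, bool(PDB_ID.match(identifier)))
# 4HHB True, //osf.io/6bvcq False, Y21026 False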

Out of curiosity I downloaded all the existing PDB identifiers from https://files.wwpdb.org/pub/pdb/holdings/current_file_holdings.json.gz, which gave me 216,225 distinct PDB identifiers. Comparing actual PDB identifiers with ones included in the corpus I got 1,233,993 hits, which is 71% of the total in the corpus. Hence over half a million (a little under a third of the PDB citations) appear to be made up.

Individual articles

Taxonomic revision of Stigmatomma Roger (Hymenoptera: Formicidae) in the Malagasy region

The paper https://doi.org/10.3897/BDJ.4.e8032 is credited with citing 126 entities, including 108 sequences and 14 PDB records. None of this is true. The supposed PDB records are figure numbers, e.g. “Fig. 116d” becomes PDB 116d, and the sequence accession numbers are specimen codes or field numbers.

Nucleotide sequences

Sequence data is the single largest data type cited in the corpus, with 3.8 million citations. I ran a sample of the first 1000 sequence accession numbers in the corpus against GenBank, and in 486 cases GenBank didn't recognise the accession number as valid. So potentially half the sequence citations are wrong.

Summary

I think the Data Citation Corpus is potentially a great resource, but if it is going to be “[a] trusted central aggregate of all data citations” then I think there are a few things it needs to do:

  • Make the data more easily accessible so that people can scrutinise it without having to jump through hoops
  • Tell us how the Chan Zuckerberg Initiative did the entity matching
  • Improve the entity matching
  • Add a quality control step that validates extracted identifiers
  • Expand the dashboard to give users a better sense of what data is being cited

Written with StackEdit.

Wednesday, November 29, 2023

It's 2023 - why are we still not sharing phylogenies?

How to cite: Page, R. (2023). It’s 2023 - why are we still not sharing phylogenies? https://doi.org/10.59350/n681n-syx67

A quick note to support a recent Twitter thread https://twitter.com/rdmpage/status/1729816558866718796?s=61&t=nM4XCRsGtE7RLYW3MyIpMA

The article “Diversification of flowering plants in space and time” by Dimitrov et al. describes a genus-level phylogeny for 14,244 flowering plant genera. This is a major achievement, and yet neither the tree nor the data supporting that tree are readily available. There is lots of supplementary information (as PDF files), but no machine readable tree or alignment data.

Dimitrov, D., Xu, X., Su, X. et al. Diversification of flowering plants in space and time. Nat Commun 14, 7609 (2023). https://doi.org/10.1038/s41467-023-43396-8

What we have is a link to a web site which in turn has a link to a OneZoom visualisation. If you look at the source code for the web site you can see the phylogeny in Newick format as a Javascript file.

This is a far from ideal way to share data. Readers can’t easily get the tree, explore it, evaluate it, or use it in their own analyses. I grabbed the tree and put it online as a GitHub GIST. Once you have the tree you can do things such as trying a different tree viewer, for example PhyloCloud.
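
Once the tree is a plain Newick file, even a few lines of code make it usable; a sketch using Biopython (the file name is hypothetical):

from Bio import Phylo

tree = Phylo.read("angiosperm-genera.nwk", "newick")
print(tree.count_terminals())  # expect the 14,244 genera mentioned above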

That is a start, but it’s clearly not ideal. Why didn’t the authors put the tree (and the data) into a proper repository, such as Zenodo where it would be persistent and citable, and also linked to the authors’ ORCID profile? That way everybody wins, readers get a tree to explore, the authors have an additional citable output.

The state of sharing of phylogenetic data is dire, not helped by the slow and painful demise of TreeBASE. Sharing machine readable trees and datasets still does not seem to be the norm in phylogenetics.

Written with StackEdit.

Thursday, October 26, 2023

Where are the plant type specimens? Mapping JSTOR Global Plants to GBIF

How to cite: Page, R. (2023). Where are the plant type specimens? Mapping JSTOR Global Plants to GBIF. https://doi.org/10.59350/m59qn-22v52

This blog post documents my attempts to create links between two major resources for plant taxonomy: JSTOR’s Global Plants and GBIF, specifically between type specimens in JSTOR and the corresponding occurrence in GBIF. The TL;DR is that I have tried to map 1,354,861 records for type specimens from JSTOR to the equivalent record in GBIF, and managed to find 903,945 (67%) matches.

Why do this?

Why do this? Partly because a collaborator asked me, but I’ve long been interested in JSTOR’s Global Plants. This was a massive project to digitise plant type specimens all around the world, generating millions of images of herbarium sheets. It also resulted in a standardised way to refer to a specimen, namely its barcode, which comprises the herbarium code and a number (typically padded to eight digits). These barcodes are converted into JSTOR URLs, so that E00279162 becomes https://plants.jstor.org/stable/10.5555/al.ap.specimen.e00279162. These same barcodes have become the basis of efforts to create stable identifiers for plant specimens, for example https://data.rbge.org.uk/herb/E00279162.
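
Expressed as code, the mapping from barcode to JSTOR URL is trivial (a sketch based on the example above; note that the barcode is lowercased in the URL):

def jstor_url(barcode):
    # E00279162 -> https://plants.jstor.org/stable/10.5555/al.ap.specimen.e00279162
    return "https://plants.jstor.org/stable/10.5555/al.ap.specimen." + barcode.lower()

print(jstor_url("E00279162"))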

JSTOR created an elegant interface to these specimens, complete with links to literature on JSTOR, BHL, and links to taxon pages on GBIF and elsewhere. It also added the ability to comment on individual specimens using Disqus.

However, JSTOR Global Plants is not open. If you click on a thumbnail image of a herbarium sheet you hit a paywall.

In contrast data in GBIF is open. The table below is a simplified comparison of JSTOR and GBIF.

Feature                       | JSTOR         | GBIF
Open or paywall               | Paywall       | Open
Consistent identifier        | Yes           | No
Images                        | All specimens | Some specimens
Types linked to original name | Yes           | Sometimes
Community annotation          | Yes           | No
Can download the data         | No            | Yes
API                           | No            | Yes

JSTOR offers a consistent identifier (the barcode), images, types linked to the original name, and community annotation. But there is a paywall, and no way to download data. GBIF is open, enables both bulk download and API access, but often lacks images, and as we shall see below, the identifiers for specimens are a hot mess.

The “Types linked to original name” feature concerns whether the type specimen is connected to the appropriate name. A type is (usually) the type specimen for a single taxonomic name. For example, E00279162 is the type for Achasma subterraneum Holttum. This name is now regarded as a synonym of Etlingera subterranea (Holttum) R. M. Sm. following the transfer to the genus Etlingera. But E00279162 is not a type for the name Etlingera subterranea. JSTOR makes this clear by stating that the type is stored under Etlingera subterranea but is the type for Achasma subterraneum. However, this information does not make it to GBIF, which tells us that E00279162 is a type for Etlingera subterranea and that it knows of no type specimens for Achasma subterraneum. Hence querying GBIF for type specimens is potentially fraught with error.

Hence JSTOR often has cleaner and more accurate data. But it is behind a paywall. Hence I set about getting a list of all the type specimens that JSTOR has, and trying to match those to GBIF. This would give me a sense of how much content behind JSTOR’s paywall was freely available in GBIF, as well as how much content JSTOR had that was absent from GBIF. I also wanted to use JSTOR’s reference to the original plant name to get around GBIF’s tendency to link types to the wrong name.

Challenges

Mapping JSTOR barcodes to records in GBIF proved challenging. In an ideal world specimens would have a single identifier that everyone would use when citing or otherwise referring to that specimen. Of course this is not the case. There are all manner of identifiers, ranging from barcodes, collector names and numbers, local database keys (integers, UUIDs, and anything in between). Some identifiers include version codes. All of this greatly complicates linking barcodes to GBIF records. I made extensive use of my Material examined tool that attempts to translate specimen codes into GBIF records. Under the hood this means lots of regular expressions, and I spent a lot of time adding code to handle all the different ways herbaria manage to mangle barcodes.
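
The flavour of those regular expressions is something like this (a sketch, not the actual code in the tool):

import re

def normalize_barcode(code):
    # Split a barcode such as "E00279162" (or a mangled variant such as
    # "E 279162") into herbarium code and number, then re-pad to eight digits
    m = re.match(r"^([A-Z]+)[\s.-]*0*(\d+)$", code.strip())
    if not m:
        return None
    herbarium, number = m.groups()
    return herbarium + format(int(number), "08d")

print(normalize_barcode("E00279162"))  # E00279162
print(normalize_barcode("E 279162"))   # E00279162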

In some cases JSTOR barcodes are absent from the specimen information in the GBIF occurrence record itself but are hidden in metadata for the image (such as the URL to the image). My “Material examined” tool uses the GBIF API, and that doesn’t enable searches for parts of image URLs. Hence for some herbaria I had to download the archive, extract media URLs and look for barcodes. In the process I encountered a subtle bug in Safari that truncated downloads, see Downloads failing to include all files in the archive.

Some herbaria have data in both JSTOR and GBIF, but no identifiers in common (other than collector names and numbers, which would require approximate string matching). But in some cases the herbaria have their own web sites which mention the JSTOR barcodes, as well as the identifiers those herbaria do share with GBIF. In these cases I would attempt to scrape the herbaria web sites, extract the barcode and original identifier, then find the original identifier in GBIF.

Another observation is that in some cases the imagery in JSTOR is not the same as in GBIF. For example LISC002383 and 813346859 are the same specimen but the images are different. Why are the images provided to JSTOR not being provided to GBIF?

In the process of making this mapping it became clear that there are herbaria that aren’t in GBIF, for example Singapore (SING) is not in GBIF but instead is hosted at Oxford University (!) at https://herbaria.plants.ox.ac.uk/bol/sing. There seem to be a number of herbaria that have content in JSTOR but not GBIF, hence GBIF has gaps in its coverage of type specimens.

Interestingly, JSTOR rarely seems to be a destination for links. An exception is the Paris museum; for example, specimen MPU015018 has a link to the JSTOR record for the same specimen MPU015018.

Matching taxonomic names

As a check on matching JSTOR to GBIF I would also check that the taxonomic names associated with the two records are the same. The challenge here is that the names may have changed. Ideally both JSTOR and GBIF would have either a history of name changes, or at least the original name the specimen was associated with (i.e., the name for which the specimen is the type). And of course, this isn’t the case. So I relied on a series of name comparisons, such as “are the names the same?”, “if the names are different, are the specific epithets the same?”, and “if the specific epithets are different, are the generic names the same?”. Because the spelling of species names can change depending on the gender of the genus, I also used some stemming rules to catch names that were the same even if their ending was different.
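
The stemming can be crude and still be useful. A sketch (the list of endings here is illustrative rather than exhaustive):

# Compare specific epithets after stripping common Latin gender endings,
# so that "subterraneum" matches "subterranea" (see the example above)
ENDINGS = ("um", "us", "is", "a", "e")  # longest endings first

def stem(epithet):
    for ending in ENDINGS:
        if epithet.endswith(ending):
            return epithet[:-len(ending)]
    return epithet

print(stem("subterraneum") == stem("subterranea"))  # True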

This approach will still miss some matches, such as hybrid names, and cases where a specimen is stored under a completely different name (e.g., the original name is a heterotypic synonym of a different name).

Mapping

The mapping made so far is available on GitHub https://github.com/rdmpage/jstor-plant-specimens and Zenodo https://doi.org/10.5281/zenodo.10044359.

At the time of writing I have retrieved 1,354,861 records for type specimens from JSTOR, of which 903,945 (67%) have been matched to GBIF.

This has been a sobering lesson in just how far we are from being able to treat specimens as citable things; we simply don’t have decent identifiers for them. JSTOR made a lot of progress, but that has been hampered by being behind a paywall, and by the fact that many of these identifiers are being lost or mangled by the time they make their way into GBIF, which is arguably where most people get information on specimens.

There’s an argument that it would be great to get JSTOR Global Plants into GBIF. It would certainly add a lot of extra images, and also provide a presence for a number of smaller herbaria that aren’t in GBIF. I think there’s also a case to be made for having a GBIF hosted portal for plant type specimens, to help make these valuable objects more visible and discoverable.

Below is a bar chart of the top 50 herbaria ranked by number of type specimens in JSTOR, showing the numbers of specimens mapped to GBIF (red) and those not found (blue).

Reading

  • Boyle, B., Hopkins, N., Lu, Z. et al. The taxonomic name resolution service: an online tool for automated standardization of plant names. BMC Bioinformatics 14, 16 (2013). https://doi.org/10.1186/1471-2105-14-16

  • CETAF Stable Identifiers (CSI)

  • CETAF Specimen URI Tester

  • Holttum, R. E. (1950). The Zingiberaceae of the Malay Peninsula. Gardens’ Bulletin, Singapore, 13(1), 1-249. https://biostor.org/reference/163926

  • Hyam, R.D., Drinkwater, R.E. & Harris, D.J. Stable citations for herbarium specimens on the internet: an illustration from a taxonomic revision of Duboscia (Malvaceae) Phytotaxa 73: 17–30 (2012). https://doi.org/10.11646/phytotaxa.73.1.4

  • Rees T (2014) Taxamatch, an Algorithm for Near (‘Fuzzy’) Matching of Scientific Names in Taxonomic Databases. PLoS ONE 9(9): e107510. https://doi.org/10.1371/journal.pone.0107510

  • Ryan D (2018) Global Plants: A Model of International Collaboration . Biodiversity Information Science and Standards 2: e28233. https://doi.org/10.3897/biss.2.28233

  • Ryan, D. (2013), THE GLOBAL PLANTS INITIATIVE CELEBRATES ITS ACHIEVEMENTS AND PLANS FOR THE FUTURE. Taxon, 62: 417-418. https://doi.org/10.12705/622.26

  • (2016), Global Plants Sustainability: The Past, The Present and The Future. Taxon, 65: 1465-1466. https://doi.org/10.12705/656.38

  • Smith, G.F. and Figueiredo, E. (2013), Type specimens online: What is available, what is not, and how to proceed; Reflections based on an analysis of the images of type specimens of southern African Polygala (Polygalaceae) accessible on the worldwide web. Taxon, 62: 801-806. https://doi.org/10.12705/624.5

  • Smith, R. M. (1986). New combinations in Etlingera Giseke (Zingiberaceae). Notes from the Royal Botanic Garden Edinburgh, 43(2), 243-254.

  • Anna Svensson; Global Plants and Digital Letters: Epistemological Implications of Digitising the Directors’ Correspondence at the Royal Botanic Gardens, Kew. Environmental Humanities 1 May 2015; 6 (1): 73–102. doi: https://doi.org/10.1215/22011919-3615907

Written with StackEdit.