Friday, July 28, 2023

Sub-second searching of millions of DNA barcodes using a vector database

How to cite: Page, R. (2023). Sub-second searching of millions of DNA barcodes using a vector database. https://doi.org/10.59350/qkn8x-mgz20

Recently I’ve been messing about with DNA barcodes. I’m a junior author with David Schindel on a forthcoming book chapter, Creating Virtuous Cycles for DNA Barcoding: A Case Study in Science Innovation, Entrepreneurship, and Diplomacy, and I’ve blogged about Adventures in machine learning: iNaturalist, DNA barcodes, and Lepidoptera. One thing I’ve always wanted is a simple way to explore DNA barcodes both geographically and phylogenetically. I’ve made various toys (e.g., Notes on next steps for the million DNA barcodes map and DNA barcode browser) but one big challenge has been search.

The goal is to take a DNA sequence, search the DNA barcode database for barcodes that are similar to that sequence, then build a phylogenetic tree for the results. And I want this to be fast. The approach I used in my “DNA barcode browser” was to use Elasticsearch and index the DNA sequences as n-grams (=k-mers). This worked well for small numbers of sequences, but when I tried it on millions of sequences things got very slow: typically a search took around eight seconds to complete, which is about the same as BLAST on my laptop for the same dataset. Search times like these are simply too slow, hence I put this work on the back burner. That is, until I started exploring vector databases.

Vector databases, as the name suggests, store vectors, that is, arrays of numbers. Many of the AI sites currently gaining attention use vector databases. For example, chat bots based on ChatGPT are typically taking text, converting it to an “embedding” (a vector), then searching in a database for similar vectors which, hopefully, correspond to documents that are related to the original query (see ChatGPT, semantic search, and knowledge graphs).

The key step is to convert the thing you are interested in (e.g., text, or an image) into an embedding, which is a vector of fixed length that encodes information about the thing. In the case of DNA sequences one way to do this is to use k-mers. These are short, overlapping fragments of the DNA sequence (see This is what phylodiversity looks like). In the case of k-mers of length 5 the embedding is a vector of the frequencies of the 4⁵ = 1,024 different k-mers for the letters A, C, G, and T.
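As a rough sketch of the idea (this is not the exact code I used, and it ignores complications such as ambiguity codes in real sequences), computing such an embedding might look something like this:

    from itertools import product

    def kmer_embedding(sequence, k=5):
        """Return a vector of k-mer frequencies for a DNA sequence.
        The vector has 4**k entries (1,024 for k = 5), one per possible k-mer."""
        kmers = ["".join(p) for p in product("ACGT", repeat=k)]
        index = {kmer: i for i, kmer in enumerate(kmers)}
        counts = [0] * len(kmers)
        sequence = sequence.upper()
        for i in range(len(sequence) - k + 1):
            kmer = sequence[i:i + k]
            if kmer in index:  # skip windows containing ambiguity codes such as N
                counts[index[kmer]] += 1
        total = sum(counts) or 1
        return [c / total for c in counts]

    vector = kmer_embedding("ACGTACGTTGCCATTAACGGT")  # toy fragment; real barcodes are ~650 bp
    print(len(vector))  # 1024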

But what do we do with these vectors? This is where the vector database comes in. Search in a vector database is essentially a nearest-neighbour search - we want to find vectors that are similar to our query vector. There has been a lot of cool research on this problem (which is now highly topical because of the burgeoning interest in machine learning), and not only are there vector databases, but tools to add this functionality to existing databases.

So, I decided to experiment. I grabbed a copy of PostgreSQL (not a database I’d used before), added the pgvector extension, then created a database with over 9 million DNA barcodes. After a bit of faffing around, I got it to work (code still needs cleaning up, but I will release something soon).
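To give a flavour of the setup (the table and column names here are invented, I’m driving PostgreSQL from Python with psycopg2 and reusing the kmer_embedding sketch above; my actual code differs and will appear once it’s cleaned up):

    import psycopg2

    conn = psycopg2.connect("dbname=barcodes")  # hypothetical database name
    cur = conn.cursor()

    # One-off setup: enable pgvector and create a table of barcode embeddings
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS barcode (
            accession text PRIMARY KEY,
            embedding vector(1024)  -- k-mer frequency vector, k = 5
        )
    """)
    conn.commit()

    # Nearest-neighbour search: the 100 barcodes closest to a query sequence
    query_sequence = "ACGTAGGCTTACCGGTTAAC"  # placeholder; a real query is a full barcode
    query_vector = kmer_embedding(query_sequence)
    vector_literal = "[" + ",".join(str(x) for x in query_vector) + "]"
    cur.execute(
        "SELECT accession FROM barcode ORDER BY embedding <-> %s::vector LIMIT 100",
        (vector_literal,),
    )
    hits = [row[0] for row in cur.fetchall()]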

So far the results are surprisingly good. If I enter a nucleotide sequence, such as JF491468 (Neacomys sp. BOLD:AAA7034 voucher ROM 118791), and search for the 100 most similar sequences I get back 100 Neacomys sequences in 0.14 seconds(!). I can then take the vectors for each of those sequences (i.e., the array of k-mer frequencies), compute a pairwise distance matrix, then build a phylogeny (in PAUP*, naturally).
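The distance matrix step can be as simple as Euclidean distances between those k-mer vectors (a sketch; in practice the matrix would be exported in a format a tree-building program such as PAUP* can read):

    import math

    def distance_matrix(vectors):
        """Pairwise Euclidean distances between k-mer frequency vectors."""
        n = len(vectors)
        d = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                dij = math.dist(vectors[i], vectors[j])
                d[i][j] = d[j][i] = dij
        return d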

Searches this rapid mean we can start to interactively explore large databases of DNA barcodes, as well as quickly take new, unknown sequences and ask “have we seen this before?”

As a general tool this approach has limitations. Vector databases have a limit on the size of vector they can handle, so k-mers much larger than 5 will not be feasible (unless the vectors are sparse in the sense that not all k-mers actually occur). Also it’s not clear to me how much this approach succeeds because of the nature of barcode data. Typically barcodes are either very similar to each other (i.e., from the same species), or they are quite different (the famous “barcode gap”). This may have implications for the success of nearest neighbour searching.

Still early days, but so far this has been a revelation, and opens up some interesting possibilities for how we could explore and interact with DNA barcodes.

Written with StackEdit.

Tuesday, July 18, 2023

What, if anything, is the Biodiversity Knowledge Hub?

How to cite: Page, R. (2023). What, if anything, is the Biodiversity Knowledge Hub? https://doi.org/10.59350/axeqb-q2w27

To much fanfare BiCIKL launched the “Biodiversity Knowledge Hub” (see Biodiversity Knowledge Hub is online!!!). This is advertised as a “game-changer in scientific research”. The snappy video in the launch tweet claims that the hub will

  • it will help your research thanks to interlinked data…
  • …and responds to complex queries with the services provided…

Interlinked data, complex queries, this all sounds very impressive. The video invites us to “Visit the Biodiversity Knowledge Hub and give it a shot”. So I did.

The first thing that strikes me is the following:

Disclaimer: The partner Organisations and Research Infrastructures are fully responsible for the provision and maintenance of services they present through BKH. All enquiries about a particular service should be sent directly to its provider.

If the organisation trumpeting a new tool takes no responsibility for that tool, then that is a red flag. To me it implies that they are not taking this seriously, they have no skin in the game. If this work mattered you’d have a vested interest in seeing that it actually worked.

I then tried to make sense of what the Hub is and what it offers.

Is it maybe an aggregation? Imagine diverse biodiversity datasets linked together in a single place so that we could seamlessly query across that data, bouncing from taxa to sequences to ecology and more. GBIF and GenBank are examples of aggregations where data is brought together, cleaned, reconciled, and services are built on top of that. You can go to GBIF and get distribution data for a species, you can go to GenBank and compare your sequence with millions of others. Is the Hub an aggregation?.. no, it is not.

Is it a federation? Maybe instead of merging data from multiple sources, the data lives on the original sites, but we can query across it a bit like a travel search engine queries across multiple airlines to find us the best flight. The data still needs to be reconciled, or at least share identifiers and vocabularies. Is the Hub a federation?.. no, it is not.

OK, so maybe we still have data in separate silos, but maybe the Hub is a data catalogue where we can search for data using text terms (a bit like Google’s Dataset Search)? Or even better, maybe it describes the data in machine readable terms so that we could find out what data are relevant to our interests (e.g., what data sets deal with taxa and ecological associations based on sequence data?). Is it a data catalogue? … no, it is not.

OK, then what actually is it?

It is a list. They built a list. If you go to FAIR DATA PLACE you see an invitation to EXPLORE LINKED DATA. Sounds inviting (“linked data, oohhh”) but it’s a list of a few projects: ChecklistBank, e-Biodiv, LifeBlock, OpenBiodiv, PlutoF, Biodiversity PMC, Biotic Interactions Browser, SIBiLS SPARQL Endpoint, Synospecies, and TreatmentBank.

These are not in any way connected, they all have distinct APIs, different query endpoints, speak different languages (e.g., REST, SPARQL), and there’s no indication that they share identifiers even if they overlap in content. How can I query across these? How can I determine whether any of these are relevant to my interests? What is the point in providing SPARQL endpoints (e.g., OpenBiodiv, SIBiLS, Synospecies) without giving the user any clue as to what they contain, what vocabularies they use, what identifiers, etc.?

The overall impression is of a bunch of tools with varying levels of sophistication stuck together on a web page. This is in no way a “game-changer”, nor is it “interlinked data”, nor is there any indication of how it supports “complex queries”.

It feels very much like the sort of thing one cobbles together as a demo when applying for funding. “Look at all these disconnected resources we have, give us money and we can join them together”. Instead it is being promoted as an actual product.

Instead of the hyperbole, why not tackle the real challenges here? At a minimum we need to know how each service describes data, those services should use the same vocabularies and identifiers for the same things, be able to tell us what entities and relationships they cover, and we should be able to query across them. This all involves hard work, obviously, so let’s stop pretending that it doesn’t and do that work, rather than claim that a list of web sites is a “game-changer”.

Written with StackEdit.

Adventures in machine learning: iNaturalist, DNA barcodes, and Lepidoptera

How to cite: Page, R. (2023). Adventures in machine learning: iNaturalist, DNA barcodes, and Lepidoptera. https://doi.org/10.59350/5q854-j4s23

Recently I’ve been working with a masters student, Maja Nagler, on a project using machine learning to identify images of Lepidoptera. This has been something of an adventure as I am new to machine learning, and have only minimal experience with the Python programming language. So what could possibly go wrong?

The inspiration for this project comes from (a) using iNaturalist’s machine learning to help identify pictures I take using their app, and (b) exploring DNA barcoding data, which has a wealth of images of specimens linked to DNA sequences (see gallery in GBIF) that are presumably reliably identified (by the barcodes). So, could we use the barcode images to build models to identify specimens? Is it possible to use models already trained on citizen science data, or do we need custom models trained on specimens? Can models trained on museum specimens be used to identify living specimens?

To answer this we’ve started simple, using the iNaturalist 2018 competition as a starting point. There is code on GitHub for an entry in that challenge, and the challenge data is available, so the idea was to take that code and model and see how well they work on DNA barcode images.

That was the plan. I ran into a slew of Python-related issues involving out of date code, dependencies, and issues with running on a MacBook. Python is, well, a mess. I know there are ways to “tame” the mess, but I’m amazed that anyone can get anything done in machine learning given how temperamental the tools are.

Another consideration is that machine learning is computationally intensive, and typically uses PCs with NVIDIA GPUs. Macs don’t have these chips. However, Apple’s newer Macs provide Metal Performance Shaders (MPS), which does speed things up. But getting everything to work together was a nightmare. This is a field full of obscure incantations, bugs, and fixes. I describe some of the things I went through in the README for the repository. Note that this code is really a safety net. Maja is working on a more recent model (using Google’s Colab), I just wanted to make sure that we had a backup in place in case my notion that this ML stuff would be “easy” turned out to be, um, wrong.
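For what it’s worth, the device-selection dance in PyTorch looks roughly like this (a sketch assuming a recent PyTorch build with MPS support; the model here is a stand-in, not the network we actually trained):

    import torch
    import torch.nn as nn

    if torch.backends.mps.is_available():
        device = torch.device("mps")   # Apple silicon GPU via Metal Performance Shaders
    elif torch.cuda.is_available():
        device = torch.device("cuda")  # NVIDIA GPU
    else:
        device = torch.device("cpu")

    # A stand-in model and batch, just to show the pattern: the model and every
    # tensor that takes part in training must be moved to the same device.
    model = nn.Linear(2048, 1234).to(device)  # 1234 = number of Lepidoptera species
    batch = torch.randn(8, 2048, device=device)
    logits = model(batch)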

Long story short, everything now works. Because our focus is Lepidoptera (moths and butterflies) I ended up subsetting the original challenge dataset to include just those taxa. This resulted in 1234 species. This is obviously a small number, but it means we can train a reasonable model in less than a week (ML is really, really, computationally expensive).

There is still lots to do, but I want to share a small result. After training the model on Lepidoptera from the iNaturalist 2018 dataset, I ran a small number of images from the DNA barcode dataset through it. The results are encouraging. For example, for Junonia villida all the barcoded specimens were either correctly identified (green) or were in the top three hits (orange) (the way the code works is that it outputs the top three hits for each image). So a model trained on citizen science images of (mostly) living specimens can identify museum specimens.

For other species the results are not so great, but are still interesting. For example, for Junonia orithya quite a few images are not correctly identified (red). Looking at the images, it seems specimens photographed ventrally are going to be a problem (this is unlikely to be a common angle for photographs of living specimens), and specimens with scale grids and QR codes are unlikely to be seen in the wild(!).

An obvious thing to do would be to train a model based on DNA barcode specimens and see how well it identifies citizen science images (and Maja will be doing just that). If that works well, then that would suggest that there is scope for expanding models for identifying live insects to include museum specimen images (and vice versa), see also Towards a digital natural history museum.

It is early days, still lots of work to do, and deadlines are pressing, but I’m looking forward to seeing how Maja’s project evolves. Perhaps the pain of Python, PyTorch, MPS, etc. will all be worth it.

Written with StackEdit.

Saturday, June 17, 2023

A taxonomic search engine

How to cite: Page, R. (2023). A taxonomic search engine. https://doi.org/10.59350/r3g44-d5s15

Tony Rees commented on my recent post Ten years and a million links. I’ve responded to some of his comments, but I think the bigger question deserves more space, hence this blog post.

Tony’s comment

Hi Rod, I like what you’re doing. Still struggling (a little) to find the exact point where it answers the questions that are my “entry points” so to speak, which (paraphrasing a post of yours from some years back) start with:

  • Is this a name that “we” (the human race I suppose) recognise as having been used for a taxon (think Global Names Resolver, Taxamatch, etc.) - preferably an automatable query and response (i.e., a machine can ask it and incorporated the result into a workflow)
  • Does it refer to a currently accepted taxon or if not, what is the accepted equivalent
  • What is its taxonomic placement (according to one or a range of expert systems)
  • Also, for reporting/comparison/analysis purposes…
    - How many accepted taxa (at whatever rank) are currently known in group X (or the whole world)
  • How many new names (accepted or unaccepted) were published in year A (or date range A-C)
  • How many new names were published (or co-authored) by author Z
  • (and probably more)

Having access to more of the primary literature is great, and necessary, but does not help me in those respects (since the published works must still be parsed by a human, not a machine). But maybe it does answer some other questions like how many original works were published by author Z, in a particular time frame.

Of course as you will be aware, using ORCIDs for authors is only a small portion of the puzzle, since ORCIDs are not issued for deceased authors, or those who never request one, so far as I am aware.

None of the above is a criticism of what you are doing! Just trying to see if I can establish any new linkages to what you are doing which will enable me to automate portions of my own efforts to a greater degree (let machines do things that currently still require a human). So far (as evidenced by the most recent ION data dump you were able to supply) it is giving me a DOI in many cases as a supplement to the title of the original work (per ION/Zoological Record) which is something of a time saver in my quest to read the original work (from which I could extract the DOI as well once reached) but does not really automate anything since I still have to try and find it in order to peruse the content.

Mostly random thoughts above, possibly of no use, but I do ruminate on the universe of connected “things not strings” in the hope that one day life will get easier for biodiversity informatics workers, or maybe that the “book of life” will be self-assembling…

My response

I think there are several ways to approach this. I’ll walk through them below, but TL;DR

  • Define the questions we have and how we would get the answers. For example, what combination database and SQL queries, or web site and API calls, or knowledge graph and SPARQL do we need to answer each question?
  • Decide what sort of interface(s) we want. Do we want a web site with a search box, a bunch of API calls, or a natural language interface?
  • If we want natural language, how do we do that? Do we want a ChatBot?
  • And as an aside, how can we speed up reading the taxonomic literature?

The following are more notes than a reasoned essay. I wanted to record a bunch of things to help me think about these topics.

Build a natural language search engine

One of the first things I read that opened my eyes to the potential of OpenAI-powered tools and how to build them was Paul Graham GPT which I talked about here. This is a simple question and answer tool that takes a question and returns an answer, based on Paul Graham’s blog posts. We could do something similar for taxonomic names (or indeed, anything where we have some text and want to query it). At its core we have a bunch of blocks of text, embeddings for those blocks, then we get embeddings for the question and find the best-matching embeddings for the blocks of text.

Generating queries

One approach is to use ChatGPT to formulate a database query based on a natural language question. There have been a bunch of people exploring generating SPARQL queries from natural language, e.g. ChatGPT for Information Retrieval from Knowledge Graph, ChatGPT Exercises — Generating a Course Description Knowledge Graph using RDF, and Wikidata and ChatGPT, and this could be explored for other query languages.

So in this approach we take natural language questions and get back the queries we need to answer those questions. We then go away and run those queries.
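As a sketch of this step, using the OpenAI Python client as it was at the time (pre-1.0); the model name is just an example, and the vocabulary described in the system prompt is a placeholder for whatever the real knowledge graph actually uses:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = """You translate natural language questions about a taxonomic
    knowledge graph into SPARQL queries. Placeholder vocabulary: taxa are
    schema:Taxon, publications are schema:ScholarlyArticle, and names are linked
    to the publications they appeared in. Return only the SPARQL query."""

    def question_to_sparql(question):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response["choices"][0]["message"]["content"]

    print(question_to_sparql("How many species of Abronia were described after 1950?"))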

Generating answers

This still leaves us with what to do with the answers. Given, say, a SPARQL response, we could have code that generates a bunch of simple sentences from that response, e.g. “name x is a synonym of name y”, “there are ten species in genus x”, “name x was published in the paper zzz”, etc. We then pass those sentences to an AI to summarise into nicer natural language. We should aim for something like the Wikipedia-derived snippets from DBpedia (see Ozymandias meets Wikipedia, with notes on natural language generation). Indeed, we could help make more engaging answers by adding DBpedia snippets for the relevant taxa, abstracts from relevant papers, etc. to the SPARQL results and ask the AI to summarise all of that.
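The sentence-generating step could start out as simple as this (a sketch; the variable names in the SPARQL bindings are hypothetical and would depend on the query):

    def verbalise(sparql_json):
        """Turn SPARQL SELECT results (application/sparql-results+json) into
        simple sentences that can be handed to an AI for summarising.
        The variable names (name, synonym, year) are hypothetical."""
        sentences = []
        for row in sparql_json["results"]["bindings"]:
            name = row["name"]["value"]
            synonym = row["synonym"]["value"]
            year = row["year"]["value"]
            sentences.append(f"{synonym} is a synonym of {name}, published in {year}.")
        return " ".join(sentences)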

Skipping queries altogether

Another approach is to generate all the answers ahead of time. Essentially, we take our database or knowledge graph and generate simplified sentences summarising everything we know: “species x was described by author y in 1920”, “species x was synonymised with species y in 1967”, etc. We then get embeddings for these answers, store them in a vector database, and we can query them using a chatbot-style interface.

There is a big literature on embedding RDF (see RDF2vec.org), and also converting RDF to sentences. These “RDF verbalisers” are further discussed on the WebNLG Challenge pages, and an example is here: jsRealB - A JavaScript Bilingual Text Realizer for Web Development.

This approach is like the game Jeopardy!: we generate all the answers and the goal is to match the user’s question to one or more of those answers.

Machine readability

Having access to more of the primary literature is great, and necessary, but does not help me in those respects (since the published works must still be parsed by a human, not a machine).

This is a good point, but help is at hand. There are a bunch of AI tools to “read” the literature for you, such as SciSpace’s Copilot. I think there’s a lot we could do to explore these tools. We could also couple them with the name - publication links in the ten year library. For example, if we know that there is a link between a name and a DOI, and we have the text for the article with that DOI we could then ask targeted questions regarding what the papers says about that name. One way to implement this is to do something similar to the Paul Graham GPT demo described above. We take the text of the paper, chunk it into smaller blocks (e.g., paragraphs), get embeddings for each block, add those to a vector database, and we can then search that paper (and others) using natural language. We could imagine an API that takes a paper and splits out the core “facts” or assertions that the paper makes. This also speaks to Notes on collections, knowledge graphs, and Semantic Web browsers where I bemoaned the lack of a semantic web browser.

Summary

I think the questions being asked are all relatively straightforward to answer, we just need to think a little bit about the best way to answer them. Much of what I’ve written above is focussed on making such a system more broadly useful and engaging, with richer answers than a simple database query. But a first step is to define the questions and the queries that would answer them, then figure out what interface to wrap this up in.

Written with StackEdit.

Wednesday, May 31, 2023

Ten years and a million links

As trailed on a Twitter thread last week I’ve been working on a manuscript describing the efforts to map taxonomic names to their original descriptions in the taxonomic literature.

The preprint is on bioRxiv doi:10.1101/2023.05.29.542697

A major gap in the biodiversity knowledge graph is a connection between taxonomic names and the taxonomic literature. While both names and publications often have persistent identifiers (PIDs), such as Life Science Identifiers (LSIDs) or Digital Object Identifiers (DOIs), LSIDs for names are rarely linked to DOIs for publications. This article describes efforts to make those connections across three large taxonomic databases: Index Fungorum, International Plant Names Index (IPNI), and the Index of Organism Names (ION). Over a million names have been matched to DOIs or other persistent identifiers for taxonomic publications. This represents approximately 36% of names for which publication data is available. The mappings between LSIDs and publication PIDs are made available through ChecklistBank. Applications of this mapping are discussed, including a web app to locate the citation of a taxonomic name, and a knowledge graph that uses data on researcher’s ORCID ids to connect taxonomic names and publications to authors of those names.

Much of the work has been linking taxa to names, which still has huge gaps. There are also interesting differences in coverage between plants, animals, and fungi (see preprint for details).

There is also a simple app to demonstrate these links, see https://species-cite.herokuapp.com.

Written with StackEdit.

Tuesday, April 25, 2023

Library interfaces, knowledge graphs, and Miller columns

Some quick notes on interface ideas for digital libraries and/or knowledge graphs.

Recently there’s been something of an explosion in bibliographic tools to explore the literature. Examples include:

  • Elicit which uses AI to search for and summarise papers
  • _scite which uses AI to do sentiment analysis on citations (does paper A cite paper B favourably or not?)
  • ResearchRabbit which uses lists, networks, and timelines to discover related research
  • Scispace which navigates connections between papers, authors, topics, etc., and provides AI summaries.

As an aside, I think these (and similar tools) are a great example of how bibliographic data such as abstracts, the citation graph, and - to a lesser extent - full text have become commodities. That is, what was once proprietary information is now free to anyone, which in turn means a whole ecosystem of new tools can emerge. If I was clever I’d be building a Wardley map to explore this. Note that a decade or so ago reference managers like Zotero were made possible by publishers exposing basic bibliographic data on their articles. As we move to open citations we are seeing the next generation of tools.

Back to my main topic. As usual, rather than focus on what these tools do I’m more interested in how they look. I have history here, when the iPad came out I was intrigued by the possibilities it offered for displaying academic articles, as discussed here, here, here, here, and here. ResearchRabbit looks like this:

Scispace’s “trace” view looks like this:

What is interesting about both is that they display content from left to right in vertical columns, rather than the more common horizontal rows. This sort of display is sometimes called Miller columns or a cascading list.

By Gürkan Sengün (talk) - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=594715

I’ve always found displaying a knowledge graph to be a challenge, as discussed elsewhere on this blog and in my paper on Ozymandias. Miller columns enable one to drill down in increasing depth, but it doesn’t need to be a tree, it can be a path within a network. What I like about ResearchRabbit and the original Scispace interface is that they present the current item together with a list of possible connections (e.g., authors, citations) that you can drill down on. Clicking on these will result in a new column being appended to the right, with a view (typically a list) of the next candidates to visit. In graph terms, these are adjacent nodes to the original item. The clickable badges on each item can be thought of as sets of edges that have the same label (e.g., “authored by”, “cites”, “funded”, “is about”, etc.). Each of these nodes itself becomes a starting point for further exploration. Note that the original starting point isn’t privileged, other than being the starting point. That is, each time we drill down we are seeing the same type of information displayed in the same way. Note also that the navigation can be thought of as a card for a node, with buttons grouping the adjacent nodes. When we click on an individual button, it expands into a list in the next column. This can be thought of as a preview for each adjacent node. Clicking on an element in the list generates a new card (we are viewing a single node) and we get another set of buttons corresponding to the adjacent nodes.

One important behaviour in a Miller column interface is that the current path can be pruned at any point. If we go back (i.e., scroll to the left) and click on another tab on an item, everything downstream of that item (i.e., to the right) gets deleted and replaced by a new set of nodes. This could make retrieving a particular history of browsing a bit tricky, but encourages exploration. Both Scispace and ResearchRabbit have the ability to add items to a collection, so you can keep track of things you discover.

Lots of food for thought, I’m assuming that there is some user interface/experience research on Miller columns. One thing to remember is that Miller columns are most often associated with trees, but in this case we are exploring a network. That means that potentially there is no limit to the number of columns being generated as we wander through the graph. It will be interesting to think about what the average depth is likely to be, in other words, how deep down the rabbit hole will we go?

Update

I should add a link to David Regev's explorations of Flow Browser.

Written with StackEdit.

Monday, April 03, 2023

ChatGPT, semantic search, and knowledge graphs

One thing about ChatGPT is it has opened my eyes to some concepts I was dimly aware of but am only now beginning to fully appreciate. ChatGPT enables you ask it questions, but the answers depend on what ChatGPT “knows”. As several people have noted, what would be even better is to be able to run ChatGPT on your own content. Indeed, ChatGPT itself now supports this using plugins.

Paul Graham GPT

However, it’s still useful to see how to add ChatGPT functionality to your own content from scratch. A nice example of this is Paul Graham GPT by Mckay Wrigley. Mckay Wrigley took essays by Paul Graham (a well known venture capitalist) and built a question and answer tool very like ChatGPT.

Because you can send a block of text to ChatGPT (as part of the prompt) you can get ChatGPT to summarise or transform that information, or answer questions based on that information. But there is a limit to how much information you can pack into a prompt. You can’t put all of Paul Graham’s essays into a prompt for example. So a solution is to do some preprocessing. For example, given a question such as “How do I start a startup?” we could first find the essays that are most relevant to this question, then use them to create a prompt for ChatGPT. A quick and dirty way to do this is simply do a text search over the essays and take the top hits. But we aren’t searching for words, we are searching for answers to a question. The essay with the best answer might not include the phrase “How do I start a startup?”.

Enter Semantic search. The key concept behind semantic search is that we are looking for documents with similar meaning, not just similarity of text. One approach to this is to represent documents by “embeddings”, that is, a vector of numbers that encapsulate features of the document. Documents with similar vectors are potentially related. In semantic search we take the query (e.g., “How do I start a startup?”), compute its embedding, then search among the documents for those with similar embeddings.

To create Paul Graham GPT Mckay Wrigley did the following. First he sent each essay to the OpenAI API underlying ChatGPT, and in return he got the embedding for that essay (a vector of 1536 numbers). Each embedding was stored in a database (Mckay uses Postgres with pgvector). When a user enters a query such as “How do I start a startup?” that query is also sent to the OpenAI API to retrieve its embedding vector. Then we query the database of embeddings for Paul Graham’s essays and take the top five hits. These hits are, one hopes, the most likely to contain relevant answers. The original question and the most similar essays are then bundled up and sent to ChatGPT which then synthesises an answer. See his GitHub repo for more details. Note that we are still using ChatGPT, but on a set of documents it doesn’t already have.
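In outline the whole loop is only a few calls. Here is a sketch using the pre-1.0 openai Python library; for simplicity it compares vectors in memory rather than in Postgres, so it illustrates the idea rather than reproducing Mckay Wrigley’s actual code:

    import openai  # pre-1.0 openai library; assumes OPENAI_API_KEY is set

    def embed(text):
        """Get the 1536-dimension embedding OpenAI returns for a piece of text."""
        result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return result["data"][0]["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norms

    def answer(question, essays, top_n=5):
        # In practice the essay embeddings would be computed once and stored
        # (e.g., in Postgres with pgvector) rather than recomputed per question.
        q = embed(question)
        ranked = sorted(essays, key=lambda e: cosine(q, embed(e)), reverse=True)
        context = "\n\n".join(ranked[:top_n])
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Answer the question using only the essays provided."},
                {"role": "user", "content": context + "\n\nQuestion: " + question},
            ],
        )
        return response["choices"][0]["message"]["content"]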

Knowledge graphs

I’m a fan of knowledge graphs, but they are not terribly easy to use. For example, I built a knowledge graph of Australian animals Ozymandias that contains a wealth of information on taxa, publications, and people, wrapped up in a web site. If you want to learn more you need to figure out how to write queries in SPARQL, which is not fun. Maybe we could use ChatGPT to write the SPARQL queries for us, but it would be much more fun to be simply ask natural language queries (e.g., “who are the experts on Australian ants?”). I made some naïve notes on these ideas Possible project: natural language queries, or answering “how many species are there?” and Ozymandias meets Wikipedia, with notes on natural language generation.

Of course, this is a well known problem. Tools such as RDF2vec can take RDF from a knowledge graph and create embeddings which could in turn be used to support semantic search. But it seems to me that we could simplify this process a bit by making use of ChatGPT.

Firstly we would generate natural language statements from the knowledge graph (e.g., “species x belongs to genus y and was described in z”, “this paper on ants was authored by x”, etc.) that cover the basic questions we expect people to ask. We then get embeddings for these (e.g., using OpenAI). We then have an interface where people can ask a question (“is species x a valid species?”, “who has published on ants?”, etc.), we get the embedding for that question, retrieve the natural language statements that are closest in embedding “space”, package everything up, and ask ChatGPT to summarise the answer.

The trick, of course, is to figure out how to generate natural language statements from the knowledge graph (which amounts to deciding what paths to traverse in the knowledge graph, and how to write those paths as something approximating English). We also want to know something about the sorts of questions people are likely to ask so that we have a reasonable chance of having the answers (for example, are people going to ask about individual species, or questions about summary statistics such as numbers of species in a genus, etc.).

What makes this attractive is that it seems a straightforward way to go from a largely academic exercise (build a knowledge graph) to something potentially useful (a question and answer machine). Imagine if something like the defunct BBC wildlife site (see Blue Planet II, the BBC, and the Semantic Web: a tale of lessons forgotten and opportunities lost) revived here had a question and answer interface where we could ask questions rather than passively browse.

Summary

I have so much more to learn, and need to think about ways to incorporate semantic search and ChatGPT-like tools into knowledge graphs.

Written with StackEdit.

ChatGPT, of course

I haven’t blogged for a while, work and other reasons have meant I’ve not had much time to think, and mostly I blog to help me think.

ChatGPT is obviously a big thing at the moment, and once we get past the moral panic (“students can pass exams using AI!”) there are a lot of interesting possibilities to explore. Inspired by essays such as How Q&A systems based on large language models (eg GPT4) will change things if they become the dominant search paradigm — 9 implications for libraries and Cheating is All You Need, as well as Paul Graham GPT (https://paul-graham-gpt.vercel.app), I thought I’d try a few things and see where this goes.

ChatGPT can do some surprising things.

Parse bibliographic data

I spend a LOT of time working with bibliographic data, trying to parse it into structured data. ChatGPT can do this:

Note that it does more than simply parse the strings, it expands journal abbreviations such as “J. Malay Brch. R. Asiat. Soc.” to the full name “Journal of the Malayan Branch of the Royal Asiatic Society”. So we can get clean, parsed data in a range of formats.

Parse specimens

Based on the success with parsing bibliographic strings I wondered how well it could handle specimen citations (“material examined”). Elsewhere I’ve been critical of Plazi’s ability to do this, see Problems with Plazi parsing: how reliable are automated methods for extracting specimens from the literature?.

For example, given this specimen record on p. 130 of doi:10.5852/ejt.2021.775.1553

LAOS • Kammoune Province, Bunghona Market, 7 km N of Xe Bangfai River;
17.13674° N, 104.98591° E; E. Jeratthitikul, K. Wisittikoson, A. Fanka, N. Wutthituntisil and P. Prasankok leg.; sold by local people;
MUMNH-UNI2831.

ChatGPT extracted a plausible Darwin Core record:

I’ve been experimenting with parsing specimen records using the same machine learning approach for bibliographic data (e.g., Citation parsing tool released), perhaps it’s time to rethink that idea.

Correct OCR errors

OCR errors are a well known limitation of the Biodiversity Heritage Library (BHL), see for example OCR Improvements: An Early Analysis. Given that ChatGPT is built on top of a large language model it seems obvious to see how it handles OCR errors. I tried the following snippet of text from BHL page 4312227:

i BREVIORA No. 185

Jiiaii Jr., 16, who always join him and Hedriek in th(> search for
frogs. Iloraeio Mayorga, research assistant to tlie author, has also
been (wtreniely useful, not only in field work, but also in the
tedious curatorial and laboratory work that usually fol- lows. Dr.
John Randall kindly took tlie photouraplis of E. hedricki, and the
Galiiiaues family of San Juan made their El Verde house available as
a (MMiter of activities for the colleetiuff party. To all these
jx’opic, the autlioi- feels (iee])ly indebted.

Elet’tiierodactylus iiKnRrcKi s]). n.

Type. Museum of Coniparative Zoology Xo. ;}()!)()8, c^ from El Verde,
west flank of El Yuiuinc Puerto Hico. I.IOO ft. Coll. J. A. Kivero,
11 Aug-. I!)(i2.

ChatGPT returns this:

i BREVIORA No. 185

Juan Jr., 16, who always joins him and Hedrick in the search for
frogs. Horacio Mayorga, research assistant to the author, has also
been extremely useful, not only in field work, but also in the tedious
curatorial and laboratory work that usually follows. Dr. John Randall
kindly took the photographs of E. hedricki, and the Galinanes family
of San Juan made their El Verde house available as a center of
activities for the collecting party. To all these people, the author
feels deeply indebted.

Eleutherodactylus hedricki sp. n.

Type. Museum of Comparative Zoology No. 30908, c? from El Verde, west
flank of El Yunque, Puerto Rico. 1100 ft. Coll. J. A. Rivero, 11 Aug.
1962.

Comparing this to the scanned image, ChatGPT does pretty well; for example the gobbledegook “Elet’tiierodactylus iiKnRrcKi” is correctly translated as “Eleutherodactylus hedricki”. Running all of BHL through ChatGPT probably isn’t feasible, but one could imagine targeted cleaning of key papers.

Summary

These small experiments are fairly trivial, but they are the sort of tedious tasks that would otherwise require significant programming (or other resources) to solve. But ChatGPT can do rather more, as I hope to discuss in the next post.

Written with StackEdit.

Tuesday, March 14, 2023

Dugald Stuart Page 1936-2022

My dad died last weekend. Below is a notice in today's New Zealand Herald. I'm in New Zealand for his funeral. Don't really have the words for this right now.

Friday, December 16, 2022

David Remsen

I heard yesterday from Martin Kalfatovic (BHL) that David Remsen has died. Very sad news. It's starting to feel like iPhylo might end up being a list of obituaries of people working on biodiversity informatics (e.g., Scott Federhen).

I spent several happy visits at MBL at Woods Hole talking to Dave at the height of the uBio project, which really kickstarted large scale indexing of taxonomic names, and the use of taxonomic name finding tools to index the literature. His work on uBio with David ("Paddy") Patterson led to the Encyclopedia of Life (EOL).

A number of the things I'm currently working on are things Dave started. For example, I recently uploaded a version of his dataset for Nomenclator Zoologicus[1] to ChecklistBank where I'm working on augmenting that original dataset by adding links to the taxonomic literature. My BioRSS project is essentially an attempt to revive uBioRSS[2] (see Revisiting RSS to monitor the latest taxonomic research).

I have fond memories of those visits to Woods Hole. A very sad day indeed.

Update: The David Remsen Memorial Fund has been set up on GoFundMe.

1. Remsen, D. P., Norton, C., & Patterson, D. J. (2006). Taxonomic Informatics Tools for the Electronic Nomenclator Zoologicus. The Biological Bulletin, 210(1), 18–24. https://doi.org/10.2307/4134533

2. Patrick R. Leary, David P. Remsen, Catherine N. Norton, David J. Patterson, Indra Neil Sarkar, uBioRSS: Tracking taxonomic literature using RSS, Bioinformatics, Volume 23, Issue 11, June 2007, Pages 1434–1436, https://doi.org/10.1093/bioinformatics/btm109

Thursday, September 29, 2022

The ideal taxonomic journal

This is just some random notes on an “ideal” taxonomic journal, inspired in part by some recent discussions on “turbo-taxonomy” (e.g., https://doi.org/10.3897/zookeys.1087.76720 and https://doi.org/10.1186/1742-9994-10-15), and also examples such as the Australian Journal of Taxonomy https://doi.org/10.54102/ajt.qxi3r which seems well-intentioned but limited.

XML

One approach is to have highly structured text that embeds detailed markup, and ideally a tool that generates markup in XML. This is the approach taken by Pensoft. There is an inevitable trade-off between the burden on authors of marking up text versus making the paper machine readable. In some ways this seems misplaced effort given that there is little evidence that publications by themselves have much value (see The Business of Extracting Knowledge from Academic Publications). “Value” in this case means as a source of data or factual statements that we can compute over. Human-readable text is not a good way to convey this sort of information.

It’s also interesting that many editing tools are going in the opposite direction, for example there are minimalist tools using Markdown where the goal is to get out of the author’s way, rather than impose a way of writing. Text is written by humans for humans, so the tools should be human-friendly.

The idea of publishing using XML is attractive in that it gives you XML that can be archived by, say, PubMed Central, but other than that the value seems limited. A cursory glance at download stats for journals that provide PDF and XML downloads, such as PLoS One and ZooKeys, shows that PDF is by far the more popular format. So arguably there is little value in providing XML. Those who have tried to use JATS-XML as an authoring tool have not had a happy time: How we tried to JATS XML. However, there are various tools to help with the process, such as docxToJats, texture, and jats-xml-to-pdf, if this is the route one wants to take.

Automating writing manuscripts

The dream, of course, is to have a tool where you store all your taxonomic data (literature, specimens, characters, images, sequences, media files, etc.) and at the click of a button generate a paper. Certainly some of this can be automated, much nomenclatural and specimen information could be converted to human-readable text. Ideally this computer-generated text would not be edited (otherwise it could get out of sync with the underlying data). The text should be transcluded. As an aside, one way to do this would be to include things such as lists of material examined as images rather than text while the manuscript is being edited. In the same way that you (probably) wouldn’t edit a photograph within your text editor, you shouldn’t be editing data. When the manuscript is published the data-generated portions can then be output as text.

Of course all of this assumes that we have taxonomic data in a database (or some other storage format, including plain text and Markdown, e.g. Obsidian, markdown, and taxonomic trees) that can generate outputs in the various formats that we need.

Archiving data and images

One of the really nice things that Plazi do is have a pipeline that sends taxonomic descriptions and images to Zenodo, and similar data to GBIF. Any taxonomic journal should be able to do this. Indeed, arguably each taxonomic treatment within the paper should be linked to the Zenodo DOI at the time of publication. Indeed, we could imagine ultimately having treatments as transclusions within the larger manuscript. Alternatively we could store the treatments as parts of the larger article (rather like chapters in a book), each with a CrossRef DOI. I’m still sceptical about whether these treatments are as important as we make out, see Does anyone cite taxonomic treatments?. But having machine-readable taxonomic data archived and accessible is a good thing. Uploading the same data to GBIF makes much of that data immediately accessible. Now that GBIF offers hosted portals there is the possibility of having custom interfaces to data from a particular journal.

Name and identifier registration

We would also want automatic registration of new taxonomic names, for which there are pipelines (see “A common registration-to-publication automated pipeline for nomenclatural acts for higher plants (International Plant Names Index, IPNI), fungi (Index Fungorum, MycoBank) and animals (ZooBank)” https://doi.org/10.3897/zookeys.550.9551). These pipelines do not seem to be documented in much detail, and the data formats differ across registration agencies (e.g., IPNI and ZooBank). For example, ZooBank seems to require TaxPub XML.

Registration of names and identifiers, especially across multiple registration agencies (ZooBank, CrossRef, DataCite, etc.) requires some coordination, especially when one registration agency requires identifiers from another.

Summary

If data is key, then the taxonomic paper itself becomes something of a wrapper around that data. It still serves the function of being human-readable, providing broader context for the work, and acting as an archive that conforms to currently accepted ways to publish taxonomic names. But in some ways it is the least interesting part of the process.

Written with StackEdit.

Wednesday, September 14, 2022

DNA barcoding as intergenerational transfer of taxonomic knowledge

I tweeted about this but want to bookmark it for later as well. The paper “A molecular-based identification resource for the arthropods of Finland” doi:10.1111/1755-0998.13510 contains the following:

…the annotated barcode records assembled by FinBOL participants represent a tremendous intergenerational transfer of taxonomic knowledge … the time contributed by current taxonomists in identifying and contributing voucher specimens represents a great gift to future generations who will benefit from their expertise when they are no longer able to process new material.

I think this is a very clever way to characterise the project. In an age of machine learning this may be the commonest way to share knowledge, namely as expert-labelled training data used to build tools for others. Of course, this means the expertise itself may be lost, which has implications for updating the models if the data isn’t complete. But it speaks to Charles Godfrey’s theme of “Taxonomy as information science”.

Note that the knowledge is also transformed in the sense that the underlying expertise of interpreting morphology, ecology, behaviour, genomics, and the past literature is not what is being passed on. Instead it is probabilities that a DNA sequence belongs to a particular taxon.

This feels different to, say, iNaturalist, where there is a machine learning model to identify images. In that case, the model is built on something the community itself has created, and continues to create. Yes, the underlying idea is the same: “experts” have labelled the data, a model is trained, the model is used. But the benefits of the iNaturalist model are immediately applicable to the people whose data built the model. In the case of barcoding, because the technology itself is still not in the hands of many (relative to, say, digital imaging), the benefits are perhaps less tangible. Obviously researchers working with environmental DNA will find it very useful, but broader impact may await the arrival of citizen science DNA barcoding.

The other consideration is whether the barcoding helps taxonomists. Is it to be used to help prioritise future work (“we are getting lots of unknown sequences in these taxa, lets do some taxonomy there”), or is it simply capturing the knowledge of a generation that won’t be replaced:

The need to capture such knowledge is essential because there are, for example, no young Finnish taxonomists who can critically identify species in many key groups of arthropods (e.g., aphids, chewing lice, chalcid wasps, gall midges, most mite lineages).

The cycle of collect data, test and refine model, collect more data, rinse and repeat that happens with iNaturalist creates a feedback loop. It’s not clear that a similar cycle exists for DNA barcoding.

Written with StackEdit.

Thursday, September 08, 2022

Local global identifiers for decentralised wikis

I've been thinking a bit about how one could use a Markdown wiki-like tool such as Obsidian to work with taxonomic data (see earlier posts Obsidian, markdown, and taxonomic trees and Personal knowledge graphs: Obsidian, Roam, Wikidata, and Xanadu).

One "gotcha" would be how to name pages. If we treat the database as entirely local, then the page names don't matter, but what if we envisage sharing the database, or merging it with others (for example, if we divided a taxon up into chunks, and different people worked on those different chunks)?

This is the attraction of globally unique identifiers. You and I can independently work on the same thing, such as data linked to scientific paper, safe in the knowledge that if we both use the DOI for that paper we can easily combine what we've done. But global identifiers can also be a pain, especially if we need to use a service to look them up ("is there a DOI for this paper?", "what is the LSID for this taxonomic name?").

Life would be easier if we could generate identifiers "locally", but had some assurance that they would be globally unique, and that anyone else generating an identifier for the same thing would arrive at the same identifier (this eliminates things such as UUIDs, which are intentionally designed to prevent people generating the same identifier). One approach is "content addressing" (see, e.g. Principles of Content Addressing - dead link but in the Wayback Machine, see also btrask/stronglink). For example, we can generate a cryptographic hash of a file (such as a PDF) and use that as the identifier.
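A sketch of what that looks like in practice (SHA-1 here simply because it yields digests like the one quoted below; the file name is a placeholder):

    import hashlib

    def content_address(path):
        """An identifier derived purely from a file's bytes: anyone hashing the
        same file gets the same identifier, with no registry needed."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    print(content_address("paper.pdf"))  # e.g. 6c98136eba9084ea9a5fc0b7693fed8648014505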

Now the problem is that we have globally unique, but ugly and unfriendly identifiers (such as "6c98136eba9084ea9a5fc0b7693fed8648014505"). What we need are nice, easy to use identifiers we can use as page names. Wikispecies serves as a possible role model, where taxon names serve as page names, as do simplified citations (e.g., authors and years). This model runs into the problem that taxon names aren't unique, nor are author + year combinations. In Wikispecies this is resolved by having a centralised database where it's first come, first served. If there is a name clash you have to create a new name for your page. This works, but what if you have multiple databases run by different people? How do we ensure the identifiers are the same?

Then I remembered Roger Hyam's flight of fantasy over a decade ago: SpeciesIndex.org – an impractical, practical solution. He proposed the following rules to generate a unique URI for a taxonomic name:

  • The URI must start with "http://speciesindex.org" followed by one or more of the following separated by slashes.
  • First word of name. Must only contain letters. Must not be the same as one of the names of the nomenclatural codes (icbn or iczn). Optional but highly recommended.
  • Second word of name. Must only contain letters and not be a nomenclatural code name. Optional.
  • Third word of name. Must only contain letters and not be a nomenclatural code name. Optional.
  • Year of publication. Must be an integer greater than 1650 and equal to or less than the current year. If this is an ICZN name then this should be the year the species (epithet) was published as is commonly cited after the name. If this is an ICBN name at species or below then it is the date of the combination. Optional. Recommended for zoological names if known. Not recommended for botanical names unless there is a known problem with homonyms in use by non-taxonomists.
  • Nomenclatural code governing the name of the taxon. Currently this must be either 'icbn' or 'iczn'. This may be omitted if the code is unknown or not relevant. Other codes may be added to this list.
  • Qualifier This must be a Version 4 RFC-4122 UUID. Optional. Used to generate a new independent identifier for a taxon for which the conventional name is unknown or does not exist or to indicate a particular taxon concept that bears the embedded name.
  • The whole speciesindex.org URI string should be considered case sensitive. Everything should be lower case apart from the first letter of words that are specified as having upper case in their relevant codes e.g. names at and above the rank of genus.

Roger is basically arguing that while names aren't unique (i.e., we have homonyms such as Abronia) they are pretty close to being so, and with a few tweaks we can come up with a unique representation. Another way to think about this: if we had a database of all taxonomic names, we could construct a trie and for each name find the shortest set of name parts (genus, species, etc.), year, and code that gives us a unique string for that name. In many cases the species name may be all we need, in other cases we may need to add year and/or nomenclatural code to arrive at a unique string.
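A toy version of that idea (a linear scan standing in for the trie, and the years in the example are only illustrative):

    def shortest_unique_keys(names):
        """names is a list of (genus, species, year, code) tuples. For each name,
        return the shortest slash-separated string of its leading parts that no
        other name in the list shares."""
        keys = {}
        for name in names:
            for length in range(1, len(name) + 1):
                candidate = "/".join(str(p).lower() for p in name[:length])
                clashes = sum(
                    1 for other in names
                    if "/".join(str(p).lower() for p in other[:length]) == candidate
                )
                if clashes == 1:
                    keys[name] = candidate
                    break
        return keys

    names = [
        ("Abronia", "graminea", 1864, "iczn"),  # lizard; years are only illustrative
        ("Abronia", "villosa", 1880, "icbn"),   # plant
        ("Neacomys", "spinosus", 1882, "iczn"),
    ]
    for key in shortest_unique_keys(names).values():
        print(key)  # abronia/graminea, abronia/villosa, neacomys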

What about bibliographic references? Well many of us will have databases (e.g., Endnote, Mendeley, Zotero, etc.) which generate "cite keys". These are typically short, memorable identifiers for a reference that are unique within that database. There is an interesting discussion on the JabRef forum regarding a "Universal Citekey Generator", and source code is available at cparnot/universal-citekey-js. I've yet to explore this in detail, but it looks like a promising way to generate unique identifiers from basic metadata (echoes of more elaborate schemes such as SICIs). For example,

Senna AR, Guedes UN, Andrade LF, Pereira-Filho GH. 2021. A new species of amphipod Pariphinotus Kunkel, 1910 (Amphipoda: Phliantidae) from Southwestern Atlantic. Zool Stud 60:57. doi:10.6620/ZS.2021.60-57.
becomes "Senna:2021ck". So if two people have the same, core, metadata for a paper they can generate the same key.

Hence it seems with a few conventions (and maybe some simple tools to support them) we could have decentralised wiki-like tools that used the same identifiers for the same things, and yet those identifiers were short and human-friendly.

Thursday, September 01, 2022

Does anyone cite taxonomic treatments?

Taxonomic treatments have come up in various discussions I'm involved in, and I'm curious as to whether they are actually being used, in particular, whether they are actually being cited. Consider the following quote:
The taxa are described in taxonomic treatments, well defined sections of scientific publications (Catapano 2019). They include a nomenclatural section and one or more sections including descriptions, material citations referring to studied specimens, or notes ecology and behavior. In case the treatment does not describe a new discovered taxon, previous treatments are cited in the form of treatment citations. This citation can refer to a previous treatment and add additional data, or it can be a statement synonymizing the taxon with another taxon. This allows building a citation network, and ultimately is a constituent part of the catalogue of life. - Taxonomic Treatments as Open FAIR Digital Objects https://doi.org/10.3897/rio.8.e93709

"Traditional" academic citation is from article to article. For example, consider these two papers:

Li Y, Li S, Lin Y (2021) Taxonomic study on fourteen symphytognathid species from Asia (Araneae, Symphytognathidae). ZooKeys 1072: 1-47. https://doi.org/10.3897/zookeys.1072.67935
Miller J, Griswold C, Yin C (2009) The symphytognathoid spiders of the Gaoligongshan, Yunnan, China (Araneae: Araneoidea): Systematics and diversity of micro-orbweavers. ZooKeys 11: 9-195. https://doi.org/10.3897/zookeys.11.160

Li et al. 2021 cites Miller et al. 2009 (although Pensoft seems to have broken the citation such that it does not appear correctly either on their web page or in CrossRef).

So, we have this link: [article]10.3897/zookeys.1072.67935 --cites--> [article]10.3897/zookeys.11.160. One article cites another.

In their 2021 paper Li et al. discuss Patu jidanweishi Miller, Griswold & Yin, 2009:

There is a treatment for the original description of Patu jidanweishi at https://doi.org/10.5281/zenodo.3792232, which was created by Plazi with a time stamp "2020-05-06T04:59:53.278684+00:00". The original publication date was 2009; the treatments are being added retrospectively.

In an ideal world my expectation would be that Li et al. 2021 would have cited the treatment, instead of just providing the text string "Patu jidanweishi Miller, Griswold & Yin, 2009: 64, figs 65A–E, 66A, B, 67A–D, 68A–F, 69A–F, 70A–F and 71A–F (♂♀)." Isn't the expectation under the treatment model that we would have seen this relationship:

[article]10.3897/zookeys.1072.67935 --cites--> [treatment]https://doi.org/10.5281/zenodo.3792232

Furthermore, if it is the case that "[i]n case the treatment does not describe a new discovered taxon, previous treatments are cited in the form of treatment citations" then we should also see a citation between treatments, in other words Li et al.'s 2021 treatment of Patu jidanweishi (which doesn't seem to have a DOI but is available on Plazi's web site as https://tb.plazi.org/GgServer/html/1CD9FEC313A35240938EC58ABB858E74) should also cite the original treatment? It doesn't - but it does cite the Miller et al. paper.

So in this example we don't see articles citing treatments, nor do we see treatments citing treatments. Playing Devil's advocate, why then do we have treatments? Doesn't the lack of citations suggest that - despite some taxonomists saying the treatment is the unit that matters - in practice it isn't treated that way? If we pay attention to what people do rather than what they say they do, they cite articles.

Now, there are all sorts of reasons why we don't see [article] -> [treatment] citations, or [treatment] -> [treatment] citations. Treatments are being added after the fact by Plazi, not by the authors of the original work. And in many cases the treatments that could be cited haven't appeared until after that potentially citing work was published. In the example above the Miller et al. paper dates from 2009, but the treatment extracted only went online in 2020. And while there is a long standing culture of citing publications (ideally using DOIs) there isn't an equivalent culture of citing treatments (beyond the simple text strings).

Obviously this is but one example. I'd need to do some exploration of the citation graph to get a better sense of citation patterns, perhaps using CrossRef's event data. But my sense is that taxonomists don't cite treatments.
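One quick way to start exploring is the Crossref Event Data API (a sketch; I'm assuming the public api.eventdata.crossref.org endpoint and using the Patu jidanweishi treatment DOI from above - coverage of Zenodo DOIs in Event Data is patchy, so an empty result wouldn't by itself prove that nobody cites the treatment):

    import requests

    # Ask Crossref Event Data for events where the treatment DOI is the *object*,
    # i.e. something (a paper, a dataset, a tweet) points at the treatment.
    url = "https://api.eventdata.crossref.org/v1/events"
    params = {
        "obj-id": "https://doi.org/10.5281/zenodo.3792232",  # Patu jidanweishi treatment
        "mailto": "you@example.org",  # replace with your own address (polite pool)
    }
    message = requests.get(url, params=params, timeout=30).json()["message"]
    for event in message.get("events", []):
        print(event["subj_id"], event["relation_type_id"], event["obj_id"])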

I'm guessing Plazi would respond by saying treatments are cited, for example (indirectly) in GBIF downloads. This is true, although arguably people aren't citing the treatment, they're citing specimen data in those treatments, and that specimen data could be extracted at the level of articles rather than treatments. In other words, it's not the treatments themselves that people are citing.

To be clear, I think there is value in being able to identify those "well defined sections" of a publication that deal with a given taxon (i.e., treatments), but it's not clear to me that these are actually the citable units people might hope them to be. Likewise, journals such as ZooKeys have DOIs for individual figures. Does anyone actually cite those?

Wednesday, August 24, 2022

Can we use the citation graph to measure the quality of a taxonomic database?

More arm-waving notes on taxonomic databases. I've started to add data to ChecklistBank and this has got me thinking about the issue of data quality. When you add data to ChecklistBank you are asked to give a measure of confidence based on the Catalogue of Life Checklist Confidence system of one to five stars: ★ - ★★★★★. I'm sceptical about the notion of confidence or "trust" when it is reduced to a star system (see also Can you trust EOL?). I could literally pick any number of stars; there's no way to measure what number of stars is appropriate. This feeds into my biggest reservation about the Catalogue of Life: it's almost entirely authority based, not evidence based. That is, rather than give us evidence for why a particular taxon is valid, we are (mostly) just given a list of taxa and asked to accept those as gospel, based on assertions by one or more authorities. I'm not necessarily doubting the knowledge of those making these lists, it's just that I think we need to do better than the "these are the accepted taxa because I say so" implicit in the Catalogue of Life.

So, is there any way we could objectively measure the quality of a particular taxonomic checklist? Since I have a long standing interest in linking the primary taxonomic literature to names in databases (that's where the evidence is, after all), I keep wondering whether measures based on that literature could be developed.

I recently revisited the fascinating (and quite old) literature on rates of synonymy:

Gaston, K. J. and Mound, L. A. (1993) Taxonomy, hypothesis testing and the biodiversity crisis. Proc. R. Soc. Lond. B 251: 139–142. https://doi.org/10.1098/rspb.1993.0020
Solow, A. R., Mound, L. A. and Gaston, K. J. (1995) Estimating the rate of synonymy. Systematic Biology 44(1): 93–96. https://doi.org/10.1093/sysbio/44.1.93

A key point these papers make is that the observed rate of synonymy is quite high (that is, many "new species" end up being merged with already known species), and that because it can take time to discover that a species is a synonym the actual rate may be even higher. In other words, in diagrams like the one reproduced below, the reason the proportion of synonyms declines the nearer we get to the present day (this paper came out in 1995) is not because we are creating fewer synonyms but because we've not yet had time to do the work to uncover the remaining synonyms.

Put another way, these papers are arguing that the real work of taxonomy is revision, not species discovery, especially since it's not uncommon for > 50% of species in a taxon to end up being synonymised. Indeed, if a taxonomic group has few synonyms then these authors would argue that's a sign of neglect: more revisionary work would likely uncover additional synonyms. So, what we need is a way to measure the amount of research on a taxonomic group. It occurs to me that we could use the citation graph as a way to tackle this. Let's imagine we have a set of taxa (say a family) and we have all the papers that described new species or undertook revisions (or both). The extensiveness of that work could be measured by the citation graph. For example, build the citation graph for those papers. How many original species descriptions are not cited? Those species have been potentially neglected. How many large-scale revisions have there been (as measured by the numbers of taxonomic papers those revisions cite)? There are some interesting approaches to quantifying this, such as using hubs and authorities.
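As a toy illustration of the hubs-and-authorities idea (a sketch using the networkx implementation of HITS; the papers and citation links below are invented):

    import networkx as nx

    # Toy citation graph: edges point from citing paper to cited paper.
    G = nx.DiGraph()
    G.add_edges_from([
        ("revision_2020", "description_1901"),
        ("revision_2020", "description_1923"),
        ("revision_2020", "revision_1950"),
        ("revision_1950", "description_1901"),
        ("revision_1950", "description_1880"),
    ])
    G.add_node("description_2015")  # a description nothing has cited yet

    hubs, authorities = nx.hits(G)

    # Revisions that cite widely act as hubs; descriptions that revisions
    # keep citing act as authorities.
    print("hubs:", sorted(hubs.items(), key=lambda x: -x[1]))
    print("authorities:", sorted(authorities.items(), key=lambda x: -x[1]))

    # Descriptions with no incoming citations are candidates for neglect.
    print("never cited:", [n for n in G if G.in_degree(n) == 0 and n.startswith("description")])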

I'm aware that taxonomists have not had the happiest relationship with citations:

Pinto ÂP, Mejdalani G, Mounce R, Silveira LF, Marinoni L, Rafael JA. Are publications on zoological taxonomy under attack? R Soc Open Sci. 2021 Feb 10;8(2):201617. doi: 10.1098/rsos.201617. PMID: 33972859; PMCID: PMC8074659.

Still, I think there is an intriguing possibility here. For this approach to work, we need to have linked taxonomic names to publications, and have citation data for those publications. This is happening on various platforms. Wikidata, for example, is becoming a repository of the taxonomic literature, some of it with citation links.

Page RDM. 2022. Wikidata and the bibliography of life. PeerJ 10:e13712 https://doi.org/10.7717/peerj.13712

Time for some experiments.

Monday, August 22, 2022

Linking taxonomic names to the literature

Just some thoughts as I work through some datasets linking taxonomic names to the literature.

In the diagram above I've tried to capture the different situations I encounter. Much of the work I've done on this has focussed on case 1 in the diagram: I want to link a taxonomic name to an identifier for the work in which that name was published. In practice this means linking names to DOIs. This has the advantage of linking to a citable identifier, raising questions such as whether citations of taxonomic papers by taxonomic databases could become part of a taxonomist's Google Scholar profile.

In many taxonomic databases full work-level citations are not the norm; instead taxonomists cite one or more pages within a work that are relevant to a taxonomic name. These "microcitations" (what the U.S. legal profession refers to as "point citations" or "pincites", see What are pincites, pinpoints, or jump legal references?) require some work to map to the work itself (which is typically the thing that has a citable identifier such as a DOI).

Microcitations (case 2 in the diagram above) can be quite complex. Some might simply mention a single page, but others might list a series of (not necessarily contiguous) pages, as well as figures, plates etc. Converting these to citable identifiers can be tricky, especially as in most cases we don't have page-level identifiers. The Biodiversity Heritage Library (BHL) does have URLs for each scanned page, and we have a standard for referring to pages in a PDF (page=<pageNum>, see RFC 8118). But how do we refer to a set of pages? Do we pick the first page? Do we try and represent a set of pages, and if so, how?
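For what it's worth, here's what the page-level links we do have look like (a sketch; the BHL PageID and PDF URL below are placeholders). It doesn't solve the "set of pages" problem, it just makes the "pick the first page" option concrete:

    def bhl_page_url(page_id):
        # Link to a single scanned page in the Biodiversity Heritage Library.
        return f"https://www.biodiversitylibrary.org/page/{page_id}"

    def pdf_page_fragment(pdf_url, page_number):
        # RFC 8118 fragment identifier: open a PDF at a given page.
        return f"{pdf_url}#page={page_number}"

    # A microcitation such as "2009: 64, figs 65A-E" might, at best, be reduced
    # to a link to the first cited page (placeholder identifiers):
    print(bhl_page_url(12345678))
    print(pdf_page_fragment("https://example.org/paper.pdf", 64))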

Another issue with page-level identifiers is that not everything on a given page may be relevant to the taxonomic name. In case 2 above I've shaded in the parts of the pages and figure that refer to the taxonomic name. An example where this can be problematic is the recent test case I created for BHL where a page image was included for the taxonomic name Aphrophora impressa. The image includes the species description and an illustration, as well as text that relates to other species.

Given that not everything on a page need be relevant, we could extract just the relevant blocks of text and illustrations (e.g., paragraphs of text, panels within a figure, etc.) and treat that set of elements as the thing to cite. This is, of course, what Plazi are doing. The set of extracted blocks is glued together as a "treatment", assigned an identifier (often a DOI), and treated as a citable unit. It would be interesting to see to what extent these treatments are actually cited, for example, do subsequent revisions that cite works containing treatments cite those treatments, or just the works themselves? Put another way, are we creating "threads" between taxonomic revisions?

One reason for these notes is that I'm exploring uploading taxonomic name - literature links to ChecklistBank and case 1 above is easy, as is case 3 (if we have treatment-level identifiers). But case 2 is problematic because we are linking to a set of things that may not have an identifier, which means a decision has to be made about which page to link to, and how to refer to that page.

Wednesday, August 03, 2022

Papers citing data that cite papers: CrossRef, DataCite, and the Catalogue of Life

Quick notes to self following on from a conversation about linking taxonomic names to the literature. There are different sorts of citation:
  1. Paper cites another paper
  2. Paper cites a dataset
  3. Dataset cites a paper
Citation type (1) is largely a solved problem (although there are issues around the ownership and use of this data, see e.g. Zootaxa has no impact factor). Citation type (2) is becoming more widespread (but not perfect, as GBIF's #citethedoi campaign demonstrates). But the idea is well accepted and there are guides to how to do it, e.g.:
Cousijn, H., Kenall, A., Ganley, E. et al. A data citation roadmap for scientific publishers. Sci Data 5, 180259 (2018). https://doi.org/10.1038/sdata.2018.259
However, things do get problematic because most (but not all) DOIs for publications are managed by CrossRef, which has an extensive citation database linking papers to other papers. Most datasets have DataCite DOIs, and DataCite manages its own citation links, but as far as I'm aware these two systems don't really talk to each other. Citation type (3) is the case where a database is largely based on the literature, which applies to taxonomy. Taxonomic databases are essentially collections of literature that have opinions on taxa, and the database may simply compile those (e.g., a nomenclator), or come to some view on the applicability of each name. In an ideal world, each reference included in a taxonomic database would gain a citation, which would help better reflect the value of that work (a long standing bone of contention for taxonomists).

It would be interesting to explore these issues further. CrossRef and DataCite do share Event Data (see also DataCite Event Data). Can this track citations of papers by a dataset? My take on Wayne's question:
Is there a way to turn those links into countable citations (even if just one per database) for Google Scholar?
is that he's after type 3 citations, which I don't think we have a way to handle just yet (but I'd need to look at Event Data a bit more). Google Scholar is a black box, and the academic community's reliance on it for metrics is troubling. But it would be interesting to try and figure out if there is a way to get Google Scholar to index the citations of taxonomic papers by databases. For instance, the Catalogue of Life has an ISSN (2405-884X) so it can be treated as a publication. At the moment its web pages have lots of identifiers for the people managing the data and their organisations (lots of ORCIDs and RORs), and DOIs for individual datasets (e.g., on checklistbank.org), but precious little in the way of DOIs for publications (or, indeed, ORCIDs for taxonomists). What would it take for taxonomic publications in the Catalogue of Life to be treated as first class citations?
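As a first experiment in that direction, DataCite's Event Data endpoint can be asked for links where a dataset DOI is the subject and the relation is "references" (a sketch; I'm assuming the api.datacite.org/events endpoint, and the dataset DOI below is a placeholder, not a real ChecklistBank dataset - whether such dataset-to-paper links exist at all is exactly the open question):

    import requests

    # Ask DataCite Event Data for "references" links from a dataset DOI
    # (the subject) to anything it cites (the objects).
    url = "https://api.datacite.org/events"
    params = {
        "subj-id": "https://doi.org/10.xxxx/example-dataset",  # placeholder DOI
        "relation-type-id": "references",
        "page[size]": 100,
    }
    resp = requests.get(url, params=params, timeout=30)
    for event in resp.json().get("data", []):
        attributes = event["attributes"]
        print(attributes["subj-id"], attributes["relation-type-id"], attributes["obj-id"])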

Friday, May 27, 2022

Round trip from identifiers to citations and back again

Note to self (basically rewriting last year's Finding citations of specimens).

Bibliographic data supports going from identifier to citation string and back again, so we can do a "round trip."

1.

Given a DOI we can get structured data with a simple HTTP fetch, then use a tool such as citation.js to convert that data into a human-readable string in a variety of formats.

[identifier]10.7717/peerj-cs.214 --HTTP with content negotiation--> [structured data]CSL-JSON --CSL templates--> [human readable string]Willighagen, L. G. (2019). Citation.js: a format-independent, modular bibliography tool for the browser and command line. PeerJ Computer Science, 5, e214. https://doi.org/10.7717/peerj-cs.214
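In code, step 1 is a couple of HTTP requests (a sketch in Python rather than citation.js; both Accept headers are the standard ones for DOI content negotiation):

    import requests

    doi = "10.7717/peerj-cs.214"

    # Content negotiation: ask the DOI resolver for CSL-JSON instead of a web page.
    csl = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    ).json()
    print(csl["title"], csl.get("container-title"), csl["issued"]["date-parts"])

    # Content negotiation can also return a formatted reference directly:
    apa = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "text/x-bibliography; style=apa"},
        timeout=30,
    )
    print(apa.text)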

2.

Going in the reverse direction (string to identifier) is a little more challenging. In the "old days" a typical strategy was to attempt to parse the citation string into structured data (see AnyStyle for a nice example of this), then extract a tuple of (journal, volume, starting page) and use that to query CrossRef to find whether there was an article with that tuple, which gave us the DOI.

[human readable string]Willighagen, L. G. (2019). Citation.js: a format-independent, modular bibliography tool for the browser and command line. PeerJ Computer Science, 5, e214. https://doi.org/10.7717/peerj-cs.214 --citation parser--> [structured data]journal, volume, start page --OpenURL query--> [identifier]10.7717/peerj-cs.214

3.

Another strategy is to take all the citations strings for each DOI, index those in a search engine, then just use a simple search to find the best match to your citation string, and hence the DOI. This is what https://search.crossref.org does.

[human readable string]Willighagen, L. G. (2019). Citation.js: a format-independent, modular bibliography tool for the browser and command line. PeerJ Computer Science, 5, e214. https://doi.org/10.7717/peerj-cs.214 --search--> [identifier]10.7717/peerj-cs.214
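Strategies 2 and 3 can both be approximated with the Crossref REST API (a sketch; query.bibliographic does roughly what search.crossref.org does, and there's no score threshold here, so the top hit still needs a sanity check):

    import requests

    citation = ("Willighagen, L. G. (2019). Citation.js: a format-independent, modular "
                "bibliography tool for the browser and command line. PeerJ Computer "
                "Science, 5, e214.")

    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1, "mailto": "you@example.org"},
        timeout=30,
    )
    best = resp.json()["message"]["items"][0]
    print(best["DOI"], best["score"])  # expect 10.7717/peerj-cs.214 with a high score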

At the moment my work on material citations (i.e., lists of specimens in taxonomic papers) is focussing on 1 (generating citations from specimen data in GBIF) and 2 (parsing citations into structured data).

Wednesday, May 11, 2022

Thoughts on TreeBASE dying(?)

So it looks like TreeBASE is in trouble, its legacy Java code a victim of security issues. Perhaps this is a chance to rethink TreeBASE, assuming that a repository of published phylogenies is still considered a worthwhile thing to have (and I think that question is open).

Here's what I think could be done.

  1. The data (individual studies with trees and data) are packaged into whatever format is easiest (NEXUS, XML, JSON) and uploaded to a repository such as Zenodo for long term storage. They get DOIs for citability. This becomes the default storage for TreeBASE.
  2. The data is transformed into JSON and indexed using Elasticsearch. A simple web interface is placed on top so that people can easily find trees (never a strong point of the original TreeBASE). Trees are displayed natively on the web using SVG. The number one goal is for people to be able to find trees, view them, and download them.
  3. To add data to TreeBASE the easiest way would be for people to upload it directly to Zenodo and tag it "treebase". A bot then grabs a feed of these datasets and adds them to the search engine in (2) above (see the sketch after this list). As time allows, add an interface where people upload data directly, it gets curated, then deposited in Zenodo. This presupposes that there are people available to do curation. Maybe have "stars" for the level of curation so that users know whether anyone has checked the data.
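A sketch of what the bot in step 3 might look like (assumptions: Zenodo's public records API queried for the keyword "treebase", a local Elasticsearch instance, and an invented index name and field mapping):

    import requests
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Fetch Zenodo records tagged "treebase" via the public records API.
    resp = requests.get(
        "https://zenodo.org/api/records",
        params={"q": 'keywords:"treebase"', "size": 100},
        timeout=30,
    )

    for record in resp.json()["hits"]["hits"]:
        meta = record["metadata"]
        doc = {
            "doi": record.get("doi"),
            "title": meta.get("title"),
            "creators": [c.get("name") for c in meta.get("creators", [])],
            "publication_date": meta.get("publication_date"),
            "files": [f.get("key") for f in record.get("files", [])],
        }
        # Use the Zenodo record id as the document id so reruns update rather than duplicate.
        es.index(index="treebase", id=record["id"], document=doc)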

There are lots of details to tweak, for example how many of the existing URLs for studies are preserved (some URL mapping would be needed), and what about the API? And I'm unclear about the relationship with Dryad.

My sense is that the TreeBASE code is very much of its time (10-15 years ago), a monolithic block of code with SQL, Java, etc. If one was starting from scratch today I don't think this would be the obvious solution. Things have trended towards being simpler, with lots of building blocks now available in the cloud. Need a search engine? Just spin up a container in the cloud and you have one. More and more functionality can be devolved elsewhere.

Another issue is how to support TreeBASE. It has essentially been a volunteer effort to date, with little or no funding. One reason I suggest Zenodo as a storage engine is that it takes care of the long term sustainability of the data.

I realise that this is all wild arm waving, but maybe now is the time to reinvent TreeBASE?

Updates

It's been a while since I've paid a lot of attention to phylogenetic databases, and it shows. There is a file-based storage system for phylogenies, phylesystem (see "Phylesystem: a git-based data store for community-curated phylogenetic estimates" https://doi.org/10.1093/bioinformatics/btv276), that is sort of what I had in mind, although long term persistence is based on GitHub rather than a repository such as Zenodo. Phylesystem uses a truly horrible-looking JSON transformation of NeXML (NeXML itself is ugly), and TreeBASE also supports NeXML, so some form of NeXML or a JSON transformation seems the obvious storage format. It will probably need some cleaning and simplification if it is to be indexed easily. Looking back over the long history of TreeBASE and phylogenetic databases I'm struck by how much complexity has been introduced over time. I think the tech has gotten in the way sometimes (which might just be another way of saying that I'm not smart enough to make sense of it all).

So we could imagine a search engine that covers both TreeBASE and Open Tree of Life studies.

Basic metadata-based searches would be straightforward, and we could have a user interface that highlights the trees (I think TreeBASE's biggest search rival is a Google image search). The harder problem is searching by tree structure, for which there is an interesting literature without any decent implementations that I'm aware of (as I said, I've been out of this field a while).

So my instinct is we could go a long way with simply indexing JSON (CouchDB or Elasticsearch), and then think a bit more cleverly about higher-taxon and tree-based searching. I've always thought that one killer query would be not so much "show me all the trees for my taxon" but "show me a synthesis of the trees for my taxon". Imagine a supertree of recent studies that we could use as a summary of our current knowledge, or a visualisation that summarises where there are conflicts among the trees.

Relevant code and sites