As trailed in a Twitter thread last week, I’ve been working on a manuscript describing the efforts to map taxonomic names to their original descriptions in the taxonomic literature.
Putting together a manuscript on linking taxonomic names to the primary literature, basically “um, what, exactly, have you been doing all these years?”. TL;DR Across fungi, plants, and animals approx 1.3 million names have been linked to a persistent identifier for a publication.
A major gap in the biodiversity knowledge graph is a connection between taxonomic names and the taxonomic literature. While both names and publications often have persistent identifiers (PIDs), such as Life Science Identifiers (LSIDs) or Digital Object Identifiers (DOIs), LSIDs for names are rarely linked to DOIs for publications. This article describes efforts to make those connections across three large taxonomic databases: Index Fungorum, International Plant Names Index (IPNI), and the Index of Organism Names (ION). Over a million names have been matched to DOIs or other persistent identifiers for taxonomic publications. This represents approximately 36% of names for which publication data is available. The mappings between LSIDs and publication PIDs are made available through ChecklistBank. Applications of this mapping are discussed, including a web app to locate the citation of a taxonomic name, and a knowledge graph that uses data on researchers’ ORCID iDs to connect taxonomic names and publications to the authors of those names.
Much of the work has been linking taxa to names, which still has huge gaps. There are also interesting differences in coverage between plants, animals, and fungi (see preprint for details).
Some quick notes on interface ideas for digital libraries and/or knowledge graphs.
Recently there’s been something of an explosion in bibliographic tools to explore the literature. Examples include:
Elicit which uses AI to search for and summarise papers
_scite which uses AI to do sentiment analysis on citations (does paper A cite paper B favourably or not?)
ResearchRabbit which uses lists, networks, and timelines to discover related research
Scispace which navigates connections between papers, authors, topics, etc., and provides AI summaries.
As an aside, I think these (and similar tools) are a great example of how bibliographic data such as abstracts, the citation graph, and to a lesser extent full text, have become commodities. That is, what was once proprietary information is now free to anyone, which in turn means a whole ecosystem of new tools can emerge. If I were clever I’d be building a Wardley map to explore this. Note that a decade or so ago reference managers like Zotero were made possible by publishers exposing basic bibliographic data on their articles. As we move to open citations we are seeing the next generation of tools.
Back to my main topic. As usual, rather than focus on what these tools do I’m more interested in how they look. I have history here: when the iPad came out I was intrigued by the possibilities it offered for displaying academic articles, as discussed here, here, here, here, and here. ResearchRabbit looks like this:
What is interesting about both ResearchRabbit and Scispace is that they display content from left to right in vertical columns, rather than the more common horizontal rows. This sort of display is sometimes called Miller columns or a cascading list.
I’ve always found displaying a knowledge graph to be a challenge, as discussed elsewhere on this blog and in my paper on Ozymandias. Miller columns enable one to drill down in increasing depth, but it doesn’t need to be a tree: it can be a path within a network. What I like about ResearchRabbit and the original Scispace interface is that they present the current item together with a list of possible connections (e.g., authors, citations) that you can drill down on. Clicking on these will result in a new column being appended to the right, with a view (typically a list) of the next candidates to visit. In graph terms, these are adjacent nodes to the original item. The clickable badges on each item can be thought of as sets of edges that have the same label (e.g., “authored by”, “cites”, “funded”, “is about”, etc.). Each of these nodes itself becomes a starting point for further exploration. Note that the original starting point isn’t privileged, other than being the starting point. That is, each time we drill down we are seeing the same type of information displayed in the same way.

Note also that the navigation can be thought of as a card for a node, with buttons grouping the adjacent nodes. When we click on an individual button, it expands into a list in the next column. This can be thought of as a preview for each adjacent node. Clicking on an element in the list generates a new card (we are viewing a single node) and we get another set of buttons corresponding to the adjacent nodes.
One important behaviour in a Miller column interface is that the current path can be pruned at any point. If we go back (i.e., scroll to the left) and click on another tab on an item, everything downstream of that item (i.e., to the right) gets deleted and replaced by a new set of nodes. This could make retrieving a particular history of browsing a bit tricky, but encourages exploration. Both Scispace and ResearchRabbit have the ability to add items to a collection, so you can keep track of things you discover.
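To make this concrete, here is a minimal sketch (in Python, with made-up node identifiers and edge labels) of the state behind such an interface: a left-to-right path of nodes, where expanding a badge previews adjacent nodes and selecting a node further back prunes everything downstream.

```python
class MillerPath:
    """A path of nodes through a graph, rendered as Miller columns."""

    def __init__(self, graph, start):
        self.graph = graph          # adjacency: node -> {edge_label: [neighbours]}
        self.columns = [start]      # left-to-right list of selected nodes

    def expand(self, column_index, edge_label):
        """Return the 'preview' list for a badge (edge label) on a card."""
        node = self.columns[column_index]
        return self.graph[node].get(edge_label, [])

    def select(self, column_index, node):
        """Select a node from a preview: prune everything to the right of the
        card we clicked on, then append the new node as the next column."""
        self.columns = self.columns[: column_index + 1] + [node]


# A toy graph: a paper, its author, and a cited paper (hypothetical identifiers).
graph = {
    "paper:1": {"authored by": ["person:smith"], "cites": ["paper:2"]},
    "person:smith": {"authored": ["paper:1", "paper:3"]},
    "paper:2": {"authored by": ["person:jones"]},
    "paper:3": {}, "person:jones": {},
}

path = MillerPath(graph, "paper:1")
print(path.expand(0, "cites"))    # preview column: ['paper:2']
path.select(0, "paper:2")         # drill down: ['paper:1', 'paper:2']
path.select(0, "person:smith")    # go back and branch: 'paper:2' is pruned
print(path.columns)               # ['paper:1', 'person:smith']
```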
Lots of food for thought. I’m assuming that there is some user interface/experience research on Miller columns. One thing to remember is that Miller columns are most often associated with trees, but in this case we are exploring a network. That means that potentially there is no limit to the number of columns being generated as we wander through the graph. It will be interesting to think about what the average depth is likely to be; in other words, how deep down the rabbit hole will we go?
Update
I should add a link to David Regev's explorations of Flow Browser.
One thing about ChatGPT is it has opened my eyes to some concepts I was dimly aware of but am only now beginning to fully appreciate. ChatGPT enables you to ask it questions, but the answers depend on what ChatGPT “knows”. As several people have noted, what would be even better is to be able to run ChatGPT on your own content. Indeed, ChatGPT itself now supports this using plugins.
Paul Graham GPT
However, it’s still useful to see how to add ChatGPT functionality to your own content from scratch. A nice example of this is Paul Graham GPT by Mckay Wrigley, who took essays by Paul Graham (a well-known venture capitalist) and built a question and answer tool very like ChatGPT.
Because you can send a block of text to ChatGPT (as part of the prompt) you can get ChatGPT to summarise or transform that information, or answer questions based on that information. But there is a limit to how much information you can pack into a prompt. You can’t put all of Paul Graham’s essays into a prompt, for example. So a solution is to do some preprocessing. For example, given a question such as “How do I start a startup?” we could first find the essays that are most relevant to this question, then use them to create a prompt for ChatGPT. A quick and dirty way to do this is simply to do a text search over the essays and take the top hits. But we aren’t searching for words, we are searching for answers to a question. The essay with the best answer might not include the phrase “How do I start a startup?”.
Semantic search
Enter semantic search. The key concept behind semantic search is that we are looking for documents with similar meaning, not just similarity of text. One approach to this is to represent documents by “embeddings”, that is, a vector of numbers that encapsulates features of the document. Documents with similar vectors are potentially related. In semantic search we take the query (e.g., “How do I start a startup?”), compute its embedding, then search among the documents for those with similar embeddings.
To create Paul Graham GPT Mckay Wrigley did the following. First he sent each essay to the OpenAI API underlying ChatGPT, and in return he got the embedding for that essay (a vector of 1536 numbers). Each embedding was stored in a database (Mckay uses Postgres with pgvector). When a user enters a query such as “How do I start a startup?” that query is also sent to the OpenAI API to retrieve its embedding vector. Then we query the database of embeddings for Paul Graham’s essays and take the top five hits. These hits are, one hopes, the most likely to contain relevant answers. The original question and the most similar essays are then bundled up and sent to ChatGPT which then synthesises an answer. See his GitHub repo for more details. Note that we are still using ChatGPT, but on a set of documents it doesn’t already have.
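Here is a minimal sketch of that pipeline in Python. It assumes the pre-1.0 openai package (the client calls have since changed), uses placeholder essay text, and does the similarity search in memory rather than in Postgres with pgvector as Mckay Wrigley’s app does.

```python
import numpy as np
import openai  # pre-1.0 openai package; newer versions use a different client API

def embed(text):
    """Get the embedding vector (1536 floats) for a piece of text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1. Index the essays once (the real app stores the vectors in Postgres + pgvector).
essays = {"startup.txt": "...essay text...", "hiring.txt": "...essay text..."}
index = {name: embed(text) for name, text in essays.items()}

# 2. Answer a question by retrieving the most similar essays and
#    bundling them into a ChatGPT prompt.
def answer(question, top_k=5):
    q = embed(question)
    best = sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)[:top_k]
    context = "\n\n".join(essays[n] for n in best)
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Using only these essays:\n\n{context}\n\n"
                              f"answer the question: {question}"}])
    return resp["choices"][0]["message"]["content"]
```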
Knowledge graphs
I’m a fan of knowledge graphs, but they are not terribly easy to use. For example, I built a knowledge graph of Australian animals, Ozymandias, that contains a wealth of information on taxa, publications, and people, wrapped up in a web site. If you want to learn more you need to figure out how to write queries in SPARQL, which is not fun. Maybe we could use ChatGPT to write the SPARQL queries for us, but it would be much more fun to simply ask natural language questions (e.g., “who are the experts on Australian ants?”). I made some naïve notes on these ideas in Possible project: natural language queries, or answering “how many species are there?” and in Ozymandias meets Wikipedia, with notes on natural language generation.
Of course, this is a well known problem. Tools such as RDF2vec can take RDF from a knowledge graph and create embeddings, which could in turn be used to support semantic search. But it seems to me that we could simplify this process a bit by making use of ChatGPT.
Firstly we would generate natural language statements from the knowledge graph (e.g., “species x belongs to genus y and was described in z”, “this paper on ants was authored by x”, etc.) that cover the basic questions we expect people to ask. We then get embeddings for these (e.g., using OpenAI). We then have an interface where people can ask a question (“is species x a valid species?”, “who has published on ants?”, etc.), we get the embedding for that question, retrieve the natural language statements that are the closest in embedding “space”, package everything up, and ask ChatGPT to summarise the answer.
The trick, of course, is to figure out how to generate natural language statements from the knowledge graph (which amounts to deciding what paths to traverse in the knowledge graph, and how to render those paths as something approximating English). We also want to know something about the sorts of questions people are likely to ask so that we have a reasonable chance of having the answers (for example, are people going to ask about individual species, or questions about summary statistics such as numbers of species in a genus, etc.).
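As a rough sketch (toy triples and hand-written templates, following the examples above), the verbalisation step might look something like this:

```python
# Toy triples from a knowledge graph, using the generic species x / genus y /
# publication z examples from above.
triples = [
    ("species:x", "inGenus", "genus:y"),
    ("species:x", "describedIn", "pub:z"),
    ("pub:z", "author", "person:a"),
]

# Templates that turn a predicate into an English sentence.
templates = {
    "inGenus":     "The species {s} belongs to the genus {o}.",
    "describedIn": "The species {s} was originally described in {o}.",
    "author":      "The publication {s} was authored by {o}.",
}

def verbalise(triples):
    """Generate natural language statements for predicates we have templates for."""
    for s, p, o in triples:
        if p in templates:
            yield templates[p].format(s=s, o=o)

statements = list(verbalise(triples))
# Each statement would then be embedded and indexed, exactly as for the
# Paul Graham essays above, so that a question such as "who described species x?"
# retrieves the relevant statements to pass to ChatGPT.
print(statements)
```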
What makes this attractive is that it seems a straightforward way to go from a largely academic exercise (build a knowledge graph) to something potentially useful (a question and answer machine). Imagine if something like the defunct BBC wildlife site (see Blue Planet II, the BBC, and the Semantic Web: a tale of lessons forgotten and opportunities lost, revived here) had a question and answer interface where we could ask questions rather than passively browse.
Summary
I have so much more to learn, and need to think about ways to incorporate semantic search and ChatGPT-like tools into knowledge graphs.
I spend a LOT of time working with bibliographic data, trying to parse it into structured data. ChatGPT can do this:
Note that it does more than simply parse the strings, it expands journal abbreviations such as “J. Malay Brch. R. Asiat. Soc.” to the full name “Journal of the Malayan Branch of the Royal Asiatic Society”. So we can get clean, parsed data in a range of formats.
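For example, here is a rough sketch of how one might script this (again assuming the pre-1.0 openai package; the prompt wording and output fields are mine, not a tested recipe):

```python
import json
import openai  # pre-1.0 openai package

# A raw citation string, e.g. one abbreviating the journal as
# "J. Malay Brch. R. Asiat. Soc."
citation = "..."

prompt = ("Parse this bibliographic citation into JSON with the fields "
          "author, year, title, journal (unabbreviated), volume, and pages:\n\n"
          + citation)

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
# In practice the reply may need trimming before it parses as JSON.
record = json.loads(resp["choices"][0]["message"]["content"])
print(record.get("journal"))  # e.g. "Journal of the Malayan Branch of the Royal Asiatic Society"
```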
ChatGPT can also handle specimen data. For example, I gave it this record:
LAOS • Kammoune Province, Bunghona Market, 7 km N of Xe Bangfai River;
17.13674° N, 104.98591° E; E. Jeratthitikul, K. Wisittikoson, A. Fanka, N. Wutthituntisil and P. Prasankok leg.; sold by local people;
MUMNH-UNI2831.
ChatGPT extracted a plausible Darwin Core record:
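Roughly along these lines (my illustrative reconstruction using standard Darwin Core terms, not ChatGPT’s verbatim output):

```python
# Illustrative mapping of the record above onto Darwin Core terms.
record = {
    "dwc:country": "Laos",
    "dwc:stateProvince": "Kammoune Province",
    "dwc:locality": "Bunghona Market, 7 km N of Xe Bangfai River",
    "dwc:decimalLatitude": 17.13674,
    "dwc:decimalLongitude": 104.98591,
    "dwc:recordedBy": "E. Jeratthitikul | K. Wisittikoson | A. Fanka | "
                      "N. Wutthituntisil | P. Prasankok",
    "dwc:occurrenceRemarks": "sold by local people",
    "dwc:catalogNumber": "MUMNH-UNI2831",
}
```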
I’ve been experimenting with parsing specimen records using the same machine learning approach I use for bibliographic data (e.g., Citation parsing tool released); perhaps it’s time to rethink that idea.
Correct OCR errors
OCR errors are a well known limitation of the Biodiversity Heritage Library (BHL), see for example OCR Improvements: An Early Analysis. Given that ChatGPT is built on top of a large language model, it seems an obvious experiment to see how it handles OCR errors. I tried the following snippet of text from BHL page 4312227:
i BREVIORA No. 185
Jiiaii Jr., 16, who always join him and Hedriek in th(> search for
frogs. Iloraeio Mayorga, research assistant to tlie author, has also
been (wtreniely useful, not only in field work, but also in the
tedious curatorial and laboratory work that usually fol- lows. Dr.
John Randall kindly took tlie photouraplis of E. hedricki, and the
Galiiiaues family of San Juan made their El Verde house available as
a (MMiter of activities for the colleetiuff party. To all these
jx’opic, the autlioi- feels (iee])ly indebted.
Elet’tiierodactylus iiKnRrcKi s]). n.
Type. Museum of Coniparative Zoology Xo. ;}()!)()8, c^ from El Verde,
west flank of El Yuiuinc Puerto Hico. I.IOO ft. Coll. J. A. Kivero,
11 Aug-. I!)(i2.
ChatGPT returns this:
i BREVIORA No. 185
Juan Jr., 16, who always joins him and Hedrick in the search for
frogs. Horacio Mayorga, research assistant to the author, has also
been extremely useful, not only in field work, but also in the tedious
curatorial and laboratory work that usually follows. Dr. John Randall
kindly took the photographs of E. hedricki, and the Galinanes family
of San Juan made their El Verde house available as a center of
activities for the collecting party. To all these people, the author
feels deeply indebted.
Eleutherodactylus hedricki sp. n.
Type. Museum of Comparative Zoology No. 30908, c? from El Verde, west
flank of El Yunque, Puerto Rico. 1100 ft. Coll. J. A. Rivero, 11 Aug.
1962.
Comparing this to the scanned image, ChatGPT does pretty well; for example, the gobbledegook “Elet’tiierodactylus iiKnRrcKi” is correctly translated as “Eleutherodactylus hedricki”. Running all of BHL through ChatGPT probably isn’t feasible, but one could imagine targeted cleaning of key papers.
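Such targeted cleaning could be scripted along these lines (a sketch only: the BHL fetching step is left as a stub, and the OpenAI call again assumes the pre-1.0 openai package):

```python
import openai  # pre-1.0 openai package

def get_bhl_ocr(page_id):
    """Hypothetical helper: fetch the raw OCR text for a BHL page
    (e.g. via the BHL API) -- left as a stub here."""
    raise NotImplementedError

def clean_ocr(text):
    """Ask ChatGPT to correct obvious OCR errors without changing the content."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Correct the OCR errors in the following text, "
                              "keeping the wording and layout unchanged:\n\n" + text}],
    )
    return resp["choices"][0]["message"]["content"]

# Targeted cleaning of a handful of key pages rather than all of BHL.
for page_id in [4312227]:  # the Breviora page used above
    print(clean_ocr(get_bhl_ocr(page_id)))
```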
Summary
These small experiments are fairly trivial, but they are the sort of tedious tasks that would otherwise require significant programming (or other resources) to solve. But ChatGPT can do rather more, as I hope to discuss in the next post.
My dad died last weekend. Below is a notice in today's New Zealand Herald. I'm in New Zealand for his funeral. Don't really have the words for this right now.
I heard yesterday from Martin Kalfatovic (BHL) that David Remsen has died. Very sad news. It's starting to feel like iPhylo might end up being a list of obituaries of people working on biodiversity informatics (e.g., Scott Federhen).
I spent several happy visits at MBL at Woods Hole talking to Dave at the height of the uBio project, which really kickstarted large scale indexing of taxonomic names, and the use of taxonomic name finding tools to index the literature. His work on uBio with David ("Paddy") Patterson led to the Encyclopedia of Life (EOL).
A number of the things I'm currently working on are things Dave started. For example, I recently uploaded a version of his dataset for Nomenclator Zoologicus[1] to ChecklistBank where I'm working on augmenting that original dataset by adding links to the taxonomic literature. My BioRSS project is essentially an attempt to revive uBioRSS[2] (see Revisiting RSS to monitor the latest taxonomic research).
I have fond memories of those visits to Woods Hole. A very sad day indeed.
Update: The David Remsen Memorial Fund has been set up on GoFundMe.
1. Remsen, D. P., Norton, C., & Patterson, D. J. (2006). Taxonomic Informatics Tools for the Electronic Nomenclator Zoologicus. The Biological Bulletin, 210(1), 18–24. https://doi.org/10.2307/4134533
2. Leary, P. R., Remsen, D. P., Norton, C. N., Patterson, D. J., & Sarkar, I. N. (2007). uBioRSS: Tracking taxonomic literature using RSS. Bioinformatics, 23(11), 1434–1436. https://doi.org/10.1093/bioinformatics/btm109
One approach is to have highly structured text that embeds detailed markup, and ideally a tool that generates markup in XML. This is the approach taken by Pensoft. There is an inevitable trade-off between the burden on authors of marking up text versus making the paper machine readable. In some ways this seems like misplaced effort given that there is little evidence that publications by themselves have much value (see The Business of Extracting Knowledge from Academic Publications). “Value” in this case means as a source of data or factual statements that we can compute over. Human-readable text is not a good way to convey this sort of information.
It’s also interesting that many editing tools are going in the opposite direction, for example there are minimalist tools using Markdown where the goal is to get out of the author’s way, rather than impose a way of writing. Text is written by humans for humans, so the tools should be human-friendly.
The idea of publishing using XML is attractive in that it gives you XML that can be archived by, say, PubMed Central, but other than that the value seems limited. A cursory glance at download stats for journals that provide both PDF and XML downloads, such as PLoS One and ZooKeys, suggests that PDF is by far the more popular format, so arguably there is little value in providing XML. Those who have tried to use JATS-XML as an authoring tool have not had a happy time: How we tried to JATS XML. However, there are various tools to help with the process, such as docxToJats, texture, and jats-xml-to-pdf, if this is the route one wants to take.
Automating writing manuscripts
The dream, of course, is to have a tool where you store all your taxonomic data (literature, specimens, characters, images, sequences, media files, etc.) and at the click of a button generate a paper. Certainly some of this can be automated; much nomenclatural and specimen information could be converted to human-readable text. Ideally this computer-generated text would not be edited (otherwise it could get out of sync with the underlying data); the text should be transcluded. As an aside, one way to do this would be to include things such as lists of material examined as images rather than text while the manuscript is being edited. In the same way that you (probably) wouldn’t edit a photograph within your text editor, you shouldn’t be editing data. When the manuscript is published the data-generated portions can then be output as text.
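As a toy illustration of what a data-generated portion might look like, here is a sketch that renders a “Material examined” paragraph from structured specimen records (the field names and data are made up for the example):

```python
# Hypothetical structured specimen records (field names are illustrative only).
specimens = [
    {"catalogNumber": "XYZ-0001", "country": "Country A",
     "locality": "Locality 1", "collectors": "A. Collector"},
    {"catalogNumber": "XYZ-0002", "country": "Country B",
     "locality": "Locality 2", "collectors": "B. Collector"},
]

def material_examined(specimens):
    """Render a human-readable 'Material examined' paragraph from the data.
    In the workflow sketched above this text would be transcluded into the
    manuscript rather than edited by hand."""
    parts = [f"{s['country']}: {s['locality']}, {s['collectors']} "
             f"({s['catalogNumber']})." for s in specimens]
    return "Material examined. " + " ".join(parts)

print(material_examined(specimens))
```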
Of course all of this assumes that we have taxonomic data in a database (or some other storage format, including plain text and Markdown, e.g. Obsidian, markdown, and taxonomic trees) that can generate outputs in the various formats that we need.
Archiving data and images
One of the really nice things that Plazi do is have a pipeline that sends taxonomic descriptions and images to Zenodo, and similar data to GBIF. Any taxonomic journal should be able to do this. Indeed, arguably each taxonomic treatment within the paper should be linked to the Zenodo DOI at the time of publication. Going further, we could imagine ultimately having treatments as transclusions within the larger manuscript. Alternatively we could store the treatments as parts of the larger article (rather like chapters in a book), each with a CrossRef DOI. I’m still sceptical about whether these treatments are as important as we make out (see Does anyone cite taxonomic treatments?). But having machine-readable taxonomic data archived and accessible is a good thing. Uploading the same data to GBIF makes much of that data immediately accessible. Now that GBIF offers hosted portals there is the possibility of having custom interfaces to data from a particular journal.
Name and identifier registration
We would also want automatic registration of new taxonomic names, for which there are pipelines (see “A common registration-to-publication automated pipeline for nomenclatural acts for higher plants (International Plant Names Index, IPNI), fungi (Index Fungorum, MycoBank) and animals (ZooBank)” https://doi.org/10.3897/zookeys.550.9551). These pipelines do not seem to be documented in much detail, and the data formats differ across registration agencies (e.g., IPNI and ZooBank). For example, ZooBank seems to require TaxPub XML.
Registration of names and identifiers, especially across multiple registration agencies (ZooBank, CrossRef, DataCite, etc.) requires some coordination, especially when one registration agency requires identifiers from another.
Summary
If data is key, then the taxonomic paper itself becomes something of a wrapper around that data. It still serves the function of being human-readable, providing broader context for the work, and as an archive that conforms to currently accepted ways to publish taxonomic names. But in some ways it is the least interesting part of the process.
I tweeted about this but want to bookmark it for later as well. The paper “A molecular-based identification resource for the arthropods of Finland” doi:10.1111/1755-0998.13510 contains the following:
…the annotated barcode records assembled by FinBOL participants represent a tremendous intergenerational transfer of taxonomic knowledge … the time contributed by current taxonomists in identifying and contributing voucher specimens represents a great gift to future generations who will benefit from their expertise when they are no longer able to process new material.
I think this is a very clever way to characterise the project. In an age of machine learning this may be the commonest way to share knowledge, namely as expert-labelled training data used to build tools for others. Of course, this means the expertise itself may be lost, which has implications for updating the models if the data isn’t complete. But it speaks to Charles Godfrey’s theme of “Taxonomy as information science”.
Note that the knowledge is also transformed in the sense that the underlying expertise of interpreting morphology, ecology, behaviour, genomics, and the past literature is not what is being passed on. Instead it is probabilities that a DNA sequence belongs to a particular taxon.
This feels different to, say, iNaturalist, where there is a machine learning model to identify images. In that case, the model is built on something the community itself has created, and continues to create. Yes, the underlying idea is the same: “experts” have labelled the data, a model is trained, the model is used. But the benefits of the iNaturalist model are immediately applicable to the people whose data built the model. In the case of barcoding, because the technology itself is still not in the hands of many (relative to, say, digital imaging), the benefits are perhaps less tangible. Obviously researchers working with environmental DNA will find it very useful, but broader impact may await the arrival of citizen science DNA barcoding.
The other consideration is whether the barcoding helps taxonomists. Is it to be used to help prioritise future work (“we are getting lots of unknown sequences in these taxa, lets do some taxonomy there”), or is it simply capturing the knowledge of a generation that won’t be replaced:
The need to capture such knowledge is essential because there are, for example, no young Finnish taxonomists who can critically identify species in many key groups of arthropods (e.g., aphids, chewing lice, chalcid wasps, gall midges, most mite lineages).
The cycle of collect data, test and refine model, collect more data, rinse and repeat that happens with iNaturalist creates a feedback loop. It’s not clear that a similar cycle exists for DNA barcoding.