Monday, October 25, 2021

Problems with Plazi parsing: how reliable are automated methods for extracting specimens from the literature?

The Plazi project has become one of the major contributors to GBIF, with some 36,000 datasets yielding around 500,000 occurrences (see Plazi's GBIF page for details). These occurrences are extracted from taxonomic publications using automated methods. New data is published almost daily (see latest treatments). The map below shows the geographic distribution of material citations provided to GBIF by Plazi, which gives you a sense of the size of the dataset.

By any metric Plazi represents a considerable achievement. But when I browse individual records on Plazi I often find records that are clearly incorrect. Text mining the literature is a challenging problem, but at the moment Plazi seems something of a "black box": PDFs go in, the content is mined, and data comes out to be displayed on the Plazi web site and uploaded to GBIF. Nowhere does there seem to be an evaluation of how accurate this text mining actually is. Anecdotally it seems to work well in some cases, but in others it produces what can only be described as bogus records.

Finding errors

A treatment in Plazi is a block of text (and sometimes illustrations) that refers to a single taxon. Often that text will include a description of the taxon and list one or more specimens that have been examined. These lists of specimens ("material citations") are one of the key bits of information that Plazi extracts from a treatment, as these citations get fed into GBIF as occurrences.

To help explore treatments I've constructed a simple web site that takes the Plazi identifier for a treatment and displays that treatment with the material citations highlighted. For example, for the Plazi treatment 03B5A943FFBB6F02FE27EC94FABEEAE7 you can view the marked up version at https://plazi-tester.herokuapp.com/?uri=622F7788-F0A4-449D-814A-5B49CD20B228. Below is an example of a material citation with its component parts tagged:

This is an example where Plazi has successfully parsed the specimen. But I keep coming across cases where specimens have not been parsed correctly, resulting in issues such as single specimens being split into multiple records (e.g., https://plazi-tester.herokuapp.com/?uri=5244B05EFFC8E20F7BC32056C178F496), geographical coordinates being misinterpreted (e.g., https://plazi-tester.herokuapp.com/?uri=0D228E6AFFC2FFEFFF4DE8118C4EE6B9), or collector's initials being confused with codes for natural history collections (e.g., https://plazi-tester.herokuapp.com/?uri=252C87918B362C05FF20F8C5BFCB3D4E).

Parsing specimens is a hard problem, so finding errors is not unexpected. But the errors seem common enough to be easily found, which raises the question: just what percentage of these material citations is correct? How much of the data Plazi feeds to GBIF is correct? How would we know?

Systemic problems

Some of the errors I've found concern the interpretation of the parsed data. For example, it is striking that despite including marine taxa, not a single Plazi record has a value for depth below sea level (see GBIF search on depth range 0-9999 for Plazi). But many records do have an elevation, including records from marine environments. Any depth value in the original treatment is interpreted by Plazi as an elevation, so we have aerial crustacea and fish.

Map of Plazi records with depth 0-9999m

Map of Plazi records with elevation 0-9999m
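
This mismatch is easy to check against the GBIF occurrence search API. Below is a minimal Python sketch: the publisher UUID is what I believe to be Plazi's (confirm it on Plazi's GBIF page before relying on the counts), and the depth and elevation filters use the API's "min,max" range syntax.

```python
import requests

GBIF_API = "https://api.gbif.org/v1/occurrence/search"

# Assumed to be Plazi's publisher UUID; confirm it on Plazi's GBIF page.
PLAZI_PUBLISHER = "7ce8aef0-9e92-11dc-8738-b8a03c50a862"

def plazi_count(**filters):
    """Count Plazi-published occurrences in GBIF matching the given filters."""
    params = {"publishingOrg": PLAZI_PUBLISHER, "limit": 0, **filters}
    response = requests.get(GBIF_API, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["count"]

# If depths are being misread as elevations, the first count will be
# (near) zero while the second will include marine records.
print("depth 0-9999m:    ", plazi_count(depth="0,9999"))
print("elevation 0-9999m:", plazi_count(elevation="0,9999"))
```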

Anecdotally I've also noticed that Plazi seems to do well on zoological data, especially journals like Zootaxa, but it often struggles with botanical specimens. Botanists tend to cite specimens rather differently to zoologists (botanists emphasise collector numbers rather than specimen codes). Hence data quality in Plazi is likely to be taxonomically biased.

Plazi is using GitHub to track issues with treatments, so feedback on erroneous records is possible, but this seems inadequate to the task. There are tens of thousands of datasets, with more being released daily, and hundreds of thousands of occurrences, and relying on GitHub issues devolves the responsibility for error checking onto the data users. I don't have a measure of how many records in Plazi have problems, but I suspect it is a significant fraction, because for any given day's output I can typically find errors.

What to do?

Faced with a process that generates noisy data there are several things we could do:

  1. Have tools to detect and flag errors made in generating the data.
  2. Have the data generator estimate the confidence of its results.
  3. Improve the data generator.

I think a comparison with the problem of parsing bibliographic references is instructive here. There is a long history of people developing tools to parse references (I've even had a go). State-of-the-art tools such as AnyStyle use machine learning, and are tested against human-curated datasets of tagged bibliographic records. This means we can evaluate the performance of a method (how well does it retrieve the same results as human experts?) and also improve the method by expanding the corpus of training data. Some of these tools can provide a measure of how confident they are when classifying a string as, say, a person's name, which means we could flag potential issues for anyone wanting to use that record.
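
To make the evaluation idea concrete, here is a minimal Python sketch of the sort of token-level scoring used when testing a parser against a human-tagged corpus. The (token, label) data structure is my own invention for illustration; it is not AnyStyle's actual output format.

```python
from collections import Counter

def token_prf(gold, predicted):
    """Token-level precision, recall and F1 for a sequence-labelling parser.

    gold and predicted are lists of (token, label) pairs, e.g.
    [("J.", "collector"), ("Smith", "collector"), ("MCZ", "institution")].
    """
    gold_counts, pred_counts = Counter(gold), Counter(predicted)
    correct = sum((gold_counts & pred_counts).values())  # multiset intersection
    precision = correct / max(sum(pred_counts.values()), 1)
    recall = correct / max(sum(gold_counts.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```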

We don't have equivalent tools for parsing specimens in the literature, and hence have no easy way to quantify how good existing methods are, nor do we have a public corpus of material citations that we can use as training data. I blogged about this a few months ago, and was considering using Plazi as a source of marked-up specimen data for training. However, based on what I've looked at so far, Plazi's data would need to be carefully scrutinised before it could be used as training data.

Going forward, I think it would be desirable to have a set of records that can be used to benchmark specimen parsers, and ideally have the parsers themselves available as web services so that anyone can evaluate them. Even better would be a way to contribute to the training data so that these tools improve over time.

Plazi's data extraction tools are mostly desktop-based, that is, you need to download software to use their methods. However, there are experimental web services available as well. I've created a simple wrapper around the material citation parser; you can try it at https://plazi-tester.herokuapp.com/parser.php. It takes a single material citation and returns a version with elements such as specimen code and collector name tagged in different colours.
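
A programmatic call to the wrapper might look something like the sketch below. Note that the parameter name and the sample citation are illustrative guesses; inspect the form on the page to confirm the actual interface.

```python
import requests

PARSER_URL = "https://plazi-tester.herokuapp.com/parser.php"

# The parameter name "citation" and the sample text are illustrative
# guesses - inspect the form on the page to confirm the real interface.
citation = ("Holotype male, COLOMBIA, Magdalena, Minca, 750 m, "
            "11 January 1975, J. Smith leg. (MCZ 12345).")

response = requests.get(PARSER_URL, params={"citation": citation}, timeout=30)
response.raise_for_status()
print(response.text)  # HTML with specimen code, collector, etc. colour-tagged
```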

Summary

Text mining the taxonomic literature is clearly a gold mine of data, but it is also potentially fraught with error as we try to extract structured data from semi-structured text. Plazi has demonstrated that it is possible to extract a lot of data from the literature, but the quality of that data seems highly variable. Even minor issues in parsing text can have big implications for data quality (e.g., marine organisms apparently living above sea level). Historically in biodiversity informatics we have favoured data quantity over data quality. Quantity has an obvious metric, and has milestones we can celebrate (e.g., one billion specimens). There aren't really any equivalent metrics for data quality.

Adding new types of data can sometimes initially result in a new set of quality issues (e.g., GBIF metagenomics and metacrap) that take time to resolve. In the case of Plazi, I think it would be worthwhile to quantify just how many records have errors, and develop benchmarks that we can use to test methods for extracting specimen data from text. If we don't do this then there will remain uncertainty as to how much trust we can place in data mined from the taxonomic literature.

Update

Plazi has responded, see Liberating material citations as a first step to more better data. My reading of their response is that it essentially just reiterates Plazi's approach and doesn't tackle the underlying issue: their method for extracting material citations is error prone, and many of those errors end up in GBIF.

Thursday, October 07, 2021

Reflections on "The Macroscope" - a tool for the 21st Century?

This is a guest post by Tony Rees.

It would be difficult to encounter a scientist, or anyone interested in science, who is not familiar with the microscope, a tool for making objects visible that are otherwise too small to be properly seen by the unaided eye, or to reveal otherwise invisible fine detail in larger objects. A select few with a particular interest in microscopy may also have encountered the Wild-Leica "Macroscope", a specialised type of benchtop microscope optimised for low-power macro-photography. However, in this overview I discuss the "Macroscope" in a different sense, which is that of the antithesis to the microscope: namely a method for visualizing subjects too large to be encompassed by a single field of vision, such as the Earth or some subset of its phenomena (the biosphere, for example), or conceptually, the universe.

My introduction to the term was via addresses given by Jesse Ausubel in the formative years of the 2001-2010 Census of Marine Life, of which he was a key proponent. In Ausubel's view, the Census would perform the function of a macroscope, permitting a view of everything that lives in the global ocean (or at least, that subset which could realistically be sampled in the time frame available) as opposed to the more limited subsets available via previous data collection efforts. My view (which could, of course, be wrong) was that his thinking had been informed by a work entitled "Le macroscope, vers une vision globale" published in 1975 by the French thinker Joël de Rosnay, who had expressed such a concept as being globally applicable in many fields, including the physical and natural worlds but also extending to human society, the growth of cities, and more. Some ecologists may also have encountered the term, sometimes in the guise of "Odum's macroscope", as an approach for obtaining "big picture" analyses of macroecological processes suitable for mathematical modelling, typically by elimination of fine detail so that only the larger patterns remain, as initially advocated by Howard T. Odum in his 1971 book "Environment, Power, and Society".

From the standpoint of the 21st century, it seems that we are closer to achieving a "macroscope" (or possibly, multiple such tools) than ever before, based on the availability of existing and continuing new data streams, improved technology for data assembly and storage, and advanced ways to query and combine these large streams of data to produce new visualizations, data products, and analytical findings. I devote the remainder of this article to examples where either particular workers have employed "macroscope" terminology to describe their activities, or where potentially equivalent actions are taking place without the explicit "macroscope" association, but are equally worthy of consideration. To save space, most or all of the references cited here can be found via a Wikipedia article entitled "Macroscope (science concept)" that I authored on the subject around a year ago, and have continued to add to on occasion as new thoughts or information come to hand (see the edit history for the article).

First, one can ask, what constitutes a macroscope in the present context? In the Wikipedia article I point to a book "Big Data - Related Technologies, Challenges and Future Prospects" by Chen et al. (2014) (doi:10.1007/978-3-319-06245-7), in which the "value chain of big data" is characterised as divisible into four phases, namely data generation, data acquisition (aka data assembly), data storage, and data analysis. To my mind, data generation (which others may term acquisition, differently from the usage by Chen et al.) is obviously the first step, but does not in itself constitute the macroscope, except in rare cases - such as Landsat imagery, perhaps - where a single coordinated data stream is on its own sufficient to meet the need for a particular type of "global view". A variant of this might be a coordinated data collection program - such as that of the ten-year Census of Marine Life - which might produce the data required for the desired global view; but again, in reality, such data are collected in a series of discrete chunks, in many and often disparate data formats, and must be "wrangled" into a more coherent whole before any meaningful "macroscope" functionality becomes available.

Here we come to what, in my view, constitutes the heart of the "macroscope": an intelligently organized (i.e. indexable and searchable), coherent data store or repository (where "data" may include imagery and other non-numeric forms, and much else besides). Taking the Census of Marine Life example, the data repository for that project's data (plus other available sources as inputs) is the Ocean Biodiversity Information System or OBIS (previously the Ocean Biogeographic Information System), which according to this view forms the "macroscope" for which the Census data is a feed. (For non-habitat-specific biodiversity data, GBIF is an equivalent, and more extensive, operation.) Other planetary-scale "macroscopes" by this definition (which may or may not have an explicit geographic, i.e. spatial, component) would include inventories of biological taxa such as the Catalogue of Life and so on, all the way back to the pioneering compendia published by Linnaeus in the eighteenth century; while for cartography and topographic imagery, the current "blockbuster" of Google Earth and its predecessors also comes well into public consciousness.

In the view of some workers and/or operations, both of these phases are precursors to the real "work" of the macroscope, which is to reveal previously unseen portions of the "big picture" by means either of large, synoptic datasets, or of fusion between different data streams to produce novel insights. Companies such as IBM and Microsoft have used phraseology such as:

"By 2022 we will use machine-learning algorithms and software to help us organize information about the physical world, helping bring the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a "macroscope" – but unlike the microscope to see the very small, or the telescope that can see far away, it is a system of software and algorithms to bring all of Earth's complex data together to analyze it by space and time for meaning." (IBM)
"As the Earth becomes increasingly instrumented with low-cost, high-bandwidth sensors, we will gain a better understanding of our environment via a virtual, distributed whole-Earth "macroscope"... Massive-scale data analytics will enable real-time tracking of disease and targeted responses to potential pandemics. Our virtual "macroscope" can now be used on ourselves, as well as on our planet." (Microsoft) (References available via the Wikipedia article cited above.)

Whether the analytical capabilities described here are viewed as an integral part of the "macroscope" concept, or as an add-on, is ultimately a question of semantics and perhaps personal opinion. Continuing the Census of Marine Life/OBIS example, OBIS offers some (arguably rather basic) visualization and summary tools, but also makes its data available for download to users wishing to analyse it further according to their own particular interests; using OBIS data in this manner, Mark Costello et al. were able in 2017 to demarcate a finite number of data-supported marine biogeographic realms for the first time (Costello et al. 2017, Nature Communications 8: 1057, doi:10.1038/s41467-017-01121-2), a project which I was able to assist in a small way in an advisory capacity. In a case such as this, perhaps the final function of the macroscope, namely data visualization and analysis, was outsourced to the authors' own research institution. Similarly, at an earlier phase, "data aggregation" can also be virtual rather than actual, i.e. avoiding using a single physical system to hold all the data, enabled by the open web mapping standards WMS (web map service) and WFS (web feature service) to access a set of distributed data stores, e.g. as implemented on the portal for the Australian Ocean Data Network; a sketch of what such a distributed query looks like follows below.
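
For readers curious what such virtual aggregation looks like in practice, the Python sketch below uses the OWSLib library to query a WFS endpoint at request time. The endpoint URL and layer name are placeholders rather than actual AODN identifiers; substitute real values from the portal of interest.

```python
from owslib.wfs import WebFeatureService

# Placeholder endpoint and layer name - substitute a real WFS endpoint,
# e.g. one listed on the Australian Ocean Data Network portal.
WFS_URL = "https://example.aodn.org.au/geoserver/wfs"

wfs = WebFeatureService(url=WFS_URL, version="1.1.0")
print(list(wfs.contents))  # layers exposed by the remote data store

# Fetch features for one layer within a bounding box; the data stay where
# they are, and "aggregation" happens at query time.
result = wfs.getfeature(typename=["imos:example_layer"],
                        bbox=(110.0, -45.0, 160.0, -8.0),
                        outputFormat="application/json")
print(result.read()[:500])
```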

So, as we pass through the third decade of the twenty-first century, what developments await us in the "macroscope" area? In the biodiversity space, one can reasonably presume that the existing "macroscopic" data assembly projects such as OBIS and GBIF will continue, and hopefully slowly fill current gaps in their coverage - although in the marine area, strategic new data collection exercises may be required (Census 2020, or 2025, anyone?) - while (again hopefully) the Catalogue of Life will continue its progress towards a "complete" species inventory for the biosphere. The Landsat project, with imagery dating back to 1972, continues with the launch of its latest satellite, Landsat 9, just this year (21 September 2021), with a planned mission duration of five years, so the "macroscope" functionality of that project seems set to continue for the medium term at least. Meanwhile the ongoing development of sensor networks, both on land and in the ocean, offers an exciting new method of "instrumenting the earth" to obtain much more real-time data than has ever been available in the past, offering scope for many more use case-specific "macroscopes" that can fuse (e.g.) satellite imagery with much more that is happening at a local level.

So, the "macroscope" concept appears to be alive and well, even though the nomenclature can change from time to time (IBM's "Macroscope", foreshadowed in 2017, became the "IBM Pairs Geoscope" on implementation, and is now simply the "Geospatial Analytics component within the IBM Environmental Intelligence Suite" according to available IBM publicity materials). In reality this illustrates a new dichotomy: even if "everyone" in principle has access to huge quantities of publicly available data, maybe only a few well funded entities now have the computational ability to make sense of it, and can charge clients a good fee for their services...

I present this account partly to give a brief picture of "macroscope" concepts today and in the past, for those who may be interested, and partly to present a few personal views which would be out of scope in a "neutral point of view" article such as is required on Wikipedia; also to see if readers of this blog would like to contribute further to discussion of any of the concepts traversed herein.