Tuesday, September 11, 2018

Guest post - Quality paralysis: a biodiversity data disease

The following is a guest post by Bob Mesibov.

In 2005, GBIF released Arthur Chapman's Principles of Data Quality and Principles and Methods of Data Cleaning: Primary Species and Species-Occurrence Data as freely available electronic publications. Their impact on museums and herbaria has been minimal. The quality of digitised collection data worldwide, to judge from the samples I've audited (see disclaimer below), varies in 2018 from mostly OK to pretty awful. Data issues include:

  • duplicate records
  • records with data items in the wrong fields
  • records with data items inappropriate for a given field (includes Chapman's "domain schizophrenia")
  • records with truncated data items
  • records with items in one field disagreeing with items in another
  • character encoding errors and mojibake
  • wildly erroneous dates and spatial coordinates
  • internally inconsistent formatting of dates, names and other data items (e.g. 48 variations on "sea level" in a single set of records; see the sketch after this list)
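
To make a couple of those checks concrete, here is a minimal sketch in Python that looks for duplicate catalogue numbers and for inconsistent "sea level" spellings in a Darwin Core-style CSV. The file name and the column names (occurrences.csv, catalogNumber, verbatimElevation) are placeholders, not a prescription; substitute whatever your own export uses.

```python
# Minimal sketch of two of the audit checks listed above, run against a
# Darwin Core-style CSV export. File and column names are assumptions.
import csv
from collections import Counter

with open("occurrences.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Duplicate records: the same catalogNumber appearing more than once
for cat, n in Counter(r.get("catalogNumber", "") for r in rows).items():
    if cat and n > 1:
        print(f"duplicate catalogNumber {cat!r}: {n} records")

# Internally inconsistent formatting: every distinct spelling of an
# elevation entry that mentions "sea level"
variants = Counter(
    r.get("verbatimElevation", "").strip()
    for r in rows
    if "sea level" in r.get("verbatimElevation", "").lower()
)
for value, n in variants.most_common():
    print(f"{n:6d}  {value!r}")
```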

In a previous guest post I listed 10 explanations for the persistence of messy data. I'd gathered the explanations from curators, collection managers and programmers involved with biodiversity data projects. I missed out some key reasons for poor data quality, which I'll outline in this post. For inspiration I'm grateful to Rod Page and to participants in lively discussions about data quality at the SPNHC/TDWG conference in Dunedin this August.

  11. Our institution, like all natural history collections these days, isn't getting the curatorial funding it used to get, but our staff's workload keeps going up. Institution staff are flat out just keeping their museums and herbaria running on the rails. Staff might like to upgrade data quality, but as one curator wrote to me recently, "I simply don't have the resources necessary."
  12. We've been funded to get our collections digitised and/or online, but there's nothing in the budget for upgrading data quality. The first priority is to get the data out there. It would be nice to get follow-up funding for data cleaning, but staff aren't hopeful. The digitisation funder doesn't seem to think it's important, or thinks that staff can deal with data quality issues later, when the digitisation is done.
  13. There's no such thing as a Curator of Data at our institution. Collection curators and managers are busy adding records to the collection database, and IT personnel are busy with database mechanics. The missing link is someone on staff who manages database content. The bigger the database, the greater the need for a data curator, but the usual institutional response is "Get the collections people and the IT people together. They'll work something out."
  14. Aggregators act too much like neutrals. We're mobilising our data through an aggregator, but there are no penalties if we upload poor-quality data, and no rewards if we upload high-quality data. Our aggregator has a limited set of quality tests on selected data fields and adds flags to individual records that have certain kinds of problems (see the sketch after this list). The flags seem to be mainly designed for users of our data. We don't have the (time/personnel/skills) to act on this "feedback" (or to read those 2005 GBIF reports).
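
As a rough illustration of what acting on those flags could involve, here is a minimal sketch that asks GBIF's public occurrence search API how many of a dataset's records carry a few of its quality flags. The datasetKey value is a placeholder UUID and the flags shown are only a sample; this assumes the standard GBIF occurrence search endpoint and its issue parameter.

```python
# Minimal sketch: count a dataset's GBIF records carrying selected quality
# flags, via GBIF's public occurrence search API. The datasetKey is a
# placeholder UUID; substitute your own dataset's key.
import json
import urllib.parse
import urllib.request

GBIF_SEARCH = "https://api.gbif.org/v1/occurrence/search"
DATASET_KEY = "00000000-0000-0000-0000-000000000000"  # placeholder

# A sample of the issue flags GBIF attaches to problem records
ISSUES = ["ZERO_COORDINATE", "COORDINATE_INVALID",
          "RECORDED_DATE_INVALID", "TAXON_MATCH_NONE"]

for issue in ISSUES:
    params = urllib.parse.urlencode(
        {"datasetKey": DATASET_KEY, "issue": issue, "limit": 0}
    )
    with urllib.request.urlopen(f"{GBIF_SEARCH}?{params}") as resp:
        count = json.load(resp)["count"]
    print(f"{issue}: {count} flagged records")
```

Setting limit=0 returns just the record count for each flag, which is enough to see where the problems are concentrated.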

There's a 15th explanation that overlaps the other 14, and Rod Page has expressed it very clearly: there's simply no incentive for anyone to clean data.

  • Museums and herbaria don't get rewards, kudos, more visitors, more funding or more publicity if staff improve the quality of their collection data, and they don't get punishments, opprobrium, fewer visitors, reduced funding or less publicity if the data remain messy.
  • Aggregators likewise. They also don't suffer when they downgrade the quality of the data they're provided with.
  • Users might in future get some reputational benefit from alerting museums and herbaria to data problems, through an "annotation system" being considered by TDWG. However, if users clean datasets for their own use, they get no reward for passing blocks of cleaned data to overworked museum and herbarium staff, or to aggregators, or to the public through "alternative" published data versions.

With the 15 explanations in mind, we can confidently expect collection data quality to remain "mostly OK to pretty awful" for the foreseeable future. Data may be upgraded incrementally as loans go out and come back in, and as curators, collection managers and researchers compare physical holdings one by one with their digital representations. Unfortunately, the improvements are likely to be overwhelmed by the addition of new, low-quality records. Very few collection databases have adequate validation-on-entry filters, and staff don't have the time for, or assistance with, checking. Or a good enough reason to check.
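
For what it's worth, a validation-on-entry filter doesn't have to be elaborate. The sketch below is illustrative only: the Darwin Core field names and the acceptance rules are assumptions, but even something this small would catch impossible dates and out-of-range coordinates before a record reaches the collection database.

```python
# Minimal sketch of a validation-on-entry filter. Field names follow
# Darwin Core; the acceptance rules are illustrative assumptions.
from datetime import date

def validate(record):
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    try:
        when = date.fromisoformat(record.get("eventDate", ""))
        if not 1750 <= when.year <= date.today().year:
            problems.append(f"implausible collection year: {when.year}")
    except ValueError:
        problems.append("eventDate missing, not ISO 8601, or impossible")
    try:
        lat = float(record.get("decimalLatitude", ""))
        lon = float(record.get("decimalLongitude", ""))
        if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
            problems.append("coordinates out of range")
    except ValueError:
        problems.append("coordinates missing or non-numeric")
    return problems

# A record with an impossible date and an out-of-range latitude fails both checks
print(validate({"eventDate": "2018-02-31",
                "decimalLatitude": "95.0", "decimalLongitude": "147.3"}))
```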

"Quality paralysis" is endemic in museums and herbaria and seems likely to be with us for a long time to come.


DISCLAIMER: Believe it or not, this post isn't an advertisement for my data auditing services.

I began auditing collection data in 2012 for my own purposes and over the next few years I offered free data auditing to a number of institutions in Australia and elsewhere. There were no takers.

In 2017 I entered into a commercial arrangement with Pensoft Publishers to audit the datasets associated with data papers in Pensoft journals, as a free Pensoft service to authors. Some of these datasets are based on collections data, but when auditing I don't deal with the originating institutions directly.

I continue to audit publicly available museum and herbarium data in search of raw material for my website A Data Cleaner's Cookbook and its companion blog BASHing data. I also offer free training in data auditing and cleaning.