Tuesday, April 25, 2023

Library interfaces, knowledge graphs, and Miller columns

Some quick notes on interface ideas for digital libraries and/or knowledge graphs.

Recently there’s been something of an explosion in bibliographic tools to explore the literature. Examples include:

  • Elicit, which uses AI to search for and summarise papers
  • _scite, which uses AI to do sentiment analysis on citations (does paper A cite paper B favourably or not?)
  • ResearchRabbit, which uses lists, networks, and timelines to discover related research
  • Scispace, which navigates connections between papers, authors, topics, etc., and provides AI summaries.

As an aside, I think these (and similar tools) are a great example of how bibliographic data such as abstracts, the citation graph and, to a lesser extent, full text have become commodities. That is, what was once proprietary information is now free to anyone, which in turn means a whole ecosystem of new tools can emerge. If I were clever I’d be building a Wardley map to explore this. Note that a decade or so ago reference managers like Zotero were made possible by publishers exposing basic bibliographic data on their articles. As we move to open citations we are seeing the next generation of tools.

Back to my main topic. As usual, rather than focus on what these tools do, I’m more interested in how they look. I have history here: when the iPad came out I was intrigued by the possibilities it offered for displaying academic articles, as discussed here, here, here, here, and here. ResearchRabbit looks like this:

Scispace’s “trace” view looks like this:

What is interesting about both is that they display content from left to right in vertical columns, rather than the more common horizontal rows. This sort of display is sometimes called Miller columns or a cascading list.

[Image: example of Miller columns. By Gürkan Sengün (talk), own work, public domain, https://commons.wikimedia.org/w/index.php?curid=594715]

I’ve always found displaying a knowledge graph to be a challenge, as discussed elsewhere on this blog and in my paper on Ozymandias. Miller columns enable one to drill down in increasing depth, but the display doesn’t need to be a tree: it can be a path within a network. What I like about ResearchRabbit and the original Scispace interface is that they present the current item together with a list of possible connections (e.g., authors, citations) that you can drill down on. Clicking on one of these results in a new column being appended to the right, with a view (typically a list) of the next candidates to visit. In graph terms, these are nodes adjacent to the original item. The clickable badges on each item can be thought of as sets of edges that share the same label (e.g., “authored by”, “cites”, “funded”, “is about”, etc.). Each of these nodes itself becomes a starting point for further exploration.

Note that the original starting point isn’t privileged, other than being the starting point. That is, each time we drill down we see the same type of information displayed in the same way. Note also that the navigation can be thought of as a card for a node, with buttons grouping the adjacent nodes. When we click on an individual button, it expands into a list in the next column, which serves as a preview of each adjacent node. Clicking on an element in the list generates a new card (we are now viewing a single node) and we get another set of buttons corresponding to its adjacent nodes.

One important behaviour in a Miller column interface is that the current path can be pruned at any point. If we go back (i.e., scroll to the left) and click on another tab on an item, everything downstream of that item (i.e., to the right) is deleted and replaced by a new set of nodes. This could make retrieving a particular browsing history a bit tricky, but it encourages exploration. Both Scispace and ResearchRabbit let you add items to a collection, so you can keep track of things you discover.
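To make this navigation model concrete, here is a minimal sketch in Python of Miller-column state over a graph, including the pruning behaviour just described. The class and method names are my own invention; this is not code from ResearchRabbit or Scispace.

```python
from dataclasses import dataclass, field

@dataclass
class Column:
    node: str          # the item this column drilled down from
    edge_label: str    # the badge that was clicked, e.g. "cites"
    items: list = field(default_factory=list)  # adjacent nodes under that label

class MillerBrowser:
    def __init__(self, graph, start):
        # graph: {node: {edge_label: [adjacent nodes]}}
        self.graph = graph
        self.path = [Column(node=start, edge_label="start", items=[start])]

    def expand(self, column_index, node, edge_label):
        # Clicking a badge prunes everything downstream of that column,
        # then appends a new column listing the adjacent nodes.
        del self.path[column_index + 1:]
        neighbours = self.graph.get(node, {}).get(edge_label, [])
        self.path.append(Column(node=node, edge_label=edge_label, items=neighbours))

# A toy graph: papers, authors, citations.
graph = {
    "Paper A": {"authored by": ["Alice"], "cites": ["Paper B"]},
    "Paper B": {"authored by": ["Bob"]},
    "Alice": {"authored": ["Paper A"]},
}

browser = MillerBrowser(graph, "Paper A")
browser.expand(0, "Paper A", "cites")        # columns: [Paper A] [Paper B]
browser.expand(0, "Paper A", "authored by")  # prunes, columns: [Paper A] [Alice]
for column in browser.path:
    print(column.edge_label, "→", column.items)
```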

Lots of food for thought. I’m assuming that there is some user interface/experience research on Miller columns. One thing to remember is that Miller columns are most often associated with trees, but in this case we are exploring a network. That means there is potentially no limit to the number of columns generated as we wander through the graph. It will be interesting to think about what the average depth is likely to be; in other words, how deep down the rabbit hole will we go?

Update

I should add a link to David Regev's explorations of Flow Browser.

Written with StackEdit.

Monday, April 03, 2023

ChatGPT, semantic search, and knowledge graphs

One thing about ChatGPT is that it has opened my eyes to some concepts I was dimly aware of but am only now beginning to fully appreciate. ChatGPT enables you to ask it questions, but the answers depend on what ChatGPT “knows”. As several people have noted, what would be even better is to be able to run ChatGPT on your own content. Indeed, ChatGPT itself now supports this using plugins.

Paul Graham GPT

However, it’s still useful to see how to add ChatGPT functionality to your own content from scratch. A nice example of this is Paul Graham GPT by Mckay Wrigley, who took essays by Paul Graham (a well-known venture capitalist) and built a question and answer tool very like ChatGPT.

Because you can send a block of text to ChatGPT (as part of the prompt), you can get ChatGPT to summarise or transform that information, or answer questions based on it. But there is a limit to how much information you can pack into a prompt: you can’t put all of Paul Graham’s essays into one, for example. So a solution is to do some preprocessing. For example, given a question such as “How do I start a startup?” we could first find the essays most relevant to that question, then use them to create a prompt for ChatGPT. A quick and dirty way to do this is simply to do a text search over the essays and take the top hits. But we aren’t searching for words, we are searching for answers to a question. The essay with the best answer might not include the phrase “How do I start a startup?”.

Enter semantic search. The key concept behind semantic search is that we are looking for documents with similar meaning, not just similar text. One approach is to represent each document by an “embedding”, that is, a vector of numbers that encapsulates features of the document. Documents with similar vectors are potentially related. In semantic search we take the query (e.g., “How do I start a startup?”), compute its embedding, and then search among the documents for those with similar embeddings.

To create Paul Graham GPT, Mckay Wrigley did the following. First he sent each essay to the OpenAI API underlying ChatGPT, and in return got the embedding for that essay (a vector of 1536 numbers). Each embedding was stored in a database (Mckay uses Postgres with pgvector). When a user enters a query such as “How do I start a startup?”, that query is also sent to the OpenAI API to retrieve its embedding vector. We then query the database of embeddings for Paul Graham’s essays and take the top five hits. These hits are, one hopes, the most likely to contain relevant answers. The original question and the most similar essays are then bundled up and sent to ChatGPT, which synthesises an answer. See his GitHub repo for more details. Note that we are still using ChatGPT, but on a set of documents it doesn’t already have.
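As a condensed sketch of this pipeline, here is roughly what it looks like in Python using the v0.x OpenAI library that was current when this was written, with an in-memory search standing in for Postgres/pgvector. The model names and prompt wording are my assumptions, not taken from Mckay Wrigley’s repo.

```python
import numpy as np
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def embed(text):
    # One 1536-dimensional vector per piece of text.
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# 1. Preprocessing: embed each essay once and keep the vectors.
essays = {"How to Start a Startup": "...", "Do Things that Don't Scale": "..."}
index = {title: embed(body) for title, body in essays.items()}

# 2. Query time: embed the question and rank essays by similarity.
question = "How do I start a startup?"
q = embed(question)
top = sorted(index, key=lambda title: cosine(q, index[title]), reverse=True)[:5]

# 3. Bundle the question and the best essays into a prompt for ChatGPT.
context = "\n\n".join(essays[title] for title in top)
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the essays provided."},
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ],
)
print(answer["choices"][0]["message"]["content"])
```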

Knowledge graphs

I’m a fan of knowledge graphs, but they are not terribly easy to use. For example, I built a knowledge graph of Australian animals, Ozymandias, that contains a wealth of information on taxa, publications, and people, wrapped up in a web site. If you want to learn more you need to figure out how to write queries in SPARQL, which is not fun. Maybe we could use ChatGPT to write the SPARQL queries for us, but it would be much more fun to simply ask natural language questions (e.g., “who are the experts on Australian ants?”). I made some naïve notes on these ideas in Possible project: natural language queries, or answering “how many species are there?” and in Ozymandias meets Wikipedia, with notes on natural language generation.
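To see why this is a barrier, here is the kind of query involved, written as a hypothetical Python/SPARQLWrapper call. The endpoint URL is a placeholder and the property paths are simplified; Ozymandias’s actual schema differs in the details.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint; Ozymandias's real endpoint and schema differ.
sparql = SPARQLWrapper("https://example.org/ozymandias/sparql")
sparql.setQuery("""
PREFIX schema: <http://schema.org/>
SELECT DISTINCT ?name WHERE {
  ?work schema:about ?taxon ;
        schema:creator ?person .
  ?taxon schema:name "Formicidae" .
  ?person schema:name ?name .
}
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"])
```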

Of course, this is a well known problem. Tools such as RDF2vec can take RDF from a knowledge graph and create embeddings, which could in turn be used to support semantic search. But it seems to me that we could simplify this process a bit by making use of ChatGPT.

Firstly, we would generate natural language statements from the knowledge graph (e.g., “species x belongs to genus y and was described in z”, “this paper on ants was authored by x”, etc.) that cover the basic questions we expect people to ask. We then get embeddings for these (e.g., using OpenAI). We then have an interface where people can ask a question (“is species x a valid species?”, “who has published on ants?”, etc.); we get the embedding for that question, retrieve the natural language statements that are closest in embedding “space”, package everything up, and ask ChatGPT to summarise the answer.

The trick, of course, is to figure out how to generate natural language statements from the knowledge graph (which amounts to deciding which paths to traverse in the knowledge graph, and how to write those paths as something approximating English). We also want to know something about the sorts of questions people are likely to ask, so that we have a reasonable chance of having the answers (for example, are people going to ask about individual species, or about summary statistics such as the number of species in a genus, etc.).
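A minimal sketch of the verbalisation step, with invented templates and predicate names (the example triples describe a real Australian ant, but are written by me, not taken from Ozymandias). The resulting statements would then be embedded and indexed exactly as with the essays above.

```python
# Triples from the knowledge graph (subject, predicate, object).
triples = [
    ("Rhytidoponera metallica", "belongs_to_genus", "Rhytidoponera"),
    ("Rhytidoponera metallica", "described_in", "Smith 1858"),
]

# One English template per predicate we choose to traverse.
templates = {
    "belongs_to_genus": "Species {s} belongs to the genus {o}.",
    "described_in": "Species {s} was described in {o}.",
}

statements = [templates[p].format(s=s, o=o) for s, p, o in triples]
print(statements)
```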

What makes this attractive is that it seems a straightforward way to go from a largely academic exercise (build a knowledge graph) to something potentially useful (a question and answer machine). Imagine if something like the defunct BBC wildlife site (see Blue Planet II, the BBC, and the Semantic Web: a tale of lessons forgotten and opportunities lost; revived here) had a question and answer interface where we could ask questions rather than passively browse.

Summary

I have so much more to learn, and need to think about ways to incorporate semantic search and ChatGPT-like tools into knowledge graphs.

Written with StackEdit.

ChatGPT, of course

I haven’t blogged for a while; work and other reasons have meant I’ve not had much time to think, and mostly I blog to help me think.

ChatGPT is obviously a big thing at the moment, and once we get past the moral panic (“students can pass exams using AI!”) there are a lot of interesting possibilities to explore. Inspired by essays such as How Q&A systems based on large language models (eg GPT4) will change things if they become the dominant search paradigm — 9 implications for libraries and Cheating is All You Need, as well as [Paul Graham GPT](https://paul-graham-gpt.vercel.app), I thought I’d try a few things and see where this goes.

ChatGPT can do some surprising things.

Parse bibliographic data

I spend a LOT of time working with bibliographic data, trying to parse citation strings into structured data. ChatGPT can do this:

Note that it does more than simply parse the strings: it expands journal abbreviations such as “J. Malay Brch. R. Asiat. Soc.” to the full name “Journal of the Malayan Branch of the Royal Asiatic Society”. So we can get clean, parsed data in a range of formats.
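For illustration, this is roughly how such a request could be scripted against the chat API (again the v0.x library). The citation string here is a made-up placeholder using the journal abbreviation mentioned above, and the prompt wording is mine.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# A made-up citation for illustration only.
citation = ("Smith, J. 1941. On some land snails. "
            "J. Malay Brch. R. Asiat. Soc. 19: 1-20.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": ("Parse this citation into CSL-JSON, expanding any "
                    f"journal abbreviations:\n\n{citation}"),
    }],
)
print(response["choices"][0]["message"]["content"])
```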

Parse specimens

Based on the success with parsing bibliographic strings, I wondered how well it could handle specimen citations (“material examined” sections). Elsewhere I’ve been critical of Plazi’s ability to do this, see Problems with Plazi parsing: how reliable are automated methods for extracting specimens from the literature?.

For example, given this specimen record on p. 130 of doi:10.5852/ejt.2021.775.1553:

LAOS • Kammoune Province, Bunghona Market, 7 km N of Xe Bangfai River;
17.13674° N, 104.98591° E; E. Jeratthitikul, K. Wisittikoson, A. Fanka, N. Wutthituntisil and P. Prasankok leg.; sold by local people;
MUMNH-UNI2831.

ChatGPT extracted a plausible Darwin Core record:
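The record itself appeared as a screenshot in the original post; reconstructed from the quoted text, it would look something like this (my reconstruction, not ChatGPT’s verbatim output):

```python
# Darwin Core terms are real; the values are taken from the quoted record.
record = {
    "country": "Laos",
    "stateProvince": "Kammoune Province",
    "locality": "Bunghona Market, 7 km N of Xe Bangfai River",
    "decimalLatitude": 17.13674,
    "decimalLongitude": 104.98591,
    "recordedBy": ("E. Jeratthitikul, K. Wisittikoson, A. Fanka, "
                   "N. Wutthituntisil, P. Prasankok"),
    "occurrenceRemarks": "sold by local people",
    "catalogNumber": "MUMNH-UNI2831",
}
```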

I’ve been experimenting with parsing specimen records using the same machine learning approach as for bibliographic data (e.g., Citation parsing tool released); perhaps it’s time to rethink that idea.

Correct OCR errors

OCR errors are a well known limitation of the Biodiversity Heritage Library (BHL), see for example OCR Improvements: An Early Analysis. Given that ChatGPT is built on top of a large language model, it seems an obvious experiment to see how it handles OCR errors. I tried the following snippet of text from BHL page 4312227:

i BREVIORA No. 185

Jiiaii Jr., 16, who always join him and Hedriek in th(> search for
frogs. Iloraeio Mayorga, research assistant to tlie author, has also
been (wtreniely useful, not only in field work, but also in the
tedious curatorial and laboratory work that usually fol- lows. Dr.
John Randall kindly took tlie photouraplis of E. hedricki, and the
Galiiiaues family of San Juan made their El Verde house available as
a (MMiter of activities for the colleetiuff party. To all these
jx’opic, the autlioi- feels (iee])ly indebted.

Elet’tiierodactylus iiKnRrcKi s]). n.

Type. Museum of Coniparative Zoology Xo. ;}()!)()8, c^ from El Verde,
west flank of El Yuiuinc Puerto Hico. I.IOO ft. Coll. J. A. Kivero,
11 Aug-. I!)(i2.

ChatGPT returns this:

i BREVIORA No. 185

Juan Jr., 16, who always joins him and Hedrick in the search for
frogs. Horacio Mayorga, research assistant to the author, has also
been extremely useful, not only in field work, but also in the tedious
curatorial and laboratory work that usually follows. Dr. John Randall
kindly took the photographs of E. hedricki, and the Galinanes family
of San Juan made their El Verde house available as a center of
activities for the collecting party. To all these people, the author
feels deeply indebted.

Eleutherodactylus hedricki sp. n.

Type. Museum of Comparative Zoology No. 30908, c? from El Verde, west
flank of El Yunque, Puerto Rico. 1100 ft. Coll. J. A. Rivero, 11 Aug.
1962.

Comparing this to the scanned image, ChatGPT does pretty well; for example the gobbledegook “Elet’tiierodactylus iiKnRrcKi” is correctly rendered as “Eleutherodactylus hedricki”. Running all of BHL through ChatGPT probably isn’t feasible, but one could imagine targeted cleaning of key papers.
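Scripted, such targeted cleaning might look like the following, the same pattern as the citation-parsing sketch above. The prompt wording is mine, and long pages would need to be split to fit within token limits.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def clean_ocr(page_text):
    # Ask the model to fix OCR errors while changing nothing else.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": ("Correct the OCR errors in the following text, "
                        f"changing nothing else:\n\n{page_text}"),
        }],
    )
    return response["choices"][0]["message"]["content"]
```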

Summary

These small experiments are fairly trivial, but they are the sort of tedious tasks that would otherwise require significant programming (or other resources) to solve. But ChatGPT can do rather more, as I hope to discuss in the next post.

Written with StackEdit.