Citation matching is a tough problem (see the papers below for a starting point).
To date my approach has been to write various regular expressions to extract citations (mainly from web pages and databases). The goal, in a sense, is to discover the rules used to write the citation, then extract the component parts (authors, date, title, journal, volume, pagination, etc.). It's error prone: the citation might not exactly follow the rules, and there might be errors (e.g., from OCR). There are more formal ways of doing this (e.g., using statistical methods to discover which set of rules is most likely to have generated the citation), but these can get complicated.
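To give a flavour of the regex approach, here's a minimal sketch. The pattern and the sample citation are made up for illustration, and it assumes a single common style ("Author (year). Title. Journal volume: pages."), which is exactly why this approach is fragile:

```python
import re

# One pattern for one citation style; real citations rarely cooperate.
CITATION_RE = re.compile(
    r"(?P<authors>.+?)\s+"
    r"\((?P<year>\d{4})\)\.\s+"
    r"(?P<title>.+?)\.\s+"
    r"(?P<journal>.+?)\s+"
    r"(?P<volume>\d+):\s*"
    r"(?P<pages>\d+-\d+)"
)

def parse_citation(text):
    """Return the component parts of a citation, or None if it
    doesn't follow the expected rules."""
    m = CITATION_RE.search(text)
    return m.groupdict() if m else None

parts = parse_citation(
    "Smith, J. (1999). A new species of frog. Zootaxa 12: 34-56."
)
```

A citation with a stray comma, a missing full stop, or an OCR error will simply fail to match, and you end up writing (and maintaining) a pattern per style.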
It occurs to me another way of doing this would be the following:
- Assume, for argument's sake, that we have a database of most of the references we are likely to encounter.
- Using the most common citation styles, generate a set of possible citations for each reference.
- Use approximate string matching to find the closest citation string to the one you have. If the match is above a certain threshold, accept the match.
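The steps above can be sketched in a few lines. The reference fields, the two citation styles, and the 0.9 threshold are all made up for illustration; a real implementation would use many more styles and a faster matcher than `difflib`:

```python
from difflib import SequenceMatcher

# A reference record from our hypothetical database.
ref = {
    "authors": "Smith, J.",
    "year": "1999",
    "title": "A new species of frog",
    "journal": "Zootaxa",
    "volume": "12",
    "pages": "34-56",
}

# Step 1: render each reference in a few common citation styles.
STYLES = [
    "{authors} ({year}). {title}. {journal} {volume}: {pages}.",
    "{authors} {year}. {title}. {journal}, {volume}, {pages}.",
]

def generate_candidates(reference):
    return [style.format(**reference) for style in STYLES]

# Step 2: approximate string matching against all candidates,
# accepting the best match only if it clears a threshold.
def best_match(query, references, threshold=0.9):
    best_ref, best_score = None, 0.0
    for reference in references:
        for candidate in generate_candidates(reference):
            score = SequenceMatcher(None, query, candidate).ratio()
            if score > best_score:
                best_ref, best_score = reference, score
    return best_ref if best_score >= threshold else None

# A slightly mangled citation still matches the right reference.
match = best_match(
    "Smith, J. (1999) A new species of frog. Zootaxa 12: 34-56",
    [ref],
)
```

Note that the mangled query (missing punctuation) never has to be parsed; it just has to be close enough to one of the generated strings.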
The idea is essentially to generate the universe of possible citation strings, then find the one that's closest to the string you are trying to match. Of course, this universe could be huge, but if you restrict it to a particular field (e.g., taxonomic literature) it might be manageable. This could be a useful way of handling "microcitations". Instead of developing regular expressions or other tools to discover the underlying model, generate a bunch of microcitations that you expect for a given reference, and string match against those.
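For microcitations the same trick looks something like this. The variant templates are just guesses at a few plausible abbreviated forms (a real list would be longer), and `get_close_matches` stands in for whatever approximate matcher you'd actually use:

```python
from difflib import get_close_matches

def microcitation_variants(surname, year, page):
    """Generate a few abbreviated citation forms we might expect
    for a given reference and page. Illustrative, not exhaustive."""
    return [
        f"{surname} {year}: {page}",
        f"{surname}, {year}: {page}",
        f"{surname} ({year}: {page})",
        f"{surname} {year}, p. {page}",
    ]

variants = microcitation_variants("Smith", 1999, 40)

# An OCR'd microcitation with a stray full stop still finds a match.
hits = get_close_matches("Smith. 1999: 40", variants, n=1, cutoff=0.8)
```

Again, no parsing: the mangled string only has to land close to one of the pre-generated variants.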
Might not be elegant, but I suspect it would be fast.