I've been watching Apple's Knowledge Navigator concept video from 1987, and it's striking how much of it we have today, and yet how far we are from the complete vision. For some background on this promotional video see The Making of Knowledge Navigator. The computer scientist Alan Kay provided some advice to the makers (who put the video together for a presentation by then Apple CEO John Sculley). Kay is a true visionary; he's currently working on children, computers, and education, motivated by the realisation that, like the printing press before it, computing will change the way people think, and that how children learn using computers could have a profound impact on our future.
The Knowledge Navigator video looked futuristic when it came out, but now we have ubiquitous touch interfaces and video chat, and we can talk to computers (albeit not with the level of sophistication shown in the video). But there are a couple of things in the concept video that are in many ways even more impressive.
Early on, our professor is trying to track down a paper, but he can't quite remember the name of the author. His virtual assistant (a more sophisticated version of Siri) finds it, which in itself isn't too exciting (Google supports searching for things when you don't quite know what it is you're looking for). What is more impressive is that the professor can access and play with the data in the paper, and compare the predictions made in that paper with more recent data.
This requires that we have access to the data and models from a published paper, and a way to easily add new data and redo the analyses. This is related to "reproducible science" doi:10.1038/s41562-016-0021 and the notion of "executable papers" doi:10.1016/j.procs.2011.04.074, but goes beyond both: we don't just reproduce the results in the original paper, we add to them. And it's all seamless and effortless. Anyone who has tried to get data from a paper and do something with it will recognise that we are a long way from this.
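To make this concrete, here is a minimal sketch in Python of what that workflow might look like if papers shipped with their data and models. Everything here is invented for illustration - the dataset, the model (a simple linear trend fitted with numpy), the numbers - but the shape of the interaction is the one in the video: reproduce the published analysis, then extend it.

```python
# Sketch of "executable paper" reuse: refit a published model and
# extend it with newer observations. All names and numbers here are
# placeholders; a real system would fetch them from the paper's
# archived data (e.g. via its DOI).

import numpy as np

# Data as it might have been published with the original paper.
published_years = np.array([1970, 1975, 1980, 1985])
published_values = np.array([12.1, 11.4, 10.6, 9.9])

# Reproduce the paper's analysis: a simple linear trend.
slope, intercept = np.polyfit(published_years, published_values, 1)

# Go beyond reproduction: take observations made after publication
# and check how the paper's prediction held up.
new_years = np.array([1990, 1995])
new_values = np.array([9.0, 8.6])

predicted = slope * new_years + intercept
for year, obs, pred in zip(new_years, new_values, predicted):
    print(f"{year}: observed {obs:.1f}, paper's model predicts {pred:.2f}")

# Refit with all the data - the step the video makes effortless, and
# which today usually means emailing authors and rekeying tables.
slope2, intercept2 = np.polyfit(
    np.concatenate([published_years, new_years]),
    np.concatenate([published_values, new_values]),
    1,
)
print(f"updated trend: {slope2:.3f} per year (was {slope:.3f})")
```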
The second interesting example is when our professor is chatting online with a colleague about deforestation in South America, and she sends him her graphical model of the spread of the Sahara. They then view the two simulations side by side. Note that this is not two separate videos: the simulations merge together and their timelines sync so that they play simultaneously. The parameters of the simulations can also be changed on the fly.
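The video doesn't tell us how this merging works, but conceptually it's two models driven by a single shared clock. Here is a toy sketch, with invented stand-in models (simple exponential growth and decay); the point is the shared timeline and the mid-run parameter change, not the models themselves.

```python
# Toy sketch of two independent simulations driven by one shared
# clock, so they "play together" the way the video's merged models do.

class Simulation:
    def __init__(self, name, initial, rate):
        self.name = name
        self.value = initial
        self.rate = rate  # parameter that can be changed mid-run

    def step(self, dt):
        # Advance one tick of the shared clock.
        self.value += self.rate * self.value * dt

forest = Simulation("Amazon forest cover", initial=100.0, rate=-0.02)
desert = Simulation("Sahara extent", initial=100.0, rate=0.01)

t, dt = 0.0, 1.0
while t < 10.0:
    for sim in (forest, desert):
        sim.step(dt)
    t += dt
    if t == 5.0:
        # Change a parameter "on the fly", as in the video.
        forest.rate = -0.04
    print(f"t={t:4.1f}  forest={forest.value:6.2f}  desert={desert.value:6.2f}")
```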
This ability to collaborate in real time in the same space with both data and analysis is something we don't really have, or at least I'm not aware of it. Yes, we can work together on editing a Google Document, but throwing together two data sets or visualisations and having them align themselves automatically is pretty cool.
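One mechanical piece of that - getting two independently sampled time series onto a common timeline - is easy enough to sketch. Here is one way it might look with pandas; the datasets and column names are invented for illustration.

```python
# Sketch of the unglamorous core of "align themselves automatically":
# putting two independently sampled series on one shared timeline, as
# happens seamlessly in the video when the colleague's Sahara model is
# dropped next to the professor's.

import pandas as pd

mine = pd.DataFrame(
    {"year": [1970, 1980, 1990], "forest_cover": [100.0, 92.0, 85.0]}
).set_index("year")

hers = pd.DataFrame(
    {"year": [1975, 1985, 1995], "sahara_extent": [100.0, 104.0, 109.0]}
).set_index("year")

# Union of both timelines, with interpolation over the index to fill
# the gaps, so the two series can be viewed or animated side by side.
timeline = mine.index.union(hers.index)
merged = mine.reindex(timeline).interpolate(method="index").join(
    hers.reindex(timeline).interpolate(method="index")
)
print(merged)
```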
While some aspects of the Knowledge Navigator video look quaint, it's striking that the actual core of the video - a researcher redoing an analysis published by another researcher, or collaborating with a colleague who has different but related data - is something we still haven't achieved (for some related work on collaboratively viewing evolutionary trees see "Interactive Tree Comparison for Co-located Collaborative Information Visualization" doi:10.1109/TVCG.2007.70568). In this respect the Knowledge Navigator is still a vision of the future.