INFODECODATA and Manuel Lima about the rise of the info-visualisation research field
First off, I’d like to mention that there’s an upcoming symposium as part of the INFODECODATA exhibition on Sunday 13 June at the Graphic Design Museum Breda. There’ll be plenty of interesting speakers, including Lev Manovich, Jack van Wijk and Yuri Engelhardt, who will be discussing fundamental topics like ‘Will the Graphic Designer become a Software Developer?’ and ‘Is Design the new Science?’.
As I wrote earlier on my blog, Yuri supervised the Info-Visualisation course at the UvA, in which the students organized a conference to present their projects.
Apart from the interesting concepts developed by multidisciplinary teams, there were presentations by various keynote speakers. I took the following notes from the keynote by Manuel Lima, researcher, designer and founder of VisualComplexity.com, about the rise of both his blog and the field of data visualisation in general.
Lima started off with what inspired him to theorize and write about data visualisation practices (one of his inspirations was ‘The Understanding Spectrum‘ by Nathan Shedroff) and quickly proceeded with a historiography of visual culture. People have sought new ways to present data throughout the centuries, as described for example in Alfred Crosby’s ‘The Measure of Reality‘, by “quantifying the unquantifiable” (Lima, 2010) and by collecting and organizing that data; with today’s technologies these processes have become much more mathematically precise and efficient.
Lima continued by pinpointing five aspects that caused visualisation techniques to emerge so rapidly. First, there are the capabilities of data storage. Following Moore’s law of exponential growth in computer chips, Kryder’s law for storage (“everything that can be digital, will be”) applies here as well. Furthermore, software tends to follow along the lines of these ‘laws’: where the largest encyclopedia, the French Encyclopédie, consisted of 17,000 articles published in 53 volumes, the English version of Wikipedia has surpassed 3 million articles and would fill some 1,300 volumes (text only).
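To get a feel for what that exponential growth means in practice, here is a minimal back-of-the-envelope sketch in Python. The doubling period and starting capacity are my own rough assumptions (Lima gave no concrete figures), purely to illustrate the shape of the curve:

```python
# Rough illustration of a doubling law: capacity(t) = capacity(0) * 2 ** (t / T_double).
# The starting capacity and doubling period below are assumed values, not figures from the talk.

def capacity_after(years, start_capacity, doubling_period_years):
    """Capacity after `years`, assuming it doubles every `doubling_period_years`."""
    return start_capacity * 2 ** (years / doubling_period_years)

# Example: a 100 GB drive, assuming storage capacity doubles roughly every year.
for years in (1, 5, 10):
    print(years, "years:", round(capacity_after(years, 100, 1.0)), "GB")
# -> 200 GB after 1 year, 3200 GB after 5, about 102400 GB after 10
```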
The second reason the field has expanded so greatly is that more and more parties (institutions, businesses, governments et cetera) make their data open and accessible. Software to build and share these datasets, like Swivel or Many Eyes, is also widely used. But, as another keynote speaker, Daniel Aguilar from Bestiario, would argue, many platforms that promote themselves as open are in fact limited or unstructured (for example, one client only offered a collection of flat Acrobat Reader files), which makes this ‘openness’ seem more like a buzzword.
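The practical difference is easy to show: a structured dataset can be loaded and re-aggregated in a few lines of code, while a folder of flat PDFs first needs manual or OCR-based extraction. A minimal sketch in Python, where the file name and column names are hypothetical:

```python
import csv

# Assumes a hypothetical machine-readable export "budget.csv" with
# "department" and "spending" columns; a flat Acrobat file offers no such structure.
with open("budget.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Sum spending per department.
totals = {}
for row in rows:
    totals[row["department"]] = totals.get(row["department"], 0.0) + float(row["spending"])

print(totals)  # aggregated and ready to feed into a visualisation tool
```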
Thirdly, the datasets of social platforms have grown immensely, from tagged Flickr photos to workout data (Nike+) and, currently, Twitter #hashtags (recently the hashtag of the Dutch political debate, #rtldebat, topped the worldwide Trending Topics). People are willingly engaging with ongoing trends through different means of producing and sharing.
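What turns this activity into a dataset is simply that the tags are machine-readable. A toy sketch in Python (the example messages are made up) of how hashtag frequencies like the #rtldebat trend can be tallied:

```python
import re
from collections import Counter

# Made-up example messages; in practice these would come from a platform's API or an archive.
tweets = [
    "Watching the debate right now #rtldebat",
    "Strong opening statement #rtldebat #politics",
    "New personal record logged tonight #nikeplus",
]

# Extract the hashtags from each message and count how often each one occurs.
hashtags = Counter(tag.lower() for tweet in tweets
                   for tag in re.findall(r"#\w+", tweet))
print(hashtags.most_common(3))
# -> [('#rtldebat', 2), ('#politics', 1), ('#nikeplus', 1)]
```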
Fourthly, tools have become available on a great scale, from Adobe Flash to Prefuse and Processing. While some of them require advanced programming knowledge, others offer templates that level the production field.
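As an indication of how low the entry barrier has become, a complete (if basic) chart now takes only a handful of lines. This sketch uses Python with matplotlib rather than one of the tools Lima names, and plots random toy data, purely to illustrate the point:

```python
import random
import matplotlib.pyplot as plt

# Random toy data standing in for a real dataset.
x = [random.random() for _ in range(200)]
y = [value + random.gauss(0, 0.1) for value in x]

# A complete scatter plot in a handful of lines.
plt.scatter(x, y, s=10, alpha=0.6)
plt.xlabel("variable A")
plt.ylabel("variable B")
plt.title("A basic visualisation in a few lines of code")
plt.show()
```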
Lastly, mainstream media have adopted visualisation methods quickly, taking them out of the academic sphere. By publishing these graphics they became popular amongst readers, which gives traditional media the chance to explore communicating through graphics. Common examples are the visuals in the New York Times or even NRC.next.
In conclusion, Lima sees that there is still a lot of work to do to finally develop the ‘new science field’; most notably, a “taxonomy is needed to meet the requirements” (Lima, 2010). He tries to cover the existing models (e.g. radial layouts or 3D globes) as well as explore some future directions in his upcoming book (unfortunately, I couldn’t find a link anywhere). Also, the exploration of potential interaction techniques still has a long road ahead before we can make “use of computer-supported, interactive, visual representations of abstract data to amplify cognition” (Card, Mackinlay and Shneiderman, 1999) in empirically effective ways.
Finally, fundamental questions arise from current visualisations. For example, when comparing the mapping projects of the ‘Mouse’s neuronal network’ and the ‘Millennium simulation’, the two turn out to behave in strikingly similar ways. In that sense, data-mapping can give us an understanding of micro-, meso- and macro-patterns in nature and culture and draw lines between them, and thus seems to offer exciting new tools for a broad range of research fields.
- Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman – Readings in Information Visualization: Using Vision to Think (1999)