Lev’s lecture (2)

On: May 30, 2009
By Jan Simons, Associate Professor of New Media at the Dept. of Media Studies, Universiteit van Amsterdam.


Lev Manovich’s lecture on May 17 in Paradiso, Amsterdam, had at least one unmistakable feature of a good lecture: it stirred controversy. Immediately after he had concluded his presentation, Lev was accused during the Q&A of endangering the humanities by reducing culture to numbers and erasing human intentionality and creativity, those long-cherished but by now thoroughly ‘deconstructed’ and discredited pet concepts of the humanities (let’s not forget that the ‘death of the author’ was proclaimed well before computers made their appearance in the humanities). About a week after the lecture, Eddy Shanken wrote in his review on the Masters of Media blog (May 26) that Lev’s lecture was ‘terrible in terms of both its preparation and its intellectual content.’

As far as Lev’s presentation is concerned, I could argue in his defense that he lived up to his claim in The Language of New Media that the database has become the symbolic form of the age of new media, and that he treated his PowerPoint slides as a Random Access Memory, assembling his argument on the fly. But I have to admit that I wouldn’t accept such a style of presenting from our students, and that it doesn’t look very professional when a text exceeds the boundaries of a text block or when the speaker corrects typos while delivering his talk. On the other hand, I also have to admit that I am not a big admirer of the well-rehearsed, well-timed, fast-paced, smooth and slick PowerPoint presentations usually delivered by Internet and business gurus like Gerd Leonhardt either. But granting the point about style and presentation, what about the lecture’s intellectual content?

Here, I think, your verdict depends on what you think the lecture was about: to me, the lecture was, as its title already suggested, about ‘cultural analytics’, and not about art history; or rather, it asked what art history might look like once the methods and tools of ‘cultural analytics’ become available and applicable. Cultural analytics, as proposed by Manovich and his research group in San Diego, and as summarily described in some of the papers on his website, aims at the development of methods and tools for the computational collection, analysis, and visualization of data pertaining to digitally born or digitized cultural objects. One should, however, keep in mind that cultural analytics is an incipient field that is still far from being a full-fledged discipline. It still has to define its main research areas, its leading research questions, maybe even its research object and its research methods.

In this respect, cultural analytics is on a par with many humanities- and social-sciences-based fields of research that try to come to grips with new cultural phenomena like the user-generated content on sites like YouTube and Flickr, but that are basically groping in the dark because, to elaborate on Donald Rumsfeld, “they know that they don’t know what they don’t know”, and they also know very well that there is little hope that their methods and tools of research will ever enable them to map more than a very tiny bit of the vast and ever expanding territory that extends beyond the boundaries of their own research objects. They are like the blind people who are put in front of different parts of an elephant (its trunk, its belly, its legs, its tail) and then asked to guess what kind of object they have in front of them. Cultural analytics might not know either what kind of animal – or animals – we’re dealing with when it comes to contemporary digital culture. But it has at least the advantage over traditional humanities approaches that it is aware that the sheer amount and the fast pace of growth of computer-based content, generated by DIY’ers as well as professionals, institutions, corporations, administrations, etc., ask for computational approaches.

But this is probably the least controversial part of Lev’s lecture. Nobody will probably disagree that ethnographic studies of the behaviours of members of YouTube communities may yield fascinating insights into the intentions, motivations, and practices of very real – and very ‘human’ – YouTube users. But it will remain completely undecidable whether the research results are representative of a group other or bigger than the researched subjects, let alone whether the results of the research can be generalized to all users of YouTube (or of online video services). One may try to trace the vagaries of a music video or a film clip by comparing the various remixes a chain of users has made out of it, but whether this yields any generalizable rules or regularities will remain the notorious ‘topic for further research’. In fact, most humanities research into practices around digital cultural objects is based on intuitive, often common-sense notions about how new media function and operate (often reiterating the same prejudices and biases that surrounded previous media when they were still ‘new’), or on observations of the practices of very small samples of users (quite often students, for obvious reasons). Admittedly, many shortcomings of humanities research come from its traditional aversion to empirical research, quantitative methods, and statistics.

However, literary scholars like Franco Moretti have already convincingly shown that a certain disregard for content and more attention to quantitative data can produce insights into issues like, for instance, the global spread of the novel as a cultural form, the distribution of canonical literature from cultural centers to the peripheries of Europe, the reading practices of ordinary readers, and a lot of other questions that have fallen outside the scope of the mostly content- and quality-oriented approaches of literary studies. One such insight is that ‘normal’ literary production – what we might call its ‘long tail’ – is characterized by continuity and the preservation of forms, themes, and styles, rather than by the innovation, change, and ‘paradigm shifts’ that an almost exclusive focus on the ‘short head’ of the canonical masterpieces might suggest. Lev alluded to a similar finding when he referred to the vast amounts of ‘non-canonical’ paintings that are nowadays being collected and documented by numerous local and regional cultural institutions and museums. Now, Moretti certainly did not read all those thousands and thousands of novels that were produced between, say, 1750 and 1850, for the simple reason that the vast majority of those novels are no longer available. He had to go by documents such as the catalogues of libraries and rely upon – and often reinterpret – the far from uniform ‘metadata’ the cataloguers used to ‘tag’ the novels in their collections. Nevertheless, this datamining and quantitative research allowed Moretti to draw some interesting and surprising maps of Europe’s literary culture in the 19th century. Interestingly, the pattern of the spread of the novel as a dominant form, and the contradictory and conflictual forms that result from the adoption and appropriation of this form in other cultures (e.g., Russia, Brazil), seems to have repeated itself in the spread of the ‘classical Hollywood film’ as the globally dominant form. Might there be similar processes at work in the age of ‘more media’? Anyway, ‘content’ is not the only royal road to knowledge about culture (as the social sciences have known and practiced since the 19th century).
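To make the procedure concrete: Moretti’s distant reading starts not from texts but from catalogue records. Here is a minimal sketch in Python of how the spread of the novel might be tallied from such records; the records themselves are invented placeholders, standing in for the thousands of library catalogue entries real work would parse:

```python
# A minimal sketch of Moretti-style 'distant reading': instead of reading
# novels, we count catalogue records. The records below are invented
# placeholders for real library catalogue metadata.
from collections import Counter

# Hypothetical catalogue metadata: (title, year of publication, country).
catalogue = [
    ("Novel A", 1794, "France"),
    ("Novel B", 1812, "Russia"),
    ("Novel C", 1821, "France"),
    ("Novel D", 1839, "Brazil"),
    ("Novel E", 1843, "Russia"),
]

# Aggregate publications per (decade, country) to map the spread of the form.
spread = Counter((year // 10 * 10, country) for _, year, country in catalogue)

for (decade, country), count in sorted(spread.items()):
    print(f"{decade}s  {country:<8} {count} novel(s)")
```

The point is not the trivial counting but the shift of the research object: the unit of analysis is the catalogue entry, not the novel’s content.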

This might not even be the most interesting question for cultural analytics to raise, and honesty obliges one to admit that cultural analytics not only has no answers (yet) to the questions posed by the rise of an increasingly computer-based and user-generated culture, but may not even have found the right questions to ask yet. But it has one premise: that a computer-based culture of ‘more media’ requires computational rather than hermeneutic and interpretive methods, and that software capable of doing the datamining, the quantitative analysis, and the visualization of the resulting patterns is already available, has to be customized, or has to be developed from scratch. It is, of course, true, if not a truism, that in order to perform all these analytical tasks, computers need to be fed with relevant parameters that in turn are determined by interesting research questions; that the quantitative results of the analyses of the collected data need to be interpreted in the light of a guiding theory or hypothesis; and that a visualization needs to be designed in such a way that the relevant correlations and patterns jump out at the eye. For this, we need theories, hypotheses, and research questions, indeed. And when the few projects Lev discussed in his lecture are taken as representative examples of the kind of research envisaged by cultural analytics, we can only agree that it looks less than promising: indeed, Lev’s demonstration of an analysis of a randomly chosen (and nevertheless biased) sample of 35 paintings did not yield any significant insight that wasn’t already known, and one may indeed reasonably ask why it would be important to analyse the development of brightness in Rothko’s paintings over his career. If Lev’s lecture was about art history, it would indeed not even impress a first-year undergraduate.
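To give an idea of what such a demo involves computationally, here is a minimal sketch of a brightness-over-career measurement. It assumes a hypothetical folder of digitized Rothko images named ‘<year>_<title>.jpg’; the actual tools of Manovich’s San Diego lab are of course more elaborate:

```python
# A minimal sketch of the measurement behind the Rothko demo: mean
# brightness per digitized painting, ordered by year. The folder layout
# and the file naming are assumptions made for the sake of illustration.
import glob
import os

from PIL import Image, ImageStat  # Pillow

measurements = []
for path in glob.glob("rothko/*.jpg"):
    year = int(os.path.basename(path).split("_")[0])  # year taken from filename
    gray = Image.open(path).convert("L")              # 8-bit grayscale
    brightness = ImageStat.Stat(gray).mean[0]         # mean pixel value, 0-255
    measurements.append((year, brightness))

# A rising trend in this series is the 'generally accepted knowledge'
# the demo set out to reproduce computationally.
for year, brightness in sorted(measurements):
    print(f"{year}: mean brightness {brightness:.1f}")
```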

But, again, this was not what Lev’s lecture was about: it was not about art history, and it was not even about research questions and answers. It was about methods and tools, and the examples Lev came up with were no more and no less than demos, or, in project-management lingo, proofs of concept. What was to be ‘proven’ was not that Rothko gradually developed a use of brighter colours or that abstract modern painting became increasingly simple by using ever fewer shapes. These insights didn’t need to be proven, because they are, as Eddy points out, already ‘generally accepted knowledge among art historians’. What needed to be demonstrated was that this knowledge could also be arrived at through the use of software and through computational analysis. To take the other example of 35 paintings: the question is not whether this is a representative sample for statistical analysis (which, indeed, it isn’t), but rather why one would use a computer to analyse a sample that any art historian could easily cope with manually (or non-digitally). The answer to this question is simple: to show that currently available software allows researchers to come up with and visualize the same patterns and developments that art historians have already revealed with conventional, non-computational methods of analysis. Which simply means that the methods and tools sought by cultural analytics work: quod erat demonstrandum.
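For the other finding – abstract painting coming to use ever fewer shapes – one can likewise imagine a crude computational proxy. The sketch below simply binarizes an image and counts connected regions; it is an illustrative stand-in, not the measure Manovich’s group actually used:

```python
# A crude proxy for the 'number of shapes' in a painting: binarize the
# image and count connected dark regions. Illustrative only; the threshold
# and the notion of 'shape' are assumptions, not Manovich's actual measure.
import numpy as np
from PIL import Image
from scipy.ndimage import label

def shape_count(path, threshold=128):
    """Count connected dark regions as a rough simplicity measure."""
    gray = np.asarray(Image.open(path).convert("L"))
    mask = gray < threshold       # True where the canvas is dark
    _, n_regions = label(mask)    # connected-component labelling
    return n_regions

# Comparing counts across a dated sample would show whether paintings
# really come to use fewer distinct shapes over time.
print(shape_count("painting.jpg"))
```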

This is the point of a proof of concept: to show that your methods and tools work, you’d better take a well-known and preferably small sample of data, and then show that your analysis yields results that are equal to, and perhaps even better or better grounded than, those of previous and current methods. If your results were to diverge widely from what is ‘generally accepted knowledge’, you would have a serious problem, because then you would be obliged either to prove your rivals wrong or to admit your own failure. But if you come up with results that converge with and are confirmed by already existing knowledge, you may be pretty confident that you have appropriate methods and tools for studying not only this very small sample of data, but that you might also be able to cope with the already vast and ever increasing, flowing, and moving oceans of cultural objects. And that is, I think, what Lev’s lecture achieved, but alas not for everybody.

Of course, research cannot be reduced to datamining and computational analysis: these activities need to be informed and guided by theories, hypotheses, and research questions. But on the other hand, as long as we don’t really know what kind of animal we have in front of us when we try to study contemporary, computer-based and user-generated culture, we might do better to start with some measurements first. Otherwise, we might run the risk of never getting the elephant in sight.
