Lev’s lecture (2)

On: May 30, 2009
About Jan Simons
Associate Professor New Media at the Dept. of Media Studies, Universiteit van Amsterdam.



Lev Manovich’s lecture on May 17 in Paradiso, Amsterdam had at least one unmistakable feature of a good lecture: it stirred controversy. Immediately after he concluded his presentation, Lev was accused during the Q&A of endangering the humanities by reducing culture to numbers and erasing human intentionality and creativity, those long-cherished but by now thoroughly ‘deconstructed’ and discredited pet concepts of the humanities (let’s not forget that the ‘death of the author’ had been proclaimed long before computers made their appearance in the humanities). About a week after the lecture, Eddy Shanken wrote in his review on the Masters of Media blog (May 26) that Lev’s lecture was ‘terrible in terms of both its preparation and its intellectual content.’

As far as Lev’s presentation is concerned, I could argue in his defense that he lived up to his claim in The Language of New Media that the database has become the symbolic form of the age of new media, and that he treated his PowerPoint slides as a Random Access Memory, assembling his argument on the fly. But I have to admit that I wouldn’t accept such a style of presenting from our students, and that it doesn’t look very professional when a text exceeds the boundaries of a text block or when the speaker corrects typos while delivering his talk. On the other hand, I also have to admit that I am not a big admirer of the well-rehearsed, well-timed, fast-paced, smooth and slick PowerPoint presentations usually delivered by Internet and business gurus like Gerd Leonhardt either. But granting the point about style and presentation, what about the lecture’s intellectual content?

Here, I think, your verdict depends on what you think the lecture was about. To me, the lecture was, as its title already suggested, about ‘cultural analytics’, and not about art history; or rather, it asked what art history might look like once the methods and tools of ‘cultural analytics’ become available and applicable. Cultural analytics, as proposed by Manovich and his research group in San Diego, and as – summarily – described in some of the papers on his website, aims at the development of methods and tools for the computational collection, analysis, and visualization of data pertaining to digitally born or digitized cultural objects. One should, however, keep in mind that cultural analytics is an incipient field that is still far from being a full-fledged discipline. It still has to define its main research areas, its leading research questions, maybe even its research object and its research methods.

In this respect, cultural analytics is on a par with many humanities- and social-science-based fields of research that try to come to grips with new cultural phenomena like the user-generated content on sites like YouTube and Flickr, but that are basically groping in the dark because, to elaborate on Donald Rumsfeld, “they know that they don’t know what they don’t know”, and they also know very well that there is little hope that their methods and tools of research will ever enable them to map more than a very tiny bit of the vast and ever expanding territory that extends beyond the boundaries of their own research objects. They are like the blind people who are placed in front of different parts of an elephant (its trunk, its belly, its legs, its tail) and then asked to guess what kind of object they have in front of them. Cultural analytics might not know either what kind of animal – or animals – we’re dealing with when it comes to contemporary digital culture. But it has at least the advantage over traditional humanities approaches that it is aware that the sheer amount and the fast pace of growth of computer-based content, generated by DIY’ers as well as professionals, institutions, corporations, administrations, etc., asks for computational approaches.

But this is probably the least controversial part of Lev’s lecture. Hardly anybody will disagree that ethnographic studies of the behaviour of members of YouTube communities may yield fascinating insights into the intentions, motivations, and practices of very real – and very ‘human’ – YouTube users. But it will remain completely undecidable whether the research results are representative of a group other or bigger than the researched subjects, let alone whether the results can be generalized to all users of YouTube (or of online video services). One may try to trace the vagaries of a music video or a film clip by comparing the various remixes a chain of users has made out of it, but whether this yields any generalizable rules or regularities will remain the notorious ‘topic for further research’. In fact, most humanities research into practices around digital cultural objects is based on intuitive, often common-sense notions about how new media function and operate (often reiterating the same prejudices and biases that surrounded previous media when they were still ‘new’), or on observations of the practices of very small samples of users (quite often students, for obvious reasons). Admittedly, many shortcomings of humanities research stem from its traditional aversion to empirical research, quantitative methods, and statistics.

However, literary scholars like Franco Moretti have already convincingly shown that a certain disregard for content and more attention to quantitative data can produce insights into issues like, for instance, the global spread of the novel as a cultural form, the distribution of canonical literature from cultural centers to the peripheries of Europe, the reading practices of ordinary readers, and a lot of other questions that have fallen outside the scope of the mostly content- and quality-oriented approaches of literary studies. One such insight is that ‘normal’ literary production – what we might call its ‘long tail’ – is characterized by continuity and the preservation of forms, themes, and styles, rather than by the innovation, change, and ‘paradigm shifts’ that an almost exclusive focus on the ‘short head’ of the canonical masterpieces might suggest. Lev alluded to a similar finding when he referred to the vast amounts of ‘non-canonical’ paintings that are nowadays being collected and documented by numerous local and regional cultural institutions and museums.

Now, Moretti certainly did not read all those thousands and thousands of novels that were produced between, say, 1750 and 1850, for the simple reason that the vast majority of those novels are no longer available. He had to go by documents such as library catalogues and rely upon – and often reinterpret – the far from uniform ‘metadata’ the cataloguers used to ‘tag’ the novels in their collections. Nevertheless, this datamining and quantitative research allowed Moretti to draw some interesting and surprising maps of Europe’s literary culture in the 19th century. Interestingly, the pattern of the spread of the novel as a dominant form, and the contradictory and conflictual forms that result from the adoption and appropriation of this form in other cultures (e.g., Russia, Brazil), seems to have repeated itself in the spread of the ‘classical Hollywood film’ as the globally dominant form.
Might there be similar processes at work in the age of ‘more media’? In any case, ‘content’ is not the only royal road to knowledge about culture (as the social sciences have known and practiced since the 19th century).
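The core of this Moretti-style ‘distant reading’ can be sketched in a few lines of Python: one tallies catalogue metadata instead of reading the novels themselves. The records below are invented purely for illustration and have no relation to Moretti’s actual sources.

```python
# Illustrative sketch of distant reading via catalogue metadata:
# count novels by place and decade of publication rather than by content.
# All records here are fabricated for demonstration purposes.
from collections import Counter

catalogue = [
    {"title": "A", "place": "London", "decade": 1790},
    {"title": "B", "place": "Paris",  "decade": 1800},
    {"title": "C", "place": "London", "decade": 1800},
    {"title": "D", "place": "Moscow", "decade": 1840},
    {"title": "E", "place": "Paris",  "decade": 1800},
]

# Tally the 'tags' the cataloguers attached to each novel.
by_place = Counter(rec["place"] for rec in catalogue)
by_decade = Counter(rec["decade"] for rec in catalogue)

print(by_place.most_common())  # centre-vs-periphery distribution
print(sorted(by_decade.items()))  # spread over time
```

The point is that even crude tallies of such ‘tags’ already support maps and timelines of the kind Moretti draws, without anyone reading a single novel.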

This might not even be the most interesting question for cultural analytics to raise, and honesty obliges us to admit that cultural analytics not only has no answers (yet) to the questions posed by the rise of an increasingly computer-based and user-generated culture, it may not even have found the right questions to ask yet. But it has one premise: that a computer-based culture of ‘more media’ requires computational rather than hermeneutic and interpretive methods, and hence software – already available, customized, or developed from scratch – that can do the datamining, the quantitative analysis, and the visualization of the resulting patterns. It is, of course, true, if not a truism, that in order to perform all these analytical tasks, computers need to be fed with relevant parameters that in turn are determined by interesting research questions; that the quantitative results of the analyses of the collected data need to be interpreted in the light of a guiding theory or hypothesis; and that a visualization needs to be designed in such a way that the relevant correlations and patterns leap to the eye. For this we need theories, hypotheses, and research questions, indeed. And if the few projects Lev discussed in his lecture are taken as representative examples of the kind of research envisaged by cultural analytics, we can only agree that it looks less than promising: indeed, Lev’s demonstration of an analysis of a randomly chosen (and nevertheless biased) sample of 35 paintings did not yield any significant insight that wasn’t already known, and one may indeed reasonably ask why it would be important to analyse the development of brightness in Rothko’s paintings over his career. If Lev’s lecture had been about art history, it would indeed not even impress a first-year undergraduate.

But, again, this was not what Lev’s lecture was about: it was not about art history, and it was not even about research questions and answers. It was about methods and tools, and the examples Lev came up with were no more and no less than demos – or, in project-management lingo, proofs of concept. What was to be ‘proven’ was not that Rothko gradually developed a use of brighter colours, or that abstract modern painting became increasingly simple by using fewer and fewer shapes. These insights didn’t need to be proven, because they are, as Eddy points out, already ‘generally accepted knowledge among art historians’. What needed to be demonstrated was that this knowledge could also be arrived at through the use of software and through computational analysis. To take the other example of 35 paintings: the question is not whether this is a representative sample for statistical analysis (which it isn’t, indeed), but rather why use a computer to analyse a sample that any art historian could easily cope with manually (or non-digitally)? The answer is simple: to show that currently available software allows researchers to come up with and visualize the same patterns and developments that art historians have already revealed with conventional, non-computational methods of analysis. Which simply means that the methods and tools sought by cultural analytics work: quod erat demonstrandum.
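To give a sense of what such a proof of concept involves, here is a minimal sketch of measuring a brightness trend across a painter’s career. It assumes nothing about Manovich’s actual software: the ‘paintings’ are synthetic grayscale arrays standing in for digitized images, and all names and numbers are illustrative.

```python
# Minimal sketch: quantify mean brightness per 'painting' and summarize
# the trend over a career with a least-squares slope.
# The image data is synthetic; real work would load digitized paintings.
import numpy as np

def mean_brightness(image):
    """Average pixel value of a grayscale image, scaled to the range 0..1."""
    return float(np.mean(image)) / 255.0

rng = np.random.default_rng(0)
# Hypothetical corpus: (year, image) pairs whose brightness drifts upward.
corpus = [(1950 + i, rng.integers(60 + 8 * i, 120 + 8 * i, size=(64, 64)))
          for i in range(10)]

years = np.array([year for year, _ in corpus])
brightness = np.array([mean_brightness(img) for _, img in corpus])

# Degree-1 polynomial fit: the slope is the simplest summary of the trend.
slope = np.polyfit(years, brightness, 1)[0]
print(f"brightness rises ~{slope:.4f} per year over {years[0]}-{years[-1]}")
```

A plot of `years` against `brightness` would be the ‘visualization’ step; the substance, as the paragraph above argues, lies not in this result but in showing that software reproduces what the art historian already knows.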

This is the point of a proof of concept: to show that your methods and tools work, you had better take a well-known and preferably small sample of data, and then show that your analysis yields results that are equal to, and perhaps even better grounded than, those of previous and current methods. If your results were to diverge widely from what is ‘generally accepted knowledge’, you would have a serious problem, because then you’d be obliged either to prove your rivals wrong or to admit your own failure. But if you come up with results that converge with and are confirmed by already existing knowledge, you may be pretty confident that you have appropriate methods and tools not only for studying this very small sample of data, but also for coping with the already vast and ever increasing, flowing, and moving oceans of cultural objects. And that is, I think, what Lev’s lecture achieved – but alas not for everybody.

Of course, research cannot be reduced to datamining and computational analysis: these activities need to be informed and guided by theories, hypotheses, and research questions. But on the other hand, as long as we don’t really know what kind of animal we have in front of us when we try to study contemporary, computer-based and user-generated culture, we had better start by doing some measurements first. Otherwise, we run the risk of never getting the elephant in sight.

2 Responses to “Lev’s lecture (2)”
  • May 31, 2009 at 11:39 am

    dear All,

Niels sent me the link and the comments made by Edward, and although I already answered him personally, watching this discussion grow I thought it might also be good to post some of my response to him here.

I agree with you, it was at times disappointing – for me, in the way that he didn’t get his point across. By highlighting and emphasising the examples I think he drifted away from the point that the research is not a cultural content analysis, i.e. trying to find artistic meaning and intention, but a quantitative and formalistic analysis of (cultural) data. At the moment they have only done small experiments to see if the software would work, so the examples are no more than small experiments and should be regarded as such. And I agree it totally remains to be seen whether it will be a useful tool for analysing historical cultural data. But this is a point he makes himself as well in the interview we put together on our website (www.virtueelplatform.nl/archive2020).
I have always been sceptical of attempts to bridge the gap between the social sciences and art history, especially because of the different instruments and strategies (i.e. statistics versus intentional and individual analysis) that are used. I tend to be very suspicious of statistical analysis of cultural content. Having said that, there are of course also interesting similarities, and at some points the two fields could learn a lot from each other, if not in their methodologies then in their way of approaching and asking questions. Furthermore, another question is whether we should be looking for a new methodological framework for empirical research and analytical theory to understand and come to terms with the new data that is coming to us – as, for example, Richard Rogers proposes.

As far as I know and can see, the LA research group is still trying to deal with the development of the software, and they have paid little attention to the actual questions that arise from this kind of specific analysis (which, judging by the experiments he showed, indeed doesn’t seem to bring us any further than already existing empirical observations). Nevertheless, I’m a bit more optimistic, as in the previously mentioned interview he poses interesting questions about possible future scenarios. Of course it remains to be seen whether it will indeed provide us with new insights. But I tend to think it might, as long as more attention goes to analysing digitally born data and, from that perspective, to the kind of questions that are raised – i.e. to see if these questions will be different and lead to new insights. The latter might surface when making such statistics, which is why I think it is still relevant to conduct this kind of research.

Coming back to your reference to Richard’s research: I agree it has delivered far more interesting results on the analysis of digitally born data. On the other hand, I wonder how he would deal with specifically cultural content and data (born-digital cultural content). Would it be possible to analyse these in the same manner? Will it be possible to use these tools for art-historical analysis and to generate (general) statements about content issues?
This brings me to my last comment, which is that Lev’s research is not new: for many years researchers and artists alike have developed and used visualization tools and techniques, some with more success and more interesting results than others. In the end I believe that ideally the various research departments and artists should find, one way or another, a way of working together to learn and develop from each other. More importantly, they should look at and acknowledge previous attempts and research in order to avoid making the same mistakes. Hopefully that will happen…

This discussion list to me proves that something like that is on its way – communication and debate are in the end the most valuable things, and I agree with some people on the list that, regardless of the ‘performance’, this has already produced more content and moved on to important remarks and points for further discussion.
By inviting Lev, Virtueel Platform wanted to raise awareness and show the cultural-policy field what the potential of digitising data could be. As they still know so little about it and have little clue of what is happening, we thought Lev would be a low-threshold introduction for them. At the same time we want to bring people with an international reputation to the Netherlands, to speak and present to students, researchers, artists and generally interested people. This makes it possible to meet in person, discuss and exchange ideas. For me that was and still is the most important reason for organising these events, and even though the discussion going on here started from disappointment, it did generate a good and, I think, fruitful debate!
Sadly, we often learn most from our own and other people’s mistakes, but I hope that in the near future the discussion will also grow out of more positive feedback.
    It would be good if the discussion continues to evolve in the direction of critically analyzing methodologies for artistic meaning and intention!

    Thanks and best, Annet

  • November 20, 2009 at 1:18 am

    […] at the Paradiso to which his presentation on cultural analytics raised a great deal criticism [1] [2]. Shortly after his last talk professor and art historian Edward Shanken wrote the following on The […]
