Lev Manovich on User Generated Content @ Video Vortex
The following post is a combination of a transcription of Manovich’s keynote and my own notes and commentary.
Introduction by Geert Lovink
Online video is renegotiating its (problematic) relationship with cinema: cinematographic principles confront the principles of the online age. We cannot directly transfer cinematographic principles into the online age, because new media has its own specificities. YouTube is not just video on the web; YouTube is a natively digital object.
Ten years ago Lev Manovich proposed to consider the database as the (new) dominant media form. The database is the hegemonic media form online, as can be seen on YouTube, Flickr, MySpace and Google. We should think beyond technology now that the database is also becoming a dominant social form. The database is shaping the social.
User Generated Content by Lev Manovich
After the novel, and subsequently cinema privileged narrative as the key form of cultural expression of the modern age, the computer age introduces its correlate – database. Many new media objects do not tell stories; they don’t have beginning or end; in fact, they don’t have any development, thematically, formally or otherwise which would organize their elements into a sequence. Instead, they are collections of individual items, where every item has the same significance as any other. (Manovich, Database as a Symbolic Form)
These individual items could be considered little narratives. Though it is debatable, we could argue that within the database structure the individual elements function as almost intensified little narratives.
It is interesting to note that Manovich starts his talk by stating, “I shouldn’t be here.” Even though he has YouTube, Flickr and MySpace accounts, he doesn’t use them because he is too shy. He dislikes talking from an expert point of view, as he is more of an observer than a participant.
The problems of user-generated content
The challenge user-generated content (UGC) presents to media theory is the same one it presents to programmers: scale. As the number of people producing content grows, new social challenges arise, such as the question of quality. Not only was the term user-generated content coined by the industry, it is also misleading. It is an umbrella term that not only simply counts users but also tends to homogenize the content. Is every picture that is uploaded to Flickr meaningful? Content is created and uploaded for different reasons, purposes and audiences. Not all content is intended for wide distribution: some pictures are uploaded only for friends and family, others for general viewing. Some pictures are taken especially for (special-interest) Flickr groups and pools, which illustrates that every picture has a different purpose and meaning.
New social media behaviors
Both hardware and software (and interfaces, if we choose to put the interface between hardware and software instead of seeing it as software) direct new users to turn their media into social media. YouTube pushes you to interact by offering a wide array of “social options” such as Share/Post video/Add to groups. I recognize this trend in the use of my new mobile phone. Not only is it my first mobile phone with an integrated camera, it also gives me the option to publish a picture on the web immediately. This has been made even easier by installing a piece of software called ShoZu, which immediately pops up a “Send to Flickr” dialog after I have taken a picture. As a result I upload nearly every picture I take to Flickr. My mobile phone creates new social media behaviors.
With newly created social media behavior we also need a new field of study to bring into focus the elements of digital culture created by software. Software shapes media behavior and that is why we need to study it.
Critique of Henry Jenkins’ Convergence Culture
Manovich’s main critique of Jenkins’ assumptions about user-generated content is that Jenkins does not evaluate the content. There is an underlying assumption that everything the fans create is good. Even though Jenkins comes from the humanities, he takes a sociological approach and doesn’t look “inside” the content. We need to ask: what is the grammar of the content? What is it composed of?
Users and Templates
Is the content produced using old models, templates and iconography copied from mass media? Who is creating the new models? Are they still created by professionals? Templates are no longer provided only by professionals, such as the Word templates provided by Microsoft; nowadays amateurs produce templates too. In relation to my thesis this can be applied to user-generated WordPress themes and plugins. Very few of these themes are created by so-called professionals. Where do we draw the boundary between professionals and amateurs? Are the default WordPress themes created by professionals? Most WordPress themes are created by the user community, which we may label as amateurs. Some of these users are professional web designers or coders, but others create themes for fun, recognition or money. To refer back to Henry Jenkins, the themes are also part of the remix culture: users adjust and adapt existing themes to their own needs. Keeping in mind Manovich’s critique of Jenkins, not all themes are of the same quality; not all themes are written according to W3C standards, for example.
Models, templates and iconography are part of the cultural DNA of content. We should not only study the circulation of content but also this underlying cultural DNA.
Critique of the Long Tail
The long tail is often presented as having a fixed form, while it actually comes in different shapes. Not only is the curve changing over the years, it is also different for different industries. The long tail in architecture, for example, is very steep, with just a few major architects such as Rem Koolhaas. Manovich asks what the different shapes of the long tail are in terms of popularity. We should not only see the long tail through the lens of popularity; we could also see it through the lens of quality or quotability: how many elements of a piece of content are used by others to produce a new piece of content?
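Manovich’s point that long tails differ in steepness across industries can be made concrete. A minimal sketch, with entirely invented popularity counts standing in for real industry data: fitting a power-law exponent to a rank/popularity curve gives one number per industry, and steeper tails (architecture’s “few stars” shape) produce more negative exponents than flatter ones.

```python
import math

def tail_exponent(counts):
    """Estimate the power-law exponent of a popularity curve via a
    least-squares fit of log(count) against log(rank).
    A more negative slope means a steeper, more winner-take-all tail."""
    points = [(math.log(rank), math.log(count))
              for rank, count in enumerate(sorted(counts, reverse=True), start=1)]
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Invented, illustrative counts only (not real data):
architecture = [1000, 80, 30, 12, 8, 5, 4, 3, 2, 1]          # steep: a few stars
online_video = [1000, 700, 500, 380, 300, 250, 210, 180, 160, 140]  # flatter

print(tail_exponent(architecture) < tail_exponent(online_video))  # True
```

The same comparison could be run with quotability counts (how often a piece is reused in remixes) instead of view counts, which is exactly the shift of metric Manovich suggests.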
To be able to adequately analyze global culture (the numbers of professionals, prosumers and users continuously growing) – patterns of creation, consumption, circulation and remix of content – we need new tools. (Manovich slide)
This is exactly what we are dealing with at the Digital Methods Initiative: natively digital objects need new tools and research methods that take into account the natively digital.
Future (media) theory will be software based. The analysis must be able to deal with the scale of contemporary culture. On top of that, the gap between cultural tools and industrial tools (for example data mining) must be bridged. We need large displays to visualize the immense amount of data that is being produced. New software will be based on theoretical tools. Software theory will do justice to the scale of contemporary cultural production. The form of scale we are currently dealing with is new/unknown to the humanities. The content we find online is only a small part of the totality of cultural circulation. Instead of a structuralism like semiotics, where the structure is imagined and tested against individual texts, here the individual movements (flows) of content form an emergent structure.
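One reason such analysis must be software based is that the collection never fits in a researcher’s head, or memory. A minimal sketch, assuming a hypothetical dataset where each uploaded item carries a list of tags (the field names and sample items are invented for illustration): aggregating one item at a time lets structure emerge from the flow without loading the whole collection.

```python
from collections import Counter

def tag_distribution(items):
    """Aggregate tag frequencies across a (potentially huge) stream of
    uploaded items, one item at a time, so the full collection never
    has to fit in memory -- scale is the constraint Manovich names."""
    counts = Counter()
    for item in items:
        counts.update(item.get("tags", []))
    return counts

# Invented sample standing in for millions of Flickr-style uploads:
sample = [
    {"id": 1, "tags": ["family", "holiday"]},
    {"id": 2, "tags": ["family"]},
    {"id": 3, "tags": ["streetart", "holiday"]},
]
print(tag_distribution(sample).most_common(2))  # [('family', 2), ('holiday', 2)]
```

The same streaming pattern scales to the industrial data-mining tools the talk mentions; the emergent structure (which tags dominate, which never recur) is computed from the flows rather than posited in advance.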
On top of quantitative analysis we also need a qualitative analysis that deals with questions such as: what happens if you switch on a certain piece of technology? It is a double approach that not only looks at software but also studies it using software. Michael poses the important reflexivity question: the software we use to study software has certain assumptions embedded in it. The WikiScanner, for example, has a particular vision of Wikipedia built into it.
It will be interesting to see what kind of new cultural reflections these (new) tools will lead to.