Obey the HABBO way and you’re OK
The Thomas Jefferson of the wired generation. That’s one of the titles political activist, writer, poet and Grateful Dead lyricist John Perry Barlow earned after he forwarded his “A Declaration of the Independence of Cyberspace” around the world in 1996. The text was a reaction to the enactment of the Communications Decency Act that same year. In the declaration Barlow warned all governments that cyberspace was “naturally independent of the tyrannies you seek to impose on us….Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are based on matter. There is no matter here.” According to Barlow, cyberspace in 1996 was a place with no room for the politics and rules of ‘the real world’. The internet would exist outside national borders and would create its own rules and social contracts to determine how it would overcome its problems.
John Perry Barlow’s declaration can be seen as a critique of governmental interference in cyberspace. He envisions a utopian Internet in which users are able to create their own rules and laws without restrictions or political interference. When I read this declaration for the first time I immediately thought of the role that large companies and corporations nowadays take on the internet when it comes to policy and rule making. At Video Vortex I saw an interesting presentation on legal protocols on the web by Peter Westenberg (a stream of the lecture can be seen here). In this lecture Westenberg showed the number of changes that were made to YouTube’s terms of agreement within the last two years. He showed that within this period YouTube’s terms of agreement were almost completely rewritten. He also pointed out that by signing the terms of agreement we grant YouTube the right to rewrite those terms. We agree that the terms can change at any time, and by doing so we automatically agree to each new, rewritten version. (When thinking about these terms rationally, you wouldn’t agree, agree?)
Westenberg pointed out that people don’t really care about these terms of agreement because they want to use the service that a certain company (in this case YouTube) provides. Users don’t mind living up to rules and obeying certain terms of agreement as long as they get access to the programs they signed up for. This gives the online companies and corporations huge power over how people act online. The online environment I want to analyze with this in mind is Habbo Hotel. I chose this environment because it has a set of strict rules and regulations, but still attracts a lot of young children who accept these terms. On the one hand Habbo looks like a playful online environment, but on the other hand its rules, regulations and terms of agreement are pretty strict.
Habbo Hotels are localized communities where millions of children aged 8 to 18 meet every day. Ever since the launch of the first Habbo Hotel in Finland in 2000, the internet PC game has opened the world of e-communication to children. According to Sulake (the online entertainment company that runs Habbo) there are 80,000,000 registered users, and the hotel has more than 6,000,000 unique visitors every month, who spend an average of 30 minutes on the site. Habbo at the moment has 31 different local communities. On the site users are able to build their own characters (Habbos), chat, make friends, exchange Habbo furniture and buy Habbo credits with which to buy furniture to decorate their Habbo rooms. Habbo is one of the world’s largest and fastest growing virtual worlds and social networking services for teenagers. According to Wikipedia, “the game is also centered around The Habbo Way, which are the standards and rules which all Habbo players are expected to follow, or face a ban from accessing the hotel for a certain amount of time. Players are urged to report any breach of it using a system which notifies the hotel’s moderators (Hobbas).”
When creating your own Habbo there is only a pre-selected set of choices for how it can look, so there is a high homogeneity between the different male and female Habbos. There are different rooms within the Habbo Hotel, and a couple of these rooms are pretty hard to get into unless you are a Habbo Club member (a feature you have to pay for). The level of equality between the Habbos is based on the amount of money they seem to pay. In the Hotel there are also strict rules about the language that one can use: swear words and the like are replaced by the word ‘bobba’ through ‘the Bobba filter’. (This filter was recently switched off for children older than 13, because trying to circumvent it had become a hype among a large group of children.)
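The behavior of such a chat filter can be sketched as a simple word-replacement pass. A minimal sketch, assuming a hypothetical banned-word list; Sulake’s actual word list and implementation are not public:

```python
import re

# Hypothetical banned-word list; Habbo's real list is not public.
BANNED_WORDS = {"damn", "hell", "crap"}

def bobba_filter(message: str) -> str:
    """Replace any banned word (case-insensitive) with 'bobba',
    mimicking the behavior described for Habbo's chat filter."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        return "bobba" if word.lower() in BANNED_WORDS else word
    return re.sub(r"[A-Za-z]+", replace, message)

print(bobba_filter("What the hell is this?"))  # What the bobba is this?
```

A filter this naive is also easy to circumvent with creative spelling, which is exactly the cat-and-mouse game that reportedly led Sulake to switch it off for older children.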
While at first glance Sulake seems to create an open online environment in which children have the possibility to use their creativity to create their own online space and community, this does not seem to be the company’s goal. When you look into these created spaces and communities, you can see that they are more restricted and bound to certain rules than they appear at first glance. The user can only choose from a couple of well-thought-through options, and the amount of creativity that is ‘accepted’ by the Hotel seems to be pretty limited. John Perry Barlow’s declaration of independence in 1996 feared the laws and rules that the different governments would apply to the internet. He declared “the global social space we are building to be naturally independent of the tyrannies you seek to impose on us.” In an interview with the same author from 2004 I read the following quote:
“In order to be libertarian, you have to be an optimist. You have to have a benign view of human nature, to believe that human beings left to their own devices are basically good. But I’m not so sure about human institutions, and I think the real point of argument here is whether or not large corporations are human institutions or some other entity we need to be thinking about curtailing. Most libertarians are worried about government but not worried about business. I think we need to be worrying about business in exactly the same way we are worrying about government.”
(the interview can be read here)
The governments in Barlow’s declaration seem to have made way for the big corporations and businesses. The tyrannies threatening our naturally independent social space seem to be the rules and regulations designed by the large online corporations. Habbo Hotel seems unconcerned about creating an independent, free social space as long as it keeps creating young Habbo consumers. Instead of being part of a global independent social space as desired by Barlow, the Habbo way contains a lot of rules and restrictions. Rules and restrictions, according to Westenberg, ‘we don’t mind agreeing to’.
A couple of weeks ago I bumped into an interesting short post on the Seeriously blog. It was a post about a new Facebook application by Activeworlds (AW) and yesterday I decided to give it a closer look by stepping into this virtual world in Facebook.
Before reporting my actual experiences of the virtual world ‘deeply integrated in Facebook’, I will give a brief description of Activeworlds, Inc., the company that made the platform and created the Facebook application. Activeworlds, Inc. originated as WebWorld in the summer of 1994. After several name, owner and positioning changes it was eventually named Activeworlds, Inc. in September 2002. The future vision that AW has for 3D virtual worlds is that they will eventually become web-browser substitutes. So instead of using 2D browsers like Internet Explorer and Firefox, users should be able to walk around in a 3D environment where they can click on web pages and links. As AW CEO Rick Noll puts it: “We are building towards a future where virtual world sites will be mainstream and realistically implemented”.
At first glance the virtual worlds of AW seem to have an education and commerce purpose, since these are subjects with a prominent place on the company’s website. But when taking a closer look at its strategy it becomes clear that, because AW hosts over a thousand different virtual worlds where users can play, shop, make friends, learn, and so on, AW’s main strategy is based on diversity. When looking at all the different virtual worlds that are available it becomes clear that AW is trying to cover all possible virtual world niches. A few examples: Russian World, France World, US World, AWschool, AWteen, Virtual mall, Atlantis, Sales World, AWChess, AWadult, and so on.
Entering the world
After the required ticking of the box letting the people at Activeworlds ‘Know who I am and access my information’, I (being a Facebook member) was allowed to download the application, and I installed the software on my computer. After selecting the newbie-recommended ‘All Worlds Gate’ I was ready to roll; a chat window opened and a small screen started loading in Facebook.
The first thing that caught my attention were the big ‘register’ buttons and the menu that gives users the opportunity to invite all their Facebook friends by simply clicking on their profile pictures. Before the entire world had loaded, the next thing that crossed my mind was that the small window in my Facebook page was nothing more than an interactive banner; a banner-extra. But then the world finally loaded and I could start running around, chatting with people and playing; doing research.
I decided to look for someone more experienced in the virtual world and within a couple of seconds I ran into a female avatar called ‘HoneyB1’. After chatting with Honey for a while I asked if she was fine with a short virtual interview and she agreed. The first question I asked was whether she had entered the world via Facebook too, but to my surprise she replied that she did not even know Facebook. It turned out that this beautiful-looking 32-year-old girl from Australia wasn’t even aware of the phenomenon of social networks! She had entered the virtual world via an Australian AW-enabled website.
When I asked her if she was a frequent user she told me she considers herself a local in several different AW worlds. Her favorite, though, is the All Worlds Gate (the world we were standing in during the interview) because “it is a good world to meet people since almost everybody is new”. Then I realized that almost all the avatars that walked by looked the same; as a new unregistered user you are only allowed to enter the world with a tourist avatar: big belly, Hawaiian shirt, camera, shorts, ‘Crocodile Dundee tourist hat’, white socks, etc.
After finishing the brief interview I decided to extend my virtual research by asking some general questions of all the users I bumped into, in order to find out how many of them had entered this world via Facebook. The result: 0 of the roughly 30 avatars that replied had joined the world the way I did, through a Facebook application. One of the few people who was even aware of the social network Facebook told me that she was not a member herself, but that she uses it to check the pages of her three sons…
The next step in my ‘research’ was checking out the graphics and the characteristics of the world. As an unregistered tourist I was only able to walk around, chat and click on almost every billboard or portal. I found this restricted world comforting, since I am used to a lot of options in the other virtual worlds I have tested. The relatively small All Worlds Gate provides users with a huge ticking clock in the so-called ‘information area’. I assume the clock indicates a virtual, AW-wide time, since it did not match the actual (real) time during my investigation.
When you click on a billboard the future vision of AW that I described earlier becomes clear; after clicking on the billboard a screen within my screen within my Facebook screen (!) opened and I was able to browse through a website. It turned out to be an AW Gatekeepers website, and after having read that there is always a gatekeeper running around in the All Worlds Gate (where I was at that moment) I decided to go find one to ask him or her some questions.
After I had expressed through the chat application that I was looking for a gatekeeper, a certain ‘ManxMing o’ with a Milla Jovovich in The Fifth Element look-alike avatar approached me and proudly told me she was a gatekeeper for Activeworlds. After she introduced herself I asked her what a gatekeeper is: what does it mean to be a gatekeeper? She replied with a rather copy-paste, pre-instructed answer:
“All Gate Keepers are volunteers. Our mission is to enhance the experience of citizens and new users, as well as promote the AW community; by providing a welcoming environment, that allows for instruction, assistance and camaraderie.”
After I had expressed my doubts about the volunteer part of her answer, she told me that she had honestly applied for the gatekeeper position herself, that she ‘works’ at home, and that she does not get paid. Furthermore, she considers herself a helper and a true AW fan from the beginning, and she did not know Facebook! After the conversation I think she wanted to impress me even more by blurting out:
“To register, just click on the “register now” button on your screen or go to the active worlds web site at www.activeworlds.com. It only costs $6.95 U.S. to register per month of unlimited usage! Or $69.95 a year.”
She closed off with an impressive:
“,oº°ºo..(¯`’•.¸(¯`’•. Welcome to the Active Worlds Gateway.•’´¯).•’´¯)..oº°ºo,”
Wow, what a highly engaged user I had just bumped into; she must be a true fan!
After thanking ManxMing for her friendly collaboration I decided that it was time to say goodbye and go home; back to Facebook.
In this part I would like to express some of my views on AW’s general strategy in the market of virtual worlds, but also on its approach to luring in new users.
The first thing I want to mention has to do with accessibility. As I discussed in my earlier post about virtual worlds in modern China, Novoking is a virtual world created by a company in China that tries to attract newbie users by not offering too many functions. The application that enables Facebook users to access the AW virtual world fairly easily, and that integrates the world into a social network, does exactly the same thing. By keeping it plain and simple, users get to know the virtual world very quickly instead of being scared off by too many buttons and complicated functions. This makes it the most accessible virtual world I have encountered so far. So by offering a very simple version of a virtual world, almost like a teaser, the learning curve is even smoother than that of the Novoking world. The user-friendly possibility to click on Facebook users and invite them into this virtual world adds to this.
The second thing that caught my attention while walking around in the virtual world was the ‘use’ of extremely engaged users. If we assume that ManxMing was indeed an unpaid volunteer, the so-called gatekeepers can be considered a very effective, innovative and cheap way of engaging other users. Approaching these highly involved users as a company, and expressing your respect by providing them with extra information and authority, has several advantages:
– Gatekeepers such as ManxMing are the best ambassadors a company could wish for.
– Gatekeepers are very cheap; since they are proud to be part of the company, they just need some instructions and attention, and in the case of AW they do not demand any reward.
– Gatekeepers are a good method of word-of-mouth advertising because they are on the same level as users.
– Gatekeepers are able to answer all the questions new users have in a very personal way.
– Gatekeepers will build and generate content in virtual worlds.
These are only a few of the advantages that highly engaged users can offer. This does not only apply in this case; it also holds for social networks and countless other Web 2.0 websites. These users should be approached pro-actively and stimulated at all times!
A final thought that I would like to express about AW concerns the fact that the company tries to serve the whole virtual world market by offering over a thousand niche virtual worlds. I think the goal of trying to become the biggest virtual world in all branches is rather arrogant and ignorant. In the near future there will be an endless number of different virtual worlds, each with very specific characteristics and different users. I would suggest that AW start focusing on a more specific genre, such as virtual malls or virtual education through gaming, as for example Seeriously does. I think it will not take long before the virtual world market becomes much more competitive. Picking a specific virtual world niche and establishing yourself as the biggest authority in that niche should be the next strategic step for AW.
Sources used and more information
In an ongoing discussion on a forum I got into a scuffle with a formidable opponent about what blogs exactly are. I tried parroting all that I had been taught in various classes during the BA and MA courses in New Media; basically that weblogs are a form (as argued by Albert Benschop in class at one point) and not a function. I agreed with Wikipedia’s initial definition that, whatever blogs are, they must have this fundamental characteristic:
“A blog (short for web log) is a website where entries are made and displayed in a reverse chronological order”
In a rebuttal, my adversary made points about the necessity for blogs to be written in an explicitly personal manner, and that the more people collaborate on a blog, the less “bloggy” it becomes and the more it starts to be part of some generic “online media”. I tried to refute these points by listing blogs that are not overly personal (Engadget), blogs that have a legion of contributors (Huffington Post) and academic blogs (such as ours, or Terra Nova) which can be both.
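The one formal property in the Wikipedia definition, reverse chronological ordering, is trivially expressible in code, which is perhaps why it makes a better defining characteristic than fuzzy notions like “personal tone”. A minimal sketch with invented entry data:

```python
from datetime import date

# Invented blog entries: (publication date, title). A real blog engine
# stores far more per entry, but ordering only needs the date.
entries = [
    (date(2009, 11, 3), "Obey the Habbo way"),
    (date(2010, 1, 15), "AR and learning"),
    (date(2009, 12, 20), "Entering Activeworlds"),
]

# Reverse chronological display: newest entry first.
ordered = sorted(entries, reverse=True)
for published, title in ordered:
    print(published.isoformat(), title)
```

Whatever else a blog is, any software that renders its entries this way satisfies the quoted definition; the rest of the debate is about culture, not form.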
Augmented Reality (AR) is no longer science fiction; its usage is rising in our society. What is AR aiming at? At the enrichment of physical spaces with computer-generated images and the availability of location-based content. AR holds strong potential for traditional ways of learning. But what does AR do to the acquisition of knowledge and the processing of that knowledge? What should we take into account if we want to use AR effectively for educational purposes?
One of the theories that has been developed in cognitive science is the ‘situativity theory of cognition’. The origin of this theory can be found in psychology. Situated cognition looks at human cognition and proposes that the user actively absorbs knowledge when he or she makes a connection between facts learned and the environment in which events related to these facts take place (Greeno 1998: p. 2). The environment plays an important role in the active acquisition of knowledge and information:
The environment constrains activity, affords particular types of activity or performance, and supports performance.
It is not enough to present users with a list of dry facts. Knowledge must be linked to practices that create a deep impact on users. You must create opportunities for users to put into practice the dry facts that they have assembled (Kurt Squire et al., 2007). The learning process is shaped further when implemented in an activity.
A nice example of an AR system is Wikitude. It extracts content from Wikipedia and presents the user with data about their surroundings, nearby landmarks, and other points of interest by overlaying information on the real-time camera view of a smartphone. Layar is another nice example. This AR application shows what is around you by displaying real-time digital information on top of reality through the camera of your mobile phone.
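At the core of such location-based overlays sits a simple operation: selecting the points of interest nearest to the user’s position. A minimal sketch using the haversine great-circle distance; the POI data below is invented for illustration and does not come from Wikitude or Layar:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearby(user_lat, user_lon, pois, max_km=1.0):
    """Return the names of POIs within max_km of the user, nearest first."""
    hits = [(haversine_km(user_lat, user_lon, lat, lon), name)
            for name, lat, lon in pois]
    return [name for dist, name in sorted(hits) if dist <= max_km]

# Invented sample data: a few Amsterdam landmarks.
pois = [("Dam Square", 52.3731, 4.8926),
        ("Rijksmuseum", 52.3600, 4.8852),
        ("Vondelpark", 52.3580, 4.8686)]
print(nearby(52.3702, 4.8952, pois))  # only Dam Square is within 1 km
```

An actual AR browser adds the compass bearing to each hit so the label can be drawn at the right place in the camera view, but the distance filter above is the part that decides what the user sees at all.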
The acquisition of knowledge in new media environments generally takes place in a non-linear way. It is unlike reading an article or a book: you do not follow a path from A to Z, but choose your own pathways. As mentioned in a previous article, we are like ‘trippers jumping from one link to another’, examining content that can diverge from the content read before. This also applies to AR systems, where pieces of content can differ greatly from one another.
This causes a ‘cognitive switch’: a new way of recording and processing knowledge (Calleja, 2004). While we process information, the information is always changing; processes of assimilation and retrieval are constantly in play. Our brains try to create order by giving us the illusion that the information we assimilate is in some way linear, by making links between pieces of information that have been stored (Calleja, 2004: p. 5). Basically, we learn by making connections between pieces of information that beforehand seem to have no relation to each other.
AR: a strong potential?
In AR systems we learn in context-dependent ways and through trial and error. This provides a safe way to explore: if you fail, it will not have ‘fatal’ consequences. Take for instance AR systems in which students can learn how to perform heart surgery. Abstract knowledge can be applied and practiced, effects are immediately visible, and fatal errors are out of the question. This can have a positive effect on the learning process. But what issues should you take into account?
- Pay close attention to how you involve the environment while offering contextual information. Link information directly to practices in the environment.
- Provide users more contextual information instead of only abstract information. AR systems can only operate efficiently when information can be linked directly to objects in the physical space.
- Offer information at the right time and at the right spot, so that no accumulation of information within the system occurs, and filter the available information about objects in physical spaces by using algorithms. Accumulation would otherwise have a disadvantageous effect on the efficiency of the system.
So AR can hold strong potential for the learning process. However, rather than being a technology that is suitable for the transmission of abstract information, AR is a platform on which you can learn in a situated way.
As posted on Dancing Uphill
“Dan was apparently fifty plus, a little paunchy and stubbled. He had raccoon-mask bags under his eyes and he slumped listlessly. As I approached, I pinged his Whuffie and was startled to see that it had dropped to nearly zero. “Jesus,” I said, as I sat down next to him. “You look like hell, Dan.” […] Lil was waiting on the sofa, a folded blanket and an extra pillow on the side table, a pot of coffee and some Disneyland Beijing mugs beside them. She stood and extended her hand. “I’m Lil,” she said. “Dan,” he said. “It’s a pleasure.” I knew she was pinging his Whuffie and I caught her look of surprised disapproval. Us oldsters who predate Whuffie know that it’s important; but to the kids, it’s the world. Someone without any is automatically suspect. I watched her recover quickly, smile, and surreptitiously wipe her hand on her jeans. “Coffee?” she said.” (Doctorow, 2003, p. 23)
Cory Doctorow’s novel ‘Down and Out in the Magic Kingdom’ describes a future world based on a post-scarcity economy, in which everything is free. ‘Whuffie’ is the name of an abstract personal currency, based on reputation, motivating people to pursue a useful and creative lifestyle. The ‘Whuffie’ number is equivalent to a person’s social status in society: for instance, you lose points when being rude or committing a crime, and you gain points when helping someone cross the street or composing a brilliant symphony. Most striking is that every person has a brain implant, which enables them to interface with ‘the Net’, giving them the possibility to check everyone’s ‘Whuffie’ instantly and wirelessly.
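The mechanics Doctorow describes amount to a running reputation ledger that peers can read at will. A toy sketch of that idea; the point values and class name are invented, since the novel never specifies an actual scoring scheme:

```python
# A toy Whuffie-style reputation ledger. Point values are invented;
# Doctorow's novel never specifies how much each act is worth.
class Whuffie:
    def __init__(self, score: int = 0):
        self.score = score
        self.log = []

    def adjust(self, points: int, reason: str) -> int:
        """Record a reputation change, as peers 'pinging' it would see."""
        self.score += points
        self.log.append((points, reason))
        return self.score

dan = Whuffie(100)
dan.adjust(-95, "was rude")
dan.adjust(+5, "helped someone cross the street")
print(dan.score)  # 10
```

What makes the fictional version so much more powerful than this sketch is not the arithmetic but the total, involuntary visibility of the score, which is exactly the property the social network profile begins to approximate.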
In our contemporary society, we don’t possess the means to explicitly define or compare social status the way the people in ‘Down and Out in the Magic Kingdom’ do. Nonetheless, looking up to or down on people, comparing people, or finding motivation to act in other people’s social status in the broadest sense is very real; social status is a timeless phenomenon. ‘The Net’ obviously bears a resemblance to the Internet, which in Doctorow’s novel is a platform for status comparison; it enables social status to be embedded and used as a currency in real life. Social networks on the web also serve as a platform for defining one’s identity or individualism (Donath, boyd 2004; Ito et al. 2010) and for identifying one’s position within a group or community of (like-minded) people; virtual communities (Smith, Kollock 1999). These processes are intimately related to (the attainment of) social status.
The social network profile can be compared to the brain implant in Doctorow’s novel, linking the identity of the user it represents to the larger whole of available profiles; ‘the Net’ would then be the social network site, a possible accelerator of status comparison. The ‘Whuffie’ can in turn be compared to the social status captured in, or radiating from, a profile.
Sociological literature discussing general (offline) social status often focuses on the social-economic status of individuals (Hollingshead 1975; Lin 1999), with education, occupation and income as its foundations. Processes of interaction, for instance casual conversation, then constitute a scale of status comparison. However, in contemporary society, community is no longer conceptualized in terms of physical proximity but in terms of social networks (Smith, Kollock 1999, p. 17), which extend through communication technologies. With online social networks it is possible to establish and nourish relationships beyond one’s physical reach, establishing evolving standards of status. Many scholars describing online social networks focus especially on youth subcultures in social networks such as MySpace, Facebook and Friendster (Ito et al. 2010; boyd 2008). Social network analysis is highly focused on teens because of their early adoption of networked technology, highlighting the desire to engage in publics (boyd, 2008). Here, online status is directly linked to popularity, constituted in number of friends, ‘top friends’ ranking lists, number of comments and physical attractiveness in photos. Doctorow mentions a similar idea in his work of fiction: “Us oldsters who predate Whuffie know that it’s important; but to the kids, it’s the world.” (Doctorow 2003, p. 23) Nonetheless, online social network status is also very relevant for somewhat older people, for instance young urban professionals engaging in career-related interaction via the social network site LinkedIn. A more developed social-economic status comes at a certain age; “education changes during […] youth, but it generally stabilizes in the adult years. […] Occupation may change in the early years of adult life, but it also tends to become stable as a person grows into the late twenties and on into the thirties.” (Hollingshead, 1975)
In this paper I compare offline to online social-economic status, directed especially at the professional social network site LinkedIn. I compare sociological accounts of social-economic status in communities to online accounts of status in virtual communities. Questions posed include the following: How is social-economic status constituted online? How do users of LinkedIn compare social-economic status? How do they influence each other by it? An important starting point is viewing online communities as ‘real’ communities and the Web as a reflection of offline culture, as an argument for connecting sociological literature to the Web. Among the authors supporting this are those of the Digital Methods Initiative (Rogers, Stevenson, Weltevree, 2009) and Smith and Kollock (1999). The main limitation of this study is that the body of literature handling online social network status has mainly been applied to online teen and youth culture behavior on social network sites such as MySpace, Facebook and Friendster (Donath, boyd 2004; boyd 2008; Ito et al. 2010). I will refer to these authors, because some of their insights also apply to this paper, but it is important to note that there are multiple gaps between the subculture analysis of the aforementioned authors and this paper, for instance in subject age, occupation, education, income and motivation to network. This constitutes the difference between online social status and online social-economic status (although there is some overlap). To illustrate this difference, compare the main header of the professional social network site LinkedIn, “Over 55 million professionals use LinkedIn to exchange information, ideas and opportunities: Stay informed about your contacts and industry, find the people & knowledge you need to achieve your goals and control your professional identity online.”, with the headline of the social network site Friendster: “Friendster helps you stay connected with everything that matters to you: Friends, family and fun! It’s free to join, so go on, see what all the fuss is about!” Both sociological literature on status and social-economic status as expressed by the ‘Whuffie’ are more relevant when compared to status expressions on professional social networking sites such as LinkedIn than to social networking sites mainly directed at ‘fun’ social interaction, such as Friendster.
My main question for this paper is:
To what extent do professional social networking sites such as LinkedIn enable explicit social-economic status comparison?
With this main question come the following sub-questions, which I will answer in the chapters that follow:
How to define (social-economic) status?
How to attain and build status on social networking sites?
How are virtual communities and peers affected by status?
Defining Status and Status Attainment
In ‘The Four Factors Index of Social Status’ Hollingshead defines status as ‘the positions individuals or nuclear families occupy in the status structure of a given society’ (Hollingshead, 1975). The four factors used in Hollingshead’s index are education, occupation, sex and marital status. Education and occupation are herein mainly linked to income and to an explicit position in society’s hierarchy by job position. A more specific social-economic status definition, by Clauss-Ehlers, reads: “a position on an economic hierarchy based upon income, education, and occupation” (Clauss-Ehlers, 2006). This professional factor is especially important when speaking not only of social status but also of economic status. It is important to note that when linking this definition to online social-economic status as expressed on LinkedIn, we speak of an individual’s position in society only, not of a family’s.
In ‘Social Networks and Status Attainment’ Lin defines status attainment as ‘a process by which individuals mobilize and invest resources for returns in socioeconomic standings’ (Lin 1999, p. 467). In this definition, resources refer to goods in society valued by normative judgments of how these goods correspond with being wealthy or powerful (goods in the broadest sense: for instance skills, money, or an acquaintance’s position of authority acting as a social resource for finding a job). These resources can be deployed to increase one’s social-economic status.
Attaining and Building Online Status
Social network sites are relatively new channels of communication, so one would say we have been given a choice whether or not to participate. However, numerous authors describe the opposite: we find ourselves in a situation in which new social conventions are formed around the use of communication, resulting in a situation where one is almost expected to be a member of online social network sites. For instance, Donath and boyd describe that we live in a world in which communication is instant, ubiquitous and mobile, and access to information and communication is a key element of status and power (Donath, boyd 2004). Not taking part in these new technological possibilities might devalue one’s potential for increasing status; one may risk exclusion. Social network sites function both as spaces where new bonds are forged and as showcases of connections. The function of the social network profile as an integral part of presenting the user can especially be linked to status attainment. Connections can be counted among the resources identified in the definition of status attainment mentioned above.
Furthermore, a profile can be viewed in the context of its connections, hereby providing information about the user. “Social status, political beliefs, musical taste, etc., may be inferred from the company one keeps” (Donath, boyd, 2004). In addition, establishing relations with people already in the network of some of your own connections can make one surer of establishing a trustworthy relationship. Having an extensive social network can be both a sign of status and a means to increase one’s chances of safe connection. It is important to realize that people who share much in common are more likely to get connected. This idea of ‘homophily’ or ‘birds of a feather stick together’ (boyd 2005) and its consequences will be discussed later in this paper.
As I wrote earlier, there is an increasing adoption of social network sites among youth, which can illuminate online status considerations in general. “These sites function as social hangout spaces for teens, social network sites are home to the struggles that teens face as they seek status among peers” (boyd, 2008, p. 226). Teens use these sites to nourish existing friendships and to develop new ones, but also to seek attention and create drama among peers. Social network sites both change and intensify the ways teens experience drama and negotiate status. An important factor in status development is both the public display of connections, comments, profile information and photos, and the profile owner’s awareness of this public display (Donath, boyd 2004). This openness of information (often a profile is completely open to existing connections, and a user can opt for exposing information to strangers) creates an opportunity for active identity and status building; it creates a tension between self-presentation and (assumed) audience opinion. “Impression management is certainly crucial for identity management and for the construction of oneself online, it requires a level of awareness of others’ reactions” (boyd, 2002).
The open display of connections has parallels to the casual dropping of (high-status) names in conversation, used for raising one’s own status, positioning oneself in a hierarchy, or discovering whether a common bond (for instance, an overlap in acquaintances) exists between two people. However, in casual conversation, one could feel free to exaggerate, or to show off with impressive, but unverifiable, facts. This also happens on social network sites. “Teens want to be validated by their broader peer group and thus try to make themselves look cool […]. Even when status is not necessarily accessible for them in everyday life, there is sometimes hope that they can resolve this through online presentations” (boyd, 2008). An online status does not necessarily indicate the existence of the same offline status, at least in youth subculture. It is possible to fake parts of your profile information or to create fake connections, thereby inflating your reputation. “Online, identity is mutable and unanchored by the body that is its locus in the real world” (Donath, boyd 2004). This suggests that online status is more fluid and less concretely linked to offline status. On LinkedIn one could easily create false education and occupation info, or create a fake profile for Bill Gates and connect with it, heavily increasing the apparent socio-economic status of the profile owner. However, the gains of deceiving someone can be quite low and the costs quite high. For instance, making a business deal or taking on a job on false grounds can ruin one’s status. It is much more important for people to be able to rely on their belief in others’ identities.
The use of connections as a showcase for one’s identity can act as a check on identity claims, thus affirming status. Connections one knows personally read the profile info. By being directly linked to a profile and being displayed as a connection, profile info gets implicitly validated. Furthermore, LinkedIn includes testimonials, called ‘recommendations’. With this function one can receive compliments about past work, suggesting the profile owner is competent in a particular activity. The profile owner can also recommend connections himself. This function has its own section on the website, directly linked to connections, further increasing connection and profile reliability and subsequently increasing status. The recommendation function is a way to build sympathy among connections and thus ensure co-operation. “The power of reputation to enforce co-operative behavior lies not in confrontation with the subject, but in conversation surrounding him” (Donath, boyd 2004). The open display of connections and the recommendation function can be directly linked to the definition of status attainment mentioned above. Investing resources (complimenting connections on, for instance, skills or experience) can lead to increased socioeconomic standing: by having a better relationship with connections, increasing the chance of a business deal or job and thereby income, or by the profile owner simply getting a recommendation from a connection himself.
Online Status and Connections in Virtual Communities
People seek status for very basic evolutionary reasons, according to Wilkinson: “higher rank individuals would have greater access to material resources and the highest quality mates, increasing the proportion of their genes in future populations” (Wilkinson 2006, p. 5). Strong motivations for increasing status are due to natural selection and evolution. Therefore, the struggle for status always exists within communities, being closely related to hierarchy, deference and dominance, to expressing identity/personality and, in the end, as Wilkinson argues, to survival of the fittest. Status has an organic biological and evolutionary basis. However, status has developed further in contemporary society; it can, for instance, derive from excellence in a particular domain of activity without being strongly based on superior physical force: “For example, paraplegic physicist Stephen Hawking […] certainly enjoys high status throughout the world” (Wilkinson, p. 6). This differentiation in status expressions constitutes the variability in which status can appear in modern society, which also supports increasing status by connecting with a variety of individuals with different talents or expertise. Status attainment therefore demands processes of peer interaction and active deployment of one’s ties in a community.
Social resources can be accessed through direct and indirect ties (Lin 1999, p. 468). The example of using a connection’s authority position in the definition of status attainment illustrates that resources can be borrowed via connections in a community, emphasizing the importance of valuable connections. The acquaintance in the definition’s example is an indirect tie used for increasing status. LinkedIn especially provides connection with indirect ties, not always physically within reach of the profile owner, but accessible when needed. LinkedIn provides a concrete way of keeping in touch with connections and maintaining relationships that could be valuable in the future, from both sides. LinkedIn functions as an interpersonal channel, of which Granovetter concluded: “those who used interpersonal channels seemed to land more satisfactory and better jobs” (Granovetter, 1974). Furthermore, Granovetter distinguishes weak and strong ties. Strong ties are ties with people one has many commonalities with; weak ties are connections with people one shares only one or a few commonalities with, in social circles less accessed or less like one’s own. It is hypothesized that, as a whole, weak ties tend to form bridges that strengthen one’s network. Via weak ties one can access information in social circles not likely to be available in one’s direct surroundings (Granovetter, 1973), thus enriching one’s social network and increasing its potential. Valuable information is therefore especially available through professional social network sites such as LinkedIn, via the direct and stable connection with weak ties. One might even wonder whether the term weak tie still applies. LinkedIn could transform weak ties into strong ties, since any connection (and potential resource for status attainment) is always only a few clicks away.
On the other hand, the supposed main purpose of social network sites is connecting to new people with whom one shares common ground (similar characteristics such as lifestyle, hobbies, taste in music, job, etcetera) (Donath, boyd 2004), therefore empowering homophily: “it is through this commonality that one can find security in one’s views, feel validated and supported, and have the kind of environment that fosters motivation and joy. […] people do not have to defend their minority status” (boyd 2005). From an evolutionary perspective, this safety is indeed important; people are used to residing in communities of like-minded individuals, because this gives them the highest chance of survival. However, contemporary technology gives us the possibility to reach beyond our physical surroundings, offering chances of connection with a wide diversity of audiences. We have been given the possibility to transcend the homophilous environments in which we feel secure. This means we can learn and be influenced in multiple directions, enriching experience and status as never before.
LinkedIn, however, is one of the least open social network sites. Connecting with a stranger is not very common. When adding a new contact a profile owner must select one of six options answering the question ‘how do you know [contact’s name]?’: Colleague, Classmate, We’ve done business together, Friend, Other, or I don’t know [contact’s name]. Underneath is a message saying: “Important: Only invite people you know well and who know you.” Furthermore, if you do invite people you don’t know, recipients can indicate that they don’t know you. This has repercussions, since LinkedIn will from then on always ask for a to-be-added contact’s email address. LinkedIn’s main reason for this is to keep the online professional networks it empowers relevant: no infinite numbers of friends, only valuable contacts. Thus, one could argue that LinkedIn is a relatively homophilous environment. However, LinkedIn does allow a profile owner to connect to a connection’s connections: view profiles, send messages, suggest valuable contacts, search for references, etcetera. LinkedIn is closed down enough to ensure reliable connections, by only allowing connections to individuals a profile owner knows plus interaction with a connection’s connections, but open enough to grow one’s network in a valuable way, making it an environment that fosters an increase in status. On social network sites mainly positioned as being for ‘fun’, such as Friendster or MySpace, profiles tend to devalue contacts by having so many of them that the connections become both insincere and useless (boyd, 2008).
Online Status Comparison and LinkedIn Functionalities
Aspiring to a higher position in the status hierarchy is a natural instinct, as discussed earlier in this paper. Wilkinson describes life as a competitive climb up the ladder of status (Wilkinson, 2006), for various capitalistic, materialistic or ideological reasons. People compare status because this supplies them with hierarchical information: what is my position in society? And subsequently: what could I do to make it to a higher step?
Let’s look at the different functions of LinkedIn, which serve as indications of identity and status. Following Clauss-Ehlers’ definition of socio-economic status, I will pay particular attention to education and occupation (income is, of course, private info). Following the analyses of Donath and boyd, and of boyd alone, I will pay particular attention to the number of connections and to other identity-specific parts such as the profile photo, personal information, recommendations, ‘what are you working on?’ and similar functions.
Sample Linkedin User Profile Page
Self-presentation is faceted on LinkedIn. An identity is subdivided into different separate sections. The main profile section includes name, current position and location and a profile photo (which cannot be enlarged, seemingly to minimize possible effects of physical appearance). Directly under this is an overview of the profile: current occupation, past occupation, education, number of recommendations, number of connections and (company or portfolio) websites. These different profile parts are set out in more detail further down the profile.
A LinkedIn profile covers all aspects named in Clauss-Ehlers’ definition of socio-economic status and highlights them by putting them at the top of the page, except income, which is presumably too private to mention.
It is important to realize that LinkedIn does not represent identities as wholes; they get chopped up into manageable pieces, which enables LinkedIn users to compare the pieces apart from the whole. “These foci organize the structure of social networks because they are the circumstances and reasons people meet each other and form ties with each other” (Donath, boyd 2004). Online, identity is subdivided and categorized. This fragmented nature of the LinkedIn profile constitutes a faceted identity, leading to a differentiation of the impressions a profile can give. These different parts of one’s identity, divided into various aspects, lead to a different notion of identity, and thus of status. The different parts of the profile owner’s identity have a greater chance of appealing to people yet to connect with than the identity as an inseparable whole. Furthermore, by comparing different profile parts, rather than the profile as a whole, relative positions become clear. To establish a link with my introduction: the ‘Whuffie’ as a total status number gets subdivided into smaller units, which enable subdivision-specific cross-profile comparison.
LinkedIn chops up status and identity into measurable and comparable units. This enables concrete comparison between different profiles, based on different parts of socio-economic status. Individuals are aware of this: “Awareness empowers individuals, as it gives them the ability to understand their position in a given system and use that knowledge to operate more effectively. In social interactions, people want to be aware of their own presentation, of what is appropriate in the given context, and how others perceive them. […] these two components are essential for interpersonal contextual awareness” (boyd 2002). The contextual awareness boyd discusses is highly relevant to LinkedIn. People actively construct their identity with the heightening of status in mind.
The previously mentioned ‘recommendations’ function extends this subdivision, emphasizing certain sources of one’s status in the profile and creating a preference for certain foundations of status (for instance, a particular position or education). A profile owner can actively focus on a certain subdivision by, for instance, recommending weak ties in the social circles concerning that particular position or education and asking for a recommendation in return. Thus, a profile owner can focus on a desired status subdivision through recommendations. Another function enabling a specific focus is ‘what are you working on?’, in which one can simply fill in one’s current professional activity. This can be a means to keep connections up to date and possibly renew interaction. The activity message can also be implicitly directed at certain connections, further enabling a shift of focus. A relatively new functionality is linking a Twitter account to LinkedIn, enabling a live feed of tweets; this is comparable to the workings of ‘what are you working on?’.
The previous statements on the strength of weak ties, mutable identity and subdivided status imply that connections are based on small areas of common ground within subdivisions of the profile, which the profile owner actively constructs to radiate socio-economic status. These subdivisions in status allow weak-tie connections to be made more easily. This enhances the use of subdivisions in the profile, going hand in hand with LinkedIn’s closed nature in connecting to new people. Weak-tie connections may only know some aspects of the profile owner’s identity, assuming other claims in the profile to be true because they do not know about them. This creates a tension between offline socio-economic status as a whole and online status based upon subdivisions in the LinkedIn profile. Donath and boyd also signal this: “The type of information that flows through a tie, whether about the person or about the world at large, depends on the focus that brought them together and on the shared facets of their identity” (Donath, boyd, 2004).
Online status seems to be more flexible than offline status, being able to shift its appearance across different social environments: interaction with contacts from different social circles. Considering that LinkedIn is a community especially empowering the connection of weak ties, online status seems to be not an exact entity like the ‘Whuffie’, but a transforming whole of different parts, appearing anew to each connection’s eyes. Furthermore, a profile owner can shift the focus of the profile to privilege particular weak ties. This is already made explicit on the homepage of LinkedIn: “control your professional identity online”. Through active management of online identity “one writes one’s social-economic status into being” (boyd, 2008).
Through this online status management one can effectively pursue goals in professional life with relatively little effort. Connecting to related individuals, in whatever broad sense, is always at hand, as is active status comparison. As communication gets increasingly computer-mediated, the computer becomes almost like a limb to humans. “In today’s society, access to information is a key element of status and power and communication is instant, ubiquitous and mobile” (Donath, boyd 2004). When we are mobile and ubiquitously connected, what exactly is the difference between checking someone’s ‘Whuffie’ through a brain implant and checking someone’s socio-economic status on a professional network site through a mobile Internet connection? Apart from the interface, maybe the only difference is that the exact number of the ‘Whuffie’ is self-explanatory while LinkedIn profile info needs interpretation.
To conclude, my main question was:
To what extent do professional social network sites such as LinkedIn enable explicit socio-economic status comparison?
Socio-economic status based upon education and occupation is explicitly materialized in profile info on LinkedIn. Through a whole of weak-tie connections (connections with which the profile owner has only a few commonalities), which form bridges, connecting to otherwise unavailable social circles becomes a possibility. LinkedIn’s possibilities for staying in touch with weak ties, along with functions such as ‘recommendations’, may be reforming the definition of ‘weak tie’. Therefore, socio-economic status has the opportunity to grow beyond offline-only accounts of socio-economic status. Through active construction of online identity, the subdivided nature of self-presentation via profile info enables explicit status comparison between different subdivided profiles and thus between different profile owners. This gives opportunities for personal growth of the profile owner and a higher relative position in society’s hierarchy.
 LinkedIn homepage headline on 11-01-2010 (www.linkedin.com)
 Friendster homepage headline on 11-01-2010 (www.friendster.com)
 Note on LinkedIn’s add connection page 13-01-2010
 LinkedIn homepage header on 14-01-2010 (www.linkedin.com)
- boyd, danah. ‘Faceted Id/entity: Managing representation in a digital world’. Master’s Thesis, MIT Media Lab, 2002.
- boyd, danah. ‘Sociable Technology and Democracy’. In Extreme Democracy, ed. Jon Lebkowsky, Mitch Ratcliffe. Toronto: Lulu, 2005.
- boyd, danah. ‘Taken Out of Context: American Teen Sociality in Networked Publics’. PhD diss., University of California, 2008.
- Clauss-Ehlers, Caroline. ‘Diversity Training for Classroom Teaching: A Manual for Students and Educators’. New York: Springer, 2006.
- Couvering, Elizabeth Van. ‘Is Relevance Relevant? Market, Science, and War: Discourses of Search Engine Quality’. Journal of Computer Mediated Communication, 12(3), 2007.
- Doctorow, Cory. ‘Down and Out in the Magic Kingdom’. New York: Tor Books, 2003.
- Donath, Judith and danah boyd. ‘Public displays of connection’. BT Technological Journal, 22(4), 2004: 71-82.
- Granovetter, M. ‘The Strength of Weak Ties’. Am. J. Sociol. 78, 1973: 1360-1380.
- Hollingshead, A.B. ‘Four Factor Index of Social Status’. Unpublished Working Paper, 1975.
- Ito, Mizuko et al. ‘Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media’. Cambridge: MIT Press, 2010.
- Lin, Nan. ‘Social Networks and Status Attainment’. Annu. Rev. Sociol., 1999 (25): 467-487.
- Rogers, Richard, Michael Stevenson and Esther Weltevrede. ’Social Research with the Web’. Pre-Print, Amsterdam: Govcom.org Foundation, 2009.
- Smith, Marc A and Peter Kollock. ‘Communities in Cyberspace’. London: Routledge, 1999.
- Wilkinson, Will. ‘Out of Position: Against the Politics of Relative Standing’. Policy, Vol. 22, No. 3, 2006: 3-9.
Something has changed in the fashion industry. Fashion blogging started around 2002 and has become more and more popular. In 2003 fashion blogger Kathryn Finney of The Budget Fashionista was invited to New York Fashion Week. A year later Fashiontribes was invited and seated in the fourth row at shows like Bill Blass. In 2008 Tina Craig and Kelly Cook of BagSnob.com were seated in the second row at shows like Diane von Furstenberg and Oscar de la Renta. In 2009 famous fashion bloggers were seated front row; even a 13-year-old blogger named Tavi Gevinson could be found in the front row of shows like Marc Jacobs, Rodarte and others.
The bloggers Bryan Boy and Tommy Ton by Truce
Tavi Gevinson started writing her blog Style Rookie on March 31, 2008, when she was 11. In 2008 she appeared in The New York Times Magazine and her blog grew to 50,000 readers. In 2009 Tavi partnered with Borders and Frontiers and designed her own t-shirt. That same year she appeared on the cover of Pop Magazine, which featured photographs by Jamie Morgan and a cover designed by artist Damien Hirst. In 2010 she was featured in Teen Vogue and French Vogue.
Tavi Gevinson by Astrid Stawiarz/Getty Images
Did something change?
In the article The Year in Style: Bloggers Crash Fashion’s Front Row, Eric Wilson includes an interesting quote from Kelly Cutrone, fashion publicist and founder of the firm People’s Revolution: “Do I think, as a publicist, that I now have to have my eye on some kid who’s writing a blog in Oklahoma as much as I do on an editor from Vogue? Absolutely. Because once they write something on the Internet, it’s never coming down. And it’s the first thing a designer is going to see.”
In his article Wilson also states that designers are adapting to this new kind of fashion reporting, as opposed to fashion magazines. Because of the low requirements of posting an update to a blog, fashion bloggers can react much faster to the current state of the fashion industry. He finds that some fashion magazine editors feel threatened and seasoned critics are afraid of being replaced by fashion bloggers. Fashion magazines have started their own blogs and tweets, but when you read these magazines’ new media productions “you often sense a generational disconnect, something like the queasy feeling of getting a “friend” request from your mother on Facebook”, Wilson states.
Axel Bruns analyses this concept of personal publishing, which he calls citizen journalism, in his work Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage.
In chapter 4, News Blogs and Citizen Journalism: Perpetual Collaboration in Evaluating the News, he compares citizen journalism with open source and finds that citizen journalism is driven by similar motivations as open source: “it, too, acts as a corrective and a supplement to the output of commercial, industrial journalism. Like open source, too, it has recently begun to challenge the role of its corporate counterpart as opinion (and innovation) leader.” In the rise of fashion blogging versus fashion magazines, we can see this happening as well. Both report on items of clothing and accessories, trends in various apparel markets, celebrity fashion choices and shopping advice. Fashion bloggers challenge the role of fashion magazines with their fast-growing popularity and increasing acknowledgment by the fashion industry.
In chapter seven, Folksonomies: Produsage and/of Knowledge Structures, Bruns states that we are entering a new postindustrial, networked era where knowledge is distributed by many diverse alternative sources of information, which leads to a “mass amateurization of the media”. This mass amateurization doesn’t have to cause a reduction in the quality of information. “Under the beneficial conditions and in the presence of widely shared understandings of what constitutes ‘quality’, the ‘amateur’-driven processes of communal content creation and evaluation are just as able to generate quality content.” Bruns suggests that our focus in research shouldn’t be on the quality of knowledge or information, but on the way this knowledge should be structured and perceived: “our focus shifts from an examination of the produsage of information and knowledge to the creation of information about information, knowledge about knowledge, or in short, to the creation of metadata structures.” Next to that, Bruns suggests we should ask ourselves what role remains for the professional experts. Concerning fashion blogs and the creation of content about fashion by amateurs, we should ask ourselves how we structure and validate this new kind of information, and what the role of the fashion magazines is in this new playing field.
New role of fashion magazines?
Bruns finds that experts can still play an important role, but this role has changed from producer of knowledge structures to one who guides the process as a co-curator. In this role the expert will be dependent on the content delivered by amateurs; Bruns calls this creation of content produsage. He finds that experts no longer exist at arm’s length from the customers, and for this reason the user community is not without power. He quotes Benkler: “the user community is not without power…: collective reaction through opinion storms are activated by abusive monopolistic behavior, and can quickly damage the reputation of the perpetrator, thereby forcing a change in behavior in the monopolistic ambition. Competing resources are almost always available, or can be built by the open source community”.
We can see how this occurred in the case of Tavi, the 13-year-old blogger. In an interview in New York magazine, Anne Slowey, who has a senior position at Elle, stated that Tavi’s blog was ‘a bit gimmicky’. This critique was read by dozens of Tavi’s fans as an example of the tension between old media and new. Here a 13-year-old fashion blogger has the power of a community behind her against the older, more experienced expert Anne Slowey.
Bruns states that this monopolistic power of the new leaders of the knowledge space is problematic, because content created freely by amateurs can be used to create content for making money. He takes the example of Google and quotes Lanier: “In the new environment, Google News is for the moment better funded and enjoys a more secure future than most of the rather small number of fine reporters around the world who ultimately create most of its content. The aggregator is richer than the aggregated.”
To bring this back to fashion blogging versus fashion magazines, this could become a problem when magazines use fashion blogs as their content. That way they can use free content to run a business. This already happened when designers started to send clothes, bags, etc. to famous fashion bloggers. These bloggers would write positive blog posts about these designers, giving them very cheap publicity and exploiting the trust of the unknowing reader. Here the designers used the new leaders (fashion bloggers) for cheap marketing, and the new leaders used their community for a free bag. Since October 2009, new guidelines from the Federal Trade Commission require bloggers to disclose in their online product reviews whether they received free merchandise or payment for the items they write about.
Personally, I think fashion magazines should acknowledge the power of fashion bloggers and the fact that they will have prominent seats at fashion shows. I like Bruns’ suggestion to take up the role of curator and guide the new process of fashion reporting. But the difficulty lies in the fact that all this content is created freely: if you as a fashion magazine pay fashion bloggers, they might become regular employees and the magic of freely produced content is gone. On the other hand, if fashion magazines use fashion blogs without paying them, they make a business out of freely produced content. As always, the practical interpretation is very difficult. For further reading I would like to suggest the work Copy What Can’t Be Sold (and Sell What Can’t Be Copied): What Musicians Have Learned From Blogging by Chris Castiglione on how the music industry dealt with digital publishing.
- Bruns, Axel. ‘Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage’. New York: Peter Lang, 2008.
- Dodes, Rachel. ‘Bloggers Get Under the Tent: Fashion Once Dismissed Them As Snarky and Small-Time, But Now They’re Getting Respect’. The Wall Street Journal, September 2006.
- Wilson, Eric. ‘The Year in Style: Bloggers Crash Fashion’s Front Row’. The New York Times, December 2009.
David Kline and Dan Burstein point out that the blogosphere will transform many areas of politics, business, media and culture. In their book ‘Blog! How the Newest Media Revolution is Changing Politics, Business, and Culture’ they have interviewed the world’s most influential bloggers. The book contains three parts: Politics & policy, Business & economics and Media & culture. Dan Burstein has written the introduction. Each part consists of an essay by David Kline, interviews with the bloggers and commentary. The book has a good structure and it is not necessary to read it chronologically. The three parts of the book each have their own accent.
Politics & policy
The Politics & policy part stresses the influence of blogs on politics. Kline points out that blogs influenced the 2004 presidential election. He links this to public dissatisfaction with the mainstream media. Blogs have the ability to give the public a voice and to offer a more diverse view. Maybe the mainstream media give a ‘false balance of objectivity’, as Geneva Overholser points out: “It leads to a false balance of ‘on the one hand, on the other hand’ opinion stories that make the two ‘hands’ appear equal even when the factual weight lies 98 percent on one side.” (9) Or maybe there are more opinions than two. The blogosphere could give these alternative opinions. But, as Ezra Klein points out, blogs “encourage polarization and extremism rather than debate and understanding.”
I think the blogosphere could help politicians know what’s really going on in society. Maybe anyone can be a watchdog, especially for local problems or problems the mainstream media ignore. An example is the Webantenne of the Dutch government. The Dutch government wants to hear what citizens need. Instead of deciding what it thinks is best for the country or only engaging with the mainstream media, it now also wants to enter into conversation with citizens.
Business & economics
A recurring point in the book is that authority is going to change. One-way communication in politics, business and media is coming to an end; blogs give people the ability to talk back. In business, the producer/customer boundary fades. (more…)
Firefox use in Europe is up to 24%, but use here in the Netherlands is at 14%.
Anyone want to speculate why this is? Of the stereotypes I’m aware of, the one that best fits this figure is apparent in the phrase “doe gewoon, dan doe je al gek genoeg”, which translates to something like “don’t do anything outrageous, ‘normal’ is crazy enough as it is”.
More seriously, though, what kind of Firefox ‘marketing campaign’ would work at a local level? It’s worth noting that there is just one local version of spreadfirefox.com (a Japanese ‘beta’ version). Perhaps the national level is irrelevant here, and one should start by getting Firefox on all of the computers at one’s university?
In addition to Laura’s article, I’d like to add another Skype alternative: Fring.
Fring is a free mobile VoIP application which allows you to talk and chat via an internet connection with pc-based services such as Skype, MSN, ICQ, Google Talk, SIP and Twitter.
My experience with Fring is based on a Windows PC and a Nokia phone. Installation of the software is quick and easy; don’t forget to have all of your usernames and passwords at hand. The interface is user-friendly.
When you start the application it connects to the internet through wifi, 3G or GPRS. It then combines your contacts from all the applications mentioned above into one list. Now you are able to call or chat with your contacts without incurring extra costs for, for example, SMS.
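Conceptually, that contact aggregation is just a merge of per-service rosters into one deduplicated list. A minimal sketch (all names and services below are invented for illustration; Fring’s actual implementation is of course not public):

```python
def merge_contacts(*service_rosters):
    """Combine per-service contact lists into one deduplicated roster.

    Each roster is a dict mapping a display name to a service label;
    a contact present on several services keeps all of its labels.
    """
    combined = {}
    for roster in service_rosters:
        for name, service in roster.items():
            combined.setdefault(name, []).append(service)
    return combined

# Hypothetical rosters from two of the supported services
skype = {"Alice": "Skype", "Bob": "Skype"}
msn = {"Bob": "MSN", "Carol": "MSN"}

roster = merge_contacts(skype, msn)
# Bob appears only once, reachable via both services
```

The point of the sketch is just that the client needs a single keyed structure so a contact on several networks shows up as one entry with multiple ways to reach them.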
Most such applications run over their own networks; Fring doesn’t. It works via the servers of the other applications. When you select a person, it gives you the choice to call via GSM or via Fring. If you’re calling a contact who is on your SIM card, you will pay for that call in the same way as for Skype Out calls. On YouTube you can find several videos on how to use Fring in its different forms.
I’ve been using Fring for a couple of weeks now, and I’ve noticed that when I call somebody who is also using Fring, the connection has a delay in it. When you talk to each other, you are forced to leave silences between your sentences so as not to talk at the same time as the other person. It has the feel of a long-distance call. When I call with Skype, for example, this delay doesn’t occur.
In the upcoming month I will keep testing Fring, and I’ll give you an update once I’ve figured out all of the functions and options. I’m also thinking of comparing it with Jaiku; does anyone have experience with that software?
As the first half of this semester draws to an end, and we all wrap up our compulsory blogging and prepare for the next hurdles of our MA course, I was surprised not to have heard a single word about the dark side of blogging. I don’t mean blogging by people who are insane, racist, paedophiles or otherwise morally wrong but who can still spout their sick ideas across the free and big internet; I mean those blogs that hardly contain a single sentence. They don’t need the long discourses, stands and ideas which millions of bloggers throw onto the net every day, and they don’t need followers who want to read about their thoughts, frustrations and desperate need for mental connection all over the world. They post up music, for free, and although they generally don’t spread foul ideas, mental sickness or any kind of material to corrupt the youth, most authorities in the world would like to see them vanish from the internet first, without any discussion. We’re entering a dark side of blogging, one of music piracy, fear and taboos, but most of all of complete confusion over whether it must be stopped or actually benefits both artists and listeners. Maybe you’re the kind of worry-free iTunes shopper who wants to stay clear of any illegal online business and not even think about getting involved in anything that harms you and your computer. Taking no risks is fine, perfectly understandable, but then I suggest you leave this blog alone and skip to the next one. For those who want a closer look into the supposedly shady world of MP3 blogging, here are some thoughts, facts, and ideas…
Music piracy of all ages
If we say that most MP3 blogging concerns illegal online activity, and that most of these blogs post up music which is protected by copyright law, we must return again to the matter of music piracy. Many people are actually getting very tired of the subject and would rather just pay for the, let’s be honest, comfortable ways of buying music online and getting it straight onto an iPod without any hassle or fear; yet music piracy is still around despite these comforts. The digitization of music, books, and films has proven that the right marketing must be applied to successfully make a profit. In the last years of the twentieth century it became clear that the internet was a fire hazard going way beyond the white-label market of CDs and photocopied books.
Burning a CD in one’s own home was a novelty, and the days of taping friends’ CDs onto cassettes and copying films with VCRs, with their accompanying terrible quality, were just on the way out; it makes us cry with laughter now, as we scroll through our iTunes libraries and buy full Blu-ray downloads to watch on our plasmas the same night. Some might argue that it is exactly the streamlining of online music and video through the right outlets that put a stop to many people downloading illegal material: buying involves less hassle, and since it’s legitimate we don’t have to fear material that harms our computers, like viruses and spyware. But the truth is that copyright holders all over the world are still struggling like never before to enforce legal action against any form of online piracy, in which both provider and user are held responsible for their conscious decision to break the law and rob hardworking artists of their daily bread.
So where do these uncaring, risk-taking, selfish online thieves come from, and what makes them tick? Is it so hard to pay some small amount of money for a download that its maker worked his fingers to the bone for and put his blood, sweat, and tears into? As we all know, no single point in time shows this better than the emergence of Napster, now more than ten years ago. Download and install a program, get a username, share some stuff you have and boom: download anything, from anyone, in any quality, until your capacity is reached. Free. These P2P programs took over from the first websites that just blatantly posted copyrighted MP3 material, whose spreaders could easily be caught by tracing their IP address (which mostly resulted in the website being shut down). P2P programs proved more difficult: the maintainers of the networks claimed to hold no responsibility for the material being shared, so only the users could be targeted, and they were protected by their privacy rights. After a lot of trouble Napster was forced to reorganize itself so that copyrighted material could no longer be shared, but in the meantime numerous new P2P programs emerged, many of which are still being used today.
Despite the survival of P2P programs, they certainly aren’t used as much as when Napster ruled the scene. Fear began to sneak in when certain users of P2P programs were caught and charged enormous fines, often for downloading only a single copyrighted album rather than holding tens of thousands of songs. Torrents proved more useful: files were disseminated over a network in clusters, so the holder of the file (a ‘seeder’, in torrent parlance) couldn’t be held responsible for spreading illegal material, since he didn’t distribute the whole file. A smart system, but these days the holders of torrent sites (like The Pirate Bay) are being handled firmly by intellectual property holders as well. Many also find the world of torrents a lot more shady and harmful than P2P sharing, given the viruses and spyware common around the sex ads with which torrent sites often adorn themselves.
And then there were the music blogs…
Whether you are in favour of or against music piracy, there are actually very few spreaders of the phenomenon who claim they got involved in it for the money, not giving a damn about musicians losing income. On the contrary, in most cases they are avid music lovers who see the spreading of the music they love as a holy mission. So if they love the music so much, why do they spread it around for free and let their beloved artists miss out on the little money they’re entitled to after all the hard work of making their music? At this point we arrive at a divide. But before discussing this ideology: what is an MP3 blog exactly, and how does it differ from a normal blog?
The idea is broadly the same: everyone can put up a blog and start blogging. A normal blog uses text and images; an MP3 blog adds a bit more: an image of the cover of a musical work, a track list, and a link to the place where it can be downloaded. And just as ordinary people write blogs, MP3 bloggers (also called ‘audiobloggers’) post up one or several works for people to download. The posts all stay online and can be searched back through for months, even years. As long as the links still work, of course.
Speaking of these links, where is the music hosted? The answer is simple: file hosting sites such as Megaupload, Rapidshare and Hotfile are simple, initially free services where one can upload virtually anything and let anyone download it. Host, uploader and downloader seem, just as in the Napster scenario, untouchable; privacy rights and disclaimed responsibility for uploaded material still prove to work ten years after Napster. Clicking the download link on an MP3 blog thus leads to one of these sites, where after a short wait (presuming the downloader is a free user and doesn’t have a ‘premium’ account) a compressed file containing the music, usually under a not-too-suggestive name, can be downloaded.
To go back to the ‘divide’ I mentioned earlier: there are MP3 blogs where the latest hits from the album charts are uploaded every day in huge quantities. Mostly these sites use Google ads or similar applications to make a profit from visitors, and since the material they provide is commercial, a great number of visitors can be anticipated. These MP3 blogs are focussed mainly on profit, not music. Most are hosted and/or put up in countries where infringers of copyrighted material are hard to catch and bring to trial, if that is possible at all. Sites like these, offering the latest albums from the likes of Lady Gaga and Justin Timberlake, can indeed be placed on the shady, sex-ad-adorned side of MP3 blogging. Some even argue that the best-selling music is already profitable, so putting it up illegally isn’t hurting it, but they forget that these bloggers don’t care about the music: as long as it sells, it gets posted. Whatever the music is.
A big danger in stereotyping audioblogging is claiming that all audiobloggers are like the example I provided above, whilst in fact the opposite is true. Believe it or not, most MP3 blogs online today (in reality, precisely because they are not trying to be like the profit makers) are interested solely in bringing people music that is rare, hard to find, upcoming, or forgotten. They have a mission in what they do, not staying in the shades of anonymity but being the master and provider of their blog for a troop of like-minded downloaders who adore him or her and the mission the blog seems to perpetuate. These downloaders, in turn, help the bloggers by bringing their own input of similar music to the blog and becoming part of it. In this way an audioblog can create a strong community supporting its cause, one that can help it against authorities trying to take it offline.
For instance (to give an example from my own research, whose name I will, for its own sake, keep anonymous): an audioblog dedicated to underground hip-hop music is not just putting up any hip-hop album it gets its hands on; it actively avoids the big-selling music from the charts and tries to bring to the blog music from unexposed artists, or vinyl releases, rare mixtapes and limited-edition material long out of circulation. Why? Because the people who want it, but can’t buy it, can still get it and spread its sound. And what most people don’t know, but is branded by the blog as quality, can be tasted. Releases like that mostly provide not only a download link to a file hosting site but also a link to a store where the album can be bought. In this light, these blogs claim to do the very opposite of copyright infringement: they support the artist, ironically by providing the music for free, in the good faith that the true fan discovers rather than profits, and buys the album after hearing the illegally downloaded material. To make the story even more peculiar, artists themselves have started supporting these sites, claiming that they actually helped increase their profits and, more importantly, their exposure in the music scene, a task which many big press and radio outlets such as MTV seem to have abandoned in favour of money.
Piracy = piracy
The problem with this whole phenomenon, once again, is that authorities have little interest in untangling the ‘divide’ I have tried to sketch; no matter what you think legitimizes your cause, MP3 bloggers are still supporting piracy and are therefore breaking the law. It is a common problem in law enforcement: people who benefit from illegal activity without hurting others or profiting from them are hard to separate from the profiteers who are interested solely in personal gain. The discourse I have tried to outline above is therefore not a statement for or against audio piracy, but an attempt to make you think about the fact that there is more to audio piracy than some shady people using commercial music and the usual gambling and sex ads to make a buck. MP3 blogs can be used as a means to make people remember ‘forgotten’ music and to expose artists making quality music who are still trying to find the right audience, two things which can only benefit both artist and listener. We have learned by now to avoid the shades of the net where piracy resides with its malware and viruses, but it is this view of piracy that also keeps the position of the major record labels firmly in place, without concern for the actual artists. It is this view that led artists to simply provide their albums for free online, if only to take a stand against the sterility and one-way profit making of sites like iTunes, which could be compared to the ‘real’ dark bloggers: posting up commercial music to make a buck, where the benefit goes largely to them and the label instead of the artist. It pushed many artists, like Prince and Radiohead, to change their strategies for bringing their music to the fans, just to make a stand.
But before wrapping up, let’s not forget the downloaders who just download anything and never bring a single penny back to the artists, whether small or big. These are the people who keep piracy in a purely negative light: they do have the money to support the music they love, but would rather spend it on what can’t be downloaded for free. It does matter whether a love for the music is there but no money, or whether the music has been tried and found not worth the money; the matter is subjective from the downloader’s point of view. This gives the downloader an important question to think about: what is the music you download worth to you? Or better still: is it worth committing piracy for if you are too reluctant to reciprocate in any way?
Either way, I hope to have shed some light into the darkness. No biggie.
Goldstone, Andrew. ‘MP3 Blogs. A Silver Bullet for the Music Industry or a Smoking Gun for Copyright Infringement?’. Available at SSRN: http://ssrn.com/abstract=930270
And not a reference, but a nonetheless interesting site where you can read about what established artists think about music piracy: Pirate Verbatim
The recent case of Jack the Cat going missing on an American Airlines flight has seen its fair share of attention in both new and traditional media. But what are the reasons behind AA’s relentless efforts to cope with digital activism?
It started when American Airlines lost Karen Pascoe’s cat Jack on a flight from New York to L.A. Jack escaped his kennel after being checked in, and his owner was notified shortly afterwards that the pet had gone missing. Pascoe was forced to take a later flight after her search at the New York airport came up empty. She was assured that she’d get a phone call as soon as Jack was found. Despite numerous phone calls and emails, Pascoe claims AA finally contacted her only 66 hours later, just to tell her that the pet hadn’t yet been found. Meanwhile, back in New York, everybody was busy with Hurricane Irene. This is when a Facebook page was created for the missing cat, which rapidly gained a considerable number of outraged supporters. The outrage also spread to Twitter, where the #findjackthecat hashtag was created.
Noticing the build-up (at this moment the Facebook page has over 13,000 supporters), AA responded and apologized, launching a real, most probably costly, search & rescue mission to find the cat. The company even engaged the New York Port Authority in its mission and used dog-tracking services to find the missing pet. In the meantime, the airline has been updating its Facebook account with its efforts to find Jack and has been tweeting actively on the matter.
The story has caught worldwide attention, and some may even consider the AA frenzy quite hilarious. Indeed, the attention for this case might seem surprising, but given its background it is likely that AA is overcompensating after criticism of its almost absent engagement with audiences through new media. STELLA Service ranked American Airlines last in a list of airlines in terms of response time to customer tweets and calls during Hurricane Irene.
But are the AA efforts truly justified when considering the potential damages? Are these outraged supporters going to give in to the call to action and actually stop using the AA services? Probably not.
Malcolm Gladwell argues in an editorial for the New Yorker (“Small Change – Why the revolution will not be tweeted”) that social media is not in fact “the” platform for activism and that social network activism does not produce real social change: first, because real activism can only be achieved by relying on the “strong ties” of friendship and family which connect activists to each other, while social media platforms are built around “weak” or “loose” ties; and second, because effective activism requires a hierarchical structure, not something to be found in the diffuse structure of social media networks.
Twitter is a way of following (or being followed by) people you may never have met. Facebook is a tool for efficiently managing your acquaintances, for keeping up with the people you would not otherwise be able to stay in touch with. That’s why you can have a thousand “friends” on Facebook, as you never could in real life.
In fact, according to Gladwell, it might seem that the effects of digital activism actually tend to oppose those initially desired. The feeling of completion, or maybe the “will to powerlessness” as defined by Geert Lovink (2008), that we get from joining a Facebook group cause is enough to make us think we have acted, leaving no room for real activism.
The results of networking often are a rampant will to powerlessness that escapes the idea of collective progress under the pretext of participation, fluidity, escapism, and over-commitment. (Geert Lovink, 2008)
Bottom line, to Gladwell the only way to get people to adhere to your cause is by not asking too much of them.
Social networks are effective at increasing participation — by lessening the level of motivation that participation requires.
Although AA perhaps has nothing to fear regarding immediate high-impact actions, its efforts are justified by the fact that, in the long run, a bad reputation can have a strong impact on its business. According to John Bell, Managing Director of the Global 360° Digital Influence Practice (Ogilvy’s global social media marketing and communications practice), it is already established that consumers are taking most of their product- and brand-relevant conversations online, meaning that word of mouth and peer-to-peer recommendations tend to be the main information channels. It is in this context that Google becomes a reputation manager and the main focus is drawn to search results. Companies must now practice effective search reputation management in order to control the information on which consumers ultimately base their purchasing decisions, and there is also a trend toward developing tools in this direction, like Google’s Me on the Web service.
Even at this moment Jack the Cat scores in the top 3 Google results when searching for “American Airlines + Cat”.
• Lovink, Geert. Zero Comments. New York: Routledge, 2008.
• Gladwell, Malcolm. “Small Change – Why the revolution will not be tweeted.” New Yorker Magazine, October 4, 2010.
The Masters of Media blog has been redesigned and updated! Since the beginning of this semester the Masters of Media v2.0 have been posting on this blog, and a new group of masters needs a fresh new look. In this post you can read about the new features, as well as an evaluation of the collaborative process during the redesign of the blog.
The past couple of weeks we collaboratively brainstormed and negotiated about the redesign of this blog. Most of the masters took on a task, such as making a proposal for the design, link list, tag cloud, or navigation, or looking into new plugins and the new features of the WordPress 2.3 update. After this initial research we came together in a meeting of both master classes to vote on the important decisions. In this group decision process we settled some of the main issues for the redesign of the blog, and after online/offline discussions Maarten’s layout proposal was voted the most suitable. These group discussions turned out to be very productive for decisions on the general structure and reorganization, but once we got to implementing and refining the design, group decision-making was no longer as effective. During implementation of the new design, Erik came across decisions that needed closer attention than group decision in class could give them. Erik, Esther, Roos, and first-year MoM blogger Anne got together on a Monday to work the whole evening on refinement and implementation of the design.
Redesign and implementation
The group of four turned out to be a good number for working effectively on these problems. One of the foremost issues addressed was the proposed header of the blog. Although the image of Japanese people taking a cell-phone picture of our new MoM logo was very funny when it was proposed in class, it was not very “masters of media.” We needed something more “new media,” something more geeky, something we would blog about. In sync, Roos and Anne came up with the idea of using a QR-code of our blog URL as the image for our header. A nice extra of the QR-code logo is that it is great for hiding easter eggs: one is implemented, and I’m sure others will follow soon. After some designer pixel frenzy and the proper implementation of these ideas, we called it a night at around midnight.
Today Erik, Roos and Esther came together again to finish the blog for publication. The design was tweaked and some very nice plugins were added. The new “most popular posts” listing shows the most popular posts of the last month; besides being a very nice addition to the side menu, this plugin also provides some nice stats on the backend. Since we wanted to write a post about this collaborative process collectively, we needed a new plugin that makes multiple-author posts possible. It automatically adds authors to a post whenever they edit it.
Although we are very happy with the result of the blog so far, it is a blog and some important work still needs to be done. Always. We now have 403 posts, 837 comments, and 404 tags.
- The tag cloud now represents a selection of the most used tags overall but needs some cleaning up (can be done at “manage tags”).
- Although we collectively decided on a tag cloud and no categories, the discussion on a combination of tags and categories for navigation purposes might need to be addressed again. Navigation is not clear now and after all, a 404 on tags hints we need categories as well. An interesting analysis on the use of tags and categories can be read at Problogger. The tag cloud can use some redesign and might only list tags of the last x days.
- A new cleaned up link list needs to be composed and put online.
- We got a calendar, do we want it on the new blog and in what form?
- Since we now have a QR-code that can serve as a logo on t-shirts and coffee mugs, and since we have been collecting cool quotes in the past couple of months, the Cafepress section of our blog will be updated soon.
- Look out for bugs and report them.
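As a sketch of the “tags of the last x days” idea from the to-do list above, a recency-filtered tag count could look something like this (the post data below is invented for illustration; on the real blog a WordPress plugin would query the database instead):

```python
from collections import Counter
from datetime import date, timedelta

def recent_tag_cloud(posts, days, today=None):
    """Count tag usage across posts published in the last `days` days.

    `posts` is a list of (publish_date, [tags]) tuples.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=days)
    cloud = Counter()
    for published, tags in posts:
        if published >= cutoff:  # only posts inside the window count
            cloud.update(tags)
    return cloud

# Hypothetical posts with publish dates and tags
posts = [
    (date(2007, 10, 1), ["wordpress", "design"]),
    (date(2007, 10, 20), ["qr-code", "design"]),
    (date(2007, 10, 25), ["wordpress"]),
]

# Only the two most recent posts fall inside a 10-day window
cloud = recent_tag_cloud(posts, days=10, today=date(2007, 10, 27))
```

The counts in `cloud` would then be mapped to font sizes for display, which is all a tag cloud really is.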
On Friday, October 31, the free Moving Movie Industry Conference, organized by Stifo@Sandberg, took place in the theater of the Public Library in Amsterdam. It was a full day’s program with different speakers around the theme of how new media are influencing the moving image, and the other way around. (more…)
Manovich’s lecture was terrible in terms of both its preparation and its intellectual content. As a member of an audience, I find it disrespectful when a speaker – the keynote speaker no less – does not have a well-prepared talk and fumbles through a generic set of keynote slides. This fumbling was all the more annoying because the audience could not see the whole slide – particularly the text – at once. The examples he provided of “Cultural Analytics” would have been laughed at by any serious social scientist or art historian. I would have laughed too, if I hadn’t been so pissed off.
Using a sample-size of 35 hand-picked images from realism to modernism, he analyzed the paintings using open-source digital techniques, which demonstrated that painting became increasingly simple (fewer distinct shapes in each image) during this period. Obviously, a sample size of 35 is not sufficiently large to generate statistically relevant results. This sample, moreover, was clearly biased and included a disproportionate number of works by Russian artists and no Americans. But perhaps more importantly, what does one learn from this software-generated observation, which has been generally accepted knowledge amongst art historians on the basis of empirical observation for decades? Absolutely nothing.
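For what it’s worth, the underlying measurement itself (counting distinct shapes in a binarized image) is straightforward; a toy version via connected-component labeling might look like the sketch below. The 0/1 grid stands in for a thresholded painting; Manovich’s actual tools are not public here, so this is purely illustrative.

```python
def count_shapes(grid):
    """Count 4-connected components of 1-cells in a binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    shapes = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                shapes += 1
                stack = [(r, c)]  # iterative flood fill from this cell
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if grid[y][x] != 1:
                        continue
                    seen.add((y, x))
                    stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return shapes

# Toy "thresholded painting": three separate shapes
image = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
]
```

The easy part is the counting; the hard part, which is exactly the point of the critique, is choosing a sample and a question that make the number mean anything.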
In another example, Manovich showed slides that demonstrated an enormous waste of US taxpayer money. He had used the remarkable array at UCSD of some 70 large, hi-res, flat-panel monitors, each connected to a processor (thus constituting a potentially parallel super-computer for visualization), in order to demonstrate variations in the brightness of Mark Rothko’s paintings during the artist’s lifetime. The outcome of the analysis was as underwhelming as the method was problematic. The challenges of accurately capturing the color and tone of a painting in digital form and then representing them on a monitor are well known. The challenges of comparing multiple paintings on monitors are all the more complicated. While there may be insights to be gained by such a method – and I’m not sure how relevant they would be even in the best of circumstances – it appears to be limited to only the most superficial formal aspects of a painting. And while certain aspects of connoisseurship may be aided by computer analysis of high-resolution digital images, Manovich’s example was far from that. What do we learn about Rothko or about art in general from an analysis of the brightness in his work over time? Why even bother posing that as a research question?
As a matter of comparison, in the Digital Methods project (spearheaded by Richard Rogers at UvA, and involving several New Media MA students, PhDs, and other researchers), the researcher must very carefully orchestrate the question, the method, and the database. Sometimes this requires creating new tools. Sometimes this means asking different sorts of questions. Sometimes both. Digital Methods analysis works only when these three elements are in sync. And when it works, it can provide vital insight that could not be arrived at otherwise, because the quantity of information that must be evaluated and manipulated does not lend itself to traditional methods. If Manovich’s Cultural Analytics hopes to achieve what Digital Methods has achieved, it must learn how to ask relevant questions of its tools and data. The questions Manovich posed to his data were mundane and limited to formal features. Even then the methodology and results were unconvincing. As a member of the audience astutely pointed out during the Q&A, Manovich’s cultural analytics does not reckon at all with content. Perhaps that, more than anything, is its most egregious shortcoming.
Well, maybe not. Manovich’s contentions, or rather refrains, that “culture is software” and “we are entering a new epistemology: pattern is the new real” may sound provocative and progressive but they represent very shallow thought, an epistemological slippage that fails to differentiate between ontological registers. I could not agree more with the audience member who, during the Q&A, said she thought that some of his positions were dangerous. His response – that it’s not dangerous like walking into the street when a bike is whizzing by – was, not surprisingly, as shallow, if not arrogant, as the rest of his presentation.
Manovich is a central figure in new media discourses and is a figurehead of our field within a larger ecology of scholars and public intellectuals. We should expect more from him. He does a disservice to the field and presents a poor example for students when he presents material that is intellectually shallow and does so in an unprofessional manner. There were quite a number of UvA New Media faculty and MAs in a circle around Manovich after the talk. I hope you were kicking ass and not kissing ass. None of you rose to my challenge to blog the event. Maybe next time you’ll have more time and/or courage.
Event details: Lev Manovich was the keynote speaker for the public opening event at Paradiso, prior to the expert meeting “Archive 20/20,” organized by Virtueel Platform and held at the Trouw Building the following day. See http://www.virtueelplatform.nl/en/#2519 and http://www.virtueelplatform.nl/en/#2489
Show off your favorite videos to the world.
Take videos of your dogs, cats, and other pets.
Blog the videos you take with your digital camera or cell phone.
Securely and privately show your videos to your friends and family
around the world. …and much, much more!
(About Us page of YouTube, 2005)
YouTube. One of the worst names to put on the cover of a book these days. Think only of SEO (search engine optimization): if you Google ‘YouTube’ you get 1,070,000,000 results. Where do we find info about this book, or where do we buy it? Ah, if we look at the inside of the book we see the full name, YouTube: Online Video and Participatory Culture. Finally some clues!
But OK, more important is the content of the book. YouTube: Online Video and Participatory Culture is written by dr. Jean Burgess and dr. Joshua Green. This study on YouTube was surprisingly 'new' for me on a lot of points. We all know YouTube, and some of us are active contributors. But this book will give you good insight into the 'back-end structure' of YouTube. How is the world's largest participatory video community structured? And in which ways are these structures evolving as a media system in the economic and social context of broader media and technological change?
If we take a closer look at the book we see that it's not that thick: 172 pages, written in a very readable style with lots of examples. At the end of the book two essays are included by the well-known academics Henry Jenkins and John Hartley. These two essays provide a nice exploration of the challenges these developments pose to some of the central areas of debate in media and cultural studies. This book differs from other books on YouTube because of its methodological approach. In the scientific field there are two other approaches: one studies YouTube from a computer science and social network perspective (Cha et al. 2007; Gill et al. 2007), the other is a large ethnographic study (Lange 2007). Both are interesting to read. The study of Burgess and Green combines two methods of research. On the one hand they use the qualitative close reading of media and cultural studies. On the other hand they analyze over 4,300 'most popular' (viewed, responded, favorited and discussed) videos in a quantitative survey. They argue that this middle-way approach helps them understand the emerging issues in current debates about cultural politics and digital media.
They start off by looking at YouTube's origins and the prehistory of the debates around it, contextualizing them within the politics of popular culture, especially in the light of new media. Burgess and Green use an empirical survey of the website's most popular content to uncover some of the different ways YouTube has been put to use; think for instance of cultural participation (participatory culture) and the modes of thinking that surround it. They argue that YouTube has been co-created by various institutions and individuals and is part of a participatory culture. They look at the most pressing discussions about this participatory culture: the unevenness of participation and voice. One of the authors' findings is that more than half of the online content on YouTube is user-generated, vlogs for instance. Another finding is that individual users form a large majority of the contributors on YouTube; big traditional media companies make up a smaller part. They conclude that individuals contribute a substantial amount of media that originates from the traditional media institutions, for example quotes or parts of video clips.
In the second part of the book they take a closer look at YouTube as a social network. They argue that YouTube is more than a distribution platform that can be used to broadcast to an online audience. Instead they take the vlog as their key example and state that vlog entries announce the social presence of the vlogger and call into being an audience who share the knowledge and experience of YouTube as a social space. So Burgess and Green examine the YouTube community on a micro level, looking at power structures and relations within this community. For example, someone who participates and contributes on the website is a 'lead user': a person who understands the way YouTube works and can apply his own skills in a way that makes sense within that system. But again, this form of cultural citizenship is limited. Both digital literacy and the unevenness of participation and voice are important issues for cultural politics, they argue.
In my opinion YouTube: Online Video and Participatory Culture offers great insight into the 'back-end' of YouTube and can attract both experienced users of YouTube and newcomers. The book is written in a very readable style, and it is interesting reading material if you're interested in the present and future implications of online media.
To close, a short summary of the two added essays by Henry Jenkins and John Hartley. Jenkins looks in his essay at the often under-acknowledged, as he states, prehistories of YouTube that are to be found in minority, activist and alternative media, in order to better understand the limits of YouTube. Hartley's essay is about the longue durée history of media, popular literacy and the public. He addresses the question to what extent user-created expression is capable of being scaled up to contribute to a more inclusive cultural public sphere and the growth of knowledge.
Cha et al. 2007. 'I Tube, You Tube, Everybody Tubes: Analyzing the World's Largest User Generated Content Video System.' Paper presented at IMC'07, San Diego, CA. http://www.imconf.net/imc-2007/papers/imc131.pdf
Gill et al. 2007. 'YouTube Traffic Characterization: A View From the Edge.' Paper presented at IMC'07, San Diego, CA. http://www.imconf.net/imc-2007/papers/imc78.pdf
Lange, P. 2007. 'Commenting on Comments: Investigating Responses to Antagonism on YouTube.' http://sfaapodcasts.files.wordpress.com/2007/04/update-apr-17-lange-sfaa-paper-2007.pdf
Matteo Pasquinelli's presentation this Friday at the Society of the Query conference, organized by the Institute of Network Cultures led by Geert Lovink, was based on his paper 'Google's PageRank Algorithm: A Diagram of Cognitive Capitalism and the Rentier of the Common Intellect.' The paper can be downloaded from his website.
The essay and presentation of the Italian media theorist and critic focused on an alternative direction for research in the field of critical Internet/Google studies. He proposed a shift of focus from Google's power and monopoly, and the associated critique in Foucauldian fashion developed within fields such as surveillance studies, to the "political economy of the PageRank algorithm." According to Pasquinelli, the PageRank algorithm is the basis of Google's power and an emblematic and effective diagram of cognitive capitalism.
Google's PageRank algorithm determines the value of a website according to the number of inlinks received by a webpage. The algorithm was inspired by the citation system of academic publishing, in which the value of a publication is determined by the number of citations its articles receive. Pasquinelli takes this algorithm as a starting point in order to introduce into critical studies the notion of "network surplus-value," a notion inspired by Guattari's notion of "machinic surplus value."
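For readers curious how this "value from inlinks" actually accumulates, here is a minimal sketch of the idea behind the original 1998 algorithm. The toy graph, the damping factor and the fixed iteration count are illustrative assumptions for this post, not Google's actual production system:

```python
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.

    A page's score is split evenly over its outlinks; the damping
    factor d models the 'random surfer' who occasionally jumps to a
    random page. (Dangling pages without outlinks are not handled
    in this simplified sketch.)
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        new = {}
        for p in pages:
            # every page q linking to p passes on a fraction of its rank
            inbound = sum(rank[q] / len(links[q])
                          for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * inbound
        rank = new
    return rank

# Toy web: B and C both link to A, so A accumulates the most value,
# illustrating how inlinks translate into rank.
graph = {"A": ["B"], "B": ["A"], "C": ["A"]}
ranks = pagerank(graph)
```

Note how C, which receives no inlinks at all, ends up with only the baseline "random surfer" score: in Pasquinelli's terms, the attention (link) economy is what produces the value.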
The Google PageRank diagram is the most effective diagram of the cognitive economy because it makes visible precisely this characteristic aspect of the cognitive economy, namely network value. Network value comes on top of the more established notions of a commodity's use value and exchange value: it refers to the circulation value of a commodity. The pollination metaphor used by the first speaker, Yann Moulier Boutang, is useful in understanding network value. Each one of us, as "click workers," contributes to the production and accumulation of network value, which is then embedded in lucrative activities such as Google's advertising model.
While the knowledge economy places particular emphasis on intellectual property, the notion of cognitive rent to which Matteo Pasquinelli draws attention becomes useful here. Google as "rentier of the common intellect" refers to the way in which free content, produced with the free labour of individuals browsing the internet, is indexed by Google and used in profit-generating activities. From this perspective Pasquinelli challenges Lessig's notion of "free culture": Google offers a platform and certain services for free, but each one of us contributes to the Google business when performing a search, data which is fed into the page-ranking algorithm. The use of the notion of common intellect or collective intelligence in this context is however debatable, as shown in the discussion session which followed the presentation, because only a certain, arguably limited, segment of individuals — the users who contribute content to the web — have their linking activity fed into the PageRank algorithm. The prominence of the PageRank algorithm as generator of network value was also questioned, as the algorithm is not the only ranking instrument: as a posting on Henk van Ess' website shows, human evaluators also participate in page ranking.
What is there to be done about Google’s accumulation of value by means of exploitation of the common intellect? Or to use Pasquinelli’s metaphor, are there alternatives to Google’s parasitizing of the collective production of knowledge? How can this value be re-appropriated? As the speaker suggested, perhaps through voluntary hand made indexing of the web? Or an open page rank algorithm? Or perhaps a trust rank? The question is still open.
You can read more about what happened at The Society of the Query on the event’s blog.
In a previous post I discussed and, hopefully, debunked some common assumptions about the next phase of the World Wide Web, or web 3.0. The general assumption is that in the 2.0 era the user was at the centre: the produser took control and the cult of the amateur was born. The web was flooded with what seems an infinite amount of user-generated content. Big platforms such as Flickr and Facebook managed to centralize and collect some of these efforts effectively. The result is a big, fragmented and messy dataset. Enter web 3.0: the iteration of the web which can be read and understood by machines, where the dots will be connected and contribute to an open sphere of knowledge, something the current pragmatics of the web don't easily allow for. The philosophy here is, bluntly put, that this connected sphere is more than the sum of its parts. Tim Berners-Lee recognized the problematics of the messy web early on and proposed the Semantic Web to overcome messiness and apply a semantic structure that brings order (read: computer logic) to the chaos (read: human expression).
Pierre Levy, French philosopher and leading expert on collective intelligence, is on a similar mission. While driven by, arguably, a similar set of goals, his approach takes it a step further. The problem with Berners-Lee's semantic structure is that it implies a universal ontology, which might prove to be the Achilles heel of the protocol. Levy's approach aims to overcome these problems.
Levy is currently working on a research program called IEML (Information Economy Meta Language). IEML is a metalanguage and proposes itself as the language of collective intelligence. As a metalanguage it differs fundamentally from the natural languages we know, which is best understood by looking at how it is conceived. Natural languages are, in the first place, the result of a process of documenting the spoken word. A metalanguage is artificial and results from formalizing ideas instead of words. The practice of formalizing ideas in a universally adopted metalanguage is well established in the realm of the natural sciences: for centuries now, ideas have been documented in terms of formulas, numbers, equations, molecules etc. There is a finite, well-structured toolset at the hands of every natural scientist. In the humanities the area of interest is infinite and not easily encapsulated in a formal manner. In the humanities, knowledge is fuzzy, or as the IEML vision paper describes: "the knowledge and expertise accumulated by the humanities are difficult to share in contexts that differ from the initial environment in which they emerged." IEML offers a solution to this problem, offering the humanities a language in which knowledge can be formally described. How exactly this will work pragmatically is to be determined, as the project is in a "fundamental research stage," as Levy stressed to me. On a hopeful note though: IEML also functions as a bridge between languages; the natural language of the end user is not relevant: "The IEML inter linguistic dictionary is precisely constructed to ensure that semantic networks can be automatically translated from one natural language to another." Mr Levy was kind enough to answer a number of questions and concerns I had about this project.
As Levy pointed out to me, in order for an idea to contribute to the sphere of collective intelligence it should be described in a formal manner. This is where the digital humanities researcher comes in, who should master this new code to formalize his peers' ideas, a crucial step in the process: "If ideas and concepts are not formalized, it is impossible to compute their semantic relationships automatically."
One of my concerns was with the IE in IEML, meaning Information Economy. The vision paper notes that this sphere should be "observed." I was curious what exactly should be observed in terms of meaningful data and the private sphere. Levy answered:
There is currently an immense mass of public data on the World Wide Web that is not efficiently shared, analysed and used by humanities and social sciences. Considering the extraordinary range of these data and the computing power that is now at our disposal, a scientific revolution in the human sciences can be predicted for the 21st century. I can mention the areas of cultural heritage, health, education, economy, sociology, etc.
One of the main reasons why this computational potential is not actualized today is the lack of semantic interoperability. A universal (interlinguistic and interdisciplinary) system of computable metadata, like IEML, could be the stepping stone leading us into a renewal of human sciences. Of course, every team or individual should be free to categorize and assess the data as he wishes. The common semantic code will allow for comparison and sharing.
All this should be done while respecting existing laws and privacy of individuals. (There is enough work to be done on public data.) The dangers that you mention are not specifically linked to IEML and do exist for all digital data in general.
The scientific revolution in the human sciences will culminate in collective intelligence, a common good that will propel human development. In a recent book, Levy proposed a "loose IEML model" to monitor the coordination of human development. The axes of human development are defined by "education, health, sustainable economic prosperity, security, human rights, conservation and enrichment of cultural heritage, environmental balance, scientific and technical innovation," which are in accordance with the United Nations Development Program, Levy assures me. I wondered whether a metalanguage which positions itself as functionally neutral (as opposed to Berners-Lee's universal ontology) should contain assumptions about how western democratic society is structured. Levy partly agrees, in that no metalanguage can be neutral:
There can be a lot of disagreements about the right ways or methods to improve human development. IEML, as a universal semantic code, can accommodate any method. Above all, IEML provides a common semantic sphere where all disciplines of human sciences can compare their theories and methods and can coordinate their findings at the service of human development. (…) Now, you can say: "Okay, but what if I am against improving health and education because these are western values and / or it has been used to justify western imperialism". My response is: "It's up to you!" In general, I do not think that any theory or metalanguage can be neutral. Every act, being practical or theoretical, occurs in a hypercomplex context and has an effect on this context. I do not claim any impossible neutrality or objectivity. The objection "you're not neutral" is beside the point. I have a very precise goal. My aim is to improve human development, collective intelligence and knowledge management in the humanities.
The creation of IEML is based on the explicit assumption that all human beings, and all cultures, have in common a basic linguistic-symbolic ability. The main limitation of artificial intelligence is the belief that logic and statistics are sufficient to model human intelligence. I don't think that current techniques of automatic reasoning are enough to model the basic symbolic manipulation ability of the human species. In addition to the formal tools of artificial intelligence, we need a new kind of formalism to describe in a functional and computable manner our capacity to create and transform meaning (sense, signification). IEML provides precisely such a formalism. The main result of this scientific invention will be the expansion of a semantic sphere where any creative conversation online will be able to observe its own processes of collective intelligence and to share methods and results with other conversations. IEML's existing dictionary will be expanded. You'll be able to build any kind of "universe of discourse" or semantic world by using IEML. Far from being "neutral", the creation of IEML points toward a cultural leap forward: a perspectivist scientific reflexivity of human collective intelligence.
My final concern had to do with cultural determination, as Levy had stressed in another interview; the attempts at creating a symbolic metalanguage can be found in many different cultures. Each of these cultures would produce a different “universal” language. How does one overcome this, can a symbolic metalanguage be universal?
What is not culturally determined in the human realm, especially when it comes to language? IEML is first culturally conditioned by the technical environment of the 21st century: growing computing power (automation of symbol manipulation), growing memory power (availability of digital data), and growing communication power (ubiquity of data). It is also conditioned by the scientific method (which is of course a dated cultural institution), namely its insistence on functional computability and transparent (reproducible) procedures. I don't see these determinations as limitations, but rather as very powerful engines that I have used in the service of human self-knowledge.
Levi-Strauss being one of my favorite authors, I am of course well aware of the dangers of "ethnocentrism". As an inventor, I have been influenced by a wide range of disciplines and theories in the human and cognitive sciences. (All this is explained in my book The Semantic Sphere.) I should also mention that I have dwelt on three continents (Africa, Europe, North America) and learned a lot from different cultures, that I have studied traditional Greek, Indian and Chinese philosophy, that I have scrutinized Jewish, Christian, Muslim and Buddhist metaphysics and that I am deeply involved in the amazing Brazilian cultural and economic development.
Universal means neither "out of history" nor "out of culture". The positional notation system of numbers, including the zero, is universal. The decimal system is (almost) universal. The time zones system is universal. The meridian and parallel systems for geography are universal. The Internet Protocol and the HyperText Transfer Protocol are universal. However, all these symbolic systems have been invented somewhere, sometime.
Finally I enquired about the future of IEML: what can we expect from this project in the foreseeable future? First his team has to build technical tools and train "semantic engineers" in the language. After that? Levy is not sure: "Beyond this, I have no precise idea of what the actual development [will be]. I just feel that it will happen sooner or later. I think that some big university or some scientific endeavour will lead the way, followed by the companies operating in cloud computing and big data management. I foresee the development of 'collective intelligence games' looking like massive multiplayer online / real life games, or some sort of trans-platform smart social media."
Apart from the technicalities: Levy has a clear vision for the future. IEML can’t be regarded as merely a language or a tool, that wouldn’t do its intentions justice. IEML is a language with a mission and I can’t wait to find out how that will play out.
The police were able to find proof that the computer once contained child pornography with the use of a keylogger: a spyware application that records the keyboard actions of the user. A keylogger is installed without the knowledge of the user by means of a virus or a software package containing spyware. A keylogger is malware and for that reason against the law.
Just for the record, I am 100% against any form of illegal pornography and am very pleased Catterson is charged. Yet, I am not completely sure the actions taken by the police are proper. If a keylogger is illegal, then it is not right for authorities to make use of it, even if the cause is just. I assume the malware was installed before the police confiscated Catterson's computer; still, I reckon its use is questionable and perhaps even illegitimate. What if in the future someone gets arrested on milder charges, say possession of Spiderman 3 in divx format: would the police be allowed to use spyware to gather evidence?
I reckon the British police did not put the keylogger records forward as evidence, but used them to force the suspect to confess. Still, this may lead to further breaches of privacy; following this logic, governmental authorities may distribute malware and use it, as long as it is not put forward as proof but used as a means to pressure suspects.
Luckily I saw Spiderman 3 in the cinema… actually let me rephrase that… sadly I saw Spiderman 3 in the cinema… I wasted 10 euros on a terrible movie.
Internet of Things book presentation by Rob van Kranenburg.
28th of October, Theatrum Anatomicum, the Waag, Amsterdam.
Intro by Geert Lovink (INC).
We are treated with a visual intro of the book by Rob van Kranenburg.
The intro is given by Geert Lovink, director of the INC, which published the book. The INC is part of the HvA, currently called a "university of applied sciences" (so that you know…).
This afternoon will consist of a few short speeches that will act as input or at least contributions to a discussion. They will contain talks about the book and the ideas Rob has put into this publication.
In the narrow sense it is an "RFID debate." Luckily, Rob has broadened the subject: he looked into the social side of RFID and an internet of things, as well as interventions dealing with this internet of things.
About the Institute of Network Cultures
The INC facilitates interactions between people and communities online, bringing together these clouds in order to summarize what is going on. In a time when technologies are becoming smaller, their impact bigger and their production faster, the question is: where are we? Can we discuss these things? We would like to contribute to the public debate. We are looking for writers. (Is this actually a call for publications?)
About the book
It is important that we research more carefully what is going on. Technologies of control are getting more pervasive. What to do as an activist? While discussing and dismissing these technologies, at the same time the same people (activists) are using them for their own purposes. Technology as such is a force, where different (sub)forces and domains are active within certain technological fields. We have to experiment with it; this is the paradox Rob talks about in his book. Rob has tried to capture these ambivalent feelings. Within the public debate, we want to keep researching this collective ambivalence. Social movements these days are swift and change fast: we cannot be naive anymore; we do not want to put ourselves outside. The floor is Rob's:
I want to be brief; thanks INC. However much I enjoy being online, it is still a nice thing to have a book. It makes things tangible. Thanks Sean for commenting, writing the foreword, and being a friend.
It is difficult to produce text; I have written lots of versions and it is a hard process. The image of two cities gave a way of phasing into complex things; this radicalization (into cities of control vs cities of trust respectively) helped in sorting the story out. Still, it was extremely difficult, because one can slip into all kinds of modes, from journalist to poet (where the latter is always more right than the former).
Still, I want to stress that we have to keep things ambivalent. Technologies have the tendency to take out, or have already taken out, all the messiness we have in life. If we keep thinking that way (the modernistic way, that of producible man) our world will not work; we need messiness.
I found three good friends who are going to speak about the book. (Eric Kluitenberg, Jaromil and Martijn de Waal)
There is an Amsterdam thing going on around these hybrid spaces. This book of mine is an essay as well as an open platform to join and jump in. The goal is to get some practice going, because that is what it is all about.
Martijn de Waal.
Organized "The Mobile City" conference and has started a blog about the mobile city. With the arrival of hybrid spaces, there is an architectural approach to the things Rob talks about. De Waal quotes Lovink, who once voiced a critique of internet research: it only describes. The goal is not to describe the internet but to shape it. We need critical concepts that can be implemented. What kind of technology can be used in order to become influential in the debate? De Waal has searched for these concepts in the book and has come up with two main points:
1) We need not seamless but seamful design.
As technologies disappear into the environment, the corporate view is that seamlessness should work without you noticing it. This is not the case. He quotes Bill Gates in 2004: "due to the continued growth of computational power we can create any device possible: it is the software that will magically tie stuff together." This only mystifies technology; we should divert from this.
We need a much more seamful approach: technology should be made more visible, more accessible. There are some problematic things with seamlessness:
1) you cannot fix technologies when they break down; you need external help ('a certified magician');
2) if technology becomes invisible, the affordances of this technology are hidden, and it becomes harder to tinker with it (the Skype phone example);
3) if you can no longer fix your own car, you lose faith in social initiatives and the notion that you once could. This can have cultural consequences: people become dumb; they forget the potential of technology.
2) City of Control vs City of Trust
This is a concept to describe the future of technology in an urban situation and is mostly used in debates about tracking and tracing. As all kinds of informational systems melt into a giant database, or at least multiple databases, you can no longer escape. The people who say they do not care about their data miss one point: this argument cannot hold, because data is stored longer than legislation lasts. What is normal today may be prohibited in the future; a dystopian scenario.
The concept of the city of trust uses the same technology, but designed in a different way. Instead of binary privacy, you have privacies. The user has control over the technology and what it is doing, and decides what data he or she will share with whom, in a Creative Commons-like manner.
Eric Kluitenberg.
Gives a reflection on the book by first reflecting on Rob: "I have really witnessed Rob's growing expertise in this field over the years: from EU fora about the disappearing computer to discussions about invisibility versus commitment, and after that a sheer RFID obsession. Now he is strongly involved in RFID projects."
Since the OV chipcard trouble in Holland, a lot of politicians are getting involved. It is really becoming a control issue right now; it is still problematic. The question is whether and how we can stop this technology. It is all about control these days; we have to deal with it. Regulations for this field are badly needed. Where and how can we discuss this?
Another thing is the interface question. Rob discusses this on page 23 of the book by asking "If the environment becomes the interface, where are the knobs?" How is the interface going to take shape when most computing in this space is invisible? Erik wonders about the fact that new, 2.0 models of "user involvement" and "content on the Web" are still not really there. Looking at the 'hot' trend of the iPhone, the strange thing is that it works on the 'old' model of technology. The iPhone is just a pile of old concepts; why is it so hot? Curiously, it is a screen-based interface on an established OS. It is documented. There are lots of open-source ways to get in. The intelligence is not in the network, but still in the device itself. Way before the App Store came out, hackers had already created app stores by breaking open the iPhone. Why? Because the device was still there, so it remained possible to design alternatives.
Within an internet of things you do not carry the device anymore; the functionality is in the environment. Obviously, we are not there yet. The interface challenge is a large one. The point to make is that we have to rethink that interface question. How do we intervene in current interfaces? Do we need regulation for spaces? It is all about the design and experience of these things. Interface design needs new input here, because there are deeper technological questions to address (than privacy, that is).
Jaromil.
Starts by talking about free market lies. (I was getting tired here; if anyone can fill me in, please do.)
The first is that there is one market rule: When you purchase something, you own it.
The fact is that you don’t own the code of your phone, you don’t even own the phone anymore.
The second one is that competition is the motor for free markets
…Competition is not the motor for capitalism due to ….
The third is that monopolies are bad for markets:
…Something with open source being more slow, but more rigid- argument…
He continues by listing some historical failures:
– DRM (which is Digital Rights Management; a way too-late attempt by companies to still make money out of a lost race…)
Especially in gaming this has failed.
– a fear-based society, where people can call numbers if they see something scary.
We are told too many lies and do not believe that much anymore.
– equal opportunities.
Also failed. Even within the EU there are still castes (in Italy for instance) that decide politics and chances.
– Democracy and education.
This historical failure we are yet to witness: if we can invade all these privacies of people, it will go wrong; there is no democracy without privacy.
We tried things like IP rights and creative industries. This did not work, either.
Also, we tried something called “Corporate service providers”.
One example of this not working is The Pirate Bay. It is now said that they should redistribute wealth (the money they make via ads) from a central point: the Pirate Bay community giving back to the content producers. Apparently, we still like the old pattern.
Semi-public research funds were also a hypocritical attempt. Examples can be found in recent augmented reality spinoffs.
Corporate responsibility and finally philanthropic bubbles are debunked by Jaromil.
Luckily for us, he also comes up with viable alternatives:
– empowered content producers
– peer to peer services
– local ownership of production means
– more how-to, less manifesto
Some success stories of these models are mentioned:
– tv repair shops
– free voip technology
– Micro breweries
– Free Software
He finishes with a “design for commoning” manifesto (too fast to blog).
A panel discussion:
In Germany there is a very strong, dynamic movement of activists against these technologies of control. In Holland, we have more of a let-the-catastrophe-come strategy.
We’ll see what happens, often too late. This is a question of social events occurring. Questions of agency and strategies (tactical or not).
Are these small events enough? Can they scale up? Or is it too early? Or should we wait for the catastrophe to unfold?
Think of free software; that took 25 years to develop. Can ideas be too young?
Erik said that technology is a real force now – when you read Heidegger, he says that you can only wait. Protest against the catastrophic line of the Chipcard is not helping anyone;
it is by default unhackable. You cannot hack all your groceries every morning. The polder model can help here.
The notion of an open infrastructure with RFID is completely tied to the internet. (That’s not really open, right?)
I need some form of oversight in order just to keep my own sanity – I was born thinking that anything can talk to anything. Now it will become reality.
The crash-scenario is that if we keep outsourcing our intelligence, the question of stability is rising. What if it breaks? Then, I think I would like to have my own network.
Then we can really use the expertise of people that are living in a distributing uncertainty. They are much more equipped.
Another question is: If it (IoT) will be there, for how long?
Technology is not too young; the technology is already implemented. It can be practical. In the design of technology, plurality is needed.
Who is doing what with my data? How are we altering the parameters for design?
There is no technology that is per se evil. We need new principles.
Audience: one way of describing pervasive technology: a totalitarian state of comfort.
The death of our society is coming. We try to avoid the crash. What do we do when we switch off the light?
What about regulation? I do not think regulation can do anything. Market lies lead to trouble, and there was regulation there. What do you mean? (question to Erik)
Some ideas on this: the regulated market does not exist. People get informed about RFID via these discussions about regulation. The role of civil groups is to have interventions.
The other thing is the role of artists and designers; we have to make alternative scenarios and interfaces. How do these things influence our spaces.
This RFID topic gets bigger and bigger; I want to talk technology for now… How much does it cost? How can it be hacked?
Response from the audience: there are lots of different types of RFID, passive and active; passive RFID tags are powered by a reader.
We have to open up the spectrum of technologies; from barcodes to bluetooth: we have multiple technologies.
Scenarios live in their databases; things become data.
(example of a reader in supermarkets detecting a Fanta placed in the cola shelf space)
Rob continues by talking about the platform in one space in Holland about RFID and other IoT technologies.
We will build this matrix – we have to get the designers in the right place…
We want to map stuff; objectively, we cannot trace anyone – this will create false positives.
We have to wrap up: thanks all for being here. Rob is asked to do the ritual: inviting people for a drink by cutting a ribbon.
If you are interested in reading the book, send a mail to the Institute of Network Cultures in Amsterdam.
This blog is a real magnet for pingback spam lately. While I’d like to take it as a sign of our growing popularity, that would be like being flattered by calls from telemarketers. Also, it probably says more about the arms race between spammers and spam-filters: the trend for a while now is for spammers to use RSS feeds to syndicate. Now excerpts from our blog posts end up on spamblogs, where spammers include Google Ads and wait for the money to roll (or trickle) in. It’s all automated, and ends up looking like this:
But here’s the interesting point. Two years ago, Google introduced the nofollow HTML link attribute (rel="nofollow") to prevent this very same comment spam. Nofollow is the default setting for comments on blogging platforms, meaning links placed in blog comments (including pingbacks) do not ‘count’ in search engine rankings. It is overwhelmingly obvious that as a prevention mechanism, it simply doesn’t work – spamblogs and comment spam are just too easy and cheap. What nofollow does do, though, is help keep Google’s search engine rankings stable. If Google is serious about preventing comment spam, wouldn’t it make more sense to prevent these guys and girls from getting accounts on Google Ads?
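For readers unfamiliar with the mechanism: nofollow is just a value on a link’s rel attribute, and blogging platforms rewrite comment links to carry it before publishing. A minimal sketch of what such a filter does (a toy regex version, not the actual code of any blogging platform, which would use a proper HTML parser):

```python
import re

def add_nofollow(comment_html: str) -> str:
    """Rewrite <a> tags in comment HTML so each carries rel="nofollow".

    Toy sketch: links that already declare a rel attribute are left alone.
    """
    def fix(match: re.Match) -> str:
        attrs = match.group(1)
        if 'rel=' in attrs:
            return match.group(0)  # keep existing rel attributes untouched
        return f'<a{attrs} rel="nofollow">'
    return re.sub(r'<a([^>]*)>', fix, comment_html)

# A spammy pingback link gets neutralized for ranking purposes:
print(add_nofollow('Nice post! <a href="http://spamblog.example">cheap ads</a>'))
# → Nice post! <a href="http://spamblog.example" rel="nofollow">cheap ads</a>
```

The link still works for human readers; search engines are simply told to ignore it when computing rankings – which is exactly why it does nothing to reduce the volume of spam itself.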
Although I’m not really sure what this is all about, on page 3 of the Google search query on the Spinplant I got the following: In response to a legal request submitted to Google, we have removed 1 result(s) from this page. If you wish, you may read more about the request at ChillingEffects.org. On the Chilling Effects page it is listed as a ‘Dutch defamation complaint to Google’. See the screenshot below for what I got. So basically, while there is a movement keeping the Spinplant alive, there is also a movement trying to keep it completely out of the Google search results.
Dérive is a notion used by Guy Debord in an attempt to convince readers to revisit the way they looked at urban spaces. The concept means to aimlessly walk, or drift, through the city streets, guided by the momentum and the space itself. A modern practice of dérive is roaming the streets of your city through the satellite photographs in Google Maps and, more recently, Google Street View; a new feature of Google Maps that allows one to view and navigate high-resolution, 360-degree street-level images of various cities (in the US). Google’s maps distinguish themselves from traditional printed maps in the sense that the user is able to interact. Besides zooming in on a location, the user is able to request additional information concerning a particular spot. This information is offered by parties collaborating with Google, as well as information from databases which Google has power over. Google Maps became vastly popular when it integrated satellite photographs (and photographs taken from airplanes) in its online maps; besides a map in conventional design containing information on demand, the map now presents a realistic bird’s-eye view allowing the user to rediscover familiar places (such as his/her own house) from an unfamiliar perspective.
The basic premise in Debord’s theory of dérive is that people are trapped in the practices of everyday life; by looking at the city while following their emotions, they can break with their daily route, routine and enclosed space. Cities in fact are designed in ways to direct and control their publics. Cities are complex structures in which movement and mobility are managed by their plan: road signs, for instance, tell one where to go at what speed, where not to go between what times, when to stop and when to continue. But the architecture also controls the flow of people by means of the way in which certain areas, streets, or buildings resonate with states of mind, inclinations, and desires. Debord argues that people should explore their environment without preconceptions, in order to create a better understanding of one’s nature; as one becomes aware of one’s location, one can value and comprehend one’s existence. The idea is that people build forth from their insights and seek out reasons for movement other than those for which an environment was designed. Bringing an inverted angle to the world can make people assign new meanings to familiar places, produce new forms of social interaction and make public space a place where one stops to look.
This idea of (re)discovering familiar places can be compared to taking a boat tour through one’s own city. The roads beside the eight canals in the center of Amsterdam are passageways I personally travel through frequently; however, when passing through them by boat, the well-known monumental facades in the vicinity of the canals seemed foreign to me from a different angle. Similarly, the satellite photographs in Google Maps change the meaning and memories attached to common places; they give the user an experience of re-familiarity. Street View, on the other hand, draws on the recognizable element; the photographs are taken from street level, and thereby rediscovering is substituted by virtual sightseeing. The user can now wander through New York while staying at home; moreover, the user can zoom and alter the view at any time. Instead of looking up the fastest route or determining one’s location, the function seems to have shifted in the direction of roaming and aimless wandering.
In addition, modern maps are coupled to databases consisting of location-bound information, possibly delivering the user knowledge and ultimately awareness. A wide variety of peer-created extensions are freely available, augmenting the information and increasing the amount of knowledge, such as the Wikipedia extension – which provides a sense of temporal accuracy in Google Earth because information is provided about the history and coming into being of a particular place, complete with specific dates, adding to the hyper-real situation. The practice of contributing to the medium contrasts with traditional one-way media institutions. Google Earth allows users to act upon their creative skills and knowledge by offering possibilities to co-create the product and make it available to anyone, also outside the community. The Google Maps API is a tool with which users can add whatever information they wish to existing maps offered by Google. In addition, Google offers users SketchUp; similar to the Google Maps API, SketchUp is a free application with which users can add content to maps presented by Google, but with SketchUp the user can do this in 3D (for example a model of one’s own house). Via Google 3D Warehouse the models can be uploaded and made available for all users of Google Earth. Currently maps are circulating in 3D, or with data tips containing personal information or photographs taken by users from street level (which consequently changes the perspective of the original design). Information visualization tools such as maps enable greater understanding of reality, our society, life, and, in short, our existence. The accessibility and popularity of dynamic digital maps should make academics and interaction designers wonder how new ways of wandering can educate, emancipate, and enlighten the masses.
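As an aside, user-contributed placemarks of the kind Google Earth displays are typically shared as KML, a simple XML format. A minimal sketch of generating one programmatically (the place name and coordinates below are merely illustrative):

```python
def make_placemark(name: str, lon: float, lat: float, description: str = "") -> str:
    """Build a minimal KML document containing a single placemark.

    Note that KML stores coordinates as lon,lat (longitude first). This is a
    bare-bones sketch; real KML files can also carry styles, overlays and
    3D models such as those from Google 3D Warehouse.
    """
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point><coordinates>{lon},{lat}</coordinates></Point>
  </Placemark>
</kml>"""

# Example: an (approximate) canal-side spot in Amsterdam
kml = make_placemark("Herengracht", 4.889, 52.371, "A canal in Amsterdam")
```

Saved as a .kml file, a document like this can be opened directly in Google Earth, which is what makes this kind of peer-created annotation so accessible.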
Introduction by Ole Bouman:
At the NAi, values of architecture are defended that we are fond of defending. Most architects and policy makers do believe that architecture is about shelter and enclosure, occupation and representation. Archiving architecture used to be at the core of the NAi, but it has to look at what is happening with architecture now and look beyond the traditional field. New questions arise about how new technologies affect architecture and what this says about the individual. It is not solely about designing and archiving our world anymore, but also about looking at new possibilities. Discussions with other disciplines here are very valuable.
What happens in the merging of physical and digital space? Is locative media a successor of net.art? It is about creating experience in real locations with digital layers. Nowadays GPS phones and ad hoc networks create a new experience of place. This experience of place is no longer only the domain of architecture. Interaction designers, gaming, media makers and artists are now moving into that space. Through architecture, we can define ourselves as human beings. At least, we used to. New technologies are defying these standards, this paradigm. There is fluidity in how we create representation. New, locative technologies empower a nomadic life; new ways to organize our spaces. Is it also affecting the way we look at ourselves? We have to think about our position vis-à-vis our technologies and societies. Roughly, two kinds of audiences can be distinguished (within the conference): one of curiosity and openness versus one that is procrastinating, hesitating towards technology. This dichotomy is a typical western position. Where modernity may no longer be on our side and technology is at the core of our daily life, we need people to help us conceptualize and define new concepts of urbanity and social interaction.
The book ‘Sentient cities: ambient intelligence and the politics of urban space’ by Stephen Graham is a reflection on politics, locative media and ubiquitous computing. Where technology fuses itself into the background of daily life, all sorts of scenes (art, commercial, governmental etc.) are utilizing new technologies and seeking combinations, weaving them into certain directions simultaneously. We are moving towards a society of enacted environments. Phenomena like an internet of things, sensor databases, biometric sensing, ubiquitous computing, machines linked to senses and databases etc. are already dawning. All these infrastructures are constantly at work (or will be working) in the background in cities, arranging all kinds of privileges, possibilities and accesses. We don’t know where these servers are located, how the data is stored and who keeps watch over them. All this data decides what we can and cannot do in a city, where we have access, where we can move. All is profoundly political. All levels of this infrastructure are politicized.
Computing is becoming ubiquitous; urban spaces are brought into being that have a computable layer. A critical question posed is: what exactly is new about this? We need to be aware of our history and the continuities and changes in societies and technologies in order to see the real and important developments.
Three starting points are addressed by Graham:
1. We must completely abandon the notion that there is a real and a virtual world, as if the two were opposed. Instead, we must look at how new media are layering over existing spaces, thus reorganizing them. Graham is building on Bolter and Grusin’s notion of remediation: the virtual is constituted on top of our real world. Remediation is taking place constantly – remediation of painting, film and television, of cities, houses and streets. The old notion of holographic pods, parallel worlds, cyberspace, does not exist. We are far from it.
2. Cities can be seen to emerge as fluid machines. We have to look at cities as processes: intense connections, constantly mixing; distant proximity and proximate distance in all sorts of ways. All sorts of flows are present in a city (data, people, services – all is about movement). These flows of energy, water, people, information, goods etc. are all linked and are constantly influencing each other. Seeing cities as processes, we have to think about how new media fit into the process.
3. We must take a look at when and how technology becomes a part of our infrastructure. Everyone is using technology without thinking about it (like electricity). The most profound technologies are those that disappear into daily life (Mark Weiser). Now politics become important, but less visible.
Socially, these technologies become ‘black-boxes’; they become ‘engineers stuff’. So, what is infrastructure precisely? It is embedded, sunk and transparent into daily life. It links times and spaces. We have to learn how to use it. It has to be based on standards. They (technologies of infrastructure) become only visible when they fail. Graham wants to tell three stories about ubiquitous computing and locative media:
1. commercialization and friction-free capitalism
2. militarization/securitisation
3. urban activism and democratization
Is there an ideal friction-free capitalism? Within the control revolution, the commercial world wants to take the internet and fix it down to local geography in order to achieve a data-driven mass customisation. Exploitation of this possibility will occur very soon, based on a database model (like the Amazon recommendation model). Imagine a real-time monitoring of consumers, where all your favorites and bookmarks in physical life are traced and actively drawing your attention constantly. Layering new media onto the city creates a lot of commercial opportunities. Market places are emerging from mobility, where everyone is having a perfectly tailored capitalism.
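The database model mentioned here can be sketched as simple co-occurrence counting over purchase histories – a toy illustration of the idea behind “customers who bought X also bought Y”, certainly not Amazon’s actual algorithm:

```python
from collections import Counter
from itertools import combinations

# Toy purchase histories: each set is one customer's basket.
baskets = [
    {"coffee", "milk", "sugar"},
    {"coffee", "milk"},
    {"coffee", "sugar"},
    {"tea", "milk"},
]

# Count how often each pair of items is bought together.
co_occurrence: Counter = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1

def recommend(item: str, k: int = 3) -> list[str]:
    """Return the items most often bought together with `item`."""
    scores: Counter = Counter()
    for (a, b), n in co_occurrence.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(k)]

print(recommend("coffee"))  # milk and sugar co-occur with coffee most often
```

The point Graham makes is that once such a model is fixed down to local geography, the “baskets” become your movements through the city, and the recommendations become interventions in physical space.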
An example is RFID and logistics. If this is to work, users have to adopt it – Windows, for instance, has an AURA application, barcode readers on phones – an augmented consumption. The aim is to impose markets where these markets were not possible before, more as a commodity than as a public good. One can, for instance, start to commercialize roads (as an example of capitalizing on mobility). This is done by controlling access, building premium options to bypass nasty things (like congestion). Even ads alter dynamically along the way. Summarizing, some basics of the life of the city will be exploited via locative media.
Another example mentioned is the city center of London and the access to it. Lots of infrastructure is needed to make sure access is denied and offenders are fined; mass customization in reverse. It demonstrates the difference in possibilities and politics; they are not defined by the technology, but rather by the politics of dealing with that technology. Even internet traffic is prioritized rather than everyone (and all data) being equal. An example of this is a new ‘smart’ internet envisioned by Cisco, where only prioritized data will be able to travel from computer to computer. Some networks and/or routes will be unavailable for the masses. Another example is call centers – companies realize that congestion is the problem – when you are deemed profitable, you are granted faster access. This is a new politics of technology.
What happens when architecture, new media and RFID meet? Lots of politics and privatized spaces. Location-based services are the first to show these politics. Consumer databases are being used to create ins and outs, haves and have-nots. The geography of cities is now managed by geo-demographics. Info about social networks, crime rates, local governments, recommended neighborhoods and so on is already in use. We will see major social databases influence your choices (when you are literate in this info-world). This underpins a politics of data.
2. militarization/securitisation
Much in this point is around the war on terror, where the city is deemed the problem, with supposed enemies. How can we use our technology? There is panic in addressing risks in western cities within the security world. This is a world of targeting, about locating and targeting enemies: huge recognition and data-mining technologies and biometrics, CCTV and face recognition, even identifying walking styles. It is always about creating an average, in order to pick out the abnormalities. We move towards code-space and software-sorted mobilities. We are already moving into biometric systems. All of this with the argument to limit terror. Lots of commercial gain here.
About CCTV: lots of cameras are privately installed. Security companies and the military are investigating how they can be linked together and computerized. The politics of this are enormous. Think about the anonymity on the streets that would be lost.
Also, the Oyster card, for example, is mentioned, and the misuse of it, typifying the link between commercial and surveillance use. Once the system is in place, it can be used for purposes other than intended. How can you make regulation robust enough to prevent misuse?
As an example, Graham mentions the American army admitting they need a new “Manhattan Project” in order to allow tracking and locating targets in asymmetric urban warfare. This is the point where everything becomes war-space. The American army uses non-arguments to make the city a warfare territory, and for this they need locative media. Lots of these developments are moving into civilian space. One example is the DARPA ‘Combat Zones That See’ project, where concepts of smart cams etc. are introduced. It is a techno-utopian fantasy, but one that is becoming more real every day.
Jordan Crandall talks about tracking and tracing technologies that are trying to capture and colonize the future. A war on statistical persons is emerging. Locative media is constantly looking for the now and thus is constantly ahead of itself. The militarized view collapses identification and turns it into databases.
3. urban activism and democratization
This point is about reanimating and re-politicizing the city. Dealing with the problem of alienation, can we actually bring urban politics back to rigid social and political questions and interaction? Re-appropriating technology is the key. Technologies often start out military, are then commercially exploited; after that, their real or alternative use must be sought. New social performances strive for re-enchantment, a more interactive model of participatory democracy. Graham now quotes from Shivanee: “locative media and the viscosity of space”.
Examples mentioned are tagging the city, like clickable environments, graffiti and physical hyperlinks. Lots of digital collective memory and narratives are emerging in physical space. Also revealing bodily mobilities, e.g. the Urban Tapestries example. The Yellow Arrow project is also mentioned – guerrilla mapping, re-visioning the streets. The main point is that it is all about visualizing the politics of planning in new ways. It is a form of relational architecture where digital interaction can mix with local events. Urban screens are mentioned as a link between internet and real-life city urbanism and space. All these projects attempt to render the network activity visible. This shows a politics of data and geographies of data. We need to reverse-engineer data to understand what is happening and to adjust the politics of this data.
Multiple visions of all sorts are struggling with these new technologies of locative media. There may be some different dynamics, but all are efforts of remediation. Graham argues there is a relationship between these multiple pathways; overlaps are to be found. Is there a healthy co-existence of, for instance, the artist’s and the commercial view? And how will this be shaped? It is about an emerging urban and tech politics. The question of temporality is very important in the process of delegating agency to software. What political and social assumptions go into our software? Making these things visible is very important.