Video art: a different view of new media


Most of you are probably familiar with the term video art, an emerging art movement that fascinates people of different backgrounds, ages and interests worldwide. A brief but concise definition, as given on Wikipedia, is the following: video art is a type of art which relies on moving images and comprises video and/or audio data.

Bill Viola, one of the first video artists, who influenced and inspired the generations of artists that followed, has stated: “When I started in video I was one of two or three dozen video artists in 1970. And now, to paraphrase Andy Warhol, everyone’s a video artist. Video, through your cellphone and camcorder, has become a form of speech and speech is not James Joyce. It’s great and to be celebrated, but it has to find its own level.”

To go further into this emerging form of art, it should be emphasised that video art is not a threat to traditional arts. On the contrary, it is a new means of expression that gives artists, but also amateurs, the opportunity to communicate with each other and create artwork using new technologies and new media. It is, thus, an evolving form of art that keeps up with changes in the social and technological environment. Art, like almost every aspect of our daily life, is becoming international, accessible to everyone and vulnerable to criticism.
The most promising and well-organised new media festival in Athens is indisputably the Athens Video Art Festival, held every spring since 2005. Artists from all over the world take part in it, exhibiting their digital art and new media works.

Athens Video Art Festival was founded, as mentioned above, in 2005 in order to reinforce this evolving cultural field and incorporate digital arts and new media into current Greek culture. This significant effort was an initiative of a non-profit organisation named Multitrab Productions, mainly focused on AVAF.

At the beginning, the festival consisted merely of video art projects, but in subsequent editions it was enriched with installations/video art installations, performance art, web art, animation and digital imagery; last year it added workshops and roundtables, free to everyone, to its structure.

Incidentally, it is worth mentioning that in 2005 the number of attendees was 1,000, and by 2010 it had, amazingly, increased to 13,000!

In order to foster creativity and free communication across borders, AVAF collaborates with educational institutions in Greece, as well as with foreign festivals of similar content and international organisations.

I had the opportunity to attend the latest edition of the festival, held in a multi-space venue in the historic centre of Athens called Booze Cooperativa (unfortunately, its website is under construction at the moment). The festival lasted three days and featured about 300 artworks from over 58 different countries. The main concept/title under which the 8th edition was held was Visualize Athens: as written in the press release, an effort to revive the historic and commercial centre of the city of Athens and start by visualizing the city through perspectives that offer an impressive and multidimensional program to the public.

Unfortunately, I was there only on the last day of the festival, but I watched several short films, installations and live acts. Here is a YouTube video by the artist Monomome, from his project Lost in the Woods, made especially for AVAF.



One of the short films, really interesting and beautiful in my opinion, is the following one by cinematographer Giorgos Galanopoulos, portraying the formation of the island of Santorini through the ages. It could be considered another effort to incorporate new media technologies into tradition or, more probably, to see the tradition and history of Greece through the lens of new media art.



Apart from the projects and installations, however, the new striking element of the festival was the set of workshops for those interested in the innovative and enchanting area of new media technology. Participants could choose between a variety of themes such as mobile application design (a really appealing subject), animation/video gaming and sonic sights for the festival. Personally, I found the workshop held by John Richards, the founder of Dirty Electronics, really fascinating and worth sharing here. But first, I should explain what exactly Dirty Electronics is: a large group devoted to DIY musical instruments, active since 2003. The goal of this group is to promote social interaction and communication through the collaborative design and building of an original musical instrument.

In the following video, a musical instrument especially designed for the 2012 festival is presented and described: the Dirty Electronics 7-segment display, a name given to it due to its technical construction.


In this workshop, participants had the chance to interact with each other and, under the instruction of John Richards, to build the hand-held device and produce some music with their creations.

To conclude, and to explain why I chose to write about this topic, I have to share that, apart from the festival’s artistic interest, I consider this effort really important because new ideas and creativity are promoted and rewarded in an open environment. Artists, professionals and also students oriented towards the digitization of art exhibit their work to a broad public and introduce them to this new, fast-developing genre of art. Finally, I believe that another aspect of new media is revealed here: that of social and cultural exchange between people, in contrast to the isolation and antisocial behaviour new media are often accused of. If you are interested in more video art works, visit:





This is the question I am still delving into since I joined the startup project TimeBank Romania about 9 months ago.

What is TimeBank Romania?

Imagine an online platform where users can exchange knowledge and skills. They do so not by means of money but of time, a more affordable currency, especially for a target group of 18-26-year-olds, though not exclusively.

The user scenario:

If you would like to learn anything from public speaking to HTML skills, you can easily create a personal account on the platform. You can then search for users who offer the specific skills you want to learn, connect with them and meet either offline or online in order to be taught. Your teacher will hold the lesson for an agreed number of hours. For the hours offered to you, he or she will then be entitled to learn a subject of their desire from another member of the community. For the hour credit you have just received, you are responsible for teaching someone else a skill or topic you are good at. Giving back to the community is what then allows you to take lessons again.

In order to make skills and knowledge searchable in the system, your online profile needs to show:

1. a list of skills or knowledge in which you are an expert or simply a passionate amateur;

2. a list of things you wish to learn from others.

The quality of the exchange is judged by both teacher and student (public feedback, reviews, grading), allowing users to stand out or be excluded from the community.
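The hour-for-hour credit mechanics described in the user scenario above can be sketched in a few lines of code. This is purely illustrative: the names (`TimeBank`, `record_lesson`, `balance`) are invented here, and the real platform's implementation is not public.

```python
# Hypothetical sketch of TimeBank's hour-credit ledger, assuming the
# 1 hour == 1 hour parity described in the post. Names are invented.

class TimeBank:
    def __init__(self):
        self.balances = {}  # user -> hour credits

    def balance(self, user):
        return self.balances.get(user, 0)

    def record_lesson(self, teacher, student, hours):
        """A taught hour credits the teacher and debits the student,
        regardless of the skill being taught."""
        if hours <= 0:
            raise ValueError("hours must be positive")
        self.balances[teacher] = self.balance(teacher) + hours
        self.balances[student] = self.balance(student) - hours

bank = TimeBank()
bank.record_lesson("ana", "radu", 2)   # Ana teaches Radu HTML for 2 hours
bank.record_lesson("radu", "ana", 1)   # Radu teaches Ana public speaking for 1 hour
print(bank.balance("ana"))   # 1
print(bank.balance("radu"))  # -1
```

A negative balance simply means a member owes teaching hours back to the community, which is exactly the "giving back" obligation the scenario describes.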


The insight here is that many young people want to learn interactively and establish social connections with others. The fact that they spend a lot of time online and are internet savvy makes a platform like TimeBank Romania a good approach to offering just that. Being online also makes it scalable – it is important that the community grows so that the market of knowledge and skills diversifies.

When I first joined this project, I knew little about alternative currencies (or local, virtual, electronic, creative ones, for that matter) or the particularities of a (knowledge) market on a virtual platform. But TimeBank Romania happens to be a startup – a social entrepreneurship one. Together with a team of 10 like-minded friends, we hope to launch soon. The functionality of this platform therefore brings up complex issues for debate that need answers. I will discuss two of these major challenges (as well as how they might be approached):

“So you pay with hours and that’s it?” & the complexity behind this question

Time-based currency is not new and has actually been in use in real, offline communities for a few decades now. Global Transition to a New Economy lists them all, most based in the US or UK. Timebanking (using time banknotes) in most of these communities is used to supplement the traditional economy in social services, to support local businesses and, indirectly, to help bond community members. The system usually benefits from the authorities’ agreement on its legality and only applies locally.

Most of these communities use time banknotes with an equivalence in dollars (10 time banknotes = 10 $). When you are exchanging actual goods or services, this makes sense. Online, some communities have developed their own exclusive virtual currencies, which are nevertheless calculated from a combination of real exchange rates. This is the case of HUB Culture, which gathers private companies and entrepreneurs in a wide range of fields. Its members use the Ven currency (“a currency priced from a basket of currencies, commodities and carbon”) to exchange expertise among themselves, but also to buy real products or services.


Does a real market model work for online knowledge exchange markets like TBR? Any link with the real price market would imply using time as a sort of conventional banknote. Not only is this difficult (how can you price skills and knowledge that don’t exist on a market? I have no idea how much a kite-making lesson would cost me), but it distorts the concept of time as currency. The idea here, I believe, is that while in a real market an hour of Arabic calligraphy could be more expensive than an hour of English (a superiority in terms of value), this becomes irrelevant for an online community like TBR. In the latter, you cannot put different prices on the desire to learn. All users are on this platform to be taught things they are keen on, and will in return teach things they are passionate about, which makes the parity of 1 hour = 1 hour for any skill a much more viable and valuable option. This opens an interesting perspective on whether online exchange communities can actually use alternative/digital/creative currencies exclusively and follow new, online markets based on variables other than just demand and supply.

Would you, as a user, trust the platform and its community?

I see trust and quality as essential to the very existence of the community; the two are strongly intertwined. Creating the right environment to build trust and a high-quality exchange of skills and knowledge is a responsibility that needs to be taken up by us as the developing project team, but also by users. For our part, we are investing a lot of thought in creating a smooth and fair public feedback, review and grading system. However, the aim of the platform is to leave a lot of freedom to the community to manage itself, simply because we cannot control the actual quality of the exchange, nor the actual feedback. This being said, some questions arise. Some target the matter of trust towards the platform itself. At least initially, this can be built with good online and offline PR. Pitching online opinion leaders and bloggers creates user interest and more receptivity. Correlating the official site launch with an offline event would be a chance to have people meet the team behind it.

Other questions relate to creating trust within the community itself: would it be more insightful to have both teachers and their “students” provide mutual feedback? To what degree should the developing team act as an online manager of the community? How could we get users to provide accurate feedback and still take responsibility for their words, or would anonymity encourage more truthful opinions? Although I am a supporter of transparency, I think some issues depend greatly on how the community evolves. If I predict correctly, it will grow organically from our own network of friends (the intention is to have 100 users – friends and those who have shown interest in the project so far – test it before the launch), with this circle then widening. This will help keep both identities and interaction transparent.

Other challenges include creating an interactive environment to keep users on the platform, achieving sustainability and working out the best business plan. I have only paused on the two issues that relate to this blog’s theme.

Would you join a timebank for exchanging knowledge and skills? Why? What challenges do you foresee?


You have probably already seen them, and maybe you have not even noticed them. Some people describe them as ‘trash’, ‘gosh awful bad’ or ‘YouTube spam’, while others might describe them as ‘(amateur) art’. A more neutral way to describe them is: mash-up or remix videos. While opinions about the quality of these audiovisual objects may differ, analyzed from a cultural perspective these mashed-up pieces of visual and auditory culture show an interesting change in the consumption and (amateur) production of audiovisual media objects.

The mash-up video, and the mash-up trailer in particular, is a relatively new phenomenon within audiovisual culture. It is a type of video in which different types of media objects are brought together, reconstructed and remixed into one ‘new’ media object. The following will focus on the ‘relationship’ between mash-up trailers and YouTube.

Mash-up videos

There is no official name for the mash-up video (yet). Names often used for these types of media objects are remix, re-edit, mix-up and recut. A mash-up video is a video composed of various elements derived from various audiovisual media. These elements can be movie fragments, fragments from television shows, fragments taken from radio, sound snippets, images, trailers, music, et cetera. In general, mash-up videos are characterized by the fact that they contain no (or hardly any) elements created by the makers/users themselves.

The creator of the video, its author if you will, is the person (or group) who composes and uploads the audiovisual material. He or she is not the author of the original material being used, but the author/director of his or her own work of compilation. These (amateur) creators do not own the copyrights of the products they use. The creators are assemblers of copyrighted material and create their own versions/media objects by using the products of culture. In Remix, Lawrence Lessig calls these cultural products the tokens of our culture: ‘The images or sounds are taken from the tokens of our culture, whether digital or analog’ (74). Recycling or reusing existing tokens of culture to create new ones is the basis of remixing.

Three types of mash-up videos are particularly present on YouTube. The most common kind is the music mash-up, a type of mash-up in which music (and video clips) is remixed or re-edited. An example of the music mash-up is the work of Hugo Leclercq, an electronic musician also known as Madeon.

The second type is the mash-up in which current events are used. Political satire or political parodies are the best-known kinds of these actuality mash-ups. Examples of this second type are the mash-ups in the Read My Lips series, made by Johan Söderberg, and the political mash-ups made by Sander van Pavert in the LuckyTV series. In both cases music fragments, film fragments or statements are combined with archive material from news and current affairs programs. Through the combination of varying sets of media elements, new interpretations of existing material are created.

The third type is the mash-up trailer. Mash-up trailers are made specifically out of elements derived from the medium of film. By editing footage from (original) movies and/or trailers, ‘new’ trailers are created. It is the cultural product film that makes this type of mash-up stand out. An example of the latter is Titanic: Two The Surface (sequel): a mash-up trailer of 4 minutes and 30 seconds composed out of 23 movies.

After seeing (maybe your first) mash-up trailer, you might not be impressed (or ever be). So what makes mash-up trailers special or stand out, and what has YouTube got to do with it?

The multimedia platform named YouTube 

Following are some basic characteristics of the media platform YouTube. These features play a key role in the production, consumption and exchange of mash-up videos. One might argue that the mash-up trailer derives its existence from the multimedia platform named YouTube.

To begin with, YouTube is a website where anyone with access to the Internet can place or watch content. The content is placed on the website by its users and is therefore referred to as user-generated content. This user-generated content may consist of photos, videos, music and/or text. From this point of view, YouTube can be described as a multimedia platform that supports a wide variety of media objects (Kavoori, Reading YouTube: The Critical Viewers Guide).

Secondly, YouTube’s internal programming, its web architecture, is based on a complementary relationship between data, data structures and algorithms. The user-generated content forms the data on which the database is built. All these data are connected and function through a certain set of formulas/algorithms, made visible through the interface. The interface provides access to the underlying database (Manovich, The Language of New Media). In this way, based on its structure and functionality, YouTube can be described as an archive/database consisting of user-generated content.

The platform has been designed in such a way that the data generated by the users, the audiovisual objects, can be supplemented with descriptions and tags. The users decide what the search terms, the tag clouds, are. The process of data indexing, the hierarchical classification of an archive/database, is an action performed by the users themselves. The users take part in the expansion and indexing of the database. Therefore YouTube is also a participatory medium.
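As a toy illustration of this user-driven indexing, the sketch below shows how uploader-supplied tags can become the search terms of a database. The names (`VideoIndex`, `upload`, `search`) are invented for the example and have nothing to do with YouTube's actual architecture.

```python
# Minimal sketch of a tag index built entirely from user-supplied tags,
# assuming hypothetical names; not YouTube's real implementation.

from collections import defaultdict

class VideoIndex:
    def __init__(self):
        self.tag_index = defaultdict(set)  # tag -> set of video ids

    def upload(self, video_id, tags):
        # The uploader, not the platform, decides which tags describe a video.
        for tag in tags:
            self.tag_index[tag.lower()].add(video_id)

    def search(self, tag):
        return sorted(self.tag_index.get(tag.lower(), set()))

index = VideoIndex()
index.upload("v1", ["mash-up", "trailer"])
index.upload("v2", ["mash-up", "music"])
print(index.search("mash-up"))  # ['v1', 'v2']
```

The point of the sketch is that the classification scheme emerges from the users' own vocabulary: whatever they tag is what becomes searchable.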

To end this YouTube introduction, the social features of the medium should be addressed. The medium shows forum-like qualities. Users can create their own account(s). Using these accounts, they can comment on videos or communicate with each other through comments. Users can send messages to and receive messages from other account holders, or subscribe to YouTube channels and accounts. These options, to participate in a digital community and perform social and communicative actions, make this medium a social medium. Or, as Tim O’Reilly famously termed the participatory and social properties of digital media on the internet: Web 2.0 (What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software).

One of Web 2.0’s key features is what O’Reilly calls its ‘remixability’. It is this remixability, the ability to write and rewrite, that comes to the fore in mash-up videos and in the functionality of YouTube. The mash-up, or remixing itself, is now even facilitated and stimulated by YouTube. In June 2010 YouTube introduced an experimental version of its browser-based Video Editor: an application that allowed users to edit videos on the website. When first introduced, the YouTube Video Editor was part of TestTube: ‘This is where YouTube engineers and developers test out recipes and concoctions that aren’t quite fully baked and invite you to tell us how they’re coming along.’

Since January 2012, however, the YouTube Video Editor has been officially launched. It allows users to carry out numerous editing processes online without the need to purchase, download or install software on their computers. Editing your own material, or material from the archive, is an integrated feature of the platform. By adding a video editor to its feature set, YouTube is making a (big) step towards the remix community.

The YouTube Video Editor:

Though anyone with an account can post videos, it must be noted that YouTube is owned by Google, a limited liability company. Any data present on the website can and will be deleted if considered necessary.

Read/Write, Read/Only culture and media convergence

This final part will focus on film and its transition onto digital media. In Remix, Lessig describes two kinds of culture, which he calls Read/Write (RW) culture and Read/Only (RO) culture. Lessig explains the names as follows: ‘In the language of today’s computer geeks, we could call the culture […] “Read/Write” (“RW”) culture: The analogy is to the permissions that might attach to a particular file on a computer. If the user has “RW” permissions, then he is allowed to both read the file and make changes to it. If he has “Read/Only” permissions, he is allowed only to read the file’ (28).

In RO culture, consumers/users of cultural products are not able to edit or (re)produce these cultural products. In contrast, RW culture is characterized by the possibility for consumers/users to edit, (re)create and (re)produce the products of culture with the same ‘tools’ used by professionals. The remix, in this case the mash-up video/trailer, is a product of this cultural form. Or, as Lessig puts it: ‘Remix is an essential act of RW creativity. It is the expression of a freedom to take […] and create […]’ (56).

The transition from RO to RW culture starts with the cultural products of RO culture, the tokens of RO culture. For almost the entire twentieth century these tokens were analog products. Film, for almost its entire existence, has been based on analog technology. The production and distribution of these products took place with the aid of machinery and analog technologies. According to Lessig, these analog products shared two important limitations: ‘[…] first, any (consumer-generated) copy was inferior to the original; and second, the technologies to enable a consumer to copy an RO token were extremely rare’ (37). Mainly because of the technologies, the studio systems and the specific knowledge and skills required, the production of film was accessible only to the professional sector. The media objects that the production companies produced could only be consumed. In this case, consumption means that consumers use the media object in the way its producers intended. Using cultural products in a different way was (almost) impossible. With the advent of digital media, and the convergence of media, the boundary between the (professional) production and the consumption of cultural products has strongly faded.

In Convergence Culture, Henry Jenkins postulates what the consequences might be when old and new media collide. He states the following about media convergence: ‘Media convergence is more than simply a technological shift. Convergence alters the relationship between existing technologies, industries, markets, genres, and audiences. Convergence alters the logic by which media industries operate and by which media consumers process news and entertainment’ (16).

The mash-up trailer is a media object that derives from these changes. Film is a cultural token that has been part of RO culture. New technologies and digital media have made the digitalization of film possible. Through media convergence, the properties, but also the content, of other media can be acquired by a digital medium. YouTube is such a medium, and film is one of the many types of media it has acquired. These transitions have contributed to the fact that film is now part of RW culture.

The legal, economic and cultural constructions built on the basis of the (previous) analog media have been, and are, changing because of the qualities and possibilities that lie within digital media. This results in a new situation and a new way in which media products are consumed and reproduced by amateur producers. Remixing is part of a fast-growing digital community on YouTube. And judging by the recently added editing functions, the platform seems to support this community.

In the years to come, it will be interesting to see how professional producers and consumers/amateur producers will deal with these changes, and what role YouTube will play during these transitions.


Center for Social Media. Center for Social Media. American University School of Communication, 2008.

Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006.

Kavoori, Anandam. Reading YouTube: The Critical Viewers Guide. New York: 2011.

Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin Press, 2008.

LuckyTVMedia. YouTube. 9 May 2011.

Manovich, Lev. The Language of New Media. Cambridge: The MIT Press, 2001.

O’Reilly, Tim. “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.” O’Reilly. 30 September 2005. O’Reilly Media, Inc. 4 April 2012.

Soderbergtv. YouTube. 24 September 2007. March 2012.

YouTube. Google, Limited Liability Company, 2005.


Recently, the Dutch software firm Layar introduced Stiktu, an application that lets users get creative with augmented reality. The app runs on iPhone and Android platforms and aims to be the next big thing in social media, blurring the boundaries between the social, the real and the virtual. Although the app has only been available worldwide since June, its concept of painting augmented spaces with digital graffiti is catching on rapidly. Moreover, its creators are trying to realize an admirable ideal: “With Stiktu we really put the power of augmented reality in the hands of the people.”

The functionality of Stiktu is simple, yet powerful, as it transforms our possibilities with augmented reality and enhances our world. In a nutshell, Stiktu can be described as follows: scan, edit, publish.

“So what? User-generated content isn’t anything new, yadda yadda yadda.” Maybe so, but the combination with augmented reality is trailblazing. Let’s have a closer look.

With Stiktu you can scan any object in the real world – a building, a magazine or anything else – with your smartphone camera. What follows is an editor that allows for a personal touch: add your own drawings, text or stickers to the object. When you are done and satisfied with your creation, hit publish and share your remix with the world.


Now this is where it gets interesting. Let’s say, for example, you are done remixing that box of cereal you love or that famous statue in your city. When someone else scans the same object with their camera, they get to see your creation in real time. No matter the angle of the camera, your creation remains in the exact same position on the object. Stiktu uses Layar’s Vision technology, which links any scan to its existing database, so you don’t have to scan your object under the exact same circumstances as the original for it to detect a match. Pretty cool, right? But it gets even better, because Stiktu is not just a nifty camera app. By allowing your creations to be shared with the rest of the world, Stiktu is building a whole new social network where users can view, comment on and like each other’s work. More importantly, when you scan an object that has already been remixed by multiple users, you get to see all the previous works by simply hitting the arrow buttons on your screen to browse back and forth.
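To give a rough sense of how a scan might be linked to a stored object, here is a toy fingerprint-matching sketch. Layar's actual Vision technology is far more sophisticated (robust feature detection rather than pixel hashes), and every name and number below is invented for illustration only.

```python
# Toy illustration of matching a new scan against a database of known
# objects: each reference image is reduced to a coarse fingerprint, and
# the closest fingerprint wins if it is close enough. Not Layar's method.

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def match(scan, database, max_distance=2):
    """Return the id of the closest stored object, or None if too far."""
    scan_hash = average_hash(scan)
    obj_id, fp = min(database.items(),
                     key=lambda item: hamming(scan_hash, item[1]))
    return obj_id if hamming(scan_hash, fp) <= max_distance else None

# Reference "images" (flattened grayscale values) already in the database.
db = {
    "cereal_box": average_hash([10, 200, 30, 220, 15, 210, 25, 205]),
    "statue":     average_hash([90, 80, 100, 85, 95, 110, 70, 105]),
}

# A new scan of the cereal box under slightly different lighting still
# produces the same bit pattern, so the match succeeds.
scan = [20, 190, 35, 230, 10, 200, 30, 215]
print(match(scan, db))  # cereal_box
```

The hash being invariant to uniform brightness changes is what lets the second scan, taken "under different circumstances", still hit the stored entry.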

Like many augmented reality applications, Stiktu tries to bridge the gap between the virtual and the physical. Whereas virtual reality used to be a characteristic inherent to computerized worlds only, it is now being domesticated into our daily environments. ((Manovich, Lev. 2006. The Poetics of Urban Media Surfaces. First Monday, 11, 2, <>)) In general, Stiktu is the paragon of an ongoing socialization and meme-like transformation of augmented reality. Most early manifestations of personalized AR served commercial interests or focused on specific purposes. ((Shaviro, Steven. 2007. Money for Nothing: Virtual Worlds and Virtual Economies. unpublished ms., p. 3-4)) The information they provided came from large institutions like Google (Maps) and Wikipedia, and users were mainly readers, not writers. Basically, these corporations controlled what you see and don’t see.


With Stiktu this has changed. It provides the ability to add any information onto the digital meta-layer of reality. Thus, former readers become active writers within a collaborative participation process that generates a new view of reality. As with web 2.0 tendencies on the Internet itself, Stiktu increases the individual’s capacity for contributing to this virtual physicality, making him or her part of the design processes that shape our augmented world.

Stiktu is not so much an information-based application as a creative platform that people can use to interact with each other. It is comparable to interactive storytelling, in which the stories of individuals are assembled and layered according to particular geolocations. What is remarkable about Stiktu’s content is that most of it seems to be based on meme-like creations – e.g. funny images with catchy captions – familiar from websites like 4chan and 9gag. In a sense, the cultural base of the Internet meme is now being projected onto physical reality by Stiktu.


Because of its user-generated nature, the democratizing elements of the application sound appealing overall, but keep in mind that applications like Stiktu can have detrimental consequences for our perceptions and personal relations as well. For instance, the democratization of virtual information in physical spaces can lead to a cluttered information overload. Also, if the app were to apply face-recognition software vigorously, this could result in privacy violations. The notion of the ‘databody’ becomes much more real when individuals can be traced by their appearance and when digital information is attached to their physiognomy by other users against their will.

In the end, Stiktu is part of a tendency in which user-generated forms of augmented reality emerge. As it brings together the virtual and the physical, it is also extending online social networks into new physical spaces. Where this development will take us is uncertain, but there is no doubt that our traditional notion of the physical environment is being radically altered as a result of new media applications like Stiktu. Although Stiktu’s concept is a step in the right direction towards combining augmented reality with the user-generated content that defines our current Web, it makes you wonder to what extent augmented reality will construct our perceptions in the future. Will the layers of augmented reality become so indispensable that we can no longer perceive the world to its full potential without them? In fact, will augmented reality become our new reality?

P.S: For those of you who wish to try Stiktu, there is an easter-egg hidden somewhere on this page. See if you can find it ;)

Authors: Robert Silvis & Stijn van Wonderen

What does it take to launch a successful media outlet? Probably a good deal of start-up capital, a team of editors, a business plan and some advertising clients. Cairo’s Rassd News Network started with two things: a Facebook page and a revolution.

(Text: Jules Mataly, Mathias Schuh)

It takes a lot of scrolling to find the first status update of Cairo’s Rassd News Network (RNN), one of the biggest citizen journalism projects of today. Dozens of links, photos and updates are published on the project’s Facebook page every day, and tens of thousands since the page was launched just over a year and a half ago. Back then, in the turbulent days of January 2011, RNN’s mission was clear: providing alternative media coverage from Tahrir Square and elsewhere.

The Arab Spring is one of the most prominent success stories of how social media can help to gather people, inform both local activists and journalists worldwide, and bypass traditional media and ways of communication that could be either controlled by governments or were not efficient (on this note, we invite you to take a look at this beautifully animated Arab Spring timeline by The Guardian).

A screenshot of the RNN Facebook wall during the 2011 Cairo uprising

Two years earlier, the protests in Iran had already shown the political importance of social networks: harder to control and faster than any other ways of communication, social networks quickly became allies of the protesters. The 2011 uprisings however also tremendously changed the media landscape, explains RNN co-founder Abdullah Al-Fakharany in an email: “The aim has been to create an alternative form of journalism covering events subject to state censorship, or the self-censorship of established media, that are prevalent in the Middle East”.

The story

A quick historical reminder: On June 6, 2010, Khaled Saeed is beaten to death by two police officers in Alexandria. A disturbing photo of his corpse goes viral and is eventually seen by Wael Ghonim, who sets up the Facebook page We are all Khaled Said, which gradually becomes a platform of protest.

On January 17th, 2011, an Egyptian man sets fire to himself outside an administration building, emulating Mohamed Bouazizi‘s self-immolation in Tunisia and triggering a similar chain of events. On January 25th, a long series of protests starts in Cairo’s streets, aiming – and eventually managing – to remove power from Mubarak’s hands.

The network

RNN’s predecessor was a Facebook page dedicated to monitoring the 2010 parliamentary elections in November and December, which were marred by suspicions of fraud. “Egyptian media largely failed to report the widespread fraud and intimidation that characterized the elections”, explains Abdullah. During the elections, the platform received up to 700 contributions a day from citizens, about 400 of which were published on Facebook. The core team grew to 30 editors.

By sunset on 25 January 2011, the platform launches a new page: Rassd News Network, Rassd being a blend of the Arabic words Rakeb, Sawwar and Dawwan (Observe, Photograph, Blog). Its first update: a reference to the Khaled Said page, with which RNN forms a vital cooperation: “Rassd was endorsed as the “official” news-source for the online community of activists and concerned youths who were deeply involved in the momentous events rocking Egypt”, explains Abdullah.

“RNN functions on the basis of a vast network of volunteering reporters, and a small core of volunteering editorial staff. Besides RNN’s volunteering reporters, members of the public are encouraged to send in text messages, pictures and videos documenting events they witness”, says Abdullah Fakharany. A staff of almost 200 volunteers is in charge of checking, formatting and publishing the news they receive. In the 18 days following January 25th, RNN received an average of 6,500 reports a day and published 4,000 of them, attracting an average of 40,000 new followers every day.

The network keeps growing, and can be a bit confusing at first glance. It went global, with pages emerging for Morocco, Algeria and Turkey. These networks are still active today, and even though their role has changed and protests have declined, RNN still shows passion and dedication to citizen journalism, encouraging its users to contribute:

The question of bias

Compared to traditional media, the objectivity of citizen journalism initiatives like RNN is certainly debatable. Rassd News has been accused of partiality and inaccuracy in its reporting from the uprisings. But how much objectivity can be demanded of a medium that is essentially part of the revolution it reports on?

For journalist and blogger Jillian C. York, the benefits of citizen journalism outweigh the risk of false or biased information, especially in a context like the Arab Spring: after all, argues York, the young, tech-savvy Egyptians probably know their country better than the foreign correspondents of global media networks. And while some international reporters covered the events at Tahrir Square from their hotel rooms, Cairo’s civic journalists blogged, taped and tweeted from the very centre of the clashes.

Is civic journalism, then, a better alternative to or even a replacement for traditional media outlets? Quite the contrary, argues Elizabeth Iskander in a journal article on the role of Facebook in Egypt’s 2011 uprising. Only in close collaboration and exchange with traditional media can the full potential of Egypt’s grassroots journalism unfold: “If social media are to continue to play a role other than acting as a separate communicative space, the flow of communication between different forms of media and between the different “audiences” within Egypt is crucial.” 1

Whatever the future relation of traditional and alternative media will look like, the success of Rassd News Network shows that web-based civic journalism is not just a temporary phenomenon in turbulent times, but a real “empowering tool for ordinary citizens”, as Abdullah puts it: “Rassd News Network continues to grow thanks to the passion, dedication and self-reliance of a new generation, the strong civic spirit and the desire for truth that animates it.”


1 Iskander, Elizabeth (2011). “Connecting the national and the virtual: can Facebook activism remain relevant after Egypt’s January 25 uprising?” International Journal of Communication 5, pp. 13–15.


Postagram postcard

When was the last time you sent someone a postcard? And not one of those e-cards that go directly to people’s spam folders, but a good old-fashioned genuine postcard, brought to the door by the postman. In the age of speed-reading and skimming through loads of wall posts and search results, nobody slows down to put pen to paper and write a few sincere thoughts to his or her loved ones. There seems to be no need anymore to send a colorful postcard with a few lines written on it when you’re away on holiday. Why would you brag about the exotic place you’re visiting with only one photo, when you can upload hundreds of pictures to Facebook albums?

Well, the answer probably lies in exactly this clutter: the joyful, never-ending stream of online pictures, e-cards and virtually expressed thoughts. People still love to receive things through their snail mail. That’s right, SNAIL mail, not instant mail. When someone takes the time to actually buy a postcard, write down a short message (which – and this can bring a little panic to the sender – cannot be deleted and rewritten), stamp it and send it by mail, that can be a really nice gesture.

This brings us to the phone app that we’d like to introduce to you today. It’s called Postagram, and its name is probably description enough for the everyday digital addict. It’s an iPhone and Android app related to Instagram (hence the “gram” in the name), but it also works with your phone’s photo library and Facebook albums.

The way Postagram works is this: using a photo of your choosing, the app lets you easily send high-quality photo postcards to friends and family anywhere in the world. The postcards are printed on real, thick, glossy paper at a high resolution of 300 DPI. What makes this app really good, once again, is the possibility of sending postcards from anywhere to everywhere in the world. Initially, sending a Postagram was available only for US destinations (for a measly 99 cents per send), but since March 2012 you can put a smile on somebody’s face even if that person lives in Europe, as Postagrams are delivered to the Old Continent as well. You can find the list of European locations here; the cost is $1.99 per send.


Sincerely, the start-up behind Postagram, is responsible for a number of other cool and well-reviewed apps, such as Pop-Booth, Sincerely Ink and Dotti. Their future plans include a QR-code stamp printed on the postcards that gives the recipient a digital copy of the photo on the card, adds the sender’s address to the recipient’s Sincerely address book, and alerts the sender when the card has been viewed. And this brings us to one of humanity’s biggest problems with postcard sending: finding the address of the recipient. Especially nowadays, when nobody knows house addresses, only e-mail addresses. Sincerely’s solution to this inconvenience is to search your phone’s address book and populate your Sincerely address book with all the other Sincerely users you’re connected to. Their next move is integrating Facebook contacts.

Why would you choose Postagram over regular postcards? This one is simple: a Postagram can be just as cheap as (or even cheaper than) buying, stamping and sending a regular postcard (the cost of which depends, of course, on its quality), but the real advantage is the truly personalized character of a Postagram. Just imagine your grandmother’s joy at seeing you waving and smiling next to the Eiffel Tower on a Postagram, rather than receiving the standard postcard that all of her neighbours might have got from their grandsons. Also, if you’re worried about your postcard reaching its final destination, Postagram has added post office tracking: senders receive an alert when their postcard reaches the initial post office and when it finally reaches the recipient.

Here’s a nice video of how to use Postagram:

Moreover, the people from Sincerely have put up a Postagram blog, which is very useful in terms of tips and tricks of how to create your perfect picture using digital tools and other tutorials.

And, finally, this is where you can download your Postagram app for iPhone and Android for free. Our hope is that you will send at least one nice thought out today to someone you care about, on a nice, real piece of paper. Because, as Postagram says, “reality is awesome”!

Photo credit: Postagram blog

By Alexandru Manole & Diana Necula

Who doesn’t love social media? Facebook has recently celebrated more than 1 billion active users (becoming one of the biggest “countries” in the world), Twitter has the power to put politicians on the throne or throw them in the gutter, people happily (actually, quite furiously) comment on forums, and big corporations spend top dollar on advertising campaigns that include social ads, direct likes, sponsored stories, paid tweets and so on. Well, I can tell you the names of at least two huge companies that recently learned how social media can backfire and destroy part of their carefully constructed digital image. Feast your eyes on two case studies of the recent adventures of Shell (the oil company) and Nestle (the food and beverage giant) in the wonderful realm of new media.

Let’s talk about the more serious case first. And when I say serious, I’m only talking about the actual subject of the campaign, as the final results were quite dramatic in both cases described in this article. As the world’s oil resources grow scarcer every year, the companies that exploit them have to drill where no other human being has ever drilled before. Fearing that its reserves in Nigeria will soon run out, Shell decided to move operations to the Arctic. It got its marketing teams together and, with the help of ad agencies, started to explain to the world how the Arctic operation was going to happen and, of course, how safe and beneficial for everybody it would all be.

Enter Greenpeace. The environmental organization got on the case immediately and started an extensive hoax campaign to counter Shell’s real one. They built a regular digital strategy that included a platform, social media, video content and gaming. The Arctic Ready website looked exactly as if it had been built by Shell; it had all the visual elements of the company’s official homepage. But the message is right there, in your face, in the form of the content. Greenpeace invited people around the world to express themselves creatively and let Shell know what they think. Just a few entries:

The hoax campaign included a Youtube video that mocked the way oil companies deal with environmental catastrophes:

The icing on the cake (pun intended) is a fake game called Angry Bergs (as in icebergs) that lets you destroy the Arctic and make money in just seconds. Word was also put out on Twitter. In the end, the whole shenanigan brought a lot of attention to Shell’s intention of exploiting the Arctic. Despite the social media buzz surrounding the fake campaign (and, thus, the awareness of the supposedly evil intentions of the oil corporation), Shell decided not to sue Greenpeace in revenge; they said they were just going to mind their own business. Nevertheless, the Arctic is safe for now, as drilling has been postponed in wait of clearer waters.

Now, for the fun part. Nestle is one of the world’s biggest spenders on digital advertising. It is also a company renowned for its food products, present in one’s life cycle from birth to death. The Swiss company struck a global deal with Facebook, agreeing to put a sizable chunk of its advertising budget into Zuckerberg’s website, but it did so after learning its social media lessons in a drastic way. You’ve probably heard of the 2010 scandal in which Nestle’s Kit Kat was accused of purchasing palm oil from companies that are destroying the rainforest. It has been called one of the biggest disasters in the history of PR. In a nutshell, Greenpeace organized a digital mob and a riot against Nestle, and the food corporation’s Facebook fan page was flooded with angry commenters. People actually changed their profile pictures to a logo that said “Nestle Killer”. This was the website put together by Greenpeace as the center point of the revolt. It all resulted in Nestle’s decision to stop working with the suppliers accused of destroying the rainforest.

But, this is all gone and buried now. In the month of July 2012 Nestle decided to take their first Instagram picture and upload it on Facebook. This is what it looked like:

Now, for those of you who are not familiar with the practices of online communities, let me introduce you to the Pedobear. As Wikipedia describes it, Pedobear is an Internet meme, a symbol for pedophiles, something like their mascot. Let’s compare pictures of the Pedobear and of Nestle’s mascot, side by side:

They’re quite similar, right? That is what the whole world believed, actually. Following Nestle’s enthusiastic message “Drum roll please… Kit Kat is on Instagram”, which accompanied the picture of the dreaded character, digital citizens around the globe once again flocked to the fan page and flooded it with comments (only this time they replaced threats with jokes). And, once again, Nestle had to admit that the use of social media can leave a sour taste in one’s mouth. They pulled the picture from Facebook the same day it was published.

What is the lesson to be learned here? I’ll leave this to the corporate marketing departments, PR firms and media agencies. They probably already know that today all media is social media. People have voices that can be heard whenever the need arises, and they’re not always saying nice things. Furthermore, due diligence means knowing what the Pedobear is and why associating your brand with it will make all your potential clients laugh their socks off. I’m simply delighted to note, once more, the power of social media.




The Big Think Logo

Archiving online can be a tricky undertaking. On the one hand, we can’t get enough of it when it comes to news, scholarly articles or any other form of factual information; on the other hand, there seems to be too much of it going on where personal or private data and irrelevant babble are concerned. In this text I will explore the topic of web archiving as an integral part of website design, using the example of “big think” and their partial front-page archiving function. By comparing this way of creating a visual interface for archiving to other websites, I will try to highlight the significance of archiving information in context, as opposed to archiving standalone information.


Big Think Layout

Structure of the Website

The structure of the big think website is more or less standard for any blog site: the logo, some advertising, social media links and the navigation in the header, followed by a banner with the daily stories and the popular posts or editor’s picks.

This is then followed by the main blogroll and the idea feed, occasionally interrupted by ads and newsletter signup fields. The website is then rounded off with the footer containing the topic selection, about us, contact and link sections.

The part of this layout that drew my attention, of course, is a button, marked in orange on the graphic, which I will refer to as the “time machine” button. It is a simple arrow that, when clicked, sets the date back one day at a time and changes the banner to display the idea of the day and the front-page articles of the day before. This can then be repeated to view earlier dates.

The reason I call this button the time machine is that it doesn’t just list the articles of the day before but actually displays the layout of the day before in the banner, allowing the user to virtually travel back in time to the web page of yesterday while retaining the context of the articles.
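A minimal sketch of how such a time-machine button could work server-side, assuming the site keeps one snapshot of its front-page layout per day. The store and function names here are hypothetical illustrations, not big think’s actual implementation:

```python
import datetime

# Hypothetical store: one front-page snapshot (an ordered article list,
# preserving layout context) per calendar day.
snapshots = {
    datetime.date(2012, 10, 12): ["Idea of the day", "Story A", "Story B"],
    datetime.date(2012, 10, 11): ["Older idea", "Story C"],
}

def step_back(current_day):
    """Return the previous day's date and its archived front page (or None).

    Each click of the arrow decrements the date by one day and reloads
    the banner with that day's stored layout.
    """
    previous = current_day - datetime.timedelta(days=1)
    return previous, snapshots.get(previous)
```

Because whole snapshots are stored rather than individual articles, the relative ordering and prominence of stories survives, which is exactly the contextual information that article-by-article archiving loses.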

Archiving Online

The issue with archiving online is similar to the problem of displaying information online, and that issue is context. Take, for example, your standard blog website with its continuous flow of information. In the present, the content of an article can be compared with other content available in the present and organized into tag clouds or some other relevant visualization, thus creating some form of context for it. When these articles are archived, however, the only context the user gets is the date when the article was published and (most of the time) who published it. Since the web is a dynamic medium that doesn’t follow a set publishing rhythm, it becomes difficult to trace contexts in the past.

Put that in direct comparison to print newspapers, which are archived as a whole, and you will find a significant difference: whenever an archived newspaper is revisited, the person viewing it can see which articles were displayed on the same page, their size relative to each other, the order in which they appeared on the page, and on which page of the newspaper they appeared. The information in the newspaper can thus be studied in the context of that newspaper at a given date in time.

Now, of course, a blog does not have “pages” like a newspaper, but it does have sections. These sections, much like newspaper pages, give different weight to the articles. Once these articles are archived, however, the visual representation of this hierarchy is lost and only the information from the article itself is left.

There are some who try to archive the web in one way or another, such as the Wayback Machine. This can be done for various reasons, e.g. to see the evolution of the design of a web page or the change in the technologies used to portray and organize information on a page, but one of them could be to see the interplay of information on a web page at a given date in time. For example, it is possible to see how a certain event, such as 9/11, was portrayed by a website at a given time.

“Big think”, with its time machine button on the front page influencing the banner and reloading the “front page” of yesterday, manages to partially recreate that effect of being able to see the context of information and ideas at a given date in time, and uses it as an integral part of its layout, giving the archived “front page” almost the same weight as the present one.

The Importance of Yesterday

Apart from offering the latest content, most websites have a basic search function that allows the user to search for articles by keyword or sometimes by date. But most standards of archiving only extend to the information being archived, not the actual website, since the content is dynamic and loaded from databases to be viewed in a standard HTML/CSS hull.

This timely medium favors topical context (connected by tags) over historical context. Topical context can be extremely useful, but it is not always the most relevant context for a specific piece of information.

The basic idea behind web archiving is to preserve the web; however, current archiving techniques usually only preserve the information uploaded to the web. You may wonder what the difference is, but once again it is very similar to the newspaper example given before. Archiving a newspaper and archiving articles cut out from a newspaper are two very different things, used for two different types of research. One concentrates on what happened, and the other on how what happened was portrayed in the media, in the context of the medium itself.

Similarly, archiving articles scattered across the web, ordered by date or author or even simply by keywords, achieves the archiving of information, but it may not be enough to answer the question of how this information was portrayed in the medium itself at a certain moment in time.

The comparison between the newspaper and the internet is not here to show how something should be done, but rather to show that the current way of archiving web content may be lacking from the perspective of contextual information portrayal within the medium.

As mentioned before, there are institutions that attempt to archive parts of the web in one way or another, but the task is much more difficult for an outsider than for the website itself. In the case of “big think”, this is handled internally by the website (although not as a screenshot of the entire site), with a button allowing the user to go back in time and view the ‘website of yesterday’. To draw a rather banal parallel to the biggest social media website: the timeline idea may not be that bad.

Of course, the big think model is only one example of how to visually integrate a different type of archiving into web design, and there is no single right way of doing this. But at least it is being thought about, and it could be thought about more, because, as the time machine function on the big think website proves, the ideas of yesterday may be just as good as the ones from tomorrow, and losing the context may result in the loss of the big picture and hence of the idea itself.

Why is the minimalist approach stepping into our online life, and how effective is it?

We have all heard about the popular minimalist approach by now. Most of us have probably seen a report on TV or YouTube about some new-born minimalist throwing away tons of expensive furniture, equipment and clothing, leaving the space super clean and super empty, preventing the overload that could make him, or her, feel stressed.


I find myself thinking: did any of my unused old phones, or any of my too many pairs of shoes, ever make me feel stressed? Somehow it seems to me it would probably be more stressful if I had none, or not enough. When we talk about new media, though, that is a totally different story. I cannot express how many times I have felt annoyed by unexpected apps loading on my phone, offering new updates for “Twitter on your phone” that I didn’t really know what to do with. Nor can I count how many times I have felt distracted looking at the number of notifications on my Facebook, trying to figure out how many events I am going to miss this time, as I obviously can’t go to all of them.

I decided to find out whether more people feel the same way and whether there is any evidence of a shift towards minimalism among online communities.

As soon as I typed “minimalism and social media” into the Google search engine, an endless list of users sharing stories of ‘unfriending’ people on Facebook popped up. People seemed to consider Facebook just another popularity contest after MySpace or Twitter, and felt urged to sign off from that trend of creating Digital Noise.

Afterwards, I created a survey to get a bit more detail about this phenomenon. It showed that among those interviewed, the share of Facebook friends never contacted or spoken to ranges from 20 to 70 percent, with the average (for the fifty-person group) being 45% of friends neither ever contacted nor paid attention to in the news feed. Additionally, only around 50 percent of the social platforms they were registered with were visited on a daily basis, with almost 34 percent hardly ever used, yet still holding their uploaded personal data and cluttering the web.


The most valuable insights, though, I gained during a focus group with some of the survey participants. Most of them had already tried to filter their Facebook friends or limit the list of people they follow on Twitter to minimize their information intake and data overload. However, this didn’t really stop them from adding new people in much the same manner as before, so the removed entries were quickly replaced with new ones (later to be removed!), because “meeting new people is so great”.
A few of the interlocutors had additionally applied the functions limiting the visibility of news and particular content, in order to structure the outcome and minimize unwanted or redundant features and information. At the same time, however, they were liking new pages or allowing new apps to interact with their profiles. On the one hand this expresses the human nature of constant learning and changing interests (clearly enhanced by new media as such and by the availability of information); on the other, it causes the circle to close, and therefore does not prove the minimalist approach very effective.
What also does not help in breaking out of that scheme is that “we have an inherent fear of not wanting to offend others”. We will try to “follow” everyone, even if our brain is struggling with such an information overflow. We are on Facebook because it meets our psychological need – “it validates us and makes us feel ‘Liked’” (both statements by digital minimalist Adam Boettiger).

Another interesting discovery was that content is only valuable if it has an audience that makes time to consume it.
So while the growing networks of social media, email and other channels are great for (e.g. Facebook’s) investors and “start-ups that need to demonstrate growth and usage, the fractionalisation and fragmentation of our own individual attention is suffering” (digital minimalist Adam Boettiger). There is enormous content wastage produced by social media. The amount of friends’ news or general posts that we deliberately skip on Facebook (being unable to absorb them), or the “twitter stream updating so fast, and so often, that I was missing a lot of tweets I didn’t want to miss” (source: the focus group), probably creates a content wastage keeping up with (if not exceeding) the product or resource wastage of consumer markets.


What conclusions can one draw from the above findings? Is the minimalist approach not suited to social media platforms? Is the flowing nature of the ‘online’ so incompatible with the physical world that it excludes applying to the virtual the minimalist theory that seemed to work in the real world? What other ideology, action or approach could then be more effective in uncluttering the online space, in the very same way in which we reduce the amount of valueless or unneeded things around us in order to feel free, less stressed and less overloaded?


It seems ideal to me that a minimal workflow should be pursued in software, hardware and work environment, which could greatly enhance our productivity and state of mind. It should take away what’s not important and, more specifically, highlight what is important, thereby allowing us to focus.

“The philosophy of minimalism, as I see it, is having exactly what you need, when you need it, for as long as you need it. It’s not about just less, it’s about just right”. — Patrick Rhone, Minimal Mac



Social media as trend indicators
It is not uncommon that when “social media networks” are mentioned, the most common association people make is with platforms that promote interaction with, and the sharing and exchange of, user-generated content. With its exponential growth, social media is now regarded as the collective weight and barometer of thoughts and ideas about every aspect of the world – a collective pulse of observation, wisdom and emotional reactions1. The aggregated opinions and feelings of online users are slowly gaining the status of ‘data’ and are altering the ways in which we arrange and communicate information.

It is no wonder, then, that this phenomenon has led to a new sort of content analysis, wherein mining the contents and attributes of social media allows researchers to explore the characteristics of social structures and analyse action patterns qualitatively and quantitatively, and, by extension, makes it possible to indicate and predict future trends when the data is extracted and analysed “properly”2.

A recent application of social media content for purposes of prediction is seen in the financial sector, where text and sentiment analysis methods are applied to Twitter content to gauge the relationship of tweets to stock returns (even predicting them a day ahead), trading volume and stock price volatility3. There already exist applications and networks, such as HedgeChatter and StockTwits, which offer real-time and automatic data mining to track stock trends. This hyped interest in Twitter content as an indicator of financial market patterns begs one question – is it really so?

Under the scanner
To study the phenomenon first-hand, I narrowed my focus to seven US volume-leader stocks (as of October 11 and 12) – $GOOG (Google Inc.), $MSFT (Microsoft Corporation), $AAPL (Apple Inc.), $BAC (Bank of America Corporation), $JPM (JP Morgan Chase & Co.), $INFY (Infosys Ltd.), $INTC (Intel Corporation) – to chart each stock’s trend using Twitter content and compare it with the stock’s chart on Yahoo! Finance, the go-to “factual” database. US stocks, widely influenced by individuals worldwide, were therefore an appropriate choice for this analysis, with a likelihood of a high volume of related Twitter content. If the charts created from the cumulative Twitter data followed a pattern similar to the Yahoo! charts, or at least gave similar indications, it would prove that Twitter can indeed be regarded as a source of stock market trend indicators.

The process
Using a Twitter scraper, I aggregated all available tweets related to each stock using the search query “<stock name> stock” (e.g. “GOOG stock”). Thereafter, I manually coded the collected tweets for each stock into three categories:

P – positive – including tweets that either included ‘buy’/’buying’, directly conveyed an upswing of the stock, or reflected a positive/optimistic attitude towards the stock.

N – negative – including tweets that either included ‘sell’/’selling’, directly conveyed a downswing of the stock, or reflected a negative/pessimistic attitude towards the stock.

NR – not relevant – including tweets that were either unrelated, ambivalent updates, mentioned incidentally in relation to other stocks, or related to other topics.
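As a rough automated stand-in for this manual coding, one could sketch a keyword heuristic like the one below. This is purely illustrative: the cue words beyond ‘buy’/‘sell’ are my own additions, and a human coder naturally catches sentiment that keyword matching misses.

```python
# Heuristic sketch approximating the manual P/N/NR coding scheme.
# Cue lists are illustrative assumptions, not the author's actual codebook.

POSITIVE_CUES = {"buy", "buying", "bullish", "upswing"}
NEGATIVE_CUES = {"sell", "selling", "bearish", "downswing"}

def code_tweet(text: str, ticker: str) -> str:
    """Return 'P', 'N', or 'NR' for a tweet about the given ticker."""
    lowered = text.lower()
    if ticker.lower() not in lowered:
        return "NR"  # unrelated, or the ticker is mentioned only incidentally
    words = set(lowered.split())
    pos = bool(words & POSITIVE_CUES)
    neg = bool(words & NEGATIVE_CUES)
    if pos and not neg:
        return "P"
    if neg and not pos:
        return "N"
    return "NR"  # ambivalent, or no clear signal either way
```

For example, `code_tweet("Time to buy GOOG stock", "GOOG")` would be coded `"P"`, while a tweet containing both buy and sell cues falls back to `"NR"`.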

Once the content for each stock had been coded, I created seven line graphs, one per stock, charting the difference between the sums of the positive and negative categories per day for the period of October 4 to October 12. These line graphs were then compared with the standard 5-day trend charts available on Yahoo! Finance for each stock. The Yahoo! Finance charts, however, only included data from October 8 to October 12, so the comparison in this study was made only over the data points common to both charts.
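The per-day aggregation can be sketched in a few lines. This is a minimal illustration assuming each coded tweet is stored as a (date, category) pair; the function and variable names are my own.

```python
from collections import Counter

def daily_net_sentiment(coded_tweets):
    """coded_tweets: iterable of (date_string, category) pairs,
    where category is 'P', 'N', or 'NR'.
    Returns {date: P_count - N_count}, the per-day value plotted
    in the line graphs; 'NR' tweets are ignored."""
    counts = Counter()
    for date, category in coded_tweets:
        if category == "P":
            counts[date] += 1
        elif category == "N":
            counts[date] -= 1
    return dict(counts)
```

Note that a date with only ‘NR’ tweets produces no data point, which matches the manual procedure of charting only the positive/negative difference.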

Here’s what happened:
In a nutshell, it worked.

Due to the differing time periods used in my graphs and the charts on Yahoo! Finance, the trend lines aren’t an exact match, but the implied patterns are most often the same. In the $GOOG comparison below, both charts depict the high on October 8, the dip on October 9, and the peak on October 11.

Chart tracing the $AAPL stock trend based on Twitter content (created by Audrika Rakshit) + Actual 5-day $AAPL stock chart on Yahoo! Finance

While the $AAPL graphs appear different at a glance (mainly due to different scales of measure), the information they convey is similar. Looking at both, it’s evident that the stock experienced a sharp dip on October 9 and found its way back up on October 11. Both charts also indicate the dip in Apple’s stock the very next day, October 12.

Chart tracing the $BAC stock trend based on Twitter content (created by Audrika Rakshit) + Actual 5-day $BAC stock chart on Yahoo! Finance

For the Bank of America ($BAC) stock, the trend lines differ on October 8 and October 9, but the Twitter content-based chart does echo the spike in stock action on October 11, as well as the fall the next day.

Chart tracing the $JPM stock trend based on Twitter content (created by Audrika Rakshit) + Actual 5-day $JPM stock chart on Yahoo! Finance

In the $JPM comparison, the Twitter-derived chart mirrors the plateau in the early stages and also captures the spike in the stock during October 11 and October 12, in line with the Yahoo! Finance graph.

Chart tracing the $INFY stock trend based on Twitter content (created by Audrika Rakshit) + Actual 5-day $INFY stock chart on Yahoo! Finance

The Infosys graphs are a clear match, with both charts showing a stagnant performance to begin with, followed by a sharp decline in the stock around October 12.

Chart tracing the $MSFT stock trend based on Twitter content (created by Audrika Rakshit) + Actual 5-day $MSFT stock chart on Yahoo! Finance

Chart tracing the $INTC stock trend based on Twitter content (created by Audrika Rakshit) + Actual 5-day $INTC stock chart on Yahoo! Finance

On the other hand, the charts for $MSFT and $INTC are less well matched. While the Twitter-sourced $MSFT chart does show the slight rise on October 10, it completely misses the spike in the Yahoo! Finance graph on October 8, showing the opposite, in fact – a stagnant dip. The $INTC charts show completely opposite trend lines: though there was a lot of Twitter buzz around $INTC on October 11 and October 12, that buzz did not mirror the actual dip in the stock market.

Indicator indeed
Given their advantage of real-time data, social media platforms such as Twitter can, in fact, indicate the nature of actual occurrences. Though the stock market is by nature volatile and unpredictable, early indicators extracted from online social media (Twitter feeds, blogs, etc.) may enable the prediction of changes in various economic and commercial indicators [4]. As this analysis shows, a careful selection of search queries and a clearly coded aggregation of Twitter content can, in many instances, indicate and reflect actual stock market trends. Using these findings to predict market performance is the next step for this analysis.


[1] McGuire, Seth. “Social Media and Markets: The New Frontier.” GNIP. 13 October 2012.

[2] Yu, Sheng, and Kak, Subhash. “A Survey of Prediction Using Social Media.” 12 October 2012.

[3] Byrne, Ciara. “Twitter Predicts Performance of Individual Stocks.” VentureBeat. Ed. Matt Marshall. 2011. 13 October 2012.

[4] Bollen, Johan, Mao, Huina, and Zeng, Xiaojun. “Twitter mood predicts the stock market.” Journal of Computational Science 2.1 (2011): 1-8. 12 October 2012.

It’s a disquieting idea: everything you do on the web, on your smartphone and on your tablet is being tracked, and all that data is gathered somewhere in the void of the internet. Thankfully, most of that data is relatively anonymous, lost in a vast cloud that is hard to fathom. However, the idea of tethered devices – devices designed for personal use that are connected to the internet, sending the user’s data to the databanks of the manufacturing company – makes that data less anonymous and the user more visible (and therefore, also more of a target). Take, for instance, Amazon’s Kindle, through which the conglomerate is privy to the reading habits of Kindle users. In ‘The Abuses of Literacy: Amazon Kindle and the Right to Read’, Ted Striphas outlines just how Amazon can use (and abuse) the fact that the Kindle is a tethered device, by, for example, remotely removing purchased e-books from users’ Kindles.

In the same article, Striphas offers a suggestion for how Kindle users can, in minor ways, take back the power from Amazon, for instance by ‘corrupting the data cloud’. Here, they would introduce noise into the data stream by inserting nonsensical notes into the pages of their e-books, or otherwise scrambling the data gathered by the device. While Striphas doubts how effective this would be, mainly due to the organization needed on the readers’ part, it raises an interesting point regarding Amazon on the one hand and its customer base on the other: the system might use the data of the users, but the users use the system as well, and they should be able to do so creatively, unpredictably and, ultimately, subversively.


However, in the world of e-books, there are more parties than just the readers and Amazon. Publishers play their part as well, in a wholly different part of the system. What follows here is a plan, neatly divided into five steps, on how a fledgling publisher could subvert the system maintained by Amazon and beat the company at its own game.


Step 1: Publish a book

Find yourself a manuscript to publish. It should have controversial content. Maybe the protagonist is racist, maybe the protagonist is in a healthy sexual relationship with his or her twin sister, maybe there are a rabbit, a horse and a carrot involved as well. Whatever the case, it should be taboo. Should you take yourself seriously as a publisher, you might want to go with something well-written and well-crafted, though this is not a necessity to net yourself some good results. When you have found a fitting manuscript, publish it as an e-book and allow Amazon to sell it.


Step 2: Fight the machine

Next, it is time to draw a line in the sand. Perhaps you could decide not to go along with Amazon’s discount policies, like the British publishing house Hachette Livre did. Amazon promptly targeted this publishing house by removing the ‘Buy Now’ button from the web pages for their books, making it impossible to purchase them directly. Do something similar; just make sure Amazon is taking steps against your publishing house specifically.


Step 3: Alert the media

Now that you have gotten Amazon to bully you, it is time to alert the media. However, rather than blaming company policy, you should blame the nature of your would-be bestseller as the reason Amazon is targeting you. Amazon is boycotting your envelope-pushing book! They are basically violating the right to freedom of speech! After all, they did the same to self-publishing author Selena Kitt and her incest-themed erotic fiction, remotely removing her books from Kindles. Put your degree in Literature to good use and defend your book and its ‘literary qualities’ as well as you can, citing Lolita, Lady Chatterley’s Lover and any book by the Marquis de Sade as similar books that were unjustifiably ‘banned’ as well.


Step 4: Wait

Let the media hype unfold. Two things should happen. First, Amazon should get tarred and feathered by the media, resulting in them making a hasty retreat and blaming malfunctioning algorithms and glitches for your e-books suddenly disappearing from their store. Trust me, they did the same when books dealing with LGBT themes suddenly disappeared from the sales rankings. (It’s not homophobic if it’s a system error.) Second, your controversial book should be squarely in the spotlight by now, so people will talk about it and, wondering what all the scandal is about, readers will promptly buy it.


Step 5: Profit

If things go according to plan, and you’ve chosen the right book to publish, you should now have a bestseller on your hands and plenty of money in your pocket. This would be the time to reconcile with Amazon, since, despite their numerous ethical quandaries, they do have a customer base that is not to be ignored: 137 million customers per week. Don’t worry about Amazon rejecting you: the company has absolutely no problem with taking a stand and going back on it, like they did when controversy apparently trumped freedom of speech. Rather than accepting Amazon’s monopoly and jumping through their hoops, you have instead used their reputation to promote your own e-book. Congratulations!


Note: This plan would work only once.




Ted Striphas, ‘The Abuses of Literacy: Amazon Kindle and the Right to Read’, Communication and Critical/Cultural Studies 7.3 (2010): 297-317.

“It’s always been a tricky balance between getting the story across, and making a great image. But thanks to some serious computing power, we’ve arrived at a crunch point. In one corner of the ring is information, and in the other is art, and they’ve been slugging it out.”

Three years ago, in a blog post for the Society for News Design (SND), John Grimwade raised his concern about the direction in which the fairly new medium of information visualisation was moving. In many cases, argued Grimwade, the process of making an infographic was reduced to running data through high-end software and wrapping the output in a visually appealing design. The aesthetic component of the visualisation, that is, the image it creates, is given priority over the actual goals of infographics, namely functionality and comprehensibility: “Dreary spreadsheets can be transformed into beautiful artwork. Spirals, circles, piles of dots and other assorted shapes. Lots of overlapping info in brilliant colors. Population trends turned into a wheel of interconnecting dots. I love it, but to be honest, I often have no idea what’s going on.”

The Infographic Inflation

Meanwhile, to say that infographics and interactive data visualisations are merely gaining popularity would be a grave understatement. A quick look at Google’s trending search queries shows how public interest in ‘infographic’ has skyrocketed in the past two years. This development is paralleled by the ever-growing amount of big data on the one hand and, on the other, the availability of user-friendly applications, one of which recently celebrated the 500,000th visualisation created with its web-based interface.

Easy-to-use design software and well-designed templates contribute to an inflation of infographics, which in many cases give priority to design over data, aesthetics over accuracy. The share of functional, well-crafted data stories within the expanding body of infographics is fairly low, leading Grimwade to think that maybe as many as “95% of the current output is dubious.”

Unsurprisingly, Grimwade’s concern is shared by others. Forbes editor Jason Oberholzer for example has introduced a ‘Today In Horrible Infographics‘ category in his blog to discuss worst-practice examples, while this Tumblr collects and criticises visualisations that are infoposters rather than infographics.

“Functions Constrain Form”

The interplay of data journalism, design and art is one of the key subjects of Alberto Cairo’s recent book entitled The Functional Art. Borrowing from some of the core principles of industrial design, Cairo puts functionality at the very centre of his book: “The form should be constrained by the functions of your presentation. There may be more than one form a data set can adopt so that readers can perform operations with it and extract meanings, but the data cannot adopt any form.” [1]

Like Grimwade, Cairo sees the current infographics hype caught between the poles of art, design and functionality (usually to the detriment of the latter), but suggests a uni-directional process that covers all three aspects. An infographics producer has to be three things, explained Cairo in his keynote at the 6th annual Infographics Conference in Zeist: first, a journalist, to develop a story and carry out accurate research; second, a designer, to choose the best-fitting, most functional type of visualisation for the data; and third, an artist, to produce visually appealing graphics. What is crucial about Cairo’s model is the order it suggests: functionality first, art second.

Dress to Impress

But what if we put the artistic component at the very centre of information visualisation, with functionality and comprehensibility taking the back seat?

100 Years of World Cuisine

Perhaps the most impressive (and unsettling) information visualisation project I’ve come across in recent years is 100 Years of World Cuisine by Clara Kayser-Bril, Nicolas Kayser-Bril and Marion Kotlarski. The project seeks to visualise the 38 million war casualties of the past century. The visualisation itself is essentially a photo of a kitchen table cluttered with jars, jugs and bowls, each of which represents one of 25 conflicts and is filled with the corresponding amount of (fake) blood. From a purely functional viewpoint, this visualisation does not excel: the arrangement is random, cluttered and three-dimensional; the containers have different sizes and shapes, making any comparison impossible; and the labeling further hinders the understanding of the data. Functionally speaking, bar charts or even a table would do a better job of presenting this data in a comparable manner. But scientific accuracy and functionality are obviously not the goal of this project: it is all about evoking a long-lasting, eerie impression in the viewer, as the designers state on the project website: “The horror lays hidden beneath the rigidity of numbers. Figures give us knowledge, not meaning”.

Art First, Information Second

100 Years of World Cuisine exemplifies what Robert Kosara terms an artistic visualisation. Kosara goes further, suggesting that information visualisation be categorised on a scale ranging from pragmatic to artistic, from utilitarian to sublime: on the very pragmatic edge of the scale, we would find highly functional visualisations, whose form is determined by their goal, that is, their ability to be decoded easily, quickly and accurately by the viewer.

For artistic visualisations, quite the contrary is true, Kosara asserts: “The goal is not to enable the user to read the data, but to understand the basic concern. In many ways, this step is the opposite of pragmatic visualization: rather than making the data easily readable, it is transformed into something that is visible and interesting, but that must still be readily understood.” [2]

Peter Ørntoft (2010): Information graphics in context

Giving less thought to the most accurate, scientific representation of data and instead putting the visualisations back into their context is key to the recent work of Danish designer Peter Ørntoft. In his 2010 project Information graphics in context, Ørntoft uses religious symbols to create real-life charts about the results of a public opinion poll in Denmark. The functionality and accuracy of these visualisations are, again, questionable. However, they manage to give the viewer an immediate idea of the story and context of the data, and the popularity of Ørntoft’s approach appears to prove him right.

Symbols and Sublime

A similar style was employed by Sarah Illenberger to visualise the results of a sex survey for German magazine NEON in the form of still photos. In the right-hand example, Illenberger arranged zip flies to create a real-life bar chart – with the position of the zip indicating the number of sex partners of the survey participants.

Sarah Illenberger for NEON Magazine

What all of the aforementioned examples have in common is that they include visual cues – blood, religious symbols, zip flies – that stand symbolically for the underlying story: they put, as Ørntoft phrases it, visualisations in context, but also context in visualisations. But do they then qualify as information visualisation, as art, as neither, or as both?

If we reduced the question of functionality to the most accurate and comprehensive visual representation of data, then the aforementioned examples would be semi-functional, if that. But if the role of a functional visualisation is also to stand out, to contextualise, and to catch the viewer’s attention in the first place, then they do succeed.

Since the very dawn of the medium, asserts Manovich [3], information visualisation has followed a set of simple rules: reducing information to numeric variables, representing it spatially through geometric shapes, and giving visibility to naturally non-visual information. But in the midst of an infographic inflation, the examples mentioned above open up a new exploratory space in which functionality may still be one, but not the only core principle in the fast evolving realm of information visualisation. Grimwade: “Don’t get me wrong, there are some great data visualizations around, and I applaud them. But it’s a new form, and we’re still learning what to do with it.”



[1] Cairo, Alberto (2012): The Functional Art: An Introduction to Information Graphics and Visualization. Berkeley: New Riders.

[2] Kosara, Robert (2007): Visualization Criticism: The Missing Link Between Information Visualization and Art. 11th International Conference on Information Visualisation, IEEE.

[3] Manovich, Lev (2011): What Is Visualisation? Visual Studies, Vol. 26, No. 1.

Piracy is not theft; let’s make this perfectly clear. Theft, according to the Merriam-Webster Dictionary of English, is “the felonious taking and removing of personal property with intent to deprive the rightful owner of it”. Piracy, on the other hand, is the act of producing and distributing an (illegal) copy of a digital entity. Copying, you see, has no effect on the original – it doesn’t modify or destroy it.

Now that I’ve led this big ethical elephant out of the room, I can tell you the story of how piracy enabled me to learn and effectively shaped me into who I am. Being Bulgarian, and born in the late 1980s, I had little access, if any, to high-quality western cultural content when I was young. In the decade after the fall of the Communist regime and the beginning of the “transition” to democracy and capitalism, the income of a working citizen averaged 35 US dollars. A month. You can understand, then, why going to the cinema, buying music on “original” cassette tapes (let alone CDs) and using licensed software was not an option.

I was lucky enough to have a parent working in design and owning a desktop computer, even if a somewhat outdated one. And I wanted to do stuff with it. Like every teen, I wanted to play games, watch the Hollywood blockbusters and listen to all the hippest bands. I suppose there were a lot of people in my position, since the Bulgarian pirate scene really flourished.

So, admittedly, I pirated the shit out of western culture. Star Wars, The Shawshank Redemption, Half-Life, Nirvana, Adobe Creative Suite – I did it all. At a time and place where knowledge of western languages was the most precious asset and the state educational system was heavily lagging behind, I learned English at quite a decent level, at the cost of no more than my internet connection. To add to that, I acquired skills in using specialized software for design and video production, as well as knowledge about peer-to-peer networks.

Illustration by Banksy

But I was also shaped along the lines of western values and lifestyle. I was immersed in American and British culture – something that later in life proved responsible for most of my educational, career and life choices. In effect, my personal “transition” from communism to democracy and capitalism was not a result of the political structure of my country but, oddly enough, of the anarchistic ways of piracy. Thanks to it I caught up with the world, and now I stand on even ground with western civilization. As Lawrence Liang points out in his article “Beyond Representation: The Figure of the Pirate”, piracy accomplished for me all the things the state and the educational system could not.

Many westerners have scolded me when I’ve told stories of the obscene amounts of music, movies and software I have pirated. What they fail to understand is that I used this mode of distribution for lack of any realistic access to an alternative. In some cases, a given movie would never even come to cinemas or air on TV. Some bands would never sell their CDs in local stores. And obtaining a legal copy of Adobe Creative Suite would have meant selling most of my organs.

However, much has changed over the last few years when it comes to the accessibility of content. Publishers have come up with new distribution platforms and business models that make music and video considerably more accessible. Examples such as Spotify and Hulu demonstrate how the cost of cultural content can become marginal. At the same time, by restricting their services to particular localities, they have once again exposed the weaknesses of copyright laws. Bulgaria, to this day, doesn’t have access to such innovative platforms, which naturally means that people still resort to illegal channels for getting their music and movies.

And while the publishing industry has been lobbying for reformed copyright laws that would help its old models survive, good entrepreneurial minds are trying to beat piracy at its own game. As Gabe Newell, CEO of game developer and distributor Valve, has aptly pointed out: “The only way to get people to stop pirating is by providing them with a better, easier to use service.”


Lawrence Liang, “Beyond Representation: The Figure of the Pirate”

Merriam-Webster Dictionary of English

The main goal of data visualization is to communicate information clearly and effectively through graphical means. (Friedman 2008)

Friedman’s definition is simple and concise, yet broad and applicable to any type of visualization. What it fails to address is what happens after the communication act. Would it be reasonable to say that some data visualizations have the goal of being aesthetically pleasing, as expert David McCandless suggests in his TED Talk on The Beauty of Data Visualization? Possibly, although much of the visual data art I have witnessed was not just art for art’s sake – there were underlying meanings and debates. Does data visualization stop at the level of informing, then, making readers acknowledge bits of information and thus live the illusion that mere understanding means making a change? In the same TED Talk, McCandless points out how we have become accustomed to demanding a visual aspect to information. My worry here is the risk that online journalism, if it fails to produce valuable and interactive visualizations, will become the same cold medium that makes the reader passively absorb information (remember McLuhan?). In yet another field of activity, for-profit companies are picking up on infographics and the like in their marketing strategies; ideally, this yields a return on investment, prompting their target audience to take action by buying.

What can one say about the various goals that data visualization has in different fields of activity? Does it aim at informing, at pleasing the eye, at shocking and surprising, at simply adding cultural capital? The goal, therefore, is a matter of defining the threshold where the impact of data visualization on its readers ends. Current academic literature, I believe, falls short of detailing this, and I will not take the task on myself.

What I do wish to discuss, however, is a case of data visualization whose goal is specifically to produce a strong call to action and thus real-world reactions. This, I believe, is especially the case with non-profit actors using data visualizations in their campaigning. These actors ultimately ask people to take action, whether by joining a movement, donating, reporting, etc. I have recently discovered that the NGO sector is empowering its campaigning through the use of data visualization. Even data visualization projects that do not present themselves as campaigns often have the right ingredients to make users take action.

In non-profit data visualization, one can see a pattern evolving: a call-to-action visualization needs first to inform, make an emotional appeal, and convince. Even before convincing, it needs to be visible, understandable and perhaps interactive enough to keep the user exploring. We are therefore looking at a sort of ladder with different steps, with data visualization goals aiming ever higher, from mere visibility to the final call to action.

In other fields, campaigns are already structured in these kinds of steps. In communication theory (and applied practically in advertising, marketing and social media campaigns) the model is called the “hierarchy of effects” or “communication ladder”. Each step is an objective which, if the campaign fulfils it for its readers, leads to the desired goal; otherwise the campaign can not only lose them, but create adverse opinions. The model is not fixed, and other industries have appropriated it accordingly. The Fledgling Fund, a creative foundation supporting documentaries and their makers, visualizes the impact of films and media slightly differently.

Communication Ladder in Advertising and Marketing. Source:

Communication Ladder in Creative Media. Source:








If we take Friedman’s definition of data visualization as a communication act, then I argue that data visualization, especially in a campaign context, is efficient if it attends to the hierarchy of effects. There is no such model available for data visualization, so I am proposing an exercise in outlining one:

  1. The step from seeing to exploring is made through a good choice of representation for the given data set and a good balance between author-driven and reader-driven approaches (Segel and Heer 2010).
  2. The step from exploring to understanding (awareness) is made through good interface design, usability, aesthetics, clarity of data, good encoding, acknowledgement of limitations, and the overall conveying of a story.
  3. The step from understanding to conviction is made through trust in the data sets and in the visuals.
  4. The step from conviction to action is made through the smoothness with which the data visualization directs the user to immediately available action points (click, donate, submit…).
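The proposed ladder can be expressed as an ordered checklist. This is purely an illustrative sketch: the step names mirror the list above, but the sequential-scoring idea and all names are my own, not an established evaluation method.

```python
# Hypothetical sketch: the four-step ladder as an ordered checklist.
# The ladder is sequential, so a failed step blocks all later ones.

LADDER = [
    ("seeing -> exploring", "fitting representation, author/reader balance"),
    ("exploring -> understanding", "interface, aesthetics, clarity, story"),
    ("understanding -> conviction", "trust in data sets and visuals"),
    ("conviction -> action", "immediately available action points"),
]

def highest_step_reached(passed):
    """passed: list of booleans, one per ladder step, in order.
    Returns how many consecutive steps a visualization climbs,
    plus the names of those steps."""
    reached = 0
    for ok in passed:
        if not ok:
            break
        reached += 1
    return reached, [name for name, _ in LADDER[:reached]]
```

For instance, a visualization that is explorable and understandable but fails to build trust would stop at step two, never reaching the call to action.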

In the following, I will take two examples of data visualizations used in non-profit campaigns and analyze them according to these steps.


1. Amnesty International: Countries practising the death penalty

Amnesty International is a powerful NGO which intervenes in worldwide issues. For some years, it has been employing data visualizations for its major campaigns. The visualization presented here explores which countries practice the death penalty, and in what numbers.

The visualization opens with a two-minute video that acts as a splash screen, providing context and numbers for this hot topic, as well as stating the NGO’s official opposition to such measures. The video thus introduces the story; it is entirely author-driven, a tactic which leaves out details and raises questions to be explored by the user in the data visualization itself. After the video (itself an animation using interactive forms and icons), the user can explore the actual visualization and build on the story. The visualization is a world map with each country encoded by color, according to the number of executions per year. The data is available from 2007 to 2011 and can be downloaded as an Excel file.








And here it stops. Although the map offers so many opportunities for encoding further information, all it provides is a report on how many executions and death penalties each country carried out over a period of five years. The reason for this is that much of the data actually surrounds the visualization… as text. The site itself plays the major role. A 74-page report offers an extensive analysis of the issue, and the left side of the page comprises further sections with data expressed in tables. Why has this data not been used for visualization?

In terms of the ladder, this project promises to reach the action step with its video, while the map falls short of conveying any complex data, story or trust.

What could have been done for improved impact? Have a look at the several infographics The Guardian presents, based on the same data sets but revealing more information.





2. Janaagraha (non-profit NGO) – I Paid a Bribe

When I attended the Be Good Be Social event in Amsterdam earlier this month, open data enthusiast and speaker Pelle Aardema discussed a project entitled I Paid a Bribe, a site dedicated to Indian citizens anonymously reporting cases of bribery (developed by the NGO Janaagraha, whose dedicated role is “improving the quality of citizenship and infrastructure and services in India’s cities”). The visualization’s role is thus to “uncover the market price of corruption” in the country. All this crowdsourced data goes into the “Bribing trends” section, a constantly updated data visualization of analytics reports.







The first visualization gives overall information on what the reader is looking at: 305 cities and 23 departments in the Indian state, with a top five of the most corrupt cities by bribe numbers – a good hint at what the story will reveal. The second slide of the visualization allows for in-depth exploration with a mix of author-driven and reader-driven approaches, analyzing either a city’s statistics or a department’s. The visualization is split into three dials: the first is a bar graph encoding the not-paid, not-asked and paid bribes (encoded by color) in each city/department. The second reveals the amount of money lost to bribes overall, as well as in each city and each department. The third acts as a timeline, showing trends in bribing in cities and departments since the project’s start. Exploration is further enhanced by a choice of manual or automated presentation and a search box at the top.

Apart from effectively communicating a story (and an unfolding one too!), the visualization serves the powerful purpose of urging people to submit a contribution – hopefully, the story will end when the paid bribes on the bar graphs reach zero.
The site itself supports the whole project very well, as anyone can actually read the submitted reports of bribery or good ethics. The site further includes articles on best practices in dealing with authorities and on awareness of laws – restrictions and rights – which makes this visualization useful not only to citizens, but also to authorities. The project is easily scalable and could be implemented in every country – in fact, some countries already have their own “I Paid a Bribe” platform.


Having worked in the non-profit sector myself, I think it could benefit hugely from using data visualization for campaigning, transparency in reports, and communication of goals and results. However, I see two important issues:

1. Lack of theoretical development on data visualization impact: how can we make data visualization reach the goal of action? There is no model similar to the communication ladder, hence my experiment with proposing one in this blog post.

2. Lack of skills in making data visualizations. At the Big, Open and Beautiful conference last evening in Amsterdam, the moderator had a recurrent question for his speakers (mostly data journalists): what skills do you need to do this? The replies were either “one man good for everything: researcher, storyteller, programmer and designer” or teamwork with separate roles. Another option is simply outsourcing to a design/programming company. This is truly an issue for NGOs in terms of affording and even finding the right collaborators. Bad visualizations are much worse than clean PDF reports in plain text, unless one is a perfectionist and manages well with online courses teaching the skills.

All in all, I believe NGOs could leverage the benefits of data visualization in their campaigns, but they face difficulties getting it right, as the field is still developing theoretically and the professionals who make visualizations are only just emerging.



McCandless, David. “The Beauty of Data Visualization”. August 2010. 15 March 2013 <>

Friedman, Vitaly. “Data Visualization and Infographics”. Smashing Magazine. 2008. 15 March 2013 <>

Segel, Edward, and Jeffrey Heer. “Narrative Visualization: Telling Stories with Data.” IEEE Transactions on Visualization and Computer Graphics 16.6 (2010): 1139-1148.


Nowadays, more and more game producers release video games on digital distribution platforms. Besides the increase in digital distribution of game content, these platforms and game developers offer gamers an experience outside the gameplay environment. For instance, these platforms have integrated other gaming services such as new communication features, achievements, reward systems, and gameplay statistics. Some of these changes are recognizable in social platforms such as Battlelog and Xbox Live, which not only allow players to purchase game content digitally, but also allow them to analyze their gameplay performance and communicate within a social network. Battlelog is an innovative social platform produced by Electronic Arts (EA), while the Xbox Live platform is part of Microsoft’s Xbox 360 and is mainly designed for console games. Both Battlelog and Xbox Live offer gamers all sorts of stats tracking and visualizations of their gameplay data and achievements. Furthermore, these platforms allow players to analyze and discuss their game data with other gamers. More interesting is that leaderboards, achievements, badges, and other forms of visible progress are essential components of these platforms. Moreover, it seems that the visualization of game data on these platforms has become a significant complement to gaming nowadays. For instance, it provides players with a casual approach to data analysis that gives them more insight into their own performance.

This can be recognized in the work of Ben Medler, which concentrates on achievements and game data visualization. He examined how visualization, analytics and games intersect. Moreover, he argues that “play analytic systems surround the experience of playing a game, visualizing data collected from players and act as external online hubs where players congregate” (Medler 14). In this respect, platforms such as Xbox Live and Battlelog can also be seen as play analytic systems. Medler further explains the term by saying that play analytic systems “allow a player’s data and the data from other players to be combined and analyzed outside of gameplay” (14). An example of such an analytic system is a high-score leaderboard. A leaderboard is a game mechanic that is essential for making comparisons between players. Medler explains that leaderboards are often available outside the game on websites or platforms. In this way, players can monitor their scores outside the game and discuss them with friends. In addition, general game stats and stored achievements can also be viewed and analyzed outside the actual gameplay.
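A high-score leaderboard of the kind described here can be sketched in a few lines of Python; the player names and scores below are invented purely for illustration:

```python
# Minimal sketch of a high-score leaderboard kept outside the game.
def top_players(scores, n=3):
    """Return the n highest-scoring (player, score) pairs."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical player data:
scores = {"alice": 4200, "bob": 3100, "carol": 5600, "dave": 2800}
print(top_players(scores))  # → [('carol', 5600), ('alice', 4200), ('bob', 3100)]
```

On a platform like Battlelog this ranking would of course be computed server-side over every player’s stored stats, but the comparison mechanic is the same.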

Figure 1: Stats overview (screenshot taken from the Battlelog interface)


When looking closely at the social platform Battlelog, it becomes clear that players are able to track and analyze their game stats from their web browser. The platform is an essential part of the core of games such as Battlefield 3 and Medal of Honor. It moved many traditional in-game features to the platform, such as player profiles, statistics, leaderboards, and several other things associated with a player’s game progress. Moreover, these stored statistics are often visualized in order to give players more insight into their game progress. For instance, players can view their number of kills, their number of deaths, their average score per minute, their accuracy percentage, and how many games they have won. In addition, players can see which rank they have, how many hours they have played, and how many times a specific weapon has been used. A visualized progress bar is also shown, which informs players about what has been completed and what their current rank is. These bars are usually associated with leveling systems. Rendered partly in a greyed-out tone, the progress bar shows players what percentage of the required points they have earned and how many more they need to reach a higher rank. Basically, all achievements, ranks and other game data can be recorded and visualized on these platforms.
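The arithmetic behind such a progress bar is simple. A small sketch, with rank point thresholds invented for the example (Battlelog’s actual point tables are not reproduced here):

```python
# Illustrative sketch of a rank progress bar; thresholds are invented.
def rank_progress(points, current_threshold, next_threshold):
    """Percentage of the way from the current rank to the next one."""
    span = next_threshold - current_threshold
    return round(100 * (points - current_threshold) / span, 1)

# A player with 13,500 points, where this rank starts at 10,000
# points and the next rank starts at 20,000:
print(rank_progress(13500, 10000, 20000))  # → 35.0
```

The greyed-out remainder of the bar then simply represents the other 65% still to be earned.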

Figure 2: Win/Lose graph (screenshot taken from the Battlelog interface)

It is also worth noting that players not only use these systems to see their accomplished achievements; the systems can also give them insight “into their play behavior, and potentially get better at playing the game” (Medler 96). In addition, this insight may affect a player’s strategy, because it allows players “to review and plan their future game actions outside of real-time gameplay” (Medler 13). In this way, players can use these play analytic systems to optimize their strategy. This is also explained by Medler, who emphasizes that play analytic systems support play through game data visualizations.

Casual InfoVis

Interesting to note is that “whether play analytic systems are built by players or developers, the systems are always focused on the player as an audience” (Medler 12). From this perspective, the visualization of game data on these platforms can also be seen as casual information visualization (Casual Infovis), a term introduced by Zachary Pousman. Pousman shows in his work on casual information visualization how users can analyze data from a casual approach. According to Pousman, Casual Infovis “is the use of computer mediated tools to depict personally meaningful information in visual ways that support everyday users in both everyday work and non-work situations” (Pousman 1149). He points out that Casual Infovis is not only used in a working environment; rather, it is mostly intended for ‘casual use’. For instance, this can be found in software for visualizing personal data such as photo collections. Furthermore, Pousman explains that “visualizations of social processes, social networks, and social situations have become another emerging and exciting domain for infovis researchers” (1146). Another difference between traditional infovis systems and Casual Infovis systems is that Casual Infovis is personally important. Furthermore, users of Casual Infovis may vary from experts to novices. They are not necessarily experts in analytic thinking, nor are they required to be experts at reading visualizations (Pousman 1149).

To gain insight into a casual infovis system, Pousman uses the Nike+ Running app as an example. The Nike+ Running app tracks distance, pace, time and calories burned with GPS and provides the user with constant feedback. Just as in the Nike example, platforms such as Battlelog and Xbox Live also visualize information that is personal in nature. For instance, these platforms visualize a player’s gameplay history and their earned achievements. In this way, players can view all sorts of stats tracking and visualizations of their gameplay data and achievements. In addition, a player does not need to be an expert to be able to analyze this type of information visualization.

Moreover, these platforms show similarities with the idea that “visualization may have a catalytic effect on communication between users” (Viegas 1). From this perspective, social information can be used for social purposes. When looking closely at Battlelog and Xbox Live, it becomes clear that players can view each other’s profiles and see what games their friends are playing. In this respect, these platforms allow players to analyze and discuss their game data with each other. This casual and social approach to data analysis can give players more insight into their own game performance.


Medler, Ben. “Play with Data: An Exploration of Play Analytics and Its Effect on Player Experiences.” (2012).

Pousman, Zachary, John T. Stasko, and Michael Mateas. “Casual Information Visualization: Depictions of Data in Everyday Life.” IEEE Transactions on Visualization and Computer Graphics 13.6 (2007): 1145-1152.

Viegas, Fernanda, Martin Wattenberg, Matt McKeon, Frank van Ham, and Jesse Kriss. “Harry Potter and the Meat-Filled Freezer: A Case Study of Spontaneous Usage of Visualization Tools.” Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS ’08). IEEE Computer Society, 2008.

After more than a century of constantly re-inventing audio recording and playback, plenty of disagreement exists about which methods are best. The invention of portable playback devices has certainly not made it easier to take your own pick. Since streaming became popular, this might have become even more difficult. Some people would claim that nothing beats good old vinyl records (but try listening to one on a bus). Some people hate MP3s, but are very fond of uncompressed digital audio files (which consume a lot of megabytes). Some people don’t seem to care about audio quality at all and just want access to as many songs as possible.

Different consumption as well as distribution intentions require different audio qualities. For this reason, music streaming services have to make choices about bitrates. These choices, however, may have different implications.


So what’s different?
SoundCloud and Spotify (just to name a few) handle audio quality in their own way. SoundCloud streams at 128kbps. The reason for the amount of compression might be economical: smaller amounts of data are streamed, so less bandwidth is used. This could benefit SoundCloud’s servers as well as the user’s experience. The audiophile is left unhappy, though. Spotify has found a solution for this: premium users are allowed to stream at 320kbps. Still a compressed format, but certainly superior to the one mentioned above.
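The economics are easy to make concrete: at a constant bitrate, the data streamed per minute follows directly from the bitrate (kbps here means kilobits per second, with 8 bits to the byte):

```python
# Approximate data streamed per minute at a constant bitrate.
def megabytes_per_minute(kbps):
    bytes_per_second = kbps * 1000 / 8
    return bytes_per_second * 60 / 1_000_000

for rate in (128, 320):
    print(rate, "kbps ->", megabytes_per_minute(rate), "MB/min")
# → 128 kbps -> 0.96 MB/min
# → 320 kbps -> 2.4 MB/min
```

So a 320kbps stream costs roughly two and a half times the bandwidth of a 128kbps one, for the platform and the listener alike.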

And does it matter?
The quality of streamed music does not only affect consumers, though. Music producers and distributors might have a different perspective on the case. The availability of high quality audio streams makes piracy all the more attractive. Tools like SoundCloud Downloader enable users to rip music that was never meant to be played offline. It currently converts the streamed file to a 128kbps MP3. Were SoundCloud to improve its streaming quality, high quality rips would become available as well.

Some artists and record labels have shown themselves to be concerned about the quality of the music they post online. They try to avoid piracy by uploading downgraded previews of their (often unreleased) songs. Low playback quality often isn’t audible on cheap speakers, but high-end sound systems will tell the difference, so playing ripped versions isn’t an option for most DJs. Ironically, ‘downgrading’ to 192kbps is no exception. So some artists try to protect their music from being pirated, not knowing their efforts are useless.

What should we make of this?
So this raises questions about the aesthetics of audio streaming. If some artists apparently don’t even notice the downgrading of their own music, how many listeners would? Should platforms strive to deliver music in the highest quality possible, or should their ambitions be directed elsewhere (like availability, for example)? Will high quality streams discourage artists from previewing their freshly made pieces of music?

My suggestion is that different uses require different bitrates. Perhaps more personalized upload options combined with greater transparency could solve some problems. What if distributors (whether artists or labels) could choose between different playback qualities, and users could directly see what bitrate they were dealing with? I realize this might be a somewhat optimistic idea coming from someone whose involvement in audio aesthetics is way above average. And especially for this reason, I highly encourage everyone to engage in this discussion.

Smartboard

But did it really fail?

Rather, the Interactive Whiteboard (IWB) is in fact a huge success from the business perspective of educational technology. Introduced in the early 2000s and richly funded by educational technology industries and governments, the IWB has obtained the position of ‘vital’ teaching material in many schools in the UK and the US. It attracted many investors and researchers who eagerly wanted to prove its positive impact on children’s effective learning and better achievement. A series of studies have been conducted, and they statistically proved, not without controversy, that the IWB brought forth from children the sweet fruit of better grades.

What the IWB failed to do, however, was lead the fundamental remodeling of the pedagogical horizon in the 21st-century new media era, as it was expected to (some even called it the device of the future).

This failure was quite predictable, though, when we think about it, because the Interactive Whiteboard is, in the end, a whiteboard: a teaching material (rather than a learning one) that is solely under the jurisdiction of the teacher, and something that sits in front of the class, set far from the actual learners. When incorporating the interactivity of new media into effective learning, ‘interactivity’ has to occur between the learning material (the media) and those who learn (the users of the media); that is the only way of true interaction. Interaction should not be proxied.

In order not to restrict the tremendous educational potential of digital media to something that is barely more than an electronic chalkboard, I am suggesting a total overhaul of the landscape of classrooms: give every student an interface of their own, or at least one for a team of 3-5(*). Schools and other private educational institutions can benefit from this holistic/systematic change in two ways:

a) enhancing achievement by promoting self-directed learning, and b) moving towards a more effective evaluation system through data.

First, students of the 21st century are digital natives for whom using digital media to get information is as natural as using a fork to eat. Thus they possess, and prefer, independent and autonomous learning styles (Barnes et al, 2007). The effectiveness of self-directed study has been well proven through many pedagogical studies over time. An interactive media device given to each student, most probably connected to the web, will make vast resources of information available at hand and, with an appropriate and engaging interface, will align with the intrinsic habit of digital natives: scavenging for information.

Second, by guiding students to proactively learn with individual interactive devices that can preferably traverse from school to home, educational institutions can collect extensive amounts of objective data on how students learn. Currently, teachers and schools base their evaluation of students’ achievement on examinations. However, it is not only easier but also makes more sense to look into the orderly advancement and procedure of learning in order to spot each student’s problems or excellence, than to stare at exam results, which are often too ephemeral to embrace the numerous variables in the course of learning. The cofounder of Coursera, Daphne Koller, said in her TED talk in 2012:

“(With data) You can turn human learning from hypothesis-driven mode to data-driven mode, a transformation that for example has revolutionized biology.”

Automatically created and collected data about each and every student will enable teachers to better mentor them with personalized information; furthermore, schools will be able to devise more effective education models based on the accumulated data.

In addition, there are other aspects that we must explore prior to implementing such a fundamental change of pedagogy in order to secure optimum impact, including: the subject of education (which subjects are learned better with interactive media and which are not); student demographics (adult or child education, where the capability for self-direction can differ); and the place of education (at school or at home). In accordance with these variations, the degree of implementation and the most effective form of interface should be determined.



(*) The effectiveness and importance of group learning in education using media, mainly due to its possibility of feedback and discussion among team members, is well described in the following TED talk:


References and further information

G. Moss, C. Jewitt, R. Levačić, V. Armstrong, A. Cardini and F. Castle, “The Interactive Whiteboards, Pedagogy and Pupil Performance Evaluation: An Evaluation of the Schools Whiteboard Expansion (SWE) Project: London Challenge”, Institute of Education, 2007

S. McCrummen, “Some educators question if whiteboards, other high-tech tools raise achievement”, Washington Post, 2010

K. Barnes, R. Marateo, and S. Ferris,  “Teaching and Learning with the Net Generation”, Innovate, 2007

B. Taylor, Self-Directed Learning: Revisiting an Idea Most Appropriate for Middle School Students. Paper presented at the Combined Meeting of the Great Lakes and Southeast International Reading Association, Nashville, TN, Nov 11-15. [ED 395 287], 1995

Scrapebox is a versatile SEO tool, but it’s mainly known for its comment spamming functions. The purpose of this article is to show that this software offers many interesting features that could be applied to academic and market research. For the sake of brevity, we will focus our review on three core tools offered by Scrapebox: the Keyword Scraper, the URL Harvester and the Meta-data/Comment Scraper.


Scrapebox was developed for SEO purposes; its popularity peaked a couple of years ago, before search engines adjusted their algorithms to penalize comment spamming backlinks.

In addition to the features examined in this review, it offers several varied functions like proxy testing, spun comment spamming, page and domain PR scraping, etc. Having been born as a “spamming” tool, its core strength is the ability to manage huge amounts of data. Most of its tools can manage up to one million inputs and theoretically an infinite number of outputs per operation. Even though it doesn’t have many analytic capabilities, we suggest that its powerful data gathering features could be productively used outside the SEO sphere.

Keyword Scraper


Keyword scraper – One of the many functions of Scrapebox. The bottom left box shows all the sources that this software uses to scrape related keywords.

Scrapebox can scrape related keywords from several popular internet services, listed in the picture above. This function is based on the suggestions offered by these websites when the user starts typing a query. It is worth mentioning that it is possible to select the sources and then compare them. It is also possible to scrape these suggestions from different nations, since proxies are supported for every tool provided by this software. There are four depth levels available, which means that Scrapebox can automatically scrape the additional suggestions that are shown when the user types the first-level suggestions.
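The depth-level mechanism can be pictured as a breadth-first expansion of suggestion lists. In the sketch below, `fetch_suggestions` is only a stand-in for a real autocomplete request, and the queries and replies are invented for illustration:

```python
def fetch_suggestions(query):
    # Stub standing in for a real autocomplete API call; a real
    # implementation would query a search engine's suggest endpoint.
    fake = {
        "video art": ["video art festival", "video art history"],
        "video art festival": ["video art festival athens"],
    }
    return fake.get(query, [])

def scrape(seed, depth):
    """Collect suggestions for `seed`, expanding `depth` levels deep."""
    results = set()
    frontier = [seed]
    for _ in range(depth):
        next_frontier = []
        for query in frontier:
            for suggestion in fetch_suggestions(query):
                if suggestion not in results:
                    results.add(suggestion)
                    next_frontier.append(suggestion)
        frontier = next_frontier
    return sorted(results)

print(scrape("video art", depth=2))
# → ['video art festival', 'video art festival athens', 'video art history']
```

At depth 1 only the direct suggestions are collected; each further level feeds the previous level’s suggestions back in as queries, which is what Scrapebox’s depth setting does at scale.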

These features give us the opportunity to examine how the same concepts are linguistically approached by different internet services. Doing this type of analysis could give insight into the technical differences between search algorithms. Testing the hypothesis that Amazon’s search algorithm conceptualizes words in a more commercial way than Google’s could be an example. As mentioned before, this tool is capable of scraping from different search engines and from different locations. Therefore it could be possible to study the various linguistic correlations and differences that countries have on the web.

URL Harvester


Search Engine Scraper – Scrapebox can scrape SERP URLs from Google, Yahoo, Bing and AOL.

Another interesting function that Scrapebox provides is the URL Harvester, a simple but flexible tool for scraping the most popular search engines: Google, Bing, Yahoo and AOL. The user inputs a set of keywords and the software gathers all the URLs from the search engine results for those keywords. Search engines provide one thousand results per keyword, but Scrapebox is able to restrict the selection to any number of results (e.g. only the top 10). Moreover, there is the option to scrape the same search engine in different languages/versions. As shown in the picture, it is possible to use advanced search operators like “site:” and “inurl:”.

One of the many ways to use this tool could be to track down censorship. Performing a search with the same keyword, but in a different country each time, would result in a list of the most popular sites per country. In case one or more URLs are missing from a country’s list while they appear on most of the others, this might be a sign of censorship. Scrapebox also offers a tool for URL list comparison that could be used for this purpose.
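The comparison itself boils down to a plain set operation. A sketch with invented result lists, flagging URLs that every other country returns but one country does not:

```python
# Sketch of the censorship check: URLs common to all other countries'
# harvested lists but absent from one country's list. Data is invented.
def missing_urls(results_by_country, country):
    others = [set(urls) for c, urls in results_by_country.items() if c != country]
    common_elsewhere = set.intersection(*others)
    return common_elsewhere - set(results_by_country[country])

results = {
    "NL": ["a.example", "b.example", "c.example"],
    "DE": ["a.example", "b.example", "c.example"],
    "XX": ["a.example", "c.example"],
}
print(missing_urls(results, "XX"))  # → {'b.example'}
```

A hit here is only a hint, of course: a URL might also be missing for linguistic or ranking reasons, which is why several keywords and repeated harvests would be needed before calling it censorship.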

Meta-data/Comment Scraper


Meta Data Scraper – In this picture Scrapebox is scraping Titles, descriptions and keywords for the Masters of Media Blog.

This last tool is meant to extract HTML metadata (title, description and keywords) and comments from long lists of URLs. As shown in the picture, this feature appears to have a bug, since it is missing some URLs. This is a potential problem to be addressed before considering the tool for research purposes, although it might have been caused by an internet connection problem. The last software update was on the 30th of August 2013, so there is a reasonable chance that any remaining problems will be taken care of.

These last two tools work in synergy and can be used for data retrieval tasks. Their capabilities, though, do not end there. They could, for example, be used to discover trends that emerge from heated social debates. To be more specific, these tools could be used to study how keywords such as terrorism, Islam or 9/11 are interpreted in different countries, and whether those interpretations form trends. The search engine results for a specific keyword could be extracted in the form of a list by using the URL Harvester. The next step would be to import this list into the Meta-data Scraper and examine the new keywords that emerge. With this new data set at hand, a statistical analysis could be performed in order for patterns to appear (e.g. secondary keyword appearance frequency). Those patterns could indicate trends that are forming on the web, and therefore in a fraction of society too. Moreover, by repeating the same procedure over time, a study could be made of how specific trends grow and decay.
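The counting step at the end could be as simple as the sketch below, which tallies individual keywords across scraped meta-keyword fields (the fields themselves are invented examples of what the Meta-data Scraper would export):

```python
from collections import Counter

# Invented meta-keyword fields, one string per scraped page.
scraped_fields = [
    "terrorism, security, politics",
    "terrorism, religion, politics",
    "security, politics",
]

# Tally every individual keyword across all scraped pages.
counts = Counter(
    keyword.strip()
    for field in scraped_fields
    for keyword in field.split(",")
)
print(counts.most_common(2))  # → [('politics', 3), ('terrorism', 2)]
```

Run over harvests from different countries, or repeated over time, such frequency tables are exactly the raw material for the trend analysis suggested above.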


In conclusion, this was far from a complete analysis of the tool. The purpose of this article, though, was to suggest some alternative applications of this interesting piece of software.

“Readily combustible material, such as dry twigs, used to kindle fires.”

Tinder is the new application that tells you if the people you like (or want to sleep with) like you too. All you have to do is download the application and sign in through Facebook, and Tinder automatically uses the photos from your Facebook profile.

What Tinder does is take basic info from its users and match them with others who “look” compatible based on location, interests and common Facebook friends. Users can chat with each other only after they have both liked each other’s photo, and they can choose whether they want to talk to someone or not. The best part about this application is that it’s anonymous and it doesn’t reveal your activities on Facebook. The app is not only meant for straight people: guys who like guys, girls who like girls, people who like both – everyone has the opportunity to meet someone special.

Tinder’s target group is men and women aged 18-35. It is available for Android and iPhone.

The initial release of the application was last September, and it has created more than 250 million matches so far. Tinder was created by Sean Rad and Justin Mateen. It started because the creators realised that there are applications that connect you with people, but not an actual application that helps you meet new people and hooks you up with them.

There is a similar application for gay people that started before Tinder, called Grindr. It is aimed at gay or bisexual people who want to have “easy” sex. Both Grindr and Tinder work with satnav (satellite navigation). Grindr allows users to locate other users within a close distance of where they are. Each user has a profile which includes photos, basic info and their location. So if someone is interested, he can chat with the other user immediately and exchange photos and other extra information.

Looking at the reviews and ratings that can be found in the Apple App Store and Google Play, the app gets four out of five stars from users. Although comment #6 by Alleenzaam states: “No recommendations, the loneliness is killing me.” The comments also show that some people disagree with the app having to link to Facebook, as they don’t have Facebook or don’t have the 50 Facebook friends that are required to use the app.


We thought it would be fun to compare the perspectives of men to those of women when it comes to Tinder. In ‘Tinder review: a woman’s perspective’, relationship expert Caroline Kent tests out the dating app for a week. The following quote from the online article shows a part of the superficiality of the app: “On closer inspection, his pics are all selfies, which screams ‘I’m vain and don’t have any friends to take pics of me.’”

When looking at the app through a man’s eyes, Willard Foxton lets a friend take a look at the women he comes across with his Tinder account: “Too fat..No…Too thin..No…Eww, ugly dress…No! That’s never her car…binned! Mirror Selfie… No!”

An important point Caroline makes in her article is that Tinder is the app of the moment because of its immediacy. This makes us wonder: what does this really mean? Time Barrow, a blog on digital orality, quotes Bolter and Grusin’s ‘Immediacy, hypermediacy and remediation’, stating that immediacy is about the user of a medium wanting immediate access, understanding, and information. Tinder offers immediate access for anyone who has Facebook and has downloaded the app, but at the same time the app doesn’t allow users to edit all the information that has been (incorrectly) pulled into the app from a Facebook account. Tinder is easy to use thanks to a clear interface and buttons that clearly show what they are for. Tinder offers little info, which probably makes it easier to use and more fun as well. By only knowing what someone looks like, their age, mutual friends and one simple quote, the possibility of an eventual date gets even more exciting.

Reviewing the app from a personal stance, we would say using it can work as a boost or a downer for your ego, depending on the number of matches you have. When you try to think of Tinder as if it were real life, Tinder actually is like an interface between your brain and people you could see walking down the street. It’s no secret that people judge each other within a matter of seconds; Tinder makes you do the same thing, without having to actually see someone in real life. In our environment we have also noticed people who are in a relationship using the app, just to get some extra attention. If you need attention, have a romantic vision of meeting people online in a somewhat superficial way, or just hope to meet real love: get your Tinder on!

New digital means of translation are starting to affect the way we interact with the world. If everyone can understand every foreign language with the use of new technology, how can those changes be understood, and what will the impact be on the world we live in?

There are almost 7000 languages in the world we live in, branched out into different language families and many different writing systems. If people are unfamiliar with the language of a country, it can be difficult for them to navigate and interact with their surroundings; if they are also unfamiliar with the writing system, it can be nearly impossible. New technology is really starting to affect the way people interact with language around the world. Free online language translation services have been around for more than 15 years. Realistically, this technology doesn’t have an enormous impact on how we interact with foreign languages in an actual foreign country, because a user would have to enter the text into an online translation service manually. Moreover, when people are presented with a language in a writing system different from their own, it is impossible to use such services. More recently, apps like ‘Scan&Translate’ can directly translate text from a photo. This means that if someone were to take a picture of some Russian text, they could get an English translation of the text in the photo, even though they might not actually be able to read the language in question.

The technology of Google Glass has users wearing a head-mounted display. With its introduction, translation technology could be taken to the next level. An app called ‘Word Lens Glass’ is currently in development for Google Glass, with which users are able to command Google Glass to translate text that they see in front of them. Even though the use of Google Glass is not widespread (yet), it does open up the possibility of having direct access to information in a language users would not have access to without the tool.

Google Glass Translation

Memory is a key word when it comes not only to digital media, but to all media that have a communicative element. When books were invented they were seen as an “aid to memory” (Mitchell and Hansen 17) by ‘storing’ information on paper. In this digital era, computers are “the most powerful exteriorization of memory technology in the history of media” (17). The duality of media innovations has always been that “each new medium operates by exteriorizing some function of human cognition and memory, it involves both loss and gain” (Hansen 173). Hansen uses the myth of Theuth, as recounted by Socrates to Phaedrus, to explain this duality. The myth is about Theuth, the Egyptian god who invented writing. Theuth presents writing to the Egyptian king Thamus as being able to “make the people of Egypt wiser and improve their memories” (173). The king does not agree and instead claims the opposite: “If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written” (173). Marshall McLuhan explains this duality in the sense that media can be seen as extensions of our senses, but also as “amputations” of the organs they extend (173).

Even more advanced technology like direct oral speech translation (there has been some experimentation in this field) seems like something out of a science fiction film to us now, but it could very well be the direction we are heading in. If we no longer have to speak a language to understand someone from a completely different country, and they can understand us, the need for learning foreign languages becomes obsolete. The memory that is externalised here is not that of the users themselves, but a collective digital memory. In that case Hansen’s statement that “computers are becoming even more necessary” while people “are becoming ever more contingent” (178) couldn’t be more true. We will have no more need for second language education, but we will need new communicative technologies to make contact with others. What the precise effects will be on human brains and society is unclear. It has, however, been proven that learning additional languages is beneficial for brain development, and recently it has also been shown to slow brain aging (Bak). So perhaps the ability to communicate in a language we do not know, by accessing a collective digital memory, results in us edging a step closer to becoming cyborgs – or rather, old-brained cyborgs.


Bak, Thomas H., et al. “Does Bilingualism Influence Cognitive Aging?” Annals of Neurology 75.6 (2014): 959–963. 7 September 2014.

Gannes, Liz. “Next Google Glass Tricks Include Translating the World From Your Eyes.” All Things D. 2013. 7 September 2014.

Hansen, Mark B. N. “New Media.” Critical Terms for Media Studies. Eds. W. J. T. Mitchell and Mark B. N. Hansen. Chicago: University of Chicago Press, 2010. 172–185.

Mitchell, W. J. T., and Mark B. N. Hansen. “Introduction.” Critical Terms for Media Studies. Eds. W. J. T. Mitchell and Mark B. N. Hansen. Chicago: University of Chicago Press, 2010. vii–xxii.



Early pop-up books were made purely of paper: each time a child opened a page, an architectural piece of art jumped out of the book. In 2014, this looks much different. Bridging Book has established a new mixed-media book for children that combines book and eBook. To experience the digital-print combo, parents have to equip their kids with the picture book, the app and a simple touchscreen device. The book is placed below the device so that the contents stay synchronized. When the kid flips through the print pages, the pad extends the book’s illustrations with animations, short texts and sound. Cool, eh? See the promotional video below:

Paper becomes interactive – how does it work?

Magnet is the magic word. The Portugal-based engageLab incorporates magnets into the printed book that are picked up by the tablet’s built-in compass. The software identifies the different strengths of the magnets from the sensor, and “when a page is flipped, the new cumulative sensor’s reading will be decoded as a page number” (Figueiredo, Pinto, Branco, Zagalo, Coquet 571).
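The decoding idea can be sketched in a few lines. Assuming (hypothetically) that each page carries a magnet of a distinct strength, the cumulative field measured by the tablet’s compass maps to the number of pages currently flipped. The field values and the nearest-match lookup below are illustrative, not engageLab’s actual implementation:

```python
# Hypothetical sketch of the Bridging Book page-detection idea:
# each flipped page stacks one magnet's field on top of the others,
# so the cumulative magnetometer reading encodes the current page.

# Assumed per-page field contributions in microtesla (invented values).
PAGE_FIELDS = [12.0, 9.5, 7.0, 5.5, 4.0]

# Expected cumulative readings: 0 pages flipped, 1 page flipped, ...
CUMULATIVE = [sum(PAGE_FIELDS[:i]) for i in range(len(PAGE_FIELDS) + 1)]

def decode_page(sensor_reading):
    """Return the page number whose expected cumulative field
    is closest to the current magnetometer reading."""
    return min(range(len(CUMULATIVE)),
               key=lambda page: abs(CUMULATIVE[page] - sensor_reading))

# A noisy reading of ~21.4 µT sits closest to 12.0 + 9.5 = 21.5,
# so it decodes as "two pages flipped".
```

Nearest-match decoding also explains why the paper stresses *cumulative* readings: the sensor never sees one magnet in isolation, only the sum of everything currently stacked under it.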


Bridging Book: Magnetic System

Bridging Book is still in production and not yet for sale but the concept presents a new form of reading and goes beyond what is offered by simple interactive eBooks. Or not?

A real book combined with an electronic version sets itself apart from children’s amusements that come with a digital device alone. The engageLab’s researchers do a good job and deserve credit for it. However, children are likely to grow accustomed to this special reading experience coupled to two different media. It can indeed be great fun when moving things jump out of a normal book. But what happens when it comes to reading a standard book, just binding and pages? Children might get bored or even confused on finding out that the inside consists only of letters or “dead” pictures. Are they even able to fall in love with a book that doesn’t extend its content interactively?

Interactive reading in general

Classic books automatically let readers create pictures in their minds and spin the story further in their own thoughts. Digital ones, by contrast, tend to prevent the reader from producing “imagination and empathy” (Sharabi). Book apps for children sometimes have more the character of a game than of an engagement with the book itself. Kids merely learn how to tap on the tablet: press the balloon to change its color. Doubtlessly, the apps are exciting for children and, in the 21st century, they are unavoidable. Confronting the young generation with new media objects can also be advantageous, since they quickly learn how to deal with the technology and may face fewer difficulties in adult life.

Obviously, interactive books can be quite helpful, but, back to the topic: do they really engage kids in a story? The British children’s book writer Julia Donaldson says no. When she was asked to create an eBook version of her most popular work, The Gruffalo, she simply refused:

“The publishers showed me an ebook of Alice in Wonderland. They said, ‘Look, you can press buttons and do this and that’, and they showed me the page where Alice’s neck gets longer. There’s a button the child can press to make the neck stretch, and I thought, well, if the child’s doing that, they are not going to be listening or reading, […]“. 

There are endless debates over whether or not interactive elements limit children’s ability to read. In the end, it is up to the parents whether they want to provide their juniors with new digital innovations. Fact is, proper books will never have the same aura for the young generation as they had for older ones. That is because we are living in a modern world that overwhelms us with permanent novelties. Still, mixed-media innovations such as Bridging Book show an attempt to remain faithful to old media while at the same time preparing for new media. Bridging from the traditional physical book to the digital world of wonders can therefore be an attractive alternative to simple eBooks.




Figueiredo, Ana Maria, Pinto, Branco, Zagalo, and Coquet. “Bridging Book: A Not-So-Electronic Children’s Book.” ACM Digital Library. 2013. 12 September 2014.

Rustin, Susanna. “Gruffalo Author Julia Donaldson Tells Why She Vetoed an Ebook.” The Guardian. 2011. 12 September 2014.

Sharabi, Asi. “Tablets Make It Nearly Impossible for Kids to Get Lost in a Story.” The Atlantic. 2013. 12 September 2014.

The engageLab. 12 September 2014.

The Bridging Book. 12 September 2014.

Vimeo. 12 September 2014.

“We all pass away sooner or later”, states the opener of’s webpage, only to continue with a rather surprising statement: “But what if you could be remembered forever?” With the data of your old online communication,’s algorithms promise to create a version of you that lets the non-deceased communicate with this fresh-but-old rip-off entity. Is selling zombies?


’s USP: the simple task of becoming immortal







Talking to the dead is not a phenomenon of the internet age. Be it shamanistic journeys or the conjuring of ghosts – every era has had its practices of trying to communicate with the deceased. tries to tackle the certainty of life’s end with the industrious collection and recombination of traces left behind: “ collects almost everything that you create during your lifetime, and processes this huge amount of information using complex Artificial Intelligence algorithms.

Then it generates a virtual YOU, an avatar that emulates your personality and can interact with, and offer information and advice to your family and friends, even after you pass away.”
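 does not disclose its algorithms, so any concrete example is speculative. But the basic idea of recombining “traces left behind” can be illustrated with something as simple as a first-order Markov chain over a person’s archived messages; the tiny archive below is invented for the sketch:

```python
import random
from collections import defaultdict

# Toy illustration of "recombining traces": stitch new utterances out of
# someone's archived messages. The data is invented;'s real
# algorithms are not public and are surely far more elaborate.
archive = [
    "i will call you tomorrow after work",
    "call me when you get home",
    "i will be home after work tomorrow",
]

# Build a first-order Markov chain: word -> list of observed successors.
chain = defaultdict(list)
for message in archive:
    words = message.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)

def recombine(start, length=6, seed=0):
    """Walk the chain from `start`, producing a recombined utterance."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        successors = chain.get(words[-1])
        if not successors:
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(recombine("i"))  # a sentence the person never wrote, built from ones they did
```

The point of the toy is exactly the uncanniness discussed above: every transition in the output was really observed in the archive, yet the sentence as a whole was never uttered by the person it imitates.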

This proposal of ghost-communication tries to monetize a narcissistic fear: ‘it will be a dread when I’m gone. Dreadful for those whom I leave behind, those who will then forget me’. Looked at from the point of view of those left behind, has a unique selling proposition: to fill a (perhaps sudden) gap of yawning absence, leaving but the need to have it removed.

This exact phenomenon was addressed in the 2013 British TV series Black Mirror. The second season’s first episode revolves around the sudden death of Ash and his partner Martha’s process of coming to terms with it. At Ash’s funeral, the film takes on the view of a rationalist non-believer (much like the protagonist of Woody Allen’s latest, Magic in the Moonlight), when Martha’s friend Sarah begins to make an offer:

The film shows what Sarah negates in saying “it’s not some crazy spiritual thing”: it appears to be hocus pocus to speak with the dead. When in fact it is nifty technology, which has to work like a charm to make ‘magic’ happen. This was already the case in 1789, when attempts were made to convince people of the presence of a ghost (Gaderer, pp. 25) with the use of electricity, smoke and a magic lantern. The workings of the process had to be hidden, in order not to disturb the immersion of the respective ghost-conjuring attendees. In’s case, it is new media technology, a.k.a. algorithms, that tries to replicate a person through this technique of excessive mimetic approximation in order to create presence. In trying to reverse certain aspects of death, that is, to reverse parts of the absence of a person, it strives for the creation of the immediate and, in so striving, remains irremovably tied to the mediate, which it can never leave behind. The result is bound to be paradoxical: the fabrication of presence of that which is utterly absent does not only fabricate presence, but at the same time co-evokes the absolute absence of that which is made present (and this, of course, is Derrida speaking).

A certain kind of Uncanny Valley

In an essay on the ‘corpse inside the wax figure’ (my free translation; the article was published in German), media historian Bernhard Siegert argues that, in a wax figure, it is the signifier’s uncertain status between alive and dead which causes its uncanniness. In its ever-changing status, death itself is co-referred to by the oscillating signifier. And just as the excessive mimesis of the wax figure causes the real to show through (cf. Siegert, pp. 118),’s mimetic subjects are doomed to an automatic deconstruction, never able to shake off the death of what they signify, which constantly shows through.

One should thus differentiate between immortality as advertised by and what is ultimately created: an automatized, reactive and recombinative index of a former online presence. Then again, it really could create the realistic experience of communicating with the dead. The algorithmic entity would have to pass the Turing test and be met by the willingness of the respective user (here Christiane Voss’ theory of the lending body – again, sadly only available in German – could be brought into the discussion), but, like its work-alike lives.on (“If your heart stops beating – you’ll keep tweeting”), could then have created a money-generating zombie that comes from the dead and feeds on the living. Or as Sarah from the Black Mirror clip puts it: “I know he’s dead, but it wouldn’t work if he wasn’t.”





(1) The Homepage of

(2) The Homepage of lives.on

(3) Derrida, Jacques: Of Grammatology. Baltimore: Johns Hopkins University Press 1997.

(4) Gaderer, Rupert: Heimliche Technologien des Unheimlichen [translates into: secret technologies of the uncanny, transl. by me]. In: XING 12/09, pp. 25-31.

(5) Siegert, Bernhard: Die Leiche in der Wachsfigur. Exzesse der Mimesis in Kunst, Wissenschaft und Medien.
In: Peter Geimer (Hg.): Untot. Verhältnisse von Leben und Leblosigkeit. Berlin: Kadmos 2007, pp. 116-139.

Twitter’s launch of the live-streaming app Periscope last March was received with much enthusiasm by those who believe it will empower reporting and citizen journalism.

Live-streaming apps are not something new. A few months before Periscope’s launch, Meerkat, another live-streaming app, was released to the market. Bambuser, which allows users to live-stream videos from their mobile phones and webcam-equipped computers, has been available since 2007.

But Periscope is a big deal because Twitter launched it. The social networking site numbers 316 million active users per month, which gives it an edge over its competitors. As of 2 August, the number of Periscope subscribers already exceeded 10 million.

A few hours after Periscope’s launch on 27 March, users were live-streaming an explosion that rocked a building in New York City as firefighters rushed to the scene. Shortly afterwards, opinion pieces on how Periscope will change the internet and the news industry were all over the web.

Owen Williams, a reporter at The Next Web, described watching the explosion on Periscope as “more authentic than watching the TV news could ever be”.  The app, he said, “has begun to transform the way that news can be accessed and consumed overnight”.

The fact that today anyone owning a smartphone can have their own broadcasting channel with them wherever they go, seems like a revolutionary step towards the decentralization of media power, concentrated in the hands of very few corporations and governments around the world.

In the global south, where media outlets are usually located in capitals and big cities, residents of rural areas are now able to broadcast their stories to local and international audiences. And under regimes that impose media blackouts, activists and reporters can use the app and similar tools to live-stream protests.

A number of reporters have already tested Periscope to broadcast video reports and stories. BILD reporter Paul Ronzheimer broadcast the journey of a group of Syrian refugees from the Greek island of Kos to Germany. In Nepal, BBC journalist Nicholas Garnett live-broadcast damage in the Nepalese village of Sipaghat after April’s devastating earthquake. In late April, the Guardian’s Paul Lewis live-streamed interviews with Baltimore residents and community members as their city was being ravaged by riots.

Despite the above-mentioned examples of how Periscope can allow journalists to easily and cheaply live-broadcast compelling stories, skepticism remains as to whether such apps will actually change the news industry.

Technology journalist Mic Wright wrote that “Periscope won’t change the world.” “As odd as it may sound, live video of a fire, an explosion or a protest isn’t the story, it’s a catalyst for a story. We need analysis and thought to be introduced before something becomes news. Just being present is not enough,” he added.

As millions of videos, images, posts, and tweets are produced and distributed daily, internet users do not need another app or social networking site as much as they need high-quality content that provides context and analysis to the news.

“See the world through someone else’s eyes” is Periscope’s motto. Like any other social media network, it is up to users to decide what they are going to do with the app and what they are going to broadcast to their followers. Most Periscope subscribers have been using the app to share details of their personal lives: what they are having for dinner, or videos of their cats doing silly things.

This, however, does not mean that journalists should or could not use the app in their work. After all, apps and social networking sites do not produce (great) content by themselves.

Today, more and more people, particularly youth, access the internet through their smartphones. By 2019 in Africa, internet use on mobile phones is expected to increase 20-fold. According to a 2015 study from the Pew Research Center, 92% of US teens report going online daily, an access facilitated by mobile devices. Nearly three-quarters of them own a smartphone.

Journalists can no longer ignore the possibilities live-streaming applications present not only to produce stories and media reports but also to reach out to large and diverse audiences.

This is why it is important that journalists and bloggers obtain professional training in how to use Periscope and similar tools in the newsroom, and that journalism schools adapt their curricula to include mobile journalism courses.

For David Cameron, lecturer in communication at Charles Sturt University, “some of the issues to be considered [in mobile journalism curricula] will be the training of students to understand the technical and practical parameters of producing content for mobile delivery, the nature of mobile media audiences, and the development of cross-platform content.”


Twitter. 2015. Twitter Inc. 13 September 2015.

Periscope. “Periscope, by the numbers.”Medium. 2015. 12 September, 2015.

Williams, Owen. “Periscope and live video are changing the internet forever”. The Next Web. 2015. 12 September 2015.

BILD. “Live-Übertragung einer Flucht aus der Hölle.” 2015. 28 August 2015.

Garnett, Nicholas. “Periscope from remote Nepalese village of Sindhupalchok.” YouTube. 30 April 2015. 12 September 2015.

Lewis, Paul. “The Baltimore riots: the night on Periscope – video.” 2015. The Guardian. 12 September 2015.

Wright, Mic. “Periscope won’t change the world – but it appeals to journalists’ vanity.” 2015. 12 September 2015.

Smith, David. “Internet use on mobile phones in Africa predicted to increase 20-fold.” 2014. The Guardian. 13 September 2015.

Lenhart, Amanda. “Teens, Social Media & Technology Overview 2015.” 2015. Pew Research Center. 13 September 2015.

Cameron, David. “Mobile Journalism: A Snapshot of Current Research and Practice.” The End of Journalism? Technology, Education and Ethics Conference. 17th-18th October 2008. University of Bedfordshire, UK.–files/davidcameron/David%20Cameron.pdf




The financial crisis of 2008 marked the advent of the sharing economy, whose motto “access trumps ownership” (The Economist) came across as a good way to deal with a crisis that had occurred largely as a result of excessive consumption (Henten and Windekilde 5). But to describe the sharing economy, it is necessary to look at different definitions, since it is an emergent marketplace that has been developing and changing continuously.

Sundararajan describes the sharing economy as a combination of both commercial and social activity (39). In other words, people do not only provide and buy goods and services, but also establish closer connections, maybe become friends and share further interactions. This social and cultural role of the sharing economy is one of the most important features that distinguish it from earlier marketplaces. Moreover, Botsman in her TED talk points to another important aspect of the sharing economy by describing it as “social and economic activity driven by network technologies”. Thus, new media technologies facilitate the most essential activities of the sharing economy.

According to Sundararajan, there are five distinctive characteristics of the sharing economy. First of all, the sharing economy fosters economic activity by making both goods and services available for exchange. This leads to “high-impact capital”, meaning that every asset can be capitalized and used to “their full capacity”. As the variety of assets – including both goods and services – is expanded, the provision of money and workforce becomes “decentralized”. Furthermore, the expansion of the types of assets provided by the sharing economy also weakens the boundary between “personal” and “professional”. Many peer-to-peer activities that are regarded as “personal”, such as sharing your house or giving someone a ride, have become professional activities that help people make a profit. Lastly, unlike in traditional markets, the type of labor practiced in the sharing economy does not necessarily require long-term responsibilities and a “continuum”, which effaces the distinction between work and leisure (Sundararajan 40).

Considering these features of the sharing economy, it is clear that its successful platforms have brought several advantages. Botsman and Rogers express the value of the sharing economy as “the enormous benefits of access to products and services over ownership, and at the same time save money, space, and time; make new friends; and become active citizens again” (qtd. in Henten and Windekilde). Apart from these, the sharing economy has also introduced new transaction options that reduce costs. For instance, with the help of network technologies, it is considerably easier to search for the right place to stay or find the most convenient means of transportation.

However, the sharing economy has also engendered risks, mainly caused by trust issues. Ert et al. suggest that because services provided by sharing economy platforms are “produced and consumed simultaneously” (63), people cannot know what to expect, so there is a financial risk. And as every step in this exchange process takes place online, people risk more than money, which increases the importance of trust immensely. Staying in a complete stranger’s house, or sharing a ride with someone we do not know, raises the question of safety.

As Sundararajan indicates, the definition of trust can change according to context, so he proposes James Coleman’s definition as the most suitable one: “a willingness to commit to a collaborative effort before you know how the other person will behave” (79). What causes people not to trust someone is therefore mainly information asymmetry, which refers to a situation where the different sides do not have the same amount of knowledge (Finley 17). Thus, the more people know about what kind of service they get, and from whom, the more trust can be built.


Airbnb as a case study

To understand how platforms overcome this information asymmetry, we decided to look into one of the most prominent pioneers of the sharing economy, namely Airbnb. Airbnb identifies itself as “a trusted community marketplace for people to list, discover, and book unique accommodations around the world” ( In simpler terms, Airbnb encourages people to invite strangers into their homes and to stay at strangers’ places, with relatively unknown consequences. Given that the people – or guests and hosts – who engage in such activities get to know and trust each other exclusively through Airbnb’s mediation, we argue that studying this platform will help us understand how trust is established in such newly evolved marketplaces.

To overcome information asymmetry, Airbnb provides several mechanisms: a reliable reputation system built on reviews, trustworthy pictures and visuals, smooth transactions, and the option to incorporate social media accounts – all effective ways to reduce the information asymmetry.

In the aforementioned self-description, Airbnb itself also recognizes that trust plays a major role in the platform’s operations. Joe Gebbia, one of Airbnb’s co-founders, particularly regards reputation as the key factor for building trust between hosts and guests (TED). He believes that a high reputation based on reviews helps to overcome the natural social biases that people tend to have toward strangers. To take it a step further, reputation can nowadays be considered to serve “not only as a psychological reward or currency, but also as an actual currency – called reputation capital” (Botsman and Rogers 337). According to Botsman, reputation capital is gained through participation in collaborative consumption, and the more reputation capital we earn, the more we can participate (338).


It is true that highly ranked Airbnb hosts get more reservations, and that guests with positive reviews are less likely to have their booking requests cancelled (Newman and Antin). Yet one cannot help but notice that novice users who lack an established reputation can also engage in transactions and reap the benefits of this sharing community. Thus, it is reasonable to assume that other factors allowing individuals to overcome trust barriers and engage in home-sharing interactions may also be at play here.
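One common way reputation systems in general soften this cold-start problem (whether Airbnb uses exactly this is not something the company documents) is Bayesian smoothing: a newcomer’s score is pulled toward a platform-wide prior until enough reviews accumulate, so novices start with a credible middle-of-the-road reputation rather than none at all. A minimal sketch with invented numbers:

```python
def smoothed_score(ratings, prior_mean=4.5, prior_weight=10.0):
    """Bayesian-average reputation score: with few ratings the result
    stays near the platform-wide prior; with many, it approaches the
    user's own mean. The prior values here are illustrative, not Airbnb's."""
    n = len(ratings)
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

# A new host with a single 5-star review is pulled toward the prior...
newcomer = smoothed_score([5.0])          # (10*4.5 + 5) / 11  ≈ 4.55
# ...while an established host's score reflects their own history.
veteran = smoothed_score([5.0] * 200)     # (45 + 1000) / 210  ≈ 4.98
```

The design choice is the point: the prior acts as a stand-in reputation for users who have none yet, which would help explain why transactions with novices still happen.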


Visualising the development of trust in Airbnb

In an attempt to conveniently convey the possible factors that contribute to the establishment of trust, a visual timeline seemed like a logical solution. After all, building trust is likely to be a continuous process that also takes time.

The resulting trust timeline encompasses the period from the creation of Airbnb’s first prototype (Designer’s IDSA Connecting Guide) in October 2007, when co-founders Joe Gebbia and Brian Chesky rented out airbeds in their apartment to visitors of the event, until the present. It comprises:

  • screenshots of Airbnb’s homepage and other important subpages to capture interface changes;
  • newly introduced or changed features and policies of the platform;
  • some of the most notable problems Airbnb encountered in the course of its development, and the company’s responses.


With regard to the last point, we focused on incidents that went public and could potentially damage the company’s profile globally.

Additionally, some other interesting points, such as milestones in booked nights reported by Airbnb, the emergence of , and the time of the platform’s rebranding, were included to provide context.


We offer several paths through which to explore the timeline. These pre-created narratives focus on specific aspects of building trust – through visuals, through added functionality and rules, and through the company’s reactions to different events. The timeline also allows users to see the whole picture of the events “negatively” affecting Airbnb and Airbnb’s “positive” reactions. This view is accessed by pressing the “Show overview” button. From here, users can also zoom into particular elements themselves and move between slides in any order. We believe such free interaction with the timeline will allow users to develop their own view on the meaning of the various developments that took place at Airbnb over time, independently establish possible correlations, and form their own opinion about the company and its dealings with trust.

The timeline is published publicly on the Masters of Media blog and the Prezi platform, and is thus accessible to anyone interested in how trust relations are established in the sharing economy, and how this works at Airbnb in particular. It can also be seen as a tool that could help other companies understand what constitutes participation in the sharing economy and what sort of problems should be anticipated.

Chronological Timeline

Features and Policies

Evolution of Design

Incidents and Responses



The timeline shows how a fast-growing platform such as Airbnb copes with all the elements that relate to trust. Looking at the timeline, we see several moments in time where problems with Airbnb surfaced, for example the death of a woman due to carbon monoxide poisoning, or the stories of vandalized apartments. Some time afterwards, new additions were made to Airbnb that might have solved some of the stated problems. It thus seems plausible that the level of trust towards Airbnb is in constant flux.

When we looked at the design changes made to Airbnb’s homepage over the years, it became obvious that photography has grown into a central element. While in the beginning the images were produced by users, the current website uses only high-quality material, and these visuals take up far more space on all of the pages than before. This seems to coincide with the theory of “visual-based trust” (Ert et al.), which has focused on the effect of imagery on user perception of the platform. That research suggests that using a substantial amount of images would meet hosts’ needs for personal interaction (69).

Our research also revealed several developments that were introduced to the platform in response to unfortunate Airbnb experiences that gained a lot of media attention. For instance, after a blog post about a trashed apartment went viral, Airbnb added a Trust & Safety section to its website and introduced its Host Guarantee policy. And while Airbnb’s reaction to such events may seem appropriate, it nevertheless raises the question of why the company had not been proactive in preventing certain disasters in the first place.

During the research, we also noticed that in the last two years Airbnb has incorporated the pages devoted to safety into “trust pages”. This seems to tell the world that, for Airbnb, trust is the main element in creating a safe environment for its users. At the same time, it must be noted that Airbnb has done a lot to improve safety.



We recognize that the timeline is far from an exhaustive record of all the events that could influence trust. We had to rely on our own judgment in assessing what information was to be put on the timeline and what had to be left out in view of our research topic. Moreover, as some things are not disclosed by Airbnb, we could only use information that has made its way out into the open.

It is also difficult to determine whether there actually is any cause and effect relation between two seemingly connected events on the timeline. Establishing this as a fact would only be possible through people involved confirming the story and backing it up with evidence. In the meantime, we can only speculate.

Even though we consider Prezi to be one of the most useful tools for creating presentation-based visualisations in a short period of time, we also recognize its limitations which were reflected in the aesthetics and functional capabilities of the timeline. A diverse colour palette, more text editing options and broader slide customization features could have all helped to produce richer and clearer narratives.



“About Us.” Airbnb. 18 October 2016.

“All Eyes on the Sharing Economy.” The Economist. 9 March 2013. 6 October 2016.

Botsman, Rachel, and Roo Rogers. What’s Mine Is Yours: The Rise of Collaborative Consumption. HarperCollins e-books, 2010.

Botsman, Rachel. “The Currency of the New Economy Is Trust.” TED. June 2012. 21 September 2016.

Ert, Eyal, Aliza Fleischer, and Nathan Magen. “Trust and Reputation in the Sharing Economy: The Role of Personal Photos in Airbnb.” Tourism Management 55 (2016): 62–73.

“Designer IDSA’s Connecting Guide.” Internet Archive Wayback Machine. The Internet Archive, 2016. 11 October 2007.

Henten, Anders, and Iwona Maria Windekilde. “Transaction Costs and the Sharing Economy.” info 18.1 (2016): 1–15.

Newman, Riley, and Judd Antin. “Building for Trust: Insights from Our Efforts to Distill the Fuel for the Sharing Economy.” Airbnb Engineering. 29 March 2016. 24 September 2016.

Sundararajan, Arun. The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism. Cambridge: The MIT Press, 2016.

TED. “How Airbnb Designs for Trust | Joe Gebbia.” YouTube. 5 April 2016. 20 September 2016.