From Media Landscape to Media Ecology
The Cultural Implications of Web 2.0
December 31, 2007
In this essay, media philosopher Martijn de Waal examines the implications of the rise of Web 2.0 for the public sphere and its democratic content. Who decides what is of value in the new media ecosystem, and how do these processes of valorization take place?
Over twelve months ago, the Francophone press in Belgium filed a lawsuit against Google News. The papers wanted to prevent their news reports from automatically appearing on Google’s web pages. The issue at stake was this: should ‘news aggregators’ – ‘News 2.0 services’ in the buzzword lingo of internet watchers – be allowed simply to pluck headlines from newspapers, weblogs and other news providers and then rearrange them on their own sites using an often secret algorithm? This is not just a copyright issue. Equally important is the corresponding cultural debate as to who should be in charge of bringing order to the media landscape. Who is to control the public sphere? Who is to determine what is of value in it? Is this the preserve of experts and professionals like journalists? Or would it actually be more democratic to leave the process to Google’s computer algorithms and Wikipedia’s egalitarian peer-to-peer networks?
With the rise of Web 2.0,1 the traditional gatekeepers of the public sphere are facing competition from new players. On the one hand there is the ‘collective intelligence’ of ‘aggregators’ like Google News, Nujij.nl, Digg and Newsvine. On the other hand, the position of traditional experts is being undermined by ‘collaborative intelligence’ systems such as Wikipedia, in which media users cooperate in an egalitarian manner. What do these developments mean for the public sphere and for processes of ‘valorization’? What is the precise role of technology in these processes? Is this – as Web 2.0 gurus and entrepreneurs frequently maintain – really a question of democratization? Are we entering the era of smart mobs, adhocracies and issue politics?
From a Media Landscape to a Media Ecology
The basic premise of this discussion is that the hierarchical and centralist architecture of the media landscape is turning into a more decentralized peer-to-peer network, a media ‘ecosystem’. That being so, it is time to refine the linear flow chart of the media landscape usually found in media and communications studies handbooks with a dash of chaos theory. Traditionally, the media production process is depicted as a chain in which the individual links are connected by arrows pointing to the right. Media ‘content’ (or, more broadly, cultural product) is produced in institutional environments after which this content is ‘packaged’ (for example, by broadcasters and publishers), then distributed, and finally consumed.
Insights derived from cultural studies have taught us that in each segment of this chain, ‘encoding’ and ‘decoding’ processes take place. The encoding processes on the left-hand side of the chain have their origins in institutional contexts with their associated professional codes and cultures, or are prompted by economic considerations like shareholder profit maximization or ideological motives. On the right-hand side of the chain, the decoding process takes place from the perspective of specific cultural identities. Based on their experience, the public invests the message with meaning, while at the same time those meanings help to shape their experiences and identities.
Developments in the media landscape have changed this process in at least two important ways. Firstly, the scarcity in this system has decreased, thanks to increasing access to cheap production methods and distribution networks, resulting in an extra link in the chain immediately prior to consumption: the ‘filter’. In the scarcity system, the supply is determined by the gatekeepers, who work for the ‘packagers’. In a system without scarcity, the supply is more or less unlimited, but a filter mechanism (search engine, portal, Amazon algorithm, the long tail, social networks, collective intelligence) matches supply to the demand of the media consumer.
Some of these filters are devised by institutional organizations (commercial publishers, public broadcasters) with interests of their own. Others are the result of feedback data from media usage. Every book ordered from Amazon has an impact on the lists of ‘personal recommendations’ presented to subsequent buyers. And every link from a blog to an article in a newspaper raises that paper’s ‘page ranking’ in Google and thus its visibility and potential authority. This process is also known as ‘collective intelligence’.
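The underlying feedback mechanism can be illustrated with a deliberately simplified sketch that treats each incoming link as an implicit vote – a stand-in for Google’s far more sophisticated, and secret, ranking algorithm (the link data below is hypothetical):

```python
from collections import Counter

def rank_by_incoming_links(links):
    """Rank pages by how many other pages link to them.
    Each link a blogger places acts as an implicit vote,
    raising the target's visibility and potential authority."""
    votes = Counter(target for _source, target in links)
    return [page for page, _count in votes.most_common()]

# Hypothetical link data: three blogs linking to two targets.
links = [
    ("blog_a", "newspaper"),
    ("blog_b", "newspaper"),
    ("blog_c", "niche_site"),
]
print(rank_by_incoming_links(links))  # ['newspaper', 'niche_site']
```

No editor decides that the newspaper outranks the niche site; the ordering is a by-product of many individual decisions to link.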
The second change concerns the process of decoding, at the far right of the chain. In the traditional media model that process was chiefly confined to the private or parochial sphere; now it has become part of the media chain – and the public sphere – both directly and indirectly. Directly, because all manner of interpretations, ‘remixes’ and commentaries on media products are now part of the media landscape via blogs or YouTube. In Convergence Culture, Henry Jenkins explains how the roles of cultural producer and cultural consumer are steadily converging. He even alludes to the emergence of a new ‘folk culture’, a cultural system in which narratives have no definitive form but are continually being retold.2 Encoding becomes in effect a process of ‘recoding’ that produces new content, which can in turn be ‘packaged’, filtered and consumed. This process could also be called ‘collaborative intelligence’.
All of which means that it would be more accurate to talk about a media ecology than a media landscape. Whereas a landscape is a metaphor that conjures up a static image, ecology does justice to the notion of a system that is in a state of flux.
Traditional Authorities Versus ‘Those People in Pyjamas’
How and where in such a media ecology is it decided what is of value, which cultural products ‘matter’? In systems of collaborative intelligence, users work together on the basis of equality to create meaning and compile knowledge. Wikipedia and open source software like Linux are perhaps the best-known examples of such systems. Charles Leadbeater has dubbed this phenomenon ‘We-Think’: ‘In the We-Think economy people don’t just want services and goods delivered to them. They also want tools so that they can take part and places in which to play, share, debate with others.’3 There is a caveat to this, of course. Such a system only works as long as the participants trust one another, accept one another’s knowledge, or at any rate are prepared to discuss it. Because whose opinion counts when there are conflicting views?
It is no accident that most of these systems are subject to new forms of institutionalization of expertise and reliability.4 The best-known examples are reputation systems like the ones that operate on online marketplaces like eBay. Then there are the ‘karma’ rating systems such as the one introduced on the Slashdot website. Writers and commentators can earn karma points by contributing to the community. Contributions from visitors are also rated, and visitors can in turn filter contributions according to their rating.
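A toy sketch of such a rating-and-filtering mechanism (the class, scores and threshold below are hypothetical, not Slashdot’s actual implementation) might look like this:

```python
class Community:
    def __init__(self):
        self.karma = {}   # writer -> accumulated karma points
        self.posts = []   # (writer, text, average rating)

    def contribute(self, writer, text, ratings):
        """A contribution is rated by visitors; its average rating
        feeds back into the writer's reputation ('karma')."""
        score = sum(ratings) / len(ratings)
        self.posts.append((writer, text, score))
        self.karma[writer] = self.karma.get(writer, 0) + score

    def filter_posts(self, threshold):
        """Visitors can hide contributions rated below a chosen level."""
        return [(w, t) for w, t, s in self.posts if s >= threshold]

c = Community()
c.contribute("alice", "thoughtful analysis", [5, 4, 5])
c.contribute("bob", "flame bait", [1, 2, 1])
print(c.filter_posts(threshold=3))  # only alice's post survives the filter
```

Expertise here is not accredited in advance but accumulated: reputation is the residue of past contributions as judged by peers.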
The expert paradigm, in which experts accredited by official bodies determine what is true and what is not, is being replaced here by a more meritocratic system in which what counts is proven expertise rather than institutional embeddedness. A new balance will gradually emerge, and new collective forms of canonization. In a recent discussion on Edge.org about authority on Wikipedia (quoted by Henk Blanken on the Nieuwe Reporter blog), Gloria Origgi wrote: ‘An efficient knowledge system like Wikipedia inevitably will grow by generating a variety of evaluative tools: that is how culture grows, how traditions are created. What is a cultural tradition? A labelling system of insiders and outsiders, of who stays on and who is lost in the magma of the past. The good news is that in the Web era this inevitable evaluation is made through new, collective tools that challenge the received views and develop and improve an innovative and democratic way of selection of knowledge. But there’s no escape from the creation of a “canonical” – even if tentative and rapidly evolving – corpus of knowledge.’5
This is not to say that the role of traditional gatekeepers and the mass media is played out. The various systems of ‘peer-to-peer co-production’ that are emerging in different places are not isolated but linked to one another in a layered model. Henry Jenkins foresees the emergence of a model in which the traditional mass media and the new niche or amateur media enjoy a symbiotic relationship. The mainstream media still manage to reach large groups in society and continue to exert considerable influence on the public debate. They provide for shared cultural frameworks, they establish cultural symbols. A great deal of cultural production occurs bottom-up in the ‘grassroots media’, which by definition focus on a small group of fans or, conversely, critical users. Those grassroots media can also act as a control mechanism on the mass media. If the mainstream media abuse their authority, this can be raised in the niche media.6 But at the end of the day, such criticism still needs to be validated by the mainstream media. Bloggers may keep a critical watch on CBS news broadcasts and discover that a negative story about George W. Bush’s National Guard record is based on forged documents, but anchorman Dan Rather only stands down when the New York Times picks up the report and in so doing validates it. ‘Those people in pyjamas’, as a CBS senior executive initially described the bloggers, thereby implying that their allegations were not to be taken seriously because they did not belong to the professional media, are perfectly capable of bringing matters to the attention of the public. But when it comes to validation, the mainstream media are for the time being indispensable. ‘Broadcasting provides the common culture, and the Web offers more localized channels for responding to that culture,’ according to Jenkins.7
A more layered model than Jenkins’ dichotomy between mass media and niche media can be found in Yochai Benkler’s The Wealth of Networks, in which he explains how, in the media ecology of the internet, the production, distribution and valorisation of ideas and meaning proceeds via a complex and graduated process. A small number of sites attract a large public, he states, while the vast majority of sites appeal to a very limited public. Discussions between peers may well take place on such niche sites, but many of those niche websites are in turn monitored by sites that appeal to a wider public, the so-called ‘A-list bloggers’. When they flag something interesting on a niche site this triggers a sudden flurry of visits to the site in question. That high level of interest may ebb away after a while, but it’s not unknown for a niche site to evolve into a new authority. Benkler: ‘Filtering, accreditation, synthesis and salience are created through a system of peer review by information affinity groups, topical or interest based. These groups filter the observations and opinions of an enormous range of people and transmit those that pass local peer review to broader groups and ultimately to the polity more broadly without recourse to market-based point of control over the information flow.’8
Alongside this layered system of various forms of peer production, processes of ‘valorisation’ also take place in systems of ‘collective intelligence’. Collective intelligence is not the result of deliberate collaboration, but is a by-product of other processes – in systems theory it is known as ‘emergence’. In a discussion on Edge.org, Benkler describes how it works: ‘Take Google’s algorithm. It aggregates the distributed judgments of millions of people who have bothered to host a webpage. It doesn’t take any judgment, only those that people care enough about to exert effort to insert a link in their own page to some other page. . . . It doesn’t ask the individuals to submerge their identity, or preferences, or actions in any collective effort. No one spends their evenings in consensus-building meetings. It merely produces a snapshot of how they spend their scarce resources: time, web-page space, expectations about their readers’ attention. That is what any effort to synthesize a market price does.’9
Social bookmarking systems like Del.icio.us, media-use analysis software such as can be found at Last.fm, and the kind of long-tail implementations offered by Amazon.com work in a similar way. In Pop-up, authors Henk Blanken and Mark Deuze call this the ‘metacracy’: ‘The metacracy is what you get when mathematical algorithms elevate the wisdom of the masses to the norm. . . . The power of the media shifts to the faceless masses. New “social software” will compile the news for us the way we want it, before we even knew that was how we wanted it. The successors of Digg and Google will know our preferences, our weaknesses and our passions and put together a media menu that satisfies our taste and expectations.’10 Authority develops in the process of what has been called ‘collaborative filtering’: an aggregated analysis of the activity of every node in the network. Just as the ‘invisible hand’ of the market economy determines the ‘right’ price for every product, so the ‘collective intelligence’ spawned by a combination of social networks and computer algorithms determines which articles or programmes are worthwhile, or pressing or important.
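A minimal sketch of item-to-item collaborative filtering – a toy version of the general technique, not Amazon’s actual (and proprietary) algorithm; the orders below are invented – might count how often two titles appear in the same customer’s order history:

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(orders):
    """Count how often two books appear in the same customer's
    order history; each joint purchase is an implicit vote
    linking the two items."""
    co = defaultdict(lambda: defaultdict(int))
    for basket in orders:
        for a, b in combinations(set(basket), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, book):
    """'Customers who bought this also bought...', ranked by
    how often the titles were co-purchased."""
    return sorted(co[book], key=co[book].get, reverse=True)

orders = [
    ["convergence_culture", "wealth_of_networks"],
    ["convergence_culture", "wealth_of_networks", "smart_mobs"],
    ["smart_mobs"],
]
co = build_cooccurrence(orders)
print(recommend(co, "convergence_culture"))
```

No one decides that the two titles belong together; the pairing emerges as a by-product of how buyers spend their money and attention.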
How Intelligent is ‘Collective Intelligence’?
Such developments are often presented as democratic. Thanks to these systems, it is claimed, we are on the one hand able to compile the knowledge scattered across the network (rather than having to depend on the accredited knowledge of a social elite), and on the other hand we have greater freedom when it comes to making choices within the media ecology. In this we are assisted by smart software that points us in the direction of the sorts of things that might interest us, or that these collective systems have decided are important. ‘The metacracy has obvious advantages,’ argue Blanken and Deuze. ‘It is an open system in which everyone can see what we collectively think, what the trends are, the signs of the times, and what is important.’11 And according to Leadbeater, ‘the dominant ethos of the We-Think economy is democratic and egalitarian’.12 In many of these discourses the mass media are contrastingly portrayed as aristocratic and paternalistic bastions that have completely lost touch with what people are really thinking. Typical is this quote from Wikinomics by Don Tapscott and Anthony Williams: ‘Regardless of their differences both sites [Slashdot and Digg] make most traditional news outlets look like archaic relics of a bygone era.’ To add weight to their argument, the authors go on to cite the founder of News 2.0 website Rabble.ca, Judy Rebick: ‘The mainstream media people define themselves as the arbiters of taste. . . . As long as the media think they know what’s right,’ she continues, ‘they’ll never be in a position to harness people’s collective intelligence.’13
Nonetheless, this putative democratization, or at any rate its positive implications for the public sphere, has its critics. In the first place, the feedback mechanisms in the media ecosystem can also result in collective folly or media hypes, as Steven Johnson has shown in Emergence. Johnson describes how, in the early 1990s, the Gennifer Flowers affair became a media hype despite the fact that the editors of the major American television news bulletins – the traditional gatekeepers in this media landscape – had originally decided not to devote any air time to the matter. The private life of a politician was not news, was the initial judgement. But they had reckoned without an important change that had recently taken place in the media landscape. Until the mid-1980s, the national networks delivered a series of selected, ready-to-air news items to affiliated local broadcasters. But around that time, local television stations acquired access to CNN’s video databases containing all the uncut and unused material. Whereas previously it had been New York that decided what the local stations could broadcast, now they could make their own pick – a decentralization of authorization within the network. Many local stations accordingly decided to run the news about the Flowers case. The following day all the national newscasts opened with the item – after the news had done the rounds at the local level, they could no longer ignore it.14
Geert Lovink has described a similar process. On blogs, the use of ‘snarky’ language (a ‘cynical mannerism’) provokes a lot of fuss, and thus a lot of incoming links, and thus a higher ‘page rank’. In other words, the blog culture, rather than producing intelligent debates, leads to point-scoring contests and invective. This was also one of the reasons why Dave Winer gave up blogging: ‘I don’t enjoy being the go-to guy for snarky folk who try to improve their page-rank by idiotic tirades about their supposed insights into my character.’15 So while the way filters work is determined by media use and processes of decoding, conversely this mechanism influences the process of encoding. An arresting headline on the front page of a newspaper is not the same as an easily found headline in Google, as journalists are nowadays well aware, having meanwhile attended one of the countless courses in search engine optimization. Anyone who aspires to be heard in the media ecosystem will need to adjust their language to the patented and secret rules of the search engine.
Some critics fear that processes of collective and collaborative intelligence are leading to cultural trivialization. Collaborative intelligence leads to bland compromises, collective intelligence to populism and even to tunnel vision. That may be democratic, but it is not good for society or for the quality of cultural production, say the critics. One of them, Andrew Keen, even regards the democratization that occurs in the media ecology as downright undesirable: ‘As Adorno liked to remind us, we have a responsibility to protect people from their worst impulses. If people aren’t able to censor their worst instincts, then they need to be censored by others wiser and more disciplined than themselves.’16 And that wiser entity is not a search engine or Web 2.0, but the cultural pope. ‘Without an elite mainstream media, we will lose our memory for things learnt, read, experienced, or heard.’17
In an influential essay entitled ‘Digital Maoism’, veteran internet guru Jaron Lanier argues that collaborative intelligence makes for feeble consensus formation. Drawing on his own experience of contributing to wikis, he writes: ‘What I’ve seen is a loss of insight and subtlety, a disregard for the nuances of considered opinions, and an increased tendency to enshrine the official or normative beliefs of an organization.’18
Other critics have little faith in the quality of valorisation via collective intelligence. Information professionals still see a major role for themselves in the future. ‘It’s the role of professional journalists to make a selection from the huge media supply,’ writes Geert-Jan Bogaerts (de Volkskrant’s internet manager) on his weblog. ‘In a newspaper or a radio or television newscast, connections are made that listeners or viewers would not make of their own accord.’19
A second barrage of criticism concerns the commercial character of the institutions that facilitate the media ecology. Critics like Trebor Scholz, Andrew Keen again, and David Nieborg point out that many of the Web 2.0 tools were developed by companies like Google, Amazon and internet start-ups. By setting up lists, voting on articles or commentaries, media consumers undeniably influence the media ecosystem.20 But, Nieborg wonders rhetorically, ‘the big question is, who benefits from large groups of consumers investing their precious time and insight in, say, writing reviews for the Amazon web store?’ On De Nieuwe Reporter, he argues that there is a fundamental difference between ‘consumers who generate value for companies like Amazon with their contributions’ and ‘users who write a Wikipedia entry or maintain their own blog’.21
Both Lawrence Lessig and Henry Jenkins have additional worries about copyright. The copyright system turns cultural symbols into the property of commercial institutions, and in a media ecology it obstructs the process of recoding. Authority passes to the film studios, publishers or television networks who determine which ‘recodings’ fans may legally publish.
Yet other critics point out that parallel with the rise of the media ecology is a process of media concentration. The great paradox of contemporary journalism, write the editors of the American State of the Media report, is that more and more titles tackle fewer and fewer topics. On an average day in 2005 researchers counted 14,000 references to news items on Google News. On closer analysis it turned out that those many thousands of sources dealt with only 14 different topics. A very small number of media companies and press agencies supply the content circulating around the media ecosystem. Bloggers, the report concluded, may well add new commentaries, but they add few completely new topics.22 These criticisms are not so much about the developments themselves, which (apart from media concentration) are often viewed as favourable. Rather, they concern the commercial framework within which those developments take place. Why are no public alternatives being developed in which collective and collaborative filtering systems benefit society instead of the market? And how can we prevent production in the ecosystem also falling into the hands of a small group of big media companies?
A third group of critics warns against cultural fragmentation. Origgi may predict that systems of collaborative intelligence will give rise to new democratic canons, but in the end that system rests on a willingness to engage in debate. But doesn’t the internet encourage people to simply introduce their own canon alongside existing ones? As well as Wikipedia, there is now Conservapedia, where collaborators are working, for example, on a canon of evolutionary theory from a very different perspective. Henry Jenkins warns of the need for a careful balance between mainstream and niche media. As he sees it, the decline of the mainstream media might even pose a threat to the integrity of the public sphere: ‘Expanding the potentials for participation represents the greater opportunity for cultural diversity. Throw away the powers of broadcasting and one has only cultural fragmentation.’23 Collective filters that merely register what we collectively consider important are no match for that fragmentation.
For their part, Blanken and Deuze warn that far-reaching personalization can lead to tunnel vision: ‘The successors of Digg and Google will know our preferences, our weaknesses and our passions and will put together a media menu precisely tailored to our taste and expectations. And all sorts of things will be lost as a result. Our horizons will narrow.’24
Once again, this appears to be an ethical rather than a technological issue. For supposing that this kind of intelligent software were to be developed, it would also be able to determine what we should regard as important, in the same way that newspaper editors do now, wouldn’t it? Clearly, the danger lies not so much with the technology but with ourselves – the danger that, presented with the possibility, we will indulge our narcissism.
In The Wealth of Networks, Benkler rejects such criticism. He concedes that there are two camps in the American blogosphere, the conservatives and the liberals. But, he maintains, 15 per cent of the links connect sites ‘across the political divide’.25 To which one could reply that ‘linking’ is not ‘bridging’ and counting is not the same as interpreting. A snarky link to an opponent doesn’t generate a discussion, for example, but serves rather to vindicate one’s own group.
People Power 2.0
Nonetheless, some of Benkler’s conclusions have merit. The image of the public sphere on the internet that emerges from his work is not a fixed, locatable place like the opinion page in a newspaper. This public sphere develops wherever the public happens to be and that public can converge at different points in time and in different places – usually at moments when several parties join forces around a particular issue. In a media ecology, thanks to the complex system of links and peer-to-peer groups, a crowd can be mobilized in a short period of time, a phenomenon also known as ‘adhocracy’. ‘While there is enormous diversity on the internet, there are also mechanisms and practices that generate a common set of themes, concerns and public knowledge around which a public sphere can emerge.’26
An often-cited case study of an adhocracy may help to clarify the way the ecosystem works and at the same time guard us against an overly technologically deterministic outlook. The study concerns two ‘revolutions’ that took place on exactly the same spot – the Epifanio de los Santos Avenue (EDSA for short) in Manila – 15 years apart. In 1986, President Marcos fled the Philippines after angry crowds had protested against his regime for four days. In 2001 there was another four-day demonstration on this avenue in the centre of Manila. On this occasion the target was President Estrada, who was forced to resign after the collapse of his impeachment trial for corruption.
In the first People Power movement (as the events were later labelled), the radio and a hierarchical social organization played a major role in mobilizing the crowd. On 22 February 1986, Radio Veritas, a Catholic station not under the direct control of the Marcos regime, broadcast a press conference at which two military leaders declared that Marcos had cheated during the recent presidential elections. That same day, via the popular archbishop Jaime Cardinal Sin, the radio station called on listeners to support the protest against the president and to gather on EDSA. There the demonstrators held radios clamped to their ears. And even after the section of the army that had remained loyal to the president had knocked down its main transmitter, Radio Veritas continued to play a role. Via a standby – albeit weaker – transmitter, the station continued to broadcast reports, including the latest government troop movements.
Descriptions of People Power II in 2001 usually assign a central role to the mobile telephone and to the decentralized peer-to-peer networks that can be formed with it. This, for example, is how Howard Rheingold describes the events of that year in his book Smart Mobs: ‘Opposition leaders broadcast text messages and within seventy-five minutes of the abrupt halt of the impeachment proceedings 20,000 people showed up. . . . More than 1 million Manila residents [were] mobilized and coordinated by waves of text messages . . . On January 20, 2001 President Joseph Estrada of the Philippines became the first head of state in history to lose power to a smart mob.’27 According to Rheingold, the protest rapidly escalated into a mass movement because those involved were texting messages like ‘Go 2 EDSA, Wear Black 2 mourn d death f democracy. Noise barrage at 11 pm’28 to everybody in their mobile phone address book. Telephone company Globe Telecom sent 45 million text messages that day, almost twice as many as normal.29 The network became so overloaded that telephone companies erected extra mobile transmitters around EDSA. Other decentralized ‘grassroots media’ are also credited with a role. Criticism, often in the form of parodies of Estrada, was circulated via email, and the online forum E-lagada claimed to have collected 91,000 signatures against President Estrada’s government.30
But is it really true, as Castells, Fernández Ardèvol and Qiu ask, that the uprising succeeded thanks to ‘invincible technology’ that resulted in ‘each user becoming his or her own broadcasting station: a node in a wider network of communication that the state could not possibly monitor much less control’?31 In other words, was this an adhocracy facilitated by new processes of valorisation, whereby the mobilization was the result not of an appeal by an authority via the mass media but of the collective intelligence of a smart mob?
Many of those who took part in the demonstration think that it was. In ‘The Cell Phone and the Crowd: Messianic Politics in the Contemporary Philippines’, Vicente Rafael quotes several reactions from newspapers and online discussions.32 ‘The mobile telephone is our weapon,’ said an unemployed construction worker. ‘The mobile telephone was like the fuse of the powder keg, with which the uprising was kindled.’ Another, in the same upbeat prose: ‘As long as your battery’s not empty, you’re “in the groove”, and you feel militant.’ And: ‘The information and calls that reached us by way of text and e-mail was what brought together the organized as well as unorganized protests. From our homes, schools, dormitories, factories, churches, we poured into the streets there to continue the trial [against Estrada].’
Rafael sees such comments in a broader cultural context. In the late 1990s, mobile phones became incredibly popular in the Philippines, especially after Globe introduced prepaid subscriptions with cheap text messaging. Owners talk about their phone as a ‘new limb’ with a very important property: wherever they are, they can always be somewhere else at the same time. In any given social setting they can communicate with other members of a self-selected group that is not physically present. Conversely, the telephone can be used as a unifying element during mass gatherings: ‘While telecommunication allows one to escape the crowd, it also opens up the possibility of finding oneself moving in concert with it, filled with its desire and consumed by its energy.’33 Sending text messages turns into a symbolic practice surrounded by an imagined community which in the Philippines has been labelled ‘Generation txt’. As such, sending text messages can be seen as a contemporary equivalent of waving a flag in revolutionary colours.
But were Rheingold and others right in claiming that the mobile phone represented a shift in the structure of authority? This is where we must be on our guard against technological determinism. As Castells et al. show, there are several objections to the claim that the mobile phone alone was responsible for toppling Estrada. The state’s power had already been weakened, thus reducing the government’s ability to respond to the uprising. In other countries where the state is much stronger, we see far fewer successful political smart mobs. In China, for instance, the authorities are still able to contain protest demonstrations and their effects. Another factor is the economic embeddedness of the telecom services. A strong state would probably have been able to disable the SMS network, in the same way as the Radio Veritas transmitter was knocked out in 1986. In reality, the telecom companies, who saw their SMS revenues double that day, set up extra mobile transmitters at EDSA.
So it would be going too far to identify the mobile phone and the cultural practice of text messaging as solely responsible for the revolution. That said, the kind of social networks the cell phone made possible in the cultural, political and economic conditions of the Philippines did play a role. Interesting in this context is Rafael’s analysis of a contribution to a discussion forum by the initially sceptical Bart Guingona, who described how he came to believe in the power of SMS peer-to-peer networking during the demonstrations. He was part of a group of people who organized one of the first protest gatherings. When someone suggested sending an invitation via SMS, he doubted whether it would work without being validated by an authority. A priest who was involved in the preparations suggested enlisting Radio Veritas in a repeat of 1986. In the end, it was decided to send a test SMS. When Guingona turned on his phone the next morning, he found that friends and friends of friends had forwarded the message en masse, including to his own inbox: indirectly, he had got his own SMS back threefold.34
Guingona, Rafael explained, had little faith in the power of text messages because he saw them as equivalent to rumour. In order to be credible, the message would need to be legitimated by a traditional authority. This proved to be a misconception. An sms is not an isolated message from an unknown source of dubious status, but a message from a known sender within one’s own social network. And that remains the case even when the message is forwarded for the second, third or thirtieth time. Validation of the message occurs not via an authority but via an accumulation of individual decisions whether or not to forward the message within the network. Rafael: ‘The power of texting here has less to do with the capacity to open interpretation and stir public debate as it does with compelling others to keep the message in circulation. Receiving a message, one responds by repeating it. One forwards it to others who, it is expected, will do the same. Repeatedly forwarding messages, one gets back one’s exact message, mechanically augmented but semantically unaltered.’35
What this case shows is that the peer-to-peer networks played a role in the process of validation and mobilization in Manila’s public space around an issue. At the same time this example shows that if we are to understand such phenomena properly, we mustn’t become fixated on the technology, or on processes of collective and collaborative intelligence. Instead we must look at the entire context of an event and at the various related elements of the ecosystem. It was the interaction between different levels of scale of the mass media, the niche media and the p2p networks that in this instance created an adhocracy around the issue of the deadlocked corruption proceedings. But this is not to say that a similar technological constellation would have the same outcome in a different context. Or that this technology automatically leads to processes that are beneficial to democracy. Football hooligans who use their mobile phones to mobilize and coordinate their brawls are also examples of adhocracies and smart mobs.
Which brings us back to the initial question. We can now say that the role of traditional gatekeepers, though declining, is a long way from being played out. The role of filters based on computer algorithms that aggregate and analyse social and cultural practices is increasing. Alongside these forms of collective intelligence, we are also seeing processes of collaborative intelligence. All these developments offer possibilities for creating adhocracies around particular issues. But they can equally well lead to media hypes, and they may still be thwarted by a traditional authority like the state. Commercial concerns often play a role in facilitating such processes and public alternatives do not exist in every area. The technology-based values of companies like Google may even result in media producers adapting their output to those values. And there is a further danger of such adhocracies breaking away from the greater whole and cultivating their own canon.
It is difficult, therefore, to talk about an all-powerful new paradigm. There is no easily localized ‘Public Sphere 2.0’. Rather, different and often opposing processes are taking place simultaneously. Yes, new media technologies offer more possibilities for controlling the state and the mass media or for self-organization. But this does not necessarily, and certainly not automatically, lead to a better democracy. Additionally, it is important to analyse every case individually and to look at the whole context of the media ecosystem. Who provides what input for which political and / or commercial reasons? What are the motives of institutional organizations involved in this process? How are those motives translated into technology (from software and filter algorithms to hardware) and what are the limiting or empowering consequences of that technology? Then again, how does the bottom-up process of decoding and recoding work? Which particular practices are of importance here? What role do those practices play in the process of valorisation? Only by continually asking these kinds of questions will we be able to get a better grip on the fluid Public Sphere 2.0.
1. A term encapsulating a concept of the internet as a huge database of content, to which anyone can contribute data and in which the data can be linked in a wide variety of ways. This is in contrast to Web 1.0, which consisted of static web pages.
2. Henry Jenkins, Convergence Culture (New York: New York University Press, 2006). See also the theories of Lawrence Lessig in which he explains how nearly all cultural manifestations and innovative ideas are in fact a ‘remix’ of earlier cultural manifestations.
3. http://www.wethinkthebook.net/ (accessed 13 June 2007).
4. The website Edge.org recently hosted an extensive discussion about the role of experts in systems of collaborative intelligence. Wikipedia co-founder Larry Sanger explained that he eventually came to regard Wikipedia’s egalitarian knowledge paradigm as counterproductive and accordingly set up an alternative – the Citizendium – where validation is once again carried out by experts. See: Larry Sanger, ‘Who Says We Know: On the New Politics of Knowledge’, Edge.org, http://www.edge.org/3rd_culture/sanger07/sanger07_index.html.
5. Henk Blanken, ‘Deugen journalisten? (On Sanger’s Citizendium)’, in: De Nieuwe Reporter, 21.5.2007, http://www.denieuwereporter.nl/?p=963 (accessed 5 June 2007).
6. For the time being it appears that bloggers only monitor certain kinds of news reports. In the usa it is primarily politically charged news that is put under the microscope. The number of known cases in which ‘fraudulent’ reporting of other topics has been exposed by bloggers is considerably smaller. See: Maarten Reijnders, ‘Journalistieke fraude en de rol van het publiek’, in: De Nieuwe Reporter, http://www.denieuwereporter.nl/?p=553 (accessed 14 June 2007).
7. Jenkins, Convergence Culture, op. cit. (note 2), 211.
8. Yochai Benkler, The Wealth of Networks (New Haven: Yale University Press, 2006), 246.
9. Yochai Benkler, ‘On “Digital Maoism: The Hazards of the New Online Collectivism” By Jaron Lanier’, Edge.org, 2006, http://www.edge.org/discourse/digital_maoism.html (accessed 5 June 2007).
10. Henk Blanken and Mark Deuze, Pop-up (Amsterdam: Boom, 2007).
12. http://www.wethinkthebook.net/ (accessed 13 June 2007).
13. Don Tapscott and Anthony Williams, Wikinomics: How Mass Collaboration Changes Everything (London: Penguin Books, 2006).
14. Steven Johnson, Emergence (New York: Scribner, 2002).
15. See also Geert Lovink, ‘Blogging, the Nihilist Impulse’, in: Zero Comments: Blogging and Critical Internet Culture (New York: Routledge, 2007).
16. Andrew Keen, ‘The second generation of the Internet has arrived. It’s worse than you think’, Weekly Standard, 15.2.2006, http://www.weeklystandard.com/Content/Public/Articles/000/000/006/714fjczq.asp (accessed 5 June 2007).
17. See also Andrew Keen, The Cult of the Amateur: How Today’s Internet is Killing Our Culture (New York: Doubleday, 2007).
18. Jaron Lanier, ‘Digital Maoism: The Hazards of the New Online Collectivism’, Edge.org, 30.5.2006, http://www.edge.org/3rd_culture/lanier06/lanier06_index.html (accessed 5 June 2007).
19. Martijn de Waal, Theo van Stegeren, Maarten Reijnders (eds.), Jaarboek De Nieuwe Reporter 2007. Journalistiek in Nederland: onderweg, maar waarheen? (Apeldoorn: Uitgeverij Het Spinhuis, 2007), 159.
20. Although that influence is not very great as yet. An analysis by Hitwise, a research agency, shows that aggregation services like Google News and Digg play a modest filtering role. Only 5 per cent of all visits to the websites of American broadcast and print media are generated by this kind of service. A much bigger proportion, 12 per cent, is generated by portal sites. In particular, the msnbc news site (a joint venture by nbc and Microsoft) profited from referrals on the msn portal site, the default homepage of the Internet Explorer browser. An even bigger proportion, almost a quarter, is generated by standard search engines like Google.
21. David Nieborg, ‘Een lange staart is goud waard’, De Nieuwe Reporter, 31.8.2006, http://www.denieuwereporter.nl/?p=544 (accessed 5 June 2007).
22. http://www.stateofthenewsmedia.org/2006/narrative_overview_eight.asp?cat=2&media=1 (accessed 21 May 2007).
23. Jenkins, Convergence Culture, op. cit. (note 2), 257.
24. Blanken and Deuze, Pop-up, op. cit. (note 10).
25. Benkler, The Wealth of Networks, op. cit. (note 8), 248.
26. Ibid., 256.
27. Howard Rheingold, Smart Mobs (Cambridge, MA: Basic Books, 2002), 158-160.
28. Manuel Castells, Jack Linchuan Qiu, Mireia Fernández-Ardèvol and Araba Sey, Mobile Communication and Society (Cambridge, MA: MIT Press, 2007), 188.
29. Ibid., 189.
30. Ibid., 188.
31. Ibid., 191.
32. Vicente Rafael, ‘The Cell Phone and the Crowd: Messianic Politics in the Contemporary Philippines’, Public Culture 15, no. 3 (2003).
Martijn de Waal is a writer and researcher. He is part of the New Media, Public Sphere and Urban Culture research project in the Department of Practical Philosophy at the University of Groningen. He is cofounder of TheMobileCity.nl – an international think-tank for new media and urban culture. See further: www.martijndewaal.nl.