Danny Butt
University of Melbourne
[Abstract]
At the beginning of the 20th century, competing global telegraph networks struggled to monopolise the international circulation of information. Governments did not nationalise the cable industry (as they had telephony and the postal system), and even at the peak of “new imperialism” in 1910 only 20% of the world’s cable networks were state-owned (Winseck and Pike, 2009: 33). European governments instead used infrastructural subsidies to promote their telecommunication aims. Yet during this period of technological expansion and militarisation — perhaps relevant to our own — the nation state was far from hands-off, as the market-leading Marconi company discovered. Its resistance to wartime government control of its infrastructure led to the expropriation of its US assets. While the US Navy patriotically painted Marconi as a British puppet, Marconi’s bid for the troubled Reuters agency in England also failed due to political interference: the British Government secretly rescued the too-big-to-fail Reuters from its misadventures in international finance in return for being allowed to use the wire service for distributing propaganda (38). The fate of “private” and “free” telecommunications infrastructure was, in the last instance, underwritten by military and geopolitical supremacy.
This enduring theme in the history of communications media was not lost on the countries of the Non-Aligned Movement (NAM), whose formation grew out of the Bandung conference of 1955, a key moment in the tenuous emergence of the “postcolonial” era in which a group of nations attempted to formulate political positions outside the dominant powers of the Cold War. The postcolonial, in Spivak’s (1999) terms, refers to the period in which the European empires removed themselves from administrative control of some of their former territories, leaving the infrastructure of the nation-state intact. The postcolonial can be distinguished from the settler-colonial states (such as Australia, the US, and Canada), where the majority European population remained in “independent” administrative control. It was in this conjuncture that the “free flow doctrine” emerged, marked by a negotiation of residues from the pre-colonial and colonial eras with the neocolonial forces of global capital. Under this doctrine, access to media markets for Western corporations was described as an important factor in the ongoing development of the former colonies. This relation of dependence was enabled by economic policies such as the floating of exchange rates in 1971 and the continuing integration of nations into the economic surveillance of the International Monetary Fund: not a direct military relationship but a financial calculation, whereby wealthy powers would control the terms by which indebtedness was produced and police adherence to the numbers. Recognising that formal decolonisation was accompanied by this kind of media-enabled economic and financial control, articulating a critique of dominant media systems was one of the NAM’s foremost tasks.
The New International Economic Order sought by the NAM would be underpinned by a New International Information Order (NIIO), reversing the imbalances of colonialism to support cultural diversity and national independence in under-developed regions. The development of a global media debate emerged from a combination of new technologies and new institutional configurations. If territoriality was the ruling logic of colonialism, the process of decolonisation coincided with the integration of former colonies into an intergovernmental order of “united nations” where national sovereignty required alignment with major powers. While globe-girding communications networks were tentatively in place during the decolonial period, the development of satellite technologies definitively ruptured existing media governance mechanisms that were structured around territoriality and internationalism.
The United Nations system became the default location for deliberation on issues that cut across national borders, oceans and space being two of the most prominent. The “space race” was considered part of the Cold War, but with the growing recognition that satellite communications would become central to an information-intensive economy, the NAM countries raised the question of their own participation in this emerging economic order. This was perhaps best highlighted in the eventual United Nations Educational, Scientific and Cultural Organisation (UNESCO) declaration of Guiding Principles on the Use of Satellite Broadcasting for the Free Flow of Information (1972), which sought to require negotiation with audience nation states before allowing direct satellite broadcasting into their territories. The US was the lone dissenter in a vote of 102 to 1 (Schiller, 1976: 40).
The NAM’s resistance to cultural imperialism through the NIIO was not a Swiss-style European “neutrality”, but an activist agenda to overcome the shared suppositions of Cold War neoimperialism. For Spivak, the attempt of the Bandung conference to establish a third way, neither East nor West, highlights the possibilities of a “critical regionalism” against the forces of neocolonial globalisation through the nation-state form (Butler and Spivak 2007); however, at the time these political moves were “not accompanied by commensurate academic effort” (Spivak, 1999: 375) that could ground them in the education of scholars thinking the global, and the intellectual possibilities of the NAM have therefore faded from view. This paper argues that the rise of US-based global “platforms” for media and communication and the revelations of widespread extra-legal surveillance by the US government suggest similarities between the NIIO era and our own, giving the call for a scholarly analysis of global political economy renewed importance. A rereading of the vigorous debates on the NIIO and its successor, the New World Information and Communication Order (NWICO), is perhaps timely for understanding our current conjuncture. The aim of this paper is to reflect on the historical underpinnings of the NIIO discourse and link it to contemporary debates in the governance of the Internet and new media platforms.
UNESCO, the NIIO, and the “Free Flow of Information”
The tension in the NIIO debate lay between two theories of social and economic development. Modernisation theory saw the position of developing nations as based on historical backwardness, where the input of Western capital and the “free flow of information” would eventually lead these nations to catch up to the “progressive” advanced nations. Dependency theory, on the other hand, emphasised the continuity of systems of international domination, and saw international relations as a process of underdevelopment by which rich countries maintained control of poorer ones, with control of the media vectors, news agencies and broadcasters being a key strategy. Dependency theorists also noted the continuation of a global logic of race and colonialism in the discourse of modernisation. Events such as the Oil Crisis of 1973 (an OAPEC embargo protesting Western support for Israel in the Yom Kippur War) and the wars in Vietnam and Korea highlighted the opposed interests of wealthy and poor countries.
The media and communications issues addressed by the NIIO were succinctly described by UNESCO Director General Amadou-Mahtar M’Bow as:
the highly political aspect of information and the problems posed by the virtual monopoly of the major news agencies in international news services and the dominance of Western countries in the marketing of programme materials, create divergence of interests between developing countries and industrialized countries producing the bulk of the information at present flowing in the world. (Wolfe, 1980: 259n56)
Carlsson describes the New International Information Order as addressing this situation in its call for four D’s: democratization of the flows of information between countries; decolonialization, i.e. self-determination, national independence and cultural identity; demonopolization, i.e. setting limits on the activities of transnational communications companies; and development, i.e. national communication policy, strengthening of infrastructure, journalism education, and regional cooperation. (Carlsson, 2005: 197)
The body that provided a forum for these debates was UNESCO, an international organisation developing principles of international information exchange and providing development assistance. An organisation conceived under a modernisation paradigm, UNESCO nevertheless remained the most accessible global forum for non-aligned countries to extend their demands in the first half of the 1970s. The NAM sought UNESCO recognition for a number of independent news agencies, most particularly the Non-Aligned News Agencies Pool (NANAP), founded in 1975 (Wolfe, 1980: 260). During the early 1970s, even European nations questioned the free flow doctrine. Finland’s president Urho Kekkonen noted that ‘a mere liberalistic freedom of communication is not in everyday reality a neutral idea, but a way in which an enterprise with many resources at its disposal has greater opportunities than weaker brethren to make its own hegemony accepted’ (Schiller, 1976: 44). UNESCO had to find some kind of common ground between the NAM and the West. This would lead to the establishment of the MacBride Commission, which was charged with an “official mission” of developing consensus on a New World Information and Communication Order (NWICO).
The more responsive UNESCO became to the demands of the NAM through the late 1970s, the further the West shifted away from participation in UNESCO and toward media and communications policy mechanisms more conducive to its interests, particularly the Tokyo and Uruguay rounds of the General Agreement on Tariffs and Trade (GATT, later to become the World Trade Organisation). A campaign was run by the World Press Freedom Committee, led by large US journalistic interests, to caricature the NIIO as simply restricting the free flow of information and curbing press freedom in the service of world socialism, in language reminiscent of today’s US-based attacks on UN involvement in internet governance. Western nations attenuated NWICO’s leverage as much as possible by watering down all language relating to decolonisation, while concentrating their attention on a funding mechanism, the International Programme for the Development of Communication (IPDC), to implement their objectives through “practical” means rather than diplomatic debate. Led by the US Carter administration, the West proposed the IPDC as a “Marshall Plan of Telecommunications”, which would require countries to accept a broad set of Western-friendly principles in return for investment in their media infrastructure (Mansell and Nordenstreng, 2007: 23). This mode of financialised, distributed and program-focussed development as Foreign Direct Investment remains the model for media development today.
Although the MacBride Report that described the possibility of the NWICO represented one of the UN’s most thorough and successful attempts at integrating the views of opposed participants, the resulting abstraction left it a functionalist document with little explanatory power, presenting the “‘crucial problems facing mankind today’ as a simple list of familiar issues, with no explicit explanation of the theoretical and political controversies that they represented” (Mansell and Nordenstreng, 2007: 24). This neutralisation did not save UNESCO from the withdrawal of the US and UK as its two major funders, who objected to the very attempt to discuss the NWICO (Carlsson, 2005: 202). Also critical was the context of US and UK unilateralism under the Reagan and Thatcher administrations – these countries and Singapore would withdraw from UNESCO altogether in the 1980s, though the UK would return in 1997 and the US in 2004, before the US once again withdrew funding over UNESCO’s recognition of Palestine in 2011. The withdrawal of these funds left whatever gains the NAM had made in UNESCO far less powerful, with two of the organisation’s major donor countries no longer providing the capacity to implement programmes. In this narrative it is particularly instructive to note that after diplomatically securing the IPDC as its preferred alternative to NWICO-related development activities, the US never actually committed any funding to the IPDC and convinced many other wealthy nations to withhold support as well, while even Bangladesh would provide $2000 in “practical support” (see Pendakur, 1983: 403). The same debates and issues would still be negotiated, but under different protocols and fora that better supported dominant Western interests.
Internet Governance and the Information Society
When the great media debates returned, it was around emerging internet platforms, the concept of the “digital divide”, and internet governance. With a few exceptions in civil society (the work of Seán Ó Siochrú (2004) being prominent), the NWICO debates were not revived. Reflecting the neoliberal consensus, the focus of international debates in Information and Communication Technologies for Development (ICT4D) was on practical or missionary measures to bring developing regions into the information age, such as Nicholas Negroponte’s One Laptop Per Child program. Similarly popular among development agencies was Muhammad Yunus’ Grameen Bank, which pitched “access to credit” as a human right. Reflecting this trend toward finance as the instrument of control, in continuity with the IPDC proposals, the bulk of intergovernmental negotiation on media and information in the 1990s took place in trade agreements via the World Trade Organisation.
While UNESCO and the United Nations Development Program (UNDP) provided fora for various agendas related to ICT, the political void was filled by the concept of the “information society”. In this discourse the International Telecommunication Union (ITU) saw an opportunity to develop a global summit to demonstrate leadership and relevance in the emerging information society debate, eventually coordinating the World Summit on the Information Society (WSIS) in 2003 and 2005. The ITU is one of the older agencies to be integrated into the United Nations system, having been formed in the 19th century to foster interconnection and compatibility in international telegraphy and, later, telephony. However, while the Internet initially reached its widespread consumer audience through the “Plain Old Telephone System” (POTS) under the ITU’s governance, the ITU has had little involvement in the Internet’s development and management at a global level. At WSIS the New International Information Order concerns were not the central discussion, reflecting the relatively weak position of poorer nations in the global economy. Despite numerous well-intentioned proposals and declarations, wealthy nations were able to block most developing nation calls for reducing inequality in internet governance arrangements, and to constrain the dialogues to matters of practical support for “enhanced participation” in existing orders. The exception was the establishment of the Working Group on Internet Governance (WGIG) and the subsequent development of the Internet Governance Forum (IGF) to discuss the management of key internet resources, with a particular focus on the US Government’s continuing oversight of key internet governance agencies such as the Internet Corporation for Assigned Names and Numbers (ICANN).
The Internet has always been governed, and the discourse of governance has relied on colonial spatial metaphors (such as “domains” or “multihoming”) that aim to give territoriality to the cyber “space”. This frontier imaginary was given full expression by cattle rancher John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, a rerun of the Wild West’s escape from the limits of government and a reinvention of the political in the self-authoring man, neatly described by Barbrook and Cameron as “The Californian Ideology”, in which California comes to stand as the centre of the global internet. The discourse of Internet governance highlights the distinction between what Spivak (2003) terms the “global” and the “planetary”. The global is always a spatialisation, an attempt to globalise rather than the achievement of globality. The Internet is a “globe-girding” technology, but its effective adoption in any part of the planet is localised and uneven. The effective governance of a global network cannot, then, be inferred from the experience of those who are deeply “participating” in its operation, as the Californian Ideology proposed.
Since its origins as a military network extended through research institutions, the governance of the internet in the technical community has taken shape through the principles of “rough consensus and running code”, a sort of settler pragmatism that preserves the status quo and deflects attention from systemic issues in favour of the “independent” individual (who is usually underwritten by a corporation). For the commercial entities most involved in internet infrastructure, the lack of holistic oversight and accountability to traditional forms of policymaking is a feature rather than a problem, enabling a “competitive market” for technical protocols and solutions in which little responsibility is held for the effects of the system as a whole — the minuscule adoption of IPv6 almost twenty years after its specification by the IETF is but one example. In this respect Internet governance arrangements echo the free flow doctrine debates of the NIIO era. The effective freedom for information to flow relies on financial and technical resources that are systematically denied to groups and regions that do not possess them. Ironically, the “bottom-up” organisation of Internet control has been unable to develop effective measures to engage the largest sector of its affected population.
Much as in other forms of colonial infrastructure, the need for stable operation and development of “neutral” infrastructure would be used to justify the continuing exclusion of those outside the small group participating in its arrangements. Metaphors are a key mechanism for maintaining this order. Internet governance has often cited a “layer model” drawn from the schematics used by network engineers. In Werbach’s (2002) commonly cited form, it consists of a physical infrastructure layer, a logical layer, an application layer and a content layer, with the implication that governance discussions should restrict themselves to the appropriate layer of operation and not intervene in others. The siting of Internet governance entities such as ICANN and the Regional Internet Registries in the “logical layer” is indicative of the technical determinist ideologies that have historically underwritten internet governance. The nomenclature of the logical layer suggests that it precedes the political and relies simply on “technical coordination”, a framing that has resisted the broadening of stakeholders beyond a self-selected group of participants who emerge largely from the Euro-American technology sector.
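Purely as an illustration, the layer metaphor can be rendered as a simple data structure. The sketch below (in Python) assigns governance bodies to layers in a conventional but hypothetical mapping; it is not drawn from Werbach or any other source cited here, and the assignments are indicative only.

```python
# A minimal, illustrative sketch of a four-layer model of the kind discussed
# above, with a hypothetical mapping of governance sites to layers.
from enum import Enum


class Layer(Enum):
    PHYSICAL = "physical infrastructure"   # cables, spectrum, data centres
    LOGICAL = "logical"                    # protocols, addressing, naming
    APPLICATION = "application"            # platforms and services
    CONTENT = "content"                    # what users publish and consume


# Hypothetical assignments used here purely for illustration.
GOVERNANCE_SITES = {
    Layer.PHYSICAL: ["ITU", "national regulators"],
    Layer.LOGICAL: ["ICANN", "Regional Internet Registries", "IETF"],
    Layer.APPLICATION: ["platform terms of service"],
    Layer.CONTENT: ["national law", "platform moderation"],
}

if __name__ == "__main__":
    for layer, bodies in GOVERNANCE_SITES.items():
        print(f"{layer.value}: {', '.join(bodies)}")
```

The point of the schema, as the paragraph above suggests, is less its technical accuracy than the way it licenses each set of actors to disclaim responsibility for what happens at the other layers.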
This exclusion of participation from below has its flipside in the Californian Ideology’s focus on keeping the Internet “free” from government control and interference. Attempts to maintain pressure for the US Government to relinquish its oversight of organisations like ICANN have been blocked by the technical community, which saw the governmental strings as simply a hangover that would inevitably be removed, while the greater threat was seen to reside in foreign governments who would inhibit the free flow of internet information. However, as Michael Froomkin (2011: 200) pointed out, in the “Affirmation of Commitments” between ICANN and the US Government toward reshaping internet governance, the US Government did not actually commit to removing its sole oversight, and even in its recent announcement of the transition of the IANA functions to the “Internet community”, the US Government requires any proposal to demonstrate “broad public support” (which does not mean the support of elected governments in countries it is politically opposed to).
Multistakeholderism and Neoliberalism
The deflection of intergovernmental internet governance debates from effective control of the network protocols to a focus on funding and “practical measures”, rather than questions of the authorisation and legitimacy of governance structures, reflects the neoliberal conditions of “multistakeholder” global governance, which positions governmental agents alongside private sector and sometimes civil society or NGO representatives. Sarikakis (2012: 151) describes multistakeholder governance as granting ‘private interests legitimacy in public policymaking next to elected governments in the process’. Multistakeholderism thus attempts to short-circuit the democratic arrangements of nation states in favour of a kind of “opt-in” or participatory democracy, which is much more easily managed in the interests of capital. As Coombe and Turcotte (2012: 10) describe it, ‘The emerging digital landscape is increasingly governed by privately generated norms and technological measures backed up by legislative bodies, displacing public deliberations around the scope of copyright and its limits, which functions to turn large amounts of what was once in the public domain into private goods’. Jeremy Malcolm has noted that the complicity of the technical community in the maintenance of private governance against public accountability has been consistent:
ICC [the International Chamber of Commerce] and ISOC [the Internet Society] have consistently put forward arguments against the reform of the IGF to enable it to develop the capacity to produce policy recommendations, and against institutional reforms in relation to the enhanced cooperation process, which they have characterised as unnecessary in light of their own internal efforts at cooperation with other stakeholders. By the same token, the private sector and technical communities were not seen to raise any objection to the exclusivity of the e-G8 summit, nor to the release of the OECD Communiqué without civil society’s endorsement, and they have actively participated in other Internet-related policy discussions from which civil society was excluded or absent (such as the ACTA negotiations). (Malcolm, 2012: 172)
Dissatisfaction with inclusive multistakeholderism has shown itself in the emergence of a splinter group of civil society actors, such as the Just Net Coalition. The Just Net Coalition have pointed out that after decades of modernisation-style discourse in ICT for development (which consistently talks about “capacity development” for participation in governance arrangements rather than their transformation):
we have seen mass surveillance, abusive use of personal data and their use as a means of social and political control; the monopolisation, commodification and monetisation of information and knowledge; inequitable flows of finances between poor and rich countries; and erosion of cultural diversity. Many technical, and thus purportedly ‘neutral’, decisions have in reality led to social injustice as technology architectures, often developed to promote vested interests, increasingly determine social, economic, cultural and political relationships and processes. (Just Net Coalition 2014)
While there is still a discourse of “bottom up governance”, in today’s corporate order it is hard to see private sector leadership as emerging from the bottom. After the capture of international IP negotiations and trade agreements by transnational private interests through organisations such as the ICC, it would be more accurate to describe private sector multistakeholderism in terms of neoliberalism, which Foucault describes as a ‘general regulation of society by the market’ (Foucault, 2008: 145). Under the neoliberal agenda ‘one must govern for the market, rather than because of the market’ (121). Instead of a market of producers who exchange goods as one part of a social life, the principle of the market becomes the grounding social structure, a game which one is not allowed to drop out of, ‘a sort of inverted social-contract’ (201) – a singular paradigm echoed in the technical community’s insistence on a single structure of internet governance and control of the internet as a unified network.
For the theorists of neoliberalism, this market order is necessary to regulate the imperfect knowledge of participants and provide a single platform of “interoperability” between agents who will otherwise fail to maximise their productivity. The emphasis on a platform of “enforced consensus” around private property managed through the price mechanism has an explicitly Christian heritage: for Ludwig von Mises, a founding member of the neoliberal think-tank the Mont Pèlerin Society, this system ‘coincides with the history of the development of mankind from an animal-like condition to the highest reaches of modern civilization’ (cited in Gane 2014: 17).
As with the “free flow of information” doctrine in the NIIO era, contemporary internet governance arrangements can be seen to centralise protocols that enforce a particular form of value in the name of stability and security, while preventing the emergence of alternative protocols that are less profitable for capital. For example, debates about the appropriate entities to manage domain names have been intimately tied to intellectual property arrangements such as ICANN’s Uniform Domain-Name Dispute-Resolution Policy. Through the centralisation of the universal protocols governing the allocation of internet resources such as domain names into “independent” entities such as ICANN, which must nevertheless act in concert with US Government positions on intellectual property, the dominance of capital is maintained even while governance organisations promote their “bottom-up” arrangements and their lack of censorship and control over the production of content by individuals. The development of commercial social media platforms as the dominant context of internet use further abstracts internet governance from users’ potential control. Where the “logical layer” of internet standards under the IETF represented an arcane but well-documented site for the political negotiations of governance arrangements, dominant global platforms govern their users through the calculations of proprietary algorithms.
The Platform and the Algorithm
To calculate requires an archive, and social media platforms fundamentally archive individual labour as data to be sold to advertisers. These platforms now vie with search engines as the most popular uses of the internet, although the extent to which all media increasingly relies on “social” profiles calls this distinction into question: Google itself incorporates a user’s search history in its display of search results. Derrida sees in the question of the archive not simply ‘one political question among others’:
It runs through the whole of the field and in truth determines politics from top to bottom as res publica. There is no political power without control of the archive, if not of memory. Effective democratization can always be measured by this essential criterion: the participation in and the access to the archive, its constitution, and its interpretation. (Derrida, 1995: 11)
The archival logics of statistical patterning were integral to democracy in the spread of genres of subjectivity in the mass media era, when individuals were enumerated and the planning of public infrastructures was undertaken by governments holding delegated representative responsibilities to “the people”. However, an entirely new logic of subjectivation occurs through the spread of online platforms and their “algorithmic governance”.
Italian organisational theorist Claudio Ciborra was one of the first to use the term “platform” in the expanded sense it is used today, in his analysis of Olivetti as a “platform organisation”. Distinguishing it from the “flexible cluster” of the network (see Borgatti and Foster 2003), the platform is a ‘system of schemes, arrangements and resources’ (Ciborra, 1996: 114) that incorporates the network model of routines and transactions, but also has a higher layer where the ‘re-architecting of structures is played out’, and it is the ‘recombination of bundles of routines and transactions’ that matters more than the specific properties of the network (113). This ‘decoupling of process know-how’ from the more mundane generation of product innovations leads to a dualistic system, where ‘strategic management mainly consists in placing bets about what will be its next primary task; all the other choices such as alliances, vertical integration and so on, follow the provisional outcome of such bets’ (114). Analysts such as Galloway (2012) and Chun (2005) locate this decoupling in the very architecture of software. Sandra Braman describes new media platforms as “meta technologies”: they ‘enable long processing chains, and there is great flexibility in the number of steps and the sequence with which they are undertaken’ (Braman, 2004: 156). However, there is always a final “authorisation” by the firm, which will attempt to deflect the risk of any particular activity for as long as possible while reserving the right (through e.g. intellectual property regimes) to “re-architect” the system to maximise its returns.
Social media platforms are a global expression of the two-sided media business model, where content is sold to users and audiences are sold to advertisers (Doyle, 2014: 302). The platform monetises users by extracting data from their actions in order to sell highly segmented audiences to advertisers, and by using this data to algorithmically customise the user interface to maximise user attention, inducing users to add content to the database and make it more valuable. This governing of the platform and the ability to “rearchitect” its logic is the proprietary knowledge at the basis of the firm, precisely what is withheld from user control and protected from competitors. Crucial to the implementation of this model is the idea of the user as a coherent subject whose maximum attention can be stimulated through segmentation of their profile (Beer, 2009: 996). The structured data embedded in the social media profile is the “infrastructure” of the platform—in Mitropoulos’ (2012: 117) perceptive definition, infrastructure is ‘the answer given to the question of movement and relation’. In the writing of Susan Leigh Star (1999: 382), infrastructure ‘only becomes visible on breakdown’ and otherwise operates invisibly to sort and group the individual into the social world. The profile produces the user by establishing constitutive rules of engagement that construct how users should interact. In other words, the platform determines the model of humanity users are allowed to perform through direct constraint of action, rather than through presentation of a model of subjectivity to be adhered to through the user’s mimetic capacity. Any critique a user may make of the platform is participatory or immanent, as the platform architecture remains relatively immune to users’ attempts at customisation of genre, as was possible in the building of independent websites. Increasingly, media platforms integrate with hardware that comes to informationalise location, movement, and even the vital functions of the body – as can be seen in the largest platform company Apple’s development of a watch with an integrated heart monitor and health software. This integration is good business for the platform, as platform switching becomes more difficult when the profile is linked to the user’s biodata in proprietary databases.
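The two-sided logic described above can be sketched, in a deliberately reductive way, as a small piece of code: a profile that archives user actions, derives the audience segments sold to advertisers, and re-orders the feed to maximise predicted attention. The names, thresholds and scoring rules below are invented for illustration and do not represent any platform’s actual systems.

```python
# Hypothetical sketch of the two-sided platform logic: actions are archived as
# a profile, the profile is segmented for advertisers, and the same data ranks
# content to hold attention. All names and rules are illustrative only.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Profile:
    user_id: str
    interests: Counter = field(default_factory=Counter)  # the archived data exhaust

    def record_action(self, topic: str, weight: float = 1.0) -> None:
        # Every interaction refines the profile and so raises its advertising value.
        self.interests[topic] += weight

    def segments(self, threshold: float = 3.0) -> list[str]:
        # The audience "segments" offered to advertisers.
        return [t for t, score in self.interests.items() if score >= threshold]


def rank_feed(profile: Profile, items: list[dict]) -> list[dict]:
    # Customise the interface to maximise predicted attention: items matching
    # the profile's strongest interests are surfaced first.
    return sorted(items,
                  key=lambda item: profile.interests.get(item["topic"], 0.0),
                  reverse=True)


# Usage: each click feeds both sides of the market at once.
p = Profile("user-123")
for topic in ["fitness", "fitness", "fitness", "politics"]:
    p.record_action(topic)
print(p.segments())                                                 # ['fitness']
print(rank_feed(p, [{"topic": "politics"}, {"topic": "fitness"}]))  # feed re-ordered
```

What the sketch cannot show is precisely the point made above: the right to “rearchitect” these rules at any moment remains with the firm, withheld from the users whose actions populate the archive.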
For Bernard Stiegler, the distribution of massively scaled cultural objects synchronises modes of life (Stiegler, 2009: 85), and the grammatisation of the affects, where the ‘tertiary retentions’ of expressive inscription are manipulated in the interests of profit, represents an attack on the possibility of democracy. Since the industrial revolution, grammar is no longer limited to the written word but describes a certain archival structure in general (Derrida 1976). Stiegler is not specifically talking about social media profiles, but he notes that grammar ‘comes to invest in the sphere of bodies’, a grammatisation of gesture that Marx describes as the ‘process of proletarianisation’ (the loss of know-how), which becomes a loss of ‘know-how-to-live’ (Stiegler 2007). Where a historical representative democracy structured the “delegation of competence” to a juridical and organisational process that mediated differences in the nation state, this is now overtaken by real-time communications in formats or genres that constitute audiences in advance (Stiegler, 2010: 172). In other words, the platform’s algorithm imagines its user and we respond accordingly within its affordances. Stiegler describes this as a short-circuiting of democracy and its replacement with a telecracy. The operationality of direct action between user and platform comes at the expense of the deliberative processes of the state that historically mediated the diverse scales of operation between capital and living labour.
In her comprehensive account of algorithmic governance, Antoinette Rouvroy joins Stiegler in emphasising this bypassing of delegation that has historically characterised the democratic citizen, and describes a new mode that is entirely different from modernist statistical governance. Rather than Foucault’s account of neoliberal subjectivation toward a new homo oeconomicus as entrepreneur of themselves, algorithmic governmentality ‘bypasses consciousness and reflexivity, and operates on the mode of alerts and reflexes’ (Rouvroy, 2012: 153):
Data-behaviourism simply appears to have rendered the interpretive time and space of trial or process irrelevant. It is a regime of truth evaluated against criteria of cost-effectiveness and operationality. The computational turn thus attests to the decline of interpretation to the benefit of something much more immediate (and immediacy is one of the connotations usually attached to efficiency), which is statistical inference operated on the basis of correlations, while validation of patterns or profiles happen through a kind of ‘backward performativity’: anything that would happen and be recorded, never mind whether it fits a pre-existing pattern or profile or not, will contribute to the refinement and improvement of the ‘statistical body’, and ‘validate’ the methods of automatic interpretation or correlation to which they are subjected. (Rouvroy, 2012: 151)
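This “backward performativity” can be glossed schematically: every recorded event, whether or not it fits the existing pattern, is absorbed into the “statistical body” and counted as a refinement of it. The toy sketch below illustrates that logic in the most minimal terms; it is a gloss on the passage above, not Rouvroy’s formulation, and all names are hypothetical.

```python
# A schematic illustration of 'backward performativity': every recorded event
# is folded into the statistical body, so the profile is never falsified, only
# "refined". Purely illustrative.
class StatisticalBody:
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0   # a running average standing in for a learned "pattern"

    def record(self, observation: float) -> None:
        # No interpretive step, no test of fit: deviation is not treated as
        # error but absorbed as a refinement of the pattern itself.
        self.count += 1
        self.mean += (observation - self.mean) / self.count

    def deviation(self, observation: float) -> float:
        # The "alert": distance from the pattern, triggering reflexes rather
        # than deliberation.
        return abs(observation - self.mean)


body = StatisticalBody()
for x in [1.0, 1.2, 0.9, 5.0]:   # the outlier 5.0 is not rejected...
    body.record(x)
print(body.mean)                 # ...it simply moves the pattern (2.025)
```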
In Rouvroy’s account there is a certain additional space for the individual subject to move, due to the algorithm’s lack of interpretation compared with logics of representation. However, we also see the stakes of the consolidation of Internet platforms into oligopolies (if not cartels) and the extension of these increasingly linked profiles through the Internet. This development of the “platform” as the dominant genre of Internet use changes how the medium responds to and reflects users. As Craig Labovitz pointed out, the concept of the Internet as a widely distributed network across thousands of companies has consolidated into a much more concentrated regime of power: ‘by 2009, half of all internet traffic originated in less than 150 large content and content-distribution companies, and today, half of the internet’s traffic comes from just 30 outfits, including Google, Facebook, and Netflix’ (McMillan 2014).
State algorithms
As algorithms spread globally and consolidate their rearchitecting power, their archival authority is private and immune to the user, reserved for the platform architects who decide from afar which modes of informational behaviour are most profitable. Online platforms produce forms of ‘social surveillance’ that do not correspond to the modernist forms of hegemonic state surveillance (Marwick 2012). However, perhaps those historical forms were dismissed too quickly, if we consider Edward Snowden’s recent revelations about the scale of US National Security Agency programs such as XKeyscore and PRISM, ‘forcing those private companies (such as Google, Microsoft, Apple, or Skype) regularly collecting vast amounts of data for commercial purposes to hand it over to the intelligence services without the knowledge of users’ (Bauman et al., 2014: 123). The role of these US-based platforms should concern all global internet users, and clearly displays how the technical community’s enabling of private sector authority underwritten by US jurisdiction is far from adequate for just global governance arrangements, irrespective of that nation’s self-defined position as a guardian of freedom. There is a shared interest between the state and the platform in a neoliberal logic of informational power. As noted above, under the conditions of neoliberalism, the user does not delegate authority to a legislative democracy, but is instead constantly stimulated toward production. The entrepreneurial auto-production of social media’s individual archive locates the self on the grid of intelligibility established by the platform. The platform’s primary interest in the profile is to package the subject as a unit to sell to advertisers, and under this protocol the state’s interest in tracking the individual makes the state just another client of the platform, albeit one that can wield a tremendous amount of power.
The managerial state is itself a player in social media’s market game of representation, maximising the productivity of citizen assets through their attention to dominant social media platforms and re-deploying underperforming or defective assets through policing. The state’s interest in the expanded representational powers of the profile, and in the profile’s attachment to specific citizens, lies in the ability not simply to neutralise a defective citizen but to represent their non-compliance in the theatre of participatory consumption (through terrorist threats, police entertainment, border control, and the like). The dangerous or non-compliant profile can be made to do profitable representational work for the state, both in stimulating compliance and in building political and financial support for the economy of securitisation. While crime management has historically been viewed as a cost to the state, under the prison-industrial complex the criminal who can be identified through surveillance networks becomes a profit centre for the public-private alliance (Donahue 1988). The role of online platforms in identifying potential deviance from state norms is thus an integral piece of infrastructure for the development of the militarised carceral state.
As more “public” infrastructure shifts toward social media platforms, participation is not only invited but is stimulated as an imperative. This kind of participation in networked publics is, then, not simply equivalent to democratic freedom, as freedom may also involve detachment from prescribed forms of public participation. Warner (2002) claims that the figure of the public always requires counterpublics whose relation to the state is emergent and unclear, perhaps even resistant. What would a social movement for the internet look like that wasn’t based on the dominant default of the individual citizen as actor, as it is in social media? Within these conditions of technical consolidation and standardisation, the NIIO framework’s historical emphasis on decolonisation, platform independence and the collective ownership of strategic information resources remains inspirational, even if the nation state form is no longer the most relevant mechanism for its achievement.
Conclusion
The question of how to mobilise a social justice agenda in the governance of social media platforms remains problematic. Not only is the critical infrastructure of social media in private hands, but the private forms of algorithmic governance, as many have pointed out, cannot simply be demystified, as their source code reveals little about the processes of subjective regulation experienced by the end user (Barocas, Hood and Ziewitz 2013). Recent academic analyses of surveillance and manipulative practices by platforms have articulated their critique less through detailed analysis of technical workings than by connecting end-user experiences to more generic understandings of the operations of technical governance, and by documenting and analysing the powerful corporate interests that collectively author these algorithms. In US-based work in particular, the tendency of communications research to work in an “administrative” vein has not articulated the citizen-consumer model to a broader history and discussion of governance and institutional structures (Lazarsfeld 1941). Given the ongoing importance of governmental authority in networked communications infrastructure, this focus may inhibit a more genuinely international dialogue on platform politics.
At a collective and regional level, the historical Non-Aligned Movement’s opposition to dominant powers retains its force, and its continuing legacies can be seen in, for example, the BDS (boycott, divestment, sanctions) movement against the occupation of Palestine, or the thinking-through of collective and cultural property in movements for indigenous self-determination. A New International Information Order for the 21st century would not simply be an industry-led “open source” model that attempts to control the platforms as natural monopolies. Nor would it come through the inclusion of “civil society” in private sector-dominated multistakeholderism. Most of all, it would not be an expanded “participation” in existing platforms and “sharing economies” in the Silicon Valley model. As Lawrence Liang (2009: 26) has explained of such liberal modes of governance:
the rhetoric of inclusiveness is also always accompanied by the prospect of violence; the claims of the poor are always a matter of contests and negotiations rather than the benevolence of the state and the corporate world.
Instead, the New International Information Order to come rests on a more performative mode of democratic rights, the right to autonomous self-management and experimentation with as-yet unproven alternatives to the status quo. Discussing the issues with working from a weak position in their “webs of influence” models of governance, Drahos and Braithwaite (2001: 124) describe the need for less of ‘a law-like understanding and more a clinical diagnosis of when a particular web tightens or unravels. It is important not to overlook weak strands in the web. The point is how one strand in a web of controls works to strengthen the mesh of the web (or to unravel it)’.
References
Barocas, Solon; Hood, Sophie and Ziewitz, Malte. ‘Governing Algorithms: A Provocation Piece’ (2013) https://ssrn.com/abstract=2245322 (accessed October 29, 2013).
Bauman, Zygmunt; Bigo, Didier; Esteves, Paulo; Guild, Elspeth; Jabri, Vivienne; Lyon, David, and Walker, RBJ. ‘After Snowden: Rethinking the Impact of Surveillance’, International Political Sociology 8.2 (2014): 121–144.
Beer, David. ‘Power Through the Algorithm? Participatory Web Cultures and the Technological Unconscious’, New Media & Society 11.6 (2009): 985–1002.
Borgatti, Stephen and Foster, Pacey. ‘The Network Paradigm in Organizational Research: A Review and Typology’ Journal of Management 29.6 (2003): 991–1013.
Braman, Sandra. ‘Where Has Media Policy Gone? Defining the Field in the Twenty-First Century’, Communication Law and Policy 9.2 (2004): 153–182.
Butler, Judith and Spivak, Gayatri Chakravorty. Who Sings the Nation-state?: Language, Politics, Belonging (New York: Seagull Books, 2007).
Carlsson, Ulla. ‘From NWICO to Global Governance of the Information Society’, in Oscar Hemer, and Thomas Tufte (eds.), Media and Glocal Change: Rethinking Communication for Development. (Buenos Aires: CLACSO, 2005), 193–214.
Chun, Wendy Hui Kyong. ‘On Software, or the Persistence of Visual Knowledge’, Grey Room 18 (2005): 26–51.
Ciborra, Claudio. ‘The Platform Organization: Recombining Strategies, Structures, and Surprises’, Organization Science 7.2 (1996): 103–118.
Coombe, Rosemary and Turcotte, Joseph. ‘Cultural, Political, and Social Implications of Intellectual Property Laws in an Informational Economy’, in UNESCO-EOLSS Joint Committee (eds.) Culture, Civilization and Human Society, in Encyclopedia of Life Support Systems (EOLSS), Developed Under the Auspices of the UNESCO. (Oxford: EOLSS Publishers, 2012) https://www.eolss.net
Derrida, Jacques. Of Grammatology trans. Gayatri Chakravorty Spivak (Baltimore: Johns Hopkins University Press, 1976)
———. ‘Archive Fever: A Freudian Impression.’ trans. Eric Prenowitz. Diacritics 25.2 (1995): 9–63.
Donahue, John D. Prisons for Profit: Public Justice, Private Interests (Washington DC: Economic Policy Institute, 1988).
Doyle, Gillian. ‘Audiovisual services: International Trade and Cultural Policy’, in Christopher C. Findlay, Hildegunn Kyvik Nordas, and Gloria Pasadilla (eds.) Trade Policy in Asia: Higher Education and Media Services (Singapore: Asian Development Bank Institute and OECD, 2014), 301–335.
Drahos, Peter and Braithwaite, John. ‘The Globalisation of Regulation’, Journal of Political Philosophy 9.1 (2001): 103–128.
Foucault, Michel. The Birth of Biopolitics: Lectures at the Collège de France, 1978–79, ed. Michel Senellart, trans. Graham Burchell (New York: Palgrave Macmillan, 2008).
Froomkin, Michael. ‘Almost Free: An Analysis of ICANN’s Affirmation of Commitments’, Journal on Telecommunications & High Technology Law 9 (2011): 187–234.
Galloway, Alexander, The Interface Effect (Malden, MA: Polity, 2012).
Gane, Nicholas. ‘The Emergence of Neoliberalism: Thinking Through and Beyond Michel Foucault’s Lectures on Biopolitics’, Theory, Culture & Society, 31.4 (2014): 3–27
Just Net Coalition. ‘The Delhi Declaration’, (2014), https://justnetcoalition.org/delhi-declaration
Liang, Lawrence. ‘Piracy, Creativity and Infrastructure: Rethinking Access to Culture’ SSRN (2009) https://ssrn.com/paper=1436229
Malcolm, Jeremy. ‘Arresting the Decline of Multi-Stakeholderism in Internet Governance’, in Jeremy Malcolm (ed.) Consumers in the Information Society: Access, Fairness and Representation (Kuala Lumpur: Consumers International, 2012), 159–180.
Mansell, Robin and Nordenstreng, Kaarle. ‘Great Media and Communication Debates: WSIS and the MacBride Report’, Information Technologies and International Development 3.4 (2007): 15–36.
Marwick, Alice. ‘The Public Domain: Surveillance in Everyday Life’, Surveillance & Society 9.4 (2012): 378–393.
McMillan, Robert. ‘What Everyone Gets Wrong in the Debate Over Net Neutrality’, Wired (2014), https://www.wired.com/2014/06/net_neutrality_missing/.
Mitropoulos, Angela. Contract and Contagion: From Biopolitics to Oikonomia. (London: Minor Compositions, 2012).
Pendakur, Manjunath. ‘The New International Information Order After the MacBride Commission Report: An International Powerplay Between the Core and the Periphery Countries.’ Media, Culture & Society 5.3–4 (1983): 395–411.
Rouvroy, Antoinette. ‘The End(s) of Critique: Data-Behaviourism Vs. Due-Process’, in Privacy, Due Process and the Computational Turn: the Philosophy of Law Meets the Philosophy of Technology. (London: Routledge, 2012), 143–168.
Sarikakis, Katharine. ‘Securitization and Legitimacy in Global Media Governance’, in Ingrid Volkmer (ed.) The Handbook of Global Media Research (Oxford: Wiley-Blackwell, 2012), 143–155.
Schiller, Herbert. Communication and Cultural Domination. (White Plains, New York: International Arts and Sciences Press, 1976)
Spivak, Gayatri Chakravorty. A Critique of Postcolonial Reason: Toward a History of the Vanishing Present. (Cambridge: Harvard University Press, 1999)
———. Death of a Discipline (New York: Columbia University Press, 2003)
Star, Susan Leigh. ‘The Ethnography of Infrastructure’, American Behavioral Scientist 43.3 (1999): 377–391.
Stiegler, Bernard. ‘Anamnesis and Hypomnesis,’ Ars Industrialis (2007), https://arsindustrialis.org/anamnesis-and-hypomnesis
———. Acting Out (Stanford: Stanford University Press 2009).
———. ‘Telecracy Against Democracy’, Cultural Politics: An International Journal 6.2 (2010): 171–180.
Warner, Michael. ‘Publics and Counterpublics’, Public Culture 14.1 (2002): 49–90.
Winseck, Dwayne and Pike, Robert. ‘The Global Media and the Empire of Liberal Internationalism, Circa 1910–30’, Media History 15.1 (2009): 31–54.
Wolfe, Thomas. ‘New International Information Order: The Developing World and the Free Flow of Information Controversy’, Syracuse Journal of International Law and Commerce 8.1 (1980): 249–264.