Mark Andrejevic
Pomona College
[Abstract]
Recent debates over the fate of automated weaponry raise the question of pre-empting pre-emption: might it be possible to thwart the seemingly ineluctable development of so-called ‘killer robots’ that can respond to perceived threats more efficiently and rapidly than humans? The processes of disarmament and pre-emption collided in the ‘bold action’ of a top United Nations official who issued a call to ban the ominously acronymed Lethal Autonomous Weapons (LAWs). ‘You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control,’ UN Director-General Michael Moller told the participants in a 2014 conference on killer robots (Agence France Presse, 2014). The difference between LAWs and other lethal weapons lies in the command decision – that is, the final determination regarding whether to fire (bomb, destroy, etc.). Whereas the command decision has conventionally incorporated a human at some point in the command chain, the promise of the LAW is to codify priorities so that the human element can be programmed in advance (and thereby bypassed). Suggestively, automated application has long been a fantasy of the law – that is, the prospect that a law might carry within itself the principle of its application, thereby obviating the need for the all-too-human category of judgement. LAWs, in a sense, literalise the fantasy of automated application by trying, sentencing, condemning and executing all at once.
In more general terms, the promise of LAWs recapitulates that of frictionless automation, in which the resistance that slows down decision-making processes takes the form of humans themselves. We, in all our sluggish, fleshly humanity, are gumming up the machine by preventing it from operating as efficiently as it might otherwise do, freed from the vagaries of our desires and the hesitations of our decisions. The fantasy of friction-free capitalism outlined by Bill Gates (1995), for example, is one in which intelligent ‘agents’ speed up the consumption process, seeking out information about products, prices, and eventually about human desire itself so that it can be fulfilled automatically. This same fantasy underwrites current developments in predictive analytics designed to distribute goods to particular locations before they have been ordered – to know what consumers want better than they themselves do. The prospect of LAWs envisions something similar: a process of automated warfare that can take place in an ongoing fashion at a pace that outstrips the limitations of human command and control. The friction-free conceit behind a LAW is that it can ‘outperform and outthink a human operator’ (Foust, 2013). As one university researcher put it, in what sounds like a parody of contemporary Gradgrindianism:
If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm… A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run. (Foust, 2013)
The same logic can be turned around on humans themselves through a process that might be described as self-droning: finding ways to transform humans into networked, sensing devices. Consider, for example, the HSARPA-funded cortically coupled computer vision system that seeks to make human image scanners more efficient by tracking brain responses in real time. The goal is to make intelligence analysts, among others (including shoppers, of course), more efficient by bypassing the need for conscious recognition. The program’s lead researcher, Paul Sajda, claims to be able to show images of drone footage or surveillance satellite photos to analysts more rapidly than they can consciously process them, using their brains, hooked up to EEG monitors, as a detection device. The resulting technology, according to researchers, can at least triple search speeds. Sajda describes it this way: ‘The system latches on to individual perceptions and trains the computer to know what the user means by “interesting”‘ (Daley et al., 2011). Building on this research, the U.S. Army is reportedly interested in creating a direct interface from drivers’ brains to automated forms of reaction and response.
A driver might see something peculiar on the roadside. Maybe it is an improvised explosive device. His C3Vision headgear would register the brain waves associated with the suspicious object and inject them into the vehicle’s driving system. When the system sees other things out there that look similar, it would automatically evade them. Likewise, security guards might use such gear to spot suspicious activity on surveillance video. (Daley et al., 2011)
Related research explores the ability of such systems to improve response times in jet pilots: the construction of LAWs by other means.
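To make the triage logic concrete, here is a minimal sketch of how such a cortically coupled search might route frames: images are flashed faster than conscious review, and only those whose EEG-derived ‘interest’ score crosses a threshold are queued for an analyst. Everything in it is hypothetical – the scoring function is a random stand-in for a trained decoder, and the file names and threshold are invented – but it shows where the claimed multiplication of search speed would come from.

```python
import numpy as np

rng = np.random.default_rng(0)

def eeg_interest_score(epoch: np.ndarray) -> float:
    """Stand-in for a trained EEG decoder: returns a 0-1 'interest' score."""
    return float(rng.random())  # placeholder, not a real classifier

def triage(frames, threshold=0.9):
    """Flash each frame, score its (fake) EEG epoch, flag high scorers."""
    flagged = []
    for frame in frames:
        epoch = rng.standard_normal(64)  # hypothetical 64-channel sample
        if eeg_interest_score(epoch) >= threshold:
            flagged.append(frame)
    return flagged

frames = [f"frame_{i:04d}.png" for i in range(1000)]
hits = triage(frames)
# Only the flagged fraction ever reaches conscious review -- the source of
# the claimed gain in search speed.
print(f"{len(hits)} of {len(frames)} frames flagged for review")
```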
Unsurprisingly, in our convergent world, the technology is also envisioned to have consumer applications: a miniaturised, wireless version of the device might be used to identify consumer items or even specialty shops that catch your fancy as you walk down a city street. ‘Just a quick glance at a dress in a window, for instance, might elicit a neural firing pattern sufficient to register with the system. A program could then offer up nearby stores selling similar items or shops you might want to investigate’ (Daley et al., 2011). It sounds like a ready-made app for an EEG-equipped Google Glass, promising to realise the fantasy that neuromarketers have been pushing: a direct feedback system routed through the affective register to bypass self-conscious thought altogether. If Bill Gates envisioned automated consumption via ‘intelligent agents’ that determined our tastes and shopped for us, the C3 system promises to turn us into our own intelligent agents by bypassing the forms of conscious reaction and deliberation that threaten to introduce ‘friction’ into the system.
The goal of aligning these examples with one another is to highlight a shared logic that coalesces around a version of experience that literalises the post-psychoanalytic disentanglement of language and desire. A particular version of the materialisation of desire – its subtraction from the realm of language and therefore its ‘post-humanisation’ – fits neatly with the forms of monitoring and manipulation envisioned by the coming generation of affective applications and platforms. What model of experience corresponds to this reconfiguration and generalisation of desire? The work of Ian Bogost led me to this question in reverse, largely through the attempt to discern what the appeal of the model of experience he proposes might be. He raises the relevant question in the subtitle of his 2012 book, Alien Phenomenology: Or What It’s Like to Be a Thing. In a sense, being a thing is precisely what the C3 system starts to envision. Bogost proposes an object-neutral definition of experience under which we might subsume all forms of interaction in terms of an expression familiar to the denizens of the data mine: the monitoring of the ‘exhaust’ of things. As Bogost puts it, ‘The experience of things can be characterized only by tracing the exhaust of their effects on the surrounding world’ (2012: 100). That is, things can only experience other things by tracing their ‘exhaust’ – and their own experience is whatever reaction they might have to this exhaust, a reaction that generates further exhaust. We might describe this as the meta-datafication of everything, a sensor-based model of experience, insofar as anything that is, in any sense, impacted by anything else becomes, in the broadest interpretation of the term, a sensor. I’m inclined to push this reframing a bit further and call it a kind of drone experience: in part because of the agentic sense with which Bogost infuses this flattened-out concept of experience, in part because of his fascination with various imaging technologies, and in part because of the treatment of the object as a probe – the attempt to experience the experience of the object that motivates the analysis.
The drone model of experience invokes the notion of a sensor-database-algorithmic formation that might be summed up by using the figure of the drone broadly construed: not just in the form of a flying, weaponised, surveillance device, but as the combination of a distributed sensor equipped with automated data analysis and response capabilities. Discussions of ‘big data,’ ‘data mining,’ and new forms of monitoring and surveillance often emphasise the figure of the database: the place where the data is stored, rather than that of the infrastructure that makes data collection possible. In part this is because of the distributed and heterogeneous character of the various sensors that comprise the monitoring ‘assemblage’ – but in part it is because of what might be described as the turn away from infrastructure that has characterised the fascination with so-called ‘immaterial’ forms of activity. This turn is echoed in the rhetoric of immateriality that characterises discussions of the ‘cloud’ (in ‘cloud computing’) and cyberspace more generally. Such formulations are symptomatic of anti-infrastructural thought. The figure of the drone, by contrast, focuses attention back upon the interface device that serves as mediator for information collection, automated analysis, and automated response at a distance.
The underlying claim here is that one of the reasons the figure of the drone has so rapidly captured the popular and media imagination is that, in addition to reviving what might be described as the ballistic imaginary once associated with technological gadgetry (in the Popular Science vision of personal jet packs and rocket-ships), it encapsulates the emerging logic of portable, always-on, distributed, ubiquitous, and automated information capture: the droning of experience and response. The promise of the drone as hyper-efficient information technology is four-fold: it extends, multiplies, and automates the reach of the senses or the sensors; it saturates the time and space in which sensing takes place (entire cities can be photographed 24 hours a day); it automates the sense-making process; and it automates response. In this regard, the figure of the drone, generalised, stands for that of the indefinitely expandable and distributable probe that foregrounds the seemingly inevitable logic of algorithmic decision-making. The model of the signature strike (directed toward targets that ‘fit a profile’ rather than uniquely identified targets – that is, named and identified individuals) is an increasingly familiar one in the realm of data mining generally – whether for the purposes of health care, surveillance, marketing, policing, or security. Identification takes a back seat to data analytics: one needn’t know the name of a particular individual to target him or her, merely that he or she fits the target profile. This is why the category of Personally Identifiable Information is becoming an increasingly vexed one. Data analytics are subsumed and accounted for by the broader ensemble represented by drone logic, which unites sensing, analytics, and response. The figure of the drone, then, serves as an icon of the (inter)face of new forms of monitoring, surveillance, and response: an exemplar of emerging forms of digital ‘interactivity.’
It is with this broader conception of the drone in mind that we might approach the affective frontier of data collection and monitoring: the fascination with so-called mood monitoring and sentiment analysis. The hallmark of the drone as a material object is – like so many of the digital devices that have come to permeate the daily life of technologically saturated societies – its mobility and miniaturisation, that is, its anticipated efficiency as a ubiquitous, always-on probe. We might use the notion of the signature strike and its analogue in target marketing as an example: identification falls by the wayside, as do those aspects of the legacy version of experience associated with accounts of intentionality, motivation, and desire, in ways that recall Chris Anderson’s paean to the power of big data: ‘Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people [and things] do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity’ (2008). Such logic, like the signature strike, isn’t interested in biographical profiles and backstories; it does not deal in desires or motivations: it is post-narratival in the sense conjured up by Bogost as one of the virtues of Object Oriented Ontology: ‘the abandonment of anthropocentric narrative coherence in favor of worldly detail’ (2012: 42).
Experiencing the data flow becomes, necessarily, the job of various kinds of distributed objects. Perhaps this is the appeal of Bogost’s theory in the digital era: the excavation of the forms of post-human experience that characterise automated data collection. The interest in capturing all available data – as exemplified by a fascination with open-ended, random lists – embraces what Bogost describes as ‘a general inscriptive strategy, one that uncovers the repleteness of units and their interobjectivity’ (2012: 38). He calls this process ontography: the writing of being, which ‘involves the revelation of object relationships without necessarily offering clarifying description of any kind’ (2012: 38). This formulation bears a certain resemblance to Anderson’s diagnosis of the ‘end of theory’, wherein data mining might generate actionable but emergent information that is both unpredictable and inexplicable (in the sense that it neither needs nor generates an underlying explanatory model).
The defining attribute of the kind of ‘knowledge’ envisioned in Anderson’s Big Data manifesto is the process of emergence itself – the fact that data mining by definition generates un-model-able outcomes and thereby puts emergence to work. So we start to see the outlines of a particular form of so-called knowledge emerging: a post-comprehension, post-referential (in the sense of no longer referring to an underlying cause or explanation), data-exhaust-driven way of ‘knowing.’ It is with this in mind that I want to turn to the theme of emotion and relate it to the non-human version of materiality outlined by, for example, Jane Bennett (2009). She describes a version of affect that is ‘not only not fully susceptible to rational analysis or linguistic representation but that is also not specific to humans, organisms, or even to bodies: the affect of technologies, winds, vegetables, minerals…’ (61). This is a version of affect that manifests itself in the form of exhaust described by Bogost and that, I think, lends itself to emerging, data-driven strategies of post-narratival analysis: tracing the exhaust of an unfolding litany of actants and interactants – of complex webs of interactions ‘too big to know,’ as David Weinberger (2011) puts it.
Bennett’s (2009) version of the endless unfolding of material detail surely expands beyond the realm of narrative containment – the ongoing chain of connections toward which her account gestures is both breathtaking and frustrating. Any outcome is the result of a potentially infinite array of agentive factors. Her (inadvertently) complementary gesture to the ‘end of theory’ manifesto is a post-theoretical fascination with a kind of infinite regression: the attempt to contain everything so as to eschew the ostensible evils of abstraction. The growing reach of the big data database and the breadth of Bennett’s ambition to take into account what she describes as ‘an interstitial field of non-personal, ahuman forces, flows, tendencies, and trajectories’ (61) share an impulse toward totality, although Bennett retains the model of narrative closure while frustrating it utterly. The prospect of unfolding the full list of participants in a particular event or outcome is an interminable one. Similarly, the database in its ideal-typical form approaches the levelling, allegedly democratising ambition of Bennett’s vibrant materialism, allowing a promiscuous jumble of factors to rub shoulders.
With these affinities in mind – between correlational forms of data mining and post-narrative, post-explanatory modes of analysis – the remainder of this article sets out to explore their relevance to the topics of affective computing and sentiment analysis and the role of so-called mood reading in the process of affective modulation (Clough, 2009). Consider some examples from the realm of mood-mining as one frontier of data collection in the service of so-called affective computing. Microsoft’s ‘MoodScope’ initiative seeks to turn smart phones into mood sensors, not by adding a dedicated sensor, but by tracking usage patterns and their correlation with self-reported mood. By correcting their models over time, the researchers eventually automate the prediction process and claim to move from 63 percent accuracy to 93 percent accuracy (LiKamWa, 2012: 1). As the project’s researchers put it, ‘we find smart phone usage correlates well with users’ moods…Users use different applications and communicate with different people depending on their moods. Using only six pieces of information, SMS, email, phone call, application usage, browsing and location, we can build robust statistical models to estimate mood’ (LiKamWa, 2012: 2). Of course, the goal of inferring mood is, for Microsoft, a commercial one that serves the generation of recommendation algorithms and marketing strategies that monitor and influence shifting consumer preferences.
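The correlational logic the researchers describe can be sketched in a few lines. What follows is a toy illustration, not the MoodScope implementation: the feature counts and mood scores are synthetic, and the six columns stand in loosely for the six categories named above. The mechanics, though – an ordinary least-squares regression from usage counts to self-reported mood, refit per user over time – are more or less the whole of such a ‘robust statistical model.’

```python
import numpy as np

rng = np.random.default_rng(1)

# Six synthetic daily usage counts standing in for the six categories:
# SMS, email, calls, app usage, browsing, location changes.
n_days = 60
X = rng.poisson(lam=[20, 15, 5, 40, 30, 8], size=(n_days, 6)).astype(float)

# Synthetic self-reported mood (say, a daily 'pleasure' score), generated
# from a hidden linear rule plus noise so the regression has something
# real to recover.
true_w = np.array([0.02, -0.01, 0.05, 0.01, -0.02, 0.03])
y = X @ true_w + 2.5 + rng.normal(0, 0.2, n_days)

# Least-squares fit with an intercept; 'personalisation' here is nothing
# more than refitting these weights per user as new self-reports arrive.
X1 = np.column_stack([X, np.ones(n_days)])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

today = np.array([25, 10, 7, 55, 20, 9, 1.0])  # today's counts + intercept
print(f"estimated mood today: {today @ w:.2f}")
```

The model never asks what any message says or what any mood means; it maps counts to a number and intervenes on that basis.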
We might describe MoodScope as partaking of drone logic (and drone experience): it envisions a network of mobile, distributed, always-on sensors that underwrite automated forms of data collection, processing, and response (targeting). The invocation of ‘mood’ should not distract – it is a placeholder that does not refer to an underlying state but simply to a pattern of correlations: the nexus of a particular set of behaviours (as monitored by smart phone sensors) and the measured probability of a particular response. The next logical step for the development of such an app is to bypass the placeholder of ‘mood’ altogether, simply extrapolating from patterns of activity to predict susceptibility to particular prompts and appeals during particular times within specified contexts. This modality of prediction or influence operates at a machinic level, linking flows of activity to patterns of response in order to get something done (generate a response or action of some kind). The point is not interpretation (of mood, subjective state, evidence of desire) but intervention in flows of viewing, clicking, spending, consumption. This way of thinking lends itself to the machinic imaginary of scholars such as William Bogard (1998), who, quoting Deleuze and Guattari, notes that, ‘The social machine… is literally a machine, irrespective of any metaphor, inasmuch as it exhibits an immobile motor and undertakes a variety of interventions: flows are set apart, elements are detached from a chain, and portions of the tasks to be performed are distributed’ (54). The notion of an ‘immobile motor’ neatly invokes the figure of the ‘exhaust’ of things. The process of ‘sensorisation’ works to codify these flows for the purpose of intervention. As Daniel Smith (2007) puts it, these networks of affect (and the information networks through which they flow) become ‘infrastructural’: ‘They are, if I can put it this way, part of the capitalist infrastructure; they are not simply your own individual mental or psychic reality. Nothing makes this more obvious than the effects of marketing, which are directed entirely at the manipulation of the drives and affects: at the drug store, I almost automatically reach for one brand of toothpaste rather than another’ (74).
The infrastructure of affect continues to be ‘built out’ by the growing platform of affect apps. Apple has already patented technology that relies on an embedded tactile heartbeat sensor to identify users and monitor their moods (Calorielab, 2010). The technology combines the promise of convenience with enhanced monitoring capability: the phone can be unlocked just by picking it up, but the monitor, unlike a fingerprint scanner, simultaneously gathers information to potentially serve a host of marketing, security, and medical functions. As one news account put it, ‘By monitoring your heartbeats, the device will also be able to tell how you’re feeling (better than you can tell yourself, presumably), what you’ve been eating and if you’ve just come back from a jog’ (Calorielab, 2010). The vectors for capturing, monitoring, and intervening in the flows that link ‘mood’ and response are multiple and expanding alongside the various registers of interactivity: they piggyback on multiplying applications and the behaviour patterns these elicit.
Similarly, the company that developed the technology that powers Apple’s Siri is working on adding voice recognition ID systems that simultaneously incorporate mood detection. Soon Siri will respond not just to what you say, but to its conception of how you feel. Once again the promise combines convenience with the prospect, at least in this case, of commercial monitoring. As the company’s marketing chief put it in an interview: ‘If your car thinks you sound stressed, it may SMS your office to say you’re late or even automatically suggest another route that avoids traffic’ (Eaton, 2012). But the company is also looking to monetise the technology: ‘What if when you ask Siri for information about a movie, she works out that you’re sad and recommends a comedy film that you otherwise wouldn’t have seen, paired with an ad campaign?’ (Eaton, 2012).
And the litany of mood apps goes on: MIT has spun off a company called Affdex that uses facial recognition technology to gauge emotional response. It has been used by companies like Forbes to crowdsource readers’ responses to ads shown on the company’s website. Yes, soon not just the TV, but the ads, the music, the magazines and books will be watching, analysing, and responding in the affective register. A company called Sensum develops apps that use galvanic skin response to measure stress levels. Microsoft is building emotion recognition into its Kinect device, so that next-generation games (and, yes, ads) will be able to react to facial expressions and monitor heart rate. The anticipated result is, as a somewhat breathless account puts it, that ‘games will react to your emotionality, and even your cars will route you to entirely new destinations based on how you’re feeling. The next generation of advertising will determine how you’re feeling…And it’s not just the question of detecting your mood, it’s all about how this leads the person expressing the mood to discover new information’ (Eaton, 2012). It also leads to the prospect of more effectively sorting, targeting, and influencing in a variety of registers for a range of purposes.
Coming full circle, security is one of the pioneering and recurring applications of affective monitoring, thanks in no small part to Department of Homeland Security funding. The DHS has funded Cambridge-based Draper Labs ‘to develop computerized sensors capable of detecting a person’s level of “malintent” – or intention to do harm’ as part of the ‘Future Attribute Screening Technologies’ program (Segura, 2009). The goal is to ‘detect subjects’ bad intentions by monitoring their physiological characteristics, particularly those associated with fear and anxiety,’ according to the DHS (Segura, 2009).
Possible technological features of FAST include ‘a remote cardiovascular and respiratory sensor’ to measure ‘heart rate, heart rate variability, respiration rate, and respiratory sinus arrhythmia,’ a ‘remote eye tracker’ that ‘uses a camera and processing software to track the position and gaze of the eyes (and, in some instances, the entire head),’ ‘thermal cameras that provide detailed information on the changes in the thermal properties of the skin in the face,’ and ‘a high resolution video that allows for highly detailed images of the face and body … and an audio system for analyzing human voice for pitch change’ (Segura, 2009). The project is based on another DHS project called ‘Hostile Intent,’ which ‘aims to identify facial expressions, gait, blood pressure, pulse and perspiration rates that are characteristic of hostility or the desire to deceive’ (Segura, 2009).
Researchers are developing applications that claim to be able to identify a person’s emotional state by listening in on mobile phone conversations. Some companies in the United States already use the system in their call centres. Researchers are testing the software’s use in diagnosing medical conditions like autism, schizophrenia, heart disease and even prostate cancer (DiscoveryNews, 2013). One could continue indefinitely in this register, since emotion detection covers the gamut of securitisation applications: economic, criminal, health, social, and so on. And the sensor array proliferates across the various forms of drone devices, broadly construed, that circulate amongst us, upon us, with us.
It is just one step from these examples to what might be described as the redoubling of drone logic: equipping drones with ‘malintent-detection’ sensors. Drones already target strikes based on mobile phone signatures, using the device to identify a particular individual. But drone logic pushes beyond strategies of identification, in which a device comes to represent a particular target, to strategies of pre-emption, in which a device identifies potentially threatening or risky affective states with the potential to result in action.
In this regard, the invocation of terms like mood, emotion, or sentiment (or even ‘malintent’) is not meant to speak to a particular conception of subjective interiority, nor even to carry any definite, discernible, stable referential content, but rather to mark the intent of detecting, predicting, and influencing response in a register other than that of reflexive, self-conscious communication – indeed, in a sense, to bypass this register in any respect other than as a potential source of more raw material for pattern analysis. The promise of bypassing this register is to bypass the vagaries, pathologies, deceptions, and self-deceptions of self-consciousness: to read affective response directly and thereby to develop strategies for intervening in it. In this context, speech, to take an example, is not about content, but about voice stress, or detectable word patterns that correlate with signature patterns – as in a signature strike. That is, the strategies of influence mobilised in response to detected ’emotional’ states may take the forms of standard types of communication, but the register in which their potential effectivity is posited is other than the ideological – the narrative, the content-based. In Papoulias and Callard’s (2010) formulation, the intervention ‘is seen as proceeding directly from the body – and indeed between bodies – broadly construed here – without the interference or limitation of consciousness, or representation’ (37).
In her critique of the turn to affect, Ruth Leys characterises the split at work here in terms of the ‘presumed separation between the affect system on the one hand and signification or meaning or intention on the other’ (2011a: 800). It is a presumption she is concerned about not least because it smuggles in the very binaries these theorists imagined they had surpassed: ‘in spite of their explicit hostility to dualism, many of the new affect theorists succumb to a false dichotomy between mind and body’ (2011a: 801). This dualism is characteristic of ‘post-comprehension’ strategies of influence and ‘literacy’ (brain reading and body reading). The ‘mind’ (intentional, conscious, available for rational cognition) may have gotten much of the attention when it comes to information processing and communication, but it is the body’s language that is figured as efficacious. As Leys puts it, affect is figured as ‘prior to ideology’: ‘an inhuman, nonsignifying force that operates below the threshold of intention, consciousness, and signification’ (2011a: 802).
The turn to affect in the strands of theory outlined earlier is thus framed as a (re)turn to the body subsumed to the status of an object with particular types of experience, ones that take into account what Thrift describes as ‘the way that political attitudes and statements are partly conditioned by intense autonomic bodily reactions that do not simply reproduce the trace of a political intention and cannot be wholly recuperated within an ideological regime of truth’ (as quoted in Leys, 2011b: 436). This model of affective communication as immediate influence is rehabilitated not least in the strategies of neuromarketers and the sentiment analysts (as is the temporal and conceptual split between affective response and post hoc rationalisation: the attempt to narrativise the impulse that always comes after the fact). Although data mining is agnostic about this split, allegedly eschewing models of causation and explanation, in this very refusal it has already chosen sides.
Something related takes place in the development of so-called sentiment analysis: the attempt to data mine expressed sentiment on the social web in real time so as to intervene in and influence an aggregate conception of the Internet’s ‘feeling tone.’ The field is popularly described as one in which ‘the vagaries of human emotion are translated into hard data’ (Wright, 2009). But this description is not quite right: the goal of marketers is not to gauge personal, individual ‘human’ emotion, but rather to probe an affective landscape without having to pore over the individual contributions of millions of Internet users. Sentiment analysis relies on technological advances that make it possible to sift through all these forms of expression, to treat them as measurements of a capability to affect or a susceptibility to influence, without actually reading them. The goal is a kind of thin-slicing or pulse-reading of the Internet as a whole. Pioneering companies in the field develop applications that trawl through Twitter feeds, blogs, social networking sites, online forums, bulletin boards, and chat rooms, probing the emotional pulse of the Internet. The industry places a premium on speed and volume: processing as many posts and messages as possible in real time.
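A minimal sketch suggests how little ‘reading’ such a pipeline involves. The lexicon and posts below are toy inventions, and real systems are considerably more elaborate, but the principle is the same: each post is scored against a word list and discarded, and only the aggregate pulse survives.

```python
from collections import Counter

# Toy sentiment lexicon: word -> polarity weight. A real system would use
# thousands of weighted terms; the mechanics are identical.
LEXICON = {"love": 1, "great": 1, "happy": 1,
           "hate": -1, "awful": -1, "angry": -1}

def score(post: str) -> int:
    """Sum lexicon weights over tokens; everything else is ignored."""
    return sum(LEXICON.get(tok, 0) for tok in post.lower().split())

stream = [
    "love the new phone, great battery",
    "awful service, angry and done with this brand",
    "meeting at noon",  # scores 0: invisible to the pulse
]

# Keep only the aggregate: no post is stored, quoted, or interpreted.
pulse = Counter(("positive" if score(p) > 0 else
                 "negative" if score(p) < 0 else "neutral") for p in stream)
print(pulse)
```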
As in the case of the app examples, the model is not a descriptive, referential one (that would aim to accurately describe how individuals are feeling) but a predictive, correlational one. Applied to sentiment analysis, the goal of data mining is both pre-emptive and productive: to minimise negative sentiment and maximise emotional investment and engagement – not merely to record sentiment as a given but to modulate it as a variable and thereby to influence the forms of behaviour with which these shifts are associated. The process relies on giving ‘populations over to being a probe or sensor’ (to borrow Patricia Clough’s formulation) to provide the raw material for tracking the emotional indicators that correlate with desired outcomes – and for developing ways of exploiting them (Clough, 2009: 53).
What is suggestive about the proliferation of apps in the affective register is the way they redouble all content in the form of post-content ‘knowledge.’ Recall the goal of MoodScope or the FAST program: not to read all messages, or listen to all calls, but to piggyback on content to get machine-sortable metadata: you may use your apps or your email to collect information or communicate with others, but these uses generate patterns that, without your conscious knowledge, communicate a user state (and an aggregate state) that can then be correlated with your responses. We might describe this monitoring logic as the meta-datafication of everything: content becomes metadata when it is not read (for significance), but sorted, mined, and correlated (for useful patterns). This is why no human at Google reads your email. Such applications use the placeholder of mood, or affective state, to generate correlations that underwrite more direct modes of influence – techniques for enhancing the power of acting or being acted upon. That is, the goal is to define a state of receptivity in which the broadened and flattened conception of experience allows all kinds of collected data to commingle. The result is a litany of content – in its machine-readable form – including patterns of search, typing speed, Web sites visited, patterns of communication (who one emails, how frequently), movement throughout the course of the day, barometric pressure, sunspots (why not?), magnetic fields, and on and on, limited only by the capabilities of the growing sensor array. I am using the term post-comprehension somewhat freely here to designate the forms of too-big-to-know knowledge that represent the displacement of causation or explanation by correlation. The descriptor ‘post-comprehension’, then, refers to the goal of discerning patterns that are neither conscious nor meaningful to users. The term refers also to the detection of receptivity to particular influences – whether such and such a ‘mood’ – or, more properly speaking, the patterns of use which the placeholder of mood is meant to designate – correlates with a heightened tendency to respond in particular ways. Additionally, the notion of post-comprehension refers to the fact that the generation of these patterns is portrayed as an emergent one, and is, in this respect, unmodellable, unanticipatable, and, potentially, un-reverse-engineerable. Why post-comprehension and not pre-comprehension? Because the goal of explaining is not deferred but dispensed with: there is no background assumption that, in the end, the infinite database will yield total comprehension. Once everything is coded, it is not understood, but simply processed: the ongoing interventions of the (total) immobile motor.
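The point can be made with a deliberately trivial sketch: a single piece of content-free metadata (here, a synthetic count of messages sent per hour) correlated with a measured response (a synthetic click rate). No message is ever inspected, and no explanation is offered for why the pattern holds; the correlation alone is the actionable ‘knowledge.’

```python
import numpy as np

rng = np.random.default_rng(2)

# Metadata only: how often a user sends messages, never what they say.
msgs_per_hour = rng.poisson(6, size=500).astype(float)

# A synthetic response variable built so a pattern exists to be 'found':
# heavier messagers click marginally more often, plus noise.
click_rate = 0.01 * msgs_per_hour + rng.normal(0, 0.02, 500)

r = np.corrcoef(msgs_per_hour, click_rate)[0, 1]
print(f"correlation between messaging tempo and click rate: r = {r:.2f}")
# Actionable (target the heavy messagers) without any account of why the
# pattern holds: correlation standing in for explanation.
```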
These forms of opacity, or unmodellability, characterise the emerging asymmetries of a big data divide. From a research perspective, Boyd and Crawford (2012) have characterised the divide between ‘the Big Data rich’ (companies and universities that can generate or purchase and store large datasets) and the ‘Big Data poor’ (those excluded from access to the data, expertise, and processing power), highlighting the fact that a relatively small group with defined interests threatens to dominate and control the research agenda. The notion of a ‘big data divide’ needs to be extended to incorporate a distinction between ways of thinking about data and putting it to use. That is, it needs to acknowledge the consequences of emerging forms of opacity and asymmetry: between those who are able to put to use the unanticipatable and inexplicable correlations generated by the data mining process and those who are subject to the forms of sorting and exclusion they license. This is also a divide between those who seek to exploit detected correlational patterns of affective response and those whose actions are subject to the forms of inferential data mining enabled by the growing sensor array and the expanding database.
Despite the rhetoric of personalisation associated with data mining, it yields predictions that are probabilistic in character, privileging decision making at this level. Moreover, it ushers in the era of what might be called emergent social sorting: the ability to discern un-anticipatable patterns that can be used to make decisions that influence the life chances of individuals and groups. Notions like that of ‘informed consent’ when it comes to online tracking and other types of digital-era data surveillance are rendered largely meaningless by the logic of data mining, which proposes to reveal unanticipated and unpredictable patterns in the data. At a deeper level, the big data paradigm proposes a post-explanatory pragmatics (available only to the few) as superior to the forms of comprehension that digital media were supposed to make more accessible to a greater portion of the populace.
In this regard, the privileging of correlation and prediction – like the figure of the drone — leads us back to issues of infrastructure. If, as Weinberger (2011) puts it, the smartest person in the room is the room, in the era of post-comprehension knowledge, it matters who owns, operates, and controls the room. It is worth emphasising that such forms of asymmetry and opacity are the specific goal of so-called affective forms of context awareness. At the moment when access to traditional forms of understanding and evidence is enhanced by the new technology, these are treated as ostensibly outdated.
Practices of data-driven affect mining anticipate a context in which only the few will have access to useful forms of ‘knowledge’ that are not just unavailable to the majority, but incomprehensible. Thus, there is no way for individual users to anticipate how information about them might prove salient for particular forms of decision-making. Isn’t this the endgame logic of the ‘unmanned’ LAW? The figure of the drone augurs not simply prosthetic enhancement but displacement: the cultivation of forms of automation that result not simply in synthetic perception (Virilio, p. 58), but in synthetic action. In this regard, the figure of the drone comes to stand for a particular kind of alienation: of perception and practice that is becoming increasingly familiar in our auto-sorted, curated, algorithmically directed information environment. We come to experience the re-processing of our actions, desires, and responses in an unrecognisable form directed back upon us in the service of ends built into the infrastructure. In the contemporary theoretical climate, the familiar critique of alienation (as a critical conceptual tool) is that it introduces an outdated form of (pre-post-) humanism (and thus, of the subject). When everything is alien, alienation, of course, evaporates. What if the critique of alienation invokes, rather, the spectre of what Smith refers to as ‘an ethics of immanence’ that will criticise anything that ‘separates a mode of existence from its power of acting’ (2007: 68)? Rather than proposing the alien as a starting point, in the face of the developments outlined above, why not alienation? To invoke Guy Debord’s diatribe against Jean-Marie Domenach’s dismissal of the very concept of alienation: ‘Let us speak vulgarly since we’re dealing with priests: alienation is the point of departure for everything – providing that one departs from it’ (Situationist International, 1966).
Biographical Note
Mark Andrejevic teaches and writes about popular culture, surveillance, and digital media. He is the author of Infoglut: How Too Much Information Is Changing the Way We Think and Know, as well as two other books and a variety of articles and book chapters. He is currently writing about the droning of the social.
References
- Agence France Presse. ‘UN talks take aim at “killer robots,”‘ The Express Tribune, May 13, 2014. https://tribune.com.pk/story/707899/un-talks-take-aim-at-killer-robots/.
- Anderson, Chris. ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.’ Wired Magazine, June 23, 2008, https://www.wired.com/science/discoveries/magazine/16-07/pb_theory. Accessed 30 August 2008.
- Bennett, Jane. Vibrant Matter: A Political Ecology of Things (Durham: Duke University Press, 2009).
- Bogard, William. ‘Sense and Segmentarity: Some Markers of a Deleuzian‐Guattarian Sociology.’ Sociological Theory 16.1 (1998): 52—74.
- Bogost, Ian. Alien Phenomenology, or, What it’s Like to be a Thing (Minneapolis: University of Minnesota Press, 2012).
- Boyd, Danah, and Crawford, Kate. ‘Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon.’ Information, Communication & Society 15.5 (2012): 662—679.
- Calorielab. ‘iPhone That Can Detect Your Mood in a Heartbeat’, 2010. https://calorielab.com/labnotes/20100510/iphone-to-recognize-mood-by-detecting-heartbeat/
- Clough, Patricia Ticineto. ‘The New Empiricism: Affect and Sociological Method.’ European Journal of Social Theory 12.1 (2009): 43—61.
- Daley, Jason, Piore, Adam, Lerner, Preston, and Svoboda, Elizabeth. ‘How to Fix Our Most Vexing Problems, From Mosquitoes to Potholes to Missing Corpses,’ Discover Magazine (October, 2011), https://discovermagazine.com/2011/oct/21-how-to-fix-problems-mosquitoes-potholes-corpses/
- Eaton, Kit. ‘Does Your Phone Know How Happy You Are?’ FastCompany.com, June 7, 2012. https://www.fastcompany.com/1839275/does-your-phone-know-how-happy-you-are-emotion-recognition-industry-comes-giddily-age/
- Foust, Joshua. ‘Soon, Drones May Be Able to Make Lethal Decisions on Their Own,’ National Journal (October 8, 2013), https://www.nationaljournal.com/national-security/soon-drones-may-be-able-to-make-lethal-decisions-on-their-own-20131008/
- Gates, Bill, Myhrvold, Nathan and Rinearson, Peter. The Road Ahead (New York: Penguin, 1995).
- Leys, Ruth. ‘Affect and Intention: A Reply to William E. Connolly.’ Critical Inquiry 37.4 (2011a): 799—805.
- Leys, Ruth. ‘The Turn to Affect: A Critique.’ Critical Inquiry 37.3 (Spring 2011b): 434—472.
- LiKamWa, Robert. ‘MoodScope: Building a Mood Sensor from Smartphone Usage Patterns’ (Doctoral dissertation, Rice University, Houston, TX, 2012).
- Papoulias, Constantina, and Callard, Felicity. ‘Biology’s gift: Interrogating the turn to affect.’ Body & Society 16.1 (2010): 29—56.
- Segura, Liliana. ‘Homeland Security Embarks on Big Brother Programs to Read Our Minds and Emotions,’ Alternet (December 8, 2009), https://www.alternet.org/story/144443/homeland_security_embarks_on_big_brother_programs_to_read_our_minds_and_emotions/
- Smith, Daniel W. ‘Deleuze and the question of desire: Toward an immanent theory of ethics.’ Parrhesia 2 (2007): 66—78.
- Weinberger, David. Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room is the Room (New York: Basic Books, 2011).
- Wright, Alex. ‘Mining the Web for Feeling, not Facts’, The New York Times (August 23, 2009), https://www.nytimes.com/2009/08/24/technology/internet/24emotion.html