Interview

FCJ-172 Posthumanism, Technogenesis, and Digital Technologies: A Conversation with N. Katherine Hayles

Holger Pötzsch
UiT Tromsø

N. Katherine Hayles
Duke University

[Abstract]

Holger Pötzsch: Katherine Hayles, your idea of posthumanism is inspired by cybernetics and by a new attentiveness to the body and materiality?

N. Katherine Hayles: Posthumanism as I define it in my book How We Became Posthuman (1999) was in part about the deconstruction of the liberal humanist subject and the attributes normally associated with it, such as autonomy, free will, self-determination, and so forth. What I saw happening in the 1980s and 1990s was the rise of a new way of thinking about human beings that was in flat contradiction to all these attributes; that was what I called posthumanism. One of its manifestations was the idea that if you could capture the informational patterns of the human brain, you could then upload them to a computer and achieve effective immortality. To me this seemed absolutely wrong, even pernicious, because it plays on mere fantasies of cognition and of what constitutes human life. I was, at this point, very concerned to insert embodiment back into the equation. It seemed significant to me that the foremost proponents of this reductionist view of human life, such as Hans Moravec, were not neuroscientists or physiologists, but worked within robotics. As much as the science of robotics has advanced, it is still nowhere near the capacity to reconstruct the complexity of the human brain and its relation to the body and its surroundings. The embodied nature of human cognition is highly relevant to the question of whether downloading a human personality might ever be possible. In my view the answer to this is no. Certainly it will not be possible within the next 50 years.

HP: But you think it might be technologically possible in a remote future?

KH: We currently have no computational platform that approaches the complexity of the human nervous system; neural nets, for example, model synaptic connections but lack any connection to the complexities of the endocrine system and hormonal regulation. And even if we had such a device, the questions of the embodied nature of cognition and of the varying relations enabled by the sensory system would still remain unanswered. Humans are enormously complex systems, and we have no technological system that comes anywhere close to that complexity.

HP: Isn’t it quite reductive to assume that the question of copying a human being onto a hard drive is merely a question of complexity?

KH: Complexity and embodiment together. To say uploading is unlikely is not to deny, however, that computational media and other advanced technologies are changing the conditions of human life. Ray Kurzweil, for example, interrogates the various ways through which technology is already affecting things like life-span, human cognition, sensory systems, and so forth. We cannot draw a clear ontological distinction between human beings and their technical surroundings.

HP: What about the question of politics and agency in this?

KH: Once one starts to focus on how technology enables, for instance, longevity, one immediately becomes aware of the resource question. Simply compare the amount of money spent on life-enhancing and life-prolonging technologies in the US or Western Europe with sub-Saharan Africa. The varying relations between humans and technology are always already invested with politics. Life expectancy and resource allocation are directly related. The problem is never merely technological, but always also social, political, and economic.

HP: If the liberal humanist subject is deconstructed, can we still account for creativity and change?

KH: Why would this deconstruction impede change, creativity, or, as others have claimed, progress? Can we assume 1) that human beings actually can be isolated from their technological or other contexts, and 2) that humans are the only agents capable of complex cognitive operations? I do not think we can. On the other hand, posthumanist thinking might help us to take a new look at the boundaries between what counts as human, animal, machine, or object. A redrawing of this boundary certainly entails highly political questions that can point either in an inclusive and progressive direction or in an exclusionary one.

HP: How does posthumanism change received ideas of agency?

KH: In the version of the human articulated within the liberal-humanist tradition, agency is seen to reside primarily in the individual subject. Individuals can be incorporated into larger structures, but it is ultimately the individual that possesses agency. As we move deeper into a highly technological regime and as the technological infrastructure surrounding us becomes more and more complex, it becomes increasingly obvious that human agency cannot ever be seen in isolation from the systems with which humans are in constant and constitutive interaction. In fact, the idea that human agency is paramount appears to be an illusion; as Bruno Latour and others have pointed out, it is a good corrective to see agency as distributed among both human and non-human entities. This is a primary focus of the emerging field of new materialism that looks into how technological, and also biological and social, processes predispose and channel human action.

HP: Have we ever been anything but posthuman?

KH: Thinkers such as Gilbert Simondon and later Bernard Stiegler have alerted us to the fact that humans have always been integrated into their environment and have co-evolved with it. What is new at the present moment is the unprecedented degree to which we actively build and change these environments. This enables new feedback loops and new forms of amplification between human evolution and technical developments. Take for example human attention. Humans are equipped with two mechanisms of attention: deep and hyper attention. Deep attention has a high threshold for boredom and enables one to engage with a specific task or problem over an extended period of time to develop expert knowledge; hyper attention requires constant gratification yet enables one quickly to scan significant amounts of data to gain an overview or identify certain patterns. Both forms of attention have been with us since the beginning of humankind, and both have specific advantages. Now, with the development of ubiquitously networked digital devices, however, we have created a socio-technical environment that systemically privileges hyper attention. This has profound effects on human cognition and further stimulates the development of hyper attention. Humans with this ontogenetic adaptation actively reconfigure their technical environments in a direction that requires even more hyper attentiveness. The biological, technical, and socio-cultural implications of smart phones are a good example of the mutual amplification of technical devices and human social and neurological co-evolution. This is something I try to get at with the term “technogenesis” in my book How We Think.

HP: Can you describe the particular role of digital technologies in contemporary technogenesis?

KH: Obviously, digital technologies have vastly expanded our ability to communicate, do research, gather information, share, organise, and so on. Digital technologies have brought the technological infrastructure that I have been talking about to an entirely new level. The interfaces connecting humans to their technical surroundings become more and more transparent, while the networks connecting us become more and more ubiquitous. This has profound embodied, and also socio-political and economic, consequences. The global banking system, for instance, is more interconnected today than ever before. This provides increased efficiency, but also opens the system to the dynamics of complex adaptive technological ecosystems where small perturbations can have large consequences, and where machinic actors make decisions that impact the lives of millions of human beings. We saw this in 2007–8 with the start of the global financial crisis. As digital technologies become more and more woven into the fabric of everyday life, a neat division between human and non-human actors and agencies becomes harder and harder to sustain.

HP: What you mention here are mostly systemic impacts of digital technologies. However, they do have an embodied effect as well…?

KH: They do. Dealing with digital technologies on a regular basis has physical and neurological consequences. Due to the enormous plasticity of the human brain, practices invited by ubiquitous digital technologies entail significant neurological changes. I mentioned this above already with reference to a shift in cognitive modes from deep to hyper attention, and the connected socio-technical and biological feedback loops with mutual amplification. Moreover, these effects are more pronounced the younger the cohort.

HP: So introducing, for instance, iPads into kindergartens on a regular basis would not be a good idea?

KH: It would contribute to a technologically enhanced rewiring of children’s brains toward hyper attention at an age characterised by high degrees of neural plasticity. This might help them adapt even better to the socio-technical systems we are currently shaping, but it might come at a significant cost, the consequences of which we do not fully understand at present. We have to take these potential impacts seriously, and especially as teachers we should inspire and alert our students to forms of attention that may not come to them automatically from their environments, rather than going further down the same road they have already taken.

HP: What about digital technologies and the problem of surveillance?

KH: With digital technologies we have the capacity to capture and productively process unimaginable amounts of data. This has both advantages and disadvantages. The Snowden affair has made clear that these technologies make possible forms of surveillance and control that were almost unthinkable prior to the emergence of the Internet. So, the digital definitely has a dark side, but at the same time who would like to give up all the advantages that digital technologies bestow upon us? We need robust political and legal institutions that can mitigate and guard against the significant potentials for abuse by both state and private actors that the increasing ubiquity of digital technologies makes possible.

HP: Could we move on to a discussion of the role of the Humanities in this? What is their possible role in an encounter with digital technology? Is there a danger that they become obsolete?

KH: (laughs) The Humanities have a very important role to play because questions of meaning that the Humanities traditionally consider still have a salient position. Questions of meaning also are central in relation to our uses of digital technologies. Meeting these challenges requires some changes or adjustments in the practices, methods, and theoretical basis of the Humanities. New forms of machine reading, for instance, have opened up whole new areas of research using quantitative approaches to corpora of literature far too large for any human to read. This machine reading does not replace or render obsolete traditional practices such as close reading, but it can supplement them and lead to new insights inaccessible without these technologies. New technologies also facilitate pedagogical changes regarding the roles of teachers and students, moving away from a one-to-many system of dissemination (for example, the traditional lecture) and toward technologically facilitated teaching practices such as the flipped classroom, innovative project work, or new forms of collaborative writing. By these means one can more easily tap into the enormous reservoir of knowledge, creativity and insights students always already bring to the classroom.

This opens the question of how the traditional university system might be changed through new practices enabled by digital technologies, such as massive open online courses, MOOCs. These have tremendous transformative potentials beyond the Humanities that might change contemporary higher education at a fundamental level and on a global scale. When students from anywhere can gain full access to the entire MIT curriculum almost for free, this both enables learning world-wide and poses challenges to received institutional practices. I think academia as we have known it will transform radically and become almost unrecognisable by present standards in the decades to come. Universities are faced with challenges so profound that I suspect they will not exist in their present form for much longer.

HP: Given the systemic privileging of hyper attention and hyper reading in contemporary digital environments you mentioned before, do the traditional Humanities have a particular responsibility to train deep attention and forms of close reading as a counterweight to these tendencies?

KH: I agree with that. I think the traditional Humanities have a special role in cultivating deep attention, the ability to concentrate deeply on a particular subject, with a high threshold for boredom and in-depth expert knowledge as likely outcomes. Deep attention, of course, is not only crucial for serious work in the Humanities but is a cognitive ability essential for almost any kind of advanced work, including the sciences and social sciences. Reading, and in particular the ability to read closely and with full concentration, is a universal skill that applies to every discipline. Given that this is the special province of the Humanities, we would expect that the Humanities would have a special role here. I would like to emphasise once again, however, that digital technologies also enable other forms of reading, such as machine reading, that might open entirely new avenues for research both in the Humanities and in other disciplines.

HP: One example of such a productive use of digital technologies for a Humanities-based inquiry would be the work by Sönke Neitzel and Harald Welzer. During the Second World War, the British secret services wiretapped several cellblocks holding German and Italian prisoners of war. All the conversations among the inmates were recorded over several years, creating a dataset so vast that it became unmanageable for human inquiry. Only after the material had been digitised did it become accessible to scholarly analysis. In their book Soldaten, Neitzel and Welzer detail how techniques such as topic cluster analysis or keyword indexing prepared the ground for deeper scholarly engagement with particular relevant subsets of the whole database. Here, I think, we see some of the possible synergies created through a productive combination of machine reading and close reading that emerges as characteristic of the Digital Humanities.

KH: That’s a fascinating example that illustrates the potential of digital technologies for the Humanities.
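As a brief illustration of the kind of keyword indexing mentioned in this exchange, the following sketch shows how a scholar might flag candidate passages in a digitised transcript corpus for subsequent close reading. It is a minimal, hypothetical example: the directory name, file layout, and query terms are invented for illustration and do not reflect Neitzel and Welzer's actual workflow or tools.

    # A minimal, hypothetical sketch of keyword indexing over a digitised corpus.
    # Assumes one plain-text transcript per file in a local "corpus" directory;
    # the directory name and query terms are invented for illustration only.
    import re
    from collections import defaultdict
    from pathlib import Path

    KEYWORDS = ["prisoner", "aircraft", "surrender"]  # hypothetical query terms

    def build_index(corpus_dir, keywords):
        """Map each keyword to (filename, line number) pairs where it occurs."""
        index = defaultdict(list)
        patterns = {kw: re.compile(rf"\b{re.escape(kw)}\b", re.IGNORECASE)
                    for kw in keywords}
        for path in sorted(Path(corpus_dir).glob("*.txt")):
            lines = path.read_text(encoding="utf-8").splitlines()
            for lineno, line in enumerate(lines, start=1):
                for kw, pattern in patterns.items():
                    if pattern.search(line):
                        index[kw].append((path.name, lineno))
        return index

    if __name__ == "__main__":
        index = build_index("corpus", KEYWORDS)
        # Rank keywords by frequency, then list a sample of passages that a
        # researcher might select for subsequent close reading.
        for kw, hits in sorted(index.items(), key=lambda kv: len(kv[1]), reverse=True):
            print(f"{kw}: {len(hits)} occurrences")
            for filename, lineno in hits[:5]:
                print(f"  {filename}, line {lineno}")

An index of this kind interprets nothing by itself; it only ranks and locates passages, leaving the interpretive work to the scholar, which is precisely the division of labour between machine reading and close reading described above.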

HP: Could we move on to your method of comparative media analysis? As far as I understand, this method aims at maintaining a productive focus on literature, but at the same time points beyond it by arguing that literary analysis is a media-specific practice that has to be supplemented with attention to other medial forms?

KH: Some scholars trained in the traditional Humanities tend to see the Digital Humanities as a threat. They have spent decades developing sophisticated analytical skills, and suddenly it seems as if the Digital Humanities are devaluing these and replacing them with other skills such as coding, programming, etc. So, these people understandably feel antagonistic toward the new trends. In my view, this is a misreading of what the Digital Humanities are about. One of the reasons I wanted to develop the framework of comparative textual media is to show that there are synergies between the traditional Humanities and new digital methods. The print book, after all, is a medium, along with the manuscript, the digital text, and so forth. The apparent division between the traditional and the digital can be rethought within a framework of comparative textual media. This move would also make it easier to form bridges between literature and other media that are not primarily textual. We should understand and productively explore the respective limitations, affordances, and possibilities of different media forms by directing our focus to the specificity of each medium rather than simply looking at ‘the’ content.

HP: Right now you were talking about comparative textual media. In How We Think you use the term comparative media studies. Could you briefly explain how these methodological frameworks relate to one another?

KH: Comparative media studies explicitly include media that are not primarily textual. Comparative textual media is therefore a subfield of comparative media studies.

HP: We have, so far, looked into how digital technologies change human beings and how the Humanities should or could respond to that. But how can we grasp those changes and the possible effects of digital media on a theoretical level? You use the term technogenesis for this purpose. Could we return to this concept and briefly inquire into what it means both in terms of the digital era, but also earlier?

KH: Developing the concept of technogenesis, I follow in the footsteps of Bernard Stiegler, who convincingly argues in Technics and Time that human involvement with technology did not happen at a late stage of human evolution but was there from the beginning of Homo sapiens. As Steven Pinker has argued, there is a link between the evolution of the human nervous system and the growing capacity to use language and to fabricate and use complex tools. The brain, language, and culture, including technology, co-evolved together. Stiegler points out that this co-evolution between formed objects and human beings already took place in the Paleolithic period. To put it in simple terms: we invent things and things invent us. We effectively co-evolve. My concept of technogenesis looks at these processes in the historical present. In particular, I look into the effects of digital technologies on human neurology and behaviour.

HP: You look into this in evolutionary terms…

KH: Evolution is about more than genetic make-up alone; it is also about the influence of culture on shaping human neurology as well as human behaviour. In the late 19th century, James Mark Baldwin argued that evolutionary theory must take into account the feedback loops between genetic evolution, behaviour, and the environment. Species experience an adaptation, and as a result of that adaptation, they change their environment so that it favours that adaptation even more. In this way the adaptation gains even more fitness advantage and spreads even more pervasively through the population. These recursive processes are called the Baldwin effect. If we think about this in terms of digital technologies, we can say that there is not necessarily a genetic change in human neural structures but an ontogenetic change that occurs after one is born. Because of the brain’s extraordinary plasticity, an infant’s brain undergoes synaptogenesis, in which synaptic networks stimulated by the environment strengthen and spread, whereas those less stimulated shrink and diminish. If cultural environments change relatively slowly in relation to human lifetimes, generations will undergo similar ontogenetic changes. If cultural patterns change more rapidly—as has been the case since the development of digital technologies—the ontogenetic changes across generations will vary more widely. Whatever the case, neurological changes after birth become part of the cultural inheritance of a species, laid on top of and interacting with their genetic inheritance.

HP: Could you give an example?

KH: Young people in developed societies tend to reconfigure their environments to favour ontogenetic adaptations such as a growing capacity for hyper attention. As a result, they crave ever more intense informational stimuli, a craving that, for example, takes the form of rapid attentional switching between different media, different sites, and different sources of information. Their reconfigured environments in turn enhance their cognitive ability to take in different information streams, and at the same time increase the pleasurable effects of doing so. Simultaneously, these ontogenetic changes are in constant interaction with inherited genetic tendencies and predispositions. Think, for example, of the age-old fascination with looking at a flickering fire. This ancient practice may well rest on a genetic predisposition, which now is in active interplay with an ontogenetic disposition to channel surf or multi-task with multiple screens open at once.
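The recursive amplification Hayles describes here, and in her account of the Baldwin effect above, can be illustrated with a deliberately simple numerical sketch. This is not a biological or neurological model; all parameters are invented, and the point is only to show how an adaptation and an environment reshaped in its favour can reinforce one another over successive iterations.

    # A toy numerical illustration (not a biological model) of the recursive
    # feedback loop described above: an adaptation spreads, its carriers reshape
    # the environment to favour it, and the reshaped environment accelerates the
    # spread further. All parameters are invented purely for illustration.

    def simulate(generations=15, trait_share=0.1, env_bias=0.0):
        history = []
        for gen in range(generations):
            # The trait's advantage grows with how strongly the environment
            # has already been reshaped in its favour.
            advantage = 0.05 + 0.5 * env_bias
            trait_share = min(1.0, trait_share * (1 + advantage))
            # Carriers of the trait reconfigure the environment so that it
            # favours the trait even more in the next iteration.
            env_bias = min(1.0, env_bias + 0.3 * trait_share)
            history.append((gen, round(trait_share, 3), round(env_bias, 3)))
        return history

    if __name__ == "__main__":
        for gen, share, bias in simulate():
            print(f"generation {gen}: trait share {share}, environmental bias {bias}")

Running the sketch shows both values climbing together at an accelerating rate, a toy version of the mutual amplification between ontogenetic change and a reconfigured technical environment.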

HP: You have recently dealt with questions pertaining to an object-oriented ontology, which you rephrased as object-oriented inquiry…

KH: It is my contention that the Humanities have too long disregarded the materiality of processes. When I encountered Graham Harman’s object-oriented ontology in The Quadruple Object, I felt that finally someone was paying attention to objects. But when I learned more, I found that my enthusiasm was somewhat premature. Although Harman trades on a commonsensical understanding of objects, the crucial idea in his ontology is that objects recede forever from us and we have no ability to know them. Therefore, for my purpose, object-oriented ontology is not moving in a direction I personally would like to see. On the contrary, it is moving in a direction precisely away from a viable and productive attentiveness to the materiality of processes. Take Ian Bogost, for instance, who in his book Alien Phenomenology is inspired by Harman. Bogost is interested in the materiality of processes and devotes a large section to explaining the material basis of a certain kind of camera sensor, a predilection I applaud. Nevertheless, he seems to accept Harman’s idea that objects recede infinitely from our ability to know them, so he tries to smooth over the discrepancy by saying that his description of objects is merely metaphoric. But to me, it is not very productive to call every description a metaphor. As a former scientist, I believe that we are able to achieve reliable knowledge about an external reality. We do not really need to grasp what reality is in itself, if that concept even makes sense. What we need is a robust interface through which we can interact with objects, and that robust interface requires detailed knowledge about the material processes constituting our relations to objects.

This leads to my second problem with Harman’s approach, namely that I perceive it as anti-relational. In Harman’s ontology, as soon as a relation between entities is formed, it ceases to be a relation and reemerges as a new object. He constantly converts any relationality into ever more complex objects. So ultimately there is no way in his philosophy to talk about relationality as such.

HP: In your object-oriented inquiry you state that objects can only emerge to us through the resistances, through what we cannot know about them, what we cannot do with or to them …

KH: Yes, and here there is a certain overlap between my thought and Harman’s. Harman has a kernel of insight in his idea that objects recede from us. I would say, however, that they do not so much recede as resist. And it is the resistance of objects to us that is the source of our most instructive insights about them. In understanding the nature of those resistances and working within them, human knowledge is able to progress and increase. Resistances force us to modify our questions, and the modified questions uncover new forms of resistance, in a continuing cycle that Andrew Pickering has called the “mangle of practice.” Attention to how objects resist human probing is based on a negative understanding of knowledge. We cannot know what an object is in itself (here I am in agreement with Harman), but we know when our conceptions of it fail to work. These negative answers enable increasingly fine-tuned distinctions, increasing the robustness of how we think about our interactions with objects as we revise and rework our conceptions and practices. This cycle alerts us to the fact that objects emerge for us always through their relations to other objects and with us. Reducing these relations to ever-new classes of objects, as Harman advocates, would foreclose such relational and reciprocal understandings.

HP: One example of such an object-oriented inquiry might be taken from nuclear physics, regarding the model of the atom as composed of a nucleus and the electrons circling it. Science cannot tell us where exactly an electron is located at a given moment, but scientists can certainly tell us where it is not. This where-it-is-not gradually expands as our knowledge of the subject grows, yet without ever reaching a point at which we can determine an exact location. In this way, reality in its ultimate form recedes, but still leaves us with a huge variety of approximations that gradually become ever more sophisticated as our knowledge progresses. This thinking re-asserts, I believe, the ultimate contingency of the object world, without however falling prey to a disabling relativism. We have to accept the fact that we can never know the external world exactly, but this does not leave us without viable means to acquire valid knowledge. One could possibly say that this perspective reintroduces a notion of necessary humility into the discourse of the scientific profession.

KH: The revolution in thinking brought about by quantum mechanics was profound, and its implications are still being explored in such phenomena as entanglement and decoherence. I’m not sure I agree with your analogy, because it equates quantum indeterminacy with a more general epistemological limit on the nature of knowledge, but we should remember that quantum effects become negligible (although still present) at macroscale levels. I tend to favour Karen Barad’s take on this in her notion of “agential realism,” in which she argues that the experimental apparatus is part of what determines the kinds of observations that a given experiment will yield (a point she develops from the philosophy of Niels Bohr). From here she makes a leap into ontology, arguing that reality itself is brought into being by intra-actions between agents; without these intra-actions (which might be between subatomic particles, between particles and instruments such as those at CERN, or between humans, instruments, and particles), reality could not exist. Hence the point is not so much a limit to our ability to know the world, but rather our active participation, along with myriad other agents, in bringing the world into being as such.

HP: Before we round up our conversation, would you like to say something about any ongoing projects of yours?

KH: My latest interest is in forms of nonconscious cognition, and I’ll say this twice: it is not unconscious cognition, but nonconscious cognition. I work with a framework consisting of three levels: firstly, the conscious and unconscious as modes of awareness; secondly, nonconscious cognition; and thirdly, material processes. The boundaries between these are not clear-cut. Often they overlap and are quite porous. But this tripartite framework provides a way in which to more comprehensively approach the various roles of cognition in human life. As recent work in the neurosciences and the cognitive sciences has confirmed, most of our mental life is nonconscious, not unconscious as Freud thought – not hidden from consciousness through mechanisms of suppression and repression – but consisting of cognitive nonconscious processes that are simply inaccessible to consciousness, no matter how hard consciousness tries to access them. These nonconscious processes filter the enormous amount of information coming from the body and from the environment through sensory perceptions, recognising patterns, drawing inferences, and adjudicating between conflicting and ambiguous information.

It has become clear during the last decades that consciousness has a limited ability to process information compared to its unconscious and nonconscious counterparts, both in its speed of operation and in the amount of information with which it can deal. Nonconscious cognition supports consciousness by filtering out irrelevant information, feeding forward only that which is contextually relevant at the moment.

HP: The nonconscious functions as a filter to avoid information overload…

KH: Yes, but it does more than that. The nonconscious has a tremendously important role to play in understanding human mental life. It can, for instance, provide new insights regarding the various affinities and commonalities we share with animals as well as technical systems. Most of the time, our bodies react entirely nonconsciously to external stimuli; we share this behaviour with many biological lifeforms, including the many other animals, especially mammals, that in my view also have consciousness. In addition, many contemporary technical systems exhibit nonconscious forms of cognition that impact significantly upon human cognition and conduct. Nonconscious cognition, spanning humans, animals, and technical systems, allows for a more fine-tuned analysis of interactions between these entities.

The tripartite framework can be envisioned as a pyramid, with modes of awareness at the top, nonconscious cognition supporting them below, and material processes underneath that. While this metaphor grants the “highest” position to consciousness, it also allots to conscious/unconscious modes of awareness the smallest volume of space. This accurately reflects the conclusion that many cognitive scientists now accept: that human behaviour as a totality consists much more of nonconscious cognition than of consciousness. Which brings us back to the issue of profoundly questioning the implicit assumptions underlying the autonomous liberal humanist subject.

HP: To round up this conversation, I would like to briefly return to the issue of digital technologies and surveillance. Would you award Edward Snowden the Nobel Peace Prize?

KH: (laughs) I don’t know about the Nobel Peace Prize… but I think that it is correct to say that Edward Snowden is a patriot, as the recent Wired cover suggested by showing him wrapped in the American flag. Patriotism does not mean blindly endorsing every action a government takes. Real patriotism, in my opinion, is criticising a government when necessary and supporting it when necessary, so that it is able to sustain the principles on which a democratic political order is built.

HP: Katherine Hayles, thank you very much for your time.

Biographical Notes

Holger Pötzsch (PhD) is associate professor in Media and Documentation Studies at the Department of Culture and Literature at UiT Tromsø, Norway. His main areas of interest are the war film, war games, and the interrelation between new media technologies and processes and practices of (violent) in/exclusion. Pötzsch is coordinator of the WARGAME research group and co-coordinator of the ENCODE research seminar (both at UiT Tromsø).

N. Katherine Hayles, Professor of Literature at Duke University, teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Her book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics won the René Wellek Prize for the Best Book in Literary Theory for 1998–99, and her book Writing Machines won the Suzanne Langer Award for Outstanding Scholarship. Her most recent book is How We Think: Digital Media and Contemporary Technogenesis. She is currently at work on a book entitled Expanding the Mind of the Humanities: Nonconscious Cognition.

Notes

  • [1] The interview was carried out on September 28, 2014 in connection with a guest lecture by Katherine Hayles at UiT Tromsø, Norway. Her visit was arranged by the ENCODE research hub at UiT Tromsø and was funded by the Dept. of Culture and Literature and the Centre for Peace Studies. More information on the ENCODE research hub can be accessed here: https://digitalmedia.wikidot.com/.