A Malfunction in the Cyborgologist Utopia

Essays on technology, psycho­analysis, philosophy, design, ideology & Slavoj Žižek


January 28, 2012

I credit Evgeny Morozov with providing the most succinct definition of the techno-utopian: good things are technologically determined and bad things are socially determined. This comes from Morozov’s scathing review of Jeff Jarvis’s Public Parts, but I understand it as referring less to an official, explicit thesis than to an unconscious tendency: something that is never quite made explicit but habitually leaks out of an author while they are trying to do something else.

A possible reason for this bias is that they are haunted by the figure of the Luddite – the one who rejects technological progress and stubbornly, aggressively refuses its benefits even after they have been made apparent. This apparently conservative refusal to change bothers them, and this irritation might appear in a political register, where technological change stands in for progressive social change.

In general, techno-utopians are careful to avoid confronting this figure too directly, because that would give the impression of intolerance, of criticizing other people’s lifestyles. Framing their writing as criticism of the media is a safe alternative: a proxy villain that allows them to imagine themselves as liberators, freeing people from misinformation so that we can make our own choices rather than having our choices criticized; and as anti-corporate activists fighting “Big Media” even while they promote the agenda of internet corporations and the much larger telecommunications industry.

Jeff Jarvis is not the only one who seems to orbit around this strange pathological figure of the Luddite. It is also present in the work of youth advocate and Microsoft researcher danah boyd, as the parent who is unable to understand and adapt to her teenager’s embrace of technology. In boyd’s telling, the Luddite’s dysfunction manifests itself as suffocating her child’s self-expression by impeding access to social media, which is why boyd is eager to refute the myths and moral panics that she suspects provide rational cover for what she regards as an irrational and harmful fear of technology.

This pathology is sometimes drawn in rather stark terms. In her paper Why Youth (Heart) Social Network Sites, boyd reports the results of her two-year ethnographic study of teenagers’ social media practices. For boyd, an important positive aspect of social media is the way it facilitates the construction and performance of identity, but she believes this capacity is threatened by parental watchfulness and restrictions.

In the paper, we are presented with a central antagonism between teenagers who “(heart)” social media and parents who don’t. But this narrative is problematized by boyd’s own research, which contains several examples of teens taking a far more ambivalent and even critical view of social media. Her research subjects are apparently capable of more sophistication and nuance than merrily “(heart)ing” MySpace, but she skips over this, recasting their critical attitudes as really being social criticisms and ignoring their explicit references to the technology itself.

Boyd’s thesis isn’t falsifiable. Teenagers love social media! They are digital natives and embrace it without reservation. Actual teenagers who come to the opposite conclusion are hapless victims of media brainwashing, and possibly also of their parents, once again feeding into the portrait of the pathological Luddite. The rhetoric is raised to a fever pitch when she attempts to draw a parallel between parents’ restrictions on teenage use of social media and the racial oppression experienced by African-Americans at the hands of the white majority.

For boyd as for Jarvis, every harm is caused by a pathological user – the hysterical media, overprotective parents, privacy advocates – and every benefit is ascribed to technology. These one-sided accounts are simply unpersuasive.

The sociology blog Cyborgology is particularly adept at refuting the negative impacts of technology by reframing them as social problems or products of media hysteria. A recent example is Facebook Practices and Mental Health by Jenny Davis, which attempts to solve a truly challenging problem: a peer-reviewed article in a well-regarded journal showing that individuals who use Facebook are more likely to believe that life is unfair and that others are happier and lead better lives. Such strong evidence cannot be dismissed as hysteria, so it must be reframed:

Admittedly, a study such as this is powerful evidence for technological naysayers. A negative relationship between Facebook usage and mental well-being indeed offers a dismal picture of a constantly connected populous. I counter this, however, by arguing that the problem rests not in the platform itself, but in the potentially unhealthy ways that some people engage with it.

The pathological figure of the Luddite returns here in the plural, as the “technological naysayers” who are quoted extensively in the introduction to the post. We get the sense that the post is addressed to them, a feeling confirmed in the closing paragraph, where Davis offers her advice on how to avoid the problems identified in the study.

So if you want to feel better, stop stalkernetting and write on your best friend’s wall. While you’re at it, purge those Friends who are merely targets of surveillance, they are messing with your self-evaluative measuring stick.

This conclusion leads us to believe that having negative experiences on social media does not expose flaws in the technology; it merely confirms the pathology of the user. Such users are not only technological naysayers; they also lack the self-awareness necessary to use Facebook in healthy ways, making them victims of their own dysfunction. The blog post is illustrated with a picture of a man wearing a t-shirt with the slogan “I Hate Myself” on it, implying that by reporting negative experiences with social media, we are walking around with a sign on our chest, embarrassing ourselves by exposing the shame of inner self-loathing and insecurity.

The Cyborgology blog often invokes “augmented reality” as a conceptual category, but sometimes with a tendency to promote it as an ideal. That is the case here, where Davis claims that Facebook’s mental health effects are caused by improper “digital” usage, which leads to judgment and comparison, as distinct from the preferred, healthy “augmented” usage, which leads to intimacy and interconnection. The claim is that healthy usage of Facebook involves augmenting existing relational practices by engaging more deeply with the technology as an extension of the self. For Davis, then, negative experiences are a sign of incomplete or insufficient engagement with technology, not of too much engagement, as the study claims.

This idea may or may not be true, and Davis herself seems to contradict it elsewhere in the post. On one hand, she says that using Facebook as a surveillance device to compare your life with your friends’ is a non-augmented form of interaction; on the other, she claims that this type of behavior is really no different from what we do in the offline world, like sitting in a coffee shop gossiping about who has the nicest house and so on. It is certainly possible to have a face-to-face interaction in which the other party presents a highly idealized picture of themselves. On Facebook, this tendency is accelerated: we can conspicuously display our best selves to the world in rich, shimmering multimedia, and to an even larger audience.

Why is this considered purely digital, not augmented? The answer is that the term “augmented”, at least as it is used here, simply refers to technology use that has beneficial consequences. But this means that the concept is empty – it does not refer to any specific mode of engaging with technology, only to the outcome. Two things follow from this: first, to say we ought to use technology as an augmentation is to say we ought to use it in ways that are beneficial, but who would argue with that? Second, to understand the impact of technology in terms of its capacity to augment is to study technology in terms of its benefits, ignoring or downplaying its harms, or reframing them as unrelated to the technology itself.

Cyborgology argues against the view that the computerization of human relationships diminishes them by eradicating their essential humanity, claiming instead that it augments and enriches them. But doesn’t this indicate a regression to romanticism? Technology is always good because it forms an extension of the Human, imagined as primordially whole and good. The use of technology only becomes bad when humans are corrupted by the secondary effects of culture, the media, and so on. We might say that this cyborg discourse regards the Human as always already technological, naturalizing technology while making culture into a foreign, alien, distorting intrusion – a reversal of the two roles in the Luddite discourse.

A second aspect to this is that Davis seems to follow Rousseau in this post. The supposed innate cyborg ability to merge seamlessly with technology and Rousseau’s amour-de-soi are both healthy capacities that exist prior to society, but are then corrupted into technophobia and amour-propre respectively. This is the hidden link that allows Davis to make the counterintuitive claim that a fuller embrace of technology means a reduction in comparative judgment – rejecting technophobia involves a (possibly countercultural) rejection of the distorting effects of society on our nature which, for Rousseau, is also the ultimate cause of amour-propre.

This potentially romanticist tendency on the Cyborgology blog puts it in a problematic relationship with Donna Haraway, a thinker who officially inspires its writers even as they deviate from her significantly. Haraway says of the cyborg:

it has no truck with bisexuality, pre-oedipal symbiosis, unalienated labour, or other seductions to organic wholeness through a final appropriation of all the powers of the parts into a higher unity.

Lacan also takes a different view of the cyborg from the one found on Cyborgology. Rather than eliminating the natural-artificial split, Lacan puts the human subject on the side of the artificial. The human subject itself is a cyborg, an object of horror as the Luddite perceives it: a foreign disturbance, an ontological excess, an alien, a gap in the order of Being. Alienation is constitutive of the subject, so the very elements of the machine that make it “inhuman” – blind, mechanical, repetitive motion – also appear at the heart of human subjectivity as death drive.

Thus the problem with Cyborgology is not that it naturalizes and humanizes technology, but that it naturalizes and humanizes humans, producing a technoromanticism similar to Richard Brautigan’s poem All Watched Over By Machines Of Loving Grace: “where mammals and computers / live together in mutually / programming harmony.”

Another interesting aspect of this blog is that when Davis attributes the negative mental health impacts of Facebook to misuse on the part of users, she closely echoes a tendency – far from universal, but still common – among software engineers to see problems at the intersection of humans and technology as caused by user error. In his 1952 novel Player Piano, Kurt Vonnegut made a reference to this:

“If it weren’t for the people, the god-damn people,” said Finnerty, “always getting tangled up in the machinery. If it weren’t for them, the world would be an engineer’s paradise.”

This is humorous because our usual expectation is that machines exist to serve people, and that without people there would be no point in having machines. But it also reflects a fear: the idea of machines taking on independent life and replacing humanity, getting rid of us so that they can run more smoothly.

In the software world, the practice of User-Centered Design was created as a response to the tendency of software developers to “blame the user” for software problems. The term “user-centered” was meant in contradistinction to a “system-centered” approach to creating software, which implies prioritizing the needs and goals of the system (and the system builders). The planning diagrams of a system-centered orientation often modeled users as subcomponents of the system alongside the digital subcomponents. A user was understood to accept input, process it and generate output just like any digital component, which meant that it could be combined with digital or non-digital components to produce a functioning, coordinated system.
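The system-centered model described above can be sketched in code. In this sketch the human user is typed as just another processing component, interchangeable with digital ones; all names here (`Component`, `HumanOperator`, `Parser`) are illustrative inventions, not drawn from any real design framework.

```python
# A minimal sketch of the "system-centered" view: the user is modeled
# as a component with the same input -> process -> output contract as
# any digital part, so humans and machines compose interchangeably.
# All names are hypothetical, for illustration only.
from abc import ABC, abstractmethod


class Component(ABC):
    """Anything that accepts input, processes it, and emits output."""

    @abstractmethod
    def process(self, data: str) -> str: ...


class Parser(Component):
    """A digital subcomponent."""

    def process(self, data: str) -> str:
        return data.strip().lower()


class HumanOperator(Component):
    """The user, modeled exactly like a digital subcomponent --
    the move the text identifies as Taylorist. Any deviation from
    this contract is classed as 'operator error'."""

    def process(self, data: str) -> str:
        return f"approved:{data}"


def run_pipeline(components: list[Component], data: str) -> str:
    # Humans and machines are combined into one coordinated system.
    for c in components:
        data = c.process(data)
    return data


print(run_pipeline([Parser(), HumanOperator()], "  REQUEST 42  "))
```

The point of the sketch is the abstraction itself: once the user satisfies the same interface as the `Parser`, the diagram has no vocabulary for the user's own goals, only for conformance to the contract.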

This is a Taylorist approach to software design because of the way it represents humans as “cogs in the machine”, prioritizing the goals of the system builders (capitalists) over the human subcomponents (labor). The user-centered approach attempts something different. Because it has roots in Marx-influenced cooperative design that sought to involve workers in the design and construction of digital technology, it attempts to orient the whole process of software creation around the goals and needs of the users. (Side note: one interesting problem is that UCD methods are sometimes rejected because they’re viewed as failing to produce “innovation.” This may be partly due to the wide acceptance of the Heideggerian axiom that well-designed technology has the quality of being ready-to-hand, meaning that it almost disappears from consciousness: we use it without being directly aware we’re using it. This is the opposite of the demand for innovation and novelty, which necessarily draws attention to itself.)

In the system-centered view, a user who has a problem using software is understood as a symptom of a malfunctioning part – the user – and this obviously calls for corrective action to restore proper functioning. The default solution is remedial training, an answer that Davis also resorts to when she responds to the mental health problems of Facebook users by educating them about the correct and incorrect ways of using the software. In the Taylorist world, this reeducation aims at correcting behavioral mistakes: a button clicked at the wrong moment, data entered in the wrong format, an inefficient navigation path, etc. But Davis’ cyborg reeducation addresses the user’s subjectivity, adjusting their psychology so that it conforms to the needs of the system, a practice that Maurizio Lazzarato associates with post-Fordist management techniques. In Immaterial Labour, Lazzarato writes:

What modern management techniques are looking for is for “the worker’s soul to become part of the factory.” The worker’s personality and subjectivity have to be made susceptible to organization and command…

The system-centered approach is a kind of cyborg discourse because it posits humans and machines as having a basic compatibility and commensurability. This turns into a kind of disciplinary pressure on the pathological, incompatible subject: because we can be augmented by machines, we must be – the implacable demand that Lacanian psychoanalysis associates with the superego. For the traditional Taylorist, the sign of pathology is error; for the Cyborgologist, it is refusal of the injunction to augment oneself – technophobia and resistance to change, psychological problems. Lazzarato’s managers need look no further than Cyborgology, which deploys concepts from sociology and social psychology to the same end.