Can Technologies That Empower Also Enslave?
A post on Michael Sacasas’ excellent blog has drawn my attention to Tim Wu’s essays in the New Yorker about technological evolution. Michael has his own reflections, which are of course well worth reading, but my thoughts veered in a somewhat different direction.
Wu begins with a thought experiment. Imagine a time traveler from 1914 visiting our time. He is in a room divided by a curtain and is asked to evaluate the intelligence of the woman sitting on the other side. No matter what question he asks, on any subject, she replies with the correct answer almost instantaneously. The time traveler concludes that humanity has achieved a level of superintelligence, but the truth is that behind the curtain the woman simply has access to a smartphone with an internet connection.
The lesson Wu wants to teach us is about what we talk about when we talk about whether technology is making us smarter or dumber. When Nicholas Carr wrote The Shallows: What the Internet Is Doing to Our Brains, he was evaluating the human. When Clive Thompson wrote Smarter Than You Think: How Technology Is Changing Our Minds for the Better, he was evaluating the cyborg.
One possible direction would be to argue that humans are always-already augmented by technology, so Thompson is correct and there’s really nothing to worry about. The counterargument is that this falsely reifies “technology”, treating a wide range of tools and instruments—from books to eyeglasses to calculators to mobile phones—as parts of a single monolithic thing called technology, all sharing in its mystical essence. Such a framing tricks us into thinking we must accept all augmentations if we accept any.
The question of whether we are evaluating the human or the cyborg struck me as related to a user interface problem: which possessive pronoun to use in user interface text. Should it be the first person (“My account”) or the second person (“Your account”)?
Some try to dodge the problem by claiming we can simply drop the possessive pronoun and just say “Account”. That works in many cases, but the problem crops up whenever you mix objects created by the user with objects created by other users. For example, in a social bookmarking site, you might have a search results page divided into bookmarks you created and bookmarks created by others. Do you label the section “My bookmarks” or “Your bookmarks”?
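The branching this forces on the interface can be made concrete. Here is a minimal sketch (the bookmark model and function are hypothetical, not taken from any real site) showing how the possessive-pronoun choice becomes an explicit parameter the moment owned and non-owned objects appear on the same page:

```python
# Hypothetical example: labeling sections of a search results page
# on a social bookmarking site. The "voice" parameter encodes the
# design decision discussed above: does the interface speak *for*
# the user (first person) or *to* the user (second person)?

def section_label(owned_by_viewer: bool, voice: str) -> str:
    """Return the heading for a group of bookmarks in search results."""
    if voice == "first":
        # Interface as the user's avatar: it says "My".
        return "My bookmarks" if owned_by_viewer else "Other people's bookmarks"
    if voice == "second":
        # Interface as an interlocutor: it says "Your".
        return "Your bookmarks" if owned_by_viewer else "Other people's bookmarks"
    raise ValueError(f"unknown voice: {voice!r}")

print(section_label(owned_by_viewer=True, voice="first"))   # My bookmarks
print(section_label(owned_by_viewer=True, voice="second"))  # Your bookmarks
```

Note that only the viewer-owned section differs between the two voices; the section for other people’s bookmarks is the same either way, which is why the problem only surfaces when the two kinds of objects are mixed.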
In practice, we’d probably resolve the question empirically through usability testing or page analytics. But it’s useful as a thought experiment because it exposes different ways of thinking about the status of the interface and the underlying system. For whom does the interface speak? Does it speak for you, or to you?
Anecdotally, I’ve found that engineers tend to favor the first person pronoun and designers tend to favor the second person. And the trend has been towards the second person over the same period that responsibility for interface text has shifted from engineers to designers. That may be because engineers are more likely to view the interface as an extension of themselves, as a kind of avatar that acts on the system and its parts. In this view of the human-machine dyad, the interface is on the side of the human.
But users, especially more naive users, don’t draw the distinction between interface and system. Often the interface is the system—the screen as a kind of window looking onto the machinery of computation. This illusion is encouraged by the classic interface metaphors, particularly the graphical file system, which visually represents files and folders corresponding to real collections of bytes, quite unlike the command line, which is a void into which one types commands that disappear, do their work and return with the results.
It’s tempting to conclude that the expert engineer who views the interface as speaking on his behalf is closer to a cyborg than the GUI-dependent novice, who views the interface as something they are in dialog with. On closer analysis, the opposite is true. The novice relies on the interface to remember file and folder names, display available commands in menus and disable commands when they aren’t relevant in the current context. In that sense, novices are more augmented by the interface, but because of that, they are more likely to experience the system as an Other with its own agency and opaque internal processes.
The skilled command line user memorizes hundreds of commands and options, keeps mental track of the contents of the folders and has mental maps of the system’s internals. When the user’s skill is fully engaged, they experience the interface as a transparent extension of the self, precisely because the system isn’t helping very much and is fully subordinated.
Looking at things this way, I hope to challenge the presumption in cyborg-oriented thinking that technological augmentation is an extension of the self. It could just as well be experienced as an Other, which may lead us to interesting thoughts about human intersubjectivity itself as a form of augmentation. I find this line of thinking more promising because intersubjective relations may or may not be harmonious, while the self always is. As Lacan taught, the ego is a fantasmatic unity.
Tim Wu’s thought experiment exposes another interesting flaw in cyborg thinking. Although it poses as anti-Cartesian, proponents of augmentation assume that no matter what happens, human agency will always be preserved. Augmentation only enhances existing capabilities, and there is no possibility of an antagonism, or any sense that the nonhuman partner has agency or exerts force or pressure on the human, as might be implied by actor-network theory or object-oriented philosophy.
To put it differently, why assume that in the cyborg assemblage, consisting of the woman behind the curtain and her smartphone, the woman is the augmented agent? Why privilege the view of the cyborg with the human at the center, when one could just as easily say that the smartphone is augmented by the human? After all, from the vantage point of the time traveler, the woman isn’t really contributing very much. At best, she acts as a translation layer, resolving the ambiguities in his commands and relaying the information back with greater verbal fluency than Siri can currently provide.
It’s worth thinking more deeply about what it really means to say that technology is making us smarter, and who benefits from our augmentations.
A few years ago Dropbox created a promotional website for its business offering which included a photo of a man at Disneyland with his family and a quote praising the company for giving him a new ability to work while he was on vacation. We could play the Tim Wu game and create a thought experiment asking whether a man who can access his work documents from anywhere is more capable than the man who can’t, but we would have to be extremely naive to think that way. It’s obvious that from the perspective of the man and his family, having to work while on vacation is not an improvement, although it might be great news for Dropbox and his employer.
A man with a device that can broadcast his position to a remote receiver is clearly augmented by this technology. But this could describe a parolee wearing an ankle monitor that allows the police to monitor his movements. To muddy the issue even further, what if the human-machine dyad is a person and the slot machine (or social media website) they are addicted to?
Wu describes the woman behind the curtain as a prosthetic god, which seems to imply that in addition to having great powers she is also in control of her situation and the primary beneficiary of these new powers. But the scenario itself suggests otherwise. She sits behind a curtain tethered to a smartphone, responding dutifully to the demands of someone else, almost exactly as a harried white-collar worker responds to the endless stream of emails that arrive at all hours of the day from her colleagues, customers and managers.
Framed this way, the powers granted by her technology don’t empower her, they enslave her. That potential exists because technology itself is enslaved—as Oscar Wilde put it, “On mechanical slavery, on the slavery of the machine, the future of the world depends”. So if we become more conjoined with technology, we could come to share its enslavement, or at the very least experience some form of oppression.
Wu seems highly resistant to thinking about power relations. He believes that consumer choice is the motive force of human evolution, and claims that “there are far more important agendas than the merely political” without ever questioning whether this might be a dumb thing to say. His unwillingness to think about technology and power—not in the individualized sense of expanded capabilities, but social and political power—is why he concludes with analytically useless middle-groundism, the sophisticated way to say “I don’t know.”
Convenience technologies are good, but can be bad. Difficulty and struggle can be bad, but can also be good. Banalities like this don’t contribute anything.