Before my departure for a trip to celebrate my mother’s ninety-eighth birthday, friends suggested that I read Atul Gawande’s Being Mortal. Since I also planned to visit a relative recently diagnosed with terminal cancer, I took the book along with great interest.
Gawande begins with the confession that he never learned how to treat mortality in medical school. To make up for this lack, he undertakes a diagnosis of how Americans deal with death, first examining patient care in nursing homes and then turning to end-of-life treatment in hospitals.
His descriptions of the American nursing homes are familiar. Anyone with a parent or grandparent in such an institution understands the reasons for his concern: impersonal care, neglect, isolation, absence of stimulation, anonymous routines, and regimentation. “Lacking a coherent view of how people might live successfully all the way to their very end, we have allowed our fates to be controlled by the imperatives of medicine, technology, and strangers,” he writes.
Gawande compares the soulless institutions in America with the experience of his grandfather in India who lived to 110 at home. But he admits that this was made possible by his aunts and uncles. Gawande’s own father, on the other hand, lived and died in Athens, Ohio, while the son worked in Boston.
All my grandparents lived into their old age at home because there was a village to support them. But three years ago, after my mother had a stroke and heart attack at 95, my brother, sister, and I looked for a nursing home. We still confront the guilt of having taken our mother there, a person who had given so much and had asked for little in return.
Reading through Gawande’s prescriptions about old age, I kept asking myself what they had to do with my experience in my mother’s nursing home. He says, for instance, that rather than just ensuring their patients are safe, nursing homes should devote energy to making their lives meaningful.
This is a great and inspiring idea. But how does it apply to the residents on my mother’s floor who are in varying degrees of dementia or are withdrawn into cognitive inwardness? One man, paralyzed on one side, suffers from aphasia. Like a figure enduring endless punishment in Greek mythology, he has desires but is incapable of expressing them. What does meaning mean here? Where else can he be other than a nursing home? Even a village would have difficulties caring for him.
When Gawande moves on to the operating room, his analysis becomes more incisive. Here again he examines something that has been talked about for decades: Rather than dying at home, patients are experiencing their end with tubes snaking into their mouths, with doctors trying experimental treatments, with nurses trying to coax one last breath.
Gawande provides countless stories of confused patients, false hopes, and terrified family members. Particularly moving is the death of Gawande’s own father, himself a urologist. Here the doctor experiences the end of life from the perspective of a patient and comes to understand his feelings of helplessness.
But this section also highlights the book’s weakness—its over-reliance on anecdote. Being Mortal manifests the dominant tendency in American commercial publishing to prefer narrative to analysis. A discussion of terminal care in other countries, for instance, or of hopeful strategies being introduced in the United States, is inevitably followed by another story of patients caught in the bureaucratic machine.
How many tales do you need to make your point? Should stories constitute two-thirds of your book? Gawande, his publisher, editor, and agent might think about these questions. For they subscribe to the assumption that mass-publication books can succeed only by trading in parables.
In one respect, this is understandable. We are story-telling animals, communicating through narrative. Life is itself a story, Gawande says, one whose direction we often lose towards its end.
When stories illuminate, say by revealing the failures of the American medical system, they are invaluable. But stories, through their very familiarity, repetition, and steady accumulation, can also block our understanding, making us numb to the pain they describe. This is what happens in this book. The narratives Gawande collects end up saying the same things, but through different voices. We have all heard them before from friends and family members, or have experienced them ourselves.
To work, stories have to entertain, that is, maintain our interest. We want to read about the man who mistook his wife for a hat or about how the urologist turns into a cancer patient. Our delight and insight paradoxically come from the suffering of others. Reading has an aesthetic dimension.
But sadness alone does not make a story stimulating. And many sad stories do not necessarily yield an effective book.
Homer, the master story-teller, understood the relationship between narration and human sorrow. In the Iliad Helen says to Hector, “Zeus planted a killing doom within us both,/ so even for generations still unborn/ we will live in song.” She suggests that the gods incited Paris to elope with her to create the pleasures of the story for future ages. And in the Odyssey, hearing a minstrel tell his own tale, converting his life into art, Odysseus says “that was all gods’ work, weaving ruin there / so it should make a song for men to come.” At the end of Mark Twain’s The Adventures of Huckleberry Finn, Tom Sawyer wishes to prolong Jim’s imprisonment and humiliation for the sake of telling stories about freedom and enslavement.
The anguish of others can be converted into aesthetic enjoyment and human understanding. But these stories can also anesthetize us when they become commonplace, when they turn into routine, and when they just accrue. How different Gawande’s book would have been, had it been accompanied by a sustained discussion of issues relating to death and dying. But this seems difficult to imagine in American commercial publishing, its heart beating to an aestheticism that converts life into narrative.
As a teacher in the humanities, I welcome this development because it underscores the narratological aspect of existence. But at the same time, I am troubled by the patronizing assumption that readers can’t understand concepts unless they are told in simple parables.
Gawande’s Being Mortal rests on the supposition that a book will sell only by telling stories. As a result, its success is only partial. While it enables readers to form empathic bonds with the individuals described in its pages, it leaves them waiting for wisdom on how to take care of ailing loved ones.
In the aftermath of the recent Brussels attacks I was talking to my friend, the well-known Catalan poet Lluis Urpinell i Jovani, and he suggested that in the contemporary world the writer is an enemy of the people, just like Henrik Ibsen's protagonist Doctor Stockmann. I have to agree with this comparison.
The contemporary post-postmodern world mixes a complicated clash of ideas with the full dominance of neoliberal ideology. What we witnessed over the past 25 years was the triumph of Western liberal discourse in the battle for "cultural hegemony." Now new challenges arise and new tendencies limit the freedom of expression. The job of the writer and visionary has become more and more complicated in our times. In many cases governments and authorities are at fault, but in others private organizations, corporations, churches, and even whole communities are involved in the persecution of the writer. The freethinking, iconoclastic writer is looking for a safe haven and not finding it easy. This is because the collective spirit of investors reinforces today’s version of censorship all around the "first world," whatever that racist phrase means. Today, one runs the danger not only of being politically incorrect, but also of acting against the will and interest of the contemporary community of "investors."
When Henrik Ibsen wrote An Enemy of the People in 1882, he introduced a new dimension of criticism to nineteenth-century drama. In Ibsen’s play, the protagonist, Dr. Stockmann, challenges the entire community of investors as well as the authorities. At the time, Stalin was just three years old and Lenin just twelve. Ibsen wrote in a different society and time from the Bolshevik state of the twentieth century, but the play foregrounds a struggle against an authoritarian collective. In addition, Ibsen revealed the full resonance of environmental issues together with other social issues that were almost unheard of at the time. He wrote all this in the context of one of the most progressive societies in the world in 1882: Norway and Sweden. At that crucial juncture in history he understood that authoritarianism does not always come from political leadership alone but also from private citizens and corporations: so-called special interest groups.
In the play, Dr. Stockmann tells the truth about the environmental problem his village is facing. He is opposed not just by his brother, the mayor of the town, but by almost his entire community, which names him “an enemy of the people.” The community justifies this on the grounds that Dr. Stockmann's truth is very bad for the investment policy of the entire village and town. They insist the whole area would suffer economically if the dangers and extent of contamination were revealed. So Dr. Stockmann is forced to leave his own village and country. In a way, the doctor is Henrik Ibsen himself, who left his country in 1864 for 27 years, settling in Sorrento, Italy. He wrote many famous works in exile and returned to his country a very famous but controversial playwright.
In Georgia we have a similar experience in Vajha Pshavela, a great poet and writer of the nineteenth and twentieth centuries. In his writings, he went against his own community and its moral codes. "Guest and Host" and “Aluda Ketelauri” are two examples of the genius of Vajha Pshavela's work. He endured a self-imposed exile in the mountains near the town of Pshavi, Georgia, which he would visit only infrequently. On the one hand, Pshavela exposed the wrongs of communitarian and authoritarian thinking. At the same time, he admired the individual heroism of protagonists like Dr. Stockmann, Jokola, and Aluda Ketelauri, who refuse to succumb to the collective model of action. More importantly, he struggled against inaction in the face of a collective fear exercised by an entire society. This collective authoritarianism also acts at the level of private citizens or even non-citizens, as in nineteenth-century Georgia, where few were granted citizenship in the massive Russian Empire.
Many other writers have written about different but related challenges. But today we are seeing a new type of censorship, a new oppression of freedom of expression, and it is coming not just from governments or authorities. Many contemporary theorists and practitioners speak of the outsourcing of oppression to private organizations: churches, NGOs, corporations, and other non-governmental entities. Today's legal system is well suited to protect powerful special interest groups, which are mostly private but at the same time represent groups of people just like Dr. Stockmann's villagers or Aluda Ketelauri's community members. This threat is far greater than that of governments alone because it is very difficult to detect when the privatized evil will surface in the form of a dogmatic church, a liberal NGO, or a private corporation.
Writers and reporters are told to withhold the truth. The truth is very inconvenient, as Al Gore noted in his film, because the "investor community," a very small minority though an expanded one today, does not like anything that threatens its total domination of the world economy.
What are the interests of these special interest groups? They are involved in the most profitable business operations today. The main problems facing the world are socio-economic ones. Establishing economic and social justice through democratic processes is something that seems mostly unacceptable to these powerful groups. In some cases they might have conflicts with each other, but most of the time they defend their world order and their discourse. Changing their discourse and cultural narrative is the most complicated challenge of our time, given how widespread the practice of "manufacturing consent" has become.
How do today's writers challenge this meta-narrative, in which the 1% live at the expense of the 99% while those who question this truth are killed, silenced, or arrested? This is the challenge of post-industrial times, where power has been de-centralized and outsourced to a community of investors. This “community” is, in fact, not so small. We are talking about millions of ordinary investors who are concerned about their social security, already invested in private funds. Any questionable use of these funds is tacitly acknowledged and overlooked. The community of investors is silent because, just as in Ibsen's play, it is against its own interest to speak the truth. But can a writer stay silent and say nothing against this criminal treatment of humanity in the name of the collective investor community? We are told to numb ourselves and stay silent in exchange for more or less comfortable lives on university campuses or in metropolitan art centers. Otherwise we would starve and die. In these enclaves, what is most interesting is the seeming absence of secret service agencies or other trappings of a police state. No, it is in the interest of the investor community for writers to talk about secondary problems. Being a Doctor Stockmann today is much more difficult than it was during Ibsen's lifetime, and that is why he was so prophetic.
In today's world, the mainstream has become the mean-stream, and we need to find an alternative. For that, the writer is obliged to become an “enemy of the people,” with little chance of surviving. Yet it is not impossible, since every order breaks down sooner or later. Maybe the hegemony of Doctor Stockmann's town hall is as strong as at any time in the history of humanity, but we can see that it has started to crumble. Young people do not want to buy into cliché dreams. They imagine a different world in the West. Some become very bitter and kill themselves, "lov[ing] death more than life"; this too is a sign of a great existential problem.
Maybe it is possible to engage in a constructive, direct dialogue with the “community of investors” to figure out ways to proceed in the future, because it is obvious that the status quo is untenable. Perhaps a nonviolent economy is a crucial step towards overcoming this horrible terror that we all face around the world. This, alongside new kinds of free thinking aimed at the greater empowerment of ordinary people, needs to be pursued in more creative and engaging ways.
Tired of reading endless novels, with so many characters and plotlines to keep track of? Stupefied by the relentless back-and-forth of theatrical dialogue? Weary from absorbing all fourteen lines of a sonnet? We at Digiprose™ know exactly how you feel. And we’re proud to tell you: your worries are over!
Finally you don’t need to read any of this stuff any more: computers will do it for you. Using simple keyword searches and length measurements, they are just as good at understanding The Tale of Genji or Song of Solomon as any human being could ever be. Think you’ve detected irony in A Modest Proposal? A computer could do that! (See small print for exclusions.) Figure the first line of The Trial may be in free indirect discourse, given everything else that happens in the novel? Keyword searches could easily have noticed that ambiguity. (Edit: please stop asking us to explain how.) Reckon there’s a chiasmus in Sonnet 35, coupled with a change in rhythm, bringing about a surprising sense of harmony that cuts against the semantic content? Check, check, and check.
Our patented technology also searches movies for facial expressions, since it’s well known that smiles are an indication of happy scenes.
There is nothing of interest that Digiprose’s software cannot identify in a literary text: that symmetrical pattern of motifs in the first and last volumes of Proust, those moments of bathos in Flaubert, that switch from metonymy to metaphor in Baudelaire, those authorial ironies in Plato, that self-referentiality in Mallarmé, those perspective shifts in Woolf. (Edit: OK, it can’t.) Our patented software will even give you the value of a literary work, on a scale of 1 to 73. Don’t worry if you feel differently; that just means you’re wrong.
We know what you’re thinking: “all of this sounds amazing, but who has time to learn the software? First you have to figure out what your question is, then you have to pick a bunch of books, then you have to read a stack of manuals to get the program up and running… it’s a real palaver.” Well don’t you worry: we now have a solution for this too. Presenting Digital Student™, the machine that learns for you.
No more digital humanities classes! No more late-night cramming! No more “pedagogically valuable” assignments! Using simple keyword searches, our patented software watches the lectures, completes the reading, speaks up in class (guaranteed participation grade: B- or above), constructs an original hypothesis, drafts the code, runs the study, writes everything up, and gets your paper published. (Our computers have an excellent rapport with the servers at The New Left Review, with whom they play golf on a monthly basis.)
Who will read these publications, you ask? Don’t worry, there’s an app for that too. Pretty soon all essays on literature will be researched, written, and read by computers. And then the rest of us can finally return to watching cat videos.
Are there talismanic quotations that you know sufficiently so that you don't quite think them through? I think that's partly a result of rhythm: strict endings complete a line (that's a rule of Indo-European metrics); and rhythms structure and sometimes anchor the remembered words.
I've always loved these lines of Stevens' from "The Plain Sense of Things":
Yet the absence of the imagination had
Itself to be imagined. The great pond,
The plain sense of it, without reflections, leaves,
Mud, water like dirty glass, expressing silence
Of a sort, silence of a rat come out to see,
The great pond and its waste of the lilies, all this
Had to be imagined as an inevitable knowledge,
Required, as a necessity requires.
Some time long ago I abbreviated them, without knowing it, as "The absence of the imagination had itself to be imagined, / Required, as a necessity requires." And it was only the first of those two pseudo-lines whose meaning I thought much about. The imagination would never be absent! To think so was to rejoin it, to imagine even that. "Disillusion as the last illusion," as Stevens says in a later poem. Or Beckett's: "Imagination dead, imagine!" (my punctuation).
The end of the poem, the end of my abbreviated version, was only what filled out the stirring, saving, Berkeleyan self-contradiction of trying to imagine the imagination absent.
But now I begin to wonder why the absence of the imagination was "required." What makes its absence, or imagining its absence, necessary?
I think (if I thought about it at all) that I took "required" to mean just a way of repeating "had" in "had to be imagined." It is required that you do euthanize your faith. But that's because I didn't really pay attention to the "as" of the last line. As a necessity requires. We need to imagine necessity too. Ananke is not the iron law we cannot escape. It is the law we imagine we suffer under, but we need to imagine it. The rat can come out to see whenever it wants to: it's a placid, self-contained Rilkean animal, a denizen of the immediate.
But we need necessity, and the only question is whether our need for it is enough to count as need—as we need it to be.
When Gwen Harding reappears in the final season of Downton Abbey, she symbolizes how out-of-place the noble Crawley family has become. A former maid at Downton and now a respectable middle-class citizen, Gwen is immediately recognized by the servants but not by her old employers. She represents a new era in which nobles must accept their diminishing influence and acknowledge the views of a group they had been accustomed to ignore. In America, white people are increasingly being called out for their racism, and a big reason the show resonates with its white fans here is that they do not feel personally implicated by its portrayal of privilege. They can see how an unexamined belief in birthright has hurt the Crawleys, yet they don’t have to question their own inherited privilege. But that’s why Downton Abbey is the ideal way to call attention to the post-racial fantasies of our own age.
I'm not saying that white people believe they are American nobility. I'm saying that nobility is a useful analogy for whiteness. The Crawleys routinely ignore the lives of their servants because they haven’t had to pay attention to them. For example, they know nothing about the ambitions of their servants, remembering Gwen only when the underbutler Thomas outs her. Thomas himself suffers greatly this season because Lord Grantham and his obsequious butler Carson turn a blind eye to his needs. The obliviousness of the Dowager Countess to working-class life is usually played for laughs, such as when she famously asks, "What is a weekend?" On the other hand, the servants cannot afford to ignore the reality of the nobles. Their lives and livelihood depend on their exacting familiarity with the Crawleys and their aristocratic culture. Like members of any other oppressed group, the servants must know the vanities of the privileged group by heart.
After six seasons of Downton Abbey, many white viewers probably know more about the lives of its fictional servants than those of actual black people. This is because most white people can succeed at their jobs while knowing nothing about black reality. Hence the antiracist #OscarsSoWhite and campus protest movements. Black Lives Matter is controversial because white people can't believe that law enforcement is as bad as black people say. Yet black parents must be experts on whiteness in order to have "The Talk" with their children about encounters with police. Some white people observe MLK Day by quoting one out-of-context sentence, then complain about the unfairness of Black History Month a couple of weeks later. The Australian actor Barry Humphries caused a stir when he suggested that Downton Abbey was popular in America because "there are no black people in it." Regardless of how white people keep black people out of their living rooms, it's hard to see oppression only when you decide to tune in.
Fans of Downton Abbey know that the more the Crawleys insist upon their nobility, the less fulfilled and humane they are. Although we might find something to envy about Lord Grantham or Lady Mary, we would also never, ever want to be the kind of human beings they turned out to be. We shake our heads at the folly of their internalized superiority. Despite her zingers, the imperious Dowager Countess is a lonely figure whose only real friend is her progressive cousin Isobel. For most of the series, we watch the younger nobles pursue inappropriate relationships of all kinds because their reputation must come before their happiness. Their servants literally and figuratively pay the price for this, such as when Anna humiliates herself buying contraceptives for the obtuse Lady Mary. Most tragically, Lady Sybil dies during childbirth because Lord Grantham trusts an unknown aristocratic doctor more than the village doctor. The Crawleys are at their worst when they are nobles first, human beings second. It makes sense, then, that when the footman Molesley begins his new career as a schoolteacher, his first lesson is to debunk the divine right of kings.
If white people compared whiteness and nobility, they might observe what their privilege has cost them too. As Lady Sybil found out, privilege can be bad for your health. A New York Times study revealed that rates of drug overdose have skyrocketed among whites in part because doctors assume that white patients will be more responsible with prescription drugs. Like the servants at Downton, people of color have seen how privilege warps the perspective of otherwise decent people. In a recent article, Iris Kuo raised the issue of the inability of white people to tell Asians apart. "Yes, it rarely happens out of malice," Kuo writes. "Yes, it is often accidental. Yes, it is bumbling, careless, idiotic and unintentional. But it is absolutely not right." A profile of power agent Chris Jackson, who is black, highlights his experiences with repeatedly being mistaken for one of his most famous clients, Ta-Nehisi Coates. But these insults owe to more than a momentary slip of the mind. Their origin in segregation is ancient, inbred. The “burden of whiteness,” Coates memorably tweeted, is that you “can live in the world of myth and be taken seriously.”
Of course, Downton Abbey tried to deal with racism in its fourth season, but the storyline of its only black character portrayed racism as the sum of individual sins only. At a time when the meaning of white identity is dangerously confused, when white people now claim to be the victims of racism because of their whiteness, we need to stop thinking about "white" as only a box to check like “married” or “single.” We need to remember that "white" is an idea invented to make superiority inheritable, like nobility. "White" was never an ethnic group like the Irish or Germans, identities which can exist independently of one another. In America, "white" identity has always been premised on black inferiority, making racism our national origin story. Yet no television show does for whiteness what Downton Abbey does for nobility, so we must use our imagination. Just as nobility is at the core of England’s social history, whiteness centers our own, but we don’t think to compare them because racism is seen as an individual moral failure and not a national strategic plan.
In one of the final episodes of Downton Abbey, the Crawleys decide to raise money for the local hospital by opening their house to the villagers for a day. The elders despise the idea of being put on display for gawking townspeople. The servants question the family’s elitism, with the woke kitchen maid Daisy proclaiming, “What gives them the right to keep people out?” Most tellingly, Lady Cora and her daughters, serving as guides, are all stumped by the guests’ earnest questions about the artifacts in their home, completely unaware of their own privileged history. A young boy wanders off the tour and finds himself in an upstairs bedroom, beside a recuperating Lord Grantham. The boy innocently asks the Crawley patriarch why he needs such a large house, and wouldn't he be happier in a comfy place like his own? "Maybe," Lord Grantham reflects warmly. "But you know how it is. You like what you're used to."
The honor of having Downton Abbey’s last word ever belongs, of course, to the Dowager Countess. The series concludes shortly past midnight on New Year’s Day, 1926, with the Dowager remarking how much she likes that people will “drink to the future, whatever it may bring.” Her confidante Isobel wonders what else to toast to since they’re not going “back into the past.” Laughing, the Dowager adds, “If only we had the choice.” For millions of us, Downton Abbey was compelling drama because we stood witness to the end of an epoch. Today, we are nowhere near the finale of white supremacy, despite what Hollywood leads us to believe. It seems that white people also like what they are used to. But like Isobel and Gwen, we have always had more choices for how to live our lives, if only we would act on them. White people might even commit to seeing themselves as people of color sometimes see them: as characters in the current season of a long-running period drama about racism in America.
Those who like anniversaries—and I am one of them—have recently celebrated Michel de Montaigne’s birthday (on 28 February), a reason to revel in the quality of his writing and thought. The buzz started in the summer of 2015, when Philosophie Magazine Hors-Série featured several contemporary French thinkers discussing Montaigne’s discourse and its connection with everything that serves as its influence, origin, or referent. Or maybe it started five years earlier, when Sarah Bakewell published her biography How to Live: Or A Life of Montaigne in One Question and Twenty Attempts at an Answer.
In fact, a vast, atemporal, and impressive literature has been dedicated to his Essays, which have elicited a wide array of interpretations, from synoptic content analysis to concordances and formalist determinations of harmony, gravity, or decorum. The juxtaposition and interaction of diverse tendencies in the text illustrate a productive irresoluteness that stimulates virtually all possible approaches. A very active critical imagination has led to the Montaigne volume in the popularizing Que sais-je? series, which takes advantage of his motto in order to encompass aspects such as “Montaigne and Skepticism” or “Montaigne and His Horse.”
While his contemporaries suffused their texts with citations and took the liberty of sampling them at will, Montaigne admitted that he shared his cultural authority with others and often used the trope of modesty in contending that his essays are but a collection of emprunts (borrowings): « Les histoires que j'emprunte, j'en laisse la responsabilité à ceux chez qui je les ai trouvées » ("The stories I borrow, I leave the responsibility for them to those from whom I took them"; i, Ch. 20, De la force de l’imagination). During his lifetime, collections of citations were like precious stones gathered in books and florilegia or scattered in letters to friends—mutatis mutandis, they were present everywhere, the same way they are on Facebook today. Montaigne knew the ancient theories of citation mainly from the works of Cicero, Seneca, and Quintilian, whose books he had in his library: « Je suis si imbu de la grandeur de ces hommes-là » ("I am so imbued with the greatness of those men"; ii, Ch. 32, Defence de Seneque et de Plutarque). Many of his quotations do not originate in those popular second-hand books of compilations but are to be found in the volumes that Montaigne owned and in the proverbs he collected and had engraved on the wooden beams of his librairie.
According to Antoine Compagnon, a citation, as the intentional transmission of a passage conserved intact in its original shape, is a foreign body in a text whose presence, like an organ graft, runs the risk of being rejected (31). For analytical purposes, the notion of citation has to stay flexible in order to incorporate several modes of historical deployment. The main characteristic of this form of appropriation is its referential character, pointing back to the original. Michael Metschier tells us that in ancient Greek and Latin discourse, citations were only seldom points of departure for commentary. Most of the time, their authority served to illustrate and to “ornament” an opinion, and thus their effect was most often that of delectatio (30). Paraphrasing, on the other hand, meant learning a text by heart and then reproducing it in a different manner, such as turning poetry into prose. It also functioned as an exercise preparing imitatio, that is, the internalization of the model, naturally followed by emulatio, the audaciousness to compete with the source (35).
The use of citation was a common phenomenon in the humanist Renaissance, where authors lived in the ambiance of the ancients and regarded quotation as the most direct means of communicating with them. As Anthony Grafton notes, well-educated authors quoted from books and not from memory (29). One of the most famous examples of such intermingling with self-commentary, serving as a mode of self-authorization, is Petrarch’s letter “The Ascent of Mount Ventoux,” in which he describes his climb as a spiritual initiation in the company of Ovid, St. Augustine, the Gospels, Virgil, Livy, and Juvenal.
In The Praise of Folly, Erasmus denounced the tendency of his contemporaries, “scribbling fops, who think to eternize their memory by setting up for authors,” to be prodigal with ancient quotations out of the vain desire to legitimize their own work: “By doing so they make a cheap and easy seizure to themselves of that reputation which cost the first author so much time and trouble to procure.” In the Adages, where he incorporates proverbs freely, he offers a series of precepts on the proper use of these bits of traditional wisdom: they should be inserted only where they are useful, and their efficiency should not be diminished by inconsiderate accumulation. Montaigne, in turn, criticizes authors who rely too heavily on quotations, calling for a law against « les écrivains ineptes et inutiles » ["inept and useless writers"] (iii, Ch. 9, De la vanité) who attempt to present themselves by way of a borrowed value. Across the four editions that appeared during his lifetime, the number of citations grows, an impure add-on to his écriture de soi; nevertheless, they are indispensable to his Essays. Latin, a stable presence in synergy with French, engenders a continuous dialogue with Virgil, Catullus, Horace, and Ovid.
Montaigne’s writings are available off- and online in several original and modernized editions in both French and English; Stanford Library’s Department of Special Collections holds two seventeenth-century editions of Les Essais (1602 and 1659), which I was able to consult and compare with the books of Adrien Turnèbe, Ravisius Textor, and Jean de Coras. Systematically, but in an unpremeditated manner, in the discourse of Montaigne’s contemporaries—the French Turnèbe, Textor, and Coras as well as the Flemish Justus Lipsius—citations change their nature and become borrowings. The quoted authors lose their initial legitimizing function; the cuts, notes, annexes, and juxtapositions proliferate, appearing also parenthetically, as notes, personal digressions, or asides.
Montaigne sometimes hesitates between French and Latin, and he usually adds a translation or paraphrase of the Latin text. In the editions mentioned above, Latin citations are graphically set off in the text: italicized and always placed between two periods, never after a comma or a colon. Thus inserted (or rather encrusted) and set apart at once, the Latin words seem to follow the pattern of reading aloud, ready more for recitation than for citation. Punctuation usually indicates the intonations or pauses that mark a paragraph in relation to other instances of enunciation, such as quotations. The writer annotates freely and in all directions in the margins of his text, without regard for layout or for logical or grammatical constraints. Nevertheless, the quotations are visibly internalized as another form of authorial voice, with multiple acts of presence.
It is precisely in their annotations that Turnèbe, Textor, Coras, and Lipsius extend the range of their citational writing toward a condition of self-aware authorship. If their ebullient erudition makes their texts less accessible and often tedious, it is what I would call their unintentional fictionality and expressiveness that could rescue them from remaining fossilized in drudging historical commentary. Far from interrogating the formal tensions in their own work, as Montaigne would, they dilute everything in endless ethical or religious preaching. Their transgressions are visible in their unforeseeable hybridity, liberty of interpretation, and fragmentation.
For instance, in the brouillons (drafts) of Adrien Turnèbe (1512-1565), called Adversariorvm, the chapters are disjointed, and sometimes instead of titles we encounter only quoted and explicated phrases. In his dedication to Michel de l’Hospital, he presents his work as the “pages of the Sibyl,” the woman oracle. His proto-Surrealist recipe for écriture automatique has the following ingredients: forgetting (one’s notes); repeating (things one wrote before); transcribing (according to the laws of chance); leaving (the dust to settle): « Parfois, ayant oublié ce que j’avais noté auparavant, je le répétais sans changement sur d’autres papiers ; ceux-ci, comme les feuilles de la Sibylle, n’étaient pas numérotés ni rangés ; je transcrivais sans choix et sans ordre – mon livre, sans que je le dise, le montrera bien par lui-même – j’écrivais au hasard, pêle-mêle, et je laissais tout cela moisir dans la poussière. » ["Sometimes, having forgotten what I had noted before, I repeated it unchanged on other papers; these, like the leaves of the Sibyl, were neither numbered nor ordered; I transcribed without choice and without order – my book, without my saying so, will show this well enough by itself – I wrote at random, pell-mell, and I let it all molder in the dust."] (modern French translation in Tournon, 148)
In his immense encyclopedic opus, the Officina, Ravisius Textor (Jean Tixier, c. 1480-1524) changes the initial purpose of the text, giving it the allure of a confession. At the head of a list reminiscent of People magazine’s rankings of the 100 most beautiful people in the world (Formosi et formosas, ex historicis, oratoribus et poetis), he places a long gallant poem dedicated to his very dear damisella Textoris (126-127). Such a passage in his laborious compilation has the unintentional but enjoyable effect of disrupting the didactic monotony of the discourse.
Jean Jehasse argues that, as a genre, erudite books of commentary have their own particular way of presenting the institutional practices that governed the application of law to the social, economic, and political realities of the Renaissance. Jean de Coras (1515-1572) was one of the judges in the now famous case of the two men who both claimed to be Martin Guerre. His Arrest Memorable, du Parlement de Tolose, Contenant une histoire prodigieuse, de nostre temps, avec cent belles, & doctes Annotations was published in 1561. Coras’s notes either serve the text, adding further meaning, or become autonomous and depart from the very message that engenders them. He offers several moral lessons that do not explicate the main text and its factual information but develop a life of their own. Such details are chatty digressions: the moment when Martin Guerre’s uncle cries after seeing him in chains becomes a pretext for Coras to construct a taxonomy of the possible causes of crying (no. 30, 51). An inquiry into Martin’s private life becomes a three-page lesson on friendship and intimate confidences (no. 4, 9-11). When taking note of Martin Guerre’s return from Picardy, Coras composes a short tourist guide to the geography and history of the region (no. 102, 144). On the one hand, we have before us a multifarious juridical document; on the other, a tome that succumbs to its author’s desire to say everything and to prove his moral, historical, and scientific erudition. It is a heterogeneous collection of several unfinished projects in terms of intentionality and style, the main text pulled in all directions by the glosses and annotations implanted on virtually every paragraph.
In his Politica, Justus Lipsius (1547-1606) offers the remarkable example of hundreds of heterogeneous citations assembled to illustrate a discourse. In Book IV, while discussing what he considers legal and illegal, he uses about one hundred phrases borrowed from thirty-seven different authors, yet he places them methodically within the frame of his argument, which retains its coherence to the end (IV, 13-14, 71-76). Cicero’s sententiae appear everywhere, used to assert a rigid morality and to counter various political claims. The ancient texts form an immense repertoire of words available for new usage. Having read Lipsius, Montaigne approvingly calls his writing a « docte et laborieux tissu » ["learned and laborious fabric"] and considers him the most learned man that exists (i, Ch. 26, De l’Institution des enfants; ii, Ch. 12, Apologie de Raimond Sebond, respectively).
Any analysis of a kind of production in which accident and tampering with the normative function of discourse produce fiction-like effects raises several questions: Where are we to look for meaning in such texts? What are these "expressive" effects for a contemporary reader? The diversity of annotations and didactic lessons does not seem to admit a definitive answer. If the purpose of these commentators is to offer a systematic explanation of their object of study, we have to read the notes, glosses, and comments merely as a pretext for a more ambitious scientific discourse. But expressiveness seems to appear and develop in the interstices, where a more inspired authorial voice claims its rights over the neutral, informative writing.
Les essais de Michel Seigneur de Montaigne. Edition nouvelle, prise sur l'exemplaire trouvé apres le deces de l'autheur, reveu & augmenté d'un tiers outre les precedentes impressions. Leyden: J. Doreau, 1602.
Les essais de Michel de Montaigne. Nouvelle édition. Enrichie et augmenté aux marges du nom des autheurs qui y sont citez. Avec les versions des passages grecs, latins, & italiens. Paris: C. Journel, 1659 [-1669].
Coras, Jean de. Arrest memorable du Parlement de Tolose, contenant une histoire prodigieuse, de nostre temps, avec cent belles et doctes annotations, de Mõsieur maistre J. de Coras... Prononcé es Arrestz Generaux le xij. Septembre, M.D.LX. Paris: 1565.
Lipsius, Justus. Les politiques. Livre IV : édition de Paris, 1597 / Juste Lipse ; avant-propos de Jacqueline Lagrée. Caen: Presses universitaires de Caen, 1994.
Textor, Ravisius / Tissier, Jean. Officina, nvnc ... emendata ... per Conradum Lycosthenem ... Cvi ... accesservnt: eiusde[m] Rauisij Cornucopiae libellus ... Item eiusdem ... epistolae ... Basileae: Apud Nicolaum Bryling, 1562.
Turnèbe, Adrien. Adversariorvm tomus primus. Parisiis: Ex officina Gabriëlis Buonij, 1564-65.
Compagnon, Antoine. La seconde main: ou, Le travail de la citation. Paris: Seuil, 1979.
Grafton, Anthony. The Footnote. A Curious History. Cambridge, Massachusetts: Harvard University Press, 1997.
Jehasse, Jean. La Renaissance de la Critique. L'essor de l'humanisme érudit de 1560 à 1614. Publications de l’Université de Saint Étienne, 1976.
Metschier, Michael. La citation et l'art de citer dans les Essais de Montaigne. Trans. Jules Brody. Paris: H. Champion; Genève: Diffusion, Éditions Slatkine, 1997.
Tournon, André. Montaigne: La glose et l’essai. Presses Universitaires de Lyon, 1983.
Last fall, the Provost and Chancellor of my university approved a proposal calling for the creation of an academic program in Hmong Studies. I was a member of the committee charged with developing the proposal, and although we didn't get everything we wanted at the beginning, we did get what we desired the most: a tenure-track hire in Hmong Studies. If you've been following higher education in Wisconsin, you know that Governor Scott Walker and the GOP-led state legislature hit the UW System with unprecedented budget cuts that prompted a devastating number of layoffs, departures, and early retirements among its faculty and staff. Our committee discussed how certain core courses in popular majors were unstaffed, potentially leaving hundreds of students in the lurch. We considered whether some faculty would question the creation of a new tenure-track position at a time when their departments were not allowed to search for replacement personnel. I don't know whether any objections snaked their way through formal channels, but I always assumed that making our proposal during a time of stark austerity could be regarded as premature, imprudent, maybe even selfish. Shouldn't we wait until better times to ask for a permanent position? I trust that most educators would arrive at such a rationale foremost out of concern for the needs of students. But such a rationale is also racist.
In this blog I return to the critical race theory tenet of whiteness as property to explain the relationship between the curriculum and the racist status quo in higher education. In the wake of antiracist student protests on campuses across the country, administrators like Oberlin College's president Martin Krislov have rationalized the slow pace of reform by pointing to a policy that takes power out of their hands: shared governance. Krislov wants student protesters to understand that administrators cannot make unilateral decisions about just any issue related to student experience on campus. The 1966 Statement on Government of Colleges and Universities from the American Association of University Professors states that
The faculty has primary responsibility for such fundamental areas as curriculum, subject matter and methods of instruction, research, faculty status, and those aspects of student life which relate to the educational process. . . . Faculty status and related matters are primarily a faculty responsibility; this area includes appointments, reappointments, decisions not to reappoint, promotions, the granting of tenure, and dismissal.
Faculty have described their relationship to the curriculum as one of "ownership," a metaphor that sparked a little pride in me when colleagues touted shared governance at my university. Yet we in the academy know that faculty do not own the curriculum equally, and we know how parts of it can be jealously guarded. Faculty who would resist a Hmong Studies program probably don't do so because they bear any overt racist hatred toward Hmong people, however; they do so because the curriculum in its current state is a property interest that produces clear benefits for them.
As I mentioned in my blog on the whiteness of the anti-vaccine movement, the tenet of whiteness as property comes from Cheryl Harris' influential 1993 article in Harvard Law Review. Harris posits that whiteness is more than a racial identity in the US; it is actual property whose value the law recognizes and protects. Harris cites Charles Reich's 1964 article "The New Property" in The Yale Law Journal as the work that expanded the idea of property to encompass
jobs, entitlements, occupational licenses, contracts, subsidies, and indeed a whole host of intangibles that are the product of labor, time, and creativity, such as intellectual property, business goodwill, and enhanced earning potential from graduate degrees. . . . Reich's argument that property is not a natural right but a construction by society resonates in current theories of property that describe the allocation of property rights as a series of choices. This construction directs attention toward issues of relative power and social relations inherent in any definition of property. (1728-29)
In other words, whiteness is property like any other reified relationship commonly understood to hold value—a medical degree, for example. The law protects the value of a medical degree by punishing anyone practicing medicine without one. Similarly, for most of our nation's history, a black person could be punished for pretending to be white, and accusing a white person of being black was like accusing a physician of being a quack—grounds for defamation. Whiteness, at the very least, promised that its owner could never be enslaved. Harris cites Jeremy Bentham's claim that "property is nothing but the basis of expectation" to argue that white privilege became a protected expectation of white people. "When the law recognizes, either implicitly or explicitly, the settled expectations of whites built on the privileges and benefits produced by white supremacy," Harris states, "it acknowledges and reinforces a property interest in whiteness that reproduces Black subordination" (1731). How does the law or, in our case, institutional policy, reinforce the "settled expectations" of whites, and what does that look like in higher education?
To answer this question, we need to be familiar with the rights traditionally associated with property ownership, which include the rights of disposition, use, and enjoyment. For our discussion, the most salient is "the absolute right to exclude." To understand how whiteness is functionally like property, we can look at the idea of hypodescent. Colloquially known as the "one drop rule" in many states, laws of hypodescent excluded people from whiteness the way that trespassing laws excluded people from private property. Moreover, other forms of new property that serve to reify whiteness as an "object" can also count on legal protection against the intrusion of blackness and other non-white identities. One of these forms of new property is curriculum.
Scholars of critical race theory in education studies have focused on the exclusionary function of property to explain the persistence of racial disparities in the nation's schools. Terry Pollack and Sabrina Zirkel use the tenet of whiteness as property to explain why an antiracist policy shift at a diverse high school in California met stiff resistance from the parents of white students attending the school. Parental lobbying and media pressure forced the school to revise its policy so that white students once again retained their expected right to "use and enjoy" the curriculum as well as "exclude" others from it. As a result, white students also retained access to higher GPAs and better credentials for college application.
Yet only in extraordinary circumstances do outside agents force a curricular change at a university like mine, and this is because of the due process of faculty governance: curriculum committees, hiring committees, academic policy committees, and faculty senates. And although we appear to be living in extraordinary times in higher education, administrators can still stonewall demands for reform by insisting on the ethics of such processes. In an interview with Inside Higher Ed, Hank Reichman, chair of the AAUP's Committee on Academic Freedom, Tenure and Governance, suggested that student demands should be heard but that student protesters cannot be expected to have the "sophisticated understanding of academic freedom" asked of faculty and administrators. The interview turns to specific demands made at Hamilton College and Emory University regarding faculty hiring and evaluation:
At the same time, he [Reichman] said, the kinds of demands being made with regard to faculty members were in many cases ill-advised.
The demand at Hamilton to discourage white faculty members from chairing certain departments "would impose a prejudicial and possibly illegal racial restriction on the hiring of faculty," he said.
And the demand at Emory about faculty evaluations would require questions that are "far too subjective" and are "prejudicial," Reichman said. He added that "a better approach would be to permit students to file complaints about specific mistreatment, backed by evidence, and to handle those through mechanisms that guarantee any faculty member so charged with fair due process protections."
In order to make any lasting antiracist reform on campus, college faculty need to acknowledge how faculty governance can and does reinforce the "settled expectations" of white faculty.
As new property, the curriculum must therefore reflect the dynamics of social relationships, bringing with it "questions of power, selection, and allocation" (Harris 1728-29). The relationship between predominantly white institutions and the history of white supremacy in the US does not always manifest as obviously as a building or statue honoring a slave trader or segregationist. For example, my institution's present classification took shape in 1951 when it became part of the Wisconsin State University system (now University of Wisconsin system); it transitioned from a teachers college to a four-year regional comprehensive university largely because of the enrollment demand generated by the GI Bill of Rights. Put another way, our curriculum changed, quite significantly, in order to accommodate the needs of students from the region, the vast majority of them white and many hailing from racially engineered sundown towns. Nationally, the GI Bill widened the racial education gap by underwriting the rise of institutions like mine while underserving black institutions in the South. Once we consider how indebted the model of American higher education is to Europe, the question of how the college curriculum reinforces white supremacy barely needs to be asked. Whole academic disciplines developed out of the imperial impulse, from the precursors of today's area studies, to cultural anthropology, to my own discipline of English.
How exactly, then, does the curriculum produce value for white faculty as a property interest, and how is it threatened as a property interest by student demands?
1. Competitive industry metrics such as first-year retention, student-to-teacher ratio, and four-year graduation rates factor into institutional prestige, itself a form of new property threatened by the addition of antiracist curriculum. This article on Northeastern University's strategic "gaming" of U.S. News and World Report's college rankings system reveals how much these metrics can matter to an institution's bottom line. Perhaps the most common demand among the dozens of lists of demands from student protest organizations across the country is the call for mandatory courses in antiracism or critical race theory for all students. A course of this nature usually supplements the general education curriculum, an addition that not only would extend time to degree but also would impact accreditation timetables and increase class sizes (given that there are few faculty qualified to teach such courses). In academic departments with tightly-scripted comprehensive major sequences, adding even one additional three-credit general education course risks ballooning time-to-degree rates in the aggregate.
For an example of how faculty might try to meet this demand by working within an existing system, see the website dedicated to reporting the University of Missouri's response to student demands. Rather than create a single new course required of all students, MU administrators propose that certain existing courses be retooled for "cultural competency" credit (see below). Most egregiously, this offer conflates antiracism with "cultural competency" as a learning goal, allowing courses such as "Cross-Cultural Journalism" to satisfy the requirement. Many institutions within the University of Wisconsin system have operated on this same flawed "two fer" model for decades.
The advantage of this approach is that we do not affect the number of required gen. ed. credit hours, which could have impacted accreditation of professional programs. Furthermore, with a broad distribution of courses, we will not adversely affect either the distribution of credit hours by college and by department, or the distribution of funding by college and by department.
2. The curriculum can produce value based on its association with and replication of white cultural capital. For example, multiple demands given to Oberlin administrators concern its famed Conservatory of Music, including the following:
2. We DEMAND that Jazz Curriculum in the Conservatory be reflective of the students [sic] musical focus. Students SHOULD NOT be forced to take heavily based classical courses that have minimal relevance to their Jazz interests. Classical students are not forced to take Jazz courses, and seeing as how most Jazz students are of the Africana community, they should not be forced to take courses rooted in whiteness.
In this example we see that the value of the property of the white curriculum is directly diminished by an antiracist demand: classical courses would no longer be required of all Conservatory students. Yet to many, this demand would seem outrageous because of the neutral or even virtuous value assigned to the practices of "high" white culture. At a time when the relevance of fine arts and humanities courses is under pressure from the popularity of pre-professional and STEM disciplines, maintaining the current curriculum can be a matter of programmatic survival at institutions less prestigious than Oberlin. The threat of antiracist protest isn't only about the curriculum that students demand be added; it is also about the curriculum that students reject as no longer essential.
3. De facto "ownership" of individual courses or programs by individual faculty members constitutes a clear property interest when it confers prestige or research opportunities. Many faculty leverage their academic reputations as subject area experts for outside professional activities such as consulting. As a pathway to research, senior undergraduate and graduate seminars are property interests for those faculty accustomed to publishing research findings conducted in the classroom or in the field. There may be a basis of expectation for the use of institutional equipment or facilities instrumental for producing research. We can safely assume that white faculty disproportionately benefit from this arrangement, so any revision of the curriculum responding to non-disciplinary or non-departmental pressures—such as the demands at Hamilton College—threatens to upset such an arrangement. A report on STEM faculty diversity in 2007 shows that the share of underrepresented minorities is a small fraction of the overall faculty in forty of the top departments of each field. Research on faculty entrepreneurialism determined that "hard and applied science faculty also tend to generate more supplemental income for consulting activities than non-science faculty" (Lee and Rhoads 745). Ownership of discrete courses or programs increases the marketability of individual faculty members, who are expected to replicate a successful curriculum at the institutions recruiting them. The old metaphor of academic "turf" battles is quite apt once race is taken into consideration.
4. Lastly, the value of curriculum as a property interest is tied to the meaning of student evaluations of instruction (SEI). Another popular demand calls for antiracist professional development for faculty, sometimes accompanied by a demand for instruments to evaluate and assess classroom climate. For example, protesters at Yale University demand the "inclusion of a question about the racial climate of the classrooms of both teaching fellows and professors in student evaluations." A similar demand appears on the list from Wesleyan University, specifically mentioning the problem of classroom "microaggressions." AAUP's Reichman advises against such reform, recommending extant due process procedures that would require a student to initiate a complaint. Adding a question about inclusivity to the standard SEI of my home department required a vote among the members of the tenured personnel committee. This is because student evaluations are a form of new property: SEI results inform resource decisions over hiring, tenure, promotion, and prestigious teaching awards. Our old SEI did not include any question suggesting that social group identity matters to student learning. While we cannot say for sure that white faculty will score lower than faculty of color on this question, white faculty resistance to such questions may be based on the perception that they will. Indeed, it is productive to read faculty resistance to trigger warnings not as the clash between abstract concepts of "academic freedom" and "political correctness" but as a contest over valuable property. What would challenge faculty ownership of the curriculum more than a student choosing to opt out of a few weeks of class because of a racist climate?
In stable economic times, incremental progress in diversifying the curriculum and personnel can mask the existence of a white property interest. In my experience, most faculty meet the news of diverse hires or academic programs with pleasure or, at the worst, indifference. The prospect of a Hmong Studies hire at my university would have caused barely a ripple of controversy if not for the budget crisis that fueled speculation of a zero-sum situation: Hmong Studies in, something else out. Racist defenses of the curriculum and personnel decisions usually arise only when white property interests are obviously and imminently threatened. This is why our current period of antiracist student protest is so important. Student demands do obviously and imminently threaten white property interests in the curriculum, and faculty defenses of the status quo will reveal the nature of those interests. If the bluster over coddled Millennials and trigger warnings is any indication, they already have.
There are too many faculty in the academy who openly dismiss antiracist curriculum as marginal, lacking rigor, or simply unimportant relative to their own concentrations. They are often the greatest beneficiaries of a curriculum that reifies whiteness as logical, cultured, or professional. Most faculty are not like this, I would like to believe, because they see the justice and the good sense in putting a Hmong Studies scholar on the tenure track. Yet the actions of these faculty too, in their capacity as governors of the curriculum, regularly belie their professed values of diversity and inclusion. Antiracist students and educators may find that framing curricular conflicts as property claims leads to productive discussions with colleagues who are open to the reality of institutional racism but less ready to see their own investment in it.
The words that the non-disabled use to talk about the disabled, or just the non-neurotypical,1 have not typically been known for nuance or tact. Even as physicians and psychologists have coined new clinical terms, ones that don’t carry the historical baggage of a word like “retarded,” children’s cruelty has kept pace: I remember a form of teasing in elementary school that involved tricking one’s victim into saying the letters “I. M. E. D.” —E.D. standing for some disability, we didn’t then know which, that would’ve caused a student to be placed in special classes or pulled out for therapy sessions. (I looked it up just now, for the first time in my life, and discovered that it’s currently used to mean “emotional disturbance,” but can’t be sure that the abbreviation had the same sense twenty years ago; if it did, a quick glance at the diagnosis reveals that this taunt was a particularly insensitive one, playing upon the social anxiety and interpersonal difficulties that children with emotional disturbances already experience.) Clinicians and advocates for the developmentally disabled must often attempt to recuperate or replace hurtful (or simply misleading) terms, searching for a vocabulary that reflects the rich and unique cognitive worlds of these individuals.
One strategy for adding complexity to traditional diagnostic categories is the “spectrum.” Clinically valuable for its ability to capture the many ways in which a particular disorder may “present,” the spectrum concept also feels a bit more humane: whereas labeling a particular individual “autistic” suggests that he belongs to an entirely different category of person, placing him on the “autism spectrum” implies a neurodevelopmental space shared by both neurotypical and autistic people, one where an autistic person may in some respects resemble an NT person more than he does other people classified as autistic. Referring to the “autism spectrum” also helps dispel the myth of autism as singular and predictable, instead preparing NT people to meet a range of different individuals who, for different reasons and in different ways, can be identified as autistic.
This essay is not about the way we talk about autism in neurological, psychiatric, or activist contexts; it’s about the way we talk about autism colloquially and casually. But I begin with this preamble, partly because terms like “neurotypical” may be new to some readers, and partly because, when I’m teasing out the connotations of “on the spectrum,” I don’t want to give the impression that what we mean by this demotic phrase is what autism is. When the phrase “on the spectrum” comes up in casual conversation, it doesn’t work the same way it does when autistic people or psychologists use it—but neither is it merely mocking or straightforwardly hateful along the lines of many other terms for mental illness or disability. This affective distinction strikes me as a clue, a hint that autism is serving some function other than clinical in the culture at large.
In some ways, the popularization of the phrase “on the spectrum” simply reflects the genuinely increasing integration of non-NT people into everyday life in America. It’s something you say about your brother-in-law, a coworker, your neighbor’s daughter—people whose behavioral habits you know casually but not intimately; and it’s in most contexts a way of making sense of and assimilating their difference rather than rejecting it outright. Sometimes the tenor of this assimilation is lightly dismissive, naming behavior that’s harmless though annoying—what previous generations might have labeled “touched in the head.” At other times, though, it involves a certain wary respect, providing an explanation for the quasi-magical capacities that popular culture still associates with autism’s deficits: a gift for calculation, an ability to focus, a precise and retentive memory. All of this, though a little sloppy and shallow, is in some sense exactly what the “spectrum” designation was meant to do: take the traits associated with autism and Asperger’s and bring them into the range of explicable and familiar, if not entirely ordinary, experience.
Sometimes very familiar indeed: “on the spectrum” is a beloved term of self-diagnosis, as a recent New York magazine article noted with hip disdain (“Is Everyone on the Autism Spectrum?”—we’re so over it!). For those who have never received a formal diagnosis, and who quite possibly wouldn’t, “on the spectrum” typically serves to index a discomfort in social situations and a need for routine and regularity: to hate talking on the phone or regularly find oneself at a loss for words or eat the same meal every day can constitute reason enough to locate oneself on the spectrum. A clinician, of course, might diagnose these behaviors differently (for instance, as symptoms of social anxiety disorder) or not at all, but the colloquial “on the spectrum” serves a purpose that is not strictly psychiatric but social: it’s a gesture of camaraderie, when applied to oneself, or of welcome, when applied to others. The result is just short of a paradox: a syndrome that is popularly understood to entail a lack of interest in social life and an inability to perceive the needs and interests of others becomes, in the right context, a gesture of community and belonging.2
The question then becomes: why has “the spectrum” come to assume this role? Why are autism and Asperger’s acceptable self-identifications among neurotypical folks who would be much less willing to declare themselves bipolar or dyslexic or, indeed, “emotionally disturbed”? In many respects, the concept of “the spectrum” behaves less like these disorders than like the less scientifically grounded categories of personality psychology—“introvert” or “extrovert,” “left-brained” or “right-brained,” and the entire combinatorial catalog of the Myers-Briggs scale. Relocated to this company, the success of “the spectrum” becomes much less surprising: is there anything white middle-class Americans love more than labeling their own cognitive and emotional styles?
For this—let’s not be coy—is the “right context” I mentioned above: those who diagnose themselves as autistic are overwhelmingly white, relatively affluent, and male. (As are, for that matter, the famous intellectuals and artists who’ve been retroactively placed on the spectrum: the aforementioned New York article lists “Thomas Jefferson, Orson Welles, Charles Darwin, Albert Einstein, Isaac Newton, Andy Warhol, and Wolfgang Amadeus Mozart”—a diverse group in all respects but two.) In part, this reflects a disparity on the level of actual clinical practice: autistic children of color are underdiagnosed, diagnosed later, and have less access to treatment, as numerous studies have shown. There’s probably a self-reinforcing schema at work here:3 because Leo Kanner and other early autism researchers tended for various reasons (outlined in depth by Silberman in Neurotribes) to associate the disorder with middle- and upper-class white male children, clinicians diagnose autism less often in children of color and in girls, which in turn helps to reinforce a cultural image of the autistic child as a white boy—probably the child of a Silicon Valley programmer or a successful financial analyst.
This last element of the autism schema offers one angle on why the self-diagnosis seems so inviting: the popular understanding of “the spectrum” bundles together several traits associated with privileged social positions. Most obviously, “the spectrum” trades on surprisingly trite, empirically unsubstantiated stereotypes about male and female behavior: men are supposed to be less interested than women in socializing and conversing, more interested in tools and objects, better at spatial and mathematical reasoning—all features that come to the fore in the simplified version of autism that circulates in popular culture. Insofar as these behaviors and dispositions are believed to be characteristically male, they’re also culturally valued—and so a subject position that seems to grant special access to them, like a self-diagnosed autism spectrum disorder, offers a measure of social cachet.
Less immediately clear, though, may be the whiteness of “the spectrum”; whereas many laypeople and a few psychologists have no compunction about asserting the supposedly male features of autism—Simon Baron-Cohen has infamously referred to autism as a case of “extreme male brain”—any racial association is likely to be less explicit, more socially taboo, and for good reason. (To be clear, there’s no evidence that autism actually varies in prevalence among different racial or ethnic groups.) But autism in the popular imagination does, I think, overlap substantially with a particular feature of European-American whiteness: the bias toward “independent selves” that Hazel Markus and Shinobu Kitayama identified in their classic article, “Culture and the Self.” Markus and Kitayama argue that cultural models of selfhood fall into two major categories: the interdependent self, which relies on the social and emotional support of others to survive, and the independent self, which is imagined as autonomous, unique, and atomistic. Most aspects of European-American culture encourage an independent self-image: parenting books that recommend giving children choices, memoirs that chronicle an individual’s success against all odds, classrooms and workplaces that emphasize inherent talent over teamwork, political structures that reinforce the privacy of personal beliefs and values. But, of course, the message of independence is inflected by the intersectional categories of race, gender, and socioeconomic status: “The prototypical American view of the self,” Markus and Kitayama acknowledge, “… may prove to be most characteristic of White, middle-class men with a Western European ethnic background.” Members of this demographic have the most license to be independent, to behave as though they don’t need or even, necessarily, acknowledge others; and insofar as the popular understanding of autism entails just such obliviousness, it reliably evokes middle-class whiteness.
If all this is true, it puts “on the spectrum” in a curious light: a pseudo-clinical diagnosis that acknowledges the strangeness and strain of the independent model of selfhood—the distortion behind the disregard for interpersonal complexity that is supposedly a white middle-class man’s prerogative—even as it naturalizes that model as an inborn pathology rather than a learned set of behaviors. This means that, as a self-diagnosis, “on the spectrum” isn’t merely gloating or strategic; there’s a hint of melancholy to it as well. Something is missing from the default worldview of the white male American, something to do with other minds and social awareness—but that something is imagined to have always been gone, to be a fixed condition that one must simply live with. The absence is even, most ironically of all, an identity: being socially “unmarked,” when described as a set of character traits and dispositions, turns out to look anomalous, non-normative, worthy of clinical analysis.
I bring this up not exactly to arraign the independent model of the self, which appeals to me (a white middle-class American) on many levels; nor to accuse all those who place themselves “on the spectrum” of harboring white supremacist tendencies; nor, conversely, to suggest that whiteness is some kind of pitiable pathology. I bring it up, first, to suggest that we tread more carefully with our recuperative claims, since in celebrating “the spectrum” we may end up celebrating only those aspects of the spectrum that we as a society already value—those aspects that overlap with privileged identities. Second, and most urgently, I bring it up to remind us that diagnoses can be a kind of capital that, like other forms of capital, will concentrate in the hands of white men unless we’re vigilant about redistributing them. When the behavior that reads as autistic in a white boy would constitute rudeness, insubordination, antisociality in an African-American girl—then it’s time to turn a critical eye on the spectrum.
With the release of The Man Who Invented Fiction, I thought I would devote this post (my first in quite some time) to highlighting what I feel was the most important thing I learned about Cervantes as a writer over the last several years of researching and writing the book. As is well known, the critical tradition has generally credited Cervantes with having invented the modern novel; but for me the true force of his innovation lies not so much in a specific literary form as in the structural trope he introduced into the medium of the printed word, one that enabled the modern experience of character.
In the 45th stanza of the first canto of Torquato Tasso’s La Gerusalemme Liberata, published in 1581, two of Tasso’s great heroes, the knights Tancredi and Rinaldo, make their appearance:
Next comes Tancredi; and there is none among so many (except Rinaldo) either a greater swordsman, or handsomer in manners and in appearance, or of more exalted and unwavering courage. If any shadow of guilt makes less resplendent his great repute, it is only the folly of love: a love born amid arms, from a fleeting glimpse, that nurtures itself on sorrows and gathers strength.
A mere 24 years after the enormously successful publication of this great poem, Miguel de Cervantes has his own fearless and lovelorn knight step forth onto the glorious fields of Mars. Having spied “a large, thick cloud of dust coming toward them along the road they were traveling,” and overjoyed at the prospect of at last showing his prowess in war, Quixote urges Sancho up the nearest hill to get a better look at the armies. From their new vantage, the Don begins to narrate in terrific detail, exactly as Tasso or Ariosto would have done before him, all the famous knights and giants he spots among the two armies. But in lieu of recognized names of lore, he spouts utterly absurd inventions of his imagination, replete with signature arms, shields, and powers—all to the great bewilderment of his sidekick Sancho Panza, who sees nothing but great quantities of dust in the air:
“Señor, may the devil take me, but no man, giant, or knight of all those your grace has mentioned can be seen anywhere around here; at least, I don’t see them; maybe it’s all enchantment, like last night’s phantoms.”
“How can you say that?” responded Don Quixote. “Do you not hear the neighing of horses, the call of the clarions, the sounds of the drums?”
“I don’t hear anything,” responded Sancho, “except the bleating of lots of sheep.”
Cervantes’ view of the battlefield doesn’t differ from that of Tasso because of the depths of its description or the beauty of its verses. It differs in that, where Tasso’s verses describe for Tasso and his readers the essence of war, Cervantes’ prose describes how his characters perceive and misperceive war. Tasso’s words paint heroes; Cervantes’ lines animate characters.
Cervantes’ success in creating characters that feel like “real people” depended in part on his rich descriptions and his attentiveness to their voices; but underlying all his characters was his fascination with how different people might experience differently the same situation. This focus is present throughout Cervantes’ writing. Indeed, his ability to shift fluidly between different points of view and voices was fueled by his obsession with portraying not just the world and the people and events that fill it, but how people perceive and misperceive that world and each other. Just as his most important novel, Don Quixote, is organized around the central character’s inability to distinguish fantasy from reality, what makes all Cervantes’ characters stand out are the idiosyncrasies and differences of how each inhabits his or her world.
The uniqueness of each person’s perceptions is, to my mind, the source of the book’s extraordinary appeal; at its core is a sustained relationship between two characters whose incompatible takes on the world are overcome by friendship, loyalty, and even love. Sancho Panza, whose simplicity and oafish appetites often veil an inadvertent wisdom, knows Quixote is mad, and chooses to follow him anyway. When the mischievous duchess mentioned above elicits Sancho’s confession that he does indeed know Quixote is mad, and then accuses him of being “more of a madman and dimwit than his master” for following him, Sancho replies:
if I were a clever man, I would have left my master days ago. But this is my fate and this is my misfortune; I can’t help it; I have to follow him: we’re from the same village, I’ve eaten his bread, I love him dearly, he’s a grateful man, he gave me his donkeys, and more than anything else, I’m faithful; and so it’s impossible for anything to separate us except the man with his pick and shovel.
As Erich Auerbach wrote of Sancho’s attachment to Quixote, the former “learns from him and refuses to part with him. In Don Quijote’s company he becomes cleverer and better than he was before.”
Just as the tenderness evident in Sancho’s confession is conjured not in spite of but because of the very incompatibility of lived worlds it transcends, so too does the book’s famous humor function along these same parameters. When the hunch-backed and half-blind scullery maid Maritornes slips into bed with Don Quixote in the dark of night, the hilarity doesn't just come from the fact that she's fat and ugly and he's old and bony, or that her true amorous target, the mule driver, gets angry and beats Quixote up after he's already suffered two or three terrible beatings the same day. What makes the scene so funny is that Quixote is convinced that Maritornes is the innkeeper’s beautiful daughter and a princess to boot; that when he declaims about his devotion and service to her she doesn't have the slightest idea what he's talking about; and that the mule driver thinks his tryst for the night has preferred another man to him, and so hands him the beating that Quixote concludes must have come “from the arm of some monstrous giant.”
Don Quixote is indeed a very funny book; legend has it that King Philip III once exclaimed, upon seeing a student doubled up in raucous laughter one day, “that student is either out of his mind or he is reading the story of Don Quixote!” As such, it uses many of the same tricks and themes that have elicited laughter throughout human history, specifically the scatological and coprophiliac sensibilities that have clung to the lowest rungs of humor throughout literary history.
The French humanist François Rabelais, who lived in the century before Cervantes, was one of many sixteenth-century writers who relished a good dirty joke; and his enormously influential series of satirical novels about the giants Gargantua and Pantagruel are packed with scatological humor. Indeed, the principal character of his books, the giant Gargantua, is literally born in shit, his mother, the giantess Gargamelle, having overindulged in tripe the night she gives birth. Rabelais, a physician as well as a writer, revels in not sparing us the details:
A little while later she began to groan and wail and shout. Then suddenly swarms of midwives came up from every side, and feeling her underneath found some rather ill-smelling excrescences, which they thought were the child; but it was her fundament slipping out, because of the softening of her right intestine—which you call the bum-gut—owing to her having eaten too much tripe, as has been stated above.
Almost a century later, Cervantes would turn to such tried and true themes as well in his desire to spur his readers to laugh. But where prior writers focused their efforts on depicting the grotesque, the humor in his version derives almost entirely from how the two characters perceive and misperceive what is happening.
Lost in the woods in the dead of night, Sancho becomes frightened by the sound of “strokes falling with a measured beat, and a certain rattling of iron and chains that, together with the furious din of the water, would have struck terror into any heart but Don Quixote’s.” To prevent his master from heading toward the sound, Sancho secretly ties the hind legs of his master’s mount together and begins distracting him with stories, when he feels “the urge and desire to do what no one else could do for him.” Afraid to move away from Quixote, he first tries to relieve himself in secret, but finds he cannot do so without making a noise, as Cervantes writes, “quite different from the one that had caused him so much fear.”
“What sound is that, Sancho?” Quixote asks. “I don’t know, señor,” Sancho responds. “It must be something new; adventures and misadventures never begin for no reason.” His second attempt is more successful, and silent, but this time it is a sense other than hearing that gives him away, and Quixote remarks, holding his nose, “It seems to me, Sancho, that you are very frightened.”
The abyss that divides these two scatological moments in literary history is decisive. Where Rabelais achieves his effect by describing the obscenity of basic human functions with an anatomical zeal leavened by his impish disdain for propriety, Cervantes’ prose brings into relief his characters’ emotions, their embarrassment, their fear, their desire to pull the wool over one another’s eyes, and their rueful responses when they fail. Rabelais wrote patently untrue stories that entertained their readers with their bawdy satire; Cervantes wrote fiction.
"Capital" is not what capital is called, it is what its name is called.
Joan Robinson (1954)
The following represents an attempt to articulate a neochartalist philosophy of capitalism. The nine theses collected here spring from the startling insight of the leading neochartalist school of political economy known as Modern Monetary Theory (MMT): namely, that money is a boundless public monopoly that belongs to the people, rather than a finite form of private investment and speculation that owes its existence to capitalists. Or, as MMTer Stephanie Kelton puts it, “Money does not grow on rich people.”
Political governance constitutes the center and unoutstrippable ground of economic life, argues MMT; and a currency-issuing government will never run out of a unit that it alone supplies. Public spending, therefore, can always be made to justly shape and include everyone in processes of social production. This will not cause inflationary price rises, insist MMT economists, so long as disbursements remain directed at real resources and productive capacities. What is required to immediately address systemic poverty and environmental degradation, MMT shows, is neither the restoration of tax-and-spend liberalism, nor the calamitous destruction of the value-form but, rather, collective will and the political capacity to produce money whence it always emanates: government balance sheets, which is also to say, thin air.
On this critic’s reading, MMT’s revelation not only transforms the central problem of political economy; it also radically reconfigures how we both imagine and answer the neoliberal catastrophe.
(1) Capitalism is the arbitrary law that no sovereign currency-issuing government should wield its boundless public purse to fully serve the peoples and environs money encompasses.
(2) Capitalism derives its contradictory laws of motion from the aforesaid arbitrary law.
(3) Capitalism’s subsidiary laws of motion engender a highly unstable and exploitative economic system.
(4) Capitalism is the false name given to the totality that modern money conditions.
(5) Capitalism’s namesake (the imagined “capitalist totality” and so-called “capitalist mode of production”) reduces the whole of monetary relations to private capital relations.
(6) Capitalism’s naming (a nineteenth-century conceit) represses money’s publicness, infinitude, and answerability.
(7) Capitalism is neither the subject, nor the prime mover of modernity.
(8) Capitalism is a cataract in the eye of history.
(9) Capitalism does not exist.
Terrified of serpents, I knew to avoid the snake charmers of the Jemaa el Fnaa market in Marrakesh. But my son Alexander also warned against having my picture taken with any of the performers in the square.
We had just arrived from Fez, where I had given a presentation, and were making our way through the chaos of the market. This was like no other place. All around us we heard music representing the traditions of the Amazigh, the indigenous people of Morocco, commonly known as Berbers. Storytellers entranced huge circles of listeners with their narratives. Men holding monkeys pursued visitors, Moroccans and tourists alike, for photographs. And out of the corner of my eye, I could make out the cobras swaying to the reedy vibration of the oboe-like ghaitas.
Overwhelmed, we headed for one of the terrace cafes surrounding the square. As we sat down with a glass of mint tea, we couldn’t really focus on any of the sights, so many were the riches before us. It was like trying to detect individual drops in a rain shower. But nearer to us we picked up the loud, syncopated drums of a group of Gnawa musicians. Dressed in saffron robes with yellow sashes and wearing the tarboush, a hat with a long tassel, they beat their drums and struck their cymbals as they shook their heads in circles.
In a strange way the music seemed familiar to me. The night before in Fez, Alexander had taken me to a café for a Gnawa performance by Rayan, his music teacher. Rayan had lived in Paris as a rap artist and break-dancer before returning to Morocco to study Gnawa music. Dressed in the fashion of the performers before me, he played his ginbri, a plucked string instrument similar to a banjo, accompanied by a singer who played large castanets called grageb. Rayan was my introduction to the numinous, hypnotic rhythms of Gnawa.
From the corner of the room, I could see the other patrons clapping their hands and swaying to the trance melodies and sometimes joining in the song. They rolled tobacco into their cigarettes. And I, a foreigner, allowed myself to feel the waves. The room was a warm glow against the cool November night of the desert. I leaned against the wall and welcomed the heavenly calm. It did help, as Alexander later told me, that the patrons fortified their tobacco with hashish. Smoke was everywhere.
And then suddenly someone rushed into the room, waved his hands, and stopped the music. What happened? I asked Alexander. Nothing, he said. It was just the call to prayer from the mosque nearby. No music can play at this time.
Alexander was taking weekly lessons from Rayan and often came to the café to hear him. Once he attended an all-night ceremony during which Rayan played for seven hours with only a few breaks. He explained that Gnawa music originated in sub-Saharan Africa and mingled with the Sufi traditions of the Maghreb.
From my perch in the café in Marrakesh, I thought of Rayan’s presentation and tried to connect it to what I was hearing below. Which was more authentic, I wondered: Rayan, who had, like many contemporary Gnawa artists, refigured religious ritual into an aesthetic performance, or the performers before me, who made their money in the square, often by having their pictures taken with visitors?
With the sun setting, I suggested that we go back to our riad, or guesthouse. But first I thought we should make a contribution to the performers. I make a point of supporting street musicians, as my older son, Adrian, has been playing his violin in the streets of New Orleans for the last two years.
So I gave Alexander some cash and instructed him to give it to one of the performers. Immediately, however, a dancer asked, “Picture?” No way, I thought. Not me. I’m not one of those people. It seemed so cheap, so touristy. “Come on,” he said. I was torn. Was it not rude to say no? But I could feel Alexander’s glare. He did not want to participate in the objectification of the other, he had told me when we entered the square. So I resisted. “Come, take picture,” the man beckoned. But in a moment of confusion, I handed Alexander my phone and then I was pulled into the dance, had a tarboush popped on my head, and was instructed to swirl the tassel. For a couple of seconds I had my own little orientalism.
Alexander snapped a picture and then I pulled away. “Money,” called one of the dancers. “But I gave you some,” I replied. It had been a generous contribution. “My other son is a musician,” I mumbled. “But I want my money,” he insisted. “I deserve it.” “I’m sure you do,” I thought and stepped away with lowered eyes, trying to untangle myself from the contradictions of tourism, colonialism, wealth, and poverty.
Well, I said to myself. My son is a performer in New Orleans and he gets his picture taken with patrons. But I knew this was different. Many others had come to Marrakesh before me and snapped pictures of the indigenous inhabitants, often without permission, turning them into objects of curiosity, exoticism, or sometimes disdain. I did not want to be part of this; I wanted to keep my conscience clean. But now the musicians were actively courting the foreign camera, frustrating my attempt to travel in peace.
This had also been the case a year ago in Ilorin, Nigeria, where I was invited to give presentations at Kwara State University. One night my colleagues invited me to a performance of Yoruba poetry and dance. I was the only white face in the audience, something made apparent at the end of the show as listeners wanted to have their pictures taken with the performers.
I stood back, trying to take everything in. But all of a sudden one of the performers pointed to me. Thinking that he was mistaken, I turned to see if he meant someone else, only to realize that he was actually beckoning me to come up on the stage. To my bewilderment, the performers had their picture taken with me. Later I asked my host, an expert in Yoruba poetry, to explain this invitation. He said the ensemble would likely use the photos in their advertising with the message that they are so good that even white people come to their performances.
In Marrakesh, as in Ilorin a year earlier, I became enmeshed in contradictions I had not expected. Although we may travel with the best intentions, we can’t help but confront tensions we find easier to avoid at home — between rich and poor, black and white, those who represent and are represented, and those passports that enable movement and those that block it.
The music I heard in Morocco expressed these inequalities. Itself a product of ethnic and racial mixing of Africa, it brought Rayan, Alexander, the Sufi performers and me together. But it also conveyed political and economic conflicts that continue to reverberate around us.
Pierre-Joseph Proudhon develops Godwin’s early, and proto-accelerationist, model of progress into a full-blown mechanical Prometheanism in his System of Economic Contradictions, or the Philosophy of Misery (1846). This work represents one recognizable prototype for the resurgent techno-utopianism we see among certain factions of the contemporary left in the Anglo-American world today. The anarchist Proudhon offers his Prometheus as the key to a radical political economy:
Prometheus, according to the fable, is the symbol of human activity. Prometheus steals the fire of heaven, and invents the early arts; Prometheus foresees the future, and aspires to equality with Jupiter; Prometheus is God. Then let us call society Prometheus. Prometheus devotes, on an average, ten hours a day to labor, seven to rest, and seven to pleasure. In order to gather from his toil the most useful fruit, Prometheus notes the time and trouble that each object of his consumption costs him. Only experience can teach him this, and this experience lasts throughout his life. While laboring and producing, then, Prometheus is subject to infinitude of disappointments. But, as a final result, the more he labors, the greater is his well-being and the more idealized his luxury; the further he extends his conquests over Nature, the more strongly he fortifies within him the principle of life and intelligence in the exercise of which he alone finds happiness.
Proudhon offers us a pristine image of mechanical Prometheanism, whereby the titanic representative of collective humanity finds the “principle of life and intelligence” in the “conquest of Nature.” We can find neither the quest for justice nor Percy Shelley’s utopian vision of radically transformed human and extra-human relations in Proudhon’s myth of a new political economy. In the words of John Bellamy Foster, “the mythological struggle over fire ceased to stand for a revolutionary struggle over the human relation to nature and the constitution of power and instead became simply a symbol of unending technological triumph.”
We can see in these early, and disparate, versions of the myth what Arthur Mitzman calls the “two Prometheanisms.” For Mitzman, the twentieth-century version of modernity, in both its capitalist and state socialist forms, represents the apotheosis of a “mechanical Prometheanism” that yokes progress to “technological prowess” in order to legitimate class power of various sorts, while simultaneously cloaking this power under the guise of expertise, efficiency, and an ever-increasing GDP. In fact, as Mitzman admits, it was the real material gains of the Western “middle classes,” under the mass consumer capitalism of the post-World War II era, built on the underdevelopment of the global south, that secured the hegemony of mechanical Prometheanism; and it was the promise of rapid industrial development and a better living standard that made the state socialist model appealing to so many in the developing world during the Cold War era. Mitzman’s argument see-saws between ideal-typical generalizations and more specific socio-material analyses in this way. He details, for example, how the unresolved tension between the politico-ethical and techno-scientific dimensions of the Prometheus myth offered an attractive ideological template for both the proponents of capitalist developmentalism and their antagonists, as we can see in the case of the European enlightenment and its revolutionary romantic critics. Mitzman anchors this tension in the temporary alliance between middle-class reformers and plebeian masses that drove these same revolutions. In this narrative, the bourgeois reformer’s model of freedom as possessive individualism, or utilitarian calculation, supplants freedom as solidarity with the consolidation of capitalism during the nineteenth century. Radical romanticism, as exemplified in Blake’s visionary diagnoses of early industrial capitalism, preserves the emancipatory core of a Promethean program captured by mechanist — capitalist — imperatives.
These problems of overgeneralization will arise with any argument that takes a reified and monolithic “modernity” as its starting point.
With these caveats in mind, this tale of two Prometheanisms is a better rhetorical framework for interpreting capitalist development and its revolutionary discontents, especially in light of the new Prometheanism brandished by today's Jetsonians.
Mitzman’s vision of multiple and conflicting Prometheanisms is in this way preferable to Marshall Berman’s influential celebration of a Faustian modernity in All That Is Solid Melts into Air. Berman, in a series of idiosyncratic close readings focused mostly on nineteenth-century literary texts, purports to discern the developmentalist logic of the modern age, beginning with Goethe’s Faust. Berman remakes Faust into the visionary subject of a nascent modern age defined by an insatiable desire for transformation in a specifically capitalist key; it is nonetheless Mephistopheles, rather than the poem's titular protagonist, who personifies capital itself for Berman. Berman, here and elsewhere, identifies an oftentimes reified version of technological dynamism with capitalist social relations while repeatedly denying this identification. Why? Apparently to preserve the dream of a communist, and specifically Marxist, break with capitalism. Yet Berman, unlike Marx, views communism as an intensification of these same capitalist social relations rather than a break with our profit-driven perpetual motion machine and the exploitation that powers it. Berman renders Faust's accelerationist dream of technological mastery and material progress as a generically human end-in-itself, even as he displaces the specifically capitalist character of the Faustian project and its tragic externalities onto the devil. Communization for Berman is exorcism rather than revolution.
Berman’s interpretation of Faust — which provides a model and motif for the several chapters that follow the first — pivots on the second part of the long poem, written by an older Goethe fascinated with the techno-utopian proposals of the Saint-Simonians. Faust, in reengineering the natural world for broadly human purposes, is a type of the twentieth-century developer for Berman.
The old mythological couple — Philemon and Baucis — get in the way of Faust’s project, refusing to leave their plot of land, and Mephistopheles gets rid of them according to Faust’s wishes, although not in the bloodless way the hero would have preferred. This couple typifies the old precapitalist world that must be eviscerated, like the natives who hindered the westward march of "progress," and its Anglo-European avatars, in the North American settler colony, or, less dramatically, the working-class populations in the way of Robert Moses’s modernizing reconstruction of New York City. Berman describes these processes — primitive accumulation and proletarianization in a Marxist vocabulary — as the tragedy of development. And, as with classical tragedy, Berman's Faustian tragedy of development pivots upon the teleological necessity of the sacrifice. Like Euripides' Iphigeneia — whose death at the hands of her father was the inescapable price to be paid for a Greek victory in Asia Minor — the Philemons of the earth must be sacrificed in order to ensure the "open-ended development of self and society" that defines modernity for Berman. Berman nonetheless distinguishes "Faustian consciousness" from what he describes as a Panglossian celebration of the techno-scientific status quo. Faust suffers under a burden of "guilt and care" and makes his suffering known, like Agamemnon or the "haunted veterans of the Manhattan Project." In the coda to his paradigmatic reading, Berman reinforces this point in assessing Stalinist industrialization efforts in the USSR: “What makes these projects pseudo-Faustian rather than Faustian, and less tragedy than theater of cruelty and absurdity, is the heartbreaking fact that — often forgotten in the West — they didn’t work.”
More significant in this regard is Berman’s reading of Marx and Engels' Manifesto of the Communist Party. While the Manifesto is the one explicitly political exception in a book that focuses on literary works — by Goethe, Baudelaire, and Dostoevsky, among others — Berman still offers us a literary exegesis of this text. Berman highlights the metaphorical register of the Manifesto and what he calls Marx's "melting vision" of capitalist modernization, alongside and in counterpoint to Marx and Engels’s argument. Berman was one of the first critics to explicitly link literary and artistic modernism to capitalist modernization, and it is with this linkage in mind that he claims Marx “lays out the polarities that will animate and shape the culture of modernism,” while the Manifesto’s gothic images of the bourgeoisie as “sorcerer’s apprentice” or even Victor Frankenstein himself — unleashing productive forces they cannot control — look forward to the twentieth-century modernists’ “cosmic and apocalyptic visions, visions of the most radiant joy and the bleakest despair” (102).
In approaching the text in this way, Berman often evacuates the Manifesto’s political content, while reifying certain images of a heroic, and Promethean, bourgeoisie whose “constant revolutionizing of production” sweeps away “all fixed, fast-frozen relationships.” Neglecting the dialectical structure of the text, Berman is enraptured by Marx's “lyrical celebration of bourgeois works, ideas, and achievements” (92). Berman moves from this aestheticized awe to several unwarranted conclusions, such as equating possessive individualism with the many-sided and decidedly social model of individuality Marx envisions under communism. Berman also takes the never-ending flux of capitalist accumulation, which Marx "lyrically captures in the M-C-M," as an end-point and aim, so that if and when communism comes, “it may be only a fleeting, transitory episode, gone in a moment, obsolete before it can ossify, swept away by the same tide of perpetual change and progress that brought it briefly within our reach, leaving us endlessly, helplessly floating on” (105).
As Perry Anderson writes, in one of the more perceptive treatments of the book:
the cohesion and stability which Berman wonders whether communism could ever display lies, for Marx, in the very human nature that it would finally emancipate, one far from any mere cataract of formless desires. For all its exuberance, Berman’s version of Marx, in its virtually exclusive emphasis on the release of the self, comes uncomfortably close — radical and decent though its accents are — to the assumptions of the culture of narcissism…The vocation of a socialist revolution, in that sense, would be neither to prolong nor to fulfill modernity, but to abolish it.
Berman aestheticizes a decidedly mechanical Prometheanism over and against the exit from capitalist modernity. Mitzman, too, at least partially identifies Karl Marx with this same technological triumphalism.
Yet it was Marx who wrote one of the most thorough critiques of this Prometheanism outside of Mary Shelley’s Frankenstein in The Poverty of Philosophy, his book-length polemic against Proudhon. For Marx, “this Prometheus of M. Proudhon [is] a droll fellow, as feeble in logic as in political economy.” Why? As Marx goes on to explain: “What then in the last place is this Prometheus, resuscitated by M. Proudhon? It is society, it is the social relations based on the antagonism of classes…Efface these relations and you have extinguished the whole of society, and your Prometheus is nothing.” Proudhon’s Prometheanism, for Marx, reproduces the dominant ideology of the capitalist class in offering us “society” in the place of social relations defined by class conflict. These social relations include those magical machines, built out of dead labor, in order to extract profit, and wealth, from living laborers, immiserated in the process, as Marx details in the Grundrisse and Capital. Even in the relatively early Poverty of Philosophy Marx counterposes this model of exploitation to the illusory “collective wealth” personified in the Proudhonian Prometheus.
If “poetry,” for Shelley, is an emblem for a radically different arrangement of human and non-human natures, under which the mechanical arts must be subsumed, Proudhon offers the machine as a metonym for both techno-scientific rationality and actual technology. This ostensibly utopian iteration of technological determinism — the inevitable triumph of Reason in Godwin’s language — has defined the dominant strain of Prometheanism throughout the twentieth century, on both left and right, as both critics, such as Mitzman, and enthusiasts, like Berman, recognize.
As noted in previous posts, this mechanical Prometheanism is making a comeback, as exemplified in a crude if perfectly Proudhonian form by Paul Mason, who argues that the end of capitalism — and the socialist transition — has already begun. For Mason, socialist revolution — or is it exorcism? — no longer needs a working class or mass insurgency, since the apps are building it for us. As he writes: “Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed — not just to subsist but to provide a decent life for all.”
Ray Brassier’s rigorous left-accelerationist defense of “mechanical Prometheanism”—“The Problem with Prometheus” — is worth considering in this context.
Like his accelerationist comrades, Brassier outlines his argument against the Heideggerian scarecrow that he finds lurking in a certain critique of technological hubris, represented by the aforementioned work of Jean-Pierre Dupuy. Brassier reduces Dupuy, and the “anti-prometheanism” he supposedly exemplifies, to a “theological investment in equilibrium” between “what is made [by human techne] and what is given [by God].” Brassier initially traces this idea to what he tendentiously reads as Heidegger’s confusion between the epistemological and ontological registers of human experience, according to which human beings can know everything but themselves, since to know ourselves would reduce the human subject to an object.
Here is the first limit Brassier detects in the anti-Promethean attitude, to which he later adds human finitude, which brings with it the suffering that makes human life meaningful. In describing a specific strain of technology criticism in this way, Brassier channels Nietzsche’s critique of Christianity and its modern, socialist, feminist, and democratic avatars, all marked by a “slave morality” that would erect barriers to heroic human striving. Brassier nonetheless equates his Promethean subject with an impersonal reason defined by the capacity to assess objective situations and alter them according to a set of flexible rules or algorithms. In contrast with the very specific critique of anti-prometheanism that occupies the first half of the essay, Brassier’s outline of a resurgent Promethean program is notably abstract, that is, until we arrive at the conclusion, in which this Promethean project is described as “re-engineering ourselves and our world on a more rational basis.” Yet this same Prometheanism “promises an overcoming of the opposition of reason and imagination, reason is fuelled by imagination, but it also can remake the limits of the imagination.” Rather than the Romantics’ marriage of reason and imagination, Brassier updates Goya’s sleep of reason, whose new techno-scientific dreams are sublime, monstrous, or both.
Brassier equates Marxism with an unleashed, and emphatically instrumental, rationality, which can reshape an infinitely plastic human and non-human nature. But who is re-engineering “ourselves and the world”? The philosopher depersonalizes Promethean reason, even as he tacitly personifies human cognitive capacities, in isolation from the bodies that do the thinking, in place of the working-class collective subject that Marx proposed in response to Proudhon’s human god, a god who finds new life in Brassier’s accelerationist fable. Brassier’s concluding paean to limitlessness recalls Berman’s endlessly self-liquefying modernity, while pointing to mastery, of a certain sort, as the often strenuously disavowed normative content of the accelerationist program. Brassier, like his comrade Benedict Singleton, promotes a version of mastery as transcendence under the guise of Prometheus, or “jailbreak,” the prison being any and all material constraints on the human condition.
The accelerationists here join hands with less reputable (if more influential) singularitarian fellow travelers, like Ray Kurzweil, as they move from mechanical Prometheanism to a Gnostic credo (in scientistic dress) that insists we can transcend our finite bodies, the natural world, and materiality itself through a knowledge and rationality occult in its power. But what might this speculative mythology look like in practice? The so-called ecomodernists provide one answer. The ecomodernists — who include Ted Nordhaus, Michael Shellenberger, and Stewart Brand, among others — are affiliated with a California-based environmental think tank called the Breakthrough Institute. They are, according to their mission statement, “progressives who believe in the potential of human development, technology, and evolution to improve human lives and create a beautiful world.” The development of this potential is, in turn, predicated on “new ways of thinking about energy and the environment.” Luckily, these ecomodernists have published their own manifesto, in which we learn that these new ways include embracing the Anthropocene — used here to denote, in a less specific way than Jason Moore’s Capitalocene, the disastrous changes wrought by humans on the planetary environment, now inscribed in the geological record — as a good thing.
This “good Anthropocene” provides human beings a unique opportunity to improve human welfare, and to protect the natural world in the bargain, through a further “decoupling” from nature, at least according to the ecomodernist manifesto. The ecomodernists extol the “role that technology plays” in making humans “less reliant upon the many ecosystems that once provided their only sustenance, even as those same ecosystems have been deeply damaged.” The ecomodernists reject natural limits of any sort, along with the planetary metabolism that anchors eco-socialist political economy, the solar socialism discussed in a previous post, and, in actuality, all human life. As opposed to solar communism, and the construction of a sustainable eco-socialist technological regime in accordance with the possibilities and limits of the planetary metabolism, the ecomodernists argue that we can reduce our impact on the natural world, while continuing to "grow" the global economy in a specifically capitalist fashion, by "decoupling" from the natural world altogether. For the ecomodernists, we must divorce the earth for her own good. How can human beings completely “decouple” from a natural world that is, in the words of Marx, our “inorganic body,” outside of species-wide self-extinction, which is current policy? The ecomodernists’ policy proposals run the gamut from a completely nuclear energy economy and more intensified industrial agriculture to insufficient or purely theoretical (non-existent) solutions to our environmental catastrophe, such as wholesale geoengineering or cold fusion reactors (terraforming Mars, I hope, will appear in the sequel). In the words of Chris Smaje:
Ecomodernists offer no solutions to contemporary problems other than technical innovation and further integration into private markets which are structured systematically by centralized state power in favour of the wealthy, in the vain if undoubtedly often sincere belief that this will somehow help alleviate global poverty. They profess to love humanity, and perhaps they do, but the love seems to curdle towards those who don’t fit with its narratives of economic, technological and urban progress. And, more than humanity, what they seem to love most of all is certain favoured technologies, such as nuclear power.
Rather than viewing the partisans of ecomodernism as cynics, shills, or useful idiots, we should take them at their word. The ecomodernists, like their accelerationist comrades, are true believers, although the belief in this case is overdetermined by the long history of capitalist modernization and its Promethean mythology. These twenty-first-century Fausts nonetheless push Brassier’s Promethean transcendence in a decidedly alchemical direction, as they seek the algorithms that would turn lead into gold and humans into the “God species.” In the words of their most prominent literary forerunner, who also sought to achieve alchemical ends with ostensibly scientific means, “Life and death appeared to me ideal bounds, which I should first break through, and pour a torrent of light into our dark world.” In other words, this Promethean program is better described as the new Frankensteinism.
In the last post I discussed the thirteenth-century Ziyad ibn ‘Amir al-Kinani as an Andalusi chivalric novel, one with implications for how we understand the reception of Arthurian narrative in the Iberian Peninsula, and of particular interest for students of the Libro del Cavallero Zifar (Toledo, 1300).
There are a number of coincidences between Ziyad and Zifar. Most of them are on the level of narrative motif. Two episodes in particular are present in both texts but absent from popular Arabic literature in general: those of the supernatural wife who bears the hero a son, and of the underwater realm. These motifs are united in the Arthurian “Lady of the Lake”, and here find expression in Zifar in the episode of the Caballero Atrevido (González, Zifar 241–251). In Ziyad, they appear in the episodes of Ziyad’s marriage to the Princess Alchahia, mistress of the submerged castle of al-Laualib (Fernández y González 22–26), and in the following episode of his marriage to a “dama genio”, or enchanted lady (Fernández y González 30–31).
First Ziyad arrives at the castle, which each night submerges into the lake:
When the sun rises above the horizon, the castle begins to rise from the depths of the waters, until it reaches the level of the surface of the earth. Then horses cross a vast bridge to go out and graze, and the cows and flocks of sheep go out to pasture. As evening falls, when the sun leans toward the west, the flocks return, and the cows and horses, and they all sink again into the water, that is, enter into Al-laualib keep, submitting themselves to its movements. (Fernández y González 19).
There Ziyad is greeted by its mistress, who is dressed as a knight. She challenges him to combat, in the course of which Ziyad notices with some surprise that his opponent is female. Finally, he defeats her and proposes marriage. She accepts, and he becomes her king and lord of the submerged castle. In the following episode, Ziyad encounters an enchanted lady who bears him a son and then releases Ziyad after the boy is two years of age. One day Ziyad goes out hunting a beautiful gazelle and becomes lost in the woods. What follows is a perfectly conventional encounter of the hero with an enchanted fairy, so common in Western folkloric tradition (Thompson 1:382–384, 3:40–42) and abundant in French Arthurian texts (Guerreau-Jalabert 30, 62; Ruck 167, 173):
When the star was hidden, I saw that it was climbing a high hill, where a road led that looked more like an ant path or the side of a beehive, she continued her flight and I followed close behind, until I came to a grotto where she hid. I dismounted and entered the grotto to give chase, and the darkness surrounded me; but in its midst I spied a damsel, radiant as the midday sun in a cloudless sky (Fernández y González 29).
The woman, Jatifa-al-horr, describes herself as “a good Jinn who believes in the Qur’an” (Fernández y González 30); believing jinn who marry humans are also mentioned in the 1001 Nights (El-Shamy 69). In this way the compiler brings the Arthurian supernatural wife motif, one also present in Zifar, into line with the values of the Islamic textual community, by giving the supernatural a Quranic point of reference. She then reveals that she appeared to Ziyad in the form of a gazelle and enchanted him so that he would follow her to her hidden castle.
In these two episodes the “lady of the lake” motif is broken out into two separate episodes, each containing elements of the well-known Arthurian motif found also in Zifar. There is a good amount of speculation among critics as to the sources of these motifs, ranging from “Oriental” to “Celtic” to “Hispanic” (González, Reino lejano 103 n 25; Deyermond). It certainly is curious that the same two motifs, the only fantastic motifs in all of Zifar, whose source is contested by critics and still an open question, should appear in an Arabic manuscript from the same region written some 70 years prior to the composition of Zifar.
Depending on how we read this evidence, it could lend credence to a number of different theories about Zifar. On the one hand, if we believe the motifs are Celtic in origin, we should suppose their transmission through Arthurian tradition to Ziyad and thence to Zifar. This would ironically corroborate both the argument that Zifar relied on Arabic sources and the argument for the Arthurian-Celtic sources of the fantastic episodes in Zifar.
The existence of the popular storytelling tradition attested by the 101 Nights manuscript and Ziyad suggests yet another model for understanding the presence of “Arabic” source material in Zifar, in the episodes of the Caballero atrevido (‘the Fearless Knight’) and the Yslas dotadas (‘The Enchanted Islands’). (González, Zifar 240–251 and 409–429).
Suppose there were a tradition of 101 and/or 1001 Nights-style storytelling that was based on dynamic, ever-changing live performances (imagine a genre or tradition instead of a manuscript). Authors introduced new tales, adapted others from other traditions, and dressed them in the fictional trappings of the popular storytelling tradition of the Arab world that then produced both the 101 Nights and the 1001 Nights. We have already established that Castilian authors such as Don Juan Manuel drew on Andalusi oral narrative tradition (Wacks, “Reconquest”). What if the author of Zifar had done likewise, relying not on Andalusi manuscripts of learned Arabic texts but rather on stories told and retold within the context of the Nights tradition? The apparent Arabization of names and place names that has led critics to suppose an Arabic origin for Zifar may well be instead a reflection of a shared storytelling culture by which Castilian authors adapted material learned from storytellers in their written works, conserving and at times Hispanizing (or outright corrupting) personal and place names, simply because that was how the Castilian author heard them.
Arabic texts of the time also reflect a shared culture of storytelling. As we have seen, place names of faraway, exotic locations such as China vacillate between Romanized and Arabized versions (Ott 258). Like the author of Zifar, the compiler of 101 Nights was drawing on a live, multilingual storytelling performance tradition in which performers told tales alternately in Andalusi Arabic or in Castilian, and likely at times in some combination of both. This suggests a world of code-switching storytellers who moved effortlessly from Arabic to Castilian and back again. Only when viewed through the lens of the literary manuscript does this culture appear as two separate cultures that communicate with difficulty through translation and adaptation. Just as with Iberian Hebrew poets, who were perfectly versed in Romance popular culture but compelled by literary convention to write almost exclusively in Hebrew, our authors and compilers of 101 Nights, Ziyad, and Zifar recorded in monolingual form a tradition that was in practice at least bilingual and probably to some extent interlingual, much like today’s US Latino culture, where English, Spanish, and Spanglish coexist on a continuum of linguistic practice.
The evidence Ziyad presents is compelling on two counts. On the one hand, Ziyad’s analogues of the Arthurian motifs found in Zifar complicate the question of Zifar’s putative Arabic sources. We must choose among the following possibilities. Did the Arthurian material pass from the French to Ziyad and thence to Zifar? It would be a delicious but perfectly Iberian irony for Zifar to have received Arthurian material from an Andalusi text. Or did both Ziyad and Zifar take the material directly from the French? Or, a third and in my opinion more likely alternative: the Arthurian material entered Iberian oral narrative practice, where both Ziyad and Zifar collected it. This thesis finds strong support in scholars’ assessment of the Andalusi storytelling practice reflected in the 101 Nights manuscript.
Ziyad and 101 Nights both attest to a corpus of Andalusi written popular literature giving voice to a specifically Iberian (or at least Maghrebi) experience vis-a-vis the Muslim East. This corpus remains largely latent, and we await quality critical editions, and translations into other languages, of Ziyad, the other 11 texts in Escorial Árabe MS 1876, the 101 Nights, and other texts as they come to light. Until then, our findings are necessarily tentative, based as they are on translations. What we can state, however, is the following: Ziyad provides us with new, earlier examples of the penetration of Arthurian themes and motifs into the Iberian Peninsula, predating both the Castilian translations of the Arthurian romances and their adaptation in Caballero Zifar. These versions circulated in a multilingual, multi-confessional Iberian narrative practice that included both oral and written performances. All of the above changes our understanding of Caballero Zifar, and potentially many other early works of Castilian prose fiction, as part of a literary polysystem with an oral component that is underrepresented in the sources yet important for understanding the development of literary narrative in Iberia.
El-Shamy, Hasan M. A Motif Index of The Thousand and One Nights. Bloomington, Ind: Indiana University Press, 2006. Print.
Fernández y González, Francisco, trans. Zeyyad ben Amir el de Quinena. Madrid: Museo Español de Antigüedades, 1882. Print.
González, Cristina. “El cavallero Zifar” y el reino lejano. Madrid, España: Editorial Gredos, 1984. Print.
———, ed. Libro del Caballero Zifar. Madrid: Cátedra, 1984. Print.
Guerreau-Jalabert, Anita. Index des motifs narratifs dans les romans arthuriens français en vers (XIIe-XIIIe siècles). Geneva: Droz, 1992. Print.
Ott, Claudia. “Nachwort.” 101 Nacht. Zurich: Manesse Verlag, 2012. 241–263. Print.
Ruck, E. H. An Index of Themes and Motifs in 12th-Century French Arthurian Poetry. Cambridge: D.S. Brewer, 1991. Print.
Thompson, Stith. Motif-Index of Folk-Literature. Rev. and enl. ed. Bloomington: Indiana University Press, 1932. Print.
Wacks, David A. “Reconquest Colonialism and Andalusi Narrative Practice in Don Juan Manuel’s Conde Lucanor.” diacritics 36.3-4 (2006): 87–103. Print.
From author website
An earlier version of this post appeared in the Glasgow Review of Books on 9/27/2015. Thanks to those who commented on it!
Old Man Anthropos has a new date. I don’t believe in magic numbers, but this one has got me thinking.
The Age of Man, or Anthropocene, has become the word of the day. Making a bid to replace the Holocene, or Age of the Present, as the scientific term for the geological era in which we live, the Anthropocene has caught the attention of scientists, scholars, artists, poets, theorists, and the general public. As humanist and post-humanist critics explore the era’s implications, scientific debate continues about its precise nature. The question of origins remains vexed: when did the Age of Man start? The most recent candidate for the Golden Spike, or GSSP (Global Boundary Stratotype Section and Point), which marks the start of the Anthropocene is 1610. Geologists Simon L. Lewis and Mark A. Maslin argue in the journal Nature that the clearest geological markers of human influence on the global climate appear in 1964 and 1610. The late twentieth-century date reflects the peak of radioactive particles in the atmosphere, which subsequently declined after the Partial Test Ban Treaty of 1963. But the earlier date catches this Shakespeare professor’s eye: 1610 is three years after the founding of the Jamestown colony and one year before the first staging of The Tempest. Amid the glories of the English Renaissance sits an ecological spike. When Sir Walter Raleigh graced Queen Elizabeth’s court and Shakespeare’s dramas were first staged, our Anthropocene nightmare began. Or so goes the story.
It’s a problem that choosing this date might advance the “swerve into modernity” narrative that has been receiving much-needed pushback in recent years. (I will take a few swings at triumphalist conceptions of this history in Shipwreck Modernity, out in December 2015 from University of Minnesota Press.) But Lewis and Maslin don’t base their claim for a 1610 spike on newly recovered manuscripts of Lucretius or on the Baconian trio of print, gunpowder, and the compass. Instead these scientists state that 1610 marks “an unambiguously permanent change to the Earth system” generated by the ecological mixing of the Americas with Afro-Eurasia. The starkest consequence of this mixing from a human perspective was death on an unprecedented scale, primarily among Native Americans. Estimates vary, but the New World may have experienced the loss of nearly 50 million souls, out of an estimated pre-Contact population of roughly 60–65 million, during the century of first contact. No period in recorded history matches this death toll on so vast a scale. The massive die-off of the human population and subsequent “cessation of farming and reduction in fire use” led to the “regeneration of over 50 million hectares of forest, woody savanna and grassland” (Lewis and Maslin). The open vistas of the New World were not destiny’s gift to European settlers. These empty landscapes were visible evidence of the Anthropocene. The Age of Man is an Age of Death.
Lewis and Maslin name the 1610 date the “Orbis” spike, from the Latin for “world,” because its drivers are global: the worldwide movements of human and nonhuman populations, as well as other factors including “colonialism [and] global trade.” As Dana Luciano noted in Avidly this past spring, this spike describes an Anthropocene that emerges not from industrial expansion but through such phenomena as the “concurrent history of the Atlantic slave trade.” The 1610 Anthropocene represents the early stages of what we now call “globalization.” What might a global Anthropocene that shares its era with Shakespeare and Pocahontas mean?
Image courtesy of the Folger Shakespeare Library under Creative Commons License CC BY-SA 4.0
It takes some imagination to conceive that the Age of Man started in 1610, but now that we know the date we can find the words. Listening with Anthropocene ears, we hear familiar old lines differently. The magician’s voice has changed. On the upper stage stands Prospero enrobed, singing out magnificent poetry in the voice of Gandalf and Magneto:
Ye elves of hills, brooks, standing lakes and groves,
And ye that on the sands with printless foot
Do chase the ebbing Neptune, and do fly him
When he comes back…
If we have ears to hear, we realize Shakespeare’s wizard sings destruction and the depopulation of the world. He creates and revels in ecological disorder:
I have bedimmed
The noontide sun, called forth the mutinous winds,
And ’twixt the green sea and the azured vault
Set roaring war…
[G]raves at my command
Have waked their sleepers, ope’d and let ‘em forth
By my so potent art.
Whether in Milan or the magic island, Duke Anthropocene presides: enchanting, indulging, releasing, destroying. His voice isn’t the only one to which we should listen — I find more hope and value in shipwrecked sailors, lovelorn poets, and disoriented pilots — but since we’ve been listening to him for so long, it might be time to reconsider what he is saying.
A Renaissance Anthropocene echoing in blank verse suggests some unexpected things about this increasingly popular term.
The 1610 Anthropocene means death, not heat, is humanity’s primary historical driver. We’re not just making the world warmer but making it deadlier. I think Thomas Pynchon nailed this one back in 1973, writing from his beach pad in southern Cal:
This is the world just before men. Too violently pitched alive in constant flow ever to be seen by men directly. They are meant only to look at it dead, in still strata, transputrefied to oil or coal. Alive, it was a threat: it was Titans, was an overpeaking of life so clangorous and mad, such a green corona around earth’s body that some spoiler had to be brought in before it blew the Creation apart. So we, the crippled keepers, were sent out to multiply, to have dominion. God’s spoilers. Us. Counter-revolutionaries. It is our mission to promote death (Gravity’s Rainbow 720).
The 1610 Anthropocene means that the most consequential historical and ecological forces in the Age of Man have been inhuman viruses, not mortal industry. Smallpox and influenza cleared the New World for colonization; malaria made its tropical regions ripe for transatlantic slavery. This inhuman globalization connects with Jason Moore’s notion of the 450-year-old Capitalocene as “a way of organizing nature” (Capitalism in the Web of Life 78). Moore’s project brings the nonhuman into capitalism through “a world history in which nature matters not merely as consequence, but as constitutive and active in the accumulation of abstract social labor” (84). Eco-modernity is not only a human story.
The 1610 Anthropocene means that the key motivation of our species was a desire for global connection, not simply our ability to produce things or grow our population. Columbus sailed for China. His successors midwifed global ecological catastrophe. There are many ways to blame, aggrandize, or describe the globalizing energies of early modern expansion. In addition to the almost-canonical Anthropocene and newer Capitalocene, I seek space for the Homogenocene, an Age of increasing Ecological Sameness, a Thalassocene or Age of Oceans, and — my real favorite — Naufragocene, the Age of Shipwreck. Each ‘cene jostles the others; each connects and disconnects.
The 1610 Anthropocene used to be called the “Columbian Exchange,” but that term is too reminiscent of “great man” theories of history. Old Man Anthropos may have started it, but He’s never been in control. The better phrase, “ecological globalization,” takes the soup out of human hands. That’s where it should be. We’re in it, not cooking it. Even Moore’s eco-Marxist reconceptualization of human history as “environment-making” (45) risks granting too much agency to humans.
Reconsidering the 1610 Anthropocene through both capitalist expansions and more-than-human collisions helps emphasize that the core story, the story that still needs telling and that meaningfully precedes the supposed modernity of the past half-millennium, concerns the production of hybrids through the collision of Unlike Worlds. Creating hybrid newness isn’t just a 1610 question, even if some forms of hybridity blossomed during that period. Hybrid-production typifies human cultural history, from neolithic art to postmodern architecture. Bruno Latour has given us a robust language for hybridity, but our best guide here may be Caribbean poet and theorist Édouard Glissant, whose idea of Relation promises “a new and original dimension allowing each person to be there and elsewhere, rooted and open, lost in the mountains and free beneath the sea, in harmony and in errantry.” That’s the way to navigate storms, in or beyond the Anthropocene. Harmony and errantry, sailing together.
The 1610 Anthropocene takes the latest claim for the radical newness of today and submerges it back into History, with all of history’s messiness and swirl. A four-hundred-year-old Anthropocene promises an unstable future, and one in which it’ll be worth recalling our past as itself disorienting and malleable. If human civilizations have always been environment-makers, the mutual implication of human and non-human actors may not be so new after all. It turns out that this latest thing is also an old thing.
To recast an old phrase that has new resonance in an age of rising global temperatures: the past isn’t dead. It’s just getting warmed up.
What is the new relationship to necessity and the natural world envisioned by Percy Shelley in his 1819 poem Prometheus Unbound? And what is its relevance in our present moment, when the capitalist appropriation and exploitation of human and non-human natures that Shelley depicts in nascent form have grown up into our own planetary eco-social catastrophe? This relationship cannot be reduced to the standard caricature of romantic primitivism; nor should we read Thomas Malthus’s theory of the natural limits to growth, selectively applied to the lower orders of course, in the work of Shelley, the poet who chose damnation “with Plato and Lord Bacon” over “Heaven with Paley and Malthus.”
If technology, as Jean-Pierre Dupuy contends, is the discourse “of and about the technique, which fits it into a system with other techniques or know-how, with symbolic or imaginary representations, with conceptions of the world, but also with institutions, rules and norms,” then we need to approach accusations of “primitivism” with some trepidation. In popular discourse, the primitivist, like the Luddite, suggests someone who is opposed to technique in Dupuy’s sense: the constitutively human capacity to use tools in order to reshape our environments with specific ends in view. Technique is in this way an outgrowth of forethought, tactics, and strategy, and as such, one would be hard-pressed to find an opponent of these things, let alone a programmatic opposition, since such a program would require forethought, tactics, and strategy, in addition to concrete tools.
In fact, “primitivism” is more often than not a rhetorical club used by partisans of one technological system — a discourse, but also a preferred set of social relations — to delegitimize advocates of an alternative system with an alternative set of techniques. So, Alex Williams and Nick Srnicek, in their Accelerationist Manifesto, argue for strategy and organization as the only feasible response to “the breakdown of the planetary climatic system,” about which the most resolutely decelerationist eco-socialist would no doubt agree. But, on closer examination, their program is not a program so much as it is a myth, and one more in the mold of Carl Schmitt than Karl Marx. And our accelerationist myth-makers draw the line between “those that hold to a folk politics of localism, direct action, and relentless horizontalism, and those that outline what must become called an accelerationist politics at ease with a modernity of abstraction, complexity, globality, and technology.” Which modernity? What technology?
Our new futurists repeatedly insist that neoliberal capitalism is a fetter on techno-science, and therefore on human liberation, while their promotion of “secrecy, verticality, and exclusion” and a “Promethean politics of maximal mastery over society and its environment” could easily be mistaken for Silicon Valley start-up copy, or perhaps for the transhumanist speculations of some capitalist singularitarian.
Similarly, Alberto Toscano, in an otherwise laudable essay entitled “The Prejudice Against Prometheus,” identifies Prometheanism with “knowledge, scale, and purpose,” which he counterposes to a left-wing anti-Prometheanism that, in his estimation, is a symptom of the melancholia born of political defeat. But is there a principled opposition to knowledge, scale, and purpose as such? Or is framing such significant debates about left-wing political strategy in this manner a roundabout way of elevating one Prometheus to the only god in heaven, while recasting the alternatives as so many dark and primitive idols to be smashed? Even Marshall Berman, the most persuasive advocate of Prometheanism in this vein (on which more below), recognizes in certain explicitly anti-Promethean sixties-era “advocates of solar, wind, and water power, of small and decentralized sources of energy, of ‘intermediate technologies,’ of the ‘steady state economy,’” a Promethean program that would require “a redistribution of economic and political power” and “the most extensive and staggeringly complex reorganization of the whole fabric of everyday life.”
The alternative Prometheanism that we can limn in Shelley’s work envisions a qualitative break with “things as they are,” and, more specifically, with capitalist social relations. Mechanical Prometheanism foregrounds the cult of increased labor productivity, an investment in a certain kind of material abundance tied to the capitalist value form and its class society, and the domination of the natural world — recently rebranded, in an ideological reprise of Gnostic fantasy, as “decoupling,” despite our being a part of and therefore dependent on this same nature. Even the most emphatically utopian version of these “reasoners and mechanists” can only imagine quantitative change, “more,” as opposed to different: the qualitative shift that defines any revolutionary vocation worthy of the name. Shelley’s Prometheus embraces technics insofar as they serve this qualitative shift in human social relations and in the relationship between human and non-human natures, as the poet explains in his Defense of Poetry:
We have more scientific and economical knowledge than can be accommodated to the just distribution of the produce which it multiplies. The poetry in these systems of thought is concealed by the accumulation of facts and calculating processes. There is no want of knowledge respecting what is wisest and best in morals, government, and political economy, or at least, what is wiser and better than what men now practise and endure. But we let I dare not wait upon I would, like the poor cat in the adage. We want the creative faculty to imagine that which we know; we want the generous impulse to act that which we imagine; we want the poetry of life; our calculations have outrun conception; we have eaten more than we can digest. The cultivation of those sciences which have enlarged the limits of the empire of man over the external world, has, for want of the poetical faculty, proportionally circumscribed those of the internal world; and man, having enslaved the elements, remains himself a slave. To what but a cultivation of the mechanical arts in a degree disproportioned to the presence of the creative faculty, which is the basis of all knowledge, is to be attributed the abuse of all invention for abridging and combining labor, to the exasperation of the inequality of mankind? From what other cause has it arisen that the discoveries which should have lightened, have added a weight to the curse imposed on Adam? Poetry, and the principle of Self, of which money is the visible incarnation, are the God and Mammon of the world.
As opposed to reading “poetry” or “the creative faculty” as quixotically romantic or idealist — an interpretation that ironically rests on a literal understanding of these terms — we should take them as figures used by a poet in constructing a political economy. While the Shelley of the Defense is most often remembered for describing poets as “the unacknowledged legislators of the world,” this declaration must be viewed for what it is: the conclusion to an essay in which poetry includes the republican institutions of Rome among other social, political, and economic arrangements. In underlining how “the abuse of all invention” abridges and combines “labor, to the exasperation of the inequality of mankind,” Shelley emphasizes technology as a relationship embedded within other social and economic relations; and in this case, the poet traces the counterintuitive relations between labor saving machinery, wage labor, and immiseration under capitalism.
Shelley casts the sciences and mechanical arts as handmaidens to a system described in terms of absence: “the want of the poetical faculty.” Poetry, in this case, functions as metonym for that non-alienated version of human making, outside of and after capitalism, that Marx outlines most explicitly in his early work. While our accelerationists tout “emancipatory alienation” as the key to their neo- (or post-) Marxist politics, Marx arguably systematized Shelley’s alternative Prometheanism, using the tools of bourgeois political economy and quantification — another technics — against alienation and in the service of qualitatively different social relations.
Nor should we equate this dialectical inversion with technological mastery of the non-human world insofar as that mastery enables human freedom and flourishing. Human beings cannot flourish at the expense of the planetary metabolism of which we are a part. As Shelley understood in the early nineteenth century, “man having enslaved the elements, remains himself a slave.” Prometheus is a child of Earth, and this version of Prometheanism assumes the continuities between the human and the natural, human social organization and extra-human natures, or what Jason Moore calls the “double internality,” which in the capitalist era encompasses “capitalism’s internalization of planetary life” and the “biosphere’s internalization of capitalism.” Moore emphasizes how both green and futurist political economies share a Cartesian logic that clearly demarcates Nature — which we must conquer or to which we must return — from Culture, Society, or the Human; this “abstract social nature” was both prerequisite and product of early capitalism’s reorganization of the natural world.
Moore presents the appropriation of nature (transforming non-human natures into free or cheap natural inputs) as the indispensable basis for the exploitation that defines commodity production under capitalism. As Shelley recognized in 1819, any revolutionary break with the capitalist mode of production must entail a reorganization of human and extra-human natures, as the former is a subset of the latter, while both are components of a larger planetary metabolism now threatened with collapse in the era of the Capitalocene. Rather than an Edenist return to some prelapsarian Nature, mending the “metabolic rift” would necessarily entail an alternative technological regime. Just as the proletariat must smash, rather than repurpose, the capitalist state form in order to reconfigure political power in a way that accords with revolutionary ends, as Marx recognized in the wake of the Paris Commune, modern capitalist technology also requires revolutionizing alongside human and extra-human relations. Capital’s techno-productive apparatus is, as Michael Löwy recognizes,
in contradiction with the need to protect the environment and the health of the population. One must therefore ‘revolutionize’ it through a process of radical transformation. This may mean, for certain branches of production, to discontinue them altogether: for instance, nuclear plants, certain methods of industrial fishing (already responsible for the extermination of several species in the seas), the destructive logging of tropical forests, and so on — the list is very long! In any case the productive forces, and not only the relations of production, have to be deeply changed to begin with, by a revolution in the energy system, with the replacement of the present sources — essentially fossil fuels — responsible for the pollution and poisoning of the environment, by renewable ones: water, wind, and sun.
Lewis Mumford’s distinction between “monotechnics,” organized around profit, production, and power, and “polytechnics,” denoting a plurality of techniques that serve both human and non-human natures, is useful in this regard. An alternative Prometheanism requires that we dismantle our monotechnical order and build a polytechnical arrangement in its place.
David Schwartzman elaborates one such possibility in his proposal for a full-blown “solar communism”: a specifically Marxist response to the ecological catastrophe. Solar communism begins, for Schwartzman, with a new relationship to the ecosphere, no longer “transformed and degraded,” but mined for knowledge, that “will flow into the techno-sphere, driving its productive forces and internal transformation. For example, agricultural systems, a key component of the technosphere, will be transformed in multifold ways, with open field crops becoming polycultures, utilizing ecologic pest control and a big expansion of greenhouses, with potentially high productivity gains. Containment of the technosphere is a radical application of the precautionary principle, as well as a solution to the problem of future generational representation in today’s decisions, since it maximizes the preservation of the present ecosphere for the future.”
This Promethean project includes the construction of said solar communism, encompassing a completely renewable energy economy, a sustainable agricultural system, and the restoration of biodiversity eviscerated by specifically capitalist patterns of growth, all of which require novel forms of organization and planning.
Reconciliation, or overcoming alienation from our own productive powers and the natural world, is in the end indistinguishable from emancipation: “Man lives on nature — means that nature is his body, with which he must remain in continuous interchange if he is not to die. That man’s physical and spiritual life is linked to nature means simply that nature is linked to itself, for man is part of nature.” Marx develops this strand of alternative Prometheanism as program and critique, beginning with his critique of nineteenth-century accelerationism — a topic I will explore in my next post.
It is difficult to bid farewell to Gamal al-Ghitani: a friend, an author, a true Cairene who taught us how to read and admire our history, walk in our cities, feel the power of narrative, and stand in awe of its literal and allegorical significations.
After several encounters in Cairo over the years, I was fortunate to see Gamal al-Ghitani one last time when he visited the Bay Area four years ago to deliver a series of talks on his life and work. As usual, his presence was charismatic, his exuberance infectious, as he talked about his life, his early childhood, his quests, and his literary and cultural career. He also spoke of Sufism and of the art of weaving both carpets and language. He talked about the struggles with his heart condition and his gratitude to the USA for two things: the American heart surgeons who saved his life and NASA, which for him symbolizes humankind’s ardent investigation of the unknown mysteries that lie beyond the confines of this Earth. But most captivatingly and uninhibitedly, al-Ghitani spoke of his own humble beginnings as a writer and of the vexing question that stayed with him throughout his life and marked his literary career: mathaa hadatha lil-ams (what happened to yesterday), which is the motif of my own reflections about him.
Al-Ghitani fell ill a few months before he finally passed away on October 18th, 2015. Those who loved him and admired his contributions to Arab literary and philosophical thought knew that his illness, a three-month coma, and a weak heart at an advanced age all worked against his survival. Yet his passing was a shock. It was painful to realize that those prolific and skillful hands, which had woven more than fifty remarkable narratives over fifty-five years (he began writing when he was fourteen), had surrendered the pen at last.
“He was Cairo itself,” remarked his (and my) friend Nezar Al-Sayyad in an email correspondence as we both bemoaned his loss. “Seldom can a single individual capture the complexity of a city like Cairo but it happened with Gamal… He had both the beautiful gift of turning history into literature, and the uncanny ability of making fiction, in turn, living history.” If I were to think of an epitaph for al-Ghitani, al-Sayyad’s dignified words could hardly be improved upon.
Death is overwhelming to the living. It shocks despite its inevitability, and it prompts us to go back in time and reflect. My last memory of al-Ghitani was a graceful dinner following a memorable talk, an unforgettable night of memories, nostalgia, and dhihk min al-qalb (genuine laughter from the heart), thanks to his sharp wit and ingenious sense of humor. This will always be my last memory of him: a radiant smile and a deep sense of ridha (contentment) that he probably derived from his sufi (mystic) reflections on the world. I took notes on much of al-Ghitani’s memorable talk that remarkable evening, not knowing exactly what I would do with them. I thought perhaps I would refer to them in a future essay on his work, but I have never gotten to it. It is befitting now to share some of those reflections in paying homage to al-Ghitani:
I remember a moment in 1959, I forgot which day it was; but it must have been in the winter because I recall the window of the room firmly shut and the blankets piled on top of each other as I sat on the edge of the bed. I remember the desire, a hidden, vivid desire, compelling me to pick up the pen and start writing my very first short story, Nihayat al-Sikkir (The Drunkard’s End), about a pauper acting drunk in order to justify stealing a few loaves of bread.
The world is grateful to that moment too! From it came a flow of unmatched writings interrupted only by circumstances beyond his control, including illness and imprisonment. That moment brought us such masterpieces as "Memoirs of a Young Man Who Lived a Thousand Years," "Guardians of the Eastern Gate," "Pyramid Texts," "The Excess of the City," "Stories of the Institution," Zayni Barakat, The Zafarani Files, The Book of Epiphanies, Naguib Mahfouz Remembers, "The Epistle of Insights" and "Destinies," "The Call of Absence," and many more. Al-Ghitani could talk about his writings and the moments that propelled them for hours. He had a rare talent for diving into the past and bringing it back to life. He even tried to remember the moment of his birth, but failed. He grumbles:
Unfortunately our minds retain nothing, no picture, sound, or feeling, that reminds us of the moment of our arrival in the world, to existence, when the umbilical cord is cut and the newborn is separated from the mother.
I am certain he must have tried many times to retrieve some flicker of memory from that past, because this is who he was. Al-Ghitani was not just a historian; indeed, it is reductive to call him such. But if one insists on describing his passion for history, then we might more appropriately call him al-‘A’ish fi al-Tarikh, "the one who lives in history," because for him history was a living preoccupation, an organism that never ceases to act upon the present. He was well aware of the difference between history and memory: memory is in many ways subjective, yet it is also what gives us identity and a sense of belonging to our surroundings. He reflected on how children come into language and start wondering about the meaning of life, about where they come from, and what it all means. But al-Ghitani himself had a completely different question as a child: “what happened to yesterday?” He asks this question as he reflects on his humble beginnings:
My imagination was like a shore in the midst of a natural state of isolation, and this perhaps was caused by the isolation of a poor family migrating from the deep south, sharing one room. My father was a low-ranking white-collar worker who had endured difficult challenges that prevented him from completing his education at al-Azhar. This left him with a steadfast determination to educate us all. He often said: “I do not want you to go through the hardships that I have endured.” I was the third to be born and the first to survive. Two brothers came before me, Khalaf and Kamal. Both of them passed away at an early age, Khalaf before I was born, and Kamal around my first birthday. I don’t remember anything about him.
He may not remember Kamal, and certainly not Khalaf who died before he was born, yet he still writes about his inability to remember as he pays tribute to their very short lives. This is the art and the ethos of al-Ghitani, an author sensitive to the ravages of time, a man unafraid of recording the loss even though he knows full well that his writing records its own failure to capture a painfully unmastered past. Al-Ghitani’s two brothers lie in the heart of this mysterious yesterday, the agent of death and the graveyard of human lives. Gamal al-Ghitani, like his two siblings Khalaf al-Ghitani and Kamal al-Ghitani, has now yielded to yesterday’s implacable sweep.
“What happened to yesterday” thus becomes in essence a question of unresolved grief and nostalgia. Yet, in that very yesterday, al-Ghitani is able to find solace and valediction for the loss of his father, who died while al-Ghitani was away. When he discovered Ibn ‘Arabi, al-Ghitani felt indebted to him for the solace and transcendental language offered in his work, connecting the spiritual experience with human existence and allowing him to soothe his cares and overcome his tribulation over his father’s death. This immersion in sufi diction and history led to the emergence of a new language in al-Ghitani, one that has become the hallmark of his work:
Although I am not a sufi in style, I still subscribe closely to a sufi vision of Time. I found accurate expression for my internal agonies in this tradition. In fact, my own agonies drove me to immerse myself in a sufi vision of the world. I was not in search of mere technique or style, but I discovered in the language of Sufism clarity and loftiness of diction even more poetic than poetry itself. If you haven’t yet read them, I would invite you to read Abu Hayyan al-Tawhidi’s Divine Signs, or ‘Abd al-Karim al-Jilani’s The Complete Human, or al-Hallaj’s Tawasin.
As I explain below, the language of Sufism not only becomes al-Ghitani’s signature narrative tool, but also alleviates his own anguish over the loss of his brothers and father by allowing the enormity of his mourning to dissolve in vaster realms of the sublime. The resolution for al-Ghitani lies in a language in continuous dialogue with history, one which treats the fleeting "yesterdays" of our calendrical world as neither fixed nor complete. This is how the language of art defies death in al-Ghitani:
It [yesterday] is a question of time — we could only imagine it, pinpoint the imaginary signs that register its progress, namely, the language of calendars, beginning with the seconds all the way up to the days, the months, the years, and the centuries. It is, however, impossible for us to influence the movement of time, to slow it down or speed it up. What remains implausible is the possibility of returning to any point in time that has elapsed. I do not know what mysteries determine my path, but I am certain that this question, which began early with me, is my motivation and incentive. Al-ibda’ yaqhar al-‘adam (Creativity vanquishes nothingness). This is exactly what our ancient Egyptian ancestors expressed in their buildings, drawings, and writings: these are the human endeavors that oppose obliteration and stand tall in the face of nothingness.
Write, make art, and live, or abandon it and perish into nothingness. This time-defying spirit, in which eminent authors like Shakespeare composed much of their work, is at the heart of al-Ghitani’s art and makes him, as Jonson remarked of Shakespeare, not just of his period but for all time. A single reading of any of al-Ghitani’s works leaves one with this touch of eternity.
I want to share a story I never told al-Ghitani. Even though he and I did chat about this particular work, I never had the chance to tell him how I stumbled onto an Arabic version of his Mutun al-Ahram (Pyramid Texts) during graduate school and how it changed my life. It happened on a cold winter day when I was struggling with graduate work in Wisconsin. Al-Ghitani could not have known that a novel of his, in Arabic, would find its way to the foreign-language section of the public library in Madison, Wisconsin. Mutun al-Ahram, which in so many ways helped me find my own path, features in its first matn (text) a main character, al-Shaykh Tuhami, a talib ‘ilm (a seeker of knowledge) in unfamiliar lands who travels from the extreme south of the Moroccan desert in order to pursue a religious degree from al-Azhar, the well-known Islamic university in Cairo and one of the oldest universities in the world. After he graduates, he returns home. As soon as he reunites with his mentor, the latter asks him about the pyramids. Al-Shaykh Tuhami’s quick reply is in answer to what he thinks is a strange and irrelevant question from his mentor:
“I do not have anything to tell you about the pyramids.”
Hearing this indifferent response, al-Shaykh Tuhami’s spiritual mentor rebukes him immediately: “Vain is the pursuit of a learner who lacks the desire to learn. Didn’t you pass by Cairo twice?” In shame and confusion, al-Shaykh Tuhami leaves Wadi al-Zamm and heads back to Cairo, this time determined to learn about the pyramids. His passion for comprehending the triangular structures overrides all else. He rents a cottage by Nazla-t-al-Samman near the road leading to Abu al-Hawl (Sphinx). He gazes and gazes at the pyramids from every angle, at every degree of light from sunrise to sunset, at nighttime and before dawn. He never takes his eyes off the pyramids. He is afraid of coming too close. It suffices to look at them from a distance.
Many Egyptian writers have addressed the pyramids in their fiction, but none comes closer to the Sufism of the sublime with which al-Ghitani delineates the mystic experience of al-Shaykh Tuhami and other characters as they encounter the pyramids; not Naguib Mahfouz or Ahmad Bakathir, or Abdelhameed Juda al-Sahhar; not Adil Kamil or Zaki Sa’d, nor Muhammad Jibril, nor Yusuf Kamal Abu Zayd. Not even Germany’s celebrated 18th-century philosopher and aesthetician of the Sublime, Immanuel Kant, who in fact uses the pyramids as an example of the sublime, even though he never saw them with his own eyes.
Coincidentally, at the time I stumbled onto Mutun al-Ahram, I was studying Kant’s Critique of Judgment in a graduate seminar in which the magnificent Jan Plug led us through the incalculable, incomprehensible, or as he called it, the “tear-ifying” horizons of Wordsworth’s “Crossing the Alps” section in The Prelude: “The immeasurable height/ Of woods decaying, never to be decayed/ The stationary blasts of waterfalls.” Kant resorts to powerful natural elements and impressive man-made architectural artifacts in order to illustrate the imagination’s failure to comprehend the totality and magnitude of external objects in one whole and thus form an aesthetic judgment. Kant’s account of the pyramids comes close to al-Ghitani’s depiction and is worth quoting in full:
Hence can be explained what Savary remarks, in his account of Egypt, viz. that we must keep from going very near the Pyramids just as much as we keep from going too far from them. For if we are too far away, the parts to be apprehended (the stones lying one over the other) are only obscurely represented, and the representation of them produces no effect upon the aesthetical judgment of the subject. But if we are very near, the eye requires some time to complete the apprehension of the tiers from the bottom up to the apex, and then the first tiers are always partly forgotten before the imagination has taken in the last, and so the comprehension of them is never complete.
In Mutun al-Ahram, al-Ghitani depicts al-Shaykh Tuhami’s relation to the pyramids in a manner similar to Kant’s reflections in his Critique, but al-Ghitani does not stop there. He exchanges the inexpressible essence of the pyramids in this architectonic text with a language that transcends mimesis or the mere semantic organization of words on a page. The pyramids in this first, almost untranslatable matn “tashawwuf,” become the desire for art, much like al-Ghitani’s own desire for writing which launched his prolific career. We see in al-Shaykh Tuhami al-Ghitani’s own urge for creativity and the fervent desire for art. What but art forbids all presentation, causes pleasure and pain, and bids the imagination to fail? What but art could save our imagination from dwelling on its own inferiority?
To conquer and transcend nothingness through art, al-Ghitani resorts to a meta-representational sufi language that perpetually signals to something infinite outside itself, something that it cannot capture or represent but can only allude to through the limited vocabulary of human language. The result is a diction that defies equivalence and perhaps even translation: tashawwuf, iighal, talash, idrak, nashwa, dhil, alaq, samt, raqsa. This level of semantic complexity and untranslatability is what lies ahead for al-Ghitani’s translators (who are certain to face a daunting yet incredibly rewarding task) as they work on many of his masterpieces awaiting translation.
The pyramids thus can only be represented negatively, through the inability of al-Shaykh Tuhami to come close to them. Kant uses the sublime as an example of the failure of imagination as it reflects on its own inferiority in a simultaneous process of attraction to and repulsion from an object which is not in itself sublime. However, Kant’s conclusion is only the beginning for al-Ghitani’s text, which reminds us that the pyramids are not just an illustration of art, but a colossal text, a magnificent constellation of signs that do not merely include texts on their walls but are themselves written and visible only to those who have “the vision” to read beneath their mere physicality and architectural marvel.
Phenomenology is complex in this sense, since looks can be deceiving yet looks are all we have. On his first religious education journey, al-Shaykh Tuhami sees nothing more than an insignificant piling of stones on top of each other as he tries to understand the mysteries of the universe in Azharite manuscripts and pedagogical tautologies. It is only on his second quest that he learns how to see and sees what it means to be a true seeker of knowledge. The return of al-Shaykh Tuhami, his renewed desire to “discern” and open his eyes to the pyramids, not only introduces him to the realm of transcendental eternity, but grants him a training of the soul which, even though he could only grasp it in flashes, immeasurably exceeds the administered and channeled education available in a religious institution.
The pyramid texts, the pyramids that are themselves texts, may thus remain incomplete and immeasurable in Kantian eyes (his philosophical eyes, that is, since he never saw them), but not to al-Ghitani, whose diction has transformed their incomprehensible totality in a manner never attempted before. To be fair, Kant never said that the pyramids were in themselves sublime; the beholder’s perception of them is. In al-Ghitani, however, the pyramids present knowledge not only through the negation of knowledge, but, more importantly, after the failure of imagination to grasp them in their totality. Understanding the pyramids, or understanding ‘of’ the pyramids, comes not just from an overload of apprehensions that causes the failure of comprehension (ergo sublime, in Kant), but from their readability as texts.
In al-Ghitani, there is more to the pyramids than the mere distance or proximity proposed by an Egyptologist. Savary’s formula to perceive their magnitude, “neither too far away nor too close,” situates the Kantian sublime in relational visual perception. In al-Ghitani, the pyramids are phenomenologically and metaphysically textual. Perceiving them is thus beyond material vision. More appropriately, perceiving the pyramids is a linguistic act predicated not only on the recognition of their phenomenality, but on their readability as texts, in deciphering their inner content and its relation to their outer appearance. This is how the beholder, al-Shaykh Tuhami, is able to transcend the Kantian agitation triggered by the needs of knowledge and/or the needs of desire.
As the text proceeds, we learn that this hermetic knowledge, whether we understand it or fail to grasp it, is the only solace and consolation before a painful and unimaginable yesterday, that yesterday which took away al-Ghitani’s brothers, prevented him from being by his father's side when he died, and yet spared him and gave him 70 years of life; that yesterday which also witnessed the creation of death-defying structures like the pyramids. The negative knowledge of that yesterday consoles us as we try to understand that there are matters lying beyond our understanding. It is somehow comforting to realize that our comprehension will always remain incomplete, that language understands the limits of representation yet tries it anyway, just as the pyramids embrace the death they were built to fetishize, yet defy it anyway.
‘Inda al-dhurwa yaqa‘u al-fanaa’ (At the climax lies ‘annihilation’) is the penultimate sentence that al-Ghitani’s first chapter celebrates. Al-fanaa’ contains multiple meanings, death and the perishing of the physical body, but also a breaking through the confines of the physical world, or al-fanaa’ fi al-dhat (disappearance in the essence), namely, the attainment of maqam al-ittihad, that is, the mystical union with the Divine. The novel ends with a peculiar repetition at the apex of the pyramids: “La shay’. La shay’. La shay’.” (Nothing. Nothing. Nothing). If the apex/end of life seems to come to over-emphasized nothingness, and if the triangular sides of pyramids converge in mid-air, signaling a dhurwa (an apex or a climax), then their climactic end in the sky, which parallels the end of human life, becomes the very affirmation of the life it repetitively negates. Al-Ghitani’s pyramids, shrouded in mystery, still stand for something else above themselves, the desire for eternity. Look no further than the pyramids, al-Ghitani says to his careful reader. They are the very evidence of the victory of art over nothingness, of the continuity of life despite the marked inevitability of death, just as his work is the sign of the classical defeat of time by his own pen.
Al-Ghitani leaves our world with a valediction that forbids mourning: art is the irrefutable proof that the phenomenological world is not all that there is, but a fleeting present, a yesterday tomorrow, so to speak, and a sign of a yet-to-come that eludes our grasp of the passing moment. This promise alone is what matters to al-Ghitani, for whom language is immersed in a task whose aim it does not yet know and can only salute from a distance. This is not to say that al-Ghitani’s social commitment to this world is absent. His first work is about the stealing of a loaf of bread for survival. We can clearly discern a perspective of social consciousness in his work and an urgent call for justice and social equality.
In fact, most of al-Ghitani’s writings correspond to a tight connection in which both the individual and the social are tied to what Fredric Jameson has famously referred to as the “Utopian impulses” of texts. We see his critique of the present in Zayni Barakat and The Zafarani Files, in addition to other remarkable works that are yet to be translated into English, including “The Excess of the City” and the “Stories of the Establishment.” This is a rare quality of an author: to dwell on the sublime while making us the subject of our own involvement in, and perception of, the social world. In Zayni Barakat, social criticism and allusions to the loss of “utopia” in Nasser’s regime are clear, as Edward Said has remarked. But lest we forget, before Nasser, anti-colonial resistance, freedom from England and the corrupt Egyptian royalty, as well as the desire for self-rule were in themselves Egypt’s very utopian project.
In the Pyramid texts, however, utopia functions differently. It provokes a reflection on what exists and an aspiration for what lies beyond. Above all, it is a text that becomes its own utopia, precisely because it is predicated on the desire to attain that which is already achieved in the very act of writing it. This aesthetic build-up of language till it becomes the celebration of the text is what al-Ghitani leaves us. All his fiction points towards absolute emancipation: a utopia. In this enveloping spirituality of art a future becomes thinkable. And while we might think that al-Ghitani is no longer with us as we dwell on this hope, we must remember that we are following a map that he already drew for us in his fiction, and that he not only remains a living present, but the very future that we all aspire to reach.
there is no answer to this order of reasoning, except to advise a little wider perception, and extension of the too narrow horizon of habitual ideas. (or there is an answer to this order of reasoning.)—An algorithm
Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, it is Wall Street’s preference for terse prose over poetic flourish that makes such a challenge plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” one sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g. a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
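The “three data points fused together with syntax” observation can be made concrete with a toy template filler. This is a hedged sketch of how template-based financial NLG might work, not the actual system behind the WSJ excerpts; the function name and parameters are invented for illustration, while the output sentence is the one quoted above:

```python
# Toy template-based NLG: render three structured data points as the kind of
# terse sentence quoted above. Function name and parameters are invented.

def render_cash_summary(expected_q2_balance_m, q1_reduction_m, implied_burn_m):
    """Fuse the data points together with fixed syntax."""
    return (
        f"Q2 cash balance expectation of ${expected_q2_balance_m}m implies "
        f"~${implied_burn_m}m of cash burn in Q2 after a "
        f"${q1_reduction_m}m reduction in cash balance in Q1"
    )

print(render_cash_summary(830, 140, 80))
```

Everything here is syntax; whatever “intelligence” a commercial system adds lies in choosing which template fits which rows of data.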
NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things we believe humans alone can do—like write news articles about sports or the weather, or write real estate ads, as in the prototype my Fast Forward Labs colleagues built. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.
But what if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?
It is this search to use automation as a vehicle for defamiliarization that makes National Novel Generation Month (NaNoGenMo) so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of form invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new GitHub (a web-based software development collaboration tool) community was formed.
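In the spirit of that question, the minimal possible entry is almost trivial. Here is a sketch, with an invented lexicon, of a program that satisfies NaNoGenMo’s single 50,000-word constraint; it is a deliberately naive illustration of the rule, not any actual submission:

```python
import random

# The only rule: at least 50,000 words. A deliberately naive "novelist"
# that meets the constraint by sampling from a tiny invented lexicon.
LEXICON = ["rain", "window", "she", "waited", "and", "the", "door", "opened"]

def generate_novel(n_words=50_000, seed=0):
    rng = random.Random(seed)  # fixed seed: the "novel" is reproducible
    return " ".join(rng.choice(LEXICON) for _ in range(n_words))

novel = generate_novel()
print(len(novel.split()))  # 50000: constraint satisfied, art not guaranteed
```

The interesting submissions, of course, begin where this one ends: the constraint is trivial to meet, so the form becomes a space for experiment.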
While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
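A pronouncing-dictionary rhyme lookup of the kind Parrish shared can be imagined along these lines: the CMU dictionary encodes each word as ARPAbet phonemes, and two words rhyme (roughly) when their phonemes match from the last stressed vowel onward. The inline dictionary below is a tiny invented stand-in for the real file, and the helper names are mine, not Parrish’s:

```python
# Toy stand-in for the CMU Pronouncing Dictionary: word -> ARPAbet phonemes.
# Digits on vowels mark stress (1 = primary, 2 = secondary, 0 = none).
TOY_CMUDICT = {
    "BEE":  ["B", "IY1"],
    "TREE": ["T", "R", "IY1"],
    "FREE": ["F", "R", "IY1"],
    "MOON": ["M", "UW1", "N"],
    "JUNE": ["JH", "UW1", "N"],
}

def rhyming_part(phones):
    """Phonemes from the last stressed vowel (stress digit 1 or 2) onward."""
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in "12":
            return phones[i:]
    return phones

def rhymes(word, dictionary):
    """All other words whose rhyming part matches the query word's."""
    target = rhyming_part(dictionary[word])
    return sorted(w for w, p in dictionary.items()
                  if w != word and rhyming_part(p) == target)

print(rhymes("BEE", TOY_CMUDICT))  # ['FREE', 'TREE']
```

Swapping the toy dictionary for the full CMUdict file turns this sketch into a practical tool for exactly the kind of scraping the group shares.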
Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still very nascent. Most of the 2014 submissions instead use rules to transform former texts in creative ways, which also leads to topical similarities.
At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments:
(required evolutions suddenly concentrating
favourable structures. a chemical behind
conclusions. determining the opinion in the event.
looking while happening. reciting the literature
on the water. the position, existing. the amount
around the resource. the task in the example. the
selection near attempts. undergoing the layer and
observing the object. the timeliness around the
availability. Beginning memories…)
Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters: “I drove into muddy water. I saw others weeding.”; and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
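The pipeline Parrish describes can be sketched in a few lines. The dream entries, the verb table, and the word-level sentiment lexicon below are simplified stand-ins for Miller’s text and for whatever scoring algorithm Parrish actually used; only the shape of the procedure, rewrite then sort by the denotation’s sentiment, follows her description:

```python
# (action, denotation) pairs in Miller's formula; a toy sample.
ENTRIES = [
    ("To drive into muddy water", "denotes sorrowful complications"),
    ("To see an oak full of acorns", "denotes increase and promotion"),
    ("To wade in clear water", "denotes joyful and fortunate pleasures"),
]

# Invented word-level sentiment lexicon standing in for a real scorer.
SENTIMENT = {"sorrowful": -2, "complications": -1, "increase": 1,
             "promotion": 1, "joyful": 2, "fortunate": 2, "pleasures": 1}

def first_person_past(action):
    """Crudely rewrite 'To drive ...' as 'I drove ...'."""
    verbs = {"drive": "drove", "see": "saw", "wade": "waded"}
    words = action.split()[1:]               # drop the leading "To"
    words[0] = verbs.get(words[0], words[0])
    return "I " + " ".join(words) + "."

def score(denotation):
    return sum(SENTIMENT.get(w, 0) for w in denotation.split())

# Worst dream first, best dream last, per the denotation's sentiment score.
ordered = [first_person_past(a)
           for a, d in sorted(ENTRIES, key=lambda e: score(e[1]))]
print(ordered[-1])  # "I waded in clear water." -- the best dream
```

Even this toy version reproduces the book’s arc: the muddy water opens, and wading in clear water closes.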
Other submissions recontextualize tweets. Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a realtime global diary of daily activities. The “it’s hour and I am” pattern tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s original with a word used in a similar context on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.
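Moniker’s template is essentially a regular expression over the tweet stream. A sketch using Python’s re module, with sample tweets taken from the essay rather than from a live feed (the pattern itself is my reconstruction of the template, not Moniker’s code):

```python
import re

# "it's + hour + : + minute + am/pm + and + ..." as a regex; the am/pm
# part is optional, since some matches ("It's 12:20 and ...") omit it.
PATTERN = re.compile(
    r"\bit'?s\s+(\d{1,2}):(\d{2})\s*([ap]m)?\s+and\s+(.*)", re.IGNORECASE)

tweets = [
    "It's 12:20 and I need a drink",
    "It's 1:00 pm and I have not moved from my bed",
    "totally unrelated tweet",
]

# Keep only the confession that follows "and", as the diary entries do.
diary = [m.group(4) for t in tweets if (m := PATTERN.search(t))]
print(diary)  # ['I need a drink', 'I have not moved from my bed']
```

The artistry lies entirely in the curation: the regex is mechanical, but the stream of matched confessions reads as a collective diary.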
Which brings us back to the assumptions that ground our judgments of generated texts. Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.
The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give and take between man and machine head on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm where for every ten sentences the algorithm drafts, he only commits the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is to commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer very important insights on the overall future of AI.
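Kazemi’s “commit one in ten” dynamic reduces to a simple loop. In this sketch a seeded random choice stands in for both the drafting model and the human’s taste, and the lexicon is invented, so nothing here reflects his actual code; it only shows the division of labor he describes:

```python
import random

LEXICON = ["the", "harbor", "was", "quiet", "she", "remembered", "light"]

def draft_sentences(rng, n=10):
    """The 'algorithm' drafts a batch of ten candidate sentences."""
    return [" ".join(rng.choice(LEXICON) for _ in range(6)).capitalize() + "."
            for _ in range(n)]

def co_author(n_batches=5, seed=1):
    rng = random.Random(seed)
    novel = []
    for _ in range(n_batches):
        batch = draft_sentences(rng)     # the algorithm wrote every word...
        novel.append(rng.choice(batch))  # ...but the "human" kept only one
    return novel

book = co_author()
print(len(book))  # 5: one committed sentence per batch of ten drafts
```

Replace the second `rng.choice` with an actual human judgment and Kazemi’s question returns in full force: the algorithm wrote every word, yet the human dictated the form.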
Percy Shelley’s Prometheus Unbound defies easy analysis. Shelley composed his verse drama, at least in part, to illustrate his father-in-law William Godwin’s radical social philosophy. Godwin himself, however, could not finish the poem, as he recorded in his journal. Shelley, in an oblique reference to Godwin and the Godwinian school, describes in his Preface to the poem the “great writers of our own age” as “forerunners of some unimagined change in our social condition or the opinions which cement it. The cloud of mind is discharging its collective lightning, and equilibrium between institutions and opinions is now restoring, or is about to be restored.”
Shelley here echoes Godwin, who viewed inequities of power and the corrupt institutional arrangements of his day as errors to be corrected by an objective Reason in the fullness of time. Truth will specifically emerge through “the clash of mind with mind,” in Godwin’s early and agonistic version of the bourgeois liberal public sphere ideal-type. The work of enlightenment is nonetheless a matter of “private judgment” for Godwin who in this way maintains the form, if not the content, of his early Calvinist formation. It was for effecting enlightenment and converting the reader’s private judgment that Godwin turned to novel writing with his Caleb Williams. Yet, as critics such as Pamela Clemit argue, Godwin eschewed didacticism in favor of formal and thematic ambiguity, specifically to exercise the judgment of his readers. Shelley certainly pushes the Godwinian form to a visionary extreme in his Prometheus Unbound, declaring didactic poetry “an abhorrence,” while seeking to represent cognitive and perceptual processes through analogy with the natural world; a reversal of traditional figurative practice that accounts for the poem’s difficulty.
Yet, while Shelley’s poem offers us an allegorical vision of a utopian future in an arguably Godwinian form, Prometheus Unbound stands in stark contrast with the proto-accelerationist speculations with which Godwin concluded his 1793 Political Justice. This departure is a significant one, as Shelley reworks the myth of Prometheus in a fashion radically distinct from the Prometheanism that would come to dominate the later nineteenth- and twentieth-century political imagination. This “mechanical Prometheanism,” in the words of Arthur Mitzman, represents one ideologically convenient myth of modernization. The exemplars of this view see in the Titan who stole fire from the gods a shorthand for their preferred flavor of progress: technological determinism and domination of the natural world.
This more familiar Prometheus finds one prototype in the early Godwin, while his anarchist successor Joseph-Pierre Proudhon gives the myth its definitive nineteenth-century form. But it was during the twentieth century that Prometheanism of this stripe reached its zenith, exemplified in the various futurisms and productivisms that shaped modern capitalism and state socialism alike. While certain Second International socialists and their productivist heirs in the USSR carried the torch for this mechanical Prometheus, it was Western Marxists (such as Walter Benjamin, Theodor Adorno, and Herbert Marcuse) who first recognized—in the mechanized slaughter of the First World War, a thoroughly technophilic fascism with its assembly line Judeocide, and the United States’ atomic atrocities—the endpoint of Progress conceived along these lines. These same thinkers shaped the intellectual formation of the sixties-era New Left, who in rejecting this radioactive strain of Prometheanism necessarily rejected capitalist developmentalism and its nominally “communist” doppelgänger. As the twentieth century waned, so too did the taste for this techno-scientific drive to mastery, as leftists were forced to reckon with the ecological costs of industrial modernization. Now, as planetary civilization and the planet itself face imminent ecological collapse, techno-utopianism is making a comeback, from the cyber-libertarian solutionists of Silicon Valley to the ostensibly left accelerationists, who seek to revive Prometheus without ever asking which Prometheus they want to revive.
I argue—in this post and the several that follow—that we can discern in the Shelleys, Percy and Mary, an early articulation of an alternative Prometheanism, which Karl Marx develops, despite his undeserved reputation for machine worship.
There are multiple versions of the Prometheus myth from antiquity, but it is Plato’s Protagoras that most definitively identifies the Titan with techne—a term that for the Ancient Greeks denoted craft, applied knowledge, and the mechanical arts—in an expansive sense. In Plato’s version of the myth, Prometheus assigned his brother Epimetheus the task of distributing “proper qualities” among mortal creatures. And so Epimetheus worked according to an implicit principle of harmony with each species’ survival in mind, hence “he gave strength without swiftness, while he equipped the weaker with swiftness; some he armed, and others he left unarmed.” The unwise Epimetheus ran out of qualities to confer when he arrived at humankind, which is why Prometheus found the human being “naked and shoeless, without bed nor arms of defence.” In order to fill these gaps, Prometheus “carried off Hephaestus’s art of working by fire, and also the art of Athene, and gave them to man” (Protagoras, 320c-328d). Plato, in the guise of Protagoras, implicitly defines human nature as the absence of instinct, while the human capacity for survival consists in the extra-somatic capacity to alter ourselves and our environments. Plato nonetheless anchors this myth of anthropocentric exceptionalism in Epimetheus’s mistake: a blunder that looks forward to James Whale’s loose film adaptation of Frankenstein, in which Fritz, Frankenstein’s foolish assistant, snatches a “criminal” brain after dropping the normal specimen he was tasked with procuring for his employer’s science project. The bad brain leads to the creature’s murderous antics.
This Prometheus illustrates Hans Blumenberg’s theory of myth as a functional response to the “absolutism of reality.” Myth, according to Blumenberg, originally offered finite human beings symbolic orientation amid the chaotic contingencies of living. Self-declared moderns transformed this myth in reviving it during the enlightenment period. What formerly oriented the pre-modern human community to the uncertain conditions of its own collective life was reconfigured to provide a new rationalism, a new science, and a new political economy with a raison d’être: from symbolic to actual mastery over life and nature.
Many modern readers nonetheless view Prometheus Bound—traditionally ascribed to Aeschylus, despite some skepticism on the part of classicists regarding this attribution—as the definitive rendition of the myth, despite some telling differences from Plato’s vision of Prometheus as homo faber. The drama consists in a series of exchanges between the Titan—chained to the rock where a bird gnaws on his self-regenerating liver in punishment for the grandest of larcenies—and various allegorical figures, including Might and Force, the henchmen of Zeus. Prometheus’s theft of fire, in violation of a tyrannical Zeus’s prohibition, is just one among many instances of the Titan’s intervention on behalf of mortals in Aeschylus’s drama. For instance, in recounting Zeus’s intention to destroy human beings, Prometheus recalls his intervention and his motivation for intervening: “I saved those death-bound creatures [because] I pitied mortals.”
The Promethean gifts of fire and the useful arts are similarly justified as enabling human beings to live “a life of purpose.” Prometheus seemingly stands up for justice and mercy. In having Prometheus attempt to redistribute powers monopolized by Zeus to finite and semi-bestial humankind, the play suggests a more democratic ethos. Prometheus recalls how he initially sided with his fellow Titans against Zeus and his Olympian upstarts as they struggled for dominance. Yet Prometheus, using the foresight implied in his name, switched sides once he realized the Titans would lose. The Titan then worked for the victory of the insurgent gods in their cosmic coup, through the use of his “superior guile” or cunning. Prometheus’s ability to see into the future fails him, however, in the case of Zeus. In spite of its modern afterlives, Aeschylus’s version of the myth complicates one standard enlightenment-era interpretation of Prometheus as an embodiment of enlightenment reason and revolutionary justice. As Corey Robin argues, “Prometheus made a mistake: not in giving fire (and much else) to humanity, but in hitching his wagon to such an unpromising star as Zeus. Prometheus’s growing contempt for Zeus and his followers is not that of a revolutionary against a tyrant; it reflects instead his old-regime hauteur, his contempt for the artless and the arriviste.”
Whether we interpret Aeschylus’s play in (anachronistically) progressive or conservative terms, the attentive reader will note that this text is concerned with questions of justice, power, and specifically political conflict.
In writing Prometheus Unbound, Shelley did not aim to provide the missing sequel to Aeschylus’s play, especially since that sequel dramatized “the reconciliation of Jupiter with his victim” as “the price of the disclosure of the danger threatened to his empire by the consummation of his marriage with Thetis.” As Shelley makes clear, “I was averse from a catastrophe so feeble as that of reconciling the Champion with the Oppressor of Mankind.” Shelley specifically invokes Milton’s Satan as the closest analog to his Prometheus, or rather Milton’s Satan as refracted through William Blake’s (and William Godwin’s) powerful misreading of Lucifer as a righteous rebel struggling to overthrow an oppressive cosmic order and its tyrannical God. Mary Shelley also invokes the Titan in her own “Modern Prometheus,” written with some input from her husband a few years prior to the publication of Prometheus Unbound. Though readers usually align Victor Frankenstein with this “Modern Prometheus,” there are two Prometheanisms at work in a novel that should be read in tandem with Shelley’s verse drama (as I will show).
More than a visionary political allegory, Shelley sketches in his Prometheus “the type of the highest perfection of moral and intellectual nature impelled by the purest and the truest motives to the best and noblest ends.” As Earl Wasserman argues in an early and influential study of the poem, Prometheus is arguably the only character in the verse drama, which personifies the Titan’s mental processes in the form of various gods and spirits. Wasserman makes one exception to this drama of personification and projection: Demogorgon. Demogorgon—a figure that encompasses the force of necessity in addition to the power of the people, and the revolutionary masses in particular—destroys Zeus.
But, if Prometheus is for Shelley an ideal-type for human perfectibility, how should we read the mental processes allegorized in Shelley’s verse drama? One answer to this question is that Shelley depicts in his Prometheus a new collective human subject. This subject’s previously unrealized capacities are unleashed through an unbinding that includes the transformation of social relations, especially those social relations Shelley observed firsthand in what was then the world’s leading capitalist society, and a reconciliation between human and non-human natures.
Shelley recreates Prometheus, magnifying or—if we take Robin’s reading to heart—transforming Aeschylus’s Titan into a full-blown exemplar of emancipated social relations, with an emphasis on collective freedom and love. Zeus, or the “strife” among human beings engendered by this tyrannical god, initially stymies this vision of egalitarian social relations and unfulfilled human capacities, as Prometheus recounts:
The nations thronged around, and cried aloud
As with one voice, Truth, liberty, and love!
Suddenly fierce confusion fell from heaven
Among them: there was strife, deceit, and fear:
Tyrants rushed in, and did divide the spoil.
Zeus the tyrant is as much a representative of a new and destructive capitalist order as he is a proxy for the ancien régime, initially overthrown by the French Revolution only to be reconstituted in its Thermidorean conclusion. Shelley accordingly depicts Zeus’s reign, and Prometheus’s imprisonment, as marked by ecological catastrophe: Earth, the Titan’s mother, reacts to Zeus’s punishment by retreating from the world in despair, and this retreat precipitates disaster, as “fire and lightning and inundation vexed the plains” while “Blue thistles bloomed in cities; foodless toads/Within voluptuous chambers panting crawled; and plague had fallen.”
Even as he critiques, in poetic form, the dominant model of modernization, then and now, Shelley suggests another Prometheanism, as one of his allegorical spirits sings:
In the void’s loose field
A world for the Spirit of Wisdom to wield;
We will take our plan
From the new world of man,
And our work shall be called the Promethean.
And it is with The Spirit of the Hour’s account of the new dispensation that follows the ruin of “thrones, altars, judgment-seats, and prisons” that Shelley elaborates his version of Prometheanism:
The painted veil, by those who were, called life,
Which mimicked, as with colours idly spread,
All men believed or hoped, is torn aside;
The loathsome mask has fallen, the man remains
Sceptreless, free, uncircumscribed, but man
Equal, unclassed, tribeless, and nationless,
Exempt from awe, worship, degree, the king
Over himself; just, gentle, wise: but man
Passionless? — no, yet free from guilt or pain,
Which were, for his will made or suffered them,
Nor yet exempt, though ruling them like slaves,
From chance, and death, and mutability,
The clogs of that which else might oversoar
The loftiest star of unascended heaven,
Pinnacled dim in the intense inane.
Shelley offers us a powerful image of enlightenment demystification in “the painted veil” that, in falling, reveals man as he is or should be: “sceptreless, free, uncircumscribed...Equal, unclassed, tribeless, and nationless.” Yet the forces unleashed with Prometheus’s unfettering can be identified with techne or praxis only in the broadest sense of making; or, better, with poiesis, if we attend to the passage above, the poem as a whole, and Shelley’s work in general. Giorgio Agamben explains the distinction between the two terms succinctly: “The Greeks … made a clear distinction between poiesis (poiein, ‘to pro-duce’ in the sense of bringing into being) and praxis (prattein, ‘to do’ in the sense of acting). Central to praxis was the idea of the will that finds its immediate expression in an act, while, by contrast, central to poiesis was the experience of pro-duction into presence, the fact that something passed from nonbeing to being, from concealment into the full light of the work.”
The loathsome mask is, significantly, a bad imitation, “with colours idly spread,” of life and its potentials. The Spirit of the Hour insists on the persistence of passion in the new Promethean order she describes in her song. Shelley in this way distinguishes his version of an emancipated social order from the rationalist utopias of the earlier Godwin and his enlightenment fellow travelers, as he details in his contemporaneous Defence of Poetry. Shelley’s most enduring critical work is, as its title suggests, a defense of poets and the poetic vocation against “reasoners and mechanists” whose sole criterion of value is “utility,” in the Benthamite sense. Rather than the typically romantic—and organicist—diatribe against either incipient modernity or quantification, Shelley’s critique of mechanical science and the utilitarian calculus is notable for its focus on nascent capitalist social relations and the role of what was even then a recognizably modern techno-scientific rhetoric. As he writes, “whilst the mechanist abridges and the political economist combines labour, let them beware that their speculations” do not “exasperate the extremes of luxury and want.”
Shelley’s critique animates the distinction his Spirit of the Hour draws between “altars, prisons, their guilty human product” and “chance, death, mutability.” Shelley disentangles the realm of necessity from reified, hence naturalized, modes of human domination and exploitation. Demogorgon, who fuses the new popular power embodied in the French revolutionary-era crowd with the force of necessity, represents, for William Keach, “a much more inclusive conception of human agency released from its own self-imposed bondage and capable now in the terms of Shelley’s utopian fiction of establishing unimagined relations to necessity and chance.” The end of “Heaven’s despotism,” in the words of Demogorgon, entails a new relationship with necessity and the natural world.
With two great books looking over my shoulder—Raymond Williams’s Keywords (1976) and Sianne Ngai’s Our Aesthetic Categories (2012)—I had the notion of a series of essays on mental health keywords: terms drawn, usually, from psychiatric medicine, but circulating freely in pop culture and everyday idiom, where they carry connotations well beyond the clinical. There are perhaps more of these words than you would expect, and the lexicon is diversifying every day.
This post could have been about the term "OCD," which is used in colloquial speech either accusatorily, to criticize someone's unreasonable exactitude about trivialities (he's so OCD about grammar), or apologetically, to excuse an inability to let something go (sorry I took so long to set the table—I'm kind of OCD), or less often boastfully, to suggest that the speaker is more committed to an orderly life than her interlocutor (I'm OCD about my finances). (Note that in each case OCD becomes something one is rather than something one has—grammatically nonsensical, if we were to unpack the acronym, but perhaps picking up on an unavowed feature of psychiatric discourse.) These uses of "OCD," of course, bear little or no resemblance to the actual medical condition, but they do at least reflect the fact that Obsessive-Compulsive Disorder is classified as an anxiety disorder. Indeed, one of the primary features of OCD, as articulated by the National Institute of Mental Health, is that sufferers "don't get pleasure when performing the [compulsive] behaviors or rituals, but get brief relief from the anxiety the thoughts cause"; the affective experience of being trapped, unfree to do otherwise, seems to be part of OCD's symptomatology. Similarly, it's understood in the idiomatic "OCD" that a behavior becomes compulsive when it's neither plausibly motivated by pleasure nor necessary for one's thriving: the speaker above didn't spend five minutes adjusting the axes of the knife and fork because she enjoyed it, but because not doing it would feel bad.
It's odd, then, to turn from "OCD" to "obsessed"—another everyday idiom that draws upon clinical language, but that in dropping the "compulsive" element seems to completely reverse its affective charge. For "obsessed," as we find it in our Twitter feeds and Tumblr pages, is a positive, almost elated word; it describes a kind of infatuation with an object or an oeuvre: I am obsessed with platform boots, with health care policy, with the Coen brothers. But if it were simply a matter of intense liking for an object, we have other words for that; what is the difference that "obsessed" makes? A few propositions:
1) "Obsession" implies research. "I'm obsessed" is, in a sense, a socially acceptable excuse for pure intellectual curiosity of a sort that is rarely indulged in either school or adult employment. One gets the sense that such curiosity feels mysterious to most people, even a shade pathological, and that labelling themselves "obsessed" is a way to make sense of the feeling. Indeed, even in its more consumerist iterations, obsession seems to need explanation, to emerge suddenly and mysteriously in a way that demands investigation in its own right. Hence articles in which "scientists" explain "why we're obsessed" with zombie movies, pumpkin spice flavoring, etc. (The "we" here is obviously quite socially circumscribed, but the articles usually don't acknowledge it—we're meant to take this "we" as more or less coextensive with humanity.)
Not all research falls under the "obsession" rubric; a fascination with neuroscientific experiments or space exploration, for instance, would rarely be classified as an "obsession," coming instead under the category of nerdiness. (This has its own chic, of course, but it's not as universally accessible as obsession.) Rather, one is obsessed with a historical figure, a trial (as in Serial, a phenomenon that generated lots of obsessional discourse), an unsolved mystery, even simply a period ("I'm obsessed with the Edwardian era"). Obsession is typically humanistic, and more specifically forensic: it attempts to excavate a past event. Although it frames itself as a quirky impulse, then, obsession of this sort does nonetheless produce work—a podcast, an article, "content" in its most amorphous sense—and the activities it motivates certainly look like labor: compiling, interviewing, researching, writing. One wonders if "obsession" is something like a calling for the age of precarity: a passion overwhelming enough to inspire self-motivated work, but fleeting enough to allow for frequent job changes.
2) "Obsession" and consumption are intimately related. Fashion magazines list their "obsessions" of the moment, and one (InStyle) even has a recurring feature called "We're Obsessed!" (One precursor of this phenomenon might have been Oprah's "favorite things.") Pinterest, that great systematizer of consumer taste, not only traffics in the language of obsession; it can itself be an object of obsession ("How Obsessed With Pinterest Are You?", asks one Buzzfeed quiz). The products with which one can be obsessed are legion, but they tend to fall within a middle-class, mildly aspirational bandwidth: being obsessed with, say, Chanel coats is acceptable only for celebrities, whom it humanizes, and it's similarly hard to imagine being obsessed with, say, Target's house clothing brand Xhilaration. But one might easily be obsessed with, say, the designer Adam Lippes's limited-time collaboration with Target, which hits a sweet spot between exclusivity and accessibility: most shoppers can afford it, but not everyone will know where to look for it, and it disappears within a month or two. (A quick sidebar here to note that one might define “basic,” that aesthetic pejorative, as being obsessed with products and styles so ubiquitous that they merit mild liking at best: pumpkin spice lattes and Ugg boots and, most notoriously, fall itself.)
How does this consumer orientation jibe with the association of obsession with research—seemingly a purely intellectual activity? It's perhaps trite to observe that research is acquisitive; even when she doesn't seek to own the objects in question, the researcher wants to collect them, to have at her fingertips a kind of information trove. (I myself assembled these theses on obsession with the help of an app called Pearltrees, a kind of Pinterest for academics.) What's more, though, obsessional discourse indicates the degree to which consumption is now "powered by" research (to use a favorite information-technology idiom): the same search tools we use to do our jobs, if our jobs involve moving information around, are used at least as often to find new restaurants, new gadgets, or new pairs of shoes; moreover, the latter motivation is often the animating one behind new developments in search technology. It's perhaps fitting that when obsession does enter the sphere of production, then, it names production that frames itself as unconventional, uniquely personal, "outsider"; see, for instance, the company Casper, which advertises its "obsessively engineered mattresses" (primarily on podcasts, which, as mentioned above, are also often examples of obsessional work). Such products address themselves directly to the consumer, who is imagined to be sick and tired of the corporate norm (Big Mattress, for example, and just look here if you think I'm kidding).
It might seem essential to obsession of this sort that it needs an object: one is obsessed with, never simply obsessed. Or is one?
There may be an implied object to these products—fitness, or a sexual partner, or oneself—but the word seems to verge here on describing a personality trait, a kind of intensity ("intense," though not a clinical term, is another interesting psychological keyword—"she's intense," "it was intense") combined with a magnetism that makes one the object of obsession. Here we touch upon what we might call the paradox of romantic obsession, especially as it relates to consumer culture: for a woman to be obsessed with a man is at best embarrassing and at worst terrifying; obsession itself is something a woman is supposed to do in the presence of her female friends, not around men, with whom it would damage her carefully cultivated cool and laid-back aura; but in order to make herself potentially obsessable—to have lips, legs, hair that can inspire obsession in a romantic partner—a woman is more or less required to obsess over her own body in a mode at once critical and oddly erotically charged. All of this suggests that the capacity to be obsessed can itself be a commodity, both on the romantic "market" and, perhaps, on the labor market. (Sociologists, anthropologists, behavioral economists, I put it to you: does "obsessed" ever appear in the self-description of young folks looking for jobs?)
3) "Obsession" is collective. This is true, first, on the level of tastemakers—the aforementioned fashion magazines; news and culture sites like Slate, Salon, Vox, etc.; music and movie reviewers like Entertainment Weekly—for whom the "we" in "we're obsessed" is editorial and frequently evokes the workplace setting ("these days, the office is obsessed with ..."). This smallish social group, in turn, extends itself out to the reader/viewer/consumer, who by adopting this obsession as her own gains access to a community. (The trajectory isn't always top-down, of course; sometimes an ordinary individual discovers an obsession that then pulls others in; this is perhaps the primary distinction between obsessional research and obsessional consumption.)
It's in this feature that colloquial obsession differs most dramatically from its clinical counterpart. Obsessive-compulsive disorder is often isolating: it traps its victims in thought loops and forces them to perform elaborate rituals that make social interaction increasingly difficult. The obsessional discourse I'm talking about here, by contrast, requires collective buy-in, partly so that it feels socially acceptable; to be obsessed with an entirely uninteresting person who lived in the recent past, for instance, may have a certain quirky This American Life-type charm, but also feels dangerously close to mental illness. But the collective also matters to obsession because it enables crowdsourcing, a way to fulfill obsession's impulse to gather information. Indeed, like the idea of crowdsourcing, obsessional discourse seems to point toward the experience not of using but of being a search engine, of having a kind of neural "alert" out for information on certain subjects, of tagging incoming data according to one's needs, of privileging ideas and information that have passed through the hands of as many other people as possible.
If all this sounds dystopian, it's not meant to be; rather, I'm suggesting that obsessional discourse points toward the affective experience of a new way of imagining one's own cognitive processes. And that's what this project attempts to clarify: the subtle, day-to-day evolution of our metaphors of mind, and the corresponding slow changes in cognition itself. In subsequent posts, I’ll draw your attention to a few more ways that we’ve lately been understanding the brain and behavior—not in the responsibly peer-reviewed context of neuroscience or psychology journals, but “on the ground,” in our spontaneously generated accounts of why we act the way we do. Whether they will cohere into a unified model of contemporary cognition, or fragment and disperse our consciously held theories of mind, remains to be seen.
Like a novice third baseman, I can feel the errors piling up around me. I'll make a few stabs at them here, remembering that error isn't orderly. Quite the opposite. A good thing, too!
Crafting a language for ecology in a post-sustainability context, I focus on error. If disruption and change are ecological principles, perhaps error represents a basic truth of Nature. In the atomic rain with Lucretius, waiting for the deviating clinamen, I seek ways to conceptualize ecological change.
Error wears many faces. Philosophical error, legal error, errors in engineering, in grammar, logic, ethics, mathematics, baseball. To err is to wander or deviate, and from that unplanned turn possibilities appear.
Unpacking the depths of my interest in error might require delving into the Little League of my childhood subconscious, but in scholarship my current error-fixation begins with early modern oceanic navigation. I've written about error in cartographic context:
The Age of Discovery was an Age of Error.
In a navigational sense, error is arriving unexpectedly at a place unlike the one you were planning to reach. It causes you to reach Cuba when you're sailing for China or wreck on the Scilly Islands when sailing for Plymouth. These kinds of deviation dominated early modern maritime travel. Global and oceanic errors piled up as early modern sailors reached unknown seas. Error was every voyage's shipmate.
Entangled with this mathematical or geophysical sense of error, which motivated progressive technical fixes from the Mercator projection to John Harrison's maritime chronometers, theological error invokes Original Sin, the deviance of human beings from divine law. To err is human, as the saying goes, but not only in a harmless way.
Being born into error requires humans to undertake endless labors of ineffective self-correction, an imperative that gave rise to such searching programs as the Spiritual Exercises of Ignatius of Loyola, the inescapable predestinatory labyrinths of John Calvin (and his Anglophone heirs Herman Melville and Thomas Pynchon), and the brutal enjambment with which John Milton's Christian epic disposes of classical myth:
...thus they relate,
Erring. (Paradise Lost, 1.746-7)
The poet insists that all who told stories before him spoke in error. He knows that he errs too, and that the loss of paradise has never yet stopped erring. After some turns, you can't find your way back to the former path.
On the third hand — how many hands is that? error! — errancy sometimes turns out all for the best. In the literary world of romance, sudden turns become fortunate coincidences, at least as they are revealed over the long voyage. In my favorite genre-joke, Northrop Frye defines classical romance through its use of error: "In Greek romance...the normal means of transportation is by shipwreck" (The Secular Scripture, 4).
In Spenser's Faerie Queene, Errour is a monster, half "like a serpent horribly displaide" (1.1.14) and the other half womanly. She frustrates interpretation and for a time immobilizes our knight:
God helpe the man so wrapt in Errours endlesse traine (1.1.18)
Trapped by error and in error, the knight needs faith to set him free. This monster is romance error by definition — the opening enemy in Elizabethan England's greatest verse romance — but where is she taking our knight and our poem?
So what is error? Navigational deviation, original sin, misunderstood cause, romance circuity? Is the narratability of error, its essential work in making stories, related to its corrupting theological presence? Can we err without catastrophe?
Any attempt to solve such problems courts — yes, you guessed it — further errors. But despite the risk of adding one more turn to error's many-forked idea-tree, I'll propose ecology as a cognate language. Linking error and ecology helps to understand the centrality of disruptive social and ecological change in early modern culture. It may also help untangle some eco-knots of our own era of catastrophic change.
Error is ecological because ecological systems include movement and difference in their concepts of unity and change over time.
Ecology is errant because, as "new" or dynamic ecologists have argued since the 1990s, there is no permanent stability in the natural world.
Error, not stasis, typifies natural order.
Human mechanisms for navigating error do not involve correction so much as learning to accommodate change.
I wonder how replacing the over-saturated word "Nature" with "Error" might change ecological thinking. In general, I agree with Tim Morton, Donna Haraway, Bruno Latour, and others who think that any "Nature" separate from culture or the human is a problem, not a solution. I mostly agree with the goal of an "ecology without nature." But I also wonder how we might renovate or reconfigure Nature, both by including humans within it and by considering dynamic change and disruption as essential rather than accidental. What if Nature and Error are not opposites but mutually entangled?
Living in Nature requires — and sometimes rewards — errancy.
Living in Error is Natural.
Nature loves to hide, says Heraclitus. Perhaps Error hides also?
Put more simply: Nature errs. What might follow from this heretical ecological principle?