Why Bother? by Jonathan Franzen

“The Harper’s Essay” [“Perchance to Dream”] is retitled “Why Bother?” in Jonathan Franzen’s collection of essays “How to Be Alone”.

Why Bother?

by Jonathan Franzen

(The Harper’s Essay)

My despair about the American novel began in the winter of 1991, when I fled to Yaddo, the artists’ colony in upstate New York, to write the last two chapters of my second book. My wife and I had recently separated, and I was leading a life of self-enforced solitude in New York City, working long days in a small white room, packing up ten years’ worth of communal property, and taking nighttime walks on avenues where Russian, Hindi, Korean, and Spanish were spoken in equal measure. Even deep in my Queens neighborhood, however, news could reach me through my TV set and my Times subscription. The country was preparing for war ecstatically, with rhetoric supplied by George Bush: “Vital issues of principle are at stake.” In Bush’s eighty-nine-percent approval rating, as in the near-total absence of public skepticism about the war, the United States seemed to me hopelessly unmoored from reality — dreaming of glory in the massacre of faceless Iraqis, dreaming of infinite oil for hour-long commutes, dreaming of exemption from the rules of history. And so I, too, was dreaming of escape. I wanted to hide from America. But when I got to Yaddo and realized that it was no haven — the Times came there daily, and my fellow colonists kept talking about Patriot missiles and yellow ribbons — I began to think that what I really needed was a monastery.

Then one afternoon, in Yaddo’s little library, I picked up and read Paula Fox’s short novel Desperate Characters. “She was going to get away with everything!” is the hope that seizes the novel’s main character, Sophie Bentwood, a childless Brooklynite who’s unhappily married to a conservative lawyer. Sophie used to translate French novels; now she’s so depressed that she can hardly even read them. Against the advice of her husband, Otto, she has given milk to a homeless cat, and the cat has repaid the kindness by biting her hand. Sophie immediately feels “vitally wounded”—she’s been bitten for “no reason” just as Josef K. is arrested for “no reason” in The Trial—but when the swelling in her hand subsides she becomes giddy with the hope of being spared rabies shots.

The “everything” Sophie wants to get away with, however, is more than her liberal self-indulgence with the cat. She wants to get away with reading Goncourt novels and eating omelettes aux fines herbes on a street where derelicts lie sprawled in their own vomit and in a country that’s fighting a dirty war in Vietnam. She wants to be spared the pain of confronting a future beyond her life with Otto. She wants to keep dreaming. But the novel’s logic won’t let her. She’s compelled, instead, to this equation of the personal and the social:

“God, if I am rabid, I am equal to what is outside,” she said out loud, and felt an extraordinary relief as though, at last, she’d discovered what it was that could create a balance between the quiet, rather vacant progression of the days she spent in this house, and those portents that lit up the dark at the edge of her own existence.

Desperate Characters, which was first published in 1970, ends with an act of prophetic violence. Breaking under the strain of his collapsing marriage, Otto Bentwood grabs a bottle of ink from Sophie’s escritoire and smashes it against their bedroom wall. The ink in which his law books and Sophie’s translations have been printed now forms an unreadable blot. The black lines on the wall are both a mark of doom and the harbinger of an extraordinary relief, the end to a fevered isolation.

With its equation of a crumbling marriage with a crumbling social order, Desperate Characters spoke directly to the ambiguities that I was experiencing that January. Was it a great thing or a horrible thing that my marriage was coming apart? And did the distress I was feeling derive from some internal sickness of the soul, or was it imposed on me by the sickness of society? That someone besides me had suffered from these ambiguities and had seen light on their far side — that Fox’s book had been published and preserved; that I could find company and consolation and hope in an object pulled almost at random from a bookshelf — felt akin to an instance of religious grace.

Yet even while I was being saved as a reader by Desperate Characters I was succumbing, as a novelist, to despair about the possibility of connecting the personal and the social. The reader who happens on Desperate Characters today will be as struck by the foreignness of the Bentwoods’ world as by its familiarity. A quarter-century has only broadened and confirmed the sense of cultural crisis that Fox was registering. But what now feels like the locus of that crisis — the banal ascendancy of television, the electronic fragmentation of public discourse — is nowhere to be seen in the novel. Communication for the Bentwoods meant books, a telephone, and letters. Portents didn’t stream uninterruptedly through a cable converter or a modem; they were only dimly glimpsed, on the margins of existence. An ink bottle, which now seems impossibly quaint, was still thinkable as a symbol in 1970.

In a winter when every house in the nation was haunted by the ghostly telepresences of Peter Arnett in Baghdad and Tom Brokaw in Saudi Arabia — a winter when the inhabitants of those houses seemed less like individuals than a collective algorithm for the conversion of media jingoism into an eighty-nine-percent approval rating — I was tempted to think that if a contemporary Otto Bentwood were breaking down, he would kick in the screen of his bedroom TV. But this would have missed the point. Otto Bentwood, if he existed in the nineties, would not break down, because the world would no longer even bear on him. As an unashamed elitist, an avatar of the printed word, and a genuinely solitary man, he belongs to a species so endangered as to be all but irrelevant in an age of electronic democracy. For centuries, ink in the form of printed novels has fixed discrete, subjective individuals within significant narratives. What Sophie and Otto were glimpsing, in the vatic black mess on their bedroom wall, was the disintegration of the very notion of a literary character. Small wonder they were desperate. It was still the sixties, and they had no idea what had hit them.

There was a siege going on: it had been going on for a long time, but the besieged themselves were the last to take it seriously.

— from Desperate Characters

WHEN I GOT OUT of college, in 1981, I hadn’t heard the news about the social novel’s death. I didn’t know that Philip Roth had long ago performed the autopsy, describing “American reality” as a thing that “stupefies … sickens … infuriates, and finally … is even a kind of embarrassment to one’s own meager imagination. The actuality is continually outdoing our talents …” I was in love with literature and with a woman to whom I’d been attracted in part because she was a brilliant reader. I had lots of models for the kind of uncompromising novel I wanted to write. I even had a model for an uncompromising novel that had found a big audience: Catch-22. Joseph Heller had figured out a way of outdoing the actuality, employing the illogic of modern warfare as a metaphor for the more general denaturing of American reality. His book had seeped into the national imagination so thoroughly that my Webster’s Ninth Collegiate gave no fewer than five shades of meaning for the title. That no challenging novel since Catch-22 had affected the culture anywhere near as deeply, just as no issue since the Vietnam War had galvanized so many alienated young Americans, was easily overlooked. In college my head had been turned by Marxism, and I believed that “monopoly capitalism” (as we called it) abounded with “negative moments” (as we called them) that a novelist could trick Americans into confronting if only he could package his subversive bombs in a sufficiently seductive narrative.

I began my first book as a twenty-two-year-old dreaming of changing the world. I finished it six years older. The one tiny world-historical hope I still clung to was to appear on KMOX Radio, “the Voice of St. Louis,” whose long, thoughtful author interviews I’d grown up listening to in my mother’s kitchen. My novel, The Twenty-Seventh City, was about the innocence of a Midwestern city — about the poignancy of St. Louis’s municipal ambitions in an age of apathy and distraction — and I looked forward to forty-five minutes with one of KMOX’s afternoon talk-show hosts, whom I imagined teasing out of me the themes that I’d left latent in the book itself. To the angry callers demanding to know why I hated St. Louis I would explain, in the brave voice of someone who had lost his innocence, that what looked to them like hate was in fact tough love. In the listening audience would be my family: my mother, who considered fiction-writing a socially irresponsible career, and my father, who hoped that one day he would pick up Time magazine and find me reviewed in it.

It wasn’t until The Twenty-Seventh City was published, in 1988, that I discovered how innocent I still was. The media’s obsessive interest in my youthfulness surprised me. So did the money. Boosted by the optimism of publishers who imagined that an essentially dark, contrarian entertainment might somehow sell a zillion copies, I made enough to fund the writing of my next book. But the biggest surprise — the true measure of how little I’d heeded my own warning in The Twenty-Seventh City—was the failure of my culturally engaged novel to engage with the culture. I’d intended to provoke; what I got instead was sixty reviews in a vacuum.

My appearance on KMOX was indicative. The announcer was a journeyman with a whiskey sunburn and a heartrending comb-over who clearly hadn’t read past chapter two. Beneath his boom mike he brushed at the novel’s pages as though he hoped to absorb the plot transdermally. He asked me the questions that everybody asked me: How did it feel to get such good reviews? (It felt great, I said.) Was the novel autobiographical? (It was not, I said.) How did it feel to be a local kid returning to St. Louis on a fancy book tour? It felt obscurely disappointing. But I didn’t say this. I’d already realized that the money, the hype, the limo ride to a Vogue shoot weren’t simply fringe benefits. They were the main prize, the consolation for no longer mattering to a culture.

Exactly how much less novels now matter to the American mainstream than they did when Catch-22 was published is impossible to judge. But the ambitious young fiction writer can’t help noting that, in a recent USA Today survey of twenty-four hours in the life of American culture, there were twenty-one references to television, eight to film, seven to popular music, four to radio, and one to fiction (The Bridges of Madison County). Or that magazines like The Saturday Review, which in Joseph Heller’s heyday still vetted novels by the bushel, have entirely disappeared. Or that the Times Book Review nowadays runs as few as two full fiction reviews a week (fifty years ago, the fiction-to-nonfiction ratio was one to one).

The only mainstream American household I know well is the one I grew up in, and I can report that my father, who was not a reader, nevertheless had some acquaintance with James Baldwin and John Cheever, because Time magazine put them on its cover and Time, for my father, was the ultimate cultural authority. In the last decade, the magazine whose red border twice enclosed the face of James Joyce has devoted covers to Scott Turow and Stephen King. These are honorable writers; but no one doubts it was the size of their contracts that won them covers. The dollar is now the yardstick of cultural authority, and an organ like Time, which not long ago aspired to shape the national taste, now serves mainly to reflect it.

The literary America in which I found myself after I published The Twenty-Seventh City bore a strange resemblance to the St. Louis I’d grown up in: a once-great city that had been gutted and drained by white flight and superhighways. Ringing the depressed urban core of serious fiction were prosperous new suburbs of mass entertainments. Much of the inner city’s remaining vitality was concentrated in the black, Hispanic, Asian, gay, and women’s communities that had taken over the structures vacated by fleeing straight white males. MFA programs offered housing and workfare to the underemployed; a few crackpot city-loving artists continued to hole up in old warehouses; and visiting readers could still pay weekend visits to certain well-policed cultural monuments — the temple of Toni Morrison, the orchestra of John Updike, the Faulkner House, the Wharton Museum, and Mark Twain Park.

By the early nineties I was as depressed as the inner city of fiction. My second novel, Strong Motion, was a long, complicated story about a Midwestern family in a world of moral upheaval, and this time, instead of sending my bombs in a Jiffy-Pak mailer of irony and understatement, as I had with The Twenty-Seventh City, I’d come out throwing rhetorical Molotov cocktails. But the result was the same: another report card with A’s and B’s from the reviewers who had replaced the teachers whose approval, when I was younger, I had both craved and taken no satisfaction from; decent money; and the silence of irrelevance. Meanwhile, my wife and I had reunited in Philadelphia. For two years we’d bounced around in three time zones, trying to find a pleasant, inexpensive place in which we didn’t feel like strangers. Finally, after exhaustive deliberation, we’d rented a too-expensive house in yet another depressed city. That we then proceeded to be miserable seemed to confirm beyond all doubt that there was no place in the world for fiction writers.

In Philadelphia I began to make unhelpful calculations, multiplying the number of books I’d read in the previous year by the number of years I might reasonably be expected to live, and perceiving in the three-digit product not so much an intimation of mortality (though the news on that front wasn’t cheering) as a measure of the incompatibility of the slow work of reading and the hyperkinesis of modern life. All of a sudden it seemed as if the friends of mine who used to read no longer even apologized for having stopped. A young acquaintance who had been an English major, when I asked her what she was reading, replied: “You mean linear reading? Like when you read a book from start to finish?”

There’s never been much love lost between literature and the marketplace. The consumer economy loves a product that sells at a premium, wears out quickly or is susceptible to regular improvement, and offers with each improvement some marginal gain in usefulness. To an economy like this, news that stays news is not merely an inferior product; it’s an antithetical product. A classic work of literature is inexpensive, infinitely reusable, and, worst of all, unimprovable.

After the collapse of the Soviet Union, the American political economy had set about consolidating its gains, enlarging its markets, securing its profits, and demoralizing its few remaining critics. In 1993 I saw signs of the consolidation everywhere. I saw it in the swollen minivans and broad-beamed trucks that had replaced the automobile as the suburban vehicle of choice — these Rangers and Land Cruisers and Voyagers that were the true spoils of a war waged to keep American gasoline cheaper than dirt, a war that had played like a thousand-hour infomercial for high technology, a consumer’s war dispensed through commercial television. I saw leaf-blowers replacing rakes. I saw CNN holding hostage the travelers in airport lounges and the shoppers in supermarket checkout lines. I saw the 486 chip replacing the 386 and being replaced in turn by the Pentium so that, despite new economies of scale, the price of entry-level notebook computers never fell below a thousand dollars. I saw Penn State win the Blockbuster Bowl.

Even as I was sanctifying the reading of literature, however, I was becoming so depressed that I could do little after dinner but flop in front of the TV. We didn’t have cable, but I could always find something delicious: Phillies and Padres, Eagles and Bengals, M*A*S*H, Cheers, Homicide. Naturally, the more TV I watched, the worse I felt. If you’re a novelist and even you don’t feel like reading, how can you expect anybody else to read your books? I believed I ought to be reading, as I believed I ought to be writing a third novel. And not just any third novel. It had long been a prejudice of mine that putting a novel’s characters in a dynamic social setting enriched the story that was being told; that the glory of the genre consisted of its spanning of the expanse between private experience and public context. And what more vital context could there be than television’s short-circuiting of that expanse?

But I was paralyzed with the third book. I was torturing the story, stretching it to accommodate ever more of those things-in-the-world that impinge on the enterprise of fiction writing. The work of transparency and beauty and obliqueness that I wanted to write was getting bloated with issues. I’d already worked in contemporary pharmacology and TV and race and prison life and a dozen other vocabularies; how was I going to satirize Internet boosterism and the Dow Jones as well, while leaving room for the complexities of character and locale? Panic grows in the gap between the increasing length of the project and the shrinking time increments of cultural change: How to design a craft that can float on history for as long as it takes to build it? The novelist has more and more to say to readers who have less and less time to read: Where to find the energy to engage with a culture in crisis when the crisis consists in the impossibility of engaging with the culture? These were unhappy days. I began to think that there was something wrong with the whole model of the novel as a form of “cultural engagement.”

In the nineteenth century, when Dickens and Darwin and Disraeli all read one another’s work, the novel was the preeminent medium of social instruction. A new book by Thackeray or William Dean Howells was anticipated with the kind of fever that a late-December film release inspires today.

The big, obvious reason for the decline of the social novel is that modern technologies do a much better job of social instruction. Television, radio, and photographs are vivid, instantaneous media. Print journalism, too, in the wake of In Cold Blood, has become a viable creative alternative to the novel. Because they command large audiences, TV and magazines can afford to gather vast quantities of information quickly. Few serious novelists can pay for a quick trip to Singapore, or for the mass of expert consulting that gives serial TV dramas like E.R. and NYPD Blue their veneer of authenticity. The writer of average talent who wants to report on, say, the plight of illegal aliens would be foolish to choose the novel as a vehicle. Ditto the writer who wants to offend prevailing sensibilities. Portnoy’s Complaint, which even my mother once heard enough about to disapprove of, was probably the last American novel that could have appeared on Bob Dole’s radar as a nightmare of depravity. Today’s Baudelaires are hip-hop artists.

The essence of fiction is solitary work: the work of writing, the work of reading. I’m able to know Sophie Bentwood intimately, and to refer to her as casually as I would to a good friend, because I poured my own feelings of fear and estrangement into my construction of her. If I knew her only through a video of Desperate Characters (Shirley MacLaine made the movie in 1971, as a vehicle for herself), Sophie would remain an Other, divided from me by the screen on which I viewed her, by the surficiality of film, and by MacLaine’s star presence. At most, I might feel I knew MacLaine a little better.

Knowing MacLaine a little better, however, is what the country mainly wants. We live in a tyranny of the literal. The daily unfolding stories of O. J. Simpson, Timothy McVeigh, and Bill Clinton have an intense, iconic presence that relegates to a subordinate shadow-world our own untelevised lives. In order to justify their claim on our attention, the organs of mass culture and information are compelled to offer something “new” on a daily, indeed hourly, basis. Although good novelists don’t deliberately seek out trends, many of them feel a responsibility to pay attention to contemporary issues, and they now confront a culture in which almost all the issues are burned out almost all the time. The writer who wants to tell a story about society that’s true not just in 1996 but in 1997 as well can find herself at a loss for solid cultural referents. What’s topically relevant while she’s planning the novel will almost certainly be passé by the time it’s written, rewritten, published, distributed, and read.

None of this stops cultural commentators — notably Tom Wolfe — from blaming novelists for their retreat from social description. The most striking thing about Wolfe’s 1989 manifesto for the “New Social Novel,” even more than his uncanny ignorance of the many excellent socially engaged novels published between 1960 and 1989, was his failure to explain why his ideal New Social Novelist should not be writing scripts for Hollywood. And so it’s worth saying one more time: Just as the camera drove a stake through the heart of serious portraiture, television has killed the novel of social reportage. Truly committed social novelists may still find cracks in the monolith to sink their pitons into. But they do so with the understanding that they can no longer depend on their material, as Howells and Sinclair and Stowe did, but only on their own sensibilities, and with the expectation that no one will be reading them for news.

This much, at least, was visible to Philip Roth in 1961. Noting that “for a writer of fiction to feel that he does not really live in his own country — as represented by Life or by what he experiences when he steps out the front door — must seem a serious occupational impediment,” he rather plaintively asked: “What will his subject be? His landscape?” In the intervening years, however, the screw has taken another turn. Our obsolescence now goes further than television’s usurpation of the role as news-bringer, and deeper than its displacement of the imagined with the literal. Flannery O’Connor, writing around the time that Roth made his remarks, insisted that the “business of fiction” is “to embody mystery through manners.” Like the poetics that Poe derived from his “Raven,” O’Connor’s formulation particularly flatters her own work, but there’s little question that “mystery” (how human beings avoid or confront the meaning of existence) and “manners” (the nuts and bolts of how human beings behave) have always been primary concerns of fiction writers. What’s frightening for a novelist today is how the technological consumerism that rules our world specifically aims to render both of these concerns moot.

O’Connor’s response to the problem that Roth articulated, to the sense that there is little in the national mediascape that novelists can feel they own, was to insist that the best American fiction has always been regional. This was somewhat awkward, since her hero was the cosmopolitan Henry James. But what she meant was that fiction feeds on specificity, and that the manners of a particular region have always provided especially fertile ground for its practitioners.

Superficially, at least, regionalism is still thriving. In fact it’s fashionable on college campuses nowadays to say that there is no America anymore, there are only Americas; that the only things a black lesbian New Yorker and a Southern Baptist Georgian have in common are the English language and the federal income tax. The likelihood, however, is that both the New Yorker and the Georgian watch Letterman every night, both are struggling to find health insurance, both have jobs that are threatened by the migration of employment overseas, both go to discount superstores to purchase Pocahontas tie-in products for their children, both are being pummeled into cynicism by commercial advertising, both play Lotto, both dream of fifteen minutes of fame, both are taking a serotonin reuptake inhibitor, and both have a guilty crush on Uma Thurman. The world of the present is a world in which the rich lateral dramas of local manners have been replaced by a single vertical drama, the drama of regional specificity succumbing to a commercial generality. The American writer today faces a cultural totalitarianism analogous to the political totalitarianism with which two generations of Eastern bloc writers had to contend. To ignore it is to court nostalgia. To engage with it, however, is to risk writing fiction that makes the same point over and over: technological consumerism is an infernal machine, technological consumerism is an infernal machine …

Equally discouraging is the fate of “manners” in the word’s more common sense. Rudeness, irresponsibility, duplicity, and stupidity are hallmarks of real human interaction: the stuff of conversation, the cause of sleepless nights. But in the world of consumer advertising and consumer purchasing, no evil is moral. The evils consist of high prices, inconvenience, lack of choice, lack of privacy, heartburn, hair loss, slippery roads. This is no surprise, since the only problems worth advertising solutions for are problems treatable through the spending of money. But money cannot solve the problem of bad manners — the chatterer in the darkened movie theater, the patronizing sister-in-law, the selfish sex partner — except by offering refuge in an atomized privacy. And such privacy is exactly what the American Century has tended toward. First there was mass suburbanization, then the perfection of at-home entertainment, and finally the creation of virtual communities whose most striking feature is that interaction within them is entirely optional — terminable the instant the experience ceases to gratify the user.

That all these trends are infantilizing has been widely noted. Less often remarked is the way in which they are changing both our expectations of entertainment (the book must bring something to us, rather than our bringing something to the book) and the very content of that entertainment. The problem for the novelist is not just that the average man or woman spends so little time F2F with his or her fellows; there is, after all, a rich tradition of epistolary novels, and Robinson Crusoe’s condition approximates the solitude of today’s suburban bachelor. The real problem is that the average man or woman’s entire life is increasingly structured to avoid the kinds of conflicts on which fiction, preoccupied with manners, has always thrived.

Here, indeed, we are up against what truly seems like the obsolescence of serious art in general. Imagine that human existence is defined by an Ache: the Ache of our not being, each of us, the center of the universe; of our desires forever outnumbering our means of satisfying them. If we see religion and art as the historically preferred methods of coming to terms with this Ache, then what happens to art when our technological and economic systems and even our commercialized religions become sufficiently sophisticated to make each of us the center of our own universe of choices and gratifications? Fiction’s response to the sting of poor manners, for example, is to render them comic. The reader laughs with the writer, feels less alone with the sting. This is a delicate transaction, and it takes some work. How can it compete with a system — screen your calls; go out by modem; acquire the money to deal exclusively with the privatized world, where workers must be courteous or lose their jobs — that spares you the sting in the first place?

In the long run, the breakdown of communitarianism is likely to have all sorts of nasty consequences. In the short run, however, in this century of amazing prosperity and health, the breakdown takes a heavy toll on the ancient methods of dealing with the Ache. As for the sense of loneliness and pointlessness and loss that social atomization may produce — stuff that can be lumped under O’Connor’s general heading of mystery — it’s already enough to label it a disease. A disease has causes: abnormal brain chemistry, childhood sexual abuse, welfare queens, the patriarchy, social dysfunction. It also has cures: Zoloft, recovered-memory therapy, the Contract with America, multiculturalism, the World Wide Web. A partial cure, or better yet, an endless succession of partial cures, but failing that, even just the consolation of knowing you have a disease — anything is better than mystery. Science attacked religious mystery a long time ago. But it was not until applied science, in the form of technology, changed both the demand for fiction and the social context in which fiction is written that we novelists fully felt its effects.

Even now, even when I carefully locate my despair in the past tense, it’s difficult for me to confess to all these doubts. In publishing circles, confessions of doubt are widely referred to as “whining”—the idea being that cultural complaint is pathetic and self-serving in writers who don’t sell, ungracious in writers who do. For people as protective of their privacy and as fiercely competitive as writers are, mute suffering would seem to be the safest course. However sick with foreboding you feel inside, it’s best to radiate confidence and to hope that it’s infectious. When a writer says publicly that the novel is doomed, it’s a sure bet his new book isn’t going well; in terms of his reputation, it’s like bleeding in shark-infested waters.

Even harder to admit is how depressed I was. As the social stigma of depression dwindles, the aesthetic stigma increases. It’s not just that depression has become fashionable to the point of banality. It’s the sense that we live in a reductively binary culture: you’re either healthy or you’re sick, you either function or you don’t. And if that flattening of the field of possibilities is precisely what’s depressing you, you’re inclined to resist participating in the flattening by calling yourself depressed. You decide that it’s the world that’s sick, and that the resistance of refusing to function in such a world is healthy. You embrace what clinicians call “depressive realism.” It’s what the chorus in Oedipus Rex sings: “Alas, ye generations of men, how mere a shadow do I count your life! Where, where is the mortal who wins more of happiness than just the seeming, and, after the semblance, a falling away?” You are, after all, just protoplasm, and some day you’ll be dead. The invitation to leave your depression behind, whether through medication or therapy or effort of will, seems like an invitation to turn your back on all your dark insights into the corruption and infantilism and self-delusion of the brave new McWorld. And these insights are the sole legacy of the social novelist who desires to represent the world not simply in its detail but in its essence, to shine light on the morally blind eye of the virtual whirlwind, and who believes that human beings deserve better than the future of attractively priced electronic panderings that is even now being conspired for them. Instead of saying I am depressed, you want to say I am right!

But all the available evidence suggests that you have become a person who’s impossible to live with and no fun to talk to. And as you increasingly feel, as a novelist, that you are one of the last remaining repositories of depressive realism and of the radical critique of the therapeutic society that it represents, the burden of news-bringing that is placed on your art becomes overwhelming. You ask yourself, why am I bothering to write these books? I can’t pretend the mainstream will listen to the news I have to bring. I can’t pretend I’m subverting anything, because any reader capable of decoding my subversive messages does not need to hear them (and the contemporary art scene is a constant reminder of how silly things get when artists start preaching to the choir). I can’t stomach any kind of notion that serious fiction is good for us, because I don’t believe that everything that’s wrong with the world has a cure, and even if I did, what business would I, who feel like the sick one, have in offering it? It’s hard to consider literature a medicine, in any case, when reading it serves mainly to deepen your depressing estrangement from the mainstream; sooner or later the therapeutically minded reader will end up fingering reading itself as the sickness. Sophie Bentwood, for instance, has “candidate for Prozac” written all over her. No matter how gorgeous and comic her torments are, and no matter how profoundly human she appears in light of those torments, a reader who loves her can’t help wondering whether perhaps treatment by a mental-health-care provider wouldn’t be the best course all around.

I resist, finally, the notion of literature as a noble higher calling, because elitism doesn’t sit well with my American nature, and because even if my belief in mystery didn’t incline me to distrust feelings of superiority, my belief in manners would make it difficult for me to explain to my brother, who is a fan of Michael Crichton, that the work I’m doing is simply better than Crichton’s. Not even the French poststructuralists, with their philosophically unassailable celebration of the “pleasure of the text,” can help me out here, because I know that no matter how metaphorically rich and linguistically sophisticated Desperate Characters is, what I experienced when I first read it was not some erotically joyous lateral slide of endless associations, but something coherent and deadly pertinent. I know there’s a reason I loved reading and loved writing. But every apology and every defense seems to dissolve in the sugar water of contemporary culture, and before long it becomes difficult indeed to get out of bed in the morning.

Two quick generalizations about novelists: we don’t like to poke too deeply into the question of audience, and we don’t like the social sciences. How awkward, then, that for me the beacon in the murk — the person who inadvertently did the most to get me back on track as a writer — should have been a social scientist who was studying the audience for serious fiction in America.

Shirley Brice Heath is a MacArthur Fellow, a linguistic anthropologist, and a professor of English and linguistics at Stanford; she’s a stylish, twiggy, white-haired lady with no discernible tolerance for small talk. Throughout the eighties, Heath haunted what she calls “enforced transition zones”—places where people are held captive without recourse to television or other comforting pursuits. She rode public transportation in twenty-seven different cities. She lurked in airports (at least before the arrival of CNN). She took her notebook into bookstores and seaside resorts. Whenever she saw people reading or buying “substantive works of fiction” (meaning, roughly, trade-paperback fiction), she asked for a few minutes of their time. She visited summer writers’ conferences and creative-writing programs to grill ephebes. She interviewed novelists. Three years ago she interviewed me, and last summer I had lunch with her in Palo Alto.

To the extent that novelists think about audience at all, we like to imagine a “general audience”—a large, eclectic pool of decently educated people who can be induced, by strong enough reviews or aggressive enough marketing, to treat themselves to a good, serious book. We do our best not to notice that, among adults with similar educations and similarly complicated lives, some read a lot of novels while others read few or none.

Heath has noticed this circumstance, and although she emphasized to me that she has not polled everybody in America, her research effectively demolishes the myth of the general audience. For a person to sustain an interest in literature, she told me, two things have to be in place. First, the habit of reading works of substance must have been “heavily modeled” when he or she was very young. In other words, one or both of the parents must have been reading serious books and must have encouraged the child to do the same. On the East Coast, Heath found a strong element of class in this. Parents in the privileged classes encourage reading out of a sense of what Louis Auchincloss calls “entitlement”: just as the civilized person ought to be able to appreciate caviar and a good Burgundy, she ought to be able to enjoy Henry James. Class matters less in other parts of the country, especially in the Protestant Midwest, where literature is seen as a way to exercise the mind. As Heath put it, “Part of the exercise of being a good person is not using your free time frivolously. You have to be able to account for yourself through the work ethic and through the wise use of your leisure time.” For a century after the Civil War, the Midwest was home to thousands of small-town literary societies in which, Heath found, the wife of a janitor was as likely to be active as the wife of a doctor.

Simply having a parent who reads is not enough, however, to produce a lifelong dedicated reader. According to Heath, young readers also need to find a person with whom they can share their interest. “A child who’s got the habit will start reading under the covers with a flashlight,” she said. “If the parents are smart, they’ll forbid the child to do this, and thereby encourage her. Otherwise she’ll find a peer who also has the habit, and the two of them will keep it a secret between them. Finding a peer can take place as late as college. In high school, especially, there’s a social penalty to be paid for being a reader. Lots of kids who have been lone readers get to college and suddenly discover, ‘Oh my God, there are other people here who read.’”

As Heath unpacked her findings for me, I was remembering the joy with which I’d discovered two friends in junior high with whom I could talk about J. R. R. Tolkien. I was also considering that for me, today, there is nothing sexier than a reader. But then it occurred to me that I didn’t even meet Heath’s first precondition. I told her I didn’t remember either of my parents ever reading a book when I was a child, except aloud to me.

Without missing a beat Heath replied: “Yes, but there’s a second kind of reader. There’s the social isolate — the child who from an early age felt very different from everyone around him. This is very, very difficult to uncover in an interview. People don’t like to admit that they were social isolates as children. What happens is you take that sense of being different into an imaginary world. But that world, then, is a world you can’t share with the people around you — because it’s imaginary. And so the important dialogue in your life is with the authors of the books you read. Though they aren’t present, they become your community.”

Pride compels me, here, to draw a distinction between young fiction readers and young nerds. The classic nerd, who finds a home in facts or technology or numbers, is marked not by a displaced sociability but by an antisociability. Reading does resemble more nerdy pursuits in that it’s a habit that both feeds on a sense of isolation and aggravates it. Simply being a “social isolate” as a child does not, however, doom you to bad breath and poor party skills as an adult. In fact, it can make you hypersocial. It’s just that at some point you’ll begin to feel a gnawing, almost remorseful need to be alone and do some reading — to reconnect to that community.

According to Heath, readers of the social-isolate variety (she also calls them “resistant” readers) are much more likely to become writers than those of the modeled-habit variety. If writing was the medium of communication within the community of childhood, it makes sense that when writers grow up they continue to find writing vital to their sense of connectedness. What’s perceived as the antisocial nature of “substantive” authors, whether it’s James Joyce’s exile or J. D. Salinger’s reclusion, derives in large part from the social isolation that’s necessary for inhabiting an imagined world. Looking me in the eye, Heath said: “You are a socially isolated individual who desperately wants to communicate with a substantive imaginary world.”

I knew she was using the word “you” in its impersonal sense. Nevertheless, I felt as if she were looking straight into my soul. And the exhilaration I felt at her accidental description of me, in unpoetic polysyllables, was my confirmation of that description’s truth. Simply to be recognized for what I was, simply not to be misunderstood: these had revealed themselves, suddenly, as reasons to write.

By the spring of 1994 I was a socially isolated individual whose desperate wish was mainly to make some money. After my wife and I separated for the last time, I took a job teaching undergraduate fiction-writing at a small liberal arts college, and although I spent way too much time on it, I loved the work. I was heartened by the skill and ambition of my students, who hadn’t even been born when Rowan & Martin’s Laugh-In first aired. I was depressed, though, to learn that several of my best writers had vowed never to take a literature class again. One evening a student reported that his contemporary fiction class had been encouraged to spend an entire hour debating whether the novelist Leslie Marmon Silko was a homophobe.

Another evening, when I came to class, three women students were hooting with laughter at the utopian-feminist novel they were being forced to read for an honors seminar in Women and Fiction.

The therapeutic optimism now raging in English literature departments insists that novels be sorted into two boxes: Symptoms of Disease (canonical work from the Dark Ages before 1950) and Medicine for a Happier and Healthier World (the work of women and of people from nonwhite or nonhetero cultures). But the contemporary fiction writers whose work is being put to such optimistic use in the Academy are seldom, themselves, to blame. To the extent that the American novel still has cultural authority — an appeal beyond the Academy, a presence in household conversations — it’s largely the work of women. Knowledgeable booksellers estimate that seventy percent of all fiction is bought by women, and so perhaps it’s no surprise that in recent years so many crossover novels, the good books that find an audience, have been written by women: fictional mothers turning a sober eye on their children in the work of Jane Smiley and Rosellen Brown; fictional daughters listening to their Chinese mothers (Amy Tan) or Chippewa grandmothers (Louise Erdrich); a fictional freedwoman conversing with the spirit of the daughter she killed to save her from slavery (Toni Morrison). The darkness of these novels is not a political darkness, banishable by the enlightenment of contemporary critical theory; it’s the darkness of sorrows that have no easy cure.

The current flourishing of novels by women and cultural minorities shows the chauvinism of judging the vitality of American letters by the fortunes of the traditional social novel. Indeed, it can be argued that the country’s literary culture is healthier for having disconnected from mainstream culture; that a universal “American” culture was little more than an instrument for the perpetuation of a white, male, heterosexual elite, and that its decline is the just desert of an exhausted tradition. (Joseph Heller’s depiction of women in Catch-22, for example, is so embarrassing that I hesitated to recommend the book to my students.) It’s possible that the American experience has become so sprawling and diffracted that no single “social novel,” à la Dickens or Stendhal, can ever hope to mirror it; perhaps ten novels from ten different cultural perspectives are required now.

Unfortunately, there’s also evidence that young writers today feel imprisoned by their ethnic or gender identities — discouraged from speaking across boundaries by a culture in which television has conditioned us to accept only the literal testimony of the Self. And the problem is aggravated when fiction writers take refuge in university creative-writing programs. Any given issue of the typical small literary magazine, edited by MFA candidates aware that the MFA candidates submitting manuscripts need to publish in order to obtain or hold on to teaching jobs, reliably contains variations on three generic short stories: “My Interesting Childhood,” “My Interesting Life in a College Town,” and “My Interesting Year Abroad.” Fiction writers in the Academy do serve the important function of teaching literature for its own sake, and some of them also produce strong work while teaching, but as a reader I miss the days when more novelists lived and worked in big cities. I mourn the retreat into the Self and the decline of the broad-canvas novel for the same reason I mourn the rise of suburbs: I like maximum diversity and contrast packed into a single exciting experience. Even though social reportage is no longer so much a defining function of the novel as an accidental by-product — Shirley Heath’s observations confirm that serious readers aren’t reading for instruction — I still like a novel that’s alive and multivalent like a city.

The value of Heath’s work, and the reason I’m citing her so liberally, is that she has bothered to study empirically what nobody else has, and that she has brought to bear on the problem of reading a vocabulary that is neutral enough to survive in our value-free cultural environment. Readers aren’t “better” or “healthier” or, conversely, “sicker” than nonreaders. We just happen to belong to a rather strange kind of community.

For Heath, a defining feature of “substantive works of fiction” is unpredictability. She arrived at this definition after discovering that most of the hundreds of serious readers she interviewed have had to deal, one way or another, with personal unpredictability. Therapists and ministers who counsel troubled people tend to read the hard stuff. So do people whose lives haven’t followed the course they were expected to: merchant-caste Koreans who don’t become merchants, ghetto kids who go to college, openly gay men from conservative families, and women whose lives have turned out to be radically different from their mothers’. This last group is particularly large. There are, today, millions of American women whose lives do not resemble the lives they might have projected from their mothers’, and all of them, in Heath’s model, are potentially susceptible to substantive fiction.

In her interviews, Heath uncovered a “wide unanimity” among serious readers that literature “‘makes me a better person.’” She hastened to assure me that, rather than straightening them out in a self-help way, “reading serious literature impinges on the embedded circumstances in people’s lives in such a way that they have to deal with them. And, in so dealing, they come to see themselves as deeper and more capable of handling their inability to have a totally predictable life.” Again and again, readers told Heath the same thing: “Reading enables me to maintain a sense of something substantive—my ethical integrity, my intellectual integrity. ‘Substance’ is more than ‘this weighty book.’ Reading that book gives me substance.” This substance, Heath adds, is most often transmitted verbally, and is felt to have permanence. “Which is why,” she said, “computers won’t do it for readers.”

With near-unanimity, Heath’s respondents described substantive works of fiction as, she said, “the only places where there was some civic, public hope of coming to grips with the ethical, philosophical and sociopolitical dimensions of life that were elsewhere treated so simplistically. From Agamemnon forward, for example, we’ve been having to deal with the conflict between loyalty to one’s family and loyalty to the state. And strong works of fiction are what refuse to give easy answers to the conflict, to paint things as black and white, good guys versus bad guys. They’re everything that pop psychology is not.”

“And religions themselves are substantive works of fiction,” I said.

She nodded. “This is precisely what readers are saying: that reading good fiction is like reading a particularly rich section of a religious text. What religion and good fiction have in common is that the answers aren’t there, there isn’t closure. The language of literary works gives forth something different with each reading. But unpredictability doesn’t mean total relativism. Instead it highlights the persistence with which writers keep coming back to fundamental problems. Your family versus your country, your wife versus your girlfriend.”

“Being alive versus having to die,” I said.

“Exactly,” Heath said. “Of course, there is a certain predictability to literature’s unpredictability. It’s the one thing that all substantive works have in common. And that predictability is what readers tell me they hang on to — a sense of having company in this great human enterprise.”

“A friend of mine keeps telling me that reading and writing are ultimately about loneliness. I’m starting to come around.”

“It’s about not being alone, yes,” Heath said, “but it’s also about not hearing that there’s no way out — no point to existence. The point is in the continuity, in the persistence of the great conflicts.”

Flying back from Palo Alto in an enforced transition zone crewed by the employee-owners of TWA, I declined the headphones for The Brady Bunch Movie and a special one-hour segment of E!, but I found myself watching anyway. Without sound, the segment of E! became an exposé of the hydraulics of insincere smiles. It brought me an epiphany of inauthenticity, made me hunger for the unforced emotion of a literature that isn’t trying to sell me anything. I had open on my lap Janet Frame’s novel of a mental hospital, Faces in the Water: uningratiating but strangely pertinent sentences on which my eyes would not stick until, after two and a half hours, the silent screen in front of me finally went blank.

Poor Noeline, who was waiting for Dr. Howell to propose to her although the only words he had ever spoken to her were How are you? Do you know where you are? Do you know why you are here? — phrases which ordinarily would be hard to interpret as evidence of affection. But when you are sick you find yourself in a new field of perception where you make a harvest of interpretations which then provides you with your daily bread, your only food. So that when Dr. Howell finally married the occupational therapist, Noeline was taken to the disturbed ward.

Expecting a novel to bear the weight of our whole disturbed society — to help solve our contemporary problems — seems to me a peculiarly American delusion. To write sentences of such authenticity that refuge can be taken in them: Isn’t this enough? Isn’t it a lot?

As recently as forty years ago, when the publication of Hemingway’s The Old Man and the Sea was a national event, movies and radio were still considered “low” entertainments. In the fifties and sixties, when movies became “film” and demanded to be taken seriously, TV became the new low entertainment. Finally, in the seventies, with the Watergate hearings and All in the Family, television, too, made itself an essential part of cultural literacy. The educated single New Yorker who in 1945 read twenty-five serious novels in a year today has time for maybe five. As the modeled-habit layer of the novel’s audience peels away, what’s left is mainly the hard core of resistant readers, who read because they must.

That hard core is a very small prize to be divided among a very large number of working novelists. To make a sustainable living, a writer must also be on the five-book lists of a whole lot of modeled-habit readers. Every year, in expectation of this jackpot, a handful of good novelists get six- and even seven-figure advances (thus providing ammunition for cheery souls of the “American literature is booming!” variety), and a few of them actually hit the charts. E. Annie Proulx’s The Shipping News has sold nearly a million copies in the last two years; the hardcover literary best-seller of 1994, Cormac McCarthy’s The Crossing, came in at number fifty-one on Publishers Weekly’s annual best-seller list. (Number fifty was Star Trek: All Good Things.)

Anthony Lane, in a pair of recent essays in The New Yorker, has demonstrated that while most of the novels on the contemporary best-seller list are vapid, predictable, and badly written, the best-sellers of fifty years ago were also vapid, predictable, and badly written. Lane’s essays usefully destroy the notion of a golden pre-television age when the American masses had their noses stuck in literary masterworks; he makes it clear that this country’s popular tastes have become no worse in half a century. What has changed is the economics of book publishing. The number-one best-seller of 1955, Marjorie Morningstar, sold a hundred and ninety thousand copies in bookstores. In 1994, in a country less than twice as populous, John Grisham’s The Chamber sold more than three million. Publishing is now a subsidiary of Hollywood, and the blockbuster novel is a mass-marketable commodity, a portable substitute for TV.

The persistence of a market for literary fiction exerts a useful discipline on writers, reminding us of our duty to entertain. But if the Academy is a rock to ambitious novelists, then the nature of the modern American market — its triage of artists into Superstars, Stars, and Nobodies; its clear-eyed recognition that nothing moves a product like a personality — is a hard place indeed. It’s possible, if you have the right temperament, to market yourself successfully with irony, by making fun of marketing. Thus the subject of the young writer Mark Leyner’s fiction is the self-promotion of Mark Leyner, the young writer; he’s been on Letterman three times. But most novelists feel some level of discomfort with marketing the innately private experience of reading by means of a public persona — on book tours, on radio talk shows, on Barnes & Noble shopping bags and coffee mugs. The writer for whom the printed word is paramount is, ipso facto, an untelevisable personality, and it’s instructive to recall how many of our critically esteemed older novelists have chosen, in a country where publicity is otherwise sought like the Grail, to guard their privacy. Salinger, Roth, McCarthy, Don DeLillo, William Gaddis, Anne Tyler, Thomas Pynchon, Cynthia Ozick, and Denis Johnson all give few or no interviews, do little if any teaching or touring, and in some cases decline even to be photographed. Various Heathian dramas of social isolation are no doubt being played out here. But, for some of these writers, reticence is integral to their artistic creed.

In Gaddis’s first novel, The Recognitions (1954), a stand-in for the author cries: “What is it they want from the man that they didn’t get from the work? What do they expect? What is there left when he’s done with his work, what’s any artist but the dregs of his work, the human shambles that follows it around?” Postwar novelists like Gaddis and Pynchon and postwar artists like Robert Frank answered these questions very differently than Norman Mailer and Andy Warhol did. In 1954, before television had even supplanted radio as the regnant medium, Gaddis recognized that no matter how attractively subversive self-promotion may seem in the short run, the artist who’s really serious about resisting a culture of inauthentic mass-marketed image must resist becoming an image himself, even at the price of certain obscurity.

For a long time, trying to follow Gaddis’s example, I took a hard line on letting my work speak for itself. Not that I was exactly bombarded with invitations; but I refused to teach, to review for the Times, to write about writing, to go to parties. To speak extranovelistically in an age of personalities seemed to me a betrayal; it implied a lack of faith in fiction’s adequacy as communication and self-expression and so helped, I believed, to accelerate the public flight from the imagined to the literal. I had a cosmology of silent heroes and gregarious traitors.

Silence, however, is a useful statement only if someone, somewhere, expects your voice to be loud. Silence in the 1990s seemed only to guarantee that I would be alone. And eventually it dawned on me that the despair I felt about the novel was less the result of my obsolescence than of my isolation. Depression presents itself as a realism regarding the rottenness of the world in general and the rottenness of your life in particular. But the realism is merely a mask for depression’s actual essence, which is an overwhelming estrangement from humanity. The more persuaded you are of your unique access to the rottenness, the more afraid you become of engaging with the world; and the less you engage with the world, the more perfidiously happy-faced the rest of humanity seems for continuing to engage with it.

Writers and readers have always been prone to this estrangement. Communion with the virtual community of print requires solitude, after all. But the estrangement becomes much more profound, urgent, and dangerous when that virtual community is no longer densely populated and heavily trafficked; when the saving continuity of literature itself is under electronic and academic assault; when your alienation becomes generic rather than individual, and the business pages seem to report on the world’s conspiracy to grandfather not only you but all your kind, and the price of silence seems no longer to be obscurity but outright oblivion.

I recognize that a person writing confessionally for a national magazine may have less than triple-A credibility in asserting that genuine reclusiveness is simply not an option, either psychologically or financially, for writers born after Sputnik. It may be that I’ve become a gregarious traitor. But in belatedly following my books out of the house, doing some journalism and even hitting a few parties, I’ve felt less as if I’m introducing myself to the world than as if I’m introducing the world to myself. Once I stepped outside my bubble of despair I found that almost everyone I met shared many of my fears, and that other writers shared all of them.

In the past, when the life of letters was synonymous with culture, solitude was possible the way it was in cities where you could always, day and night, find the comfort of crowds outside your door. In a suburban age, when the rising waters of electronic culture have made each reader and each writer an island, it may be that we need to be more active in assuring ourselves that a community still exists. I used to distrust creative-writing departments for what seemed to me their artificial safety, just as I distrusted book clubs for treating literature like a cruciferous vegetable that could be choked down only with a spoonful of socializing. As I grope for my own sense of community, I distrust both a little less now. I see the authority of the novel in the nineteenth and early twentieth centuries as an accident of history — of having no competitors. Now the distance between author and reader is shrinking. Instead of Olympian figures speaking to the masses below, we have matching diasporas. Readers and writers are united in their need for solitude, in their pursuit of substance in a time of ever-increasing evanescence: in their reach inward, via print, for a way out of loneliness.

One of the cherished notions of cybervisionaries is that literary culture is antidemocratic — that the reading of good books is primarily a pursuit of the leisured white male — and that our republic will therefore be healthier for abandoning itself to computers. As Shirley Heath’s research (or even a casual visit to a bookstore) makes clear, the cybervisionaries are lying. Reading is an ethnically diverse, socially skeptical activity. The wealthy white men who today have powerful notebook computers are the ones who form this country’s most salient elite. The word “elitist” is the club with which they bash those for whom purchasing technology fails to constitute a life.

That a distrust or an outright hatred of what we now call “literature” has always been a mark of social visionaries, whether Plato or Stalin or today’s free-market technocrats, can lead us to think that literature has a function, beyond entertainment, as a form of social opposition. Novels, after all, do sometimes ignite political debates or become embroiled in them. And since the one modest favor that any writer asks of a society is freedom of expression, a country’s poets and novelists are often the ones obliged to serve as voices of conscience in times of religious or political fanaticism. Literature’s aura of oppositionality is especially intense in America, where the low status of art has a way of turning resistant child readers into supremely alienated grownup writers. What’s more, since the making of money has always been of absolute centrality to the culture, and since the people who make a lot of it are seldom very interesting, the most memorable characters in U.S. fiction have tended to be socially marginal: Huck Finn and Janie Crawford, Hazel Motes and Tyrone Slothrop. Finally, the feeling of oppositionality is compounded in an age when simply picking up a novel after dinner represents a kind of cultural Je refuse!

It’s all too easy, therefore, to forget how frequently good artists through the ages have insisted, as Auden put it, that “poetry makes nothing happen.” It’s all too easy to jump from the knowledge that the novel can have agency to the conviction that it must have agency. Nabokov pretty well summed up the political platform that every novelist can endorse: no censorship, good universal education, no portraits of heads of state larger than a postage stamp. If we go any further than that, our agendas begin to diverge radically. What emerges as the belief that unifies us is not that a novel can change anything but that it can preserve something. The thing being preserved depends on the writer; it may be as private as “My Interesting Childhood.” But as the country grows ever more distracted and mesmerized by mass culture, the stakes rise even for authors whose primary ambition is to land a teaching job. Whether they think about it or not, novelists are preserving a tradition of precise, expressive language; a habit of looking past surfaces into interiors; maybe an understanding of private experience and public context as distinct but interpenetrating; maybe mystery, maybe manners. Above all, they are preserving a community of readers and writers, and the way in which members of this community recognize each other is that nothing in the world seems simple to them.

Shirley Heath uses the bland word “unpredictability” to describe this conviction of complexity; Flannery O’Connor called it “mystery.” In Desperate Characters, Fox captures it like this: “Ticking away inside the carapace of ordinary life and its sketchy agreements was anarchy.” For me, the word that best describes the novelist’s view of the world is tragic. In Nietzsche’s account of the “birth of tragedy,” which remains pretty much unbeatable as a theory of why people enjoy sad narratives, an anarchic “Dionysian” insight into the darkness and unpredictability of life is wedded to an “Apollonian” clarity and beauty of form to produce an experience that’s religious in its intensity. Even for people who don’t believe in anything that they can’t see with their own two eyes, the formal aesthetic rendering of the human plight can be (though I’m afraid we novelists are rightly mocked for overusing the word) redemptive.

It’s possible to locate various morals in Oedipus Rex—“Heed oracles,” say, or “Expect the unexpected,” or “Marry in haste, repent at leisure”—and their existence confirms in us a sense of the universe’s underlying orderliness. But what makes Oedipus human is that of course he doesn’t heed the oracle. And though Sophie Bentwood, twenty-five hundred years later, “shouldn’t” try to insulate herself from the rabid society around her, of course she tries to anyway. But then, as Fox writes: “How quickly the husk of adult life, its importance, was shattered by the thrust of what was, all at once, real and imperative and absurd.”

I hope it’s clear that by “tragic” I mean just about any fiction that raises more questions than it answers: anything in which conflict doesn’t resolve into cant. (Indeed, the most reliable indicator of a tragic perspective in a work of fiction is comedy.) The point of calling serious fiction tragic is to highlight its distance from the rhetoric of optimism that so pervades our culture. The necessary lie of every successful regime, including the upbeat techno-corporatism under which we now live, is that the regime has made the world a better place. Tragic realism preserves the recognition that improvement always comes at a cost; that nothing lasts forever; that if the good in the world outweighs the bad, it’s by the slimmest of margins. I suspect that art has always had a particularly tenuous purchase on the American imagination because ours is a country to which so few terrible things have ever happened. The one genuine tragedy to befall us was slavery, and it’s probably no accident that the tradition of Southern literature has been strikingly rich and productive of geniuses. (Compare the literature of the sunny, fertile, peaceful West Coast.) Superficially at least, for the great white majority, the history of this country has consisted of success and more success. Tragic realism preserves access to the dirt behind the dream of Chosenness — to the human difficulty beneath the technological ease, to the sorrow behind the pop-cultural narcosis: to all those portents on the margins of our existence.

People without hope not only don’t write novels, but what is more to the point, they don’t read them. They don’t take long looks at anything, because they lack the courage. The way to despair is to refuse to have any kind of experience, and the novel, of course, is a way to have experience.

— Flannery O’Connor

Depression, when it’s clinical, is not a metaphor. It runs in families, and it’s known to respond to medication and to counseling. However truly you believe there’s a sickness to existence that can never be cured, if you’re depressed you will sooner or later surrender and say: I just don’t want to feel so bad anymore. The shift from depressive realism to tragic realism — from being immobilized by darkness to being sustained by it — thus strangely seems to require believing in the possibility of a cure. But this “cure” is anything but straightforward.

I spent the early nineties trapped in a double singularity. Not only did I feel that I was different from everyone around me, but I felt that the age I lived in was utterly different from any age that had come before. For me the work of regaining a tragic perspective has therefore involved a dual kind of reaching out: both the reconnection with a community of readers and writers, and the reclamation of a sense of history.

It’s possible to have a general sense of history’s darkness, a mystical Dionysian conviction that the game ain’t over till it’s over, without having enough of an Apollonian grasp of the details to appreciate its consolations. Until a year ago, for example, it would never have occurred to me to assert that this country has “always” been dominated by commerce.[1] I saw only the ugliness of the commercial present, and naturally I raged at the betrayal of an earlier America that I presumed to have been truer, less venal, less hostile to the enterprise of fiction. But how ridiculous the self-pity of the writer in the late twentieth century can seem in light of, say, Herman Melville’s life. How familiar his life is: the first novel that makes his reputation, the painful discovery of how little his vision appeals to prevailing popular tastes, the growing sense of having no place in a sentimental republic, the horrible money troubles, the abandonment by his publisher, the disastrous commercial failure of his finest and most ambitious work, the reputed mental illness (his melancholy, his depression), and finally the retreat into writing purely for his own satisfaction.

Reading Melville’s biography, I wish that he’d been granted the example of someone like himself, from an earlier century, to make him feel less singularly cursed. I wish, too, that he’d been able to say to himself, when he was struggling to support Lizzie and their kids: Hey, if worse comes to worst, I can always teach writing. In his lifetime, Melville made about $10,500 from his books. Even today, he can’t catch a break. On its first printing, the title page of the second Library of America volume of Melville’s collected works bore the name, in twenty-four-point display type, HERMAN MEVILLE.

Last summer, as I began to acquaint myself with American history, and as I talked to readers and writers and pondered the Heathian “social isolate,” there was growing inside me a realization that my condition was not a disease but a nature. How could I not feel estranged? I was a reader. My nature had been waiting for me all along, and now it welcomed me. All of a sudden I became aware of how very hungry I was to construct and inhabit an imagined world. The hunger felt like a loneliness of which I’d been dying. How could I have thought that I needed to cure myself in order to fit into the “real” world? I didn’t need curing, and the world didn’t, either; the only thing that did need curing was my understanding of my place in it. Without that understanding — without a sense of belonging to the real world — it was impossible to thrive in an imagined one.

At the heart of my despair about the novel had been a conflict between a feeling that I should Address the Culture and Bring News to the Mainstream, and my desire to write about the things closest to me, to lose myself in the characters and locales I loved. Writing, and reading too, had become a grim duty, and considering the poor pay, there is seriously no point in doing either if you’re not having fun. As soon as I jettisoned my perceived obligation to the chimerical mainstream, my third book began to move again. I’m amazed, now, that I’d trusted myself so little for so long, that I’d felt such a crushing imperative to engage explicitly with all the forces impinging on the pleasure of reading and writing: as if, in peopling and arranging my own little alternate world, I could ignore the bigger social picture even if I wanted to.

As I was figuring all this out, I got a letter from Don DeLillo, to whom I’d written in distress. This, in part, is what he said:

The novel is whatever novelists are doing at a given time. If we’re not doing the big social novel fifteen years from now, it’ll probably mean our sensibilities have changed in ways that make such work less compelling to us — we won’t stop because the market dried up. The writer leads, he doesn’t follow. The dynamic lives in the writer’s mind, not in the size of the audience. And if the social novel lives, but only barely, surviving in the cracks and ruts of the culture, maybe it will be taken more seriously, as an endangered spectacle. A reduced context but a more intense one.

Writing is a form of personal freedom. It frees us from the mass identity we see in the making all around us. In the end, writers will write not to be outlaw heroes of some underculture but mainly to save themselves, to survive as individuals.

DeLillo added a postscript: “If serious reading dwindles to near nothingness, it will probably mean that the thing we’re talking about when we use the word ‘identity’ has reached an end.”

The strange thing about this postscript is that I can’t read it without experiencing a surge of hope. Tragic realism has the perverse effect of making its adherents into qualified optimists. “I am very much afraid,” O’Connor once wrote, “that to the fiction writer the fact that we shall always have the poor with us is a source of satisfaction, for it means, essentially, that he will always be able to find someone like himself. His concern with poverty is with a poverty fundamental to man.” Even if Silicon Valley manages to plant a virtual-reality helmet in every American household, even if serious reading dwindles to near-nothingness, there remains a hungry world beyond our borders, a national debt that government-by-television can do little more than wring its hands over, and the good old apocalyptic horsemen of war, disease, and environmental degradation. If real wages keep falling, the suburbs of “My Interesting Childhood” won’t offer much protection. And if multiculturalism succeeds in making us a nation of independently empowered tribes, each tribe will be deprived of the comfort of victimhood and be forced to confront human limitation for what it is: a fixture of life. History is the rabid thing from which we all, like Sophie Bentwood, would like to hide. But there’s no bubble that can stay unburst. On whether this is a good thing or a bad thing, tragic realists offer no opinion. They simply represent it. A generation ago, by paying close attention, Paula Fox could discern in a broken ink bottle both perdition and salvation. The world was ending then, it’s ending still, and I’m happy to belong to it again.

[1]

I realize that this is a dismal confession, and that my managing to slip through college without ever taking a course in either American history or American literature is hardly an excuse.

[1996]


At the Altar – On painting and spirituality by Emma Crichton-Miller

At the Altar

On painting and spirituality

by Emma Crichton-Miller

I was in my teens when I first started to really look at paintings. I didn’t just look, though: I bathed in them, and I was perpetually teased by my friends for the tremendous length of time it took me to navigate an art gallery. This pleasure of looking and of being completely absorbed in painting has remained constant; whether ancient or modern, figurative or abstract, and whatever the style, I am prepared to give every work the chance to lure me in.

Going to an art gallery is like going to church — a spiritual experience. But what makes a painting worthy of veneration?

What is so compelling? When art was an adjunct of religion, its power was clear. But from the Renaissance on, painting, at least in the Western tradition, has preoccupied itself as intensely with secular as with overtly religious subject matter, or else with no subject at all. Yet when you are in the presence of an unequivocally great work of art, it seems to open a door to a realm of ideas and emotions not accessible through any other route. It’s a quality that goes far beyond prettiness or great skill, which on their own can numb and irritate, and it transcends the visceral excitement of paint, or the sorcery of summoning life onto canvas. Nor is it just the stories of power or desire, however literal or oblique, that bind us. There is some hankering after truth that drives us to look intently at pictures, some hunger of the spirit as much as the senses.


Colin Drew, Untitled, 1990, oil on canvas

One way or another, I think, artists themselves have always known this. Some 3,000 to 4,000 years ago, artists in Ancient Egypt began to use borders to mark off narrative scenes and decorative panels on tomb walls. The great vase-painters of Ancient Greece and the mosaicists of Ancient Rome also understood the power of the edge in transforming our relationship with an image. Rather than being continuous with our mundane world, as sculpture is, for example, a framed painting, or bordered image, offers a world apart, transfigured from four down to two dimensions; a window onto an ideal space.

For those of us who love painting, this is the key to the medium’s hold over us. Be it traditional, figurative painting or abstract; Byzantine or cubist; and whether from the 12th century or the 21st, the core pleasure of any painting is that of moving into another world, where time is stilled and passage for the eye is swift and free. This is as true of the blue depths of a landscape by the 16th-century Flemish artist Joachim Patinir as it is of the complexity of character, wrought in swirling oils, in a portrait by Frank Auerbach. Of course, a great deal of the experience of a painting is aesthetic and even intellectual: you enjoy the structure of forms, textures and colours, and you respond to the story, or ideas, or emotions the artist is eager to communicate. You go to painting eager for a new vision of this world. But you also go, very often, with a hope for a glimpse into another. Perhaps it is this illusion of a threshold that enables painting to so readily serve as a gateway to another psychological or even spiritual domain.


Colin Drew, Untitled, c 2005, 18 x 34 ins., oil on canvas

One painter who understood this potential very well was Balthus. In many ways a fantasist, with an unsettling fixation on young girls, he was compelling in his commitment to the power of painting to put us in touch with a spiritual dimension.  I have always been drawn to his landscapes, with their serene nostalgia for another world, and to his quiet interiors, in which a figure often stands against or looking through a window. These paintings conjure explicitly the pleasure we gain from looking through a frame at a painted landscape, as much as the pleasure of looking at the landscape itself.


According to Semir Zeki, professor of neuroesthetics at University College London, our pleasure is a neurological aptitude built into the visual system of the brain, whereby we are drawn to window-like, frame-like structures. For Zeki, Balthus is the quintessential painter in his ability to generate maximum excitement in our brain. But whatever the neurological source of the pleasure, there is no doubting the lively sense of communication, even of communion, that lovers of painting experience when they spend time in front of a great work. You have only to think of the hushed awe in the Sistine Chapel, despite the cricked necks, or the reverential queues of people wishing to pay homage to the Mona Lisa in the Louvre. Even the crude vandalism of Vladimir Umanets, who scrawled black paint on Mark Rothko’s painting Black on Maroon (1958) at the Tate Modern in October 2012, is a perverse recognition of the painting’s power.

The Guardian’s Jonathan Jones commented at the time: ‘It is a horrible fact that people who for whatever reason feel compelled, in an art gallery, not to stand and look but to scribble, or throw acid, or pull out a hammer, tend to pick the most potent and authoritative works of art for their assaults. It seems there is a psychic force in truly great art that draws the attacker.’ Jones gave the example of Leonardo da Vinci’s Burlington House Cartoon in the National Gallery – that soft, sensuous, deeply moving evocation of motherly love, portraying Mary and her mother Anne with the infant Christ and the child St John the Baptist – which was shot in 1987 by a man with a gun. Likewise, in 1985, Rembrandt’s radiant and alluring Danaë in the Hermitage so provoked a visiting Lithuanian, later declared insane, that he threw sulphuric acid over the canvas before slashing it.

Setting aside the provocation of their subject matter, these works are obvious emblems of established value. The institutional apparatus of the museum – the velvet ropes, bullet-proof screens, priestly attendants and hushed whispers – conspires to guide the alienated and angry to the most revered works. In just the same way, such scene-setting encourages the homage of those more inclined to worship at the altar of art. These trappings of cultural value contribute to what the German philosopher Walter Benjamin, in his essay ‘The Work of Art in the Age of Mechanical Reproduction’ (1936), defined as the specific ‘aura’ of the unique work of art: they shore up our dwindling faith in the unique.

But I would argue that it is also the psychological power of the framed space that has long made painting, in particular, a natural ally of religion. Tradition holds that Saint Luke the Evangelist, as well as being a physician, was also a painter. Legend has it that he painted the Virgin Mary from life, which is perhaps the source of his status as the patron saint of artists, capturing with his likeness the power of the Virgin to heal and forgive. From the earliest Christian paintings in the catacombs of Rome, painting has proven to be a forceful tool of evangelisation, depicting on the walls truths too dangerous to speak, while reminding those persecuted for their beliefs of the glorious otherworld of grace to which they aspired. From the conversion of Emperor Constantine, religious art came above ground, with the icons of the Byzantine Orthodox Church standing as a passionately defended aid to devotion.

If part of the allure of great painting is its capacity to draw you into another world, to make an elsewhere tangible, the icon painters hoped in turn to be channelling a sacred power – one that would flow out from the image into the beholder. It was the fear that the populace might take this transfer of sacred power too literally, might ascribe to the image the holiness reserved for the spiritual subject of the image and so fall into the sin of idolatry, that led to the fierce debacle of iconoclasm in the Orthodox Church of the eighth and ninth centuries. At two pivotal Church Councils, in 754 in Hiereia and 787 in Nicaea, the use of sacred images was first condemned and then defended, but not before the destruction of many images.

It was not until 843, after yet another wave of vehement iconoclasm, that Theodora, widow of Emperor Theophilus (829-842) and regent for Emperor Michael III, was able finally to restore the use of images in the Orthodox Church. She is supposed to have said: ‘If for love’s sake, anyone does not kiss and venerate these images in a relative manner, not worshipping them as gods but as images of their archetypes, let him be anathema!’ Theodora raises an argument as old as Plato: are images to be condemned because they are merely delusional copies of the shadows of reality that surround us, or are they rather to be celebrated as bringing intimations of the ideal world beyond?

Theodora drew heavily on the writings of the theologian St John of Damascus, who suggested that the Old Testament ban on graven images applied only to God, who is ineffable and invisible. St John argued that the act of incarnation was in itself an argument for the holiness of representation, since Christ was in a sense a representation of God in human form. To condemn religious imagery was therefore to refuse the miracle of Christ’s birth and the salvation he promised for the whole created world. From the point of view of art history, this was the decisive point where Christianity departed most completely from the monotheist traditions of Judaism and Islam, with their ban on religious imagery.

By the 12th and 13th centuries, sacred images had, in Western Christendom, become a powerful means of ensuring the flow of a potentially radical and passionate piety into orthodox channels. The earliest painted borders, those first demarcations between the sacred and the profane, had become entire altarpieces – the elaborate architecture of public piety, where the sacred image was protected and honoured: separated out from our fallen world with gilded carving, but also made available to us for our contemplation. Our communion with the infinite was focused tightly through the lens of Christian mythology.

Then, on the threshold of the Renaissance, in the frescoes of Giotto and his followers in the 14th century, the membrane between the world of the Bible and sacred fable and the day-to-day world grows thin. Giotto’s use of three-dimensional modelling and naturalistic detail invites us to find eternity on our doorsteps, just as the earliest humanist philosophers encouraged us to trust our own enquiring intellects and all five senses in exploring a created world that was no longer opposed to heaven, but shot through with divinity. In the magnificent decorations in the Scrovegni Chapel in Padua, accomplished around 1305, which consist of 37 frescoes of the life of the Virgin Mary and the life of Christ, Giotto explores the manifold hopes, fears and dreams of humanity. Though expressly Biblical, these theatrical scenes are played out on a stage that seems tantalisingly adjacent to our own.

From then on, the conversation between painting and religion has never entirely gone away, although the portal to the sublime has become wider, encompassing landscape, portraiture and, latterly, in the 20th century, abstraction. But whether these doorways lead anywhere except to a wall of paint has become a central question for both artists and audience.


Colin Drew, Untitled, 1990, oil on wood

In the second half of the twentieth century, Mark Rothko’s dark, intensely layered abstract paintings – which promise depth and invite introspective reflection – became emblems of a contemporary art of transcendence that avoided the discredited symbolisms of both Christianity and Romanticism. In conversation with the artist and critic Selden Rodman, Rothko defended himself against the claim that his abstract colour works were merely decorative and formalist. He said: ‘I’m interested only in expressing basic human emotions: tragedy, ecstasy, doom, and so on, and the fact that a lot of people break down and cry when confronted with my pictures shows that I communicate those basic human emotions…. The people who weep before my pictures are having the same religious experience I had when I painted them. And if you, as you say, are moved only by their color relationships, then you miss the point!’

The son of Russian Jewish immigrants to the United States, Rothko was inspired by his reading of Friedrich Nietzsche, Freud and Jung to search, in paint, for an alternative language of spiritual expression to that offered by traditional religious imagery. He acknowledged the difficulty of doing this when, in 1947, he wrote: ‘Without monsters and gods, art cannot enact our drama: art’s most profound moments express this frustration. When they were abandoned as untenable superstitions, art sank into melancholy.’ Rothko saw his increasingly abstract art as an attempt to overcome this melancholy: ‘I do not believe that there was ever a question of being abstract or representational. It is really a matter of ending this silence and solitude, of breathing and stretching one’s arms again.’


In 1959, en route to Europe, after having embarked on that great sequence of murals for the private room in the new Four Seasons restaurant at the Seagram Building in New York – nine of which are now housed together at Tate Modern, and one of which was vandalized in October 2012 – Rothko explained in an interview that he had been deeply influenced by Michelangelo’s walls in the staircase of the Medicean Library in Florence. ‘He achieved just the kind of feeling I’m after,’ Rothko said. ‘He makes the viewers feel that they are trapped in a room where all the doors and windows are bricked up, so that all they can do is butt their heads for ever against the wall.’

In this light, the Seagram Murals might be understood as a punitive refusal of transcendence for the restaurant’s privileged clientele, or else as an admission of his own despair.

In the final six years of his life, before his suicide in 1970, Rothko dedicated himself to the creation of what is now known as the Rothko Chapel, in Houston, Texas, commissioned by the Texas oil millionaires John and Dominique de Menil. The chapel was initially conceived as Roman Catholic, and Rothko intended it to be a place of pilgrimage for those seeking, as he did, a contemporary religious art. Now non-denominational, the small, windowless building houses 14 large canvases, arranged in a series of triptychs, together with five individual canvases, all built up with impenetrable layers of chestnut brown, brick red, mauve and black. At the chapel’s dedication in 1971, Dominique de Menil spoke unequivocally about her understanding of the significance of Rothko’s work: ‘We are cluttered with images and only abstract art can bring us to the threshold of the divine.’ Whether it is indeed to the threshold of the divine, with room to breathe and stretch one’s arms again, or to the blank end of some dark tunnel, where we can only butt our heads, is for the viewer to determine.


A portal onto the divine? Mark Rothko, Orange, Red, Yellow, 1961.

What is certainly true is that there are many very wealthy individuals who are prepared to pay for the experience of owning, and, presumably, contemplating, a Rothko canvas. His glowing Orange, Red, Yellow (1961) shattered all auction records for post-war and contemporary art when it sold for nearly $87 million (£54 million) at Christie’s in New York.

It is within the context of the near-veneration of Rothko’s work, both by the market and by the priesthood of art critics and museum curators, that we arrive at a contemporary cliché. Art, we are told, has become the new religion. In the godless West, where the churches are empty and theology is dismissed as a fairy tale, we seek spiritual nourishment in our temples of culture, and find in art the sublime we once sought through religion. So when a small group of pilgrims reaches that great masterpiece of Marian devotion, Piero della Francesca’s Madonna del Parto, in its quaint wayside chapel, the tiny Museo della Madonna del Parto of Monterchi in Tuscany, it is the painter’s intercession with the beyond through paint that they beseech, rather than Mary’s mercy.

To follow the argument further we need to turn to another late-20th-century master, the German painter Gerhard Richter. Richter began his artistic career in Dresden in East Germany, painting murals and portraits, but he escaped to Düsseldorf just before work began on the Berlin Wall in 1961. In Dresden he had absorbed the priorities of Socialist Realism but also (under the influence of Dresden’s former resident, the great German Romantic artist Caspar David Friedrich) the capacity of painting to invite the viewer to share the artist’s own subjective, spiritual communion with nature. In the West, Richter was stirred by American Abstract Expressionism, then Pop Art, and he began to work consistently with photography and found images, as well as paint. His oeuvre deliberately crosses between abstraction and realism, photography and painting, as if searching for the truly authentic image. Whether through a richly coloured portrait of his daughter; a painting of a bare flickering candle; a monochrome painting derived from a black and white newspaper clipping; a many-layered abstraction, hinting at a landscape just beyond; or simply through his veiled, questioning politics and refusal of obvious emotion, his work invites a profound engagement. Perhaps it is for this reason that Richter, like Rothko, finds himself at the very top of the art market. He became the most expensive living artist when his Abstraktes Bild (809-4), a vast, haunting abstract canvas created in vertical bands of colour with a squeegee, sold at Sotheby’s auction house in London for £21.3 million ($34.2 million).


Born in 1932, in Dresden, into a staunchly Protestant household, Richter grew up beneath the twin tyrannies of Fascism and Communism. Consequently, dogma has been anathema in his work, but the relationship between the search for meaning in art and the search for the divine has always been central to his thinking. In 1962 Richter wrote in his notebook: ‘Picturing things, taking a view, is what makes us human; art is making sense and giving shape to that sense. It is like the religious search for God.’


Gerhard Richter, Abstraktes Bild, 1987, Catalogue Raisonné: 627-4, oil on canvas, 52 cm x 72 cm

By 1964, he had grown bolder:  ‘Art is not a substitute religion: it is a religion (in the true sense of the word: ‘binding back’, ‘binding’ to the unknowable, transcending reason, transcendent being). But the church is no longer adequate as a means of affording experience of the transcendental, and of making religion real – and so art has been transformed from a means into the sole provider of religion: which means religion itself.’ This, surely, is the nub of our desire to tumble in imagination headlong into the spiritual depths (or heights) of a painting.  But is it simply enough that what we find when we plunge might be no more than an aura, or a trace, or a scent of the sublime?  That painting might indeed bind us back, but only to our own unknowable selves?

Richter’s insistence on the equivalence of art and religion reminds me of the French writer, art historian and mystic Romain Rolland, as quoted by Sigmund Freud in Civilization and Its Discontents. Freud had sent Rolland his book The Future of an Illusion (1927), which, as Freud cleanly puts it, ‘treats religion as an illusion’. Rolland had written back describing ‘a peculiar feeling, which he himself is never without… a sensation of “eternity”, a feeling as of something limitless, unbounded – as it were, “oceanic”.’ Rolland was clear that this feeling was a purely subjective fact, ‘not an article of faith’. But he also argued that it was the source of the energy that all the world’s different religious systems channel to their own ends. Freud’s gloss on this perspective was characteristically insightful: ‘One may, he thinks, rightly call oneself religious on the ground of this oceanic feeling alone, even if one rejects every belief and every illusion.’ The question we confront is whether the oceanic feeling, which for so many people is part of the pleasure of looking at certain paintings, that sense of time stilling to allow boundless contemplation, amounts to a religious experience, or more properly to an experience of art’s own peculiar solace, which is not necessarily religious.

For Richter, himself a professed atheist, the status of art and the experience of art continue to be both a puzzle and a driving force. Even the gorgeous stained-glass window he created for Cologne’s Roman Catholic Cathedral is a question rather than an answer. This is the Cathedral of the city where he lives, and where his three children and his third wife, Sabine Moritz, also a painter, were baptized. It is where he sometimes attends worship. And yet the window, far from some all-confirming assertion of Catholic teaching, is made up from 11,500 squares of glass in 72 colours, randomly organised by a computer to resemble pixels. The dazzling shower of multi-coloured light can lead one’s thoughts and emotions in many directions and can be harnessed to almost any orthodoxy – including an embrace of the pure chance of our existence. In fact, so outraged was the Archbishop of Cologne by the window’s stubborn refusal to give specific doctrinal shape to the emotions it stimulated that he found he had a prior commitment elsewhere on the day of its unveiling.

Perhaps there is no more to be said about painting’s seductive offering than Richter’s tentative statement to the American curator and critic Robert Storr: ‘A painting can help us think something that goes beyond this senseless existence.’ In doing this it offers both the painter and the viewer a kind of salvation. And perhaps this, as Rothko said, is the miracle that every artist strives to achieve: ‘Pictures must be miraculous; the instant one is completed, the intimacy between the creation and the creator is ended. He is an outsider. The picture must be for him, as for anyone experiencing it later, a revelation, an unexpected and unprecedented resolution of an eternally familiar need.’


Colin Drew, Untitled painting, acrylic on canvas, 101 x 105 cm, September 2015

The miracle, in other words, is creative resolution. While our desire to look at a painting might be powered by the same eternal, spiritual need that drives us to the desert or the temple, the revelation we find there is of a different order. True, we continue to seek in art, among many other things, a correspondence with those oceanic feelings, a soothing of our hunger for transcendence, but the salvation it offers is without substance or destination.

Perception, Cognition and Authentic Creation by Judith Chandler

Perception, Cognition and Authentic Creation

by Judith Chandler

I have often wondered if there is a sequence of development to undergo before transformation into ‘an artist.’ Before I began to call myself an artist there seemed to be an abyss, a huge impassable distance, without direction or guidelines, where many who take up art work at it but few actually do the art. An attraction to art-making may be connected to the human imperative to create, and it seems that there is a natural human desire for a method of enquiry that offers a way to make visible things that cannot be seen. Yet, if we embark on trying to express thoughts and feelings truthfully, the path is riddled with arguments over the relative significance of perception and cognition along the way. How we perceive may be the key to understanding what function, if any, art really serves, and to defining what it is we are in fact creating.

To highlight this point, I use the example of Tolstoy’s famous (but highly questionable) definition of the process of art:

‘To evoke in oneself a feeling one has once experienced, and having evoked it in oneself, then, by means of movements, lines, colours, sounds, or forms expressed in words, so to transmit that feeling that others may experience the same feeling – this is the activity of art.’

Tolstoy’s perception of the function of art was the expression and communication of feeling (usually ennobling). I am certain that if he were around to view some of the work of the past fifty years, this would not be his explanation. The artist Marc Quinn, for example, who used his own blood to fashion a bust of his head, relies on the modern understanding that all observations depend on the observer for anything to be considered a work of art. In other words, a private experience becomes publicly shareable only if the perceiver is willing and able. While the artist’s emotion may have an effect on the audience, it cannot, as Tolstoy thought, direct the collective mind (as has been the presumption of much Sensational art).

The Expressionist view of Tolstoy and many others declares that an artist, on having profound life experiences, should be able to express these in his work and evoke them in his viewers. Yet the function of art may be to transmit feelings, that is, to embody them, but not to evoke them: to let others interpret or understand them, not to make viewers feel them too. The trouble with accepting expressivism as the mark of ‘proper art’ is that it ignores the value of the imagination. When viewing a work of art, the beholder is invited to add his or her own interpretation, creating a unique experience of the picture. Feelings that the artist experiences, although connected to life, are not consciously intended to arouse a specific emotion in the way that speeches or horror movies do. ‘To be aesthetically effective the feelings expressed must reflect more than the personal idiosyncrasies of the artist: what is expressed must be shared, the feelings must be held in common, the particular must reflect the universal. In this sense art is able to disclose truth about our shared life of feeling.’

[K. Dorter in Conceptual Truth and Aesthetic Truth]

A further element is that, although what I experience when creating is essentially private, it is quite possible that others also feel and relate the same feelings to the object. This happens so often that art provides a fundamental way of gaining insight into the human condition. That is, there is a logical connection between the object and the receptivity of the viewer: not only does the object express human feelings; it is quite possible that the viewer sees in the object the very feelings he or she has longed to express. This is a matter of perception, not of cognition as expression theory suggests.

If I attempted to analyze or describe how I make art, it would at best be a kind of looking at it from the outside, requiring the scalpel of scientific investigation, rather like analyzing the behaviour of cats at play and trying to describe it in a meaningful way.

‘… in its exuberant purposelessness, seems close to the heart of the whole business of life. Play is the opposite of Management by Objectives, the current creed which rightly screens out spontaneity, imagination and surprise as parts of the creative process.’

[Richard Mabey in Nature Cure]

Indeed play, in common with art-making, is not a means to an end outside itself. For although it can be analyzed in intellectual terms, the practice of art-making is an imaginative one, directed and formed by an exclusively instinctive process. While observing the process of painting, of composing and arranging forms and colours, can look like play, it has more to do with the nature of perception and with things remembered in the body. And perhaps as a result of this, art is essentially productive, while play is not.

Martin Scorsese’s No Direction Home portrays Bob Dylan’s refusal to be pigeonholed as folk singer, protester or icon, because his music stands for itself, in the same way that the poet may not fully understand what he writes until it is finished. In it, Dylan says, ‘I just write ‘em, man.’ It isn’t necessary to be fully conscious of the work of art in the process of creation. What is more, I know that the ultimate experience is to paint: the deepest form of knowing and the deepest form of truth are – in the actual act itself – devilishly difficult to express and often evade conscious description.

There is a post-modern opinion that anything can be a work of art, because the living body is an expressive medium and every conception, every action or feeling is a work of art. Against this background, how is it that there is a conflicting view that in creating art a deep human need is answered, that the aesthetic, rooted in materials and fired by feeling, has the ability to take our ordinary experiences and transform them into an understanding of the meaning of those experiences? The question is this: is drawing a waste of time; can one ‘draw’ with a camera, or by leaving something to the weather; or does drawing serve a more basic purpose? Expression characterises the outward demonstration of an inward feeling. The human connection, the individual touch of the artist, has the capacity to embody mystery and feeling, and defines an intuitive aesthetic the moment a personal mark is made. Drawing joins observation with imagination, perception with physicality, and the mind with the hand. It is fertile territory for making visible the things that cannot be seen: states of mind, ideas, processes.

The imagination is a legitimate alternative to rational thinking as a way of arriving at truth; the difference is that the imagination does not grasp causal or universal principles in the way that reason does, although it does have direct access to feeling, or intuition, which Schopenhauer called ‘understanding.’ Herein lies the connection: because art is essentially creative, we can understand or grasp what is transmitted without the interference of concepts. It is not the subjective experience but the experience of being in the world itself – the sensual qualities that we experience and that are inaccessible to conceptual thinking – that enables art to move us, for psychological, and not philosophical, reasons.

Our experience always points beyond itself (to other patterns and memories), evoking and bestowing meaning on events, regardless of whether we have any way of knowing if this is what the artist intended. The truth is that there is a natural human tendency to interpret, to assign meaning and significance to events, to relate a single example to the whole of society. Paul Crowther calls this phenomenon:

‘…ontological reciprocity – the dynamic action of embodied subject and phenomenal world upon one another… which has (explicitly or implicitly) assigned philosophical significance to art.’

[Paul Crowther, Art and Embodiment: From Aesthetics to Self-consciousness]

There is a sense in which, as an artist, the truth of the thing is in the practice. This truth is akin to, but not identical to, the pleasure found in doing art; it is the satisfaction of discovering something that is essentially connected to the human imperative to create.

‘The dancers claim to follow “truth” or claim to seek “reality,” but the Wu Li Masters know better. They know that the true love of all dancers is to dance.’

[Gary Zukav, The Dancing Wu Li Masters]

I’ll mention one more aspect of truth: when the perception of the past becomes linked with the present and with historical pastness, when the compulsion felt by a painter to paint the human figure (for example) resonates with the whole of art history, so that the practice of now communicates with other existences. In Burnt Norton, T.S. Eliot writes:

‘What might have been, and what has been

Point to one end, which is always present.’

The issues of creation and the authenticity of doing are in the actual act of doing the work itself. Perhaps the seemingly impassable abyss I found was what E.M. Forster famously described: ‘How can I know what I think until I see what I say?’ Applied to art-making, perhaps one could say: how can I know what my unconscious feeling or state of mind is until I make something? The self-reflective action of making is the creative process.