Some Reflections on an Auspicious Occasion

I’ve been promoted to full “Professor.” I am no longer “Associate Professor M.G. Piety.” I am now, or will be as of 1 September, “Professor M.G. Piety.” According to my colleague Jacques Catudal, I am the first person to make full “Professor” in philosophy at Drexel in more than 18 years.

It has been a long journey, as they say. I decided to study philosophy when I was an undergraduate at Earlham College, a small Quaker college in Richmond, Indiana. I became hooked on philosophy as a result of taking a course on rationalism and empiricism with Len Clark. I didn’t particularly enjoy reading philosophy, and I hated writing philosophy papers. I loved talking about it, though. Talking about it was endlessly fascinating to me, so I switched my major from English to philosophy. I became hooked on Kierkegaard after taking a Kierkegaard seminar with Bob Horn. “Bob,” my friends at Earlham explained, “was God.” He was preternaturally wise and kind and a brilliant teacher who could draw the best out of his students while hardly seeming to do anything himself. I don’t actually remember Bob ever talking in any of the seminars I took with him, and yet he must have talked, of course.

I spent nearly every afternoon of my senior year at Earlham in Bob’s office talking to him about ideas. I worried sometimes that perhaps I should not be taking up so much of his time. He always seemed glad to see me, though, and never became impatient, even as the light began to fade and late afternoon gave way to early evening. I don’t remember him encouraging me to go to graduate school in philosophy (my guess is that he would have considered that unethical, given the state of the job market in philosophy). I do remember, however, that he was pleased when I announced that I had decided to do that.

Graduate school was enormously stimulating, but also exhausting and, for a woman, occasionally demoralizing. There has been much in the news in the last few years about how sexist the academic discipline of philosophy is. Well, it was at least as bad when I entered graduate school as it is now, and possibly even worse. Still, I persevered. I began publishing while still a student and was very fortunate to gain the support and mentorship of some important people in the area of Kierkegaard scholarship, including C. Stephen Evans, Robert Perkins and Sylvia Walsh Perkins, and Bruce H. Kirmmse, who was one of my references for a Fulbright Fellowship I was awarded in 1990 to complete the work on my dissertation on Kierkegaard’s epistemology.

I lived in Denmark from 1990 until 1998. I received my Ph.D. from McGill University in 1995 but remained in Denmark to teach in Denmark’s International Study Program, then a division of the University of Copenhagen. I wasn’t even able to go back for my graduation, so I learned only a couple of years ago, when my husband bought me my own regalia as a gift, how gorgeous the McGill regalia are (see the photo).

I came to Drexel from Denmark in 1998 as a visiting professor. I liked Drexel. It was overshadowed by its neighbor, the University of Pennsylvania, but that seemed to me almost an advantage back then. That is, Drexel had carved out a unique niche for itself as a technical university, somewhat like MIT but smaller, that provided a first-class education in a somewhat smaller range of degree programs than those offered by larger, more traditional institutions. The College of Arts and Sciences seemed to me then, and to a certain extent still seems to me today, a real jewel, Drexel’s “secret weapon,” so to speak, because while most large universities had class sizes ranging anywhere from 40 to several hundred students, most of the courses in the humanities at Drexel were capped at 25 students. Drexel also boasted some first-class scholars who were as committed to teaching as to scholarship. Drexel was providing its students with what was effectively the same quality of education in the humanities as is provided at small liberal-arts colleges, while at the same time giving them invaluable hands-on work experience (through its co-op programs) that few liberal-arts colleges could provide.

Drexel asked me to stay on for a second and then a third year, despite the fact that my beginning was less than auspicious: at the end of that first fall term, I mistakenly conflated the times of what should have been two separate exams and hence left my students sitting in a room, waiting patiently for almost an hour for me to materialize and administer the exam. It was too late, of course, to do anything by the time I learned of the mistake via a phone call from one of the secretaries in the department. I was relieved that not only was the then chair of the department, Ray Brebach, not angry with me, but he was actually eager to see whether I would be willing to stay on for another year. Ray has been one of my favorite colleagues ever since.

I received my tenure-track appointment in the spring of 2001. I liked my department. It was originally the Department of Humanities and Communications and included the disciplines of English, philosophy and communications. It was enormously stimulating to be in such a cross-disciplinary department. There were poets and novelists, as well as traditional literary scholars. I particularly liked being around the communications people, however, because many were engaged in politically significant media studies and that sort of work was reminiscent of the dinner-table discussions I remembered from childhood when my father was an editorial writer for one of the two newspapers in the town where I grew up. My association with the communications people led to the publication of an article I wrote together with my husband on the behavior of the mainstream media in the U.S. leading up to the second Iraq war.

Eventually, however, the communications people left our department and formed a new department together with the anthropologists and sociologists called the Department of Culture and Communications. So then we became the Department of English and Philosophy. I was sad to see the communications people go, but there were still plenty of creative writing people in the department who helped to make it a more stimulating environment than it would have been had it been comprised exclusively of traditional scholars. These people, including Miriam Kotzin and Don Riggs, both brilliantly talented poets, are some of my closest friends. Miriam has encouraged me to write for her outstanding online journal Per Contra, and Don, a talented caricaturist as well as poet, drew the picture of me that I occasionally use for this blog.

It was an ordeal, however, to go up for tenure. Our department has a tradition of requiring monstrously comprehensive tenure and promotion binders into which must go almost everything one has done on the road to tenure or promotion. I think each one of my tenure binders was around 500 pages in length (people going up for tenure or promotion must produce three separate binders: one for teaching, one for service, and one for scholarship). It took me the entire summer of 2006 to put those binders together, a summer when I would much rather have been writing material for publication. To add possible injury to the insult of having to devote so much time to the compilation of these binders was my fear that some of the reports of my “external reviewers” might not be so positive as they would have been had I not become involved in a scandal in Denmark surrounding a controversial Danish biography of Kierkegaard. I lost several friends, including the aforementioned Bruce Kirmmse, as a result of my role in that controversy, friends whom I feared might well have been recruited to serve as external reviewers.

To this day I don’t know who all the reviewers were. Two were selected from a list I had provided my tenure committee, but the rest were selected by the committee itself. Whatever the reviewers said, however, it was not so negative as to override the high esteem in which, it subsequently became apparent, my colleagues held me and my work. I was granted tenure in the spring of 2007, and I have fond memories to this day of the little reception provided by the dean for all faculty who were granted tenure that year. There was champagne and there were toasts and I was very, very happy.

I’d always been happy at Drexel, so I was surprised by the change that took place in me upon my becoming tenured. I felt, suddenly, that I had a home. I felt that people both liked and respected me. More even than that, however, I felt that I had found a community of high-minded people. People committed to principles of justice and fairness. I felt I had found a small community of the sort that Alasdair MacIntyre argues in After Virtue we must find if we are to live happy and fulfilling lives, the kind of community that is increasingly rare in contemporary society.

That all seems long ago now. Drexel has grown and changed. I am still fortunate, however, to have many brilliant, talented, and fair-minded colleagues. Thanks must go to my colleague Abioseh Porter, who chaired the department for most of the time I have been at Drexel and who was a staunch supporter of my development as a “public intellectual” long before “public philosophy” enjoyed the vogue it does today. Thanks must also go to the members of my promotion committee, but especially to my colleague Richard Astro, who chaired the committee. I know from merely serving on tenure-review committees that no matter how uncontroversial the final decision is anticipated to be, there is an enormous amount of work demanded of the committee members, simply because of the level of detail required in the final report.

Thanks must also go to everyone who has supported me throughout my career. I set out, actually, to list each person individually, but then I realized that there are many, many more people than I would ever be able to list. I have been very fortunate.

Thank you everyone. Thank you for everything.

 

 

What Philosophers Think They Know

Ah philosophers, you gotta love ‘em. Even if they have given up the pretension of being wiser than everyone else, they still purport to be smarter, or at least more analytically adept. And yet they continually make a conspicuous public display of just how bad their arguments can sometimes be. Rebecca Newberger Goldstein’s recent review of the infamous Colin McGinn’s new book, Philosophy of Language: The Classics Explained, is a case in point. Goldstein, a prominent philosopher herself and recipient of this year’s National Humanities Medal, argues in her review that philosophy, contrary to popular belief, does progress in a manner analogous to the sciences and does have practical value. Unfortunately, her argument is deeply and obviously flawed, revealing, once again, that philosophers are often no better at reasoning than is the average person and that academic philosophy probably does not have the kind of practical value she argues it does.

McGinn’s book is designed to make some of the classics in the history of the philosophy of language, which are notoriously impenetrable to nonspecialists, accessible to students, and others, who have “no previous familiarity” with the field. Goldstein begins her review, “What Philosophers Really Know” (The New York Review of Books, Oct. 8, 2015), with the observation that “[a]cademic philosophy often draws ire.” There are two sorts of complaints against it, she says, but they are “not altogether consistent with each other.”

The first complaint is that philosophers “can’t seem to agree on anything.” This “lack of unanimity,” she explains, “implies a lack of objectivity,” and that suggests that philosophy doesn’t actually “progress” in the way that, for example, the sciences do. This complaint, she continues, “culminates in the charge that there is no such thing as philosophical expertise.”

The second complaint, she asserts, “is that academic philosophy has become inaccessible,” in that it has generated technical “vocabularies and theories aimed at questions remote from problems that outsiders consider philosophical.” But this latter charge, she asserts, contradicts the earlier charge in that it implies “there are philosophical experts and that, in carrying the field forward, they have excluded the nonprofessional.”

It’s pretty clear, however, that the second charge implies no such thing. How could the nonprofessional lament being excluded from this purported forward movement in philosophy, when any movement whatever within the profession has already been dismissed by the nonprofessional as remote from what he or she considers philosophical? A reading of the second charge that makes it not only internally coherent but also consistent with the first charge is not that it implies that there are philosophical “experts,” but rather that it implies there are philosophical initiates whose technical vocabularies are so complex and foreign to ordinary language usage that they make it impossible for the lay person even to enter the philosophical conversation, let alone argue that philosophers’ theories are “aimed at questions remote from problems outsiders consider philosophical.”

That is, the second charge is, I believe, that philosophers have effectively insulated themselves from criticism by making their ideas and theories inaccessible to anyone but themselves. Or in the words of Jonathan Rée (as quoted in Philosophy Now), that they have become nothing more than “a self-perpetuating clique like freemasons.” This reading is perfectly consistent with the charge that there is no progress in philosophy, the real “culmination” of the observation that philosophers can’t seem to agree on anything. No one disputes, after all, that there are degrees of mastery of any technical vocabulary and methodology specific to a given field and hence that those on the high end of the spectrum may be considered “experts.” What is at issue in criticisms of academic philosophy is whether this “expertise” amounts to wisdom (i.e., sophia), which is to say whether it can make a positive difference in anyone’s life, or indeed, any sort of difference at all.

Goldstein claims philosophy has practical value, but the examples she offers fail to support her claim. She presents the fact that “[c]ertain speech situations yield their meaning without inquiring about the speaker’s intentions… [and that] other situations require inquiry into what is called pragmatics, which analyzes both the language employed and the language user’s intentions” as an insight specific to philosophers. This in itself strains credulity given how important “misunderstandings” based on mistaken interpretations of “the language user’s intentions” have been throughout human history and how often they have been depicted in literature and film. Goldstein observes herself that “[e]very serious novelist” pays close attention to the relationship between “what a sentence means and what a person means in uttering the sentence.” But then she goes on to assert that this relationship is some kind of philosophical discovery, as if even serious novelists could not make sense of the relationship without the help of technical philosophical terms such as “referential definite descriptions” and “attributive definite descriptions.”

“Definite descriptions,” Goldstein explains, are ones that begin with the definite article “the,” such as “the blonde woman over there.” The philosopher Keith Donnellan, Goldstein continues, drew a further distinction between referential and attributive definite descriptions. Whether the woman is really blonde, or only wearing a blonde wig, doesn’t matter when the description is used referentially. “When I use a definite description referentially,” explains Goldstein,

I have a specific individual in mind, and my aim is to refer to that individual. So long as I get the listener to know who or what I’m talking about, I’m using the definite description successfully… The specific content of the description doesn’t really matter; I’m just using it in effect, to point. But when I use a definite description attributively, the content is precisely the point. The phrase will refer to anything or anybody that uniquely satisfies what it describes, even if I, as the speaker, am ignorant of the referent, as when I say, “The bastard who hacked my computer has made my life a living hell.”

Goldstein then gives an example she asserts illustrates the practical value of this purported philosophical insight concerning the difference between what people mean with their words and what the words can be interpreted to mean independently of the speaker’s intentions.

The example is of a man who deserts his wife, but then later “marries” a new woman. In the process of forming a new business, he signs a contract stipulating, among other things, that “‘the wife of the depositor’ shall benefit in the event of his death.” He makes it clear, explains Goldstein, “though of course, not in writing,” that he intends the beneficiary to be his new “wife,” not the wife he deserted.

But what happens when he dies? The wife he deserted suddenly presents herself and declares she is the rightful beneficiary. “Should ‘the wife of the depositor’ be interpreted referentially,” asks Goldstein, so that it would refer to the woman the bigamist intended to indicate with the phrase, or attributively, as the real wife demands?

“Just such a legal situation arose in 1935,” explains Goldstein, though she does not identify the case by name, “and the majority of judges decided on the referential interpretation.” But then Goldstein goes on to assert that “[t]he philosopher Gideon Rosen has argued that subtle points in the philosophy of language raised by, among others, [Saul] Kripke, imply that the majority opinion was mistaken.”

Really, I kid you not, she says that. How, one is compelled to ask, can philosophy of language determine that sort of legal question, or indeed any legal question? The court knew that “the wife of the depositor” could be interpreted to refer either to the original, and legally the only legitimate, wife or to the second, legally illegitimate “wife.” It didn’t need the fancy-schmancy philosophical terminology of “referential” versus “attributive” definite descriptions to know that “the wife of the depositor” could be interpreted in two different ways. Nor, on Goldstein’s description of the case, can there have been any doubt on the part of the court concerning how the dead man intended it to be interpreted.

The question for the court was how it wanted to interpret the phrase. Did it want to honor the wishes of the dead bigamist or the technically correct claim of the first woman to be the genuine “wife” and hence the legitimate beneficiary? That is, did it want to honor the spirit or the letter of the contract? Philosophy can’t answer that question for the court. It can only give the court a new way of articulating it.

The second example Goldstein gives involves the interpretation of the Constitution. This example is analogous to the first and hence has the same problem. “Should we,” she asks, “as strict constructionists urge us, consider only the semantics of the words themselves in order to interpret the Constitution’s meaning, or must we use pragmatics, too, consulting historians to try to understand the original intentions of the framers?” That is, can philosophy “lend support to those who argue for the Constitution as a living document?” The answer, of course, is yes, it can, but it can also lend support to those who do not want to see it as a living document.

How, one might ask, are we supposed to be able to reliably determine the original intentions of the framers? What are we going to consider sufficient evidence of those intentions? Those kinds of questions are the very lifeblood of philosophy. God help us if we turn to philosophy for answers to them because philosophy, as Goldstein observes herself, is better at discovering questions than at discovering answers.

Goldstein points out that McGinn’s book omits many classics in the philosophy of language. Among those whose writings were omitted is, according to Goldstein, Willard Van Orman Quine. Goldstein would have done well to review her Quine, because Quine argues, in an essay entitled “Has Philosophy Lost Contact with People?” (Theories and Things, 1981), that professional philosophers are not, in fact, purveyors of wisdom, nor, he asserts, have they any “peculiar fitness for helping society get on an even keel.”

I don’t mean to suggest that there is no value at all to academic philosophy. Nor do I mean to suggest that there is no practical value to it. Philosophy, whether it is defined as the love of and search for wisdom, or as the love of and search for “conceptual clarity and argumentative precision” (Goldstein’s articulation of the “analytic” conception of philosophy), is a perennial human activity, and like other perennial human activities such as art and literature, it deserves serious study. I believe as well, and have argued elsewhere, that it has practical value. My point here is simply that it does not appear, according to Goldstein’s own arguments, to have the practical value she claims it has.

There is one point on which I agree with Goldstein. If there is such a thing as philosophical progress, then it is indeed “less accurately measured in the discovery of answers and more in the discovery of questions.” I doubt whether most people would consider this progress, but I do think it has a certain positive value in that it can encourage humility.

Strangely, humility is precisely what so many professional philosophers, including Goldstein, seem to lack.

(This piece appeared originally in the Sept. 18, 2015 edition of CounterPunch.)

On Collective Guilt

We can’t leave the Holocaust alone. That might be a good thing if we had the courage to view it honestly. We don’t though. We insist that it’s a puzzle we continue to try to solve, ostensibly so that we will know where to place blame, and in that way also know how to ensure that it will never happen again. We refuse, however, to place blame where it really belongs and so we keep turning it over and over, searching for something we will never find.

Why the Germans? Why the Jews? are questions that Götz Aly takes up in a new book the title of which begins with these questions (Metropolitan Books, 2014). Aly’s theory, not particularly novel, is that the social and economic advances made possible for Jews in Germany as a result of a series of legal reforms in the various German states in the eighteenth and nineteenth centuries made them objects of envy. “Not all Nazi voters,” acknowledges Christopher R. Browning in a review of Aly’s book, “were anti-Semitic, but they at least tolerated Nazi anti-Semitism” (“How Envy of Jews Lay Behind It,” The New York Review of Books, January 8, 2015).

“But how to explain,” Browning continues, “this ‘moral insensibility’ and ‘moral torpor’ of 1933-1944, which underpinned the ‘criminal collaboration’ between the German people and the Nazi regime?” The answer Aly offered first in Hitler’s Beneficiaries (Metropolitan Books, 2005), was material gain. Aly’s new work supplements the motive of material gain with a “new morality” involving race theory that would justify such collaboration.

Many Germans remained unconvinced, however, by the new race theory. Many Germans were, in fact, untroubled by the legal reforms that had made possible the flowering of the Jewish middle class. Many Germans had even championed these reforms.

What happened to those people?

The journalist Ruth Andreas-Friedrich, who lived in Berlin during the war, gives us some insight into what happened to them in the diary she kept from 1938 to 1945. Initially, at least, they were not helping the Nazis. Her entry for November 10, 1938, the day after the infamous Kristallnacht, gives moving testament to that fact. At half past nine in the morning Andreas-Friedrich took a bus to her office. “The bus conductor looks at me,” she writes,

as if he had something important to say, but then just shakes his head, and looks away guiltily. My fellow passengers don’t look up at all. Everyone’s expression seems somehow to be asking forgiveness. The Kurfürstendamm is a sea of broken glass. At the corner of Fasanenstraße people are gathering–a mute mass looking in dismay at the synagogue, whose dome is hidden in a cloud of smoke.

            ‘A damn shame!’ a man beside me whispers … [W]e all feel that we are brothers as we sit here in the bus ready to die of shame. Brothers in shame; comrades in humiliation” (Berlin Underground 1938-1945 [Paragon House, 1989]).

When she gets to the office, her editor, who, she observes, was “rumored to have a tinge of Nazism,” says “one doesn’t dare look people in the eye anymore” (21).

“They’ve dragged them all away–all the Jewish men they could get hold of,” begins her entry for the next day.

Only those who were warned in time have escaped the raid. Thank Heavens, a good many were warned. Hundreds managed to disappear at the houses of friends; hundreds sought shelter with strangers and found it. One little seamstress took in two Jewish fugitives; she didn’t even know their names or where they came from. Workingmen in the Frankfurter Allee brought back to the Jewish shop-owners the merchandise that was scattered over the street. They didn’t say a word, just tugged sheepishly at their caps. The chief surgeon of a hospital is hiding a wounded rabbi in the back room from the bloodhounds of the Gestapo.

            While the SS was raging, innumerable fellow Germans were ready to die of pity and shame” (p. 25).

The next line of the translation reads “Almost all our friends have people quartered on them.” If one goes to the original German edition of the diaries, however, the text continues

Women are dashing about the city today with mysterious bundles under their arms, meeting one another on street corners: Shaving articles for Doctor Weißmann. A clean shirt for Fritz Levy, night things for Jochen Cohn. One tries, as much as possible, to look after those in hiding. It isn’t advisable for them to come out of hiding yet. What happened yesterday could continue today (Der Schattenmann [The Shadow Man], Suhrkamp, 2nd ed. 2012, p. 38).

Then comes the line “Almost all our friends have people quartered on them.” There is no ellipsis to indicate material was omitted. One could argue it doesn’t matter because what makes it into the translation makes clear that the general reaction of Berliners to Kristallnacht was one of horror. Still, the omitted material makes even clearer how widespread among gentiles was sympathy for the plight of the Jews.

Interesting, eh? People running about the city collecting the necessary articles for friends, and in some cases even strangers, they’re protecting. Jews being given shelter by countless German gentiles. Workmen returning to Jewish shop-owners merchandise that had been scattered on the street. What happened to those countless Germans who were sympathetic to the plight of the Jews, to those countless “brothers in shame”?

What do you think happened to them? What happens to people who try to help others as it becomes increasingly clear what such assistance might eventually cost them? Some continue despite the danger, some go into resistance groups such as “Uncle Emil,” the one with which Andreas-Friedrich became associated, but most do not.

Andreas-Friedrich “looks lovingly” at the man who whispers “A damn shame!” at the sight of the burning synagogue.

“It occurs to me,” she writes, “that this is really the time to call your neighbor ‘brother.’ But I don’t do it. One never does; one just thinks it. And if you really do pluck up the courage for a running start, in the end you just ask, ‘Pardon me, could you tell me the time?’ And then you are instantly ashamed of being such a coward” (p. 19).

Why couldn’t she do it? Why couldn’t she acknowledge to the man that she also condemned what had happened the night before? Why couldn’t any of the people on the bus who were hanging their heads in shame, in silent shame? Why doesn’t one do it?

Years ago I saw a nature program that focused on a litter of wolf cubs. There were three cubs in the den. One emerged, however, days before the other two. He was bold, he was courageous. He was eager to explore the outside world. Ah, I thought to myself, he will be the alpha wolf. He will grow up to be the leader.

One day, though, the brave little cub came home from his explorations with an injured foot. He left again the next day, undaunted by his grisly experience of the day before, but that evening, he did not return. He never returned again. Who knows what had gotten him, but something clearly had.

Several more days passed after the disappearance of the first little cub before the two remaining ones peeked out, trembling, bodies pressed together, from the mouth of the little den. Another day still passed before they had the courage actually to emerge fully from the shelter of their home.

And suddenly I understood why human beings are such a miserable, craven lot. Natural selection has ensured that cowardly individuals have a higher survival rate than courageous ones. They live longer and produce more offspring. So it isn’t our fault, really, that we’re such a miserable, craven lot. It’s in our genes.

And yet it is our fault because cowardice isn’t the only thing that’s in our genes. We have somehow also evolved a conscience. We know, as Aristotle expressed it in the Nicomachean Ethics, that there are things we ought rather to “face death” than do (Book III 1). And yet few of us have the courage to face death to do the right thing. Few of us even have the courage to say “brother” to another who affirms the values we purport to hold dear.

Elizabeth Kolbert writes in the February 16th issue of The New Yorker that the Germans “failed miserably” to draw a line between the innocent and the guilty after the war. She writes, in fact, that to say they “failed miserably” would be “generous” (“The Last Trial”). That’s true, of course, though in a different sense, I think, than the one Kolbert meant, because the line, drawn properly, would encircle us all, all except for the few whose willingness to martyr themselves to do the right thing places them not outside the group, but above it.

We are all guilty of the cravenness that paved the way for the Holocaust, the glass through which we keep seeing darkly, which we keep turning over and over in a vain attempt to escape our own reflection. If we had the courage to recognize ourselves in it, then perhaps we could learn from it. But courage, sadly, is precisely what we lack.

(This piece is dedicated to my dear friend and German tutor of many years, Ebba Mørkeberg, 1924-2014. It originally appeared in the Feb. 17, 2015 issue of CounterPunch.)

Sport and the Sublime

The following piece originally appeared in the 25-27 January 2013 edition of CounterPunch. I am posting it here in honor of the 2014 Winter Olympics that have just gotten underway in Sochi, Russia.

I watched a lot of TV as a kid. That was before cable, so finding something interesting could be challenging. I was channel surfing one day when I happened on some diving. I didn’t know anything about diving, but even people who don’t know anything about it can appreciate the beauty of it. There was nothing better on, so I decided to watch for a bit.

One diver after another came on the screen and executed what seemed to be perfect dives. But then, suddenly, there was Greg Louganis. There’s a video of Louganis on YouTube that begins: “There are two categories of divers, those who perform with magnificent skill, grace, beauty, and courage–” there’s a pause and the narrator’s voice drops an octave, “then there is Greg Louganis.”

That pretty much sums it up. I was watching all these divers who seemed perfect, and then suddenly there was Greg Louganis. He wasn’t just perfect–he was sublime. I didn’t know anything about diving and yet watching Louganis gave me the feeling Emily Dickinson reportedly said one gets from good poetry–it made me cold to the bone. It gave me that shiver of the numinous that Rudolf Otto talks about in The Idea of the Holy.

That was a defining moment in my life. It was, I believe, when I first realized that there was more to reality than what appears on the surface of experience. Louganis executed the same beautiful movements as all the other divers, and yet there was something more in his movements than in everyone else’s. Something ineffable and yet so powerful that it hit the spectator with the force of a blow, like the shock of electricity. It seemed as if there were more energy in every fiber of his being, more vital life force. It was as if he were more real than the other divers, as if the other divers had been only moving images, whereas Louganis was a man in the flesh. Except that the other divers had been real. So Louganis seemed somehow to have more reality than the others.

*

I saw the same thing a few years ago in person. I’d just started taking figure skating lessons and used to go to competitions to cheer on a little boy whom my teacher was coaching. I stayed, once, to watch the next competition for slightly more advanced boys. One of the skaters caught my eye during the warmup. He was doing a very simple move, one I was trying, in fact, to learn myself at that time. It’s called “edges with three turns” and involves the skater making large arcs across the ice on alternating feet with a turn in the middle from forward to backward so that the tracings left on the ice look like a series of elongated number threes facing in opposite directions. It’s a simple looking move, yet it’s very difficult to perform well because, after the turn, the skater’s shoulders have a tendency to continue to pull him in the direction of the turn. If this motion is not checked, then it will be almost impossible for him to step forward again into the next arc. The shoulders and hips have to turn independently of each other, and the skater has to have a considerable degree of control over his upper body to keep the motion of the shoulders in check.

This boy, the one I was watching, can’t have been more than 14 years old, but he had the serene self possession of a dancer at the barre. His movements were slow, deliberate, and exquisite. I’d never seen anything like it. Not only did he have perfect form, he had perfect concentration. Other skaters raced past him, but he was so absorbed in what he was doing he seemed not to notice them. It was almost as if he were out there alone, as if the other skaters had been reduced to shadows. I could not take my eyes off him.

*

The idea that there are degrees of reality will seem strange to most people nowadays. It was a familiar one, however, to medieval and early-modern philosophers. For the medievals, things that were dependent on other things for their existence had less reality than did the things on which they were dependent. People, for example, had less reality than God. God had created people, hence people were dependent for their existence on God, whereas God’s existence was absolutely independent of anything else. God was the ultimately real thing, the thing with the greatest degree of reality, the thing that was more real than any other thing.

Kierkegaard also appears to have appropriated this idea of degrees of reality. Human beings, according to Kierkegaard, begin as ideas in the mind of God. The telos of an individual human life is therefore to bring the substance of that life into conformity with the form God conceived it should have. That’s what Kierkegaard means, I would argue, when he asserts that we must become who we are. We must become concretely who we are for God abstractly.

Most people, and that includes most athletes, don’t do that. Rather than striving to instantiate the ideal of their uniqueness, they constantly compare themselves to other people and try, in effect, to be better at being those people than those people are themselves.

There’s nothing wrong with competition. Competition can push athletes to higher levels of performance than they might otherwise achieve. What has not been adequately articulated, however, is precisely how this works. Competition improves performances, I would argue, only when athletes strive to instantiate a transcendent ideal that no particular performance can ever adequately instantiate. An athlete who strives in this way to instantiate an ideal provides a glimpse into the essence of that ideal that can spur on others in their own pursuit of it.

That’s a very different sort of phenomenon, however, from that of one athlete effectively copying another in the belief that he can do what the other has done better than the other did it himself. That kind of competition is inherently frustrating for the athlete in that he is trying to be something he’s not, and boring for the spectator in that he’s being subjected to what are effectively a bunch of imitations. When athletes strive only to win, rather than to be the best that they can be in their chosen sport, the reality of all the participants in a competition is diminished. Each becomes merely a copy of the others, and the ideal, which in a sense is more real than is any particular attempt to instantiate it, is lost sight of.

 

The idea that there are degrees of reality provides us a way to explain something that is otherwise inexplicable–greatness. Philosophers distinguish between quantitative and qualitative differences. A thing can be more or less blue, for example, in a quantitative sense. To be red, on the other hand, is to be something else entirely. Red is qualitatively different from blue.

A performance that is great is not distinguished from other performances in a merely quantitative sense. There’s something more to it that sets it apart. Greatness is qualitatively different from skill, even the most highly refined skill. It’s possible to execute a movement in a manner that many would judge to be technically perfect, and yet to be uninspiring. Conversely, it’s possible to deviate from universally accepted standards of performance and yet move an audience more profoundly than someone who is merely a consummate technician.

Part of this has to do with passion, but it is not reducible to passion. Passion is necessary for greatness, but it’s not sufficient. Passion is a natural attribute. Some people have more, others have less, just as some people have more or less patience than other people. Greatness, on the other hand, is not a natural attribute. A great artist, and every great athlete is one, has to be passionate, and yet he also has to be more than that. He has to have a gift. That’s why greatness is edifying. It bursts the confines of the temporal-phenomenal world and provides us with a glimpse of something that is transcendent. There’s a spark of divinity to it.

That’s why the sport/art dichotomy is false. All great athletes are artists. They give us glimpses of the sublime by bringing into their performances something more in a qualitative rather than a quantitative sense. That’s why it’s wrong for athletes to strive merely to win. It’s not simply that striving to win, as Aristotle pointed out, is misguided in that winning is something over which one has no direct control. To strive to win is to aim for the quantitative rather than the qualitative, and that is inherently limiting. Athletes who strive to be the best they can be at their chosen sport rather than simply to win this or that contest are pursuing something transcendent. That’s ennobling, both for the athlete and the spectator.

Why then is winning so important? Because it is more obviously valued than is being sublime. It takes less energy, less effort, less engagement on the part of the spectator to be caught up in a contest than to be caught up in a performance. We can follow a contest with only half, or even less, of our attention. To follow a performance, on the other hand, is energy intensive. Human beings, like every other living creature, like to conserve energy. Contests are a way of doing that. We are told who the winner is rather than having to determine that for ourselves. To follow a performance, in contrast, requires us to be fully present in the moment, to bring all our capacities of attention and discrimination to the fore.

When we do that, when we truly follow the performances of athletes, we sometimes find that the superb performance is not always the one that wins. There are a variety of reasons for that. Sometimes reputations of athletes unduly influence scores. Other times the scoring systems themselves are simply too arbitrary and opaque to ensure that the best performance wins. Finally, scores are sometimes manipulated to ensure that particular athletes win, independently of how well they perform.

All of these reasons are traceable back, however, to a suspicion of the ineffable. It’s ultimately impossible to articulate what makes a performance great, and not everyone is an equally good judge of greatness. So in the service of fairness, we attempt to construct a set of objective criteria for evaluating performances, and the performance that best satisfies these criteria is the one we call “the winner.”

 

*

The name of the skater I saw a few years ago is Alexander Aiken. I tried to follow his career for a while. If there were a competition in the area I would go in the hope of seeing him, and I would look for news of his results in Skating magazine, the official publication of U.S. Figure Skating, the governing body of the sport. I eventually lost track of him, however, as my interest in the sport waned. The new judging system has imposed a level of conformity that is increasingly making skating boring to watch, and the perennial problem of inequities in the judging too often makes the results of competitions an offense to the fair-minded.

I quit following competitive skating. I continued to skate myself, though,  because it is the only real exercise I get. When I arrived in Jacksonville, where my husband teaches and where I spend half the year when I am not teaching in Philadelphia, I was surprised to find that a very advanced skater had recently begun to train there. I noticed him as I entered the rink and stopped to watch him for a few minutes. Something about him looked familiar. And then I realized who it was; it was Alexander Aiken. He was older, of course, than he had been the last time I’d seen him, but his looks had otherwise not changed much. I think it was less his face, though, than his skating that caused the shock of recognition to run through me. His skating is distinctively beautiful.

I could hardly believe the coincidence of his showing up to train in Jacksonville. I’d first seen him in Philadelphia and had learned then that he was from Atlanta. What, I wondered, was he doing in Jacksonville? I went over and introduced myself when he finally got off the ice. I told him how I’d seen him years ago and had been impressed with his skating. He smiled and thanked me politely and continued unlacing his skates. I learned later, from his girlfriend Michelle Pennington, who is a former competitive ice dancer and one of the instructors at the rink, that he’d moved to Jacksonville to live with a sister whose husband was in the military and was stationed there.

We skated together, Aiken and I, the sublime and the ridiculous, through the end of the summer and into the early fall. It was wonderful. Most of the time, we were the only two people on the ice. I was concerned that my presence might interfere with his training, but it was wonderful to be able to observe a great athlete so closely, and he went out of his way to make me feel welcome. Aiken brought a better face to the sport than the one I had seen of late and that helped bring back the joy I had earlier taken in it.

I was excited to have someone to cheer on again in competitions. Aiken was going places. He’s not just supremely graceful; he has enormous athletic ability. He’s able to land triple axels solidly and consistently, the jump widely considered to be the most difficult in the whole sport. He won the bronze medal at the 2011 national figure skating championships in the Junior Men’s division and competed at the Senior level for the first time last year. He hadn’t placed terribly well, but that’s how the sport works. Skaters are rarely allowed to place well their first year in “seniors.”

 

The nationals are this week in Omaha. The senior men compete on Friday and Saturday. You won’t see Aiken there though. He’s been plagued over the last few years, as so many skaters are, by the astronomically high costs of training. The stress of that has taken its toll on him. He narrowly missed qualifying for nationals and decided he’d had enough. He’s quit skating, or at least quit competing. He said he can no longer afford the $50,000 he’d had to pay every year to train. He’d gotten some help, of course––most skaters at his level do––just not enough.

It’s hard for me to say, finally, which spectacle is more ennobling: the sublime performance that wins the contest, hence reinforcing our faith in providence, or the one that doesn’t. I think sometimes that it’s the latter. The celebrity of the winner makes him a kind of public figure, someone who belongs, in a sense, to the masses, whereas the triumph of the athlete who achieved greatness but did not win is a more private thing, something that belongs only to himself and that select group of spectators whose intensity of attention has initiated them into the realm of the transcendent.

No skater I’ve ever seen in person has made such a strong impression on me as Alexander Aiken has. He’s a sublime skater, a great athlete, a great man. This piece is for him.

(This article has been excerpted from Sequins and Scandals: Reflections on Figure Skating, Culture, and the Philosophy of Sport. I’m indebted to Michelle Pennington for her help with it.)

Education and Philosophy

One of the things I love about philosophy is how egalitarian it is. There’s no “beginning” philosophy and no “advanced” philosophy. You can’t do philosophy at all without jumping right into the deep end of the very same questions all philosophers have wrestled with since the time of Plato, questions such as what it means to be just, or whether people really have free will.

This distinguishes philosophy from disciplines such as math or biology where there’s a great deal of technical information that has to be memorized and mastered before students can progress to the point where they can engage with the kinds of issues that preoccupy contemporary mathematicians or biologists. There is thus a trend in higher education to create introductory courses in such disciplines for non-majors, courses that can introduce students to the discipline without requiring they master the basics the way they would have to if they intended to continue their study in that discipline.

Philosophy programs are increasingly coming under pressure to do the same kind of thing with philosophy courses. That is, they are essentially being asked to create dumbed-down versions of standard philosophy classes to accommodate students from other majors. Business majors, for example, are often required to take an ethics course, but business majors, philosophers are told, really do not need to read Aristotle and Kant, so it is unreasonable to ask them to do so.

Yeah, that’s right, after all, they’re not paying approximately 50K a year to get an education. They’re paying for a DEGREE, and the easier we can make that for them, the better!

But I digress. I had meant to talk about how egalitarian philosophy is. Anyone can do it, even today’s purportedly cognitively challenged generation. Just to prove my point, I’ll give you an example from a class I taught yesterday.

We’re reading John Searle’s Mind: A Brief Introduction (Oxford, 2004) in my philosophy of mind class this term. We’re up to the chapter on free will. “The first thing to notice,” Searle asserts, when examining such concepts as “psychological determinism” and “voluntary action,” “is that our understanding of these concepts rests on an awareness of a contrast between the cases in which we are genuinely subject to psychological compulsion and those in which we are not” (156).

“What do you think of that statement?” I asked my students. “Is there anything wrong with it?”

“It’s begging the question,” responded Raub Dakwale, a political science major.

“Yes, that’s right,” I said, smiling. “Searle is BEGGING THE QUESTION!” Mr. Big-deal famous philosopher John Searle, whose book was published by Oxford University Press, commits a fallacy that is easily identified by an undergraduate student who is not even a philosophy major. That is, the issue Searle examines in that chapter is whether we have free will. He even acknowledges that we sometimes think our actions are free when they clearly are not (the example he gives is of someone acting on a post-hypnotic suggestion, but other examples would be easy enough to produce).

But if we can be mistaken about whether a given action is free, how do we know that any of our actions are free? We assume that at least some of them are free because it sometimes seems to us that our actions are free and other times that they are compelled. But to say that it sometimes seems to us that our actions are free is a very different sort of observation from Searle’s that we are sometimes aware that we are not, in fact, subject to psychological compulsion.

To be fair to Searle, I should acknowledge that he appears to associate “psychological compulsion” with the conscious experience of compulsion, as opposed to what he calls “neurobiological determinism,” which compels action just as effectively as the former but which is never “experienced” consciously at all. So a charitable reading of the passage above might incline one to the view that Searle was not actually begging the question, in that an awareness of an absence of psychological compulsion does not constitute an awareness of freedom.

But alas, Searle restates his position on the very next page in a manner that is even more conspicuously question-begging. “We understand all of these cases [i.e., various cases of unfree action],” he asserts, “by contrasting them with the standard cases in which we do have free voluntary action” (158, emphasis added).

You can’t get more question-begging than that. The whole point at issue is whether any human action is ever really free or voluntary. This move is in the same family as the purported refutation of skepticism that was making the rounds of professional philosophers when I was in graduate school, but which I hope has since been exposed for the shoddy piece of reasoning that it was.

Back then, philosophers would claim that the classical argument in favor of skepticism rested on cases of perceptual illusion (e.g., Descartes’ stick that appears broken when half of it is submerged under water but which appears unbroken when removed from the water), but that perceptual illusions could be appreciated as such only when compared with what philosophers refer to as “veridical cases” of sense perception. That is, you know the stick is not really broken because removing it from the water reveals that it is not really broken. But if sense experience can reveal the truth about the stick, then the skeptics are mistaken.

But, of course, you don’t need to assume that the latter impression of the stick is veridical in order to doubt that sense experience could ever be veridical. All you need is two conflicting impressions of the same object and the assumption that the same stick cannot be both broken and straight. That is, all you need is two conflicting impressions of the same object and the law of non-contradiction to support skepticism. That seemed glaringly obvious to me when I was still a student, and yet scads of professional philosophers failed to grasp it.
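For readers who want the skeleton of that argument laid out explicitly, here is a minimal sketch in my own notation (the labels are mine, not anything drawn from the literature):

\begin{align*}
&\text{Let } B = \text{“the stick is broken,”} \quad S = \text{“the stick is straight,”}\\
&\text{and let } V(x) \text{ mean “the impression that } x \text{ is veridical.”}\\
&1.\ V(B) \rightarrow B \quad\text{and}\quad V(S) \rightarrow S &&\text{(what veridicality means)}\\
&2.\ \neg(B \wedge S) &&\text{(law of non-contradiction)}\\
&3.\ \therefore\ \neg\bigl(V(B) \wedge V(S)\bigr) &&\text{(from 1 and 2)}
\end{align*}

Step 3 is all the skeptic needs: at least one of the two impressions is non-veridical, and nothing internal to the experiences themselves tells us which, so neither is self-certifying. At no point does the argument assume that the dry-stick impression is the veridical one.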

Professional philosophers can be incredibly obtuse, and ordinary undergraduates, even today, with the right sort of help and encouragement, can expose that obtuseness. It’s a real thrill for a student to do that, to stand right up there with the big guys/gals and actually get the better of them in an argument, so to speak. It’s a thrill that is reserved, I believe, for philosophy. That is, it seems unlikely that anything comparable happens in the average calculus or organic chemistry class.

My point here is not to argue that philosophers in general are stupid, or even that Searle, in particular, is stupid. They aren’t, and he isn’t. Despite Searle’s occasional errors in reasoning, he’s one of the most original philosophers writing today. My point is that philosophy, as one of my colleagues put it recently, “is freakin’ hard.” It’s hard even after one has been rigorously schooled in it.

There’s no way to dumb down philosophy and have it still be philosophy. Philosophy is training in thinking clearly. There’s no way to make that easier for people, so why would anyone suggest that there was?

Perhaps it’s because philosophy is the discipline most threatening to the status quo, even more threatening than sociology. Sociology can educate people concerning the injustices that pervade contemporary society, but only training in critical and analytical thinking can arm people against the rhetoric that buttresses those injustices. This country, and indeed the world, would look very different today, if the average citizen back in 2001 had been able to recognize that “You’re either with us, or against us” was a false dichotomy.

(This piece originally appeared in the Nov. 22-24, 2013 Weekend Edition of CounterPunch)

America the Philosophical?

Carlin Romano’s book America the Philosophical (Knopf, 2012) opens with an acknowledgement that American culture is not widely perceived, even by Americans, to be very philosophical. He quotes Alexis de Tocqueville’s observation that “in no country in the civilized world is less attention paid to philosophy than in the United States” (p. 5) as well as Richard Hofstadter’s observation in Anti-Intellectualism in American Life (Knopf, 1963) that “[i]n the United States the play of the mind is perhaps the only form of play that is not looked upon with the most tender indulgence” (p. 3). Romano observes that while in England philosophers “write regularly for the newspapers” and in France philosophers appear regularly on television, “[i]n the world of broader American publishing, literature, art and culture, serious references to philosophy, in either highbrow or mass-market material barely register” (p. 11). Yet despite these facts he boldly asserts that the U.S. “plainly outstrips any rival as the paramount philosophical culture” (p. 15).

I know Romano. I’m on the board of the Greater Philadelphia Philosophy Consortium and Romano has attended some of our meetings. He’s an affable guy, so I was predisposed to like his book despite its wildly implausible thesis. Maybe there is a sense, I thought to myself, in which Americans are more philosophical than people in other parts of the world. We tend to be less authoritarian, I realized hopefully, and authoritarianism is certainly antithetical to genuine philosophical inquiry. Unfortunately, I didn’t have to reflect long to realize that we tend to be less authoritarian than other peoples because we have little respect for learnin’, especially book learnin’. We don’t believe there really are such things as authorities.

How is it possible that the U.S., despite all the evidence to the contrary that Romano marshals, can be “the paramount philosophical culture”? Romano’s answer is that the evidence that suggests we are not philosophical consists of nothing more than “clichés” of what philosophy is. He asserts that if we throw out these “clichés” and reduce philosophy to “what philosophers ideally do” (p. 15), then it will become obvious that America is the “paramount philosophical culture.” That is, Romano makes his case for America the Philosophical by simply redefining what it means to be philosophical, which is to say that he simply begs the question.

According to Romano, what philosophers ideally do is “subject preconceptions to ongoing analysis.” But do most Americans do this? It’s not clear to whom he’s referring when he asserts that Americans are supremely analytical. Some Americans are very analytical, but the evidence is overwhelming that most are not. Public discourse in the U.S. is littered with informal fallacies such as ad hominem, straw man, and post hoc, ergo propter hoc arguments that are almost never exposed as such. Americans like to “think from the gut”–which is to say that they tend not to care much for reasoned analysis.

Even if most Americans were analytical in this sense, however, that alone would not make them philosophical. Subjecting preconceptions to ongoing analysis is certainly part of what philosophers do, but it isn’t all they do. Philosophers have traditionally pursued the truth. That, in fact, is the classical distinction between the genuine philosophers of ancient Greece, figures such as Socrates and Plato, and the sophists. Socrates and Plato were trying to get at the truth. The sophists, on the other hand, were teachers of rhetoric whose primary concern was making money (not unlike for-profit educators today). They were reputed, in fact, to advertise that they could teach their pupils how to make the weaker argument appear the stronger. That is, they taught persuasion with relative, if not complete, indifference to the actual merits of the arguments in question. That’s why they were reviled by genuine seekers after truth.

Romano is unapologetic in presenting as his heroes the sophist Isocrates and the “philosopher” Richard Rorty. He devotes a whole chapter of the book to Isocrates, attempting to defend him against the characterization of sophists presented above. He does a good job of this, but at the end of the chapter the fact remains that Isocrates was far more practical in his orientation than was Socrates (or any of his followers). “Socrates,” observes Romano, “in the predominant picture of him drawn by Plato, favors discourse that presumes there’s a right answer, an eternally valid truth, at the end of the discursive road. Isocrates favors discourse, but thinks, like Rorty and Habermas, that right answers emerge from appropriate public deliberation, from what persuades people at the end of the road” (p. 558).

But people are often persuaded by very bad arguments. In fact, one of the reasons for the enduring popularity of the informal fallacies mentioned above is how effective they are at persuading people. Truth has to be more than what people happen to agree it is. If that were not the case, then people would never have come to consider that slavery was wrong, and slavery would never have been abolished. It won’t work to object that slavery was abolished precisely when the majority of humanity was persuaded that it was wrong. That objection fails not simply because masses of humanity had to be dragged kicking and screaming to that insight, but primarily because someone had to do the dragging. That is, someone, or some small group of individuals, had to be committed to the truth of a view that evaded the majority of humanity, and they had to labor tirelessly to persuade that majority that it was wrong.

Right answers have to be more than “what persuades people at the end of the road” (unless “end of the road” is defined in such a way as to beg the question). The sophists were the first PR men, presenting to young Athenian aristocrats the intoxicating vistas of what can be achieved through self-promotion when it is divorced from any commitment to a higher truth. In that sense, Romano is correct: Isocrates, to the extent that he elevates what actually persuades people over what should persuade them, is more representative of American culture than is Socrates.

But is it fair to say that most Americans are followers of this school of thought in the sense that, like Isocrates and Rorty, they have carefully “analyzed” traditional absolutist and foundationalist accounts of truth and found them wanting, that they have self-consciously abandoned the Enlightenment orientation toward the idea of the truth in favor of a postmodern relativism or Rortyan pragmatism? There’s a small portion of American society that has done this, a small subset of academics and intellectuals who’ve fallen under the Rortyan spell. Most Americans have never even heard of Richard Rorty, let alone self-consciously adopted his version of pragmatism.

That’s not to say we Americans are stupid, though. Hofstadter distinguishes, early in Anti-Intellectualism in American Life, between “intelligence” and “intellect.” Intelligence, he observes,

is an excellence of mind that is employed within a fairly narrow, immediate, and predictable range; it is a manipulative, adjustive, unfailingly practical quality—one of the most eminent and endearing of the animal virtues. …. Intellect, on the other hand, is the critical, creative, and contemplative side of mind. Whereas intelligence seeks to grasp, manipulate, re-order, adjust, intellect examines, ponders, wonders, theorizes, criticizes, imagines. Intelligence will seize the immediate meaning in a situation and evaluate it. Intellect evaluates evaluations, and looks for the meanings of situations as a whole. Intelligence can be praised as a quality in animals; intellect, being a unique manifestation of human dignity, is both praised and assailed as a quality in men (p. 25).

These characterizations of intelligence and intellect seem fairly uncontroversial, and according to them, philosophy would appear to be an expression of intellect rather than intelligence. That is, it’s possible to be intelligent, indeed to be very intelligent, without being at all intellectual. Hofstadter asserts that while Americans have unqualified respect for intelligence, they are deeply ambivalent about intellect. “The man of intelligence,” he observes, “is always praised; the man of intellect is sometimes also praised, especially when it is believed that intellect involves intelligence, but he is also often looked upon with resentment or suspicion. It is he, and not the intelligent man, who may be called unreliable, superfluous, immoral, or subversive” (p. 24).

What, you may wonder, does Romano think of this argument? That’s hard to say because the only references to Hofstadter in the book are on pages 3 and 8. His name is never mentioned again, at least not so far as I could tell, and not according to the index. Conspicuously absent from the index as well are both “intelligence” and “intellect.” Romano has written an entire book of over 600 pages that purports (at least according to the intro) to refute Hofstadter’s argument that Americans are generally anti-intellectual without ever actually addressing the argument.

Now that is clever! It’s much easier to come off looking victorious if you simply proclaim yourself the winner without stooping to actually engage your opponent in a battle. It’s kind of disingenuous, though, and in that sense it is a strategy more suited to a sophist than to a genuine philosopher.

(This piece originally appeared in the Nov. 8-10, 2013 Weekend Edition of CounterPunch)

Where the Conflict Really Lies

Alvin Plantinga is one of the most prominent figures in a group of philosophers who work on what one could call religious epistemology. Plantinga has decided to take on the “new atheists” in his latest book Where the Conflict Really Lies: Science, Religion, and Naturalism (Oxford, 2011), and while I applaud that project, I’m not optimistic that he is going to succeed in the manner he hopes he will.

Plantinga writes in the preface that “according to Christian belief, God has created us in his image, which includes our being able, like God himself, to have knowledge of ourselves and our world.”

Really? Our knowledge is like God’s? So God is constantly having to discard flawed theories concerning the nature of physical reality in favor of what appear to be better theories? So the discrete bits of God’s “knowledge” are as incompatible as general relativity and quantum mechanics? That’s a disturbing thought. I like to think God is always right, not that he is constantly getting things wrong and hence having to revise and improve his theories.

Plantinga contends that knowledge of physical reality is possible only if one assumes that there’s some kind of pre-established harmony between the way our minds work and the way the world is. That’s actually a very reasonable claim. He’s right in that without some assumption of that sort, we’re stuck in the Kantian realm of the way the world is for us, rather than the way it is in itself. Humanity has been deeply uncomfortable with this insight ever since Kant (or more correctly, the Pyrrhonists) first expressed it. The view that knowledge liberates us from the confines of our subjectivity seems an almost ineradicable intuition, a fact about the way the mind works. And yet, it is not merely difficult to defend; many would argue that it’s demonstrably false.

It’s not that science is a free-for-all, or that reality is however and whatever we think it is. To say that we cannot escape the confines of our subjectivity, even when we are at our most “objective” (as is the case, hopefully, when we are doing empirical science), is merely to say that the world is always going to look to us the way it does, not because of the kinds of individuals we are in particular, but because of the kinds of creatures we are in general. Kant, as I indicated above, didn’t actually discover this fundamental truth about what you could call our epistemological predicament. This insight goes at least all the way back to the ancient skeptics.

Plantinga is correct in his observation that atheists who claim to base their views in science lack support for their belief in the veridical nature of scientific “knowledge.” He’s incorrect, however, in his claim that believers stand on firmer ground.

Plantinga asserts that God “created us and our world in such a way that there is a match between our cognitive powers and the world. To use the medieval phrase, there is an adaequatio intellectus ad rem (an adequation of the intellect to reality).”

“Medieval” is the operative word here. Plantinga seems stuck in some medieval worldview. He appears to have forgotten that science “progresses.” That is, he appears to have forgotten that we are constantly getting things wrong. The history of science makes it glaringly obvious that the fit Plantinga posits between our cognitive powers and the world is far from a good one.

The views Plantinga expresses in Where the Conflict Really Lies are not new. He’s been engaging in elaborate machinations for years in an attempt to defend his position concerning the “match between our cognitive powers and the world.” One of the most intractable problems in the history of epistemological theorizing is known as “the Gettier problem.” Edmund Gettier published a little two-page paper entitled “Is Justified True Belief Knowledge?” back in 1963 that has been the bane of epistemologists ever since. Basically, what Gettier showed is that it is possible to have a justified belief that is true by accident, or a belief where the justification is not related to the truth in the way we intuitively believe it ought to be.

The second of Gettier’s two counterexamples to the view that knowledge is simply justified true belief concerns a man, Jones, whom another man, Smith, has reason to believe owns a Ford. Why does Smith believe this? “Smith’s evidence,” writes Gettier, “might be that Jones has at all times in the past within Smith’s memory owned a car, and always a Ford, and that Jones has just offered Smith a ride while driving a Ford.”

“Let us imagine, now,” continues Gettier,

that Smith has another friend, Brown, of whose whereabouts he is totally ignorant. Smith selects three place names quite at random and constructs the following three propositions:

  (g) Either Jones owns a Ford, or Brown is in Boston.
  (h) Either Jones owns a Ford, or Brown is in Barcelona.
  (i) Either Jones owns a Ford, or Brown is in Brest-Litovsk.

Each of these propositions is entailed by (f) [the belief that Jones owns a Ford]. Imagine that Smith realizes the entailment of each of these propositions he has constructed by (f), and proceeds to accept (g), (h), and (i) on the basis of (f). Smith has correctly inferred (g), (h), and (i) from a proposition for which he has strong evidence. Smith is therefore completely justified in believing each of these three propositions. Smith, of course, has no idea where Brown is.

But imagine now that two further conditions hold. First Jones does not own a Ford, but is at present driving a rented car. And secondly, by the sheerest coincidence, and entirely unknown to Smith, the place mentioned in proposition (h) happens really to be the place where Brown is. If these two conditions hold, then Smith does not know that (h) is true, even though (i) (h) is true, (ii) Smith does believe that (h) is true, and (iii) Smith is justified in believing that (h) is true.

It’s kind of a contrived example, but it makes a good point. A justified belief can be true by accident. When it is, that is, when the justification does not relate to the truth of the belief in the way we think it ought to, we’re inclined to think that the belief in question does not amount to knowledge, even though it satisfies what have long been taken to be conditions necessary and sufficient for knowledge.
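For readers who want the logical skeleton laid bare, here is a minimal sketch, in my own notation rather than Gettier’s, of the traditional analysis of knowledge and of where the example puts pressure on it:

    % The traditional "justified true belief" (JTB) analysis of knowledge:
    % S knows that p just in case p is true, S believes p, and S is justified in believing p.
    \[
      K(S,p) \;\longleftrightarrow\; p \,\wedge\, B(S,p) \,\wedge\, J(S,p)
    \]
    % Gettier's second case: Smith is justified in believing
    %   f = "Jones owns a Ford,"
    % and, by disjunction introduction (from f one may infer f \vee q for any q),
    % he is justified in believing
    %   h = "Either Jones owns a Ford, or Brown is in Barcelona."
    \[
      J(S,f) \quad\text{and}\quad f \vdash h \quad\Longrightarrow\quad J(S,h)
    \]
    % But f is false, and h is true only because Brown happens, by sheer
    % coincidence, to be in Barcelona. So h is a justified true belief that,
    % intuitively, Smith does not know: the right-hand side of the
    % biconditional is satisfied while the left-hand side is not.

The formalization adds nothing to Gettier’s argument; it simply makes it easy to see that the justification for h flows entirely from f, while the truth of h flows entirely from Brown’s whereabouts.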

Everyone has been trying to better Gettier and this has generated some very interesting work in epistemology. No one seems able to do it, however, without abandoning some intuition we feel is basic to knowledge. You can get around the Gettier problem, for example, if you just add a proviso onto your account of justification that requires that it relate to the conditions in the world that are responsible for the belief being true. The only problem with such an account of justification is that we are never in a position to determine whether this is the case. It is possible, on such a view, to have knowledge; you just can’t know when you know something. The problem is that we’re inclined to believe that you can’t know something without also knowing that you know it.
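To make that last point concrete, here is one way, again my own sketch rather than any particular epistemologist’s formulation, of stating such a proviso:

    % Strengthen justification to "grounded justification": S's grounds for
    % believing p must themselves be produced by the very state of affairs
    % that makes p true.
    \[
      K(S,p) \;\longleftrightarrow\; p \,\wedge\, B(S,p) \,\wedge\, J^{*}(S,p)
    \]
    % where J^{*}(S,p) holds only if the source of S's justification is the
    % fact that makes p true. This blocks the Ford/Barcelona case, since
    % Smith's grounds trace back to Jones's past behavior, not to Brown's
    % being in Barcelona. The catch: S has no internal way of checking
    % whether J^{*} holds, so S can know without being in a position to
    % know that he knows.

The proviso does its job against the Ford/Barcelona case; the price, as noted above, is that the knower can never certify from the inside that the proviso is satisfied.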

So the project to better Gettier continues…

Plantinga believes he’s done it, though. In his book Warrant and Proper Function (Oxford, 1993), he gives a detailed account of how his theory of what he calls “warrant” (his version of “justification”) avoids the Gettier problem. Look closely, however, at what Plantinga actually says about how “warrant” avoids it. The basic idea, says Plantinga, is simple enough:

[A] true belief is formed in these cases all right, but not as a result of the proper function of all the cognitive modules governed by the relevant parts of the design plan [i.e., God’s plan]. The faculties involved are functioning properly, but there is still no warrant; and the reason has to do with the local cognitive environment in which the belief is formed. Consider the first example, the original Smith owns a Ford or Brown is in Barcelona example. Our design plan leads us to believe what we are told by others; there is what Thomas Reid calls “the Principle of Credulity,” a belief-forming process whereby for the most part, we believe what our fellows tell us … [C]redulity is part of our design plan. But it does not work well when our fellows lie to us or deceive us … as in the case of Smith, who lies about the Ford (33-34).

What, you ask? Who said anything about lying? Gettier doesn’t say anything about lying. Jones never says he owns a Ford. Smith’s evidence, again, is “that Jones has at all times in the past within Smith’s memory owned a car, and always a Ford, and that Jones has just offered Smith a ride while driving a Ford.”

You can’t avoid the Gettier problem by pointing out that God designed us generally to believe what other people say, because no one lies in either of Gettier’s examples. It would appear that Plantinga hasn’t even read Gettier, because the example in question is Jones owns a Ford or Brown is in Barcelona, not Smith owns a Ford or Brown is in Barcelona, and it is the second of Gettier’s two examples (or counterexamples), not the first.

If God had a design plan for the operation of the human intellect, I’m inclined to believe that part of that plan was that we should actually read the works on which we argue we’ve improved. Something went wrong with that plan somewhere!

(An earlier version of this post appeared in the Sept. 27-29 Weekend Edition of CounterPunch).