On the Importance of Learning a Second Language

There is an article in today’s New York Times entitled “The Benefits of Failing at French” that reminds me of a debate in the Times back in 2011 entitled “Why Didn’t the U.S. Foresee the Arab Revolts?” Six scholars, academics, political appointees and think tankers debate the issue in The Times online. They all appear to believe it is very complicated.

Jennifer E. Sims, a professor and director of intelligence studies at Georgetown University’s School of Foreign Service and a senior fellow at the Chicago Council on Global Affairs, thinks the problem is our over-reliance on foreign assistance.

Reuel Marc Gerecht, a former CIA officer, thinks it’s that we were captured by “groupthink.”

Vicki Divoll, a professor of U.S. government and constitutional development at the United States Naval Academy, the former general counsel to the Senate Select Committee on Intelligence and assistant general counsel to the C.I.A., thinks the president is at fault for failing to allocate sufficient resources to the CIA. Then again, she says “no amount of resources can predict the unknowable. Sometimes no one is to blame.”

Richard K. Betts, the Arnold A. Saltzman Professor of War and Peace Studies, director of the International Security Policy program at Columbia University and the author of Enemies of Intelligence: Knowledge and Power in American National Security, thinks the problem is that “it is impossible to know exactly what will catalyze a chain of events producing change.”

Celeste Ward Gventer, associate director of the Robert S. Strauss Center for International Security and Law at the University of Texas at Austin and a former deputy assistant secretary of defense, thinks the problem is that we’re too preoccupied with “foreign policy minutiae.”

Peter Bergen, the director of the national security studies program at the New America Foundation and the author of “The Longest War: The Enduring Conflict between America and Al-Qaeda,” thinks the explanation is simply that revolutions are unpredictable.

There is probably some small grain of truth in each of these rationalizations. I’m only a professor of philosophy, not a professor of political science, let alone a former governmental bureaucrat, political appointee, or think-tank fat cat. It seems pretty clear to me, however, that despite all the theories offered above, the real reason we didn’t see the revolts coming was good old-fashioned stupidity. That’s our strong suit in the U.S.–stupidity. We’re the most fiercely anti-intellectual of all the economically developed nations, and proud of it! We go on gut feelings. Oh yes, our elected officials even proudly proclaim this. We don’t think too much, and on those few occasions when we do, we’re really bad at it for lack of practice.

One of the great things about Americans is that they are probably the least nationalistic people in the world. Oh yeah, they trot out the flag on the Fourth of July and for the Super Bowl, but that’s about it. A few crazy fascists brandish it throughout the year, but most people, except for a brief period after September 11th, pay no attention to them. Danes, in contrast, about whom I know a little because I lived there for eight years, plaster Danish flags all over everything. Stores put them in their windows when they have sales, they are standard decorations for almost every holiday and a must, in their small toothpick versions, for birthday cakes. This isn’t because they suffer from some sort of aesthetic deficiency that compels them to turn to this national symbol for want of any better idea of how to create a festive atmosphere. No, Danes throw Danish flags all over everything because they are incredibly nationalistic, as is just about every other European and almost everyone else in the rest of the world who’s had to fight off the encroachment of foreign powers onto their national sovereignty. We’ve seldom, OK, really never, had to do that. Still, if we, you know, seriously studied European history, we would have something of an appreciation for how basic nationalism is to the psyches of most people in the world, and we could use this as our point of departure for understanding the dynamics of international relations, as well as for appreciating the obstacles to our understanding of the internal dynamics of other countries.

Years ago, when I had just returned to the U.S. after having spent the previous eight years living in Denmark, I accompanied one of my former professors to a Phi Beta Kappa dinner in Philadelphia (he was the member, not I). The speaker that evening was the former editor of the one-time illustrious Philadelphia Inquirer. His talk, apart from one offhand comment, was eminently forgettable. That one comment, however, left an indelible impression on me. This editor, who I think was Robert Rosenthal, mentioned, at one point, that he did not think it was important for foreign correspondents to know the language of the country from which they were reporting because, as he explained, “you can always find someone who speaks English.”

How do you begin to challenge a statement of such colossal stupidity? It’s true, of course, that you can always, or at least nearly always, find a person who speaks English. I don’t mean to suggest that that’s not true. The problem is, if you don’t know the indigenous language, to use an expression from anthropology, then you really have no idea whether you are being told the whole story. And the thing is, if you ever do become fluent in a second language, and more or less assimilated into a culture into which you were not born, you will know that foreigners are never given the whole story. This was clear to me as a result of my having lived in Denmark. And Denmark is a country with which we are on friendly terms, a country that in many ways is strikingly similar to the U.S. How much clearer ought it to be with respect to countries with which we are not on friendly terms, countries we know are either deeply ambivalent about us or outright hate us?

You will always get a story in English, certainly, from a native about what is going on in some other country, but if you don’t know the language of the people, then you aren’t really in a position to assess whether the story might be biased. You might have some idea of the social class of the person who is your source, but how are you going to know what the people as a whole think of this class, or of this individual? How are you going to know whether this person has some sort of personal or political agenda, or whether he is simply attempting to whitewash what is going on out of national pride, or a fear of being perceived by foreigners as powerless, or provincial, or intolerant?

This seems a fairly straightforward point, yet it is one that nearly all Americans miss. We generalize from our own experience. We assume everyone is just like we are, or just like we are taught to be, which usually means that we assume pretty much everyone in the world is motivated primarily by the objective of personal, material enrichment. We don’t really understand things such as cultural pride or what is, for so much of the rest of the world, the fierce desire for self-determination, so we are pretty much always taken by surprise when such things seem to motivate people. That’s the real meaning of “American exceptionalism,” an expression that is used in an increasing number of disciplines from law, to political science, to history with varying shades of meaning in each. That is, the real meaning is that our difference from the rest of the world is that we are dumber. Yes, that’s right, we are the dumbest f#*@!ing people on the face of the earth and just now, when we need so desperately to understand what is going on in other parts of the world, we are reducing, and in some instances even completely eliminating, the foreign language programs in our schools and universities.

It’s no great mystery why we didn’t foresee the Arab revolts. The mystery is why we seem incapable of learning from either history or our own experience. It doesn’t help for the writing to be on the wall if you can’t read the language.

(This piece originally appeared under the title “The Writing on the Wall” in the February 28, 2011 edition of Counterpunch)


On Violating the First Amendment

A friend, Dave Nelson, who is a standup comedian and academic, jokes that the thing about being an academic is that it gives one a kind of sixth sense like the kid in the M. Night Shyamalan film, except that instead of seeing dead people you see stupid ones. He’s right about that. Sometimes this “gift” seems more like a curse in that one can feel overwhelmed by the pervasiveness of sheer idiocy. When I saw the New York Times’ piece from April 2, “Supreme Court Strikes Down Overall Political Donation Cap,” I wanted to crawl right back into bed and stay there for the rest of my life. Even the dissenting opinions were idiotic. Limits on the size of contributions to individual candidates are still intact. It’s just the overall caps that have been removed, so now while you can’t give more than $2,600 to a single candidate, you can give to as many candidates as you like. It seems the dissenters are worried, however, that the absence of an overall cap raises the possibility that the basic limits may be “circumvented.”

That sounds to me just a little bit too much like arguing over how many angels can dance on the head of a pin. “There is no right in our democracy more basic,” intoned Chief Justice Roberts, “than the right to participate in electing our political leaders.” Oh yeah? Well, if a financial contribution to a political campaign counts as “participating in electing our political leaders,” then a whole slew of Americans’ First Amendment rights are being violated all the time in that many Americans don’t have enough money to pay for the basic necessities of life, let alone have any left over to contribute to political campaigns. The rules of the political game have been written in such a way that the “participation” of the poor is limited before the process even gets started. Sure, they can attend protests, write letters, etc., etc. Or can they? What if their penury forces them to work around the clock? What if they are effectively illiterate? Even if they could do these things, however, the extent to which they could affect the political process is limited by the rules of the process itself. They have less money, so they have less say.

Philosophers are fond of invoking the ceteris paribus clause. All other things being equal, they say, this or that would be the case. The point, however, of invoking the ceteris paribus clause is to expose that all other things are not in fact equal. Ceteris paribus, limits on campaign contributions would infringe on people’s First Amendment rights. So if we think such limits do not infringe on people’s First Amendment rights, the next step is to ask why we think this. The obvious answer is that all other things are not equal. That is, people do not all have the same amount of money. Even in a country such as Denmark that has effectively wiped out poverty and hence where everyone in principle is able to contribute money to political campaigns, some people are able to contribute much more money than other people and thus able to have a much greater influence on the political process. Danes, being the intelligent people they are, understand that such an inequity is antithetical to democracy so they legislate that political campaigns will be financed with precisely the same amount of money and that this money will come directly from the government rather than from individuals.

This is, in fact, how pretty much every country in the economically developed world finances political campaigns and presumably for the same reason. Everyone who is not irredeemably stupid understands that to tether the ability of an individual to participate in the political process to the amount of money he can spend on such “participation” is a clear violation of the basic principles of democracy. If writing a check is a form of political expression, then the economic situation of millions of Americans excludes them a priori from such expression, which is to say that their rights are unjustifiably curtailed in a way that the rights of the wealthy are not. (I say “unjustifiably” on the assumption that few people would openly defend the explicitly Calvinist view that the poor are poor through their own fault.)

So the issue here is not really one of defending the First Amendment. It’s “pay to play” in the U.S. You have no money, you have no, or almost no, political voice. Pretty much everyone who is not, again, irredeemably stupid understands that. The haggling is not about protecting people’s First Amendment rights. It’s a power struggle between what in the eyes of most of the world would be considered the wealthy and the super wealthy.

But then one should not underestimate the number of the irredeemably stupid. “The government may no more restrict how many candidates or causes a donor may support,” pontificated Roberts, “than it may tell a newspaper how many candidates it may endorse.” Anyone who’s had an introductory course in critical reasoning, or informal logic, will immediately see that the analogy Roberts has drawn here is false. Roberts is using the terms “support” and “endorse” as if they are synonyms. They’re not synonyms though, at least not in Roberts’ analogy. The “support” a “donor” gives to a political candidate is financial, whereas the “endorse[ment]” a newspaper gives to a candidate is editorial. To suggest that such a distinction is unimportant is to beg the question. God help me, we must be the only country on the face of the earth where someone can make it not only all the way through law school without understanding such errors in reasoning, but all the way to the Supreme Court.

But fallacious reasoning isn’t the worst of Roberts’ crimes. Many on the left have long suspected people on the right are more or less closeted fascists. Well, Roberts has come out of the closet. Yes, that’s right. Roberts explicitly compared the removal of overall limits to campaign contributions to Nazi parades. If the First Amendment protects the latter, he asserted, then it protects the former. The analogy here is just as bad as the earlier one given that a person doesn’t have to pay to march in a parade. It’s a little more revealing, however, to those who have eyes to see.

(This piece originally appeared in Counterpunch, 4-6 April 2014)

Lies, Damned Lies, and Public Discourse on Higher Education

Two staggeringly inane points are being made ad nauseam in public discourse about higher education. The first is that tenure is an institution that has far outlived its usefulness (if it ever was useful). The second is that universities today need to focus on providing students with the technical skills they will need in order to effectively tackle the demands of the contemporary, technologically advanced workplace.

Kevin Carey, director of the education policy program at the New America Foundation, wrote last summer in The Chronicle of Higher Education that tenure was “one of the worst deals in all of labor. The best scholars don’t need tenure, because they attract the money and prestige that universities crave. A few worthy souls use tenure to speak truth to administrative power, but for every one of those, 100 stay quiet. For the rest, tenure is a ball and chain. Professors give up hard cash for job security that ties them to a particular institution—and thus leaves them subject to administrative caprice—for life.”

Carey seems to have confused tenure with indentured servitude. Tenure does not tie professors to particular institutions. A tenured professor is just as free to move to a new institution as a non-tenured one. Few will leave a tenured position for an untenured one, but that doesn’t make them less mobile than they would be if tenure were abolished. Academic stars seldom have difficulty moving from one tenured position to another, and professors who are not stars seldom have the opportunity to move.

I’m uncertain what Carey means by “administrative caprice.” In my experience, the faculties most subject to administrative caprice are those at for-profit institutions. Traditional colleges and universities more often than not share the governance of the university with the tenured faculty through the agency of a faculty senate, as well as through the judicious promotion of faculty to administrative positions.

Sure, academic stars don’t need tenure. One doesn’t become an academic star, though, by excelling as a teacher. One becomes an academic star by excelling as a scholar. Excellent scholars, however, are not always excellent teachers. A good university needs both. Of course if human beings were fully rational, then university administrators would realize that the long-term health of an institution depends on its good teachers as much as, if not more than, on the reputation of its scholars. No one gives money to his alma mater because of his fond memories of studying at the same institution where Dr. Famous Scholar taught. I give money every month to my alma mater even though not one of my professors was famous. They may not have been famous, but they were fantastic teachers who cared about their students and instilled in them a love of learning. Quaint, eh? That doesn’t change the fact, though, that I give money to the least famous of the institutions of higher education with which I have been affiliated and that I give it simply because of the quality of instruction I received there–and I am not alone.

Carey would likely counter that he is all for good teaching. He believes making professors “at-will employees” would require them to do “a great job teaching.” But who would be the judge of this “great teaching”? What would the standards be? If it were student evaluations, that could be problematic because students are not always the best judges of good teaching. Too many tend to give their most positive evaluations to instructors who give the fewest assignments and the highest numbers of As. Many come around eventually, of course. I had a student write me last winter to thank me for giving her the skills she needed to make it through law school. She had not written that letter upon her graduation from Drexel (let alone at the end of my course), however, but upon her graduation from law school! Unfortunately, we don’t solicit teaching evaluations from alumni for courses they took years earlier. Fortunately for me, I was tenured, so I could be demanding of my students without fearing that angry evaluations might cause me to lose my job. “At-will” professors are not so fortunate.

These are dark times in higher education. The intellectual backbone of a culture is the mass of university-level teachers who slave away in almost complete obscurity, not because they don’t have the intellectual stuff to make it in the highly-competitive atmosphere of “world-class scholarship,” but very often because they do not have the stomach for the nauseating degrees of self-promotion that are sometimes required to break into that world, and because they have too much conscience to abandon their students to their own, literally untutored, devices. Teaching is extraordinarily time consuming. It takes time away from research, the kind of research that brings fame and fortune. Teaching brings its own rewards, and thank heavens there are many who still value those rewards. Unfortunately, few such individuals are found among the ranks of university administrators.

As I said, however, this is not the only inanity that is being bandied about by talking empty-heads. The suggestion that universities should concentrate on providing students with technical skills is even more conspicuously ludicrous. The most obvious objection to this point is that the provision of technical skills is the purview of vo-tech (i.e., vocational-technical) schools and institutes, not universities. For the latter to suddenly begin to focus on imparting technical skills would effectively mean that we would no longer have universities. (That this may be the hidden agenda of the peculiarly American phenomenon of the anti-intellectual intellectual is understandable given that the days of their media hegemony would be threatened by even the slightest rise in the number of Americans who did not need to count on their fingers.)

There is a more profound objection, however, to the assertion that universities ought to focus on teaching technical skills: the shelf-life of those skills has become so short that any technical training a university could provide its students would be obsolete by the time of their graduation if not before. Dealing effectively and adaptively with technology is a skill acquired now in childhood. Many kids entering college are more tech savvy than their professors. Almost everything I know about computers I’ve learned from my students, not from the tech-support staffs of the various institutions with which I’ve been affiliated. One of my students just posted a comment to a class discussion in which he mentioned that one of his engineering professors had explained that what he learned in class might, or might not, apply once he was out in the workforce.

Technology is simply developing too rapidly for universities to be able to teach students the sorts of technical skills that old farts are blustering they need. Kids don’t need to be taught how to deal with technology. They know that. They need to be taught how to think. They need to be taught how to concentrate (something that it is increasingly evident they are not learning in their ubiquitous interactions with technology). They need to be taught how to focus for extended periods of time on very complex tasks. They need to be taught how to follow extended arguments, to analyze them, to see if they are sound, to see if the premises on which they are based are plausible, to recognize whether any of the myriad inferences involved are fallacious. They need to be taught that they are not entitled to believe whatever they want, that there are certain epistemic responsibilities that go along with having the highly-developed brain that is specific to the human species, that beliefs must be based on evidence, evidence assiduously, painstakingly, and impartially collected.

Finally, students need to be taught to trust their own educated judgment, not constantly to second guess themselves or to defer to a superior simply because that person is superior and hence in a position to fire them. They need to be taught to believe in themselves and their right to be heard, particularly when they are convinced, after much careful thought, that they are correct and that their superiors are not.

Unfortunately, young people are not being taught these things. We are preparing them to be cogs in a new kind of machine that no longer includes cogs. No wonder our economy, not to mention our culture more generally, is on the skids.

(This piece originally appeared in the 3 February 2014 issue of Counterpunch)

Education and Philosophy

One of the things I love about philosophy is how egalitarian it is. There’s no “beginning” philosophy and no “advanced” philosophy. You can’t do philosophy at all without jumping right into the deep end of the very same questions all philosophers have wrestled with since the time of Plato, questions such as what it means to be just, or whether people really have free will.

This distinguishes philosophy from disciplines such as math or biology where there’s a great deal of technical information that has to be memorized and mastered before students can progress to the point where they can engage with the kinds of issues that preoccupy contemporary mathematicians or biologists. There is thus a trend in higher education to create introductory courses in such disciplines for non-majors, courses that can introduce students to the discipline without requiring they master the basics the way they would have to if they intended to continue their study in that discipline.

Philosophy programs are increasingly coming under pressure to do the same kind of thing with philosophy courses. That is, they are essentially being asked to create dumbed-down versions of standard philosophy classes to accommodate students from other majors. Business majors, for example, are often required to take an ethics course, but business majors, philosophers are told, really do not need to read Aristotle and Kant, so it is unreasonable to ask them to do so.

Yeah, that’s right, after all, they’re not paying approximately 50K a year to get an education. They’re paying for a DEGREE, and the easier we can make that for them, the better!

But I digress. I had meant to talk about how egalitarian philosophy is. Anyone can do it, even today’s purportedly cognitively challenged generation. Just to prove my point, I’ll give you an example from a class I taught yesterday.

We’re reading John Searle’s Mind: A Brief Introduction (Oxford, 2004) in my philosophy of mind class this term. We’re up to the chapter on free will. “The first thing to notice,” Searle asserts, when examining such concepts as “psychological determinism” and “voluntary action,” “is that our understanding of these concepts rests on an awareness of a contrast between the cases in which we are genuinely subject to psychological compulsion and those in which we are not” (156).

“What do you think of that statement?” I asked my students. “Is there anything wrong with it?”

“It’s begging the question,” responded Raub Dakwale, a political science major.

“Yes, that’s right,” I said, smiling. “Searle is BEGGING THE QUESTION!” Mr. Big-deal famous philosopher John Searle, whose book was published by Oxford University Press, commits a fallacy that is easily identified by an undergraduate student who is not even a philosophy major. That is, the issue Searle examines in that chapter is whether we have free will. He even acknowledges that we sometimes think our actions are free when they clearly are not (the example he gives is of someone acting on a post-hypnotic suggestion, but other examples would be easy enough to produce).

But if we can be mistaken about whether a given action is free, how do we know that any of our actions are free? We assume that at least some of them are free because it sometimes seems to us that our actions are free and other times that they are compelled. But to say that it sometimes seems to us that our actions are free is a very different sort of observation from Searle’s that we are sometimes aware that we are not, in fact, subject to psychological compulsion.

To be fair to Searle, I should acknowledge that he appears to associate “psychological compulsion” with the conscious experience of compulsion, as opposed to what he calls “neurobiological determinism,” which compels action just as effectively as the former, but which is never “experienced” consciously at all. So a charitable reading of the passage above might incline one to the view that Searle was not actually begging the question in that an awareness of an absence of psychological compulsion does not constitute an awareness of freedom.

But alas, Searle has to restate his position on the very next page in a manner that is even more conspicuously question begging. “We understand all of these cases [i.e., various cases of unfree action],” he asserts, “by contrasting them with the standard cases in which we do have free voluntary action” (158, emphasis added).

You can’t get more question begging than that. The whole point is whether any human action is ever really free or voluntary. This move is in the same family as the purported refutation of skepticism that was making the rounds of professional philosophers when I was in graduate school, but which I hope has since been exposed for the shoddy piece of reasoning that it was.

Back then, philosophers would claim that the classical argument in favor of skepticism rested on cases of perceptual illusion (e.g., Descartes’ stick that appears broken when half of it is submerged under water but which appears unbroken when removed from the water), but that perceptual illusions could be appreciated as such only when compared with what philosophers refer to as “veridical cases” of sense perception. That is, you know the stick is not really broken because removing it from the water reveals that it is not really broken. But if sense experience can reveal the truth about the stick, then the skeptics are mistaken.

But, of course, you don’t need to assume that the latter impression of the stick is veridical in order to doubt that sense experience could ever be veridical. All you need is two conflicting impressions of the same object and the assumption that the same stick cannot be both broken and straight. That is, all you need is two conflicting impressions of the same object and the law of non-contradiction to support skepticism. That seemed glaringly obvious to me when I was still a student, and yet scads of professional philosophers failed to grasp it.

Professional philosophers can be incredibly obtuse, and ordinary undergraduates, even today, with the right sort of help and encouragement, can expose that obtuseness. It’s a real thrill for a student to do that, to stand right up there with the big guys/gals and actually get the better of them in an argument, so to speak. It’s a thrill that is reserved, I believe, for philosophy. That is, it seems unlikely that anything comparable happens in the average calculus or organic chemistry class.

My point here is not to argue that philosophers in general are stupid, or even that Searle, in particular, is stupid. They aren’t, and he isn’t. Despite Searle’s occasional errors in reasoning, he’s one of the most original philosophers writing today. My point is that philosophy, as one of my colleagues put it recently, “is freakin’ hard.” It’s hard even after one has been rigorously schooled in it.

There’s no way to dumb down philosophy and have it still be philosophy. Philosophy is training in thinking clearly. There’s no way to make that easier for people, so why would anyone suggest that there was?

Perhaps it’s because philosophy is the discipline most threatening to the status quo, even more threatening than sociology. Sociology can educate people concerning the injustices that pervade contemporary society, but only training in critical and analytical thinking can arm people against the rhetoric that buttresses those injustices. This country, and indeed the world, would look very different today, if the average citizen back in 2001 had been able to recognize that “You’re either with us, or against us” was a false dichotomy.

(This piece originally appeared in the Nov. 22-24, 2013 Weekend Edition of CounterPunch)

America the Philosophical?

Carlin Romano’s book America the Philosophical (Knopf, 2012) opens with an acknowledgement that American culture is not widely perceived, even by Americans, to be very philosophical. He quotes Alexis de Tocqueville’s observation that “in no country in the civilized world is less attention paid to philosophy than in the United States” (p. 5) as well as Richard Hofstadter’s observation in Anti-Intellectualism in American Life (Knopf, 1963) that “[i]n the United States the play of the mind is perhaps the only form of play that is not looked upon with the most tender indulgence” (p. 3). Romano observes that while in England philosophers “write regularly for the newspapers” and in France philosophers appear regularly on television, “[i]n the world of broader American publishing, literature, art and culture, serious references to philosophy, in either highbrow or mass-market material barely register” (p. 11). Yet despite these facts he boldly asserts that the U.S. “plainly outstrips any rival as the paramount philosophical culture” (p. 15).

I know Romano. I’m on the board of the Greater Philadelphia Philosophy Consortium and Romano has attended some of our meetings. He’s an affable guy, so I was predisposed to like his book despite its wildly implausible thesis. Maybe there is a sense, I thought to myself, in which Americans are more philosophical than people in other parts of the world. We tend to be less authoritarian, I realized hopefully, and authoritarianism is certainly antithetical to genuine philosophical inquiry. Unfortunately, I didn’t have to reflect long to realize that we tend to be less authoritarian than other peoples because we have little respect for learnin’, especially book learnin’. We don’t believe there really are such things as authorities.

How is it possible that the U.S., despite all the evidence to the contrary that Romano marshals, can be “the paramount philosophical culture”? Romano’s answer is that the evidence that suggests we are not philosophical consists of nothing more than “clichés” of what philosophy is. He asserts that if we throw out these “clichés” and reduce philosophy to “what philosophers ideally do” (p. 15), then it will become obvious that America is the “paramount philosophical culture.” That is, Romano makes his case for America the Philosophical by simply redefining what it means to be philosophical, which is to say that he simply begs the question.

According to Romano what philosophers ideally do is “subject preconceptions to ongoing analysis.” But do most Americans do this? It’s not clear to whom he’s referring when he asserts that Americans are supremely analytical. Some Americans are very analytical, but the evidence is overwhelming that most are not. Public discourse in the U.S. is littered with informal fallacies such as ad hominem, straw man, and post hoc, ergo propter hoc arguments that are almost never exposed as such. Americans like to “think from the gut”–which is to say that they tend not to care much for reasoned analysis.

Even if most Americans were analytical in this sense, however, that alone, would not make them philosophical. Subjecting preconceptions to ongoing analysis is certainly part of what philosophers do, but it isn’t all they do. Philosophers have traditionally pursued the truth. That, in fact, is the classical distinction between the genuine philosophers of ancient Greece, figures such as Socrates and Plato, and the sophists. Socrates and Plato were trying to get at the truth. The sophists, on the other hand, were teachers of rhetoric whose primary concern was making money (not unlike for-profit educators today). They were characterized, in fact, as advertising that they could teach their pupils how to make the weaker argument appear the stronger. That is, they taught persuasion with relative, if not complete, indifference to the actual merits of the arguments in question. That’s why they were reviled by genuine seekers after truth.

Romano is unapologetic in presenting his heroes as the sophist Isocrates and the “philosopher” Richard Rorty. He devotes a whole chapter of the book to Isocrates, attempting to defend him against the characterization of sophists presented above. He does a good job of this, but at the end of the chapter, the fact remains that Isocrates was far more practical in his orientation than was Socrates (or any of his followers). “Socrates,” observes Romano, “in the predominant picture of him drawn by Plato, favors discourse that presumes there’s a right answer, an eternally valid truth, at the end of the discursive road. Isocrates favors discourse, but thinks, like Rorty and Habermas, that right answers emerge from appropriate public deliberation, from what persuades people at the end of the road” (p. 558).

But people are often persuaded by very bad arguments. In fact, one of the reasons for the enduring popularity of the informal fallacies mentioned above is how effective they are at persuading people. Truth has to be more than what people happen to agree it is. If that were not the case, then people would never have come to consider that slavery was wrong, and slavery would never have been abolished. It won’t work to point out that slavery was abolished precisely when the majority of humanity was persuaded that it was wrong, not simply because masses of humanity had to be dragged kicking and screaming to that insight, but primarily because someone had to do the dragging. That is, someone, or some small group of individuals, had to be committed to the truth of a view the truth of which evaded the majority of humanity and they had to labor tirelessly to persuade this majority that it was wrong.

Right answers have to be more than “what persuades people at the end of the road” (unless “end of the road” is defined in such a way as to beg the question). The sophists were the first PR men, presenting to young Athenian aristocrats the intoxicating vistas of what can be achieved through self-promotion when it is divorced from any commitment to a higher truth. In that sense, Romano is correct: Isocrates, to the extent that he elevates what actually persuades people over what should persuade them, is more representative of American culture than is Socrates.

But is it fair to say that most Americans are followers of this school of thought in that, like Isocrates and Rorty, they have carefully “analyzed” traditional absolutist and foundationalist accounts of truth and found them wanting, that they have self-consciously abandoned the Enlightenment orientation toward the idea of the truth in favor of a postmodern relativism or Rortyan pragmatism? There’s a small portion of American society that has done this, a small sub-set of academics and intellectuals who’ve fallen under the Rortyan spell. Most Americans have never even heard of Richard Rorty, let alone self-consciously adopted his version of pragmatism.

That’s not to say we Americans are stupid though. Hofstadter distinguishes, early in Anti-Intellectualism in American Life, between “intelligence” and “intellect.” Intelligence, he observes,

is an excellence of mind that is employed within a fairly narrow, immediate, and predictable range; it is a manipulative, adjustive, unfailingly practical quality—one of the most eminent and endearing of the animal virtues. …. Intellect, on the other hand, is the critical, creative, and contemplative side of mind. Whereas intelligence seeks to grasp, manipulate, re-order, adjust, intellect examines, ponders, wonders, theorizes, criticizes, imagines. Intelligence will seize the immediate meaning in a situation and evaluate it. Intellect evaluates evaluations, and looks for the meanings of situations as a whole. Intelligence can be praised as a quality in animals; intellect, being a unique manifestation of human dignity, is both praised and assailed as a quality in men (p. 25).

These characterizations of intelligence and intellect seem fairly uncontroversial, and according to them, philosophy would appear to be an expression of intellect rather than intelligence. That is, it’s possible to be intelligent, indeed to be very intelligent, without being at all intellectual. Hofstadter asserts that while Americans have unqualified respect for intelligence, they are deeply ambivalent about intellect. “The man of intelligence,” he observes, “is always praised; the man of intellect is sometimes also praised, especially when it is believed that intellect involves intelligence, but he is also often looked upon with resentment or suspicion. It is he, and not the intelligent man, who may be called unreliable, superfluous, immoral, or subversive” (p. 24).

What, you may wonder, does Romano think of this argument? That’s hard to say because the only references to Hofstadter in the book are on pages 3 and 8. His name is never mentioned again, at least not so far as I could tell, and not according to the index. Conspicuously absent from the index as well are both “intelligence” and “intellect.” Romano has written an entire book of over 600 pages that purports (at least according to the intro) to refute Hofstadter’s argument that Americans are generally anti-intellectual without ever actually addressing the argument.

Now that is clever! It’s much easier to come off looking victorious if you simply proclaim yourself the winner without stooping to actually engage your opponent in a battle. It’s kind of disingenuous though and in that sense is a strategy more suited to a sophist than to a genuine philosopher.

(This piece originally appeared in the Nov. 8-10, 2013 Weekend edition of Counterpunch)

When Bad Things Happen to Good Academics

I wonder sometimes what makes people go bad. There doesn’t seem to be any logic to it. James Gilligan, a forensic psychiatrist who has worked with serial killers, writes that nearly all of them were abused as children. That makes sense to me. I’m inclined to think that people are like other animals, that if they get what they need when they’re young, they grow up to be well-adjusted members of their species. We know how to make an animal, a dog for example, vicious: simply mistreat it. My understanding is that that works on pretty much any animal. If it gets what it needs when it’s young, it will turn out to be a fine adult. If it doesn’t, it won’t. It’s that simple.

I like this view, not simply because it’s humane, but also because it’s optimistic. It gives us a formula for wiping out cruelty and intolerance. We just need to work to ensure that people get what they need. We need to make sure that parents don’t have so many financial worries that they cannot be sufficiently attentive to their children, or worse, that they end up taking out their stress on their children. We need to make sure that every person, every job, is accorded respect, that people are treated with dignity, etc., etc., and eventually cruelty and inhumanity will become things of the past. That’s a tall order, of course, and perhaps it’s idealistic, but it’s something to aim at anyway. There was a time when people said things such as poverty and hunger could never be wiped out. But we’ve made great strides in eliminating them, and have even eliminated them completely in parts of the world. It’s widely believed now to be a question of will, not of practical possibility. If we want to eliminate poverty and hunger, we can.

I like to think that the same thing is true with cravenness and cruelty (meaning that it can be wiped out if we have the will to do so) and generally, I do believe it. But sometimes I’m confronted with examples of what seems to be completely gratuitous and inexplicable viciousness from people whose lives, to all outward appearances anyway, would seem to be pretty cushy, people who give no evidence (no other evidence anyway) of having been abused as children. The mystery of why some people go bad gives me a certain sympathy with John Calvin, and others who believe in predestination, or the view that some people are just inherently bad. I don’t really believe that, but in my weaker moments, I wonder if it might not be true.

There are just so many variables. Is it not enough to have loving and attentive parents? Can having been picked last for a team in gym class cause a wound that festers for years leading finally to generalized suspicion and paranoia as an adult? Can one slight on the playground explain a vicious and unprovoked attack on a colleague years later?

My mother once said that in her experience, religion made good people better and bad people worse. (Both her parents were ministers in the Assemblies of God church.) The same thing, sadly, seems to be true of academia. I don’t believe there is a better life than that of a tenured academic. Hardly ever in human experience are the stars aligned so perfectly as they are in the lives of tenured academics. Teaching of any sort is fulfilling, but most teaching doesn’t come with the job security and other material benefits routinely accorded to the tenured academic. To be paid to teach, not to mention to read, and write, well, it’s like winning the lottery.

I had some wonderful teachers when I was in college. This led me to believe that teachers were, in general, not simply wiser and more learned than the average person, but also kinder, more considerate, more understanding and tolerant. This made sense to me because they had what appeared to be wonderful jobs. How could anyone not be happy with such a life, I asked myself, and how could anyone who was happy fail to be anything but nice?

Since then, however, I have learned that two kinds of people enter academia: (1) well adjusted people, people who are basically kind and decent, sympathetic and empathetic, people who love to read and sometimes (though not always) also to write, people who like people in general and like to think that in their own small way they are doing something to better the human condition, and (2) maladjusted people who like to use their learning as a club with which they can intimidate others, people who suffer from varying degrees of paranoia, people possessed of a messianic zeal to single-handedly save humanity from what in their fevered imaginations they believe to be the ravages inflicted on it by the forces of evil they take to be embodied in the form of despised colleagues, people who spend more time plotting to undermine and even publicly humiliate these colleagues than they spend on teaching.

There is almost no way to check the damage the latter sort of academic can cause once he or she becomes tenured. They sit plotting and poisoning the air in their departments until they retire, and they do not generally retire until very late in life because they thrive on conflict, a kind of conflict that it is hard to find outside a professional context. When, as sometimes happens, I’m confronted with the spectacle of the damage such people can do, the havoc they can wreak in an otherwise harmonious community of scholars, the pain they can cause to colleagues for whom they have conceived a pathological dislike, I have a certain sympathy with the anti-academic element in our vociferously anti-intellectual society. Academics are not really the plague that they are increasingly represented as being, but there is, lamentably, a sizable contingent that gives the rest of us a bad name.

On Race and Intelligence

My fifth grade class photo.

One of the readers of this blog, who came to it after having read a piece in the online political magazine CounterPunch, suggested that I should post, after a suitable interval, all my articles from CounterPunch to this blog. I published a piece recently in CounterPunch on racism, so I thought perhaps I should post an earlier piece I did on racism here. I think it is a good piece to follow the post “On Teaching” because it relates to that topic as well. This piece originally appeared under the title “Racism and the American Psyche” in the Dec. 7, 2007 issue of CounterPunch.

Race is in the news again. First it was the Jena Six, then Nobel laureate James D. Watson’s assertion that blacks are less intelligent than whites, and finally, a series of articles two weeks ago in Slate arguing that there was scientific evidence to back Watson’s claim.

The reaction to these recent developments was predictable. There have been a number of heated debates on the internet concerning not only race and intelligence, but also the appropriateness of studying race and intelligence. Two crucial points have yet to be made, however. The first concerns the contentious association of intelligence with IQ scores, and the second is the equally contentious assumption that we have anything like a clear scientific conception of race.

Let’s take the first one first. What is intelligence anyway? We have no better grasp of this than we have of the relation of the mind to the brain. Sure, some people can solve certain sorts of puzzles faster than other people, but everyone knows people who are great at Scrabble, or crosswords, or chess, or who can fix almost any mechanical or electrical gadget, but who seem unable to wrap their minds around even the most rudimentary of social or political theories. Then there are the people with great memories who are able to retain all the elements of even the most arcane theories and who can undertake an explanation of them if pressed, but whose inability to express them in novel terms betrays that they have not really grasped them after all. Other people–I’ve known quite a few of this type–have keenly analytical minds. They can break individual claims, or even entire theories, down into their conceptual components, yet they appear to lack any sort of synthetic intelligence in that they are unable to see the myriad implications of these analyses. Still other people are great at grasping the big picture, so to speak, but have difficulty hanging onto the details.

Some people plod slowly and methodically toward whatever insights they achieve and others receive them almost effortlessly, through flashes of inspiration. But the insights of the former group are sometimes more profound than those of the latter group. Then there are people who are mostly mistaken in their beliefs, sometimes quite obviously so, but correct in some one belief the implications of which are so staggering that we tend to forget they are otherwise unreliable.

I’m inclined to put Watson in this last group. Perhaps that’s not fair. After all, I know of only one point on which he is obviously mistaken. That mistake is so glaring, however, that it leads me to think he is probably more like an idiot savant than a genuinely intelligent human being. I.Q. scores represent something. It just isn’t all that clear what. To suggest that they represent intelligence in any significant sense is thus to betray that one has less than the ideally desirable quantity of this quality himself.

Sure, the mind, and therefore intelligence, is intimately connected with the brain. Read Oliver Sacks if you want to see just how intimate that connection is. Sacks is one of my favorite authors not simply because the substance of his writings is so fascinating, but also because he is himself so clearly intelligent. Not only does he not go leaping to conclusions on issues that lie outside his area of professional expertise (though I have to say I’d be more interested to hear Sacks’ social and political views than Watson’s), he doesn’t go leaping to conclusions about the implications of what he has observed in his own work in neurology. He’d be one of the first people, I think, to defend the claim that we do not yet have a clear enough idea of what intelligence is to be reliably able to quantify it. We don’t even understand it well enough yet to be able to say confidently that it is quantifiable. At this point, all we can say is that it appears so intimately connected with the brain that it can, in some sense, be associated with, or represented by, we-know-not-yet-what neurological activities or tendencies.

Okay, so far, so little. But what is a black brain and what is a white brain? Most blacks in the U.S., as opposed to blacks in Africa, have a great deal of white blood, or whatever you want to call it. If whites really were more intelligent than blacks, that would mean African-Americans would be that much more intelligent than Africans. (I’m sure my friend, the Nigerian author, Chimamanda Ngozi Adichie, would be interested to hear that one.) There may well be people who believe this. I am not aware of any empirical evidence, however, that supports such a conclusion. My own experience does not support it. I grew up in a predominantly black neighborhood and attended predominantly black schools from fourth grade to college. Since that time I have also met more than a few Africans. I couldn’t detect any difference in intelligence. I’m unaware of even anecdotal evidence that would support the conclusion that there was such a difference. Do you see what I’m saying? We’re not looking at a slippery slope here, but at a meteoric descent down into a pile of deep doo-doo.

From what I’ve read, there is no clear scientific definition of race. “Race” is just a name we give to a collection of physical characteristics such as eye and hair color and degree of pigmentation of the skin. There is no race gene. There are just genes that encode for these individual characteristics. So how many, and what sort, of characteristics does one have to have to be either black or white? It is some kind of ineffable sum, isn’t it? Blacks sometimes have very pale skin, some whites actually have darker skin than some blacks. Blacks even occasionally have blue eyes, or straight hair, just as whites often have brown eyes or tightly curled hair.

In the past, we just arbitrarily determined what made a person black, and, by implication, white. Since, presumably, we have gotten beyond the point where we would say that even one drop of black blood makes a person black, the only reasonable definition of race (even given its circularity) would, therefore, appear to be one based on the statistical representation of the various races in one’s family tree. That would mean people with predominantly white, or perhaps I should say “white-ish,” ancestry would be considered white. Have you ever seen a photo of Charles Chesnutt or Anatole Broyard? Not only are these guys clearly white, according to this definition, there are a whole lot of other people walking around this country who call themselves “black” because of the social environment into which they were born, but who ought properly to consider themselves white.

Since when have scientific studies been undertaken on ineffable, or arbitrarily determined, classes of things? It’s like trying to determine whether people with purportedly good taste are more intelligent than people with purportedly bad taste, or whether people who live in Chicago are more intelligent than people who live in L.A. You might undertake such a thing as a sociological study with some arbitrarily agreed upon criteria for what would constitute good and bad taste, or for how far out into the suburbs you want to go before you decide you have left Chicago, as well as with some equally arbitrarily agreed upon criteria for what constitutes intelligence.

You cannot undertake such a thing though as a scientific study (no matter how convinced you may be of the genetic superiority of people who live in Chicago), and to think that you could betrays that you have a very weak grasp of what constitutes natural science. Given that race, at least from the standpoint of natural science, is nothing more than a collection of certain physical characteristics, the view that white people are more intelligent than black people is not uncomfortably close to the view of the Nazis that blue-eyed blonds were inherently superior to everyone else–it is essentially the same thing.

As I said earlier, I spent a huge portion of my life in the almost exclusive company of black people. I’ve been around black people and I’ve been around white people and I haven’t found any general differences in terms of intelligence. My experience has led me to believe that most of what often passes for intelligence is actually intellectual self-confidence, confidence in one’s own reasoning powers, confidence in the value of one’s insights. Teachers, of whom I am one, will tell you that you can just see some people’s brains seize up when they are confronted with tasks they fear may be beyond them but which sometimes later prove not to have been beyond them. This fear, however, that certain tasks are beyond one, is a substantial obstacle to completing them. One stumbles again and again, fearing his “guess” is just that, a guess, rather than understanding. One fails to pursue an insight for fear that it is not genuine, or from fear that it is so obvious that others have come to it long ago.

I don’t mean to suggest that there are not innate differences in intelligence among human beings. I’m sure there are, but I agree with what I believe Noam Chomsky said somewhere about how these differences, measured relative to the difference in intelligence between human beings and their closest relatives, the apes, are simply vanishingly small.

I construe my job as an educator not to impart knowledge, but to nurture intellectual confidence. (Of course this could be partly a defense mechanism because I am a philosopher, which means I don’t have any knowledge to impart.) I try to teach critical thinking skills, of course, but even more important to me is somehow to get my students to believe in their own intellectual potential because even these skills, I believe, can, at least to a certain extent, be acquired naturally by people who are confident in their ability to acquire them.

I say, teach people to believe in themselves and then see what they are able to do with that faith. But be very careful when you start judging the results because if anything of value has emerged from the recent debates on race and intelligence, it is that many of us in the U.S. are much closer to the edge of idiocy than we would like to admit. Noted intellectuals have failed to grasp even the most basic facts about what constitutes natural scientific research and failed to understand that to parade this ignorance in the way they have before a public still marked by social and economic inequities that cut along racial lines is offensive in the extreme. The whole thing has been very humbling. It has shown, I believe, that racism is still very firmly entrenched in the American psyche.

(This piece originally appeared under the title “Racism and the American Psyche” in the Dec. 7, 2007 issue of CounterPunch.)