A Cure for Academic Bullying

Workplace bullying is an increasing problem. Books are being written about it, and there is even a Workplace Bullying Institute. The problem isn’t restricted to the business world. Books such as Faculty Incivility: The Rise of the Academic Bully Culture and What to Do About It, Bully in the Ivory Tower: How Aggression and Incivility Erode American Higher Education, and Workplace Bullying in Higher Education suggest that bullying is a particular problem among academics.

Unfortunately, academic bullying is often allowed to go on unchecked. That’s just how academics are, people think. What can you expect? It’s hard to control tenured faculty, administrators argue, because there is little you can do to discipline them.

Rot starts from the top, though. The failure of administrators to curb academic bullying and other forms of professional misconduct on the part of faculty is the reason academic departments become dysfunctional. Faculty harass and bully one another with impunity. Distressed administrators have even been known to reward troublemakers in a misguided attempt to win their goodwill, not realizing that the troublemakers see such gestures as a sign of weakness and a green light to cause even more trouble.

Bullying can sometimes take such unequivocal forms as yelling at or publicly disparaging the victim, but micro-aggressions are the bully’s trademark because there are innumerable opportunities for them and because no single micro-aggression ever appears sufficiently heinous to warrant disciplinary action. Micro-aggressions include such things as a consistently condescending tone of voice on the part of the bully toward her target, repeatedly interrupting the target when she attempts to make a point in a department or committee meeting, laughing, making faces, or whispering to colleagues when the target speaks, and failing to respect the target’s authority as a committee chair, program director, or academic advisor. (More examples of bullying are listed in an article entitled “Tackling the Menace of Workplace Bullying” on the website Law Crossing.)

People usually try to ignore micro-aggressions. Sometimes they even worry they’ve imagined them. People don’t expect to be relentlessly taunted and goaded. Human beings are social creatures, and evidence suggests that their default position relative to others is trust (see, for example, Louis Quéré, “The Cognitive and Normative Structure of Trust,” and Guido Möllering, “The Nature of Trust: From Georg Simmel to a Theory of Expectation, Interpretation and Suspension”).

That people are social creatures and, all other things being equal, generally decent, kind, sympathetic and empathetic toward those with whom their lives bring them into contact holds, I believe, the key to controlling academic bullies, and any other kind of bully for that matter. People don’t like bullies. Since all human beings, as social creatures, want to be liked, bullies can be controlled, to a large extent anyway, if not entirely, by simple public condemnation of their behavior. Someone in a position of authority has to make it clear that the offending behavior is unacceptable and will not be tolerated. Academic departments, like other professional communities, become toxic when people in positions of authority are reluctant to do this.

The absence of an open condemnation of unacceptable behavior makes people fearful that if they express disapproval of such behavior, they’ll draw the attention of the bully and become her next victim. Worse, rather than expressing disapproval, many people will try to ingratiate themselves with the bully in order to insulate themselves from attack, hence rewarding the bully socially for her bullying behavior.

A bully whose behavior is positively reinforced by frightened colleagues quickly becomes out of control. There are ways, however, to discipline faculty, even tenured ones. They can be denied authority on committees, excluded from departmental social functions and given teaching schedules that effectively isolate them from the rest of the faculty. In extreme cases they can be excluded from serving on committees, assigned undesirable courses, have their teaching loads increased and be denied promotion and sabbatical leave.

Ideally, a code of professional conduct that clearly indicates what sorts of behavior are considered unacceptable would become part of the bylaws of the department, college, and/or university. This code can then be referred to when taking disciplinary action. Such a code isn’t necessary, however, for disciplining academics for bullying and other forms of professional misconduct. There are myriad ways chairs and other upper-level administrators can make clear to faculty that they will not tolerate unacceptable behavior.

The safest and most effective way to discipline faculty, however, is simply to openly condemn bad behavior. A statement by the chair at a department meeting that harassing and badgering colleagues, raising one’s voice at a colleague, rolling one’s eyes, or making a face when a colleague is speaking are all unacceptable can have a dramatic effect because everyone will know at precisely whom these remarks are aimed. Few things are so humiliating for an adult as to have it pointed out publicly that she is chronically behaving like an ill-mannered child, and human beings, being social creatures, are acutely sensitive to public humiliation.

A subtle wave of relief will ripple through those present at the meeting because they will feel that finally there is something they can do when they are the victims of bullying by colleagues: they can complain to the chair. Gradually, people will begin to band together against the bully or bullies.

I’ve spoken so far only about the general harassment and bullying of colleagues. Everything I have said about that, however, is equally true of other forms of unacceptable professional behavior, such as sexual harassment. There have been several highly publicized cases of sexual harassment among academics in recent years. Emphasis has tended to be placed on the harassers themselves. The problem, I believe, however, is less the individuals than what would appear to be a lack of moral leadership in the environments that have allowed the harassment to take place. It isn’t difficult to communicate to a colleague that that sort of behavior is unacceptable. If it continues over a period of weeks, months, or even years, it’s because those in authority have decided to look the other way.

A department chair needs to have the courage to publicly condemn unacceptable behavior and upper-level administrators such as the dean of the relevant college need to support the chair in such condemnations. I have seen firsthand the effect that strong moral leadership can have on a department and the effect that the absence of such leadership can have.

Few people, it seems to me, understand the nature of moral authority. A moral leader is not a “nice” person in the sense in which people generally understand that term. A moral leader is not someone who tries to look the other way when people behave badly, or endeavors always to interpret malevolent behavior in a way that makes it appear benign. Sometimes people’s behavior is conspicuously ill intentioned and interpreting it in any other way can have disastrous consequences.

Plato addresses this problem in an early examination of what constitutes just behavior in his Republic. “[E]veryone would surely say,” observes Socrates, “that if a man takes weapons from a friend when the latter is of sound mind, and the friend demands them back when he is mad, one shouldn’t give back such things, and the man who gave them back would not be just” (Republic, 331c–d). Giving back the weapons wouldn’t be just, of course, because the “mad” man is going to use them for malevolent purposes and will likely do things that he himself will regret when he has recovered his sanity.

People are sometimes ill intentioned and it is not a kindness toward anyone to fail to acknowledge that. Certain forms of behavior are unacceptable, however, quite independently of the intentions behind them. The reluctance to recognize unacceptable behavior as such is not equivalent to being “nice.” It is cowardice and people in positions of authority who suffer from this conflation of decency and cowardice can wreak untold damage on those over whom they have authority.

A moral leader is not necessarily perfect. No human being, after all, is perfect. A moral leader is not necessarily a warm, effusive person, not necessarily outgoing or gregarious. A moral leader may lack a sense of humor. There are numerous other personal flaws from which they may suffer. What makes a moral leader, or what gives a person moral authority, is that they exhibit an unwavering commitment to decency and fairness, that they openly and unequivocally condemn unacceptable behavior while at the same time, continuing to evince respect for those who engage in it.

That is, unacceptable behavior must be quickly and unequivocally condemned. It is important to appreciate, however, that only the behavior should be condemned, not the people who engage in it. Anyone can behave badly, at least occasionally, and an environment where harassment and bullying have become the rule rather than the exception encourages people who would not otherwise behave in such a way to do so as a form of self-defense.

A moral leader is someone who can make clear both that certain forms of behavior are unacceptable and that even those who engage in them habitually are capable of reforming themselves. People need to feel that they can redeem themselves when they’ve behaved badly. A moral leader is someone who makes clear that they believe everyone under their authority is perfectly capable of behaving properly and that only such behavior is acceptable.

A moral leader has to have the courage to condemn unacceptable behavior, knowing that doing so will expose their own behavior to closer scrutiny than most people are comfortable with. It takes a lot of courage to throw the first stone, so to speak, particularly since none of us is without sin. A moral leader has to have that courage, however, or we are all lost.

(This piece originally appeared in the May 2, 2017 issue of Counterpunch under the title “Academic Bullying and the Vacuum of Moral Leadership in the Academy.”)

“Fake News” and the Responsibility of Philosophers

“Fake news” is not actually a new phenomenon. Sheldon Rampton and John Stauber document in their book Trust Us, We’re Experts that it is an invention of the public relations industry that is as old as the industry itself. The First Amendment makes it pretty hard to prevent such efforts to manipulate public opinion. That’s the price it appears we have to pay to live in a free society. It wouldn’t be so serious a problem as it is if people in the field of higher education didn’t fall down on their responsibility to alert the public to it.

A recent case in point is an article entitled “Study: For-Profits Match Similar Nonprofits in Learning Results” that ran in the January 11th issue of Inside Higher Education. The third paragraph cites the study as claiming that “[i]n all six comparisons, students at proprietary institutions actually outperformed the students at the nonproprietary comparison institutions.”

Who would have thought that? I mean, really, aren’t the for-profits infamous for having poor learning outcomes? One doesn’t actually even have to look at the original study, however, to realize that something is fishy with it. The first red flag is the fact that the study uses the euphemism “proprietary” institutions rather than the straightforwardly descriptive “for-profits.”

The study is described as measuring “learning outcomes in six areas for 624 students from four for-profit higher education systems, which the study does not name, and then compar[ing] the scores with those of a matched group of students from 20 unnamed public and private institutions that were selected because they were similar to the for-profits on key measures related to academic performance” (emphasis added).

The second red flag is the “matched group of students.” Matched in what sense? That isn’t explained.

The third red flag is that neither the traditional nonprofit institutions nor the for-profit ones are named.

The fourth red flag is that the nonprofit institutions were selected because they were “similar to the for-profits on key measures related to academic performance.” Really? Since for-profits are reputed to have abysmal results in terms of academic performance, they must have searched long and hard to find nonprofits that had similarly abysmal results, if indeed they really did find such institutions, which cannot be verified since they are “unnamed.”

The whole thing reminds me of an old television commercial for Rolaids. Someone dumps a white powder into a beaker of what appears to be water with red food coloring in it, then stirs it; the liquid gradually becomes clear, while a voiceover announces, “In this test with Rolaids’ active ingredient, laboratory acid changes color to PROVE Rolaids consumes 47 times its weight in excess stomach acid.”

There was no way, however, to prove that the beaker had actually contained acid, or that what had been dumped into it was really Rolaids’ “active ingredient,” or indeed even that the change in color represented Rolaids’ “consuming” anything, let alone acid, not to mention how much acid. I credit that commercial with starting me on the road to becoming a philosophy professor because even as a child I found it outrageous that someone should expect I would take it as proving anything.

One of the chief duties of philosophers, I believe, is to expose errors in reasoning, and man, were there errors of reasoning in that commercial. I learned very early that commercials were not to be trusted. Most people know that, I think. Most people know to be skeptical when, for example, a commercial claims that some detergent removes stains better than any other detergent ever invented and presents what it purports is proof.

Most people know to be skeptical about claims made in commercials. Unfortunately, most people do not know to be skeptical about claims made in what is presented to them as “news.” That’s why I use Rampton and Stauber’s book when I teach critical reasoning. I feel it is part of my responsibility as a philosopher to alert my students to the pervasiveness of the practice of dressing up propaganda as news.

Back to the education “study.” Even if the study were genuine, the results are pretty much useless because the whole study is circular. That is, the study admittedly sought out “matched” students at “similar” institutions. It thus isn’t surprising that the for-profits come out looking better than one would expect if the selection of students and institutions had been random.

The study was conducted by a group called the Council for Aid to Education, or CAE. The “Executive Summary” (p. 2) of the study makes it very clear where the CAE stands on the for-profits. “The proprietary education sector stands at a crossroads,” it begins.

Proprietary colleges and universities are key providers of postsecondary education in the United States, enrolling over 1.7 million students. However, the sector has seen its enrollment decline since its peak in 2010 due to the growing employment opportunities following the Great Recession, the heavy regulatory burdens imposed during the last six years, and the perception that education at proprietary institutions is not on par with that offered by their non-proprietary peers.

The Council for Aid to Education (CAE) believes this junction presents a critical time to explore the efficacy of proprietary institutions and to document the student learning they support.

If there were doubt in anyone’s mind concerning the study’s objectivity, the opening of the “Executive Summary” should remove it. The CAE set out to show that the for-profits were doing as good a job of educating students as are traditional nonprofit institutions of higher education.

Of course the CAE is within its rights to do this. The problem is not so much the CAE’s clear bias in favor of the “proprietary education sector” as Inside Higher Education’s failure to expose that bias. Inside Higher Education purports to be “an independent journalism organization.” This “journalistic independence is critical,” IHE asserts in its “Ownership Statement,” “in ensuring fairness and thoroughness” of its “higher education coverage.”

The thing is, Quad Partners, “a private equity firm that invests in the education space,” purchased a controlling share of IHE in 2014. That is, Inside Higher Education is now an arm of the “proprietary education sector.” So the purported “independence,” “fairness,” and “thoroughness” of its reporting on issues in higher education appears now to be only so much more propaganda in the service of the for-profits.

Doug Lederman, the editor of Inside Higher Education, protested to me in an email, after he saw an earlier version of this article that appeared in Counterpunch, that he and the people over at IHE had had their own suspicions about the piece and that that was why they had given it only a “barebones Quick Take.”

“What confuses me,” he said,

is why you viewed our minimalist and questioning treatment of the CAE research as evidence that we are in the tank for the for-profits because our lead investor has also invested in for-profit higher education––rather than as proof that our ownership situation has changed us not at all.

I fear Lederman may be right in protesting that IHE had not been willingly shilling for the for-profits. It apparently didn’t even occur to him that the suspicions he and others had had about the study should have led them to do a full-scale investigation of it (an investigation that would have involved actually reading at least the “Executive Summary” of the study to which they included a link in their article) and to publish an exposé on the study as a piece of propaganda for the for-profits rather than a “barebones” article that presented it as “news.”

What concerns me is not so much that the for-profits are trying to manipulate public opinion to make it more favorable toward them. What concerns me is that the editors of a leading publication that reports on issues in higher education don’t have the critical acumen to identify what ought to have been readily identifiable as a piece of “fake news,” or the journalistic experience and expertise to know what to do with it once they have identified it as such.

That’s disturbing.

(An earlier version of this piece appeared in the 12 January 2017 issue of Counterpunch.)



On the Demise of the Professoriate

Michael Schwalbe’s recent article in Counterpunch, “The Twilight of the Professors,” paints a rather darker picture of the future of the professoriate than I believe is warranted. Or perhaps it would be more correct to say that it paints a somewhat misleading picture of the dynamics behind the demise of the professoriate as a positive force for social and political progress.

Schwalbe is correct that the “tightening of the academic job market has intensified competition for the tenure-track jobs that remain.” He’s also correct that it is prudent for graduate students to focus their efforts on publishing in academic journals rather than in media more accessible to a general readership. Hasn’t that always been the case, though? The problem, I submit, with academic journals is not so much that their intended audience is academics as it is that most of these journals just aren’t very good. The pressure on academics is not merely to publish in academic journals, but also to edit them with the result that there are now too many of them and too many of questionable quality. Almost anyone can get published in an academic journal nowadays, but much of the material that is published in them, as Alan Sokal demonstrated to devastating effect back in 1996, is gibberish.

The situation is not much better with academic presses than with scholarly journals. Even some of the top presses are publishing material that would never have seen the light of day in earlier periods when there was greater quality control. Nearly all the emphasis in academia these days, as in the larger society, is on quantity rather than quality. Academic presses, such as Lexington Books, send out mass emails to academics, effectively trawling for book proposals. I spoke about this problem recently with a representative from the more selective German publisher Springer. “These guys are just publishing too much,” he said, smiling in a conspiratorial way.

No one can keep up with which among the proliferating academic journals and presses are actually any good, so emphasis tends to be placed on the quantity of publications a scholar can amass rather than on their quality. This means, of course, that the savvy self-promoter with little of any real value to contribute to the life of the mind can more easily carve out an academic career now than can a genuine intellectual who would have actual scruples about dressing up old insights as new ones, as well as about publishing what is effectively the same article over and over again.

The problem is not that academic journals are in principle of no popular value so much as it is that most academic journals these days are in fact of no popular value because there are just too damn many of them and most of them are no damn good. Hardly anyone actually reads them, even among academics.

It may be true, as Schwalbe observes, that graduate students are advised to craft Facebook pages and Tweets “with the concerns of prospective employers in mind,” but what does that mean? The prospective employers in question are other scholars, not university administrators. There are too many demands on the time of most university administrators for them to scrutinize the Facebook pages and Tweets of all the scholars who earn the department hiring committee’s seal of approval. The problem, I believe, is less that hiring committees are on the lookout for political radicals than that they’re too often on the lookout for people who are going to show them up. Few people are possessed of such high self-esteem that they are comfortable in the company of someone they suspect might actually be smarter than they are, and academics are no exception.

The growing ranks of “contingently employed” academics “is further conservatizing,” charges Schwalbe. The argument that such faculty will censor their writing in order not to offend their employers sounds good in the abstract, but as is so often the case with arguments that are internally coherent, it doesn’t correspond to the facts. Some particularly fearful and feeble-minded underemployed academics may do this, but it doesn’t take long for contingent faculty to realize that most of the tenured faculty in their own departments, to say nothing of university administrators, don’t even know who they are, let alone what they are writing.

Contingently employed academics represent a growing army of educated, literate, yet grossly underpaid workers. Such a population is the ideal breeding ground for political radicalism and, indeed, some are beginning to unionize.

Demands for grant getting, as Schwalbe observes, undeniably slant research in the sciences in the corporate direction. But, most leftist public intellectuals have traditionally come from the humanities rather than the sciences.

The real threat, I believe, to the professoriate as a force for positive social and political change, comes not so much from the factors Schwalbe mentions as from things more deeply rooted in American culture such as egoism and anti-intellectualism. The egoism that is fostered by so much in American culture keeps many academics from making what appear on a superficial level to be personal sacrifices even for the good of their students, let alone for the good of society more generally (I say “on a superficial level” because faculty who make such “sacrifices” are rewarded many times over by the satisfaction of actually bettering the lives of their students and, in that way, of humanity more generally). Tenured faculty have a responsibility to help their students develop the critical, analytical and communicative skills that are essential to actualizing the distinctively human potential for self determination, but too many abdicate this responsibility because of the time and effort required to live up to it.

The professoriate is almost universally opposed to assessment. I have never been an opponent of it, however. I’m well aware, of course, that it can be abused, but it has become increasingly clear to me that at least one reason so many academics are opposed to it is that it would reveal that they are not, in fact, teaching their students much.

Some effort at assessment of student learning in the humanities could be a vehicle of revolutionary change in that it would put pressure on tenured faculty actually to teach students something, and would expose that the working conditions of many contingent faculty are such that requiring this of them is like asking them to make bricks without straw.

Assessment could be a force for radical social and political change in that implemented properly, it would make all too clear both how decades of the dismantling of the K-12 system of public education and the analogous onslaught on the funding of higher education have not simply resulted in a generation of less-than-excellent sheep, but also, as Ray Marshall and Marc Tucker argue in Thinking for a Living: Education and the Wealth of Nations (Basic Books, 1993), threaten the social and economic future of this country. In fact, assessment in higher education could have such a profoundly progressive effect that if I didn’t know better, I’d think the movement against it was a conservative plot.

It isn’t a conservative plot, though, unless conservatives are far more devious than most of us imagine and their whole sustained attack on education in general was originally designed to produce an academic job market that was so neurotically competitive it would gradually weed out academics committed to anything other than the advancement of their own, individual careers.

It’s counterproductive to demonize university administrators. There are some bad ones, of course, and their salaries, like the salaries of their corporate equivalents, need to be brought back into line with those of the individuals they supervise. It’s not university administrators, however, as Schwalbe claims, who are responsible for the purported decline in leftist intellectuals, but scarcity conditions in the academic job market that are ultimately traceable back to American egoism and anti-intellectualism. But American egoism and anti-intellectualism are problems that are far less easily solved than the largely phantom “conservatizing trends” in higher education that Schwalbe discusses in his article.

(This piece originally appeared in the 8 June 2015 edition of Counterpunch under the title “The Real Threat to the American Professoriate.”)

On the Importance of Learning a Second Language

There is an article in today’s New York Times entitled “The Benefits of Failing at French” that reminds me of a debate in the Times back in 2011 entitled “Why Didn’t the U.S. Foresee the Arab Revolts?” Six scholars, academics, political appointees and think tankers debate the issue in the Times online. They all appear to believe it is very complicated.

Jennifer E. Sims, a professor and director of intelligence studies at Georgetown University’s School of Foreign Service and a senior fellow at the Chicago Council on Global Affairs, thinks the problem is our over reliance on foreign assistance.

Reuel Marc Gerecht, a former CIA officer, thinks it’s that we were captured by “group think.”

Vicki Divoll, a professor of U.S. government and constitutional development at the United States Naval Academy, the former general counsel to the Senate Select Committee on Intelligence and assistant general counsel to the C.I.A., thinks the president is at fault for failing to allocate sufficient resources to the CIA. But then, on the other hand, she says, “no amount of resources can predict the unknowable. Sometimes no one is to blame.”

Richard K. Betts, the Arnold A. Saltzman Professor of War and Peace Studies, director of the International Security Policy program at Columbia University and the author of Enemies of Intelligence: Knowledge and Power in American National Security, thinks the problem is that “it is impossible to know exactly what will catalyze a chain of events producing change.”

Celeste Ward Gventer, associate director of the Robert S. Strauss Center for International Security and Law at the University of Texas at Austin and a former deputy assistant secretary of defense, thinks the problem is that we’re too preoccupied with “foreign policy minutiae.”

Peter Bergen, the director of the national security studies program at the New America Foundation and the author of “The Longest War: The Enduring Conflict between America and Al-Qaeda,” thinks the explanation is as simple as that revolutions are unpredictable.

There is probably some small grain of truth in each of these rationalizations. I’m only a professor of philosophy, not a professor of political science, let alone a former governmental bureaucrat, political appointee, or think-tank fat cat. It seems pretty clear to me, however, that despite all the theories offered above, the real reason we didn’t see the revolts coming was good old-fashioned stupidity. That’s our strong suit in the U.S.: stupidity. We’re the most fiercely anti-intellectual of all the economically developed nations, and proud of it! We go on gut feelings. Oh yes, our elected officials even proudly proclaim this. We don’t think too much, and on those few occasions when we do, we’re really bad at it for lack of practice.

One of the great things about Americans is that they are probably the least nationalistic people in the world. Oh yeah, they trot out the flag on the Fourth of July and for the Super Bowl, but that’s about it. A few crazy fascists brandish it throughout the year, but most people, except for a brief period after September 11th, pay no attention to them. Danes, in contrast, about whom I know a little because I lived there for eight years, plaster Danish flags all over everything. Stores put them in their windows when they have sales, they are standard decorations for almost every holiday and a must, in their small toothpick versions, for birthday cakes. This isn’t because they suffer from some sort of aesthetic deficiency that compels them to turn to this national symbol for want of any better idea of how to create a festive atmosphere. No, Danes throw Danish flags all over everything because they are incredibly nationalistic, as is just about every other European and almost everyone else in the rest of the world who’s had to fight off the encroachment of foreign powers onto their national sovereignty. We’ve seldom, OK, really never, had to do that. Still, if we, you know, seriously studied European history, we would have something of an appreciation for how basic nationalism is to the psyches of most people in the world, and we could use this as our point of departure for understanding the dynamics of international relations, as well as for appreciating the obstacles to our understanding of the internal dynamics of other countries.

Years ago, when I had just returned to the U.S. after having spent the previous eight years living in Denmark, I accompanied one of my former professors to a Phi Beta Kappa dinner in Philadelphia (he was the member, not I). The speaker that evening was the former editor of the one-time illustrious Philadelphia Inquirer. His talk, apart from one offhand comment, was eminently forgettable. That one comment, however, left an indelible impression on me. This editor, who I think was Robert Rosenthal, mentioned, at one point, that he did not think it was important for foreign correspondents to know the language of the country from which they were reporting because, as he explained, “you can always find someone who speaks English.”

How do you begin to challenge a statement of such colossal stupidity? It’s true, of course, that you can always, or at least nearly always, find a person who speaks English. I don’t mean to suggest that that’s not true. The problem is, if you don’t know the indigenous language, to use an expression from anthropology, then you really have no idea whether you are being told the whole story. And the thing is, if you ever do become fluent in a second language, and more or less assimilated into a culture into which you were not born, you will know that foreigners are never given the whole story. This was clear to me as a result of my having lived in Denmark, a country with which we are on friendly terms, a country that in many ways is strikingly similar to the U.S. How much clearer ought it to be with respect to countries with which we are not on friendly terms, countries we know are either deeply ambivalent about us or outright hate us?

You will always get a story in English, certainly, from a native about what is going on in some other country, but if you don’t know the language of the people, then you aren’t really in a position to assess whether the story might be biased. You might have some idea of the social class of the person who is your source, but how are you going to know what the people as a whole think of this class, or of this individual? How are you going to know whether this person has some sort of personal or political agenda, or whether he is simply attempting to whitewash what is going on out of national pride, or a fear of being perceived by foreigners as powerless, or provincial, or intolerant?

This seems a fairly straightforward point, yet it is one that nearly all Americans miss. We generalize from our own experience. We assume everyone is just like we are, or just like we are taught to be, which usually means that we assume pretty much everyone in the world is motivated primarily by the objective of personal, material enrichment. We don’t really understand things such as cultural pride or what is, for so much of the rest of the world, the fierce desire for self-determination, so we are pretty much always taken by surprise when such things seem to motivate people. That’s the real meaning of “American exceptionalism,” an expression that is used in an increasing number of disciplines, from law to political science to history, with varying shades of meaning in each. That is, the real meaning is that our difference from the rest of the world is that we are dumber. Yes, that’s right, we are the dumbest f#*@!ing people on the face of the earth, and just now, when we need so desperately to understand what is going on in other parts of the world, we are reducing, and in some instances even completely eliminating, the foreign language programs in our schools and universities.

It’s no great mystery why we didn’t foresee the Arab revolts. The mystery is why we seem incapable of learning from either history or our own experience. It doesn’t help for the writing to be on the wall if you can’t read the language.

(This piece originally appeared under the title “The Writing on the Wall” in the February 28, 2011 edition of Counterpunch)




On Violating the First Amendment

Portrait caricatureA friend, Dave Nelson, who is a standup comedian and academic, jokes that the thing about being an academic is that it gives one a kind of sixth sense, like the kid in the M. Night Shyamalan film, except that instead of seeing dead people you see stupid ones. He’s right about that. Sometimes this “gift” seems more like a curse in that one can feel overwhelmed by the pervasiveness of sheer idiocy. When I saw the New York Times piece from April 2, “Supreme Court Strikes Down Overall Political Donation Cap,” I wanted to crawl right back into bed and stay there for the rest of my life. Even the dissenting opinions were idiotic. Limits on the size of contributions to individual candidates are still intact. It’s just the overall caps that have been removed, so now while you can’t give more than $2,600 to a single candidate, you can give to as many candidates as you like. It seems the dissenters are worried, however, that the absence of an overall cap raises the possibility that the basic limits may be “circumvented.”

That sounds to me just a little bit too much like arguing over how many angels can dance on the head of a pin. “There is no right in our democracy more basic,” intoned Chief Justice Roberts, “than the right to participate in electing our political leaders.” Oh yeah? Well, if a financial contribution to a political campaign counts as “participating in electing our political leaders,” then a whole slew of Americans’ First Amendment rights are being violated all the time in that many Americans don’t have enough money to pay for the basic necessities of life, let alone have any left over to contribute to political campaigns. The rules of the political game have been written in such a way that the “participation” of the poor is limited before the process even gets started. Sure, they can attend protests, write letters, etc., etc. Or can they? What if their penury forces them to work around the clock? What if they are effectively illiterate? Even if they could do these things, however, the extent to which they could affect the political process is limited by the rules of the process itself. They have less money, so they have less say.

Philosophers are fond of invoking the ceteris paribus clause. All other things being equal, they say, this or that would be the case. The point, however, of invoking the ceteris paribus clause is to expose that all other things are not in fact equal. Ceteris paribus, limits on campaign contributions would infringe on people’s First Amendment rights. So if we think such limits do not infringe on people’s First Amendment rights, the next step is to ask why we think this. The obvious answer is that all other things are not equal. That is, people do not all have the same amount of money. Even in a country such as Denmark that has effectively wiped out poverty and hence where everyone in principle is able to contribute money to political campaigns, some people are able to contribute much more money than other people and thus able to have a much greater influence on the political process. Danes, being the intelligent people they are, understand that such an inequity is antithetical to democracy so they legislate that political campaigns will be financed with precisely the same amount of money and that this money will come directly from the government rather than from individuals.

This is, in fact, how pretty much every country in the economically developed world finances political campaigns and presumably for the same reason. Everyone who is not irredeemably stupid understands that to tether the ability of an individual to participate in the political process to the amount of money he can spend on such “participation” is a clear violation of the basic principles of democracy. If writing a check is a form of political expression, then the economic situation of millions of Americans excludes them a priori from such expression, which is to say that their rights are unjustifiably curtailed in a way that the rights of the wealthy are not. (I say “unjustifiably” on the assumption that few people would openly defend the explicitly Calvinist view that the poor are poor through their own fault.)

So the issue here is not really one of defending the First Amendment. It’s “pay to play” in the U.S. You have no money, you have no, or almost no, political voice. Pretty much everyone who is not, again, irredeemably stupid understands that. The haggling is not about protecting people’s First Amendment rights. It’s a power struggle between what in the eyes of most of the world would be considered the wealthy and the super wealthy.

But then one should not underestimate the number of the irredeemably stupid. “The government may no more restrict how many candidates or causes a donor may support,” pontificated Roberts, “than it may tell a newspaper how many candidates it may endorse.” Anyone who’s had an introductory course in critical reasoning, or informal logic, will immediately see that the analogy Roberts has drawn here is false. Roberts is using the terms “support” and “endorse” as if they are synonyms. They’re not synonyms though, at least not in Roberts’ analogy. The “support” a “donor” gives to a political candidate is financial, whereas the “endorse[ment]” a newspaper gives to a candidate is editorial. To suggest that such a distinction is unimportant is to beg the question. God help me, we must be the only country on the face of the earth where someone can make it not only all the way through law school without understanding such errors in reasoning, but all the way to the Supreme Court.

But fallacious reasoning isn’t the worst of Roberts’ crimes. Many on the left have long suspected people on the right are more or less closeted fascists. Well, Roberts has come out of the closet. Yes, that’s right. Roberts explicitly compared the removal of overall limits to campaign contributions to Nazi parades. If the First Amendment protects the latter, he asserted, then it protects the former. The analogy here is just as bad as the earlier one given that a person doesn’t have to pay to march in a parade. It’s a little more revealing, however, to those who have eyes to see.

(This piece originally appeared in Counterpunch, 4-6 April 2014)

Lies, Damned Lies, and Public Discourse on Higher Education

Portrait caricatureTwo staggeringly inane points are being made ad nauseam in public discourse about higher education. The first is that tenure is an institution that has far outlived its usefulness (if it ever was useful). The second is that universities today need to focus on providing students with the technical skills they will need in order to effectively tackle the demands of the contemporary, technologically advanced workplace.

Kevin Carey, director of the education policy program at the New America Foundation, wrote last summer in The Chronicle of Higher Education that tenure was “one of the worst deals in all of labor. The best scholars don’t need tenure, because they attract the money and prestige that universities crave. A few worthy souls use tenure to speak truth to administrative power, but for every one of those, 100 stay quiet. For the rest, tenure is a ball and chain. Professors give up hard cash for job security that ties them to a particular institution—and thus leaves them subject to administrative caprice—for life.”

Carey seems to have confused tenure with indentured servitude. Tenure does not tie professors to particular institutions. A tenured professor is just as free to move to a new institution as a non-tenured one. Few will leave a tenured position for an untenured one, but that doesn’t make them less mobile than they would be if tenure were abolished. Academic stars seldom have difficulty moving from one tenured position to another, and professors who are not stars seldom have the opportunity to move.

I’m uncertain what Carey means by “administrative caprice.” In my experience, the faculties most subject to administrative caprice are those at for-profit institutions. Traditional colleges and universities more often than not share the governance of the university with the tenured faculty through the agency of a faculty senate, as well as through the judicious promotion of faculty to administrative positions.

Sure, academic stars don’t need tenure. One doesn’t become an academic star, though, by excelling as a teacher. One becomes an academic star by excelling as a scholar. Excellent scholars, however, are not always excellent teachers. A good university needs both. Of course if human beings were fully rational, then university administrators would realize that the long-term health of an institution depends on its good teachers as much as, if not more than, on the reputation of its scholars. No one gives money to his alma mater because of his fond memories of studying at the same institution where Dr. Famous Scholar taught. I give money every month to my alma mater even though not one of my professors was famous. They may not have been famous, but they were fantastic teachers who cared about their students and instilled in them a love of learning. Quaint, eh? That doesn’t change the fact, though, that I give money to the least famous of the institutions of higher education with which I have been affiliated and that I give it for the simple reason of the quality of instruction I received there–and I am not alone.

Carey would likely counter that he is all for good teaching. He believes making professors “at-will employees” would require them to do “a great job teaching.” But who would be the judge of this “great teaching”? What would the standards be? If it were student evaluations, that could be problematic because students are not always the best judges of good teaching. Too many tend to give their most positive evaluations to instructors who give the fewest assignments and the highest numbers of As. Many come around eventually, of course. I had a student write me last winter to thank me for giving her the skills she needed to make it through law school. She had not written that letter upon her graduation from Drexel (let alone at the end of my course), however, but upon her graduation from law school! Unfortunately, we don’t solicit teaching evaluations from alumni for courses they took years earlier. Fortunately for me, I was tenured, so I could be demanding of my students without fearing that angry evaluations might cause me to lose my job. “At-will” professors are not so fortunate.

These are dark times in higher education. The intellectual backbone of a culture is the mass of university-level teachers who slave away in almost complete obscurity, not because they don’t have the intellectual stuff to make it in the highly competitive atmosphere of “world-class scholarship,” but very often because they do not have the stomach for the nauseating degrees of self-promotion that are sometimes required to break into that world, and because they have too much conscience to abandon their students to their own, literally untutored, devices. Teaching is extraordinarily time consuming. It takes time away from research, the kind of research that brings fame and fortune. Teaching brings its own rewards, and thank heavens there are many who still value those rewards. Unfortunately, few such individuals are found among the ranks of university administrators.

As I said, however, this is not the only inanity that is being bandied about by talking empty-heads. The suggestion that universities should concentrate on providing students with technical skills is even more conspicuously ludicrous. The most obvious objection to this point is that the provision of technical skills is the purview of vo-tech (i.e., vocational-technical) schools and institutes, not universities. For the latter to suddenly begin to focus on imparting technical skills would effectively mean that we would no longer have universities. (That this may be the hidden agenda of the peculiarly American phenomenon of the anti-intellectual intellectual is understandable given that the days of their media hegemony would be threatened by even the slightest rise in the number of Americans who did not need to count on their fingers.)

There is a more profound objection, however, to the assertion that universities ought to focus on teaching technical skills: the shelf-life of those skills has become so short that any technical training a university could provide its students would be obsolete by the time of their graduation if not before. Dealing effectively and adaptively with technology is a skill acquired now in childhood. Many kids entering college are more tech savvy than their professors. Almost everything I know about computers I’ve learned from my students, not from the tech-support staffs of the various institutions with which I’ve been affiliated. One of my students just posted a comment to a class discussion in which he mentioned that one of his engineering professors had explained that what he learned in class might, or might not, apply once he was out in the workforce.

Technology is simply developing too rapidly for universities to be able to teach students the sorts of technical skills that old farts are blustering they need. Kids don’t need to be taught how to deal with technology. They know that. They need to be taught how to think. They need to be taught how to concentrate (something that it is increasingly evident they are not learning in their ubiquitous interactions with technology). They need to be taught how to focus for extended periods of time on very complex tasks. They need to be taught how to follow extended arguments, to analyze them, to see if they are sound, to see if the premises on which they are based are plausible, to recognize whether any of the myriad inferences involved are fallacious. They need to be taught that they are not entitled to believe whatever they want, that there are certain epistemic responsibilities that go along with having the highly developed brain that is specific to the human species, that beliefs must be based on evidence, evidence assiduously, painstakingly, and impartially collected.

Finally, students need to be taught to trust their own educated judgment, not constantly to second guess themselves or to defer to a superior simply because that person is superior and hence in a position to fire them. They need to be taught to believe in themselves and their right to be heard, particularly when they are convinced, after much careful thought, that they are correct and that their superiors are not.

Unfortunately, young people are not being taught these things. We are preparing them to be cogs in a new kind of machine that no longer includes cogs. No wonder our economy, not to mention our culture more generally, is on the skids.

(This piece originally appeared in the 3 February 2014 issue of Counterpunch)

Education and Philosophy

Mind CoverOne of the things I love about philosophy is how egalitarian it is. There’s no “beginning” philosophy and no “advanced” philosophy. You can’t do philosophy at all without jumping right into the deep end of the very same questions all philosophers have wrestled with since the time of Plato, questions such as what it means to be just, or whether people really have free will.

This distinguishes philosophy from disciplines such as math or biology where there’s a great deal of technical information that has to be memorized and mastered before students can progress to the point where they can engage with the kinds of issues that preoccupy contemporary mathematicians or biologists. There is thus a trend in higher education to create introductory courses in such disciplines for non-majors, courses that can introduce students to the discipline without requiring they master the basics the way they would have to if they intended to continue their study in that discipline.

Philosophy programs are increasingly coming under pressure to do the same kind of thing with philosophy courses. That is, they are essentially being asked to create dumbed-down versions of standard philosophy classes to accommodate students from other majors. Business majors, for example, are often required to take an ethics course, but business majors, philosophers are told, really do not need to read Aristotle and Kant, so it is unreasonable to ask them to do so.

Yeah, that’s right, after all, they’re not paying approximately 50K a year to get an education. They’re paying for a DEGREE, and the easier we can make that for them, the better!

But I digress. I had meant to talk about how egalitarian philosophy is. Anyone can do it, even today’s purportedly cognitively challenged generation. Just to prove my point, I’ll give you an example from a class I taught yesterday.

We’re reading John Searle’s Mind: A Brief Introduction (Oxford, 2004) in my philosophy of mind class this term. We’re up to the chapter on free will. “The first thing to notice,” Searle asserts, when examining such concepts as “psychological determinism” and “voluntary action,” “is that our understanding of these concepts rests on an awareness of a contrast between the cases in which we are genuinely subject to psychological compulsion and those in which we are not” (156).

“What do you think of that statement?” I asked my students. “Is there anything wrong with it?”

“It’s begging the question,” responded Raub Dakwale, a political science major.

“Yes, that’s right,” I said smiling. “Searle is BEGGING THE QUESTION!” Mr. Big deal famous philosopher, John Searle, whose book was published by Oxford University Press, commits a fallacy that is easily identified by an undergraduate student who is not even a philosophy major. That is, the issue Searle examines in that chapter is whether we have free will. He even acknowledges that we sometimes think our actions are free when they clearly are not (the example he gives is of someone acting on a post-hypnotic suggestion, but other examples would be easy enough to produce).

But if we can be mistaken about whether a given action is free, how do we know that any of our actions are free? We assume that at least some of them are free because it sometimes seems to us that our actions are free and other times that they are compelled. But to say that it sometimes seems to us that our actions are free is a very different sort of observation from Searle’s that we are sometimes aware that we are not, in fact, subject to psychological compulsion.

To be fair to Searle, I should acknowledge that he appears to associate “psychological compulsion” with the conscious experience of compulsion, as opposed to what he calls “neurobiological determinism,” which compels action just as effectively as the former, but which is never “experienced” consciously at all. So a charitable reading of the passage above might incline one to the view that Searle was not actually begging the question in that an awareness of an absence of psychological compulsion does not constitute an awareness of freedom.

But alas, Searle restates his position on the very next page in a manner that is even more conspicuously question begging. “We understand all of these cases [i.e., various cases of unfree action],” he asserts, “by contrasting them with the standard cases in which we do have free voluntary action” (158, emphasis added).

You can’t get more question begging than that. The whole point is whether any human action is ever really free or voluntary. This move is in the same family with the purported refutation of skepticism that was making the rounds of professional philosophers when I was in graduate school, but which I hope since then has been exposed for the shoddy piece of reasoning that it was.

Back then, philosophers would claim that the classical argument in favor of skepticism rested on cases of perceptual illusion (e.g., Descartes’ stick that appears broken when half of it is submerged under water but which appears unbroken when removed from the water), but that perceptual illusions could be appreciated as such only when compared with what philosophers refer to as “veridical cases” of sense perception. That is, you know the stick is not really broken because removing it from the water reveals that it is not really broken. But if sense experience can reveal the truth about the stick, then the skeptics are mistaken.

But, of course, you don’t need to assume that the latter impression of the stick is veridical in order to doubt that sense experience could ever be veridical. All you need is two conflicting impressions of the same object and the assumption that the same stick cannot be both broken and straight. That is, all you need is two conflicting impressions of the same object and the law of non-contradiction to support skepticism. That seemed glaringly obvious to me when I was still a student, and yet scads of professional philosophers failed to grasp it.

Professional philosophers can be incredibly obtuse, and ordinary undergraduates, even today, with the right sort of help and encouragement, can expose that obtuseness. It’s a real thrill for a student to do that, to stand right up there with the big guys/gals and actually get the better of them in an argument, so to speak. It’s a thrill that is reserved, I believe, for philosophy. That is, it seems unlikely that anything comparable happens in the average calculus or organic chemistry class.

My point here is not to argue that philosophers in general are stupid, or even that Searle, in particular, is stupid. They aren’t, and he isn’t. Despite Searle’s occasional errors in reasoning, he’s one of the most original philosophers writing today. My point is that philosophy, as one of my colleagues put it recently, “is freakin’ hard.” It’s hard even after one has been rigorously schooled in it.

There’s no way to dumb down philosophy and have it still be philosophy. Philosophy is training in thinking clearly. There’s no way to make that easier for people, so why would anyone suggest that there was?

Perhaps it’s because philosophy is the discipline most threatening to the status quo, even more threatening than sociology. Sociology can educate people concerning the injustices that pervade contemporary society, but only training in critical and analytical thinking can arm people against the rhetoric that buttresses those injustices. This country, and indeed the world, would look very different today, if the average citizen back in 2001 had been able to recognize that “You’re either with us, or against us” was a false dichotomy.

(This piece originally appeared in the Nov. 22-24, 2013 Weekend Edition of CounterPunch)