Reflections on “Reflections from a Hashtag”

I was sexually harassed by one of my professors in graduate school. He was the director of the graduate program and was known to host parties at his apartment for the graduate students. I assumed, when he invited me to his apartment for “dinner,” that the “dinner” in question was such an event.

I was wrong. I was the only guest for what had clearly been conceived as a romantic dinner. There was filet mignon wrapped in bacon and an excellent cabernet. I was surprised to find myself the object of such attentions, but I wasn’t frightened, not at first, anyway. The professor in question, let’s call him Professor H. (H. for “harasser”), was only a few years older than I was. We were both young and unattached. Unfortunately, though I was flattered by his interest, I didn’t reciprocate it. I tried to communicate this to him in a way that would minimize his hurt and embarrassment. He was a hard man to put off though. The evening ended, I kid you not, with his literally chasing me around the dining table. He kept moving uncomfortably close to me and I kept moving away, around and around the dining table until, finally, he seemed to get the point.

When he realized, or appeared to realize, anyway, that I was not simply playing hard-to-get, he told me that he appreciated my honesty and that what was most important to him was that we continued to have a positive professional relationship. And we did continue to have a positive professional relationship, at least for the next couple of weeks.

“Whew, dodged that bullet,” I thought to myself gratefully.

But then, things changed. He suddenly became openly hostile toward me. He would publicly disparage everything I said, both in class and outside of it. He once spent an entire class arguing to the other students present that a remark I had made in relation to what is known in philosophy as “personal identity theory” demonstrated beyond all doubt that I was an irredeemable idiot.

Professor H.’s behavior toward me became increasingly hostile as the weeks passed. Finally, the lone tenured woman in the department approached me privately and explained that she knew what was going on. She had been a victim of Professor H. herself. It was very important, she explained to me, that I complain to the chair of the department because Professor H. was disparaging me to other faculty to such an extent that I was in danger of losing my funding.

So I dutifully complained to the chair. I will never forget his first words.

“Oh, I am so sorry,” he said, “Professor H. has been warned about this.”

By that time, I knew Professor H. had a history. I just didn’t know how extensive it was. It seemed he used the graduate program as his personal dating pool. He’d started doing that, actually, even before he’d become the director of the graduate program. His behavior was so conspicuous that a group of graduate students had actually protested his appointment as director.

“Oh, I am so sorry,” the chair said. “You don’t want to make a formal complaint against him, though,” he continued, “because that would hurt his career.”

I’m not a vindictive person. It seemed to me that Professor H. was not really evil, but simply incredibly emotionally immature. I didn’t want to hurt his career (though in retrospect, I doubt very much that a formal complaint against him would have had that effect). I just wanted him to leave me alone. I wanted to have my work evaluated fairly. The chair said he would talk to Professor H., and I’m sure he did, because my funding was not revoked.

I never again enjoyed the favor, in an academic sense, I mean, of any of my professors. When I’d first arrived in the program, I’d been feted as if I were some kind of celebrity. All the professors welcomed me, commented favorably on my work, invited me to their homes, etc. Not after I had gone to the chair about Professor H., though. No one was openly hostile, the way Professor H. had been, but everyone was decidedly cool. I was grudgingly given passing grades (one of my papers from this period was later published, in the same form in which I had submitted it for a grade, and then reprinted both in English and in Chinese and Russian translations, in an anthology and a textbook). The same well-intentioned female professor again approached me privately, however, and explained to me that I should not solicit letters of recommendation from any of the faculty in my own program, that I would have to rely on what she knew was my growing list of professional contacts outside my program when it came time for me to look for a job.

Thanks to the practice of blind reviewing, which involves concealing the identity of the author of a scholarly paper when it is submitted to referees for judgment concerning whether it should be published, I was able to begin publishing scholarly articles while still a student and to build, gradually, a reputation that made it possible for me to obtain a Fulbright fellowship and then, finally, a tenure-track job.

It was a long, hard slog, though. The job market back then was no better than it is now. Philosophy is a notoriously sexist discipline, and a job candidate, man or woman, who cannot present letters of recommendation from any of the faculty of their degree-granting institution is automatically regarded as suspect.

I labored mightily for years to become the best possible scholar, and amassed an impressive collection of publications, and yet I still regard it as something of a miracle that I was able to secure a tenure-track position, to get tenure, and finally, to be promoted to full professor. I knew I would have to work as if my life depended on it, so I did. It seemed pointless to reflect on how unfair it was that I did not enjoy the patronage of a powerful professor, which is more often than not the decisive factor in opening the door to a tenure-track position for a newly-minted Ph.D. in philosophy. That was my lot, so I tried to make the best of it.

I spent a great deal of time, however, trying to figure out how things could ever go so terribly wrong as they had for me. Why hadn’t Professor H. been read the riot act immediately after his first transgression? Why hadn’t the proverbial fear of God been placed in him so that he would at least have been discreet, even if he’d been a victim of satyriasis and unable actually to stop himself?[1] Professor H. wasn’t the only professor in that department who abused his authority to initiate sexual liaisons with female graduate students. Not everyone did it, but many did, and those who didn’t viewed the antics of the others as a spectator sport.

This all came rushing back to me when I read Jian Ghomeshi’s “Reflections from a Hashtag” in the New York Review of Books (October 11, 2018). Ghomeshi was a prominent Canadian broadcaster who lost his job and was publicly vilified after he was accused of sexual harassment and assault.

“When a well-known fellow broadcaster saw me with a twenty-something date at a film festival event in Toronto,” writes Jian Ghomeshi, who was then thirty-nine, “he left a voice mail saying, ‘Dude, you are the king!’ I basked in his praise,” Ghomeshi continues, “He’d never called me before and never mentioned my work; the real message was the women I was with were the true gauge of success” (p. 29).

That was the way Professor H. was viewed. He was “the king!” He eventually left the university in question and moved to another university where he continued to harass female students until one of them finally sued.

I haven’t mentioned Professor H.’s name because singling him out for blame is now pointless. You could figure out who he was, of course, if you wanted to do a little research. The purpose of my recounting these events, however, is to make clear that harassment and abuse of women is a systemic problem. It goes on for one very simple and straightforward reason: It is allowed to go on. This is partly because of what Ghomeshi correctly identifies as a “systemic culture of unhealthy masculinity” (p. 30) that leads many men not merely to derive pleasure from harassing and abusing women, but to derive pleasure from the spectacle of it.

There is more to the problem than that, though. There is what I call “the first-stone problem.” Ghomeshi writes that many male acquaintances furtively commiserated with him. “What happened to you,” they wrote, “could have been me.” People are naturally reluctant to point fingers at one another for fear of having fingers pointed back at them. Most people are not sexual predators, but there aren’t many people who don’t have something to be embarrassed about or ashamed of, something they don’t want paraded before the general public. This makes people naturally reluctant to call out the bad behavior of others.

“Professor H. didn’t mean to harass you,” the chair explained to me. “He didn’t mean to make you feel uncomfortable or threatened, or to coerce you into a sexual relationship.” (I’m paraphrasing now, of course, because the conversation took place many years ago and only his first words remain indelibly marked on my memory.) “He’s just emotionally immature. He reacts badly when things don’t go the way he wants them to.”

I think that was a pretty accurate depiction of Professor H.’s character. He wasn’t a bad guy. He just had an unfortunate habit of behaving badly, very badly under certain circumstances. Philosophers distinguish, however, between explanation and justification. Professor H.’s emotional immaturity explained his bad behavior, but it didn’t justify it. Bad behavior should never be tolerated just because the person engaging in it isn’t normally a bad person. People need to be called on their behavior, and judgment about their character reserved for a higher power. Unless, of course, they are being considered for a position of such authority that the question of their character, however ultimately undecidable, becomes crucially relevant.

People are so social that they tend to respond more or less appropriately to censure, even private censure, to say nothing of public censure, by someone in a position of authority. If people are called on their inappropriate behavior, unless they are serious sociopaths, they will usually, at least eventually, stop engaging in it.

Aristotle figured this out long ago (if Plato hadn’t actually figured it out before him). If you want people to behave in certain ways, he wrote in the Nicomachean Ethics (Books I and II), then the culture needs to reinforce that kind of behavior. And if there are ways you don’t want them to behave, then the culture needs to send a clear message to that effect as well.

We need, without exception, to hold individuals responsible for behaviors that violate norms of what we, as a culture, collectively feel is right. We are deluding ourselves, however, if we think that by targeting individuals in this way we are dealing effectively with what is clearly a systemic problem. It may give the impression we are doing something about the problem, but all the while, the problem waxes and thrives.

 

[1] Discretion is actually very important. One of the problems of the conspicuous abuse of authority to initiate sexual relationships with students is that it makes other students feel vulnerable. Not only does it create anxiety; it can also lead students to think that they would be well advised to initiate such relationships themselves simply to make sure that they have a protector.

(An earlier version of this piece appeared in the 1 October 2018 issue of the online political journal Counterpunch.)

Some Reflections on Mother’s Day

[Photo: Alice Piety and one of her babies]

My father said he had no memory of his mother, none that is, except for one very vague, shadowy memory of being held by a woman who was standing at a stove, cooking something. He was the third of what would eventually be five children. His mother was seventeen when he was born.

Alice Eugenia Farrar liked to write poetry. She never finished school, though, because she was forced to marry her teacher when she became pregnant by him. Her husband, Austin LeGrand Piety, protested in a letter to one of his children that he had not seduced her. He was twenty-seven when they married. She was fifteen.

Fifteen is too young, of course, to begin having children. Five children in rapid succession ruined her health. She contracted tuberculosis and was sent to the Arkansas State Tuberculosis Sanatorium when my father was still a toddler. Her husband was unable both to earn a living and take care of five children all by himself, so he deposited them in the Working Woman’s Home and Day Nursery in Little Rock. That was sometime in 1934 when my father was three years old.

As far as I can make out, he never saw his mother again. She had brief periods when her health improved, but she never fully recovered. She died of tuberculosis when my father was twelve. He told me several times that I looked like her, but that judgment must have been based on photographs rather than memory. We had a few photographs of her when I was a child. I didn’t know they were photos of her, though. I didn’t know who the subject was. They were just old photos of someone I didn’t know.

My father never talked about his mother when I was a child because he had no memories of her to share. I didn’t even know her name until I was in my thirties. It just never came up. She must have been a very good mother, though (an impressive accomplishment for someone who was herself still a child) because all her children turned out well. And by “well” I don’t mean they became successful professionals, though they did. I mean they were kind and caring people, sympathetic and empathetic with developed social consciences. Research has shown that the first two to three years in a child’s life are crucial to its development, more crucial, perhaps, than any subsequent years. A child who isn’t held enough, during this period, who isn’t nurtured and coddled enough, cooed over and talked to enough, will not develop normally in an emotional sense (see, for example, Hildyard and Wolfe, “Child neglect: developmental issues and outcomes”). I don’t mean to suggest that it is impossible for children who received insufficient attention in those crucial early years to develop into happy, well-adjusted adults. I don’t know that that is something anyone could ever know. What we do know, however, is that achievement is more difficult for them.

Alice Eugenia Farrar Piety must have been an extraordinary person. It is almost impossible now, however, to learn anything about her. There is no one alive anymore, so far as I know, who knew her when she was alive. Among my father’s papers, there is one heart-rending letter from her to her sister-in-law begging that her children not be put up for adoption.

There are also a couple of her poems. They’re unimpressive, though, compared to those of her more intellectually accomplished husband. But then, she was not allowed to become intellectually accomplished. An early pregnancy and subsequent poverty killed whatever dreams she may have had of that sort.

She doesn’t look poor in the photo at the beginning of this piece. She looks, in fact, very middle class, sitting in the backyard of a nice middle-class house, coddling a baby that her apparent age suggests she is looking after for someone else. She isn’t the child’s baby-sitter, though; she’s its mother. No one even knows anymore who the child in the photo is. The writing on the back reads simply “Alice with one of her babies.” The photo was likely taken early in her marriage, before her husband’s pay was cut in half, before he began to lose one job after another.

The photo here, of my father and his two older brothers Philip (far left) and Louis (middle), gives a better idea of Alice’s material circumstances after her marriage and repeated pregnancies. One can just glimpse a portion of their house in the background.

James Agee wrote in Let Us Now Praise Famous Men that the inability of the sharecroppers he studied to control how many children they had was one of the main reasons they were unable to work themselves up out of poverty. How many people, men as well as women, have been forced into marriages simply as a result of the fact that they could not control whether they would have children if they had sex? How much alcoholism, drug addiction, and child abuse can be traced back to unwanted pregnancies? How much violence more generally can be traced to anger at feeling trapped in a life over which one has no control?

My father was a life-long contributor to Planned Parenthood. He was a kind-hearted person with a soft spot for children. He didn’t like the idea of abortion, but he liked even less the idea of children being forced to become mothers, or worse, driven to acts that might end their own lives as well as those of their unborn children. Many people associate Planned Parenthood with abortion. Most Planned Parenthood clinics don’t provide abortion services, though, but only contraceptive services (see their website where they state that only “some” Planned Parenthood clinics provide abortion services), as well as more general healthcare services. (I received treatment for a bladder infection at a Planned Parenthood clinic when I was in graduate school.) Such services, my father thought, were essential to a humane and properly functioning society.

How many women’s lives have vanished, like Alice’s, because they became mothers too young?

My father was adamant about continuing his monthly contributions to Planned Parenthood, even when, late in retirement, he could ill afford them. I never asked him if this was in part because of his mother. It seems to me now, though, that she has to have been there, in the back of his mind, the shadowy figure, balancing his tiny form on her hip as she stood at the stove.

 

(An earlier version of this piece appeared in the 11 May issue of Counterpunch. I had erroneously indicated that Alice’s age when she married was thirteen. That was what my father had told me. My cousin Timothy, one of Louis’ children, corrected me, however. He calculated Alice’s age when she married based on the birth date on her gravestone and the year his father, the oldest child, was born.)

A Cure for Academic Bullying

Workplace bullying is an increasing problem. Books are being written about it, and there is even a Workplace Bullying Institute. The problem isn’t restricted to the business world. Books such as Faculty Incivility: The Rise of the Academic Bully Culture and What to Do About It, Bully in the Ivory Tower: How Aggression and Incivility Erode American Higher Education, and Workplace Bullying in Higher Education suggest that bullying is a particular problem among academics.

Unfortunately, academic bullying is often allowed to go on unchecked. That’s just how academics are, people think. What can you expect? It’s hard to control tenured faculty, administrators argue, because there is little you can do to discipline them.

Rot starts from the top, though. The failure of administrators to curb academic bullying and other forms of professional misconduct on the part of faculty is the reason academic departments become dysfunctional. Faculty harass and bully one another with impunity. Distressed administrators have even been known to reward troublemakers in a misguided attempt to win their goodwill, not realizing that the troublemakers see such gestures as a sign of weakness and a green light to cause even more trouble.

Bullying can sometimes take such unequivocal forms as yelling at or publicly disparaging the victim, but micro-aggressions are the bully’s trademark because there are innumerable opportunities for them and because no single micro-aggression ever appears sufficiently heinous to warrant disciplinary action. Micro-aggressions include such things as a consistently condescending tone of voice on the part of the bully toward her target, repeatedly interrupting the target when she attempts to make a point in a department or committee meeting, laughing or making faces or whispering to colleagues when the target speaks, and failing to respect the target’s authority as a committee chair, program director, or academic advisor. (More examples of bullying are listed in an article entitled “Tackling the Menace of Workplace Bullying” on the website Law Crossing.)

People usually try to ignore micro-aggressions. Sometimes they even worry they’ve imagined them. People don’t expect to be relentlessly taunted and goaded. Human beings are social creatures and evidence suggests that their default position relative to others is trust (see, for example, Louis Quéré, “The Cognitive and Normative Structure of Trust,” and Guido Möllering, “The Nature of Trust: From Georg Simmel to a Theory of Expectation, Interpretation and Suspension”).

That people are social creatures and, all other things being equal, generally decent, kind, sympathetic and empathetic toward those with whom their lives bring them into contact holds, I believe, the key to controlling academic bullies, and any other kind of bully for that matter. People don’t like bullies. Since all human beings, as social creatures, want to be liked, bullies can be controlled, to a large extent anyway, if not entirely, by simple public condemnation of their behavior. Someone in a position of authority has to make it clear that the offending behavior is unacceptable and will not be tolerated. Academic departments, like other professional communities, become toxic when people in positions of authority are reluctant to do this.

The absence of an open condemnation of unacceptable behavior makes people fearful that if they express disapproval of such behavior, they’ll draw the attention of the bully and become her next victim. Worse, rather than expressing disapproval, many people will try to ingratiate themselves with the bully in order to insulate themselves from attack, hence rewarding the bully socially for her bullying behavior.

A bully whose behavior is positively reinforced by frightened colleagues quickly becomes out of control. There are ways, however, to discipline faculty, even tenured ones. They can be denied authority on committees, excluded from departmental social functions and given teaching schedules that effectively isolate them from the rest of the faculty. In extreme cases they can be excluded from serving on committees, assigned undesirable courses, have their teaching loads increased and be denied promotion and sabbatical leave.

Ideally, a code of professional conduct that clearly indicates what sorts of behavior are considered unacceptable would become part of the bylaws of the department, college, or university. This code can then be referred to when taking disciplinary action. Such a code isn’t necessary, however, for disciplining academics for bullying and other forms of professional misconduct. There are myriad ways chairs and other upper-level administrators can make clear to faculty that they will not tolerate unacceptable behavior.

The safest and most effective way to discipline faculty, however, is simply to openly condemn bad behavior. A statement by the chair at a department meeting that harassing and badgering colleagues, raising one’s voice at a colleague, rolling one’s eyes, or making a face when a colleague is speaking, are all unacceptable, can have a dramatic effect because everyone will know at precisely whom these remarks are aimed. Few things are so humiliating for an adult as to have it pointed out publicly that she is behaving chronically like an ill-mannered child. It’s humiliating, and human beings, being social creatures, are sensitive to public humiliation.

A subtle wave of relief will ripple through those present at the meeting because they will feel that finally, there is something they can do when they are the victims of bullying by colleagues: they can complain to the chair. Gradually, people will begin to band together against the bully or bullies.

I’ve spoken so far only about the general harassment and bullying of colleagues. Everything I have said about that, however, is equally true of other forms of unacceptable professional behavior, such as sexual harassment. There have been several highly publicized cases of sexual harassment among academics in recent years. Emphasis has tended to be placed on the harassers themselves. The problem, I believe, however, is less the individuals than what would appear to be a lack of moral leadership in the environments that have allowed the harassment to take place. It isn’t difficult to communicate to a colleague that that sort of behavior is unacceptable. If it continues over a period of weeks, months, or even years, it’s because those in authority have decided to look the other way.

A department chair needs to have the courage to publicly condemn unacceptable behavior and upper-level administrators such as the dean of the relevant college need to support the chair in such condemnations. I have seen firsthand the effect that strong moral leadership can have on a department and the effect that the absence of such leadership can have.

Few people, it seems to me, understand the nature of moral authority. A moral leader is not a “nice” person in the sense in which people generally understand that term. A moral leader is not someone who tries to look the other way when people behave badly, or endeavors always to interpret malevolent behavior in a way that makes it appear benign. Sometimes people’s behavior is conspicuously ill intentioned and interpreting it in any other way can have disastrous consequences.

Plato addresses this problem in an early examination of what constitutes just behavior in his Republic. “[E]veryone would surely say,” observes Socrates, “that if a man takes weapons from a friend when the latter is of sound mind, and the friend demands them back when he is mad, one shouldn’t give back such things, and the man who gave them back would not be just” (Republic, 331 c-d). Giving back the weapons wouldn’t be just, of course, because the “mad” man is going to use them for malevolent purposes and may do things that he himself will likely later regret when he has recovered his sanity.

People are sometimes ill intentioned and it is not a kindness toward anyone to fail to acknowledge that. Certain forms of behavior are unacceptable, however, quite independently of the intentions behind them. The reluctance to recognize unacceptable behavior as such is not equivalent to being “nice.” It is cowardice and people in positions of authority who suffer from this conflation of decency and cowardice can wreak untold damage on those over whom they have authority.

A moral leader is not necessarily perfect. No human being, after all, is perfect. A moral leader is not necessarily a warm, effusive person, not necessarily outgoing or gregarious. A moral leader may lack a sense of humor. There are numerous other personal flaws from which they may suffer. What makes a moral leader, or what gives a person moral authority, is that they exhibit an unwavering commitment to decency and fairness, that they openly and unequivocally condemn unacceptable behavior while at the same time, continuing to evince respect for those who engage in it.

That is, unacceptable behavior must be quickly and unequivocally condemned. It is important to appreciate, however, that only the behavior should be condemned, not the people who engage in it. Anyone can behave badly, at least occasionally, and an environment where harassment and bullying have become the rule rather than the exception encourages people who would not otherwise behave in such a way to do so as a form of self-defense.

A moral leader is someone who can make clear both that certain forms of behavior are unacceptable and that they believe even those who engage in them habitually are capable of reforming themselves. People need to feel that they can redeem themselves when they’ve behaved badly. A moral leader is someone who makes clear that they believe everyone under their authority is perfectly capable of behaving properly and that only such behavior is acceptable.

A moral leader has to have the courage to condemn unacceptable behavior, knowing that doing so will expose their own behavior to closer scrutiny than most people are comfortable with. It takes a lot of courage to throw the first stone, so to speak, particularly since none of us is without sin. A moral leader has to have that courage, however, or we are all lost.

(This piece originally appeared in the May 2, 2017 issue of Counterpunch under the title “Academic Bullying and the Vacuum of Moral Leadership in the Academy.”)

“Fake News” and the Responsibility of Philosophers

“Fake news” is not actually a new phenomenon. Sheldon Rampton and John Stauber document in their book Trust Us, We’re Experts, that it is an invention of the public relations industry that is as old as the industry itself. The First Amendment makes it pretty hard to prevent such efforts to manipulate public opinion. That’s the price it appears we have to pay to live in a free society. It wouldn’t be so serious a problem as it is if people in the field of higher education didn’t fall down on their responsibility to alert the public to it.

A recent case in point is an article entitled “Study: For-Profits Match Similar Nonprofits in Learning Results” that ran in the January 11th issue of Inside Higher Education. The third paragraph cites the study as claiming that “[i]n all six comparisons, students at proprietary institutions actually outperformed the students at the nonproprietary comparison institutions.”

Who would have thought that? I mean, really, aren’t the for-profits infamous for having poor learning outcomes? One doesn’t actually even have to look at the original study, however, to realize that something is fishy with it. The first red flag is the fact that the study uses the euphemism “proprietary” institutions rather than the straightforwardly descriptive “for-profits.”

The study is described as measuring “learning outcomes in six areas for 624 students from four for-profit higher education systems, which the study does not name, and then compar[ing] the scores with those of a matched group of students from 20 unnamed public and private institutions that were selected because they were similar to the for-profits on key measures related to academic performance” (emphasis added).

The second red flag is the “matched group of students.” Matched in what sense? That isn’t explained.

The third red flag is that neither the traditional nonprofit institutions nor the for-profit ones are named.

The fourth red flag is that the nonprofit institutions were selected because they were “similar to the for-profits on key measures related to academic performance.” Really? Since for-profits are reputed to have abysmal results in terms of academic performance, they must have searched long and hard to find nonprofits that had similarly abysmal results, if indeed they really did find such institutions, which cannot be verified since they are “unnamed.”

The whole thing reminds me of an old television commercial for Rolaids. Someone dumps a white powder into a beaker of what appears to be water with red food coloring in it, then stirs; the liquid gradually becomes clear again, while a voiceover announces, “In this test with Rolaids’ active ingredient, laboratory acid changes color to PROVE Rolaids consumes 47 times its weight in excess stomach acid.”

There was no way, however, to prove that the beaker had actually contained acid, or that what had been dumped into it was really Rolaids’ “active ingredient,” or indeed even that the change in color represented Rolaids’ “consuming” anything, let alone acid, not to mention how much acid. I credit that commercial with starting me on the road to becoming a philosophy professor because even as a child I found it outrageous that someone should expect I would take it as proving anything.

One of the chief duties of philosophers, I believe, is to expose errors in reasoning, and, man, were there errors of reasoning in that commercial. I learned very early that commercials were not to be trusted. Most people know that, I think. Most people know to be skeptical when, for example, a commercial claims that some detergent removes stains better than any other detergent ever invented and presents what it purports to be proof.

Most people know to be skeptical about claims made in commercials. Unfortunately, most people do not know to be skeptical about claims made in what is presented to them as “news.” That’s why I use Rampton and Stauber’s book when I teach critical reasoning. I feel it is part of my responsibility as a philosopher to alert my students to the pervasiveness of the practice of dressing up propaganda as news.

Back to the education “study.” Even if the study were genuine, the results are pretty much useless because the whole study is circular. That is, the study admittedly sought out “matched” students at “similar” institutions. It thus isn’t surprising that the for-profits come out looking better than one would expect if the selection of students and institutions had been random.

The study was conducted by a group called the Council for Aid to Education, or CAE. The “Executive Summary” (p. 2) of the study makes it very clear where the CAE stands on the for-profits. “The proprietary education sector stands at a crossroads,” it begins.

Proprietary colleges and universities are key providers of postsecondary education in the United States, enrolling over 1.7 million students. However, the sector has seen its enrollment decline since its peak in 2010 due to the growing employment opportunities following the Great Recession, the heavy regulatory burdens imposed during the last six years, and the perception that education at proprietary institutions is not on par with that offered by their non-proprietary peers.

The Council for Aid to Education (CAE) believes this junction presents a critical time to explore the efficacy of proprietary institutions and to document the student learning they support.

If there were doubt in anyone’s mind concerning the study’s objectivity, the opening of the “Executive Summary” should remove it. The CAE set out to show that the for-profits were doing as good a job of educating students as are traditional nonprofit institutions of higher education.

Of course the CAE is within its rights to do this. The problem is not so much the CAE’s clear bias in favor of the “proprietary education sector” as Inside Higher Education’s failure to expose that bias. Inside Higher Education purports to be “an independent journalism organization.” This “journalistic independence is critical,” IHE asserts in its “Ownership Statement,” “in ensuring fairness and thoroughness” of its “higher education coverage.”

The thing is, Quad Partners, “a private equity firm that invests in the education space,” purchased a controlling share of IHE in 2014. That is, Inside Higher Education is now an arm of the “proprietary education sector.” So the purported “independence,” “fairness,” and “thoroughness” of its reporting on issues in higher education appear now to be only so much more propaganda in the service of the for-profits.

Doug Lederman, the editor of Inside Higher Education, protested to me in an email, after he saw an earlier version of this article that appeared in Counterpunch, that he and the people over at IHE had had their own suspicions about that piece and that that was why they had given it only a “barebones Quick Take.”

“What confuses me,” he said,

is why you viewed our minimalist and questioning treatment of the CAE research as evidence that we are in the tank for the for-profits because our lead investor has also invested in for-profit higher education––rather than as proof that our ownership situation has changed us not at all.

I fear Lederman may be right in protesting that IHE had not been willingly shilling for the for-profits. It apparently didn’t even occur to him that the suspicions he and others had had about the study should have led them to do a full-scale investigation of it (an investigation that would have involved actually reading at least the “Executive Summary” of the study to which they included a link in their article) and to publish an exposé on the study as a piece of propaganda for the for-profits rather than a “barebones” article that presented it as “news.”

What concerns me is not so much that the for-profits are trying to manipulate public opinion to make it more favorable toward them. What concerns me is that the editors of a leading publication that reports on issues in higher education don’t have the critical acumen to identify what ought to have been readily identifiable as a piece of “fake news,” or the journalistic experience and expertise to know what to do with it once they have identified it as such.

That’s disturbing.

(An earlier version of this piece appeared in the 12 January 2017 issue of Counterpunch.)

 

 

On the Demise of the Professoriate

Michael Schwalbe’s recent article in Counterpunch, “The Twilight of the Professors,” paints a rather darker picture of the future of the professoriate than I believe is warranted. Or perhaps it would be more correct to say, paints a somewhat misleading picture of the dynamics behind the demise of the professoriate as a positive force for social and political progress.

Schwalbe is correct that the “tightening of the academic job market has intensified competition for the tenure-track jobs that remain.” He’s also correct that it is prudent for graduate students to focus their efforts on publishing in academic journals rather than in media more accessible to a general readership. Hasn’t that always been the case, though? The problem, I submit, with academic journals is not so much that their intended audience is academics as it is that most of these journals just aren’t very good. The pressure on academics is not merely to publish in academic journals, but also to edit them with the result that there are now too many of them and too many of questionable quality. Almost anyone can get published in an academic journal nowadays, but much of the material that is published in them, as Alan Sokal demonstrated to devastating effect back in 1996, is gibberish.

The situation is not much better with academic presses than with scholarly journals. Even some of the top presses are publishing material that would never have seen the light of day in earlier periods when there was greater quality control. Nearly all the emphasis in academia these days, as in the larger society, is on quantity rather than quality. Academic presses, such as Lexington Books, send out mass emails to academics, effectively trawling for book proposals. I spoke about this problem recently with a representative from the more selective German publisher Springer. “These guys are just publishing too much,” he said, smiling in a conspiratorial way.

No one can keep up with which among the proliferating academic journals and presses are actually any good, so emphasis tends to be placed on the quantity of publications a scholar can amass rather than on their quality. This means, of course, that the savvy self-promoter with little of any real value to contribute to the life of the mind can more easily carve out an academic career now than can a genuine intellectual who would have actual scruples about dressing up old insights as new ones, as well as about publishing what is effectively the same article over and over again.

The problem is not that academic journals are in principle of no popular value so much as it is that most academic journals these days are in fact of no popular value because there are just too damn many of them and most of them are no damn good. Hardly anyone actually reads them, even among academics.

It may be true, as Schwalbe observes, that graduate students are advised to craft Facebook pages and Tweets “with the concerns of prospective employers in mind,” but what does that mean? The prospective employers in question are other scholars, not university administrators. There are too many demands on the time of most university administrators for them to scrutinize the Facebook pages and Tweets of all the scholars who earn the department hiring committee’s seal of approval. The problem, I believe, is less that hiring committees are on the lookout for political radicals than that they’re too often on the lookout for people who are going to show them up. Few people are possessed of such high self esteem that they are comfortable in the company of someone they suspect might actually be smarter than they are, and academics are no exception.

The growing ranks of “contingently employed” academics are, charges Schwalbe, “further conservatizing” the professoriate. The argument that such faculty will censor their writing in order not to offend their employers sounds good in the abstract, but as is so often the case with arguments that are internally coherent, it doesn’t correspond to the facts. Some particularly fearful and feeble-minded underemployed academics may do this, but it doesn’t take long for contingent faculty to realize that most of the tenured faculty in their own departments, to say nothing of university administrators, don’t even know who they are, let alone what they are writing.

Contingently employed academics represent a growing army of educated, literate, yet grossly underpaid workers. Such a population is the ideal breeding ground for political radicalism and, indeed, some are beginning to unionize.

Demands for grant-getting, as Schwalbe observes, undeniably slant research in the sciences in the corporate direction. But most leftist public intellectuals have traditionally come from the humanities rather than the sciences.

The real threat, I believe, to the professoriate as a force for positive social and political change, comes not so much from the factors Schwalbe mentions as from things more deeply rooted in American culture such as egoism and anti-intellectualism. The egoism that is fostered by so much in American culture keeps many academics from making what appear on a superficial level to be personal sacrifices even for the good of their students, let alone for the good of society more generally (I say “on a superficial level” because faculty who make such “sacrifices” are rewarded many times over by the satisfaction of actually bettering the lives of their students and, in that way, of humanity more generally). Tenured faculty have a responsibility to help their students develop the critical, analytical and communicative skills that are essential to actualizing the distinctively human potential for self determination, but too many abdicate this responsibility because of the time and effort required to live up to it.

The professoriate is almost universally opposed to assessment. I have never been an opponent of it, however. I’m well aware, of course, that it can be abused, but it has become increasingly clear to me that at least one reason so many academics are opposed to it is that it would reveal that they are not, in fact, teaching their students much.

Some effort at assessment of student learning in the humanities could be a vehicle of revolutionary change in that it would put pressure on tenured faculty actually to teach students something, and would expose that the working conditions of many contingent faculty are such that requiring this of them is like asking them to make bricks without straw.

Assessment could be a force for radical social and political change in that implemented properly, it would make all too clear both how decades of the dismantling of the K-12 system of public education and the analogous onslaught on the funding of higher education have not simply resulted in a generation of less-than-excellent sheep, but also, as Ray Marshall and Marc Tucker argue in Thinking for a Living: Education and the Wealth of Nations (Basic Books, 1993), threaten the social and economic future of this country. In fact, assessment in higher education could have such a profoundly progressive effect that if I didn’t know better, I’d think the movement against it was a conservative plot.

It isn’t a conservative plot, though, unless conservatives are far more devious than most of us imagine and their whole sustained attack on education in general was originally designed to produce an academic job market that was so neurotically competitive it would gradually weed out academics committed to anything other than the advancement of their own, individual careers.

It’s counterproductive to demonize university administrators. There are some bad ones, of course, and their salaries, like the salaries of their corporate equivalents, need to be brought back into line with those of the individuals they supervise. It’s not university administrators, however, as Schwalbe claims, who are responsible for the purported decline in leftist intellectuals, but scarcity conditions in the academic job market that are ultimately traceable back to American egoism and anti-intellectualism. But American egoism and anti-intellectualism are problems that are far less easily solved than the largely phantom “conservatizing trends” in higher education that Schwalbe discusses in his article.

(This piece originally appeared in the 8 June 2015 edition of Counterpunch under the title “The Real Threat to the American Professoriate.”)

On the Importance of Learning a Second Language

There is an article in today’s New York Times entitled “The Benefits of Failing at French” that reminds me of a debate in the Times back in 2011 entitled “Why Didn’t the U.S. Foresee the Arab Revolts?” Six scholars, academics, political appointees and think tankers debate the issue in The Times online. They all appear to believe it is very complicated.

Jennifer E. Sims, a professor and director of intelligence studies at Georgetown University’s School of Foreign Service and a senior fellow at the Chicago Council on Global Affairs, thinks the problem is our over-reliance on foreign assistance.

Reuel Marc Gerecht, a former CIA officer, thinks it’s that we were captured by “group think.”

Vicki Divoll, a professor of U.S. government and constitutional development at the United States Naval Academy, the former general counsel to the Senate Select Committee on Intelligence and assistant general counsel to the C.I.A., thinks the president is at fault for failing to allocate sufficient resources to the CIA. But then, on the other hand, she says “no amount of resources can predict the unknowable. Sometimes no one is to blame.”

Richard K. Betts, the Arnold A. Saltzman Professor of War and Peace Studies, director of the International Security Policy program at Columbia University and the author of Enemies of Intelligence: Knowledge and Power in American National Security, thinks the problem is that “it is impossible to know exactly what will catalyze a chain of events producing change.”

Celeste Ward Gventer, associate director of the Robert S. Strauss Center for International Security and Law at the University of Texas at Austin and a former deputy assistant secretary of defense, thinks the problem is that we’re too preoccupied with “foreign policy minutiae.”

Peter Bergen, the director of the national security studies program at the New America Foundation and the author of “The Longest War: The Enduring Conflict between America and Al-Qaeda,” thinks the explanation is as simple as that revolutions are unpredictable.

There is probably some small grain of truth in each of these rationalizations. I’m only a professor of philosophy, not a professor of political science, let alone a former governmental bureaucrat, political appointee, or think-tank fat cat. It seems pretty clear to me, however, that despite all the theories offered above, the real reason we didn’t see the revolts coming was good old-fashioned stupidity. That’s our strong suit in the U.S.–stupidity. We’re the most fiercely anti-intellectual of all the economically developed nations, and proud of it! We go on gut feelings. Oh yes, our elected officials even proudly proclaim this. We don’t think too much, and on those few occasions when we do, we’re really bad at it for lack of practice.

One of the great things about Americans is that they are probably the least nationalistic people in the world. Oh yeah, they trot out the flag on the Fourth of July and for the Super Bowl, but that’s about it. A few crazy fascists brandish it throughout the year, but most people, except for a brief period after September 11th, pay no attention to them. Danes, in contrast, about whom I know a little because I lived there for eight years, plaster Danish flags all over everything. Stores put them in their windows when they have sales, they are standard decorations for almost every holiday and a must, in their small toothpick versions, for birthday cakes. This isn’t because they suffer from some sort of aesthetic deficiency that compels them to turn to this national symbol for want of any better idea of how to create a festive atmosphere. No, Danes throw Danish flags all over everything because they are incredibly nationalistic, as is just about every other European and almost everyone else in the rest of the world who’s had to fight off the encroachment of foreign powers onto their national sovereignty. We’ve seldom, OK, really never, had to do that. Still, if we, you know, seriously studied European history, we would have something of an appreciation for how basic nationalism is to the psyches of most people in the world and we could use this as our point of departure for understanding the dynamics of international relations, as well as for appreciating the obstacles to our understanding of the internal dynamics of other countries.

Years ago, when I had just returned to the U.S. after having spent the previous eight years living in Denmark, I accompanied one of my former professors to a Phi Beta Kappa dinner in Philadelphia (he was the member, not I). The speaker that evening was the former editor of the one-time illustrious Philadelphia Inquirer. His talk, apart from one offhand comment, was eminently forgettable. That one comment, however, left an indelible impression on me. This editor, who I think was Robert Rosenthal, mentioned, at one point, that he did not think it was important for foreign correspondents to know the language of the country from which they were reporting because, as he explained, “you can always find someone who speaks English.”

How do you begin to challenge a statement of such colossal stupidity? It’s true, of course, that you can always, or at least nearly always, find a person who speaks English. I don’t mean to suggest that that’s not true. The problem is, if you don’t know the indigenous language, to use an expression from anthropology, then you really have no idea whether you are being told the whole story. And the thing is, if you ever do become fluent in a second language, and more or less assimilated into a culture into which you were not born, you will know that foreigners are never given the whole story. This was clear to me as a result of my having lived in Denmark, a country with which we are on friendly terms, a country that in many ways is strikingly similar to the U.S. How much clearer ought it to be with respect to countries with which we are not on friendly terms, countries we know are either deeply ambivalent about us or outright hate us?

You will always get a story in English, certainly, from a native about what is going on in some other country, but if you don’t know the language of the people, then you aren’t really in a position to assess whether the story might be biased. You might have some idea of the social class of the person who is your source, but how are you going to know what the people as a whole think of this class, or of this individual? How are you going to know whether this person has some sort of personal or political agenda, or whether he is simply attempting to whitewash what is going on out of national pride, or a fear of being perceived by foreigners as powerless, or provincial, or intolerant?

This seems a fairly straightforward point, yet it is one that nearly all Americans miss. We generalize from our own experience. We assume everyone is just like we are, or just like we are taught to be, which usually means that we assume pretty much everyone in the world is motivated primarily by the objective of personal, material enrichment. We don’t really understand things such as cultural pride or what is, for so much of the rest of the world, the fierce desire for self-determination, so we are pretty much always taken by surprise when such things seem to motivate people. That’s the real meaning of “American exceptionalism,” an expression that is used in an increasing number of disciplines from law, to political science, to history with varying shades of meaning in each. That is, the real meaning is that our difference from the rest of the world is that we are dumber. Yes, that’s right, we are the dumbest f#*@!ing people on the face of the earth and just now, when we need so desperately to understand what is going on in other parts of the world, we are reducing, and in some instances even completely eliminating, the foreign language programs in our schools and universities.

It’s no great mystery why we didn’t foresee the Arab revolts. The mystery is why we seem incapable of learning from either history or our own experience. It doesn’t help for the writing to be on the wall if you can’t read the language.

(This piece originally appeared under the title “The Writing on the Wall” in the February 28, 2011 edition of Counterpunch.)

 

 

 

On Violating the First Amendment

A friend, Dave Nelson, who is a standup comedian and academic, jokes that the thing about being an academic is that it gives one a kind of sixth sense like the kid in the M. Night Shyamalan film, except that instead of seeing dead people you see stupid ones. He’s right about that. Sometimes this “gift” seems more like a curse in that one can feel overwhelmed by the pervasiveness of sheer idiocy. When I saw the New York Times’ piece from April 2, “Supreme Court Strikes Down Overall Political Donation Cap,” I wanted to crawl right back into bed and stay there for the rest of my life. Even the dissenting opinions were idiotic. Limits on the size of contributions to individual candidates are still intact. It’s just the overall caps that have been removed, so now while you can’t give more than $2,600 to a single candidate, you can give to as many candidates as you like. It seems the dissenters are worried, however, that the absence of an overall cap raises the possibility that the basic limits may be “circumvented.”

That sounds to me just a little bit too much like arguing over how many angels can dance on the head of a pin. “There is no right in our democracy more basic,” intoned Chief Justice Roberts, “than the right to participate in electing our political leaders.” Oh yeah? Well, if a financial contribution to a political campaign counts as “participating in electing our political leaders,” then a whole slew of Americans’ First Amendment rights are being violated all the time in that many Americans don’t have enough money to pay for the basic necessities of life, let alone have any left over to contribute to political campaigns. The rules of the political game have been written in such a way that the “participation” of the poor is limited before the process even gets started. Sure, they can attend protests, write letters, etc., etc. Or can they? What if their penury forces them to work around the clock? What if they are effectively illiterate? Even if they could do these things, however, the extent to which they could affect the political process is limited by the rules of the process itself. They have less money, so they have less say.

Philosophers are fond of invoking the ceteris paribus clause. All other things being equal, they say, this or that would be the case. The point, however, of invoking the ceteris paribus clause is to expose that all other things are not in fact equal. Ceteris paribus, limits on campaign contributions would infringe on people’s First Amendment rights. So if we think such limits do not infringe on people’s First Amendment rights, the next step is to ask why we think this. The obvious answer is that all other things are not equal. That is, people do not all have the same amount of money. Even in a country such as Denmark that has effectively wiped out poverty and hence where everyone in principle is able to contribute money to political campaigns, some people are able to contribute much more money than other people and thus able to have a much greater influence on the political process. Danes, being the intelligent people they are, understand that such an inequity is antithetical to democracy so they legislate that political campaigns will be financed with precisely the same amount of money and that this money will come directly from the government rather than from individuals.

This is, in fact, how pretty much every country in the economically developed world finances political campaigns and presumably for the same reason. Everyone who is not irredeemably stupid understands that to tether the ability of an individual to participate in the political process to the amount of money he can spend on such “participation” is a clear violation of the basic principles of democracy. If writing a check is a form of political expression, then the economic situation of millions of Americans excludes them a priori from such expression, which is to say that their rights are unjustifiably curtailed in a way that the rights of the wealthy are not. (I say “unjustifiably” on the assumption that few people would openly defend the explicitly Calvinist view that the poor are poor through their own fault.)

So the issue here is not really one of defending the First Amendment. It’s “pay to play” in the U.S. You have no money, you have no, or almost no, political voice. Pretty much everyone who is not, again, irredeemably stupid understands that. The haggling is not about protecting people’s First Amendment rights. It’s a power struggle between what in the eyes of most of the world would be considered the wealthy and the super wealthy.

But then one should not underestimate the number of the irredeemably stupid. “The government may no more restrict how many candidates or causes a donor may support,” pontificated Roberts, “than it may tell a newspaper how many candidates it may endorse.” Anyone who’s had an introductory course in critical reasoning, or informal logic, will immediately see that the analogy Roberts has drawn here is false. Roberts is using the terms “support” and “endorse” as if they are synonyms. They’re not synonyms though, at least not in Roberts’ analogy. The “support” a “donor” gives to a political candidate is financial, whereas the “endorse[ment]” a newspaper gives to a candidate is editorial. To suggest that such a distinction is unimportant is to beg the question. God help me, we must be the only country on the face of the earth where someone can make it not only all the way through law school without understanding such errors in reasoning, but all the way to the Supreme Court.

But fallacious reasoning isn’t the worst of Roberts’ crimes. Many on the left have long suspected people on the right are more or less closeted fascists. Well, Roberts has come out of the closet. Yes, that’s right. Roberts explicitly compared the removal of overall limits to campaign contributions to Nazi parades. If the First Amendment protects the latter, he asserted, then it protects the former. The analogy here is just as bad as the earlier one given that a person doesn’t have to pay to march in a parade. It’s a little more revealing, however, to those who have eyes to see.

(This piece originally appeared in Counterpunch, 4-6 April 2014.)