Dear OSU students,

So, as many of you know, sometime during the past week this happened:


These posters were found on the bulletin boards in Hagerty Hall, which houses the World Media and Culture Center, the Center for Languages, Literatures and Culture, the Diversity and Identity Studies Collective, and other centers that focus on multicultural issues. It’s also where I top off on coffee a few times a week.

I have to say that, as a white person, I was not inspired by these posters to be more proud of my whiteness. I’ve never been especially proud to be white, really. I’m proud to be a German-American, and if you are too I’d urge you to check out the American Center for German Culture or, if you have some musical talent, Columbus Männerchor. They’re very worthwhile organizations, and—unlike some white pride organizations in central Ohio—they’re not listed on the Southern Poverty Law Center’s hate map.

But the question remains: what should be done about these signs? They’re deeply offensive to a lot of people on campus, and I have no doubt that those people, and the Administration, will make their thoughts known on the subject soon. I find them deeply offensive as well, and profoundly disappointing: those among you whom I’ve had the pleasure to meet have been very welcoming of diversity, and in such a seemingly effortless way that it inspires a twinge of envy and pride in someone of my own generation. Ugliness of this sort is not what the OSU students I know stand for, and it’s not what the University stands for.

Some people will respond that this is free speech. Public universities are public, and the First Amendment applies in full force. Especially in a University setting, it’s important for people to be able to voice their ideas, however unpopular they might be.

While that argument holds considerable merit in the abstract, it overlooks a simple fact in this particular case: the empirical claims that are marshaled in favor of white supremacy are so deeply moronic that they are unworthy of being given serious consideration in an institution of higher learning. I’d go into detail, but honestly I don’t want to dignify any of the above by giving it the appearance of being one side of a reasoned debate. It simply isn’t. And in case any white supremacists out there doubt me, well, my skull is almost certainly bigger than yours, so by your own logic I must be smarter.

As I found myself wondering how such ideas could persist for many decades despite being completely devoid of any meaningful empirical support, it occurred to me that there was something that I could do, without even raising the issue of free speech: I could put these ideas in their proper context by promoting other ideas that have received the same degree of support from the scientific community.


So without further ado, I give you three original advertisements for three of science’s biggest losers: Ptolemaic astronomy, Lamarckian evolution, and cold fusion. Each one has text at the bottom pointing out that the idea in question has been totally discredited by science—as has white supremacy. They encourage interested readers to use the hashtags #EmbraceKnowledge and #RejectHate to post about this subject.

The images above link to (large) PDFs that you can download and either print out (they use a lot of ink!) or send off to a print shop. I sent mine to the FedEx shop just off of High Street, which charged me the flyer rate of 69¢ per copy and produced really lovely copies on nice, thick paper. Feel free to post them next to any white supremacist signs that you see on campus, or put them up in your dorm room or on your door. If you do, or if you see one, please snap a photo of it and add the hashtags.

Embrace knowledge. Reject hate.

Actually, Academic Research is More Relevant Now Than It Has Ever Been

A Washington Post opinion article by Steven Pearlstein entitled “Four tough things universities should do to rein in costs” got a lot of buzz on social media yesterday. The attention paid to it isn’t surprising: it’s election season, after all, and skyrocketing student debt numbers have focused candidates’ attention on the cost (and value) of higher education. While that’s not a bad thing in and of itself, many of the arguments leveled by college critics don’t stand up to close scrutiny, as Dan Drezner usefully points out in his Post column today.

One point that Dan makes deserves further elaboration. In the original article, Pearlstein writes the following:

“The vast majority of the so-called research turned out in the modern university is essentially worthless,” wrote Page Smith, a longtime professor of history at the University of California and an award-winning historian. “It does not result in any measurable benefit to anything or anybody. . . . It is busywork on a vast, almost incomprehensible scale.”

The number of journal articles published has climbed from 13,000 50 years ago to 72,000 today, even as overall readership has declined. In his new book “Higher Education in America,” former Harvard president Derek Bok notes that 98 percent of articles published in the arts and humanities are never cited by another researcher. In social sciences, it is 75 percent. Even in the hard sciences, where 25 percent of articles are never cited, the average number of citations is between one and two.

That is true, in the sense that those numbers do appear in Bok’s book (though it is worth noting that Bok’s overall assessment of higher education differs dramatically from Pearlstein’s). As Dan points out, however, the numbers are “badly outdated, relying on a study that first appeared in 1990 and compares apples and oranges.”

I’d go a bit farther: not only are the numbers outdated, but the general claim—that an increased volume of academic research has produced a corresponding decline in relevance—is exactly backward.

In a recent issue of the Journal of the American Society for Information Science and Technology, Vincent Larivière, Yves Gingras, and Éric Archambault take on the impressive task of tallying citations per article from 1900 to the present, using data from Thomson Reuters’ comprehensive Web of Science. (An ungated copy of their article is here.) The authors examine the number of citations that an article has received two and five years after publication. They find that, despite the remarkable increase in the number of journals published, the percentage of papers receiving at least one citation has been steadily climbing for decades. Their Figure 1 demonstrates that this trend holds across all fields except the humanities (in which, as they note, scholars are far more prone to cite books than articles).


These trends shouldn’t come as much of a surprise to, well, anyone who reads academic journals. Most journal articles generate far more citations than they receive, so the increase in the number of academic journals over time has produced far more citations than it has citable articles. As Larivière, Gingras, and Archambault also point out, those citations increasingly go to more specialized articles in more specialized journals—a welcome indicator of the growth of knowledge.

It’s still the case, of course, that a significant number of articles go uncited. But anyone who knows the Web of Science knows that it tracks citations in some really obscure journals: any academic can point to a significant number of journals in his or her own field that he or she has never even heard of. Such journals contribute disproportionately to the number of uncited articles.

Even taking those articles into account, though, the numbers tell a pretty unambiguous story: if we use the percentage of uncited articles as an indicator of the irrelevance of academic research, as Pearlstein himself does, academic research is more relevant now than it has ever been before.

Politics and the LaCour Scandal

One of the first things that I tell students in my Data Literacy and Data Visualization course is that, when they walk in the door, they should leave their ideological predilections behind. I don’t care whether they’re Sanders socialists or Rockefeller Republicans—the point of the class isn’t to learn how to support Team Red or Team Blue. Our goal, to borrow a beautifully succinct subtitle, is to achieve “a fact-based worldview.”

I’m far from alone in this pursuit. While my colleagues study politics for a living and often take clear positions on specific issues, the overwhelming majority are pretty circumspect about expressing any sort of party affiliation. To some extent, I think that’s because our worldviews don’t map very well to party platforms. For the most part, though, we realize that impartiality is essential to our ability to function as researchers and educators. That’s why, in my “How to Lie with Data Visualization” lecture, I point out the disingenuousness of both Washington Monthly’s change-in-change-in-unemployment graph and the Heritage Foundation’s “26 months of gas prices” graph. It’s important for young citizens to recognize that the truth has no political affiliation.

That’s why I was incensed after reading the Wall Street Journal’s editorial, “Scientific Fraud and Politics.” The Journal pounces on the LaCour scandal, arguing that the findings got a free pass into Science magazine in part because they “flattered the ideological sensibilities of liberals.” The editorial then generalizes from this one instance in such a breathtaking manner that “sweeping” doesn’t quite seem to do it justice:

Similar bias contaminates inquiries across the social sciences, which often seem to exist so liberals can claim that “studies show” some political assertion to be empirical. Thus they can recast stubborn political debates about philosophy and values as disputes over facts that can be resolved by science.

It’s easy to dismiss the Journal’s editorial page as being rather extreme (or, more to the point, just terrible to the point of irresponsibility). But to do so misses the real importance of the issue. This argument will almost certainly come up again and again in the run-up to the 2016 elections. It will be a talking point for any politician whose positions are inconveniently at odds with scientific findings. To the extent that it resonates with voters, it will further degrade the role of scientific knowledge in guiding public policy. Worse, and perversely, the discrediting of science may give more weight to policy arguments that specifically run contrary to scientific findings.

Fortunately, there are two major holes in the Journal’s reasoning. The first is that, as Gary King argued, this is how science actually works. The fact that something like the LaCour-Green study can be discredited is crucial: as Karl Popper famously argued, a science is only a science if its claims can be disproved. That studies can be challenged and their findings overturned should increase our confidence in the findings that survive the process.

Second, the majority of studies prior to LaCour and Green (2014) pointed to a very different conclusion regarding the ability of canvassers to change people’s minds. As Green himself put it,

Conventional wisdom was that a canvasser might prompt you to rethink your stance on a controversial issue for a few days at most, but that once you went back into your social milieu, your opinion would snap back into accordance with your preexisting views.

If, as the Wall Street Journal editorial suggests, ideological biases are endemic in academic studies, why is it that such a large body of academic literature prior to LaCour-Green pointed to conclusions that were not flattering to those biases?

Reading professional journal articles on the iPad

I’ve been able to read the New York Times on my iPad for years, so I suspected that it was only a matter of time (and more time… and still more time…) before I’d be able to read and process professional journal articles on it as well. After fiddling around with a dozen or so different apps, each of which has its own issues, I’ve finally come up with an efficient workflow that utilizes the iPad to its best advantage. I describe it below in the hopes that I can save you the time and effort of uncovering it yourself.

First, you should get a free Zotero account. Zotero is a cloud-based bibliography manager. It’s free (up to a certain amount of storage, after which there’s a monthly charge), the application is smart, and it exports bibliographies into other formats, like BibTeX. There is also an app for your desktop or laptop. Your online Zotero library will be where all of your citations and marked-up PDFs end up.


The next step is to install Papership. Papership is an iPad interface to your Zotero account. It’s efficient, it syncs automatically, and, most important, it can usually extract the citation information from a PDF that you import to it. (See update note below!)


Next, purchase BrowZine. BrowZine is the breakthrough app that makes all of this possible. It lets you set up a library of journals that you read regularly. When you open the app it tunnels through all of your library’s authentication windows to figure out which of those journals has new articles. You can read them immediately or save them for later. This is fantastic! And the ThirdIron support team is incredibly helpful if you run into problems.

The last app to purchase and install, if you haven’t already, is iAnnotate. This is my favorite app for marking up PDFs. As my students well know, it can capture voice annotations as well as jotted marginal notes. It’s indispensable for taking notes and highlighting key passages on papers, dissertation chapters, journal articles, what have you.


Once these four things are in place, the workflow looks something like this:

  1. Start BrowZine and set it up with your library sign-in information. Then add journals to your virtual “bookshelf.” A count of new articles will appear over each journal.
  2. Save interesting articles for later in BrowZine, or send them from BrowZine to iAnnotate.
  3. Mark key passages, record reactions, etc. in iAnnotate.
  4. Send file from iAnnotate to Papership,* which should capture bibliographic information and save both the citation and the PDF to your Zotero library.
  5. Return to BrowZine and select another article.

That’s it. Your annotated PDFs will automatically be saved in your Zotero library, where you can refer to them months or years later. And it’s all as quick as the blink of an eye.

Addendum: I forgot to note that Papership stores your files in an “inbox” rather than in your Zotero library. To move an item from the inbox to a folder in your library, use your finger or stylus to drag the item to the left until an option appears to copy or move the item. Then simply select the folder to which you’d like to move it.

UPDATE: I’ve discovered what looks like a nasty bug in Papership: bibliographic entries that are imported to its Inbox don’t appear in your Zotero library, and when you delete these entries there is no impact on your Zotero library. But papers imported to your Zotero library show up, for some reason, in your Papership Inbox, and when you delete these entries to get them out of your inbox, they are also deleted from your Zotero library. I’ve just permanently deleted a few dozen entries from my library by trying to get all of the clutter out of my Papership inbox. For the moment, until Shazino can get Papership to stop doing this, I’ll be using a different method for getting annotated PDFs into my Zotero library.

Specifically, for the moment I’m exporting from BrowZine directly to my Zotero library, switching to an app called ZotPad, opening the PDF in ZotPad, exporting to iAnnotate, marking up, re-exporting to ZotPad, and saving. It’s a clunkier workflow, but it avoids Papership’s Inbox issues.

*Note: The command for sending a file from iAnnotate to another program is not immediately obvious. Here’s how to do it: hold your finger or stylus down on the name tab at the top of the document until a list of commands pops up. Select “Share.” Another list of commands will then pop up. Select “Open in…” and choose the target application.

In Defense of Research Notes

Research notes have all but died in political science journals. I think that’s a bad thing.

Back in 2007, I noticed a subtle but very significant problem with a well-established social science methodology. This methodology is not really central to my research agenda, though, and I had other things to work on. So I set it aside. Last year, a simple fix occurred to me, so I tried a few simulations and it worked quite well. I realized that it wouldn’t really take much time to point out the problem or to provide a useful remedy, so I wrote up a brief (<10pp.) description of the problem and the solution and sent it off to an appropriate journal. It was promptly desk-rejected. The editor wrote, in part, that “very short ‘research notes’ tend to not fare very well with our reviewers.” He suggested additional simulations and “comprehensive application to applied problems.”

I respect the editor very much, and to be fair I had half expected an outcome like this. But it still stinks. The method is widely used: the work that introduces it has nearly 2,000 citations. The problem, once you see it, is obvious. The solution is a quick test straight out of a first-semester statistics class. This just isn’t rocket science. The long and the short of it is, this thing just doesn’t merit extended treatment.

In most disciplines, that’s a recipe for a research note—a short memo to other researchers that says, “Hey, here’s something that could be useful.” Political science journals publish very, very few of these, and I can’t fathom why we don’t. It’s certainly not because every project we conceive of merits 30 pages.

In fact, research notes have a lot of advantages. They leave more space in a journal for our colleagues’ work. They require less time and effort from reviewers. They reward brevity. They allow authors to focus on the point of the article. As a reader, I seek them out. As an author, I’d gladly publish more of them.

I understand that our disciplinary culture tends not to reward research notes, but I’d urge editors and editorial boards to work toward changing that situation. Specifically allowing for length-limited research notes in journal submission guidelines would disallow rejection based on length alone. It would save space and allow more of us to be published in better journals. It would also encourage researchers to publish short pieces that they might not otherwise submit and which could be of significant use.

To underscore the importance of the latter point, consider this: Did you find yourself wondering whether the problem I describe above affects your work? It very well might. If this little research note never gets published, though, you’ll never know for sure.

Thoughts on Academics and the Public Sphere

Following Nicholas Kristof’s provocative call to social scientists to be more engaged in public debates (“Professors, We Need You!”), Ezra Klein has weighed in with a thoughtful riposte (“The Real Reason Nobody Reads Academics”). To a much greater degree, I think, Klein hits the nail on the head: even interested journalists have a hard time keeping track of academic insights because distribution of those insights via journals is costly and highly inefficient.

The problem I see with this point, though, is that journals aren’t meant to serve as vehicles for specialists to communicate with nonspecialists. For the most part, they serve as a collective repository for basic research in the social sciences. And that’s exactly as it should be. As a recent article in the Boston Globe points out,

There is a huge zeitgeist for research that translates existing knowledge into cures, treatments, and technologies. That’s in part because it’s easy to explain the relevance to the public — it might cure Alzheimer’s or cancer or lead to a technology that transforms society and creates jobs. Who could argue against those lofty goals?

But the idea that marshalling existing knowledge into products will solve the biggest problems facing society is naive. … The point of science is to make discoveries, and if it were already known which areas would yield insights that would be useful, scientific inquiry wouldn’t be necessary.

Don’t get me wrong: We could (and should!) create something like arXiv, the archive and distribution website for a handful of mostly hard-science disciplines. Indeed, the Society for Political Methodology does a great job at this for articles on methodology, and I’d love to see that model applied to political science more generally. But it wouldn’t help journalists all that much, because most of the research would be theoretical—the raw material from which applied insights can later be mined.

Moreover, as this point suggests, relevant academic insights are rarely published contemporaneously. Most of the theoretical material that academic commentators draw on was published years ago. One of the more relevant insights for America’s response to the situation in Ukraine, for example—to my mind, anyway—comes from Jim Fearon’s article, “Selection Effects and Deterrence” (International Interactions, 2002). Jim points out that deterrent threats issued during crises will tend to fail because the initiator will already have taken them into account. Worse, the same logic leads to the conclusion that the most credible threats are exactly the ones that are most likely to fail—not because of their credibility or any lack of resolve on the part of the issuer (got that, Lindsey Graham?) but because of the situations in which they’re issued. This isn’t a contemporary article by any means, but the insight is important for American policymakers and commentators alike. It’s difficult to imagine someone who isn’t an academic knowing that it exists. For that reason, it’s difficult to imagine a means for journalists to apply academic insights that doesn’t involve academics.

At present, I think two models for doing what Klein wants have evolved that make a lot of sense. The first, as he notes, is the academic group blog—The Monkey Cage being one of the best examples (but don’t miss Duck of Minerva, Political Violence @ a Glance, and others). These give academics the freedom to contribute only when we really have something to say.

The second model is the University Office of Communications, which can do a remarkable job of translating and disseminating research findings when they are relevant to current affairs. From what I can tell, surprisingly few social scientists seem to make use of this office. If ours is representative, they will read your paper, write up a press release in plain English, and post it wherever such things get posted so that journalists will find them. Their ability to convey the importance of our findings and to make contact with journalists is a tremendous asset, one of which we should avail ourselves when the opportunity arises.

So while Klein is right about the problems that academic journals represent for engaged journalists, I’m not sure that solving those problems would benefit journalists as much as it would academics. We need to accept at least some of the responsibility for our own obscurity and take steps to rectify it when we can, and those steps need to be recognized as valuable by our institutions. At the same time, the public (and Congress) needs to recognize the value of basic research that doesn’t have clear and immediate applications.

On Left-Handed Latino Republicans and Interaction Terms

About a decade ago, I wrote an article on interaction terms. In it, I tried to clear up some common misperceptions about how interaction terms can and should be used in regression (logit, probit, etc.) equations. A fair number of people seem to have heeded most of the advice in the article, but in retrospect I realize that I should have put a much greater emphasis on one very important topic: incomplete or overlapping sets of variables in interaction terms.

Political scientists (and, as far as I can tell, uniquely political scientists) have created and sustained the illusion that we can pick and choose which variables to interact in our statistical models without concern for omitted variable bias. Do you want to understand the impact of being a left-handed Latino Republican on how much the respondent likes the President? Multiply left-handed by Latino by Republican and include that in your regression. Don’t worry about including a variable for left-handed Republican, or left-handed Latino, or even left-handed, because they’re not part of your theory. By the same logic (the thinking goes), if you have a theory about left-handed Latinos and left-handed Republicans, go ahead and put those two interactions in there without including left-handed Latino Republicans, because they’re not a part of your theory.

Boo, political scientists. Booooooooo.

The short version is this: When creating interaction terms, always include every possible lower- and higher-order term in your model. If you’re interacting x1, x2, and x3, include x1x2, x1x3, x2x3, x1, x2, and x3 in the equation. If you’re interacting x1 and x2 and, in the same equation, interacting x1 and x3, do the same thing—include everything all the way up to x1x2x3.
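Mechanically, the full set is easy to generate. Here’s a minimal Python sketch (the function name is my own invention) that lists every main effect and every product term for a set of interacted variables; R-style formula interfaces perform the same expansion automatically when you write `x1*x2*x3`.

```python
from itertools import combinations

def full_interaction_terms(variables):
    """List every main effect and product term, lowest order to highest."""
    terms = []
    for order in range(1, len(variables) + 1):
        for combo in combinations(variables, order):
            terms.append(":".join(combo))  # "a:b" denotes the product of a and b
    return terms

print(full_interaction_terms(["lefthanded", "latino", "republican"]))
# 7 terms in all: 3 main effects, 3 two-way products, 1 three-way product
```

If any term in that list is missing from your specification, the coefficients on the rest no longer mean what you think they mean.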


Think of a coefficient as reflecting some difference between the category you’re interested in and an excluded category. In the example above, the coefficient on left-handed x Latino x Republican reflects the difference between left-handed Latino Republicans and other people.

The key point is… who are these other people?

Obviously, the excluded category includes right-handed Black Democrats. Not so obviously, it includes left-handed people who either aren’t Latinos or aren’t Republicans or both. It includes Republicans who either aren’t left-handed or aren’t Latinos or both. It includes Latinos who either aren’t left-handed or aren’t Republicans or both.
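You can make the mess concrete by enumerating it. A quick sketch, treating the three traits as 0/1 dummies:

```python
from itertools import product

# All 2**3 combinations of (left-handed, Latino, Republican) as 0/1 dummies.
cells = list(product([0, 1], repeat=3))

# A lone triple-interaction dummy splits off exactly one cell;
# every other cell lands in the excluded category.
excluded = [c for c in cells if c != (1, 1, 1)]
print(len(excluded))  # 7 quite different kinds of "other people", pooled together
```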

In short, your excluded category is a mess.

“Fine, fine,” you say, “But obviously, the coefficient on left-handed Latino Republican still estimates the effect of being a left-handed Latino Republican, right?”

No, it doesn’t. Because you haven’t controlled for just being a Republican. Or being a left-handed Republican. And so on. And the absence of those controls can produce misleading inferences.

Let’s imagine that Republicans really dislike the President but Latino identity, handedness, and all combinations of the three are irrelevant. You could very well get a significant coefficient—a false positive—on your variable just because left-handed Latino Republicans are more Republican.

Alternately, let’s imagine that left-handed Latino Republicanness really does change people’s attitudes about the President, but that on their own, Republican-ness pulls in one direction, left-handedness pulls in another, and Latino-ness doesn’t matter. You could very well end up with a null result—a false negative—just because your excluded category is so heterogeneous.
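The false-positive scenario is easy to check by hand. The population below is hypothetical, and deliberately noise-free: the outcome depends only on Republican identity. Because a regression on a single 0/1 dummy just estimates a difference in group means, the coefficient on the lone triple interaction still comes out nonzero.

```python
from itertools import product
from statistics import mean

# Hypothetical population: one person per combination of the three binary
# traits. The outcome depends on Republican identity alone.
people = [
    {"lefty": l, "latino": a, "rep": r, "y": -2.0 * r}
    for l, a, r in product([0, 1], repeat=3)
]

triple = [p["y"] for p in people if p["lefty"] and p["latino"] and p["rep"]]
others = [p["y"] for p in people if not (p["lefty"] and p["latino"] and p["rep"])]

# With only the triple-interaction dummy in the model, the OLS coefficient
# equals this difference in means.
naive_coef = mean(triple) - mean(others)
print(naive_coef)  # -8/7: an apparent effect, though the triple trait does nothing
```

Control for Republican identity (and the other lower-order terms) and this spurious difference disappears.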

So, no. Your excluded category is such a friggin’ nightmare that your coefficient and significance level tell you precisely nothing about the quantity that you’re trying to estimate.

Let that sink in for a moment. Precisely nothing.

So next time you’re reading a journal or watching a presentation at a conference, keep an eye peeled for omitted terms in interactions. You’ll be surprised at how many of the results out there really don’t tell you anything at all.

Networking at Conferences—My Personal Take

After tossing out a brief opinion on the value of networking at conferences like APSA (nutshell: say smart things, don’t worry about networking per se), I was surprised to find that I’d become part of a controversy on the subject. Will Moore both indulges his considerable intellectual curiosity—

I invite Professors Saideman, Drezner, Voeten, and Braumoeller to make arguments about why a belief that (1) the distribution over these things is uniform, or (2) that the value is unlikely correlated across supply and demand is superior to my belief that they are varied.  I have often found, upon reflection, that my beliefs are poor, and will do my best to update, provided compelling reasons to so.

—and vents his spleen—

Here I will snark: can you mainsplain that to me, because I surely won’t be able to figure that out on my own.  Yeah, you fuktup.

—in an extended post on his blog.

I’m happy to respond. I probably shouldn’t do so just yet, since Dan Drezner already showed some interest in the exchange:


Since Dan blogs professionally, and he usually has insightful responses to questions like this, and it looks like he’s pretty motivated, my best option would probably be to get back to working on my syllabi, wait until Dan posts something, and chime in with, “Yeah, what Dan said!” But the time and effort that Will put into elucidating his position deserve a reply in kind.

Will’s main point is that, along with others, my post is “guilty, at a very high level of generality, of … (implicitly) believing that advice drawn from your experience is universally valuable.” That belief, he argues, is not unrelated to the fact that I’m a tenured white male with a degree from a top department.

Without getting into the impact of tenure, race, sex, or locus of degree on self-centeredness—I don’t study them, and I don’t presume to be an expert—I’ll point out a few things that suggest a different interpretation of my remarks.

  1. They start with the words “My two cents.” That’s one way in which authors generally indicate that their perspective may not be generalizable or universally valid.
  2. They appeared in a private post on my Facebook page rather than in a public post on my blog. I do have a blog. I don’t use it much, nor do I send in many submissions to blogs like the Monkey Cage, though I have sent in one or two. That’s because I think our discipline is most credible when we keep our powder dry and only opine publicly when we’ve put in enough hard work to make our opinions worth reading.
  3. Even when I do blog, I invariably use the words “I think…” or “My sense is…” to indicate that I’m offering my own perspective, not presuming to offer a universally valid one.

I address this point largely because it’s the one that seemed to upset Will the most. Will’s and my relationship falls into a category that probably seems odd to most people but very familiar to academics: we get along pleasantly enough, and I’d enjoy seeing him arrive at my table at the conference-hotel bar, but neither of us has been to the other’s house or anything of the sort. I don’t think English has a very good word for that sort of relationship, really, but the upshot is, I like the guy enough not to want to upset him unnecessarily.

I don’t think the point is really relevant to the discussion of networking, though, because on that point, I think we’re simply talking past each other. As I pointed out in a comment to that same Facebook post—a response to Will, in fact—, I was discussing the value of networking at APSA, not the value of having a professional network.


I’ll elaborate. In my experience, when graduate students talk about “networking” at big conferences like APSA, they’re talking about meeting fairly senior and well-known people for the sake of meeting them. My own sense is that there isn’t much value to that practice. Despite being skeptical, I did try it myself once as a graduate student. The response reminded me of stories I’d read about Lyndon B. Johnson. I never tried it again. As a professor, I’ve spent some very pleasant social hours at conferences with various graduate students, but those evenings don’t really have any weight when it comes to hiring decisions and the like.

That’s not to say that people can’t form useful connections at conferences. But any time I’ve noticed someone, or felt as though I’ve been noticed, it hasn’t been through conscious effort. It’s been the result of a really smart comment, discussion, or presentation. And it’s not to say that networking via other means isn’t valuable: I think conferences like Journeys in World Politics do a terrific job of helping people make connections to more senior scholars who can advise them on their work and their careers.

(As a quick aside, Steve Saideman used “networking” in his response to Will in a way that I hadn’t considered—peer-to-peer networking, or networking with other junior scholars. I actually think there’s immense value in that practice. I just hadn’t considered it.)

So with all this in mind, when I re-read the passage in which Will writes, “my claim is that neither the value of the network itself, nor the potential return from a given unit of networking activity, are uniform,” I’m not sure we actually disagree at all. I don’t think networks are equally valuable across scholars. I don’t think the potential return from network-building is the same for different people. I do think the best return, at conferences like APSA, comes from saying smart things… but as I wrote earlier, that’s just my two cents.

The Prolific Comma

Over the course of the past couple of years, I’ve read quite a few application essays. Most are very thoughtful. Some are inspiring. All are written by smart people. Yet many, if not most, suffer from a single pathology: their authors sprinkle commas through their sentences with unbridled enthusiasm.

To be honest, I think the problem is that our students read too much of our bad writing. Before long, they start to emulate the epic, serpentine sentences that span line after line of our turgid journal articles. They never learned the rules for punctuating such sentences because, beyond a certain point, those sentences shouldn’t be punctuated—they should be taken out behind a barn and shot. We can’t blame them for scattering commas like talismans to ward off incomprehensibility.

That’s not to say that we can’t do something about it. As an antidote, I highly recommend Lynne Truss’s Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation. For those unwilling to read even that slim volume, there’s a useful cheat sheet over at WikiHow. And for graduate students and professors alike, Michael Billig’s Learn to Write Badly: How to Succeed in the Social Sciences, while not yet out, sounds awfully promising.

Good luck to all of you. And on behalf of all of us, we’re sorry.

Criticism and the Growth of Knowledge

(with apologies to Lakatos and Musgrave)

When you’re a professor, you’re likely to get drawn into the occasional discussion about the high price of a college education. One common response is, “If you think education is expensive, try ignorance.” As the Freakonomics guys demonstrated, that’s also a correct response: college degrees generally pay for themselves in well under a decade. But it’s also oversimplified, and a bit insulting, since education and ignorance aren’t complete opposites.

I think a better answer is this: Whatever the major, and whatever the university, education imparts a respect for knowledge, and that respect is one of the most valuable traits a person can have.

Note that I did not write, nor did I mean, that education imparts a respect for people with knowledge. It doesn’t, and it shouldn’t. It teaches us that knowledge deserves even more respect than people who have a lot of it. Knowledge is the cornerstone of responsible citizenship, the fuel for the engine of a creative economy, and a hell of a boon when working a crossword puzzle. More than that, though, respect for knowledge is crucial for success and personal happiness.

Why is that? Simply put, when you respect knowledge sufficiently, you don’t fear criticism, and fear of criticism is a powerful barrier to growth. My grandmother once told me, “There’s no such thing as constructive criticism.” With all due respect to Grandma, she was totally wrong. A healthy chunk of my job involves constructive criticism—pointing out the ways in which people can take a good idea or a good piece of work and turn it into a great one. At the same time, I subject my own work to the scrutiny and criticism of others. Far from being impossible, constructive criticism is essential to the growth of knowledge.

Does it feel good when someone points out a gaping hole in your argument? Of course not; the embarrassment does sting a bit. But it’s far outweighed by the rush of knowing that there’s a better answer out there, and that if I find it, I’ll have a much more compelling response to the question I’m trying to address. Of course, criticism can be unnecessarily dickish, even scathing, even though we all receive it as well as give it. After a while, though, you learn to ignore all that and ask yourself, “Does this person have anything valuable to tell me?”

Putting knowledge over ego is a fantastic way to get ahead in the world. To take a simple example, I occasionally get asked by chefs what I think of the meal I’ve just eaten. If I sense that they’re just looking for validation, I’ll say something blandly positive. Most of the time, though, I gather they’re looking for an honest answer, and I give it to them. Some of them thank me for giving them candid feedback. Some don’t. The ones who do generally succeed, in a very tough business—not because they listen to me, but because they listen. They understand that feedback, even if it’s not always perfect, is vital for success. I strongly suspect that, if you talk to nearly any successful business owner, you’ll find the same attitude.

The same principle even applies to our relations with each other. Do you like to hear that you’ve got bad breath? I doubt it. But you’d probably rather know it, and be able to do something about it, than walk around in a halitosic miasma. Yet for some reason the same principle doesn’t often apply to our political beliefs, or our moral judgments, or our grammar and punctuation, or our fashion choices. It should. We might actually find more common ground with each other. And somebody might even tell those people in Crocs to get a proper pair of shoes.