Professor Bear F. Braumoeller
The Ohio State University
https://www.braumoeller.info

Three Great LaTeX Tools for Academics
https://www.braumoeller.info/2018/08/18/three-great-latex-tools-for-academics/
Sat, 18 Aug 2018

Love it or hate it, LaTeX has become the go-to writing tool for academics in the sciences. It does have some undeniable advantages, including great citation and bibliography management, terrific templates, and unparalleled control over equations and tables. It also produces documents that look ridiculously good. But it leaves more than a little to be desired as a writing environment, and its built-in facilities for versioning and comments are rudimentary at best.

Fortunately, there are a few really handy tools out there that are easy to set up and use.

LaTeXdiff

LaTeXdiff is a great tool for highlighting changes from one version of a manuscript to the next. Text from the first version that’s been deleted in the second is formatted in red and crossed out. New text is formatted in blue and underlined. I ask my graduate students to use it for successive drafts of dissertation chapters, and it’s a huge time-saver.

Example of LaTeXdiff output from the online LaTeXdiff tool.

The simplest way to use it is to go to the online LaTeXdiff tool at https://3142.nl/latex-diff/, paste your old and new versions into the two windows there, and click “Generate LaTeX document showing differences.” If that doesn’t work for some reason, many LaTeX distributions come with LaTeXdiff built in. To see whether yours does, open a Terminal window (Mac) or the Command Prompt (Windows) and type

latexdiff -V

at the prompt. If that doesn’t produce an error, navigate to your document directory and type

latexdiff version1.tex version2.tex > versionsdiff.tex

to produce a differenced version of your manuscript. If latexdiff -V does produce an error, well, installation is going to be a bit of a pain, but Dean Bodenham at ETH Zurich has a handy set of installation instructions here.
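If you’re curious what the differenced file actually contains: latexdiff rewrites the changed passages using markup commands that it defines in an injected preamble. The sketch below is my simplification, not verbatim latexdiff output—the real injected preamble is longer and varies by version—but it conveys the idea:

```latex
% Simplified sketch of latexdiff-style markup (not verbatim output).
\documentclass{article}
\usepackage{color}
\usepackage[normalem]{ulem}
% latexdiff defines commands roughly like these in the preamble it injects:
\providecommand{\DIFdel}[1]{{\color{red}\sout{#1}}}   % deleted text: red, struck out
\providecommand{\DIFadd}[1]{{\color{blue}\uwave{#1}}} % added text: blue, underlined
\begin{document}
The dependent variable is \DIFdel{war onset}\DIFadd{militarized dispute onset}.
\end{document}
```

Because the output is just another .tex file, you compile it exactly as you would the manuscript itself.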

todonotes Modifications

The todonotes LaTeX package lets you add marginal notes to your manuscript. It’s OK. My main complaint is that, thanks to the default typeface size and spacing, you can’t really fit much text into a note. I went looking for a simple fix to that problem and, thanks to the generosity of user MLC in this post on StackExchange, found five of them. If you add the following lines—

\usepackage{xargs}  % Use more than one optional parameter in new commands
\usepackage[pdftex,dvipsnames]{xcolor}
\usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes}
\newcommandx{\unsure}[2][1=]{\todo[linecolor=red,backgroundcolor=red!25,bordercolor=red,#1]{#2}}
\newcommandx{\change}[2][1=]{\todo[linecolor=blue,backgroundcolor=blue!25,bordercolor=blue,#1]{#2}}
\newcommandx{\info}[2][1=]{\todo[linecolor=OliveGreen,backgroundcolor=OliveGreen!25,bordercolor=OliveGreen,#1]{#2}}
\newcommandx{\improvement}[2][1=]{\todo[linecolor=Plum,backgroundcolor=Plum!25,bordercolor=Plum,#1]{#2}}
\newcommandx{\thiswillnotshow}[2][1=]{\todo[disable,#1]{#2}}

—to the preamble of your LaTeX document, it’ll create a handy little suite of new commands (\change{}, \unsure{}, etc.) to generate marginal notes that are correctly sized, attractive, and functionally differentiated by color. If you don’t like the colors, it’s easy to change them.
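Once those lines are in the preamble, using the notes is a one-liner wherever you want a comment. Here’s a toy example of my own (the sentences are made up, and I’ve trimmed the pdftex driver option, which modern engines don’t need); \listoftodos is what generates the summary page of notes:

```latex
% Toy example using the commands defined in the preamble snippet above.
\documentclass{article}
\usepackage{xargs}
\usepackage[dvipsnames]{xcolor}
\usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes}
\newcommandx{\unsure}[2][1=]{\todo[linecolor=red,backgroundcolor=red!25,bordercolor=red,#1]{#2}}
\newcommandx{\change}[2][1=]{\todo[linecolor=blue,backgroundcolor=blue!25,bordercolor=blue,#1]{#2}}
\begin{document}
Democracies rarely fight one another.\unsure{Will this claim survive review?}
The next section sets up the research design.\change{Move the case study here?}
\listoftodos[Summary of notes] % one-page summary of all notes and their locations
\end{document}
```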

User MLC’s example of different kinds of marginal notes.

Summary page of notes and their locations.

 

LanguageTool Integration

This is the trickiest of the three tools to use, but it’s well worth it. I was inspired to look for it when I noticed Blurt, a really cool-looking little writing app with a very clean interface. Blurt flags spelling mistakes, as most programs do, but it also issues warnings about usage, grammar, and style. Could this be useful for academics?

Writing in the social sciences is so notoriously terrible that there’s actually a book about it.

Mmmmmmmmaybe. Unfortunately, there aren’t any LaTeX environments that I know of that do the same thing out of the box.

Enter LanguageTool. LanguageTool is a powerful general-purpose proofreader with a downloadable desktop version. The ability to use it is built into TeXStudio, which happens to be my LaTeX editor of choice. (If it’s not yours, it’s worth a try.) Downloading, activating, and connecting LanguageTool to TeXStudio allows TeXStudio to flag a wide range of flaws in your writing, from usage and grammar to sentences and paragraphs that are just too damn long.

LanguageTool, implemented in TeXStudio.

When you fire up LanguageTool’s desktop program, you can easily specify the rules that you want it to apply when it’s proofreading your writing.

The LanguageTool interface.

The price for all of this awesomeness is a bit of hassle when it comes to setup. LanguageTool’s desktop program is written in Java, so you’ll need the most recent version of the Java Development Kit to make it work. That last sentence is very important: you can get the newest version of Java from quite a few sources, but most of them will not update the version of Java that your computer uses from the command line, and that’s the version TeXStudio uses to connect to LanguageTool. Only the Java Development Kit replaces the command-line version as well.

If you don’t screw that up like I did, installing LanguageTool and connecting it to TeXStudio should be pretty straightforward:

  1. If you don’t have the most recent version of TeXStudio, go to the TeXStudio download page, download it, and install it.
  2. Go to the Oracle Java Development Kit download page and download the correct version of the JDK for your operating system. Install that too.
  3. Go to the LanguageTool homepage, scroll to the bottom, and select “Download Desktop Version.” Uncompress the resulting file and put it wherever you’d like. It makes sense to put it in the Applications folder on a Mac, for reasons that will become clear in a moment.
  4. Open your LanguageTool file and find languagetool.jar. Double-click it to run the desktop version. Click on the “Text Checking” menu and select “Options…” to set your preferred language and grammar/style rules.
  5. Open TeXStudio, select Preferences, and check “Advanced Options” at the bottom. That should create a “Language Checking” tab. Click on it.
  6. In the space for “LT Path,” type the path to your standalone languagetool.jar file. At the time of this writing, mine is /Applications/LanguageTool-4.2/languagetool.jar (see why it’s handy to have it in the Applications folder?).
  7. Make sure that “Start LanguageTool if not running” is checked and the Server URL, Java, and LT arguments fields are all filled. My values for those are
    • Server URL: http://localhost:8081/
    • Java: java
    • LT Arguments: org.languagetool.server.HTTPServer -p 8081
  8. Save preferences and quit all programs.

If you’d rather watch these steps than read them, this handy video by YouTube user AtwoZi shows you how to do it.

Once you’ve done all this, you should be set up. TeXStudio will launch LanguageTool when it runs and talk to it via a local server port. That said, for some reason LanguageTool works much better for me when I launch languagetool.jar myself before opening TeXStudio rather than letting TeXStudio do it. My guess is that the server version of LanguageTool isn’t picking up the preferences from the desktop version (why? Beats me), but both versions grab port 8081, so either one can serve TeXStudio.

A final note: When I was asking around on Twitter about grammar and style checkers for LaTeX, Professor Robert Carroll at Florida State DM’ed me to make the case for writing in a separate environment entirely and then worrying about footnotes, citations, etc., later on, in a different program. Rob’s preferred environment for this is Grammarly, which I like quite a bit.

I think Rob is right, in principle. When I’m not doing academic writing, I prefer to use WriteRoom, which gives me a big, distraction-free empty screen to fill with nothing but words. It’s a terrific way to write, and I wish I could do it more often. (And I wish there were a LanguageTool plugin.) Blurt also looks terrific, though I haven’t put it through its paces and it requires a subscription. (Word to the wise: Blurt creator Corey Gwin offers a $0.99/month academic rate for students.) But when I’m doing academic writing, I think in text that includes citations, and my digressions come in the form of footnotes. That’s just how my brain works. If you haven’t yet been warped by academic writing, though, I’d highly recommend giving Rob’s system a shot.

So what are your favorite academic LaTeX modifications?

Dear OSU students
https://www.braumoeller.info/2016/11/14/dear-osu-students/
Mon, 14 Nov 2016

So, as many of you know, sometime during the past week this happened:


These posters were found on the bulletin boards in Hagerty Hall, which houses the World Media and Culture Center, the Center for Languages, Literatures and Culture, the Diversity and Identity Studies Collective, and other centers that focus on multicultural issues. It’s also where I top off on coffee a few times a week.

I have to say that, as a white person, I was not inspired by these posters to be more proud of my whiteness. I’ve never been especially proud to be white, really. I’m proud to be a German-American, and if you are too I’d urge you to check out the American Center for German Culture or, if you have some musical talent, Columbus Männerchor. They’re very worthwhile organizations, and—unlike some white pride organizations in central Ohio—they’re not listed on the Southern Poverty Law Center’s hate map.

But the question remains: what should be done about these signs? They’re deeply offensive to a lot of people on campus, and I have no doubt that those people, and the Administration, will make their thoughts known on the subject soon. I find them deeply offensive as well, and profoundly disappointing: those among you whom I’ve had the pleasure to meet have been very welcoming of diversity, and in such a seemingly effortless way that it inspires a twinge of envy and pride in someone of my own generation. Ugliness of this sort is not what the OSU students I know stand for, and it’s not what the University stands for.

Some people will respond that this is free speech. Public universities are public, and the First Amendment applies in full force. Especially in a University setting, it’s important for people to be able to voice their ideas, however unpopular they might be.

While that argument holds considerable merit in the abstract, it overlooks a simple fact in this particular case: the empirical claims that are marshaled in favor of white supremacy are so deeply moronic that they are unworthy of being given serious consideration in an institution of higher learning. I’d go into detail, but honestly I don’t want to dignify any of the above by giving it the appearance of being one side of a reasoned debate. It simply isn’t. And in case any white supremacists out there doubt me, well, my skull is almost certainly bigger than yours, so by your own logic I must be smarter.

As I found myself wondering how such ideas could persist for many decades despite being completely devoid of any meaningful empirical support, it occurred to me that there was something that I could do, without even raising the issue of free speech: I could put these ideas in their proper context by promoting other ideas that have received the same degree of support from the scientific community.


So without further ado, I give you three original advertisements for three of science’s biggest losers: Ptolemaic astronomy, Lamarckian evolution, and cold fusion. Each one has text at the bottom pointing out that the idea in question has been totally discredited by science—as has white supremacy. They encourage interested readers to use the hashtags #EmbraceKnowledge and #RejectHate to post about this subject.

The images above link to (large) PDFs that you can download and either print out (they use a lot of ink!) or send off to a print shop. I sent mine to the FedEx shop just off of High Street, which charged me the flyer rate of 69¢ per copy and produced really lovely copies on nice, thick paper. Feel free to post them next to any white supremacist signs that you see on campus, or put them up in your dorm room or on your door. If you do, or if you see one, please snap a photo of it and add the hashtags.

Embrace knowledge. Reject hate.

Actually, Academic Research is More Relevant Now Than It Has Ever Been
https://www.braumoeller.info/2015/11/30/actually-academic-research/
Mon, 30 Nov 2015

A Washington Post opinion article by Steven Pearlstein entitled “Four tough things universities should do to rein in costs” got a lot of buzz on social media yesterday. The attention paid to it isn’t surprising: it’s election season, after all, and skyrocketing student debt numbers have focused candidates’ attention on the cost (and value) of higher education. While that’s not a bad thing in and of itself, many of the arguments leveled by college critics don’t stand up to close scrutiny, as Dan Drezner usefully points out in his Post column today.

One point that Dan makes deserves further elaboration. In the original article, Pearlstein writes the following:

“The vast majority of the so-called research turned out in the modern university is essentially worthless,” wrote Page Smith, a longtime professor of history at the University of California and an award-winning historian. “It does not result in any measurable benefit to anything or anybody. . . . It is busywork on a vast, almost incomprehensible scale.”

The number of journal articles published has climbed from 13,000 50 years ago to 72,000 today, even as overall readership has declined. In his new book “Higher Education in America,” former Harvard president Derek Bok notes that 98 percent of articles published in the arts and humanities are never cited by another researcher. In social sciences, it is 75 percent. Even in the hard sciences, where 25 percent of articles are never cited, the average number of citations is between one and two.

That is true, in the sense that those numbers do appear in Bok’s book (though it is worth noting that Bok’s overall assessment of higher education differs dramatically from Pearlstein’s). As Dan points out, however, the numbers are “badly outdated, relying on a study that first appeared in 1990 and compares apples and oranges.”

I’d go a bit farther: not only are the numbers outdated, but the general claim—that an increased volume of academic research has produced a corresponding decline in relevance—is exactly backward.

In a recent issue of the Journal of the American Society for Information Science and Technology, Vincent Larivière, Yves Gingras, and Éric Archambault take on the impressive task of tallying citations per article from 1900 to the present, using data from Thomson Reuters’ comprehensive Web of Science. (An ungated copy of their article is here.) The authors examine the number of citations that an article has received two and five years after publication. They find that, despite the remarkable increase in the number of journals published, the percentage of papers receiving at least one citation has been climbing steadily for decades. Their Figure 1 demonstrates that this trend holds across all fields except the humanities (in which, as they note, scholars are far more prone to cite books than articles).


These trends shouldn’t come as much of a surprise to, well, anyone who reads academic journals. Every new article cites dozens of others but adds only a single citable item, so the increase in the number of academic journals over time has produced citations far faster than it has produced citable articles. As Larivière, Gingras, and Archambault also point out, those citations increasingly go to more specialized articles in more specialized journals—a welcome indicator of the growth of knowledge.

It’s still the case, of course, that a significant number of articles go uncited. But anyone who knows the Web of Science knows that it tracks citations in some really obscure journals: any academic can point to a significant number in his or her own field that he or she has never even heard of. Such journals contribute disproportionately to the number of uncited articles.

Even taking those articles into account, though, the numbers tell a pretty unambiguous story: if we use the percentage of uncited articles as an indicator of the irrelevance of academic research, as Pearlstein himself does, academic research is more relevant now than it has ever been before.

Politics and the LaCour Scandal
https://www.braumoeller.info/2015/06/07/politics-and-the-lacour-scandal/
Sun, 07 Jun 2015

One of the first things that I tell students in my Data Literacy and Data Visualization course is that, when they walk in the door, they should leave their ideological predilections behind. I don’t care whether they’re Sanders socialists or Rockefeller Republicans—the point of the class isn’t to learn how to support Team Red or Team Blue. Our goal, to borrow the beautifully succinct subtitle from Gapminder.org, is to achieve “a fact-based worldview.”

I’m far from alone in this pursuit. While my colleagues study politics for a living and often take clear positions on specific issues, the overwhelming majority are pretty circumspect about expressing any sort of party affiliation. To some extent, I think that’s because our worldviews don’t map very well to party platforms. For the most part, though, we realize that impartiality is essential to our ability to function as researchers and educators. That’s why, in my “How to Lie with Data Visualization” lecture, I point out the disingenuousness of both Washington Monthly’s change-in-change-in-unemployment graph and the Heritage Foundation’s “26 months of gas prices” graph. It’s important for young citizens to recognize that the truth has no political affiliation.

That’s why I was incensed after reading the Wall Street Journal‘s editorial, “Scientific Fraud and Politics.” The Journal pounces on the LaCour scandal, arguing that the findings got a free pass into Science magazine in part because they “flattered the ideological sensibilities of liberals.” The editorial then generalizes from this one instance in such a breathtaking manner that “sweeping” doesn’t quite seem to do it justice:

Similar bias contaminates inquiries across the social sciences, which often seem to exist so liberals can claim that “studies show” some political assertion to be empirical. Thus they can recast stubborn political debates about philosophy and values as disputes over facts that can be resolved by science.

It’s easy to dismiss the Journal’s editorial page as being rather extreme (or, more to the point, just terrible to the point of irresponsibility). To do so misses the real importance of the issue. This argument will almost certainly come up again and again in the run-up to the 2016 elections. It will be a talking point for any politician whose positions are inconveniently at odds with scientific findings. To the extent that it resonates with voters, it will further degrade the role of scientific knowledge in guiding public policy. Worse, and perversely, the discrediting of science may give more weight to policy arguments that specifically run contrary to scientific findings.

Fortunately, there are two major holes in the Journal‘s reasoning. The first is that, as Gary King argued, this is how science actually works. The fact that something like the LaCour-Green study can be discredited is crucial: as Karl Popper famously argued, a science is only a science if its claims can be disproved. The fact that studies can be challenged and their findings overturned should increase our confidence in the findings that survive the process.

Second, the majority of studies prior to LaCour and Green (2014) pointed to a very different conclusion regarding the ability of canvassers to change people’s minds. As Green himself put it,

Conventional wisdom was that a canvasser might prompt you to rethink your stance on a controversial issue for a few days at most, but that once you went back into your social milieu, your opinion would snap back into accordance with your preexisting views.

If, as the Wall Street Journal editorial suggests, ideological biases are endemic in academic studies, why is it that such a large body of academic literature prior to LaCour-Green pointed to conclusions that were not flattering to those biases?

Reading professional journal articles on the iPad
https://www.braumoeller.info/2015/01/06/reading-journal-articles-on-the-ipad/
Tue, 06 Jan 2015

I’ve been able to read the New York Times on my iPad for years, so I suspected that it was only a matter of time (and more time… and still more time…) before I’d be able to read and process professional journal articles on it as well. After fiddling around with a dozen or so different apps, each of which has its own issues, I’ve finally come up with an efficient workflow that uses the iPad to its best advantage. I describe it below in the hope that I can save you the time and effort of working it out yourself.

First, you should get a free Zotero account. Zotero is a cloud-based bibliography manager. It’s free (up to a certain amount of storage, after which there’s a monthly charge), the application is smart, and it exports bibliographies into other formats, like BibTeX. There is also an app for your desktop or laptop. Your online Zotero library will be where all of your citations and marked-up PDFs end up.


The next step is to install PaperShip, an iPad interface to your Zotero account. It’s efficient, it syncs automatically, and, most important, it can usually extract the citation information from a PDF that you import to it. (But see the update note below!)

 

Next, purchase BrowZine. BrowZine is the breakthrough app that makes all of this possible. It lets you set up a library of journals that you read regularly. When you open the app, it tunnels through all of your library’s authentication windows to figure out which of those journals have new articles. You can read them immediately or save them for later. This is fantastic! And the ThirdIron support team is incredibly helpful if you run into problems.

The last app to purchase and install, if you haven’t already, is iAnnotate. This is my favorite app for marking up PDFs. As my students well know, it can capture voice annotations as well as jotted marginal notes. It’s indispensable for taking notes and highlighting key passages in papers, dissertation chapters, journal articles, what have you.

 

Once these four things are in place, the workflow looks something like this:

  1. Start BrowZine and set it up with your library sign-in information. Then add journals to your virtual “bookshelf.” A count of new articles will appear over each journal.
  2. Save interesting articles for later in BrowZine, or send them from BrowZine to iAnnotate.
  3. Mark key passages, record reactions, etc. in iAnnotate.
  4. Send file from iAnnotate to Papership,* which should capture bibliographic information and save both the citation and the PDF to your Zotero library.
  5. Return to BrowZine and select another article.

That’s it. Your annotated PDFs will automatically be saved in your Zotero library, where you can refer to them months or years later. And it’s all as quick as the blink of an eye.

Addendum: I forgot to note that Papership stores your files in an “inbox” rather than in your Zotero library. To move an item from the inbox to a folder in your library, use your finger or stylus to drag the item to the left until an option appears to copy or move the item. Then simply select the folder to which you’d like to move it.

UPDATE: I’ve discovered what looks like a nasty bug in Papership: bibliographic entries that are imported to its Inbox don’t appear in your Zotero library, and when you delete these entries there is no impact on your Zotero library. But papers imported to your Zotero library show up, for some reason, in your Papership Inbox, and when you delete these entries to get them out of your inbox, they are also deleted from your Zotero library. I’ve just permanently deleted a few dozen entries from my library by trying to get all of the clutter out of my Papership inbox. For the moment, until Shazino can get Papership to stop doing this, I’ll be using a different method for getting annotated PDFs into my Zotero library.

Specifically, for the moment I’m exporting from BrowZine directly to my Zotero library, switching to an app called ZotPad, opening the PDF in ZotPad, exporting to iAnnotate, marking up, re-exporting to ZotPad, and saving. It’s a clunkier workflow, but it avoids Papership’s Inbox issues.

*Note: The command for sending a file from iAnnotate to another program is not immediately obvious. Here’s how to do it: hold your finger or stylus down on the name tab at the top of the document until a list of commands pops up. Select “Share.” Another list of commands will then pop up. Select “Open in…” and choose the target application.

In Defense of Research Notes
https://www.braumoeller.info/2014/05/16/in-defense-of-research-notes/
Fri, 16 May 2014

Research notes have all but died in political science journals. I think that’s a bad thing.

Back in 2007, I noticed a subtle but very significant problem with a well-established social science methodology. This methodology is not really central to my research agenda, though, and I had other things to work on. So I set it aside. Last year, a simple fix occurred to me, so I tried a few simulations and it worked quite well. I realized that it wouldn’t really take much time to point out the problem or to provide a useful remedy, so I wrote up a brief (<10pp.) description of the problem and the solution and sent it off to an appropriate journal. It was promptly desk-rejected. The editor wrote, in part, that “very short ‘research notes’ tend to not fare very well with our reviewers.” He suggested additional simulations and “comprehensive application to applied problems.”

I respect the editor very much, and to be fair I had half expected an outcome like this. But it still stinks. The method is widely used: the work that introduces it has nearly 2,000 citations. The problem, once you see it, is obvious. The solution is a quick test straight out of a first-semester statistics class. This just isn’t rocket science. The long and the short of it is, this thing just doesn’t merit extended treatment.

In most disciplines, that’s a recipe for a research note—a short memo to other researchers that says, “Hey, here’s something that could be useful.” Political science journals publish very, very few of these, and I can’t fathom why we don’t. It’s certainly not because every project we conceive of merits 30 pages.

In fact, research notes have a lot of advantages. They leave more space in a journal for our colleagues’ work. They require less time and effort from reviewers. They reward brevity. They allow authors to focus on the point of the article. As a reader, I seek them out. As an author, I’d gladly publish more of them.

I understand that our disciplinary culture tends not to reward research notes, but I’d urge editors and editorial boards to work toward changing that situation. Specifically allowing for length-limited research notes in journal submission guidelines would disallow rejection based on length alone. It would save space and allow more of us to be published in better journals. It would also encourage researchers to publish short pieces that they might not otherwise submit and which could be of significant use.

To underscore the importance of the latter point, consider this: Did you find yourself wondering whether the problem I describe above affects your work? It very well might. If this little research note never gets published, though, you’ll never know for sure.

Thoughts on Academics and the Public Sphere
https://www.braumoeller.info/2014/03/03/thoughts-on-academics-and-the-public-sphere/
Mon, 03 Mar 2014

Following Nicholas Kristof’s provocative call to social scientists to be more engaged in public debates (“Professors, We Need You!”), Ezra Klein has weighed in with a thoughtful riposte (“The Real Reason Nobody Reads Academics”). To a much greater degree, I think, Klein hits the nail on the head: even interested journalists have a hard time keeping track of academic insights because distribution of those insights via journals is costly and highly inefficient.

The problem I see with this point, though, is that journals aren’t meant to serve as vehicles for specialists to communicate with nonspecialists. For the most part, they serve as a collective repository for basic research in the social sciences. And that’s exactly as it should be. As a recent article in the Boston Globe points out,

There is a huge zeitgeist for research that translates existing knowledge into cures, treatments, and technologies. That’s in part because it’s easy to explain the relevance to the public — it might cure Alzheimer’s or cancer or lead to a technology that transforms society and creates jobs. Who could argue against those lofty goals?

But the idea that marshalling existing knowledge into products will solve the biggest problems facing society is naive. … The point of science is to make discoveries, and if it were already known which areas would yield insights that would be useful, scientific inquiry wouldn’t be necessary.

Don’t get me wrong: We could (and should!!) create something like ArXiv, the archive and distribution website for a handful of mostly hard-science disciplines. Indeed, the Society for Political Methodology does a great job at this for articles on methodology, and I’d love to see that model applied to political science more generally. But it wouldn’t help journalists all that much, because most of the research would be theoretical—the raw material from which applied insights can later be mined.

Moreover, as this point suggests, relevant academic insights are rarely published contemporaneously. Most of the theoretical material that academic commentators draw on was published years ago. One of the more relevant insights for America’s response to the situation in Ukraine, for example (to my mind, anyway), comes from Jim Fearon’s article, “Selection Effects and Deterrence” (International Interactions, 2002). Jim points out that deterrent threats issued during crises will tend to fail because the initiator will already have taken them into account. Worse, the same logic leads to the conclusion that the most credible threats are exactly the ones that are most likely to fail—not because of their credibility or any lack of resolve on the part of the issuer (got that, Lindsey Graham?) but because of the situations in which they’re issued. This isn’t a contemporary article by any means, but the insight is important for American policymakers and commentators alike. It’s difficult to imagine someone who isn’t an academic knowing that it exists. For that reason, it’s difficult to imagine a means for journalists to apply academic insights that doesn’t involve academics.

At present, two models for doing what Klein wants have evolved that I think make a lot of sense. The first, as he notes, is the academic group blog—The Monkey Cage being one of the best examples (but don’t miss Duck of Minerva, Political Violence @ a Glance, and others). These give academics the freedom to contribute only when we really have something to say.

The second model is the University Office of Communications, which can do a remarkable job of translating and disseminating research findings when they are relevant to current affairs. From what I can tell, surprisingly few social scientists seem to make use of this office. If ours is representative, they will read your paper, write up a press release in plain English, and post it wherever such things get posted so that journalists will find them. Their ability to convey the importance of our findings and to make contact with journalists is a tremendous asset, one of which we should avail ourselves when the opportunity arises.

So while Klein is right about the problems that academic journals represent for engaged journalists, I’m not sure that solving those problems would benefit journalists as much as it would academics. We need to accept at least some of the responsibility for our own obscurity and take steps to rectify it when we can, and those steps need to be recognized as valuable by our institutions. At the same time, the public (and Congress) needs to recognize the value of basic research that doesn’t have clear and immediate applications.

On Left-Handed Latino Republicans and Interaction Terms
https://www.braumoeller.info/2013/10/26/left-handed-latino-republicans-and-interaction-terms/
Sat, 26 Oct 2013 20:45:25 +0000

About a decade ago, I wrote an article on interaction terms. In it, I tried to clear up some common misperceptions about how interaction terms can and should be used in regression (logit, probit, etc.) equations. A fair number of people seem to have heeded most of the advice in the article, but in retrospect I realize that I should have put a much greater emphasis on one very important topic: incomplete or overlapping sets of variables in interaction terms.

Political scientists (and, as far as I can tell, uniquely political scientists) have created and sustained the illusion that we can pick and choose which variables to interact in our statistical models without concern for omitted variable bias. Do you want to understand the impact of being a left-handed Latino Republican on how much the respondent likes the President? Multiply left-handed by Latino by Republican and include that in your regression. Don’t worry about including a variable for left-handed Republican, or left-handed Latino, or even left-handed, because they’re not part of your theory. By the same logic (the thinking goes), if you have a theory about left-handed Latinos and left-handed Republicans, go ahead and put those two interactions in there without including left-handed Latino Republicans, because they’re not a part of your theory.

Boo, political scientists. Booooooooo.

The short version is this: When creating interaction terms, always include every possible lower- and higher-order term in your model. If you’re interacting x1, x2, and x3, include x1x2, x1x3, x2x3, x1, x2, and x3 in the equation. If you’re interacting x1 and x2 and, in the same equation, interacting x1 and x3, do the same thing—include everything all the way up to x1x2x3.

Why?

Think of a coefficient as reflecting some difference between the category you’re interested in and an excluded category. In the example above, the coefficient on left-handed x Latino x Republican reflects the difference between left-handed Latino Republicans and other people.

The key point is… who are these other people?

Obviously, the excluded category includes right-handed Black Democrats. Not so obviously, it includes left-handed people who either aren’t Latinos or aren’t Republicans or both. It includes Republicans who either aren’t left-handed or aren’t Latinos or both. It includes Latinos who either aren’t left-handed or aren’t Republicans or both.

In short, your excluded category is a mess.

“Fine, fine,” you say, “But obviously, the coefficient on left-handed Latino Republican still estimates the effect of being a left-handed Latino Republican, right?”

No, it doesn’t. Because you haven’t controlled for just being a Republican. Or being a left-handed Republican. And so on. And the absence of those controls can produce misleading inferences.

Let’s imagine that Republicans really dislike the President but that Latino identity, handedness, and every combination of the three are irrelevant. You could very well get a significant coefficient—a false positive—on your interaction term just because everyone it flags is a Republican, while most of the people in your excluded category are not.

Alternatively, let’s imagine that left-handed Latino Republican-ness really does change people’s attitudes about the President, but that on their own, Republican-ness pulls in one direction, left-handedness pulls in another, and Latino-ness doesn’t matter. You could very well end up with a null result—a false negative—just because your excluded category is so heterogeneous.

So, no. Your excluded category is such a friggin’ nightmare that your coefficient and significance level tell you precisely nothing about the quantity that you’re trying to estimate.

Let that sink in for a moment. Precisely nothing.
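The false-positive scenario above is easy to see in simulated data. Here is a minimal sketch in plain NumPy (variable names are illustrative, matching the running example): only Republican identity affects the outcome, yet a regression containing nothing but the three-way interaction term produces a large, spurious coefficient. The fully specified model, with all main effects and two-way interactions included, correctly estimates the three-way effect as roughly zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Three binary traits, drawn independently.
left = rng.integers(0, 2, n)    # left-handed
latino = rng.integers(0, 2, n)  # Latino
rep = rng.integers(0, 2, n)     # Republican

# True data-generating process: ONLY Republican identity matters.
y = -2.0 * rep + rng.normal(0.0, 1.0, n)

def ols(columns, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Misspecified model: the three-way interaction alone.
b_bad = ols([left * latino * rep], y)

# Fully specified model: every lower-order term included as well.
b_full = ols([left, latino, rep,
              left * latino, left * rep, latino * rep,
              left * latino * rep], y)

print(b_bad[1])    # strongly negative: a false positive, soaked up from Republican-ness
print(b_full[-1])  # near zero: the true three-way effect
```

In the misspecified model, the excluded category mixes Republicans and non-Republicans, so the lone interaction term absorbs much of the Republican effect; in the fully specified model, that effect is attributed to the Republican main effect, where it belongs.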

So next time you’re reading a journal or watching a presentation at a conference, keep an eye peeled for omitted terms in interactions. You’ll be surprised at how many of the results out there really don’t tell you anything at all.

Networking at Conferences—My Personal Take
https://www.braumoeller.info/2013/08/19/networking-at-conferences-my-personal-take/
Mon, 19 Aug 2013 05:30:24 +0000

After tossing out a brief opinion on the value of networking at conferences like APSA (nutshell: say smart things, don’t worry about networking per se), I was surprised to find that I’d become part of a controversy on the subject. Will Moore both indulges his considerable intellectual curiosity—

I invite Professors Saideman, Drezner, Voeten, and Braumoeller to make arguments about why a belief that (1) the distribution over these things is uniform, or (2) that the value is unlikely correlated across supply and demand is superior to my belief that they are varied.  I have often found, upon reflection, that my beliefs are poor, and will do my best to update, provided compelling reasons to so.

—and vents his spleen—

Here I will snark: can you mainsplain that to me, because I surely won’t be able to figure that out on my own.  Yeah, you fuktup.

—in an extended post on his blog.

I’m happy to respond. I probably shouldn’t do so just yet, since Dan Drezner already showed some interest in the exchange:

[Screenshot: Dan Drezner’s reaction to the exchange]

Since Dan blogs professionally, and he usually has insightful responses to questions like this, and it looks like he’s pretty motivated, my best option would probably be to get back to working on my syllabi, wait until Dan posts something, and chime in with, “Yeah, what Dan said!” But the time and effort that Will put into elucidating his position deserve a reply in kind.

Will’s main point is that, along with others, my post is “guilty, at a very high level of generality, of … (implicitly) believing that advice drawn from your experience is universally valuable.” That belief, he argues, is not unrelated to the fact that I’m a tenured white male with a degree from a top department.

Without getting into the impact of tenure, race, sex, or locus of degree on self-centeredness—I don’t study them, and I don’t presume to be an expert—I’ll point out a few things that suggest a different interpretation of my remarks.

  1. They start with the words “My two cents.” That’s one way in which authors generally indicate that their perspective may not be generalizable or universally valid.
  2. They were a private post on my Facebook page rather than a public post on my blog. I do have a blog. I don’t use it much, nor do I send in many submissions to blogs like the Monkey Cage, though I have sent in one or two. That’s because I think our discipline is most credible when we keep our powder dry and only opine publicly when we’ve put in enough hard work to make our opinions worth reading.
  3. Even when I do blog, I invariably use the words “I think…” or “My sense is…” to indicate that I’m offering my own perspective, not presuming to offer a universally valid one.

I address this point largely because it’s the one that seemed to upset Will the most. Will’s and my relationship falls into a category that probably seems odd to most people but very familiar to academics: we get along pleasantly enough, I’d enjoy seeing him arrive at my table at the conference-hotel bar, but neither of us has been to the other’s house or anything of the sort. I don’t think English has a very good word for that sort of relationship, really, but the upshot is, I like the guy enough not to want to upset him unnecessarily.

I don’t think the point is really relevant to the discussion of networking, though, because on that point, I think we’re simply talking past each other. As I pointed out in a comment to that same Facebook post (a response to Will, in fact), I was discussing the value of networking at APSA, not the value of having a professional network.

[Screenshot: the Facebook comment in question]

I’ll elaborate. In my experience, when graduate students talk about “networking” at big conferences like APSA, they’re talking about meeting fairly senior and well-known people for the sake of meeting them. My own sense is that there isn’t much value to that practice. Despite being skeptical, I did try it myself once as a graduate student. The response reminded me of stories I’d read about Lyndon B. Johnson. I never tried it again. As a professor, I’ve spent some very pleasant social hours at conferences with various graduate students, but those evenings don’t really have any weight when it comes to hiring decisions and the like.

That’s not to say that people can’t form useful connections at conferences. But any time I’ve noticed someone, or felt as though I’ve been noticed, it hasn’t been through conscious effort. It’s been the result of a really smart comment, discussion, or presentation. And it’s not to say that networking via other means isn’t valuable: I think conferences like Journeys in World Politics do a terrific job of helping people make connections to more senior scholars who can advise them on their work and their careers.

(As a quick aside, Steve Saideman used “networking” in his response to Will in a way that I hadn’t considered—peer-to-peer networking, or networking with other junior scholars. I actually think there’s immense value in that practice. I just hadn’t considered it.)

So with all this in mind, when I re-read the passage in which Will writes, “my claim is that neither the value of the network itself, nor the potential return from a given unit of networking activity, are uniform,” I’m not sure we actually disagree at all. I don’t think networks are equally valuable across scholars. I don’t think the potential return from network-building is the same for different people. I do think the best return, at conferences like APSA, comes from saying smart things… but as I wrote earlier, that’s just my two cents.

The Prolific Comma
https://www.braumoeller.info/2013/07/30/the-prolific-comma/
Tue, 30 Jul 2013 02:14:05 +0000

Over the course of the past couple of years, I’ve read quite a few application essays. Most are very thoughtful. Some are inspiring. All are written by smart people. Yet many, if not most, suffer from a single pathology: their authors sprinkle commas through their sentences with unbridled enthusiasm.

To be honest, I think the problem is that our students read too much of our bad writing. Before long, they start to emulate the epic, serpentine sentences that span line after line of our turgid journal articles. They never learned the rules for punctuating such sentences because, beyond a certain point, those sentences shouldn’t be punctuated—they should be taken out behind a barn and shot. We can’t blame them for scattering commas like talismans to ward off incomprehensibility.

That’s not to say that we can’t do something about it. As an antidote, I highly recommend Lynne Truss’ Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation. For those unwilling to read even that slim volume, there’s a useful cheat sheet over at WikiHow. And for graduate students and professors alike, Michael Billig’s Learn to Write Badly: How to Succeed in the Social Sciences, while not yet out, sounds awfully promising.

Good luck to all of you. And on behalf of all of us, we’re sorry.
