Wednesday, July 25, 2012

Some Thoughts on the American Political Science Association

On June 24, 2012, the New York Times Sunday Review ran my essay "Political Scientists Are Lousy Forecasters," explaining the problems with the National Science Foundation's current political science grants. Not surprisingly, it drew strong responses. I intended to reply to these earlier, but some unanticipated time-sensitive work intervened.

The question of how best to study politics is an immense one that I will take up in more detail elsewhere. For now I want to review a few specific responses to the essay and raise some big-picture questions, the first being about the word "science" in the name of our professional association. I also want to address, in abbreviated fashion, some of the criticisms of what I wrote. I'll do this in a few posts, of which this is the first.

Here are a couple of the criticisms: 1) political scientists do not claim to predict anything; and 2) Karl Popper's work has been relegated to intellectual history. Quick responses: first, many political scientists, along with New York Times blogger and statistical enthusiast Nate Silver, disagree with both of these positions. I mention Silver here and will go through the work of others later because the guys writing for the political science blogs offered no conclusive evidence for their claims that political scientists do not or should not attempt to make predictions, nor that Karl Popper is wrong. Instead they make these ad populum assertions in the bombastic fashion that is typical of their bluster and a major reason for, and symptom of, our discipline's intellectual impoverishment.

Substantively, here are five quick thoughts I'd like to share.

First, the assertion in my New York Times piece was that political scientists who were modeling nuclear war and deterrence scenarios, as well as experienced Sovietologists and even Kremlinologists, just blew it. And they knew it at the time; I have no idea why folks are trying to muddy the waters on this point now. I was a grad student at Berkeley in 1991 and had taken classes with Ken Jowitt, who repeatedly claimed that the Communist Party in the USSR was like the Catholic Church and would never willingly give up power. His was not just a majority position but encompassed the views of every expert in the field. Even those who did not state this position as a specific prediction were implicitly predicting it in every sentence they wrote. Likewise, Middle East experts did not see the Arab Spring coming. Period. To the extent they discussed the upending of regimes in places such as Tunisia, Libya, Egypt, and Syria, it was to explain why this was not happening and was not going to happen.

Second, the most interesting response to my analysis was the claim that some, perhaps many, political scientists are not even trying to orient us to the future but are merely using statistical analysis to retrospectively understand causal mechanisms associated with changes in the cases in their databases.

Assuming that some political scientists really are only quantitative historians (and perhaps should be applying for history grants, anyway), these folks, too, are making predictions. I was very surprised to read this criticism because it came from the folks who are supposedly in the more subtle and sophisticated wing of the quantitative field. Regardless, they know, or should know, that their claims about "expected values," whether offered as a universal snapshot to be generalized or just retrospectively, are by definition predictions about the likelihood of a particular case having characteristics consistent with the categorical characteristics imputed to the mean: "expected value: The mean value of the theoretical sampling distribution of any statistic. Statistical reasoning is centrally concerned with the expected value, as opposed to any particular observed value. In statistical procedures that seek to predict the values of the dependent variable using one or more independent variables, the predictions are typically estimated expected values, conditional on the independent variables included in the analysis." Jason Seawright and David Collier, Rethinking Social Inquiry: Diverse Tools, Shared Standards, p. 326.
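To see why an expected value is a prediction in the ordinary sense, consider the textbook bivariate regression case (a generic illustration of the definition quoted above, not drawn from any particular study discussed here):

```latex
% The fitted value for case i is the estimated conditional expectation:
\hat{y}_i \;=\; \widehat{\mathbb{E}}\big[\,Y \mid X = x_i\,\big] \;=\; \hat{\beta}_0 + \hat{\beta}_1 x_i
% Asserting that E[Y | X = x_i] takes this value is a claim about the
% outcome case i should exhibit -- that is, a prediction that can be
% checked against the observed value y_i.
```

Whether the database is read as a snapshot or purely retrospectively, each fitted value is a claim about what a given case should look like, and it can be compared against what the case actually looks like.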

This is important because many cases in these sorts of studies have outcomes diametrically opposed to those predicted by the so-called scientists' expected values; if you think Popper is right, and many of us do, including those who do work emphasizing necessary but not sufficient conditions, then these cases have falsified the predictions of the expected value.

Third, some claimed that when political scientists get it wrong and are corrected later, as in the example of the civil war studies I referenced, then a) I am a hypocrite for quoting quantitative work; and b) the system works. Stephen Saideman, in his post referring to me as a "Self-hating Political Scientist," made this point. (Professor Saideman: Really? You want to associate yourself with Zionist ideologues who attempt to avoid accountability for racist policies through name-calling tactics, and to associate me with leftist Jewish critics of racist policies? I'm perfectly fine with this but thought it might merit further reflection.)

I have a few thoughts on the boosterism for quantitative work's putatively cumulative approach to knowledge. The first is that my juxtaposition of positions on the relevance of ethnic grievances to civil wars -- comparing the analyses of James Fearon and David Laitin with that of Lars-Erik Cederman, Nils Weidmann, and Kristian Skrede Gleditsch -- was meant to emphasize that the work of the former had been falsified and to show how quantitative analysts are re-representing journalistic observations through equations rather than producing new knowledge.

If you read the latter article's commendable qualifications, and especially its last line, this becomes especially obvious: "It is very unlikely that such conflicts can ever be understood, let alone durably solved, without taking seriously the claims of marginalized populations." "Horizontal Inequalities and Ethnonationalist Civil War: A Global Comparison," American Political Science Review, p. 492. These guys -- who, for what it's worth, did not receive any attributed funding from the U.S. NSF, Saideman's claim to the contrary notwithstanding, but did receive funding from the Air Force Office of Scientific Research -- are effectively saying: through databases we spent tens of thousands of dollars putting together, we are going to tell you the same thing we hear from the people enduring violence in civil conflicts.

I understand that the authors and Saideman likely will claim that political scientists with lots of gigabytes still need to sweep in and use their magical variables and formulas to render the "claims of marginalized populations" as knowledge, but then we're just back in the old debate about what counts as knowledge.  In other words, the article Saideman claims is evidence of the effectiveness of quantitative work also is evidence that this work at its best is merely an echo chamber for the claims of marginalized populations that are noted by journalists in newspapers such as the New York Times.

As for the assertion that science-in-progress means lots of mistakes and corrections: come on, guys! It sounds to me as though the great thing about this "probabilistic nonsense," as Popper called it, is that it helps you put together a protection racket for the guys who crunch the numbers. As long as you keep doing it, it doesn't matter whether you are right or wrong. In other words, you can have a long and lucrative, government-funded career if you just keep reviewing each other's work, as Saideman himself says occurred in the case above, and you never have to worry about getting it wrong. Also, as Nate Silver points out, echoing Popper, if your field is prone to mistakes and the work is probabilistic, then it's impossible to know when someone really has made a correction and when the supposed correction is just noise. Fearon and Laitin, for instance, were supposedly correcting previous quantitative analyses of civil wars. (Professor James Fearon did write a thoughtful response, and I will engage it in more detail in another post.)

As I mentioned at the beginning, I think this discussion is part of a larger debate, and to that end I want to end this post by asking why we bother inserting the word "science" into the name of our professional association -- which, by the way, has excluded only my opinion piece from the web page it maintains to advocate the association's position.

(Charles Lane's column in the Washington Post, also critical of NSF funding, was included, as was an opinion piece by a psychologist on psychologists' inferiority complex vis-à-vis natural scientists. I asked APSA President Bingham Powell, Jr. about this; he refused to share with me the names of the individuals making decisions about the contents of the APSA web pages on NSF funding, and he gave me an explanation for the exclusion of my piece that would equally have required the exclusion of the opinion pieces by Lane and the psychologist Timothy Wilson.)
-----
UPDATE (7/31/2012): Since this was posted, Professor Powell and I have exchanged additional e-mails that have been mutually collegial, and for which I am very grateful. He has clarified that Michael Brintnall, the APSA Executive Director, supervises the staff making decisions about the APSA NSF funding web pages, and that the decision to present information for the purposes of advocacy on this issue reflected a longstanding commitment to that objective, though Professor Powell did not specify any particular APSA directive or decision initiating this commitment. Especially heartening was Professor Powell's assurance that he would raise at the next Council meeting the question of whether the APSA should provide guidance to the NSF on the nature of what the APSA considers its "pure research" priorities, and he was receptive to my question about the relevance and accuracy of the current name of our professional association.
-----
I have a few other thoughts on the positive and negative responses that I will take up in another post. But to conclude for now...

Why are we still calling our professional organization the American Political Science Association? 
Leaving aside, for now, the word “American,” why is “science” in the title, a word that does not appear in the name of the professional association of any other discipline engaged in empirical and theoretical research on our macro- or micro-level institutions, communities, practices, and ideas, to wit: the American Anthropological Association, the American Economic Association, the American Historical Association, the American Psychological Association, and the American Sociological Association?

Why not change our association's name to the American Politics Association, or the American Political Association, or even the American Politicological Association, which is the least preferable among the three but still more accurate than our current name?

(That “politicological” does not roll easily off one's tongue is not because the syllables are inherently more jargony or difficult to pronounce than, say, “sociological” or “anthropological,” but because of habits associated with our ordinary use today of “political science,” or “politics,” and because “politicological” is not a word. )

Instead of debating whether we members of the American Political Science Association and the authors of the articles appearing in the American Political Science Review are really scientists, thus forcing us to engage the question of what counts as a science and whether we support or oppose working under this rubric, why not eliminate this question altogether, and work under a disciplinary heading that embraces scholarship about politics?

Why not be those who, for the domain of politics, “craft knowledge,” a somewhat literal translation of “Wissenschaft,” rather than aspiring to be scientists of politics? Why not discuss a more interesting, fundamental, and difficult question, of which the debate about science is one tangent: what counts as knowledge about politics, and how do we know it?

(The German “wissen” is the perfectly colloquial verb “to know” and does not carry the white-lab-coat connotations of the English “science.” One can recover a broader understanding of knowledge from the etymology of the English “science” as well, but those more general connotations of knowledge are less idiomatic in English than they are for the German “wissen.”)

The word “science” in our professional association title is an empty abstraction and distraction. If my colleagues want to be scientists, let them defend their definition of science and their fitness for this vocation through their research, publications, and logical arguments, and let them not occupy the throne of scholarly authority by bullying and vague nomenclature.

Especially given that some, though not all, of my colleagues claim that the “scientific method” they advocate has nothing to do with the scientific method of the era of the APSA's founding, or with what science looked like in the 1930s, the significance and meaning of the word “science” in the name of our professional association today seem especially unclear, and hence indefensible and ripe for elimination.

Please note that I am not claiming here that “scientific” research is this or that, or good or bad; that discussion comes later. I am simply pointing out that, in light of the intellectual diversity of our actual membership and the metaphysical uncertainty about which methods are most conducive to knowledge, not to mention the contradictory defenses of the “science” in political science (some today think political scientists should be good at prediction, and are or are not; others think prediction is not required of political science), there are no consistent and widely accepted reasons to claim that we all are or should be working as scientists; and hence the word “science” has no good grounds for remaining in the name of our professional association.