So this is my first post back from a prolonged break. As I mentioned in a previous—albeit brief—entry, I’ve had a busy but enjoyable summer. I got married, defended my dissertation at the beginning of July, and my wife and I have since relocated to Tuscaloosa, Alabama, where I’ve accepted a position as a post-doc. We’re both pretty excited about the move and we’re really enjoying ourselves. Having grown up in the Adirondacks in upstate New York, summers that extend beyond a two-month window have a certain appeal. And now that we’re getting settled in down here, I’m slowly getting back into a routine for work. I’ve been sitting on some papers for a while now that I’m finally turning my attention to, and it’s my experience with one of these papers that motivated me to write this particular post.
I’ve been thinking recently about how we view the place that history holds in our discipline. Now, I suspect there is an initial reaction that many people (particularly quantitative political scientists) are likely to have to the word “history,” and I’m guessing it is one of derision. It’s not that political scientists necessarily dislike history or historians, but this reaction is conceivable given that these disciplines are marked by some important epistemological differences, and many quantitative political scientists are taught from the outset to avoid relying on single cases when drawing more generalizable lessons and conclusions (with good reason). When we think of historians, it is often in the context of someone who provides an excruciating level of detail about one particular event, and then tries to explain its origins and broader implications/lessons. My issue is not in fact with this particular point—as social scientists, we want to be able to assess just how generalizable the relationships that we’re looking at really are. We are typically interested in exploring systematic relationships, trends, and patterns that hold over broader periods of time and apply to a wider range of actors than do many historical studies. So to be clear from the outset, I am not advocating that quantitative political science should attempt to emulate the professional approach or methods that many historians employ.
My issue, really, stems more from the assertion by many quantitative political scientists that a study that is time-bound in some way is “history”. This does not apply to all political scientists, but it’s been my experience that reviewers will often look at the temporal range of a study and dismiss it if it is explicitly bound to a particular period of time that does not overlap with the present (allowing a few years of slack on just what constitutes the “present”, that is). Indeed, some seem to be under the impression that it’s not even political science if the gap between the time period covered by a study and the present is sufficiently large. For example, let’s say I’m interested in understanding the domestic determinants of pre-hegemonic US foreign policy behavior. In this particular instance, there are theoretical reasons to suppose that the behavior of the US could be different in this time period as compared to later periods. Let’s further suppose that I have theoretical expectations regarding the relationship between X and Y, and I write a paper on this relationship for the period between 1900 and 1945. I would not at all be surprised to receive reviews rejecting the paper at least in part because the reviewer(s) questioned the study’s broader relevance, given that it’s bound to a 46-year time period.
I think these sorts of reactions expose some problems and some important assumptions that we often make as political scientists. First, not every study necessarily needs to neatly map onto the present time period—or any other arbitrarily chosen time period. There is nothing inherently unscientific about the notion that certain relationships or phenomena can only be found in a particular temporal context. I’m fairly certain people have not abandoned the study of dinosaurs simply because we no longer see T-Rex roaming around the countryside. And our field is rife with examples of research whose temporal context is fundamentally limited in some way. Scholars of American politics provide perhaps the clearest example of an entire sub-field that cannot be held to apply to a period extending back beyond 1789. The sub-field of international relations is similarly dependent in many ways upon the existence of the modern nation-state, which we typically trace back to 1648, and some of the most widely used datasets only go back to 1815. Do we consider our endeavors in these areas “history” because our studies are bound to these time periods?
Also on this point, there seems to be a double standard when we consider the broader implications of our research. Taking my example from earlier, reviewers will commonly ask how the study of US foreign policy between 1900 and 1945 is relevant for today, but rarely do we consider how a study of US foreign policy from 1945–2013 informs our understanding of the 1900–1945 period. It strikes me that this (1) perpetually moves the goalpost, and (2) may be the wrong goalpost. This may seem like an odd point, but what is “relevant” by current standards is something that obviously changes on a day-to-day basis, and it’s a standard that says nothing about the quality of a paper as a piece of social science, or whether or not a paper helps us to understand a particular question about a given set of relationships. Really, this only reflects our own innate temporal biases; it says nothing about how scientific a piece of research is. If the goal is truly generalizable theoretical and empirical knowledge in a temporal sense, then this kind of consideration should apply just as much as thinking about how a study informs our understanding of the present. Conducting a study with the purpose of expanding our knowledge of systematic relationships between societal actors is not synonymous with expanding our knowledge of systematic relationships between societal actors for the purposes of informing our understanding of the present.
And it’s not as though we don’t attempt to deal with “unique” time periods and cases in our current research. However, it’s often the case that the manner in which we deal with these cases is fairly crude. For example, we might include a dummy variable in a model to control for a time period (or characteristics of a time period) that we believe to be unique in some way, such as bipolarity during the Cold War. Similarly, we might include a dummy variable in our model to control for states that we believe to be unique—depending on our topic, it might be a state like Israel, Egypt, the US, or maybe a group of states like the “Great Powers”. Sometimes we might also include an interaction term to account for how the effect of one variable might be conditional upon another.
But these approaches are not always appropriate methods for dealing with the questions that we want to answer. Dummying out a particular time period is only going to tell us whether or not a given time period or group has a higher or lower intercept than the alternative time period or group. This approach is also often atheoretical. For example, we might have a belief that a time period is somehow different, but cannot fully articulate why or how. Similarly, interaction terms with a variable capturing a particular time period, for example, are implicitly suggesting that a given relationship is time-bound in some way. However, these approaches don’t allow us to examine whether or not the remaining variables in our models also have different effects in the context of a particular time period. For example, maybe we’re interested in whether or not both regime type and economic interests have a different effect on conflict propensity during the Cold War as compared to after. This, then, would suggest that maybe splitting our sample into two time periods is the more appropriate means of addressing our question. In fact, the notion that we would dummy out a particular time period because we suspect that it’s “different” in some way, but don’t exactly know how, is exactly the reason why we would want to conduct a study that is temporally bound in the first place.
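The contrast between these three approaches can be made concrete with a quick simulation. This is a minimal sketch using entirely made-up data, not any real study: I simulate an outcome whose relationship to one predictor genuinely differs between an "early" and a "late" era, then estimate (1) a model with only a period dummy, (2) a model with an interaction term, and (3) separate models on split samples. The dummy-only model can shift the intercept but is forced to report a single blended slope; the interaction and split-sample approaches recover the era-specific effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data-generating process: the effect of x on y is 2.0 in
# "period 0" (say, the Cold War) and 0.5 in "period 1". All names and
# numbers here are illustrative assumptions, not estimates from real data.
period = rng.integers(0, 2, n)                  # 0 = earlier era, 1 = later era
x = rng.normal(size=n)
true_slope = np.where(period == 0, 2.0, 0.5)
y = 1.0 + true_slope * x + 0.3 * period + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Least-squares coefficients for a design matrix X (intercept included)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)

# (1) Dummy only: the period indicator shifts the intercept, but a single
# pooled slope is imposed on x across both eras.
b_dummy = ols(np.column_stack([ones, x, period]), y)

# (2) Interaction: the x * period term lets the slope of x differ by era.
b_inter = ols(np.column_stack([ones, x, period, x * period]), y)

# (3) Split samples: every coefficient, not just the one we interacted,
# is free to differ across eras.
early = period == 0
b_early = ols(np.column_stack([ones[early], x[early]]), y[early])
b_late = ols(np.column_stack([ones[~early], x[~early]]), y[~early])

print("pooled slope (dummy only):", round(b_dummy[1], 2))
print("early-era slope (interaction):", round(b_inter[1], 2))
print("late-era slope (interaction):", round(b_inter[1] + b_inter[3], 2))
print("split-sample slopes:", round(b_early[1], 2), round(b_late[1], 2))
```

With two eras and one interacted variable, approaches (2) and (3) coincide for that one slope; the split-sample approach only becomes distinct once there are other covariates whose effects might also vary by era, which is exactly the scenario described above.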
This points to another issue. I think these biases are somewhat rooted in, and reinforced by, our relatively limited access to “good” data. Almost every journal article contains passages wherein the authors attempt to assert the broader relevance of their work. In the case of international relations, it is also quite common for these articles to then proceed to test their arguments using data that is only available for the post-World War II period. Sometimes, these tests will use data for a single country—often the US. Yet the arguments the authors make are often asserted as applicable to a broader set of countries than just the US, and rarely do such papers even address their own temporal limitations. We implicitly accept the generalizability of papers in which the tests of broader theoretical arguments rely on data from an incredibly narrow and often unrepresentative set of states, but push back when a study openly acknowledges its more narrow temporal confines. Why should we automatically assume that such studies inform our understanding of international relations and state behavior in the 1800s?
This is understandable. Particularly in the field of international relations, the availability of data is exponentially greater in the post-World War II period than before. Commonly used indicators like GDP and trade either don’t exist, are often missing, or are highly inaccurate for earlier time periods. Accordingly, many of our studies focus on this 50–60 year time period—not because it somehow matters more, or because we are interested only in this particular time period, but because this is the period for which we have access to relatively abundant data sources. But even in the post-World War II time period some of the data we use can still be of questionable reliability. Accordingly, when we see an article that focuses on a much earlier time period it sticks out like a sore thumb, and reviewers will often proceed to subject that paper to a different standard than other papers—a standard that really has nothing to do with the execution of the paper or the soundness of its argument.
This knee-jerk reaction against studies that are temporally bound in some way can also have deleterious consequences for our ability to understand the world. Finding that a particular relationship between two variables holds only for a given time period can reveal new and interesting questions. For example, we have evidence that Republicans and Democrats have switched their positions on military spending over the course of the Cold War. If we were looking for a relationship between Republicans and higher military spending over the entirety of this time period, we might erroneously conclude that there is no relationship. Alternatively, the finding that this relationship is temporally bound in some way raises new questions: Why did they switch? What caused the switch? Etc.
If our goal is to continuously develop and refine our understanding of how the world works then we must think carefully about the standards we set. If that standard is that we must only examine relationships that hold for centuries at a time, then we are imposing some very serious limitations on ourselves as researchers. These kinds of “big” systematic relationships are clearly important, but the march of scientific progress is not marked exclusively in these terms.
It's been a while since my last post, so I thought it was time to put up at least a little something:
Senate Republicans are, for the moment, blocking Chuck Hagel's nomination as Defense Secretary. Admittedly, my attention to current events has been spotty over the past couple of months, but it seems that every time I turn my attention back to this the basic rationale for opposing Hagel has changed. Initially it concerned his comments regarding Israel, then it morphed to include a means of obtaining more information about the attacks on Benghazi, and now it seems to have evolved further to include concerns over compensation Hagel received for giving some speeches since he left the Senate. Ultimately I guess I remain unsure as to how this works out for the better for Republicans—now, or in the long run.
A meteor injured several hundred people in Russia.
Spencer Ackerman has a piece up at Wired looking at the mistakes made by the Galactic Empire at the Battle of Hoth. There is also a broader set of responses to Ackerman's piece at Wired, and the folks at the Duck of Minerva have several followup posts debating the shortcomings of the Empire in a wider context. Itemize these posts, I will:
As these discussions have mirrored some of the lunchtime discussions I've had over the past few years with fellow bloggers Chad Clay and Michael Allen, I've enjoyed reading them immensely. I will try to update this list if there are any new additions.
A bit of shameless self-promotion before the holidays. The kind folks at International Studies Quarterly have put a new article by Colin Barry, Chad Clay, and myself up on early view. The link to the article, entitled "Avoiding the Spotlight: Human Rights Shaming and Foreign Direct Investment", is here, and here's the abstract:
Nonstate actors, such as international non-governmental organizations (INGOs) and multinational corporations (MNCs), have attained an increasingly prominent role in modern world affairs. While previous research has focused on these actors’ respective interactions with states, little attention has been paid to their interactions with each other. In this paper, we examine the extent to which the decisions of private actors seeking to invest abroad are affected by the reputational costs of doing business in countries publicly targeted by human rights activists. We find that ‘‘naming and shaming’’ by human rights INGOs tends to reduce the amount of foreign direct investment received by developing states, providing evidence that INGO activities affect the behavior of MNCs. An additional implication of our findings is that shaming by INGOs can impose real costs on targeted states in the form of lost investment.
We have a few projects along these lines that link our respective core research agendas in various ways, so (editorial and reviewer gods willing) be on the lookout for more in the future.
Happy holidays to all!
Discussing the election is all but inevitable. Given my proclivity for numbers, I gathered opinions from 11 willing "experts," including 5 political scientists, 3 bloggers from the QP, 8 PhDs, and members from other related fields (Public Policy, Communication, and Planning). Nine of the participants are from Boise State. Of course, to reward those who did better, points were assigned to each category, and I will likely be able to declare a winner by tomorrow.
The Battleground States:
Pennsylvania: 11-0 in favor of the Democrats winning.
North Carolina: 9-2 in favor of the Democrats winning.
New Hampshire: 10-1 in favor of the Democrats winning.
Iowa: 8-3 in favor of the Democrats winning.
Colorado: 7-4 in favor of the Democrats winning.
Wisconsin: 10-1 in favor of the Democrats winning.
Nevada: 11-0 in favor of the Democrats winning.
Virginia: 9-2 in favor of the Democrats winning.
Florida: 8-3 in favor of the Republicans winning.
Ohio: 11-0 in favor of the Democrats winning.
The Popular Vote for the Winner:
Mean: 50.78
Range: 48.3 - 52.3
The Electoral College for the Winner:
Mean: 306.82
Range: 287-330
Senate Seats:
Massachusetts: 10-1 for the Democratic candidate.
Connecticut: 7-4 for the Democratic candidate.
Missouri: 11-0 for the Democratic candidate.
North Dakota: 10-1 for the Republican candidate.
Indiana: 6-5 for the Republican candidate.
Wisconsin: 9-2 for the Democratic candidate.
Arizona: 11-0 for the Republican candidate.
Montana: 7-4 for the Republican candidate.
Nevada: 7-4 for the Republican candidate.
Virginia: 8-3 for the Democratic candidate.
Idaho Proposals (Keeping it local):
Proposition 1: 8-3 predict the measure will fail.
Proposition 2: 7-4 predict the measure will fail.
Proposition 3: 9-2 predict the measure will fail.
HJR 2: 9-2 predict the amendment will pass.
SJR 102: 9-2 predict the amendment will pass.
Seats held by the Democrats in the House:
Mean: 200.46
Range: 194-207
Many of those who participated in the poll (from Boise State) are live blogging over at the Blue Review.
Over at the new Political Violence at a Glance blog, Barbara Walter and Elizabeth Martin address a question that I raised last week regarding the reluctance of policymakers to label the Syrian conflict a civil war. Walter and Martin raise three/four points in particular (I may be lumping something together here) to help explain this behavior, but I have a couple of further questions/comments.
Erik Voeten at the Monkey Cage with an update on the amendment that cuts NSF funding for political science programs. A version of the amendment passed the House with a vote of 218-208. You can see the breakdown of the votes here (thanks to Erik for so conveniently providing the link to the votes). Erik also notes a couple of important points: 1) This is not the end of the issue, and 2) No other discipline was singled out in the same way as political science.
Other interesting facts:
NSF awards to political science do not even come close to what economics programs receive each year. Consequently, I'm not entirely sure what the fiscal justification for this cut can be. Singling out only political science in this way would seem to indicate the gripe is more substantive than fiscal. If not, then why not make a flat cut of 10%-20% across all BSES programs? Spread the cuts evenly? You could cut the annual econ awards by half and that would be the equivalent of cutting all of the annual political science awards AND some of sociology's as well. So, what gives?
There is probably a better/more efficient source for summary information than what I used, but you can search NSF records of programs and awards here. My estimates of yearly expenditures are extremely rough. Again, it's important to note that these are awarded amounts, not what is actually spent each year. There is going to be overlap from one year to the next as programs are often funded over the course of a couple of years. There is also some degree of overlap between programs as well. Accordingly, the figures provided should be taken as rough guidelines/comparisons, as well as with a healthy grain of salt. If anyone has a link to actual annual expenditure statistics by program, as opposed to awards, please share. I did a quick search this morning but didn't find anything more helpful than this particular source.
Via Henry Farrell at the Monkey Cage: Republican Jeff Flake of Arizona may today introduce an amendment to cut NSF funding for political science. Both APSA and Farrell call for folks to contact their representatives, and I will echo that call. I've benefitted enormously from NSF funding during my time in grad school and it funds a lot of great programs. As the Monkey Cage has previously noted, even Senator Tom Coburn, who previously campaigned to cut NSF funding for political science, has reaped the benefits of political science research, exploiting that research for his own professional/quasi-public purposes. And beyond that, I mean who cares about things like war, right? Or voting? I could go on...
3 Quarks Daily is hosting its 3rd Annual 3QD Politics & Social Science Prize for "best blog writing in politics & social science:"
As usual, this is the way it will work: the nominating period is now open, and will end at 11:59 pm EST on December 3, 2011. There will then be a round of voting by our readers which will narrow down the entries to the top twenty semi-finalists. After this, we will take these top twenty voted-for nominees, and the four main editors of 3 Quarks Daily (Abbas Raza, Robin Varghese, Morgan Meis, and Azra Raza) will select six finalists from these, plus they may also add up to three wildcard entries of their own choosing. The three winners will be chosen from these by Professor Walt.
The first place award, called the "Top Quark," will include a cash prize of one thousand dollars; the second place prize, the "Strange Quark," will include a cash prize of three hundred dollars; and the third place winner will get the honor of winning the "Charm Quark," along with a two hundred dollar prize.
Stephen Walt is judging:
We are very honored and pleased to announce that Professor Stephen M. Walt, who was also the winner of the 3QD politics prize last year, has agreed to be the final judge for our 3rd annual prize for the best blog writing in politics & social science.
Hat tip: Henry at Crooked Timber