

Policy Relevance

The Open Meeting was a bit disappointing. Considering that it was a major international conference that screens abstracts, I was expecting a higher caliber of papers than you see at, for example, the AAG meeting. From talking to other attendees, I gathered that the problem was in part my bad luck in picking sessions. Whatever the explanation, I saw too many presentations that dwelt at length on problem formulation, with a theoretical section that was more name- and buzzword-dropping than actual explanation of the perspective from which the study was conducted. The presenters would then run out of time and have to flick quickly through their actual empirical results, ending with conclusions that were banal, meaningless, or far too general to have been rigorously demonstrated by their small study (including the dreaded conclusion that "the phenomenon I'm studying is really complex").

One of the few good presentations I saw was by Barry Smit, who zeroed in on one of these oft-repeated conclusions -- that the work in question can or should affect policy. While he focused his talk on his own field of vulnerability analysis, what he said applies more broadly. Smit's argument was simple: if you're going to claim that your study is policy-relevant, you have to be able to explain how. It's not enough to study a phenomenon that is or should be of importance to policymaking (which vulnerability certainly is). You have to study it in a way that produces results that decisionmakers will be able and willing to use. If your study is motivated by intellectual curiosity rather than policy relevance, that's fine -- but you have to be honest (with yourself and your readers and listeners) about that, instead of inflating how far toward the "applied" end of the spectrum your work sits.
