Non-Statistical Thinking in the US Foreign Policy Establishment

I’m a few weeks behind in my New Yorker reading and so just recently read this fascinating article by Ryan Lizza on the current administration’s foreign policy. He gives some insights into the transformation of Obama from antiwar candidate to a president conducting three wars.

Speaking as a statistician, though, what grabbed my eye was a doctrine of journalist/professor/policymaker Samantha Power. Lizza writes:

In 2002, after graduating from Harvard Law School, she wrote “A Problem from Hell,” which surveyed the grim history of six genocides committed in the twentieth century. Propounding a liberal-interventionist view, Power argued that “mass killing” on the scale of Rwanda or Bosnia must be prevented by other nations, including the United States. She wrote that America and its allies rarely have perfect information about when a regime is about to commit genocide; a President, therefore, must have “a bias toward belief” that massacres are imminent.

From a statistical perspective, this sounds completely wrong! If you want to argue that it’s a good idea to intervene, even if you’re not sure, or if you want to argue that it’s wise to intervene, even if the act of intervention will forestall the evidence for genocide that would be the motivation for intervention, that’s fine. It’s a cost-benefit analysis and it’s best to lay out the costs and benefits as clearly as possible (within the constraints established by military and diplomatic secrecy). But to try to shade the probabilities to get the decision you want . . . that doesn’t seem like a good idea at all!

To be fair, the above quote predates the Iraq WMD fiasco, our most notorious recent example of a “bias toward belief” that influenced policy. Perhaps Power has changed her mind on the virtues of biasing one’s belief.

P.S. Samantha Power has been non-statistical before.

P.P.S. Just in case anyone wants to pull the discussion in a more theoretical direction: No, Power’s (and, for that matter, Cheney’s) “bias toward belief” is not simply a Bayesian prior. My point here is that she’s constructing a belief system (a prior) based not on a model of what’s happening or even on a subjective probability but rather on what she needs to get the outcome she wants. That’s not Bayes. In Bayes, the prior and the utility function are separate.
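The prior/utility distinction can be made concrete with a toy decision calculation (all numbers here are made up for illustration; this is a sketch of the general point, not a model of any real policy decision). In a coherent Bayesian analysis, if you want intervention to win more often, you argue about the loss function; inflating the probability itself is a different, and incoherent, move:

```python
# A minimal sketch of the P.P.S. point: in Bayesian decision theory the
# prior and the utility (loss) function are separate inputs. The costs
# below are hypothetical placeholders, not real estimates.

def decide(p_genocide, cost_intervention=1.0, cost_genocide=20.0):
    """Pick the action with lower expected loss given P(genocide imminent)."""
    loss_intervene = cost_intervention            # paid regardless of outcome
    loss_do_nothing = p_genocide * cost_genocide  # paid only if genocide occurs
    return "intervene" if loss_intervene < loss_do_nothing else "do nothing"

# Honest analysis: with these made-up costs the break-even probability is
# 1/20 = 0.05, so a 3% chance does not justify intervening...
print(decide(0.03))                      # -> do nothing
# ...whereas a "bias toward belief" smuggles the desired outcome in by
# inflating the probability itself:
print(decide(0.03 * 3))                  # -> intervene
# The Bayesian-coherent route keeps the prior honest and debates the
# utilities instead:
print(decide(0.03, cost_genocide=50.0))  # -> intervene
```

Both routes flip the decision, which is exactly why conflating them is tempting; the difference is that the second keeps the probability assessment answerable to evidence.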

[Cross-posted at the Monkey Cage]

Andrew Gelman

Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University.