HOW GOOD IS OUR INTELLIGENCE?....Michael Schrage has an interesting suggestion for forcing intelligence analysts to put their money where their mouths are. Instead of simply writing up their conclusions in vague, bureaucratic language, they should assign probabilities to everything they say:

The first number would state their confidence in the quality of the evidence they’ve used for their analysis: 0.1 would be the lowest level of personal/professional confidence; 1.0 would be (former CIA director George Tenet should pardon the expression) a “slam dunk,” an absolute certainty.

The second number would represent the analyst’s own confidence in his or her conclusions. Is the analyst 0.5 confident (the “courage of a coin toss”) or a bolder 0.75 confident in his or her analysis? Or is the evidence and environment so befogged with uncertainty that the best analysts can offer the National Security Council is a 0.3 level of confidence?

This sounds like a good idea, and I suppose it might be. On the other hand, my own experience with this sort of thing suggests it might not make as big a difference as Schrage thinks.

I used to subscribe to a market research service that did this. (No, I don’t remember which one, and I don’t know if they still do it.) My initial reaction was quite positive, being the analytic geek that I am, but to my surprise I quickly began ignoring the numbers. The idea seemed sound (a hard number ought to provide more information than a bunch of waffly, ass-covering words) but in practice it didn’t. In the end, the number often seemed to have been plucked out of the air with less thought than the verbal analysis it accompanied, included solely because the company’s in-house style required it. (For comparison, think about how carefully you respond to questions asking for your opinion “on a scale of one to ten.” Before long, practically everything becomes a five.) After reading a few dozen reports with five or ten probabilities apiece, I started to tune them out.

What made a much bigger impact than the probabilities, which eventually seemed like little more than a gimmick, were two things: (1) who wrote the analysis and (2) the evidence presented to back up its conclusions. Like most people, I paid a lot more attention to analysts who had a good track record and to reports that were backed up by credible data.

Schrage’s idea might still be a good one, but not for use on a routine basis. In fact, routine use could be harmful, since people often ascribe unwarranted credibility to anything with a number attached to it, even if the number is just a guess, as it would be in this case. Do we really want NSC meetings where the participants heatedly debate whether something is a 60% or a 70% probability instead of asking deeper questions about the quality of the underlying data itself?

It’s an idea worth thinking about. But it’s got a downside as well as an upside.
