Michael LaCour’s Gay Marriage Study Fraud Exposes a Serious Problem in Modern Academia

Jesse Singal at New York Magazine wrote up one of the more disturbing stories I’ve read in some time: the curious case of academic fraud perpetrated by one Michael LaCour, a political science grad student at UCLA who was (until recently) destined for a job at Princeton.

I hadn’t followed the story closely until now, but apparently Mr. LaCour made a big splash with (fabricated) data purporting to show that voter opinions on gay marriage shifted dramatically, and essentially permanently, after a single in-person canvassing conversation with an LGBT person telling their own personal story. His “data” wound up on This American Life, and went on to alter how many LGBT activism organizations conducted their campaigns.

But it was all a lie, eventually and painstakingly exposed by David Broockman, a graduate student at UC Berkeley. While LaCour’s “findings” blithely passed through the peer review process without a hitch, under the not-so-watchful eyes of his advisers, and into the hands of a series of uncritical research journals and social justice NGOs, Broockman was the first person to seriously scrutinize LaCour’s data and attempt to replicate it.

The results were shocking. LaCour’s “methodology” supposedly involved paying out 10,000 incentives at $100 apiece through a no-name survey company that conducted the “canvassing.” This was the very core of his research process, and to anyone with significant experience in the private-sector research industry it’s patently laughable on this point alone. My primary day job is in marketing research, and my eyes popped out of my head the moment I read this:

First, the budget-conscious Broockman had to figure out how much such an enterprise might cost. He did some back-of-the-envelope calculations based on what he’d seen on LaCour’s iPad — specifically, that the survey involved about 10,000 respondents who were paid about $100 apiece — and out popped an imposing number: $1 million. That can’t be right, he thought to himself. There’s no way LaCour — no way any grad student, save one who’s independently wealthy and self-funded — could possibly run a study that cost so much. He sent out a Request for Proposal to a bunch of polling firms, describing the survey he wanted to run and asking how much it would cost. Most of them said that they couldn’t pull off that sort of study at all, and definitely not for a cost that fell within a graduate researcher’s budget. It didn’t make sense.

Of course it didn’t make sense. It’s insane on its face. To someone outside the industry I guess it might seem plausible, but anyone in the industry can tell you how preposterous that sounds as a research process, even if you had the budget for it. Which no graduate student does.
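For the record, the back-of-the-envelope arithmetic here, using the figures from Singal’s account, is as damning as it is simple:

$$10{,}000 \text{ respondents} \times \$100 \text{ per incentive} = \$1{,}000{,}000$$

And that million dollars covers only the respondent incentives, before a survey firm charges a dime for fielding, programming, or data processing.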

And then, of course, there were the red flags in the data itself. The reason the study gained such wide attention is that respondents supposedly had large, permanent shifts in their opinion on marriage equality, even though no previous study had ever shown such an effect from canvassing. But forget about academic studies. Political professionals across America, myself included, collectively have tens of thousands of campaigns’ worth of experience under our belts, with very specific data on how much effect we can expect canvassing to have on voters in all sorts of circumstances, using all sorts of arguments. The effects described in LaCour’s “work” could only have seemed plausible to people without much field experience in the gritty real world of campaigns.

It turns out that LaCour’s fraud persisted for so long because the rigidity of academic hierarchy and the ultra-competitive academic job market create a gigantic disincentive to question the work of others, particularly when high-profile professors are backing the data. The high-profile professors themselves are too overworked in the publish-or-perish environment to go over others’ work with a fine-tooth comb.

Add to this that the NGOs, the media, and the other professionals who promoted this data were struck with a serious case of wishful thinking: apparently they either never looked into the raw data or (more likely) had no one on staff with real-world private-sector marketing research or election field campaign experience. It becomes easy to see how one graduate student’s wildly fabricated data made it to the national news and shaped organizations’ campaign strategies for years before being exposed as a fraud.

This is a big problem, one that cuts to the heart of a deep sickness within the academic humanities, brought on partly by personal pettiness and careerism, partly by the brutal reality of the academic job market, and partly by what appears to be a shocking lack of real-world experience in applied research and campaigns. It makes one question how much of the data coming out of academic political science, and the humanities generally, can actually be trusted. And that’s not a good thing at all.

The one bright spot is that the fraud did eventually get exposed, through replication, by a courageous student. But how many studies are simply taken at face value, without any attempt at replication? It’s a very troubling question.

David Atkins

David Atkins is a writer, activist and research professional living in Santa Barbara. He is a contributor to the Washington Monthly's Political Animal and president of The Pollux Group, a qualitative research firm.