The IIES is pleased to announce that Assistant Professor Jonathan de Quidt's paper "Measuring and Bounding Experimenter Demand" (with Johannes Haushofer and Christopher Roth) has been accepted for publication in the American Economic Review, considered one of the world's top scholarly journals in economics. We sat down with Jonathan for a short Q&A about the paper.

Can you briefly summarize what your paper is about?

We study "demand effects": the concern that participants in social science experiments or surveys might simply be acting out what they think the researcher wants them to do, rather than revealing their true or "natural" behavior. We develop a new approach to assessing the potential bias created by these effects, and apply it to a wide range of classic experimental tasks. The approach is based on deliberately inducing demand effects by explicitly signaling what we want participants to do. We find that participants do respond to these signals, so researchers are right to pay attention to potential demand effects. The good news, though, is that in the classic experimental designs we study, demand effects seem likely to be small.
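To make the logic concrete, here is a minimal, hypothetical sketch (not code from the paper) of one way deliberately induced demand signals could be used to bound a treatment effect. It assumes that behavior under explicit "do more" / "do less" signals brackets participants' natural behavior in each experimental arm; the cell names and numbers are invented purely for illustration.

```python
# Illustrative sketch only: bounding a treatment effect using deliberately
# induced demand signals, under the (assumed) condition that signaled
# behavior brackets natural behavior in each arm. All data are simulated.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes from four cells: treatment/control crossed with a
# positive ("do more") or negative ("do less") demand signal.
y_treat_plus  = rng.normal(loc=5.4, scale=1.0, size=200)  # treatment, "do more"
y_treat_minus = rng.normal(loc=4.6, scale=1.0, size=200)  # treatment, "do less"
y_ctrl_plus   = rng.normal(loc=4.2, scale=1.0, size=200)  # control,   "do more"
y_ctrl_minus  = rng.normal(loc=3.8, scale=1.0, size=200)  # control,   "do less"

# If signaled behavior brackets natural behavior in each arm, the natural
# treatment effect lies between these two contrasts.
upper_bound = y_treat_plus.mean() - y_ctrl_minus.mean()
lower_bound = y_treat_minus.mean() - y_ctrl_plus.mean()

print(f"treatment effect bounds: [{lower_bound:.2f}, {upper_bound:.2f}]")
```

Under that bracketing assumption, the widest contrast (treatment pushed up against control pushed down) gives an upper bound on the natural treatment effect, and the narrowest contrast gives a lower bound; narrow bounds would suggest demand effects pose little threat to the estimate.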

How do you think your paper will influence future research in the area?

We see two main uses of our paper by researchers. First, we hope it will become common practice to apply our approach in studies where demand effects are a concern – for example, because it is hard to prevent participants from picking up on what the research objectives are likely to be. Where applying the approach is not possible, we hope researchers will use our estimates to assess the potential for bias in their study. Second, we think our method can be usefully applied when demand effects themselves are the object of study, and we have some future work planned along these lines.

What are the most important lessons policy makers and the general public can learn from your paper?

While the paper is primarily a methodological piece and is likely to be of interest mostly to practitioners, an encouraging take-away from our results is that findings from the classic experimental tasks we study do not appear very vulnerable to demand effects. Experimental social science is going through a bit of turmoil right now, with much discussion of whether findings are replicable. It is therefore encouraging that one potential threat – demand effects – seems fairly benign.

Where does the idea for this paper originate from?

There's quite a nice story to this, actually. I had been thinking about demand effects and approaches like ours since a conversation in grad school, in which we pondered to what extent a researcher who wanted to "show" a particular finding could achieve it by cleverly tweaking their experimental design. Then, to my surprise, Johannes pitched a very similar idea to me in the summer of 2015 (when he was visiting the IIES), so naturally we began working together. We had already been planning to invite Chris to join us when, to our further surprise, he proposed the same basic idea to us! So a research team was born...