The field of psychology is experiencing a crisis: our studies do not replicate. When Science published the results of attempts to replicate 100 studies, the findings were not confidence-inspiring, to say the least. The average effect size declined substantially, and while 97% of the original papers reported significant p values, only 36% of the replications did.
Psychology is not alone; the same difficulty in reproducing findings occurs in other scientific fields. We know why so many studies that don't replicate were published in the first place: the intense pressure to publish in order to get tenure, win grants, and teach fewer courses, and journals' preference for counterintuitive findings over less surprising ones. But it is worth noting that one-shot priming studies are far more likely to be flukes than longitudinal descriptive studies (e.g., studies examining changes in language in the second year of life) or qualitative studies (e.g., studies in which people are asked to reflect on and explain their responses and those of others).
In reaction to these jarring findings, journals are changing their policies. No longer will they accept single studies with small sample sizes and p values hovering just below .05. But this is only the first step. Because the new policies will result in fewer publications per researcher, universities will have to change their hiring, tenure, and reward systems, and granting and award-giving agencies will have to do the same. We will need to stop the lazy practice of counting publications and citations and instead read critically for quality. That takes time.
Good will come of this. Psychology will report findings that are more likely to be true and less likely to spawn urban myths. This will enhance the reputation of psychology and, more important, our understanding of human nature.