
Whether psychology research is improving depends on whom you ask

[Image: a man in a superhero costume striking a power pose] The research controversy behind power poses is just one example of how replication and new statistical standards are changing the culture of social psychology.

For more than a decade, psychology has been contending with some of its research findings going up in smoke.

Widely publicized attempts to replicate major findings have shown that study results that scientists and the public took for granted might be no more than statistical flukes. We should, for example, be primed for skepticism when studying priming. Power posing may be powerless.

A recent piece in the New York Times recalled the success of a popular study on powerful poses, and how efforts to replicate the research failed. The article detailed how the collapse of the research behind power poses took place in the increasingly common culture of public critique and infighting in social psychology. Some of that fighting comes from efforts not to tear down, but to build up the field with better scientific rigor and statistics.

Those efforts are well-intentioned. But have they had any impact? Studies published earlier this year tackle that question, first with a survey of social scientists and then with an analysis of recently published papers. The results show that psychology remains plagued by small sample sizes, pessimism and a strong pressure to publish. But they also suggest that psychologists really are trying to turn their field around, and that online fights might even be part of the solution.

“My general research interest is trying to figure out how people can talk about things that they care about without yelling at each other,” says Matt Motyl, a psychologist at the University of Illinois in Chicago. “Usually that means politics or religion.” But heated op-eds and scathing rebuttals in scientific journals, coupled with raging debates about the practice of science on prominent psychology blogs, soon drew his attention. “The vitriol and personal attacks I saw people making seemed to go beyond what the data actually said,” Motyl says. “Show me the data. Is the field getting better or is it getting worse?”

Motyl and his colleague Linda Skitka, also a psychologist at the University of Illinois in Chicago, created a survey to try to understand the views behind the public disagreements. The goal was to find out just how replicable social scientists think the work in their field is, and whether it is better — or worse — than it was 10 years ago. Motyl and Skitka sent the survey to the memberships of three social and personality psychology societies — the Society for Personality and Social Psychology, the European Society for Social Psychology and the Society for Australasian Social Psychologists — and publicized it on Twitter. They got completed surveys from more than 1,100 people, almost 80 percent of whom were social psychologists.

Half of the scientists who filled out the survey felt that their field is producing more dependable results now than it did 10 years ago. Respondents also estimated that less than half of the studies published 10 years ago yielded conclusions that could be replicated, and for recent studies, that figure is about half, Motyl and Skitka reported in the July Journal of Personality and Social Psychology.

Survey participants were also asked if they engaged in questionable research practices such as reporting only experiments that produced positive results, dropping conditions from experiments or falsifying data. Then, they were asked to justify when or if those practices were acceptable.

Unsurprisingly, faking data was deemed never acceptable. But opinions on the other practices were more variable, and many scientists provided explanations to justify when they had used practices such as deciding to collect more data after looking at their results or reporting only the experiments that produced the desired effects. But more than 70 percent of the scientists stated they would be less likely to engage in questionable practices now that they’ve become aware of the problems they cause.
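Why one of those practices — collecting more data after looking at the results — counts as questionable can be hard to see from the outside. The sketch below is not from the study; it is an illustrative simulation (assuming the widely used numpy and scipy libraries, with made-up sample sizes) of how re-testing as data accumulate inflates false positives even when no real effect exists.

```python
# Illustrative simulation (not from the article): "optional stopping,"
# i.e., peeking at the data and collecting more if the result isn't yet
# significant. Both groups are drawn from the same distribution, so any
# "significant" difference at p < .05 is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive(peek=False, start_n=20, max_n=100, step=10):
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            return True                      # a "finding" gets reported
        if not peek or len(a) >= max_n:
            return False                     # stop with a null result
        a += list(rng.normal(size=step))     # peek, then add more participants
        b += list(rng.normal(size=step))

trials = 2000
fixed = sum(false_positive(peek=False) for _ in range(trials)) / trials
peeking = sum(false_positive(peek=True) for _ in range(trials)) / trials
print(f"fixed-sample false-positive rate: {fixed:.2%}")    # near the nominal 5%
print(f"optional-stopping rate:           {peeking:.2%}")  # noticeably higher
```

With a fixed sample, the false-positive rate stays near the advertised 5 percent; letting the simulated researcher peek and top up the sample typically pushes it well above that, which is why the practice is flagged unless it is disclosed and corrected for.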

“It’s heartening to see some evidence social and personality psychologists are incorporating better research practices into their work,” says Alison Ledgerwood, a psychologist at the University of California at Davis (and one of the self-identified reviewers of the paper).

The survey participants were very clear about what drove their problematic practices. “The worst practices were external to the researchers themselves,” Skitka says: “It’s the publish-or-perish business” that’s driving bad practice. Pressure to publish findings that appeared to support a hypothesis (and even editor and peer-review requests to do so) drove 83 percent of the scientists to selectively report only the studies that turned out well. Among scientists who dropped conditions from their studies, 39 percent said they did so out of publication pressure. And 57 percent said the same pressure drove them to report unexpected findings as expected, in the interest of telling a more compelling story — some noting that editors and reviewers wanted them to do it.

“As an untenured junior faculty member, there’s a lot of pressure,” Motyl says. “There’s pressure to make things as publishable as possible.” Success in a research career depends on impressive findings, tight storylines and publishing in important journals. Under such pressure, bad behavior can be glossed over or even rewarded.

“It’s disappointing, but not surprising to me, that researchers are still reporting pressure to selectively report study that…
