Top science editor defends peer-review system in climate row
Top science journal Nature was hit with claims last week that its editors -- and those of other leading titles -- have a bias towards papers highlighting negative climate change effects. It denies the allegation.
Scientist Patrick Brown shocked his peers when he said he had tailored his study on California wildfires to emphasise global warming. He claimed it would not have been accepted if it had not pandered to editors' preferred climate "narrative".
Nature's editor-in-chief Magdalena Skipper spoke to AFP about the case and the broader challenges facing academic publishing in the age of climate change and artificial intelligence.
The interview has been edited for length and flow.
- Bias claim -
Q. Are journal editors biased towards studies that emphasise the role of climate change over other factors?
A. "The allegation that the only reason why (Patrick Brown) got the paper published in Nature was because he chose the results to fit a specific narrative makes no sense at all. I'm completely baffled (by the claim). If a researcher provides compelling, convincing, robust evidence that goes against a consensus, that study actually becomes of special interest to us -- that's how science progresses.
"Since (climate change) is a pressing issue, of course there is an awful lot of research that is funded, performed and subsequently published to probe the matter, to understand how grave the problem really is today.
"In this case we had (peer-) reviewers saying that climate change is not the only factor that affects wildfires. The author himself argued that, for the purpose of this paper, he wished to retain the focus solely on climate change.
"We were persuaded that a paper with that focus was of value to the research community because of the contribution made by the quantification (of climate impacts)."
- Studies retracted -
Q. Research shows thousands of published studies across the academic world get retracted due to irregularities. Is the peer-review system fit for purpose?
A. "I think everyone in the scientific community would agree that the peer review system isn't perfect, but it's the best system we have. No system is 100-percent perfect, which is why at Nature, we have been trialling different approaches to peer review. There can be many rounds of peer review. Its complexity depends on the comments of the reviewers. We may decide not to pursue the paper.
"We have had cases at Nature of deliberate scientific misconduct, where somebody manipulates or fabricates data. It happens across disciplines, across scientific publishing. This is extremely rare.
"I think the fact that we see retractions is actually a signal that a system works."
- Pressure to publish -
Q. Is there too much pressure on scientists to get published at any cost?
A. "Science funding is precious and scarce, let's face it. Researchers have to compete for funding. Once an investigation has been funded and carried out, it makes sense for the results to be published.
"On the other hand, PhD students in many educational systems are required to publish one or more scientific papers before they graduate. Is this a helpful requirement when we know that a large proportion of PhD students are not going to continue in research?
"In many cases, early-career researchers waste time, opportunity and money to publish in predatory journals (that, unlike Nature, take a fee without offering proper peer review and editing), where their reputation suffers. They are effectively tricked into thinking that they are genuinely publishing to share information with the community."
- AI in publishing -
Q. What measures is Nature taking to monitor the use of artificial intelligence programs in producing scientific studies?
A. "We do not disallow using LLMs (large-language models such as ChatGPT) as a tool in preparation of manuscripts. We certainly disallow the use of LLMs as co-authors. We want the authors who have availed themselves of some AI tool in the process to be very clear about it. We have published and continue to publish papers where AI was used in the research process.
"I've heard of journals which published papers where leftover text from (AI tool) prompts was included in papers. At Nature, this would be spotted by the editors. But when we work with the research community and the authors who submit to us, there is an element of trust. If we find that this trust has been abused consistently then we may have to resort to some systematic way of scanning for generative AI use."
Q. Do editors have the technical means to scan for use of these AI tools?
A. "At the moment, not to my knowledge. It's an incredibly fast-moving field. These generative AI tools are themselves evolving. There are also some really promising applications of AI in accelerating research itself."