Innovating the peer-review research process


A team of scientists led by a Michigan State University astronomer has found that a new process of evaluating proposed scientific research projects is as effective as, if not more effective than, the traditional peer-review method.

Normally, when a researcher submits a proposal, the funding agency asks a number of researchers in that particular field to evaluate it and make funding recommendations. It is a system that can be cumbersome and slow, and not quite an exact science.

“As in all human endeavors, this one has its flaws,” said Wolfgang Kerzendorf, an assistant professor in MSU’s departments of Physics and Astronomy, and Computational Mathematics, Science and Engineering.

Detailed in the publication Nature Astronomy, Kerzendorf and colleagues tested a new system that distributes the workload of reviewing project proposals among the proposers themselves, known as the “distributed peer review” approach.

The team enhanced that approach with two other novel features: machine learning to match reviewers with proposals, and a feedback mechanism on the reviews themselves.

Essentially, the system combines three features designed to improve peer review.

First, when scientists submit a proposal for evaluation, they are asked to review several of their competitors’ proposals in return. Spreading the work across all of the proposers lessens the number of reviews any one person has to do.

“If you lower the number of reviews that every person has to do, they may spend a little more time with each one of the proposals,” Kerzendorf said.
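To make the assignment concrete, here is a minimal sketch of one way such a scheme could work; it is not necessarily the algorithm the team used, and the function name and the review count k are illustrative:

```python
import random

def assign_reviews(proposal_ids, k=8, seed=0):
    """Give every proposer k of the other proposals to review.

    A round-robin over a shuffled ordering guarantees that each
    proposal receives exactly k reviews and that no one is ever
    assigned their own submission (as long as k < number of proposals).
    """
    rng = random.Random(seed)
    order = list(proposal_ids)
    rng.shuffle(order)
    n = len(order)
    assignments = {}
    for i, proposer in enumerate(order):
        # Offsets 1..k skip the proposer's own slot in the ordering.
        assignments[proposer] = [order[(i + step) % n] for step in range(1, k + 1)]
    return assignments

# Example: 172 proposals, each proposer reviews a handful of the others.
reviews = assign_reviews(range(172), k=8)
```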

Second, using machine learning, funding agencies can match reviewers with proposals in the fields where they are demonstrably expert. This can take human bias out of the equation, resulting in more accurate reviews.

“We essentially look at the papers that potential readers have written and then give these people proposals they are probably good at judging,” Kerzendorf said. “Instead of a reviewer self-reporting their expertise, the computer does the work.”
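The paper describes its own matching pipeline; as a rough illustration of the idea, one could score reviewer–proposal fit by the textual similarity between a proposal and each reviewer’s past papers. The sketch below uses TF-IDF and cosine similarity, and the function and variable names are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_reviewers(proposal_text, reviewer_papers):
    """Rank candidate reviewers by how similar their past papers are
    to the proposal. `reviewer_papers` maps each reviewer to the
    concatenated text (e.g., abstracts) of their publications.
    """
    names = list(reviewer_papers)
    corpus = [reviewer_papers[name] for name in names] + [proposal_text]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Cosine similarity of each reviewer's body of work to the proposal.
    scores = cosine_similarity(tfidf[:-1], tfidf[-1]).ravel()
    return sorted(zip(names, scores), key=lambda pair: -pair[1])

papers = {
    "reviewer_a": "supernova spectra radiative transfer modeling",
    "reviewer_b": "exoplanet atmospheres transit photometry surveys",
}
print(rank_reviewers("a time-domain survey of supernova spectra", papers))
```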

And third, the team introduced a feedback system in which the people who submitted proposals can judge whether the reviews they received were helpful. Ultimately, this might help the community reward scientists who consistently provide constructive criticism.

“This part of the process is not unimportant,” Kerzendorf said. “A good, constructive review is a bit of a bonus, a reward for the work you put in reviewing other proposals.”
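As a simple illustration of how such feedback could be aggregated, the helpfulness grades a reviewer accumulates might be averaged into a track record; the grading scale and names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical helpfulness grades (say, 1-4) that proposers gave to
# the reviews they received, keyed by the reviewer who wrote each one.
grades = [("reviewer_a", 4), ("reviewer_b", 2),
          ("reviewer_a", 3), ("reviewer_c", 4)]

by_reviewer = defaultdict(list)
for reviewer, grade in grades:
    by_reviewer[reviewer].append(grade)

# Average helpfulness per reviewer: one possible basis for rewarding
# scientists who consistently write constructive reviews.
helpfulness = {r: sum(g) / len(g) for r, g in by_reviewer.items()}
print(helpfulness)  # {'reviewer_a': 3.5, 'reviewer_b': 2.0, 'reviewer_c': 4.0}
```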

To do the experiment, Kerzendorf and his team considered 172 submitted proposals, each requesting time on telescopes of the European Southern Observatory, a 16-nation ground-based astronomy organization headquartered in Germany.

The proposals were reviewed both in the traditional manner and using distributed peer review. The results? From a statistical standpoint, the two approaches were essentially indistinguishable.
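The paper’s statistical comparison is more detailed, but the basic question, whether the two processes rank proposals the same way, can be framed as a rank correlation. The sketch below uses synthetic scores purely for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in scores for 172 proposals; the real analysis
# compared actual grades from the two processes.
rng = np.random.default_rng(0)
panel = rng.normal(size=172)                   # traditional panel grades
dpr = panel + rng.normal(scale=1.0, size=172)  # distributed-review grades

# A rank correlation close to the panel's own internal agreement
# (e.g., between two halves of the panel) would indicate the two
# processes order proposals about equally well.
rho, p = spearmanr(panel, dpr)
print(f"Spearman rho = {rho:.2f}, p = {p:.2g}")
```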

Still, Kerzendorf said, this was a novel experiment testing a new approach to the peer review of research proposals, one that could make a difference in the scientific world.

“While we think very critically about science, we sometimes do not take the time to think critically about improving the process of allocating resources in science,” he said. “This is an attempt to do this.”

###

Other members of the research team included Ferdinando Patat, Dominic Bordelon and Glenn van de Ven of the European Southern Observatory; and Tyler Pritchard of the Center for Cosmology and Particle Physics, New York University.

(Note for media: Please include a link to the original paper in online coverage: https://www.nature.com/articles/s41550-020-1038-y)

This information is sourced from https://www.eurekalert.org/pub_releases/2020-04/msu-itp041620.php
