Isn't readability and comprehensibility the job of the editor/journal to check? (After all, they're actually paid.) Maybe not for conferences, but peer review is more for checking whether the methodology, scope, claims, direction, conclusions, and relevance are sound and trustworthy.
The editor is often not the right person to decide based on technical details. Most often, the articles they receive are outside their field of expertise, and they don’t really have a way of deciding whether a section is comprehensible or not. It’s very difficult for an outsider to know which bit of jargon is redundant and which bit is actually important to make sense of the results. So this bit of readability checking falls to the referees.
In theory editors (or rather copyeditors, the editors themselves have to handle too many papers to do this sort of thing) should help with things like style, grammar, and spelling. In practice, quality varies but it is often subpar.
Highly dependent on journal/field. In mine (mathematics) most associate editors work for free, same as reviewers. The reviewers do all the things you say, and in addition try to ensure readability and novelty. Most journals do have professional copyediting, but that's separate from the content review.
I don't know how refereed conference proceedings work (we don't really use these). The only journals I know of that have professional editors (i.e., editors who are not active researchers themselves) are Nature and affiliated journals, but someone more knowledgeable should correct me here.
> Isn’t readability and comprehensibility the job of the editor/journal to check
Yes, and who do you think asks the reviewers to perform their reviews?
> peer review is more for checking whether the methodology, scope, claims, direction, conclusions, and relevance are sound and trustworthy
No, the parent comment has it right. The only thing being reviewed is the paper, and the point is to make sure it communicates clearly, not that it’s “sound and trustworthy.”
The editor is basically deferring to people with expertise who can put the paper into context better than they could. The editor might be an expert in the field, but they can’t speak for every aspect of it the way someone working day to day in that specific aspect of the field can. Sometimes the authors themselves even recommend potentially relevant reviewers for the editor to contact for peer review.
In CS, the editor/journal doesn’t do those things. Instead, the reviewers do. (Sometimes reviewers “shepherd” papers to help fix readability after acceptance.)
Also, most work goes to conferences; journals typically publish longer versions of previously published conference papers.
Yes, a metadata relationship link would be outstanding: reproduced in some paper xyz, or by some institution, named individuals, etc. Some kind of structured information would be very useful.
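To make that concrete, here is a minimal sketch of what such a structured link could look like. The field names and outcome labels are hypothetical, just to illustrate the shape of the data, not any existing registry's schema:

```python
from dataclasses import dataclass, field

@dataclass
class ReplicationRecord:
    """One structured 'replicated-by' link attached to a paper's metadata."""
    original_doi: str      # the paper being replicated
    replicating_doi: str   # the replication study
    outcome: str           # e.g. "confirmed", "partial", "failed"
    institution: str = ""  # who performed the replication
    authors: list[str] = field(default_factory=list)

# A registry could hold many such links per paper:
link = ReplicationRecord(
    original_doi="10.1000/original.123",
    replicating_doi="10.1000/replication.456",
    outcome="confirmed",
    institution="Example University",
    authors=["A. Researcher", "B. Student"],
)
```

Search engines and citation databases could then surface "replicated N times" the same way they surface citation counts today.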
Barriers to publication should be lower for replication studies, I think that’s the main problem.
If someone wants to spend some time replicating something that’s only been described in a paper or two, that is valuable work for the community and should be encouraged. If the person is a PhD student using it as an opportunity to hone their skills, even better. It’s not glamorous, it’s not something entirely new, but it is useful and important. And this work needs to go to normal journals; otherwise there will just be journals dedicated to replication, and their impact factor will be terrible and nobody will care.
There are basically no barriers to publication. There are a number of normal journals that publish everything submitted, as long as it appears to be honest research.
Not nice journals, though. At least not in my experience, but that’s probably very field-dependent. It’s not uncommon to get a summary rejection letter for lack of novelty, and that is one aspect they stress when they ask us to review articles.
But novelty IS what makes those journals nice and prestigious in the first place. It is the basis of their reputation.
It's basically a catch-22: we want replication in prestigious journals, but any journal that publishes replications becomes less novel and less prestigious.
It all comes down to what people value about journals. If people valued replication more than novelty, replication journals would be the prestigious ones.
It all comes back to the fact that doing novel science is considered more prestigious than replication. Institutions can play all kinds of games to try to make it harder for readers to tell novelty apart from replication, but people will just find new ways to signal and determine the difference.
Let's say we pass a law that prestigious journals must publish 50% replications. The prestige from publishing in that journal will just shift to publishing in that journal with something like "first demonstration" in the title, or to publishing in that journal plus having a high citation or impact value.
It is really difficult to come up with a system- or institution-level solution when novelty is still what individuals value.
As long as companies and universities value innovation, they will figure out ways to determine which scientists are innovative and value them more.
I wonder if undergrads could be harnessed for this kind of work, maybe under the supervision of doctoral students and a well-meaning, interested PI.
Maybe add people as special authors/contributors to the original work.
There always seems to be a contingent of people who think that anything less than a 100% solution is inadequate, so nothing gets done. Peer review has proven itself inadequate, and people hang on to it tooth and nail. Some disciplines should require replication for everything; I won't name Psychology or the Social Sciences in general, but the failure-to-replicate rate for some is unacceptable.
Let's not make perfect the enemy of good. We may never be able to require replication in every field, but we could start in many fields today. It means changing our values: making replication a valid path to tenure and promotion and a required element of PhD studies.
> Peer review does not serve to assure replication, but assure readability and comprehensibility of the paper.
I have had a paper rejected twice in a row over the last year. Both times the comments included something like "the paper was very well-written; well-written enough that an undergrad could read it".
It’s a bit more subtle than that. Not all papers are equal, and I’d trust an article from a large team where error and uncertainty analysis has been done properly (think the Higgs boson paper) over a handful of dodgy experiments that are barely documented.
But yeah, in the grand scheme of things, if it hasn’t been replicated then it hasn’t been proven, though some works are credible on their own.
Given that some experiments cost billions to conduct, it is impossible to implement "Peer Replication" for all papers.
What could be done is to add metadata about papers that were replicated.
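For instance, a reader or search engine could then filter for papers with at least one confirmed replication. A toy sketch, where the record shape and field names are assumed for illustration rather than taken from any real database:

```python
# Hypothetical paper records carrying replication metadata (field names assumed).
papers = [
    {"doi": "10.1000/a", "replications": [{"doi": "10.1000/x", "outcome": "confirmed"}]},
    {"doi": "10.1000/b", "replications": []},
]

# Keep only papers with at least one confirmed replication.
replicated = [
    p["doi"]
    for p in papers
    if any(r["outcome"] == "confirmed" for r in p["replications"])
]
print(replicated)  # ['10.1000/a']
```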