
I'm usually assigned to review one or a couple of papers for a conference. The problem is that I only have access to these particular submissions.

How can I give a score (reject/weak reject/neutral/weak accept/accept) to a submission without having access to the other papers, so that I could compare and contrast their contributions and originality?

  • Even if I had access to the other dozens (or hundreds) of submissions, the question would still hold, as I surely would not be capable of evaluating all of them in the desired timeframe.

  • Each conference has a different impact level and overall quality of submissions, but I'm not sure whether I should use this information when reviewing.

  • I know the editor is the one responsible for accepting or rejecting a paper based on the reviews. But a demanding reviewer could still reject a paper, and because each reviewer sees only a few submissions, this would clearly harm the overall process.

1 Answer

The goal is to score papers on their own merits, not to rank them. Some conferences do have ranking systems, but these are usually applied by the committee to papers after they have been scored; just as with journal papers, it's possible to evaluate a submission on its own merits.

For example, if a paper were technically the "best of a bad lot" but still contained massive numbers of errors, poor writing, unintelligible graphics, etc., knowing the context wouldn't necessarily help: they're all bad papers.

Looking at your scoring scale, each category can be described in a manner consistent with not knowing the other papers:

  • reject: This submission isn't suitable for the venue, is just outright poor quality, etc., and is beyond salvaging.
  • weak reject: This submission isn't pathologically flawed, but would need a considerable amount of work to be appropriate for the venue.
  • neutral: The "I suppose" category. Filler submissions that could use a bit of polish, or ones that failed to rouse much of a strong feeling either way.
  • weak accept: Promising submissions that are appropriate for the venue. While not inherently flawed, there are probably improvements they could make.
  • accept: "I would like to see this accepted."

It's possible for you to get two outstanding papers and simply think both should be accepted, or two papers that should both be rejected outright, and everything in between. Trying to rank every submission to a conference would be a massive effort, and it would still be somewhat arbitrary, as your ranking scheme is not inherently an objective one. Nor are you necessarily qualified to rank each and every submission.

You should take into account the nature of the conference itself. For example, there are some conferences where I am considerably more lenient than I am for others, the same way there are journals where I use somewhat stricter criteria. And yes, a strict reviewer could torpedo otherwise worthwhile papers, but that's why many venues use more than one reviewer, and the probability that any single reviewer is strict enough to meaningfully harm the overall peer review process is pretty small.