AAAI did an experiment this year where they supplemented human reviews with AI-generated reviews and solicited feedback from authors and the review hierarchy about the process. They've now written up the experiment.
The paper isn't too long, and I'd encourage you to read the whole thing (or, I don't know, put it into NotebookLM and make a podcast out of it!). Some interesting points stood out to me as I read the report.
The complexity of the process

The process of architecting the AI review was not the cartoonish "hey ChatGPT, review this paper for me". It was carefully structured to focus on specific elements of the review (content, readability, evaluation, setup, etc.). The system had what is now standard: a second LLM that acted as a critic and was not told where the review came from, and a third LLM that had to integrate the critic's analysis and the original review into a final review. I've heard of plenty of cases where this architecture does better than just generating a single review, or even generating a review with a judge. To be clear, the critic was only doing a 'meta review': it didn't have access to the original paper, so its role was mostly structural/formal. Does the review have all the required elements, and does it avoid things like accidental author reveals or obnoxious comments?
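To make that architecture concrete, here's a minimal sketch of the draft/critique/integrate loop as I read it from the report. This is my own reconstruction, not AAAI's code: the llm() helper, the prompts, and the element list are all assumptions, and I've left out the code-interpreter step entirely since the report doesn't say how it was wired in.

```python
# A hedged sketch of the three-stage review pipeline described in the report.
# Everything named here (llm, the prompts, REVIEW_ELEMENTS) is my own guess at
# the structure, not AAAI's actual implementation.

def llm(system: str, user: str) -> str:
    """Placeholder for a chat-completion call to whatever model provider was used."""
    raise NotImplementedError("wire up your LLM provider here")

# The report says the reviewer prompt was structured around specific review
# elements rather than a single "review this paper" instruction.
REVIEW_ELEMENTS = ("content", "readability", "evaluation", "experimental setup")

def draft_review(paper_text: str) -> str:
    """Stage 1: generate a review structured around the named elements."""
    questions = "\n".join(f"- Assess the paper's {e}." for e in REVIEW_ELEMENTS)
    return llm(
        "You are a conference reviewer. Address each point below in its own section.",
        f"{questions}\n\nPAPER:\n{paper_text}",
    )

def critique_review(review: str) -> str:
    """Stage 2: the critic sees only the review (not the paper) and is not told
    it was machine-generated, so its checks are structural/formal."""
    return llm(
        "Critique this review for completeness, tone, and accidental identity leaks.",
        review,
    )

def integrate(review: str, critique: str) -> str:
    """Stage 3: a third model merges the draft and the critique into a final review."""
    return llm(
        "Revise the review to address the critique. Output only the revised review.",
        f"REVIEW:\n{review}\n\nCRITIQUE:\n{critique}",
    )

def ai_review(paper_text: str) -> str:
    review = draft_review(paper_text)
    return integrate(review, critique_review(review))
```

Keeping the critic blind to both the paper and the review's origin is what limits it to the formal checks described above; any deeper technical vetting has to happen elsewhere in the pipeline.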
One thing that wasn't clear from the report was how exactly the LLM was checking code, experiments, theorems, and proofs, "using the code interpreter as needed". I'd want to see more details about that seemingly agentic handoff.

The perception of the results

There's a pretty dramatic signal in the survey results (and the number of responses was decent). AI-generated reviews were viewed as better than human-generated reviews along six of nine categories. Humans did better at not nitpicking and at identifying technical errors, but AI reviews did better at being thorough and at providing useful suggestions for improvement (which reminds me of www.refine.ink/).
It was interesting to see that, almost across the board, authors were more enthusiastic about the AI reviews than the reviewer hierarchy was. If I'm channeling my burnt-out AC persona, I'd say this is because authors are likely grateful to get any kind of thorough review of their paper, and man, do human reviews of papers suck.
The human-AI interaction

The survey had free-form responses that were interesting from a qualitative perspective. I think this is where the report fell down a bit, because I suspect there's a rich trove of analysis to be done on the assessments that people wrote in. A couple of the highlighted examples, though, brought home the important point that perhaps the best use of these AI reviews is before submission itself, kind of like what STOC 2026 did in their experiment. The AI reviews are great at identifying lots of small things that a friendly pre-submission review might miss, but they don't have the same kind of judgement and taste that a person has.
One minor note:
- The process cost less than $1/paper, for 30,000 submissions. That's not a bad amount to spend. But you have to wonder why reviewers can't get compensation for their work but OpenAI gets paid.
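For a sense of scale, the quoted figures pencil out as follows. The per-paper number is the report's stated upper bound; the total is just multiplication.

```python
# Back-of-envelope from the report's numbers: under $1/paper for ~30,000 submissions.
papers = 30_000
cost_per_paper_usd = 1.00  # upper bound quoted in the report
print(f"Total AI-review spend: under ${papers * cost_per_paper_usd:,.0f}")
# -> Total AI-review spend: under $30,000
```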
By Suresh Venkatasubramanian