AI research has a slop problem, academics say: 'It's a mess'

Claims by a single person to have authored 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at the world's leading conference on AI and machine learning, have raised questions among computer scientists about the state of AI research.

The author, Kevin Zhu, holds a bachelor's degree in computer science from the University of California, Berkeley, and now runs Algoverse, an AI research and mentoring company for high school students – many of whom he co-authors papers with. Zhu himself graduated from high school in 2018.

The papers he has put out over the past two years have covered topics such as using AI to find nomadic herders in sub-Saharan Africa, evaluate skin lesions, and translate Indonesian dialects. On his LinkedIn, he claims to have published "100+ top conference papers in the past year", which have been "cited by OpenAI, Microsoft, Google, Stanford, MIT, Oxford, and others".

Hany Farid, a professor of computer science at Berkeley, said in an interview that Zhu’s papers are a “disaster.” “I’m fairly convinced that the whole thing, top to bottom, is just vibe coding,” he said, referring to the practice of using AI to create software.

Farid recently drew attention to Zhu's prolific publications in a LinkedIn post, which provoked discussion of other similar cases among AI researchers, who said their newly popular discipline was facing a flood of low-quality research papers, driven by academic pressure and, in some cases, AI tools.

In response to a question from the Guardian, Zhu said he had supervised the 131 papers, which were a "team effort" run by his company, Algoverse. The company charges high school students and graduate students $3,325 for a selective 12-week online mentoring program – which includes assistance presenting work at conferences.

“At a minimum, I help review the methodology and experimental design in proposals, and I read and comment on full paper drafts before submission,” he said, adding that projects on topics such as linguistics, health care or education tend to involve “principal investigators or advisors with relevant expertise.”

The teams “used standard productivity tools such as reference managers, spell checking, and sometimes language models for copy-editing or improving clarity,” he said in response to a question about whether the papers were written with AI.

Reviewers are in an uproar

Review standards for AI research differ from those of most other scientific fields. Most work in AI and machine learning does not go through the rigorous peer-review processes of fields like chemistry and biology – instead, papers are often presented less formally at major conferences such as NeurIPS, one of the world's top machine learning and AI gatherings, where Zhu is scheduled to present.

Zhu's case points to a larger issue in AI research, Farid said. The number of submissions to conferences, including NeurIPS, is ballooning: NeurIPS received 21,575 paper submissions this year, up from about 10,000 in 2020. Another top AI conference, the International Conference on Learning Representations (ICLR), reported a roughly 70% increase in submissions for its 2026 conference, to nearly 20,000 papers, up from about 11,000 for the 2025 conference.

"Reviewers are complaining about the poor quality of papers, even suspecting some are AI-generated. Why has this academic feast lost its flavor?" asked the Chinese tech blog 36Kr in a November post about ICLR, noting that the average score reviewers assigned to papers has declined year on year.

Meanwhile, students and academics are facing increasing pressure to rack up publications and keep up with their peers. Academics said it is unusual to produce double-digit numbers – much less triple digits – of high-quality academic computer science papers in a year. Farid said some of his students have "vibe coded" papers to pad their publication counts.

“So many young people want to get into AI. There’s a craze right now,” Farid said.

NeurIPS reviews the papers submitted to it, but its process is far faster and less intensive than standard scientific peer review, said Jeffrey Walling, an associate professor at Virginia Tech. This year, the conference relied on a large number of PhD students to scrutinize papers, which a NeurIPS area chair said compromised the process.

"The reality is that conference referees often have to review dozens of papers in a short period of time, and there is usually little or no revision," Walling said.

Walling agreed with Farid that there are simply too many papers being published, saying that he has encountered other authors with more than 100 publications in a year. "Academics are rewarded for quantity of publications more than quality … Everyone loves the myth of super-productivity," he said.

Algoverse's FAQ page discusses how the company's program can help applicants' future college or career prospects: "The skills, accomplishments, and publications you gain here are highly regarded in academic circles and can really strengthen your college application or resume. This is especially true if your research is accepted at a top conference – a prestigious accomplishment even for professional researchers."

Farid says he now advises students not to go into AI research, because there is a “craze” in the field and a large amount of low-quality work is being done by people hoping to improve their career prospects.


"It's just a mess," he said. "You can't keep up, you can't publish, you can't do good work, you can't be thoughtful."

A flood of slop

Much excellent work has still emerged from this process. Famously, Google's paper on transformers, "Attention Is All You Need" – the theoretical basis of the advances in AI that led to ChatGPT – was presented at NeurIPS in 2017.

NeurIPS organizers agree that the conference is under pressure. In a comment to the Guardian, a spokesperson said that the growth of AI as a field had brought a "significant increase in paper submissions and an increase in the value placed on peer-reviewed acceptance at NeurIPS", placing "considerable pressure on our review system".

NeurIPS organizers said Zhu’s presentations were primarily for workshops within NeurIPS, which have a different selection process than the main conference and often present early-career work. Farid said he did not find any convincing explanation for one person putting his name on more than 100 papers.

"There doesn't seem to me to be a solid argument for putting your name on 100 papers to which you couldn't possibly have made a meaningful contribution," Farid said.

The problem is bigger than the flood of papers at NeurIPS. According to a recent article in Nature, ICLR used AI to review large numbers of submissions – resulting in apparently hallucinated citations and feedback that was "very verbose with lots of bullet points".

The sense of decline is so widespread that finding a solution to the crisis has become the subject of papers itself. A May 2025 position paper – an academic, evidence-based version of a newspaper op-ed – written by three South Korean computer scientists proposed solutions to the "unprecedented challenges with the growth of paper submissions, growing concerns over review quality and reviewer responsibility". It won an award for outstanding work at the 2025 International Conference on Machine Learning.

Meanwhile, Farid says, major tech companies and smaller AI safety organizations are now dumping their work on arXiv, a site once reserved for unreviewed preprints of math and physics papers, flooding the internet with work that is presented as science but is not subject to peer-review standards.

The price of this, Farid says, is that it’s almost impossible to know what’s really going on in AI – for journalists, the public, and even experts in the field: “As an average reader you have no chance of trying to understand what’s going on in the scientific literature. Your signal-to-noise ratio is basically one. I can barely go to these conferences and figure out what the hell is going on.”

"I tell students that, if you're trying to optimize for paper publishing, you know, it's really not that hard to do. Just do really shoddy, low-quality work and bomb conferences with it. But if you want to do really thoughtful, careful work, you're at a disadvantage, because you've effectively unilaterally disarmed," he said.
