
Japanese startup Sakana says its AI generated one of the first peer-reviewed scientific publications. While the claim isn't untrue, there are significant caveats to note.
The debate around AI and its role in the scientific process grows more intense by the day. Many researchers don't believe AI is quite ready to serve as a "co-scientist," while others think the potential is there but acknowledge it's early days.
Sakana falls into the latter camp.
The company says it used an AI system called The AI Scientist-V2 to generate a paper that Sakana then submitted to a workshop at ICLR, a long-running and reputable AI conference. Sakana claims that the workshop's organizers, as well as ICLR's leadership, agreed to work with the company on an experiment to double-blind review AI-generated manuscripts.
Sakana said it collaborated with researchers at the University of British Columbia and the University of Oxford to submit three AI-generated papers to the workshop for peer review. The AI Scientist-V2 generated the papers "end-to-end," Sakana claims, including the scientific hypotheses, the experiments and experimental code, the data analyses, the visualizations, the text, and the titles.
"We generated research ideas by providing the AI with the workshop abstract and description," Robert Lange, a research scientist and founding member at Sakana, told TechCrunch. "This ensured that the generated papers were on topic and suitable submissions."
One of the three papers was accepted to the ICLR workshop, a paper that casts a critical lens on training techniques for AI models. Sakana said it immediately withdrew the paper before it could be published, in the interest of transparency and respect for ICLR conventions.

"The accepted paper both introduces a new, promising method for training neural networks and shows that empirical challenges remain," Lange said. "It provides an interesting data point to spark further scientific investigation."
But the achievement isn't as impressive as it might seem at first glance.
In a blog post, Sakana admits that its AI occasionally made "embarrassing" citation errors, for example incorrectly attributing a method to a 2016 paper instead of the original 1997 work.
Sakana's paper also didn't undergo as much scrutiny as some other peer-reviewed publications. Because the company withdrew it after the initial peer review, the paper never received an additional "meta-review," at which point the workshop organizers could in theory have rejected it.
Then there's the fact that acceptance rates for conference workshops tend to be higher than acceptance rates for the main "conference track," a fact Sakana candidly mentions in its blog post. The company says that none of its AI-generated studies passed its internal bar for publication in the ICLR conference track.
Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, called Sakana's results "a bit misleading."
"The Sakana folks selected the papers from some number of generated papers, meaning they were using human judgment in terms of picking outputs they thought might get in," he said via email. "What I think this shows is that humans plus AI can be effective, not that AI alone can create scientific progress."
Mike Cook, a research fellow at King's College London specializing in AI, questioned the rigor of the peer reviewers and the workshop itself.
"New workshops like this one are often reviewed by more junior researchers," he told TechCrunch. "It's also worth noting that this workshop is about negative results and difficulties, which is great, I've run a similar workshop before, but it's arguably easier to get an AI to write convincingly about a failure."
Cook added that he wasn't surprised an AI could pass peer review, considering that AI excels at writing human-sounding prose. Partly AI-generated papers passing journal review isn't even new, Cook pointed out, nor are the ethical dilemmas this poses for science.
AI's technical shortcomings, such as its tendency to hallucinate, make many scientists wary of endorsing it for serious work. Moreover, experts fear that AI could simply end up generating noise in the scientific literature rather than elevating progress.
"We need to ask ourselves whether [Sakana's] result is about how good AI is at designing and conducting experiments, or about how good it is at selling ideas to humans, which we know AI is great at already," Cook said.
To be fair to Sakana, the company doesn't claim that its AI can produce groundbreaking, or even especially novel, scientific work. Rather, the goal of the experiment was to "study the quality of AI-generated research," the company said, and to highlight the urgent need for "norms regarding AI-generated science."
"[T]here are difficult questions about whether [AI-generated] science should be judged first on its own merits, to avoid bias against it," the company wrote. "Going forward, we will continue to exchange opinions with the research community on this technology, to ensure that it does not develop into a situation where its sole purpose is to pass peer review, thereby substantially undermining the meaning of the scientific peer review process."