Autonomous research with large language models

  • Thread starter Astronuc
  • #1
Astronuc
Staff Emeritus
Science Advisor
2023 Award
I made the title generic, but it comes from an article: Autonomous chemical research with large language models
https://www.nature.com/articles/s41586-023-06792-0

From the abstract: "... we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research."

From the article:
"In this work, we present a multi-LLMs-based intelligent agent (hereafter simply called Coscientist) capable of autonomous design, planning and performance of complex scientific experiments. Coscientist can use tools to browse the internet and relevant documentation, use robotic experimentation application programming interfaces (APIs) and leverage other LLMs for various tasks. This work has been done independently and in parallel to other works on autonomous agents [23,24,25], with ChemCrow [26] serving as another example in the chemistry domain. In this paper, we demonstrate the versatility and performance of Coscientist in six tasks: (1) planning chemical syntheses of known compounds using publicly available data; (2) efficiently searching and navigating through extensive hardware documentation; (3) using documentation to execute high-level commands in a cloud laboratory; (4) precisely controlling liquid handling instruments with low-level instructions; (5) tackling complex scientific tasks that demand simultaneous use of multiple hardware modules and integration of diverse data sources; and (6) solving optimization problems requiring analyses of previously collected experimental data."
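For a rough picture of what such an agent looks like structurally, here is a minimal sketch of the "planner LLM plus tools" loop the quoted passage describes. All of the names, tools and canned replies in it are invented for illustration; it is not Coscientist's actual code.

Python:
# Illustrative sketch only -- not Coscientist's actual code.  It shows the
# general "planner LLM plus tools" pattern the paper describes: the model
# proposes an action, the framework runs the matching tool (web/docs search,
# code execution, robot API), and the result is fed back until the task ends.

from typing import Callable

def web_search(query: str) -> str:
    """Placeholder for an internet/documentation search tool."""
    return f"(search results for: {query})"

def run_python(code: str) -> str:
    """Placeholder for a sandboxed code-execution tool."""
    return "(stdout of the executed code)"

def robot_api(command: str) -> str:
    """Placeholder for a robotic-experimentation API call."""
    return f"(hardware acknowledged: {command})"

TOOLS: dict[str, Callable[[str], str]] = {
    "SEARCH": web_search,
    "PYTHON": run_python,
    "EXPERIMENT": robot_api,
}

# Canned replies standing in for the planner model (e.g. GPT-4).
_SCRIPT = [
    "SEARCH: Suzuki coupling of bromobenzene with phenylboronic acid",
    "EXPERIMENT: dispense 10 uL of catalyst solution into plate well A1",
    "FINISH: synthesis plan executed",
]

def call_llm(history: list[str]) -> str:
    """Placeholder for the planner LLM; a real system would send `history`
    to a chat-completion endpoint and return the model's reply."""
    step = (len(history) - 1) // 2           # one action + one observation per step
    return _SCRIPT[min(step, len(_SCRIPT) - 1)]

def agent(task: str, max_steps: int = 10) -> str:
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = call_llm(history)                 # the LLM decides the next action
        if reply.startswith("FINISH:"):
            return reply                          # task declared complete
        name, _, arg = reply.partition(":")       # e.g. "SEARCH: Suzuki coupling"
        tool = TOOLS.get(name.strip())
        result = tool(arg.strip()) if tool else f"unknown tool: {name}"
        history.append(reply)
        history.append(f"OBSERVATION: {result}")  # feed the result back to the LLM
    return "stopped: step limit reached"

if __name__ == "__main__":
    print(agent("optimize a palladium-catalysed cross-coupling"))

The point of the sketch is only the control flow: the language model never touches hardware directly; it emits text that the surrounding framework maps onto vetted tools, which is also what makes the behaviour loggable and explainable.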

This is so new that Google has no references to it.

My institution is heavily into AI/ML for 'doing science' and enhancing/promoting innovation.

I expect that in the near term, humans will still be needed to write the rules. AI will become more autonomous when it can write the rules itself and manipulate digital systems and robotics.
 
  • #2
Likely true. I've heard of one experimental system where the AI self-corrects running code when an error occurs. Imagine what a leap forward that would be: no need to test prior to release; simply run trials, the code corrects itself, the failure rate drops below some agreed-upon level, and then it becomes a product.
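I don't know the details of that experimental system, but the general idea might look something like the sketch below: run the code, catch the failure, hand the traceback to the model, and retry with the proposed fix. Everything here (the function names, the toy bug and its "fix") is invented for illustration.

Python:
# Rough sketch of the self-correcting-code idea -- purely illustrative, not
# any particular product.  When the candidate code raises an exception, the
# traceback is handed to an LLM (stubbed out here), which proposes a revised
# version; the loop retries until the code runs cleanly or the attempt
# budget is exhausted.

import traceback

def propose_fix(source: str, error: str) -> str:
    """Placeholder for an LLM call that returns repaired source code."""
    # A real system would prompt a model with the failing code and the traceback.
    return source.replace("1 / 0", "1 / 1")   # toy "fix" so the demo below converges

def run_with_self_repair(source: str, max_attempts: int = 3) -> bool:
    for attempt in range(1, max_attempts + 1):
        try:
            exec(source, {})                   # run the candidate code
            print(f"attempt {attempt}: success")
            return True
        except Exception:
            error = traceback.format_exc()
            print(f"attempt {attempt}: failed, requesting a fix")
            source = propose_fix(source, error)
    return False

if __name__ == "__main__":
    run_with_self_repair("result = 1 / 0\nprint('result =', result)")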

I know that years ago IBM mainframes had memory chips that, when a memory error occurred, would reconfigure to disable the section that failed. At the time it was clever electronics, but in the future it could be much more.

It looks like the Coscientist system could be headed toward drug discovery and testing.

While searching for Coscientist vs. Copilot, I found this link:

https://engineering.cmu.edu/news-events/news/2023/12/20-ai-coscientist.html
 
