The UK government is backing AI that can run its own lab experiments

A competition calling for research projects involving so-called AI scientists shows just how fast this technology is moving.

A number of startups and universities that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work.

ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.
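The loop ARIA describes can be sketched in a few lines. The following is an illustrative Python sketch, not code from any of the funded teams: every function here is a hypothetical placeholder standing in for a real component (an LLM for ideation, robotic lab hardware for execution, an analysis model for evaluation).

```python
# Illustrative sketch of the AI-scientist loop ARIA describes.
# All functions are placeholders, not real models or lab systems.
from dataclasses import dataclass
import random

@dataclass
class Result:
    hypothesis: str
    outcome: float
    is_conclusive: bool

def propose_hypothesis(question, findings):
    # Placeholder for an LLM: refine the question using earlier results.
    return f"{question} (attempt {len(findings) + 1})"

def design_experiment(hypothesis):
    # Placeholder: turn a hypothesis into a runnable protocol.
    return {"hypothesis": hypothesis, "samples": 10}

def run_experiment(protocol):
    # Placeholder for automated lab hardware: simulated measurements.
    return [random.random() for _ in range(protocol["samples"])]

def analyze_results(raw_data, hypothesis):
    # Placeholder analysis: summarize the data and decide whether to stop.
    mean = sum(raw_data) / len(raw_data)
    return Result(hypothesis, mean, is_conclusive=mean > 0.6)

def ai_scientist(question, max_rounds=5):
    """Run the hypothesis -> experiment -> analysis loop, feeding
    each round's results back into the next, as ARIA describes."""
    findings = []
    for _ in range(max_rounds):
        hypothesis = propose_hypothesis(question, findings)
        protocol = design_experiment(hypothesis)
        data = run_experiment(protocol)
        result = analyze_results(data, hypothesis)
        findings.append(result)  # results feed back into the next round
        if result.is_conclusive:
            break
    return findings
```

The human's role in this picture is setting `question` and reviewing `findings`; the loop itself runs unattended.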

“There are better uses for a PhD student than waiting around in a lab until 3 a.m. to make sure an experiment is run to the end,” says Ant Rowstron, ARIA’s chief technology officer.

ARIA picked 12 projects to fund from the 245 proposals, doubling the amount of funding it had intended to allocate because of the large number and high quality of submissions. Half the teams are from the UK; the rest are from the US and Europe. Some of the teams are from universities, some from industry. Each will get around £500,000 (around $675,000) to cover nine months’ work. At the end of that time, they should be able to demonstrate that their AI scientist was able to come up with novel findings.

Winning teams include Lila Sciences, a US company that is building what it calls an AI nano-scientist—a system that will design and run experiments to discover the best ways to compose and process quantum dots, which are nanometer-scale semiconductor particles used in medical imaging, solar panels, and QLED TVs.

“We are using the funds and time to prove a point,” says Rafa Gómez-Bombarelli, chief science officer for physical sciences at Lila: “The grant lets us design a real AI robotics loop around a focused scientific problem, generate evidence that it works, and document the playbook so others can reproduce and extend it.”

Another team, from the University of Liverpool, UK, is building a robot chemist, which runs multiple experiments at once and uses a vision language model to help troubleshoot when the robot makes an error.

And a startup based in London, still in stealth mode, is developing an AI scientist called ThetaWorld, which is using LLMs to design experiments on the physical and chemical interactions that are important for the performance of batteries. The experiments will then be run in an automated lab by Sandia National Laboratories in the US.

Taking the temperature

Compared with the £5 million projects spanning two or three years that ARIA usually funds, £500,000 is small change. But that was the idea, says Rowstron: It’s an experiment on ARIA’s part too. By funding a range of projects for a short amount of time, the agency is taking the temperature at the cutting edge to determine how the way science is done is changing, and how fast. What it learns will become the baseline for funding future large-scale projects.

Rowstron acknowledges there’s a lot of hype, especially now that most of the top AI companies have teams focused on science. When results are shared by press release and not peer review, it can be hard to know what the technology can and can’t do. “That’s always a challenge for a research agency trying to fund the frontier,” he says. “To do things at the frontier, we’ve got to know what the frontier is.”

For now, the cutting edge involves agentic systems calling up other existing tools on the fly. “They’re running things like large language models to do the ideation, and then they use other models to do optimization and run experiments,” says Rowstron. “And then they feed the results back round.”

Rowstron sees the technology stacked in tiers. At the bottom are AI tools designed by humans for humans, such as AlphaFold. These tools let scientists leapfrog slow and painstaking parts of the scientific pipeline but can still require many months of lab work to verify results. The idea of an AI scientist is to automate that work too.

AI scientists sit in a layer above those human-made tools and call on those tools as needed, says Rowstron. “But there’s a point in time—and I don’t think it’s a decade away—where that AI scientist layer says, ‘I need a tool and it doesn’t exist,’ and it will actually create an AlphaFold kind of tool just on the way to figuring out how to solve another problem. That whole bottom zone will just be automated.”

That’s still some way off, he says. All the projects ARIA is now funding involve systems that call on existing tools rather than spin up new ones.

There are also unsolved problems with agentic systems in general, which limit how long they can run by themselves without going off track or making errors. For example, a study, titled “Why LLMs aren’t scientists yet,” posted online last week by researchers at Lossfunk, an AI lab based in India, reports that in an experiment to get LLM agents to run a scientific workflow to completion, the system failed three out of four times. According to the researchers, the reasons the LLMs broke down included changes in the initial specifications and “overexcitement that declares success despite obvious failures.”

“Obviously, at the moment these tools are still fairly early in their cycle and these things might plateau,” says Rowstron. “I’m not expecting them to win a Nobel Prize.”

“But there is a world where some of these tools will force us to operate so much quicker,” he continues. “And if we end up in that world, it’s super important for us to be ready.”

by Will Douglas Heaven
