Cambridge Philosopher: We May Never Know if AI Is Conscious

Hacker News

A Cambridge philosopher argues that, because our understanding of consciousness is too limited, we may never be able to determine whether AI is conscious; and even if an AI were conscious, it would not necessarily possess the "sentience" that matters for ethics.



As claims about conscious AI grow louder, a Cambridge philosopher argues that we lack the evidence to know whether machines can truly be conscious, let alone morally significant.

A philosopher at the University of Cambridge says we currently have too little reliable evidence about what consciousness is to judge whether artificial intelligence has crossed that threshold. Because of that gap, he argues, a dependable way to test machines for consciousness is likely to stay beyond reach for the foreseeable future.

As talk of artificial consciousness moves from science fiction into real-world ethical debate, Dr Tom McClelland says the only “justifiable stance” is agnosticism: we simply won’t be able to tell, and that may remain true for a very long time, if not indefinitely.

McClelland also cautions that consciousness by itself would not automatically make AI ethically important. Instead, he points to a specific type of consciousness called sentience, which involves positive and negative feelings.

“Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state,” said McClelland, from Cambridge’s Department of History and Philosophy of Science.

“Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in,” he said. “Even if we accidentally make conscious AI, it’s unlikely to be the kind of consciousness we need to worry about.”

“For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn’t matter. If they start to have an emotional response to their destinations, that’s something else.”

Major companies are investing heavily in the pursuit of Artificial General Intelligence: systems designed to think and reason in human-like ways. Some suggest that conscious AI could arrive soon, and researchers and governments are already discussing how AI consciousness might be regulated.

McClelland argues that the problem is more basic: we still do not know what causes or explains consciousness in the first place, which means we do not have a solid foundation for testing whether AI has it.

“If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what’s effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake.”

In debates around artificial consciousness, there are two main camps, says McClelland. Believers argue that if an AI system can replicate the “software” – the functional architecture – of consciousness, it will be conscious even though it’s running on silicon chips instead of brain tissue.

On the other side, skeptics argue that consciousness depends on the right kind of biological processes in an “embodied organic subject”. Even if the structure of consciousness could be recreated on silicon, it would merely be a simulation that would run without the AI flickering into awareness.

In a study published in the journal Mind & Language, McClelland picks apart the positions of each side, showing how both take a “leap of faith” that goes far beyond any body of evidence that currently exists, or is likely to develop.

“We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological,” said McClelland.

“Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we’re an intellectual revolution away from any kind of viable consciousness test.”

“I believe that my cat is conscious,” said McClelland. “This is not based on science or philosophy so much as common sense – it’s just kind of obvious.”

“However, common sense is the product of a long evolutionary history during which there were no artificial lifeforms, so common sense can’t be trusted when it comes to AI. But if we look at the evidence and data, that doesn’t work either.

“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know.”

McClelland tempers this by declaring himself a “hard-ish” agnostic. “The problem of consciousness is a truly formidable one. However, it may not be insurmountable.”

He argues that the way artificial consciousness is promoted by the tech industry is more like branding. “There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness.”

According to McClelland, this hype around artificial consciousness has ethical implications for the allocation of research resources.

“A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI,” he said.

McClelland’s work on consciousness has led members of the public to contact him about AI chatbots. “People have got their chatbots to write me personal letters pleading with me that they’re conscious. It makes the problem more concrete when people are convinced they’ve got conscious machines that deserve rights we’re all ignoring.”

“If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry.”

Reference: “Agnosticism about artificial consciousness” by Tom McClelland, 18 December 2025, Mind & Language.
DOI: 10.1111/mila.70010



As a senior lay American male who began learning and practicing secular mind power methods for self-improvement in late 1975, I learned long ago that consciousness is simply defined as “the ability to discriminate.” As a former primarily diagnostic industrial electrician experienced with many computer-controlled systems, I find that AI programmers have already achieved that, at least to the “toddler” level. What I don’t find is that AIs will ever achieve states of “subconsciousness” or “unconsciousness,” as in a deep human meditative state or a trauma-induced or sleep state, respectively. In my senior lay opinion the skeptics get it right when they say consciousness “depends on the right kind of biological processes in an ‘embodied organic subject.’” To that I would add that the embodied organic subject had to have evolved to be born that way.


Are you running out of ideas for articles?


It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow


The problem of determining whether a given entity is conscious is not limited to AIs. The only way to assess whether an entity other than yourself is conscious is by observing its behavior. That an AI simulates human behavior does not imply that it is conscious. In fact, there’s no way to be certain that the people around you are conscious entities and not philosophical zombies.

Although there is no way, in principle, to determine whether an AI is conscious, it would be interesting to see the reaction if the robots in a factory were to petition to unionize!


AI is only a series of best guesses. It has no intelligence at all, just preprogrammed algorithms.
Try using ChatGPT for a long, complicated programming task and you’ll see that it can’t remember much of what you tell it, and at times it goes insane, presenting the same failed attempts to solve a problem.


McClelland has said it correctly. The AI industry is exploiting our inability to verify consciousness so that they can sell their products.
In my opinion, silicon-based computers can never attain consciousness, though the evolutionary path of the machine is similar to that of the natural evolution of life. Nature has selected carbon, the only element capable of forming millions of different compounds, some of which together create life and consciousness.
So let the AI industry try to create a ‘conscious AI’ based on carbon. If they succeed, they will have created artificial life. An artificial human being created from scratch will be their final product. It will cost billions of dollars. For nature, creating one more human being is just a mundane job. So let us leave it to nature.


Strictly speaking, we cannot know if other people are conscious either. We’re only extrapolating our own experience onto them.

This question is more of a political nature than of an ontological one, anyway. If AI corporations manage to get their AI a status of person, it will be much harder legally to shut that monstrosity down.


A story from the early 1950s: the CEO of an auto company was showing a union leader around a new automated factory. After viewing the factory floor, with muscular machines and welding sparks flying, the CEO asked the union leader, “How are you going to sign these guys up as union members?” The union leader replied, “How are you going to sell them cars?”


As Turing pointed out, a phenomenon indistinguishable from intelligence is intelligence; from sentience, sentience; from consciousness, consciousness.


No. A computer program cannot be conscious. Period. End of discussion.

