
This article argues that extended cognition (particularly in discussions of AI) is not merely a theoretical matter but carries significant ethical implications. The author reflects on how the moral considerations he initially resisted ultimately offer a more accurate and more humane understanding of technology's role.


Mcauldronism's Substack

Article 6: It's Time to Talk About Ethics


The Cringe

I was listening to Andy Clark on Sean Carroll’s Mindscape podcast.

They were deep into extended cognition. The notebook. The phone. The whole “where does the mind end?” question that I’ve been writing about for weeks.

And then Clark said something that made me cringe.

He talked about ethics. About how there’s a moral dimension to how we think about extended cognition. About how the choices we make here aren’t just theoretical — they’re ethical.

My first reaction: No, Andy. Don’t do this.

I loved the theory. It was clean. Elegant. Philosophical. Adding morals felt like it would make everything... woolly. Soft. Less rigorous.

I wanted extended cognition to be a matter of fact, not a matter of values.

The Click

But then, over the following months, something clicked.

He was right.

It really is a moral judgement.

Not just an intellectual one. A moral one.

The Wheelchair

Think about someone in a wheelchair.

They move through the world. They go places. They navigate, travel, arrive.

Are they mobile?

Of course they are.

Now imagine someone who says: “Well, technically, they’re not mobile. The wheelchair is mobile. They’re just sitting in it.”

How do we feel about that person?

We don’t think they’re making a clever philosophical point. We think they’re being an asshole.

Because the wheelchair IS part of how that person moves. Denying that isn’t rigorous. It’s cruel.

Otto

Back to Otto — the man from Clark & Chalmers’ original 1998 paper.

Otto has Alzheimer’s. He uses a notebook to remember things. When he needs to know where the museum is, he looks in his notebook, finds the address, and goes there.

Clark & Chalmers argued: that notebook is part of Otto’s memory. Functionally, practically, meaningfully — it’s how he remembers.

Now imagine someone who says: “Otto doesn’t really remember where the museum is. The notebook remembers. Otto just reads it.”

Again — how do we feel about that person?

We don’t think they’re being precise. We think they’re being pedantic at best. Ableist at worst.

Otto remembers with his notebook. The notebook is part of his cognitive system. Denying that isn’t philosophy. It’s gatekeeping.

The Judgement

This is the moral dimension Clark was talking about.

We get to choose how we see this.

We can look at someone using a tool — a wheelchair, a notebook, a phone, an AI — and say: “That’s not really you doing that.”

Or we can recognise that humans have always extended themselves through tools. That this is what we do. That this is what we ARE.

The choice between those two responses isn’t a matter of fact.

It’s a matter of values.

And Now: AI

Which brings us to the question we’re all going to have to answer very soon.

If someone achieves something great with AI — something they couldn’t have done alone — how do we feel about that?

Do we say: “That wasn’t really you. The AI did it.”

Or do we say: “You thought with your tools. You extended your cognition. You made something.”

The Honest Answer

I’ve written these articles with Claude.

Not “Claude wrote them and I pressed publish.” But not “I wrote them alone” either.

I thought with Claude. I extended my cognition into a new kind of tool. The ideas are mine. The arguments are mine. The logic is mine. The words emerged from a process that included both of us.

Is that “real” writing?

Is Otto’s notebook “real” memory?

Is a wheelchair “real” mobility?

The Choice

You get to decide.

But know that the decision you make isn’t just about AI.

It’s about how you see human beings. What you think we are. What you think counts.

Andy Clark was right. There’s ethics here.

And we’re going to have to face it — not as a thought experiment, but as a daily reality.

Someone is going to make something beautiful with AI.

Someone is going to solve a problem that matters.

Someone is going to create something that moves you.

And in that moment, you’ll make a choice.

Will you be the person who says “that doesn’t count”?

Or will you recognise what extended cognition has always meant:

That the mind was never trapped in the skull.

That we have always thought with our tools.

That cognition was never solo.

[Part 1: Where Do You End?] [Part 2: The Maintenance Cost is Zero] [Part 3: The Cauldron in the Spectrogram] [Part 4: UNRELEASED] [Part 5: The Case Against Code]

