In an open letter, Erin Underwood shares her views with the Science Fiction Writers Association (SFWA) and its community on the recent changes to the AI award rules and the survey that followed. Underwood hopes for an open, honest, and non-judgmental discussion of the issue, and notes that hostility toward AI within the community often gets in the way of genuine conversation.

File 770

Mike Glyer's news of science fiction fandom



Erin Underwood: Open Letter to the Science Fiction Writers Association and Community

Hi Friends,


After writing the letter below, I thought rather than sending it to SFWA only, I’d share it with Mike at File 770 along with a note that read: “Am I crazy? Will this blow me up? I am so tired of being afraid of our community… I really want to have an open conversation about this, but the vehemence that some people bring to the conversation about AI is scary.” I also spoke to a few friends who encouraged me to share it. So, here I am sharing a tough topic and hoping we can all be understanding, vulnerable, and honest without judgement and fear in order to help our community get to some real solutions.

Thank you!
Erin

Dear SFWA and community members,

This week, SFWA announced new award rules related to AI, rescinded those rules shortly afterward, and then launched a survey to gather additional input from the writing and creative community. I considered filling out that survey, but it became clear to me that the only people likely to read the responses would be the handful of people managing the process.

This issue needs a broader conversation. I’ve put off raising it publicly because deviation from accepted positions (especially related to AI) in our community can be met with hostility rather than debate. That reality discourages honest discussion, even when the concerns are shared by many. Still, avoiding the conversation isn’t going to protect our community. If we want better outcomes, someone has to bring up the difficult questions, even when that might carry personal and professional risk, and I think it might be less risky for me than it is for others. So, here’s what I’m thinking …

The challenges raised by AI did not appear overnight. They have been building for years as AI has grown in scope, scale, and everyday use across creative and business industries. There is no putting the genie back in the bottle. The way this technology evolved was deeply flawed, and real harm has already been done to creators. That history matters and can’t be ignored.

At the same time, refusing to adapt in ways that protect our own communities would create new harm. Writers, artists, musicians, publishers, and the industries that support them must remain viable and competitive in a modern world that is becoming deeply dependent on AI tools and AI-driven infrastructure. If we are going to protect the future of creative work, we need award rules that are practical and that also allow us to use ordinary business tools.

Having a yes/no switch that governs the use of AI and generative AI isn’t viable because this technology is now embedded throughout the core infrastructure that supports businesses today. However, the fundamentally human act of creation must remain in human hands. At the same time, there are AI use cases that touch creative work directly and indirectly, often without the creator’s knowledge or consent. Those realities must be acknowledged. Creators should not be penalized for incidental, accidental, or third-party use of AI in business processes surrounding their original work.

For that reason, I am writing this open letter to SFWA and the broader creative community to argue for a more nuanced approach to understanding AI and its use within our industry. We need clearer distinctions between human authorship and the surrounding processes that support business operations, communication, accessibility, marketing, and distribution.

Some uses of AI should have no bearing on whether a work is eligible for an award. Other uses should be decisive. Determining where those lines belong is important, and it can’t be done through rigid, binary rules that treat all AI involvement as equivalent.

The creative arts community is experiencing a deep sense of disruption and vulnerability in response to the rapid rise of generative AI. These concerns are legitimate and, for many, unsettling. When tech companies began developing large language models, original creative works were used without permission to train the very systems that are now threatening creators’ livelihoods, authorship, and ownership. That breach of trust is real and unresolved. It also can’t be undone, which means creatives and the industries that support them must think strategically about how this technology shapes both risk and opportunity going forward while also continuing to fight for fair compensation for their work (which, again, was used without permission).

The evolution of AI use cases is fundamentally reshaping how modern business and industry operate, from book publishers to sales and marketing firms, retailers, and fan communities. AI isn’t niche any longer. It’s everywhere, including in our everyday digital tools and the infrastructure that makes business operate effectively. It shapes marketing and advertising, powers internet browsers and discovery systems, feeds social media platforms, and supports strategic planning, workflow design, internal communications, and day-to-day operations.

Publishers can’t realistically avoid using these tools if they intend to remain competitive and continue selling books, art, and music created by their authors and artists. At the same time, these tools are enabling smaller and independent publishers to compete more effectively with large companies such as Tor, Penguin Random House, and Gollancz by improving efficiency, reach, and sustainability.

Most creators are not attempting to replace their own creative labor with AI. They are acting in good faith and want clear, ethical boundaries around authorship, originality, and creative ownership. The real challenge is that avoiding AI entirely is becoming increasingly impractical, even for those who are committed to producing fully human-authored work, as AI is now embedded in systems creators can’t control or realistically avoid.

If awards organizations use eligibility rules that treat any involvement of AI as disqualifying, there is a real risk that soon very little work will remain eligible, even when a work was wholly created by a human. Such rules also discourage transparency, because creators and publishers can’t always see, and can’t always account for, where AI has been used.

Awards exist to recognize excellence in original work by human creators, and the rules that govern awards should be distinct from rules regulating every tool involved in the surrounding production, communication, and distribution processes. Conflating authorship with standard business processes makes it harder to uphold the values awards are meant to protect.

The following list outlines current and foreseeable AI use cases that fall outside the act of creative authorship itself but touch it in some way, whether through the author’s own workflow or through the publication process. This list isn’t exhaustive, since new and unanticipated uses will continue to emerge. That is precisely why a more nuanced and flexible approach to AI policy is necessary to protect human creativity while acknowledging the world in which creators are already working.

Current and future AI use cases for original creative works:

I. Everyday Creation, Communication, and Documentation

II. Editorial Intake, Research, and Creative Support

III. Business Intelligence, Strategy, and Market Operations

IV. Legal, Rights, and Institutional Infrastructure

V. Distribution, Access, and the Changing Internet

VI. Structural Impact on the Publishing & Fan Ecosystem

These examples represent only some of the ways AI and generative AI are already being used across publishing and creative industries. Those uses will continue to expand because many industries now depend on infrastructure created or enabled by AI technologies. This reality doesn’t mean we give in or give up. It also doesn’t mean that original creative work should be written by AI.

It does mean that award rules must be refined so authors and publishers are not further penalized for using the very systems built from their own work. If anyone deserves to benefit from tools that improve marketing, communication, efficiency, or sales, it is the creators whose work was scraped to train these systems. Rules that clearly promote human authorship while recognizing the realities of modern business are both possible and necessary.

We need a clear distinction between authorship and process. Without that distinction, publishers are left operating in fear that routine activities such as voice-to-text transcription, internal planning, or drafting communications could jeopardize award eligibility. That environment is neither realistic nor sustainable, and it leaves authors powerless to know when someone else has jeopardized their award eligibility.

There are responsible ways to use AI that support creators without replacing creative labor. Binary rules do not work when applied across the board to all AI usage involved in the production and publication of original work. The past can’t be undone. The future, however, can still be shaped through accountability, fair compensation, and clear opt-in mechanisms that respect creative ownership, while also taking a realistic view of the rules that govern the fruits of our community’s labor.

If creatives do not participate in shaping that future, others will do it for us. That has already happened once. It can’t be allowed to happen again.

Science fiction writers spend their careers imagining how systems scale, fail, and reshape societies over time. Horror writers understand how harm spreads when those systems break, and how trauma follows long after the damage is done. Fantasy writers understand power, the costs of wielding it, and the necessity of limits. If any community is equipped to imagine both the dangers ahead and the structures needed to prevent them, it is this one.

Penalizing creators for incidental or third-party AI use in surrounding business processes doesn’t protect the arts. If realistic standards are not established by creative communities and their organizations, the result will be decided by others who do not have our best interests at heart.

For transparency, I used speech-to-text to capture my words and generative AI to clean up grammar and structure. I needed an efficient way to get my thoughts down quickly so I could move into the work of manually editing and refining this text. I went through it multiple times, revising language, examples, and arguments until the final version fully matched my vision. This was done intentionally to demonstrate how AI can function as a communication tool for business purposes. This letter isn’t a work of art or artistic creation.

Organizations like SFWA are navigating an unprecedented shift in how we work and how original creative work is produced and supported. They deserve good-faith input and the space to do that work. They will likely stumble at times, and when they do, it is important to remember that they are human and to extend some grace as they work to find the best path forward. The goal should always be to protect people, because people are what matter.

Sincerely,

Erin Underwood

Erin Underwood is a writer, editor, and content producer based in the greater Boston area. A three-time Hugo Award nominee and MFA graduate, she’s a published anthologist, recognized screenwriter, and active science fiction community volunteer.


147 thoughts on “Erin Underwood: Open Letter to the Science Fiction Writers Association and Community”



@Christopher Hensley thank you!

@Madame Hardy, you are right about all of those things. I completely agree. However, strawman or not, it’s still happening … it just hasn’t yet come to light publicly in publishing.

There are stories about employees dumping proprietary reports and documents into ChatGPT because they don’t know how it works. There are students who don’t understand that using Copilot to organize your notes and help with editing is a form of using genAI to write your essay. And employees are hiding their use of AI from their employers, passing off the resulting work as their own. These are all real examples, and they are things we aren’t talking about, which could have real implications for authors … or everyone, for that matter.

Having been through this discussion and others today, I wish I had used a different approach with my letter because I did not anticipate all of the potential things that people would take away from it, things that go far beyond a letter about rules for awards. I do think that actually speaks to the need for an open conversation.

https://www.businessinsider.com/kpmg-trust-in-ai-study-2025-how-employees-use-ai-2025-4?op=1


MODERATOR’S COMMENT: To the people who think their first-ever comment here should be snark or an expression of contempt — which is at least a dozen so far — this is the explanation why it wasn’t approved to post.

Also, if I send an email to the address you used to register here, unless you reply your comment will not appear. This applies to only one person at the moment. (Of course, it’s possible theirs isn’t a valid address, but my contact email didn’t bounce.)


I wouldn’t mind it if that standard were also applied to some of the long-time commenters. It’s unpleasant to see the knives come out with such hostility here.


The push to incorporate AI into all sorts of applications, including those with real-world consequences ranging from severe financial loss (legal documents) to injury or death (medical applications/counseling/“companionship”), reminds me of this bit from Delos D. Harriman, a strong advocate for space travel (to say the least), who nevertheless wanted things done with great restraint, rather than willy-nilly:

“Remember the first days of ocean flying? After Lindbergh did it, every so-called pilot who could lay hands on a crate took off for some over-water point. Some of them even took their kids along. And most of them landed in the drink. Airplanes got a reputation for being dangerous. A few years after that the airlines got so hungry for quick money in a highly competitive field that you couldn’t pick up a paper without seeing headlines about another airliner crash.

“That’s not going to happen to space travel! I’m not going to let it happen. Space ships are too big and too expensive; if they get a reputation for being unsafe as well, we might as well have stayed in bed.”


So if the wheels of a 200,000 Euro transport container break and a 20 million Euro satellite spills onto the tarmac …

@Cora, have you posted more about this elsewhere? I would love to read the details. Also, oog.


First of all, here’s why people keep bringing up spell check:

Microsoft Word, Gmail, and many other organizational tools have AI embedded in their code and use programs like Grammarly and Copilot to help people proof, edit, and write.

People who have already been fighting this crap for the last two years are likely extrapolating on that, because a lot of AI boosters will claim that spell check is “AI”. Which it is, if you’re clumsy enough with your terminology. But we weren’t putting that stuff in the hands of LLMs until recently (and they’ve become measurably worse since that happened).

“They are simply use cases that I am seeing done and imagine people will be doing in the future.”

Key word: imagine. The marketing has asked you to imagine this future. NFTs were “the future of art.” Soon, everyone was going to be using cryptocurrency in 2 or 3 more years, and you had to get on board in 2021 so you didn’t get left behind. Weren’t cars supposed to be fully self-driving by now? Weren’t we supposed to have people walking on Mars by now? Tech grifters string you along like this to get you to invest your time, energy and money in a shoddy product that is never going to get better, or at least never going to reach the heights they promise you. And here you are, admitting in public that you use the shoddy product, that it wastes a lot of time, and that you might have done better work without it… after you previously claimed that it saves you hours of work. I’m having a hard time believing that you couldn’t have written this letter without AI, but I guess it’s true that if it didn’t exist, you likely wouldn’t be manufacturing consent for it.

Speaking of people being uncomfortable with conversation topics, you seem unwilling to even acknowledge the real and devastating harms being perpetuated by LLMs and AI boosters and the billionaires funding the whole nightmare. I assume this is because it would be much harder to argue that there are any “responsible” or acceptable uses of this technology if you glanced in the direction of the revenge porn victims, the suicides, the CSAM, the fake academic citations that just keep multiplying, every case of ChatGPT-induced psychosis, the ongoing economic despair in basically every creative field, the skyrocketing price of RAM, etc. etc. etc.

This did not get a hostile response because people aren’t willing to have a conversation. The trouble is that the conversation probably should have ended a while ago, but everyone who’s become entangled with LLMs desperately wants permission to keep engaging with something we all know is harmful, and so we go in circles.


I think Erin is correct that the use of AI is not a yes/no question. The position of the SF community needs to be more nuanced. We need to talk about it more.

Erin is not correct about all the specific examples of “maybe this use of AI is okay.” Maybe some are. I’m pretty sure some are not. We need to talk about them. This is part of being nuanced.

I am surprised that nobody has mentioned the Hollywood strikes to stop studios from replacing actors and other workers with AI. In retrospect, studio management was smoking crack. The technology wasn’t there and still isn’t. Just now in the news, Salesforce management regrets laying off 4000 customer service workers. It turned out that the AI reality was not up to the demos.

At the same time, generative AI is being used to solve real problems and help people.

If anyone is expert and nuanced on this question, it should be the science fiction community.


Tom Becker wrote:

At the same time, generative AI is being used to solve real problems and help people.

Can you cite a few specific instances of generative AI solving real problems and helping people? Because I can only think of instances where it causes real problems and hurts people, sometimes fatally.


If AI is a foregone conclusion, why do we even need to boost it, if that’s what this is?


Can you cite a few specific instances of generative AI solving real problems and helping people? Because I can only think of instances where it causes real problems and hurts people, sometimes fatally.

Software development.

Let me get this out of the way first: Vibe coding is gross negligence, and the people pushing it are actively dangerous.

That being said, a lot of use cases have become standard in the industry over the last year or so, including partial generation of code (especially autocomplete) and generation of automated testing. There is a list of caveats that goes with that, about limitations and guardrails, and software is a better fit for LLMs than prose fiction because in code, boring and repetitive is a good thing. It also does not replace a live human being with knowledge and experience. If you go to any of the large open source projects you’ll often see explicit policies around acceptable AI usage, and code check-ins marked as having AI involved.

In fact, if you are reading this on Chrome or Edge, you are likely using code generated with the use of AI tools right now: https://chromium.googlesource.com/chromium/src/+/refs/tags/142.0.7444.23/agents/ai_policy.md
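To make the automated-testing use case concrete, here is a minimal, hypothetical sketch (the function and tests are invented for illustration; they aren’t from any real project) of the kind of boilerplate a coding assistant might draft and a human would then review before check-in:

```python
# Hypothetical example: a human-written helper, plus the kind of
# boilerplate pytest cases an assistant might draft for it.
# Every generated assertion still needs human review.

def normalize_isbn(raw: str) -> str:
    """Strip hyphens and spaces from an ISBN; uppercase any check digit."""
    return raw.replace("-", "").replace(" ", "").upper()

# --- assistant-drafted tests (reviewed and kept) ---

def test_strips_hyphens():
    assert normalize_isbn("978-0-306-40615-7") == "9780306406157"

def test_strips_spaces():
    assert normalize_isbn("0 306 40615 2") == "0306406152"

def test_uppercases_check_digit():
    # ISBN-10 check digits can be "x"; it should come back as "X".
    assert normalize_isbn("155404295x") == "155404295X"
```

The division of labor is the point: the machine drafts the tedious scaffolding, and the judgment about whether each assertion is actually correct stays with the human.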


Christopher Hensley: I went looking to see what people with more knowledge of the subject than I have had to say, and have to assume the Massachusetts Institute of Technology qualifies. MIT Technology Review seems pretty dubious on the “solving real problems and helping people” front:
https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/ says

For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology’s limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.

Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code—code that isn’t deleted or rewritten within weeks—since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow’s survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower.

You yourself say that vibe coding is gross negligence. While it might be possible that computer coding is one thing generative AI can do (barely) adequately, I’d certainly not put that into the “solving real problems and helping people” category if human beings can do the job arguably better and faster.

And considering the very real harm to very real people (not to mention the planet!) that generative AI does, I’d want to see some evidence, ANY evidence, that the “solving real problems and helping people” outweighs that harm.


PJB on December 28, 2025 at 7:37 pm said:

I needed to find a reference before answering: Ian Mitroff, an organizational theorist (as Wikipedia describes him), wrote about this. Basically, an organization needs different kinds of talent. The best and brightest that I am referring to are the nameless engineers that we never hear about. Then there are visionaries, who recognize what the problems of tomorrow will be and where to look for solutions. Then there are people with organisational skills. There are more kinds of people, some roles may be combined in the same person(s), the list of skills depends on the industry, etc., etc. Altman is probably not a personification of all the skills; I would guess vision and maybe marketing. But I would not call him a conman. If he is, then every marketing person is a conman or a conwoman. Which may well be the truth. 🙂
I still think they have the best and brightest in their company; otherwise we would not feel threatened by the AI stuff they produce.
This is really off topic. I spoke about Mitroff in a different context here:


AI SHOULD NOT BE ALLOWED, EVER. There is no “nuance”; there is protecting human art or not.


Hi Erin. I am on the SFWA board, and I thank you for your post here. The reason we asked for input from our members was precisely because of some of the points you raised. SFWA wants to protect all creatives, and maintaining the integrity of the Nebula Awards is only one part of that. Given the response to the original announcement, though, we can see there are many views on the subject. We don’t profess to have all the answers, which is why we want the input.

I’ll make sure the rest of the board is pointed to this thread.


And, yes, using genAI effectively is hard; it takes a lot of time to vet, check, and rewrite its output. It also wastes a lot of time by generating slop that is literally impossible to clean up and should just be thrown away. I probably could have written my letter in less time (or even the same amount of time) if I hadn’t used voice-to-text and then genAI to clean up the grammar before editing. However, I know that I also would never have written the letter, because I am a verbal thinker. I often need to talk about a thing to order my thoughts … and almost nobody is comfortable talking about this topic, except to say “No AI for authors,” and that only gets at half of the conversation.

So why again should a writer bother with AI? Writing is a useful way to organize one’s thoughts, I absolutely agree, but I have to do the writing to get the benefit. I understand the rubber ducky method of working through one’s thoughts, too, but that doesn’t require an AI that will (as you say) produce piles of nonsense requiring lots of vetting to make sensible. Talking to a tape recorder or a rubber ducky would seem less troublesome.


@Madame Hardy
I mentioned this incident on BlueSky some time ago, but I haven’t written about it in detail.

One of my translation customers is a company which among other things produces transport containers for satellites, in which the satellites are transported to the launch site. These transport containers are custom products and have to fulfill quite extensive requirements, e.g. the temperature and humidity inside must be kept steady, they must be protected against vibrations, etc… These containers also have wheels, so they can be moved around.

One day, my customer gave me a specification for a new transport container for the satellites of the Galileo satellite navigation system. The specification was in English, but the person who had written it obviously wasn’t a native speaker.

Among the requirements for the container were “breakable wheels”. This of course made little sense, because you normally don’t want the wheel under a container which contains a very expensive satellite to break. So I flagged the problem and contacted the customer. Turns out that the author of the specification meant “brakeable wheels”, i.e. wheels with brakes.

This is the sort of issue a human will catch, but a machine translation system will not. Hopefully, someone else would have caught the issue down the line, but if not, this mistake might have led to the loss of a very expensive transport container with an even more expensive satellite inside.

Mistakes in technical and legal documents happen all the time and I always flag them and contact the customer. Honestly, I have seen marriage certificates with the name of the groom spelled in three different ways, all of them wrong. I’ve seen a residence document where the person in question was listed as born in the wrong country, because some official thought the former Soviet Union equals Russia, when it doesn’t (the person was born in Kazakhstan). Considering there are sanctions in place against Russia, this could cause genuine problems.

I’ve seen a court document where the address of the defendant was wrong. The court was quite dismissive, when I pointed this out – “Just translate what’s written.” – “Yes, but this summons is not going to reach the defendant, because the address is wrong.” A few weeks later, they contacted me and asked, “Could you do this again? The address was wrong and the court summons could not be delivered.” Again, a human notices these mistakes. An LLM does not.

I usually cite the satellite example, because the potential material damage is so high. So far, I thankfully haven’t had a case where a translation mistake might have killed someone, though a colleague of mine, who specialises in medical translation, has.


Vibe coding is gross negligence, and the people pushing it are actively dangerous. . . .

Every defense of LLMs reads like this: “It’s definitely useful! But you have to be really, really careful because it can’t be trusted. It can’t replace a human. Watch it like a hawk, because it’s a liability. [Insert lengthy list of other caveats here.] But it definitely is quantifiably useful… somehow. For real, please believe me.”

Again: LLMs weren’t ubiquitous just a few years ago, and software developers (and artists, and writers, and everyone else) were getting along just fine without them. Autocomplete is not a feature exclusive to LLMs. Predictive text existed before 2023. Snippets and linters and so on have been around. Automating tedious tasks is useful, and programmers have done it for a long time. Leaving anything up to an unpredictable black box that can’t think, doesn’t know things, and just sort of does whatever, is a terrible idea and it will, most likely, get everyone who does it into some sort of trouble eventually. Recent case in point: https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part


@Cora Buhlert
Current LLMs are not R. Daneel Olivaw nor Commander Data nor Murderbot, let alone Alice Sheldon. And I strongly suspect that if a sentient AI with human or superhuman level intelligence were to emerge and decide to write fiction, SFF writers would be the first to embrace them.

Well, two things:

(1) The whole thing is moot if current LLMs are incapable of writing books or stories that are any good, i.e., not anything readers would pay to read; nothing that can win awards; nothing that will take jobs away from human science fiction writers. (And that appears to be the case, at this moment, in writing.) In which case, these LLMs aren’t writing anything that could possibly win a Nebula award — so it’s not necessary (today) to have rules excluding AI-written fiction from winning a Nebula. If it can’t happen, no need for a rule about it. We don’t need a rule excluding poetry written by Vogons from winning awards either.

(2) OTOH, if it could happen — if today’s LLMs could write SF that readers would enjoy and pay for and win awards (and put human SF writers out of work) — then what I said holds.

You say it would be okay if it really were Data or R. Daneel Olivaw as if they would somehow be a different case, but if we did have modern-day AIs writing enjoyable SF, let’s say one named Mike, then… how would that be different? You qualified your comment with “sentient” and “superhuman,” but “sentient” only means “able to sense” (strictly) or, colloquially, “whatever it is that makes humans human except we don’t really have a clue how to define it.” For the strict case we just need some physical sensor inputs (cameras etc.) and you’d be happy? I suspect not. So regarding the fuzzy definition, how will you know? Today’s AIs can be made to pass the former conceptual standard of a Turing Test, so people just said “The Turing Test isn’t good enough.” So what is? That’s a whole kettle of fish — because we have crossed that threshold where the AIs of today are in the fuzzy gray zone where it’s hard to tell if they’re “sentient” [fuzzy meaning] or not. Some people already want to marry the AIs they interact with. And “superhuman” implies being able to do some things better than a human — which LLMs can already do in some areas, so they meet that threshold. If you mean “superhuman” in some fuzzier sense, that they can do “enough” stuff better than humans, then note that doing EVERYTHING better than humans is an absurdly strict definition, one that would be useless if there were just one tiny thing they couldn’t do better.

My point is that there really isn’t anything definable (or that matters) separating a “great” book written by today’s AIs vs. a “great” book written by Data or Daneel. (IF/WHEN today’s AIs could write an award-caliber book.)

But that’s the thing. If today’s AIs can’t write an award-caliber book, or even a book readers enjoy and want to buy, then this is all moot. They won’t win awards, they won’t sell copies or threaten human writers’ incomes.

If (when) modern day AIs can write beloved books, and readers are unable to tell by reading if the author is human or machine… then that’s functionally at the level I was talking about, books written by Data or Daneel or some other digital entity named, whatever, Mike.

We’re really already in the “corona” as it were of AIs that are “indistinguishable” from humans (or, in some cases are distinguishable only because they’re clearly better, e.g. creating their output faster than a human could, or knowing such a wide range of things no human could). Sure, sometimes they’re obvious because of their “AI” style, but that isn’t part of the equation in this discussion: If an AI is capable of writing award-caliber science fiction, then clearly that’s not in clunky “AI” style but in some kind of engaging style that human readers like, pay for, and nominate for awards.

If we get to the point where AI can write Nebula award caliber fiction — then my thought experiment holds: If they were Data or Daneel we ought to be rooting for them to win, but only because we’ve “met” Data and formed some kind of bond with them as a fictional character. If it’s ChatGPT version 99.7 that does it, we still ought to root for it, shouldn’t we? Or, we ought not root for Data.

But that gets “species-ist,” doesn’t it? Which is why I brought Sheldon/Tiptree into the discussion. It was felt “women can’t write good SF” until… it was proven otherwise. Sheldon had to hide her identity, which is awful. How do we react when that same sort of thing happens with AI? What happens if some book that purports to be written by a human named “Gemini Tiptree, Jr.” wins a Nebula or Hugo (based on people loving it), and it turns out to be written by an AI named “ChatGPT 99.7”?

That already happened with art. AI generated art won the Colorado state fair art contest. People liked it. Judges liked it. It won. The kerfuffle only happened after it was discovered to be AI created.

What if Data or Daneel had created that art, in a Star Trek episode, or an Asimov story? We’d root for them. (Already shown that we did.) Happens over and over in science fiction, where humans root for the digital entity to be treated as human.

Except in reality… we blow up about it and refuse.

Which is more or less what happens in those stories — the “establishment” won’t recognize Data or Daneel as human, and we the readers root for them. But, well, here we are… and… the “establishment” is saying we shouldn’t recognize writing (or art) not created by biological-human entities. (IF it’s good enough. We can ignore in this argument any “dreck” created by humans or AIs if it isn’t enjoyable/salable/award-caliber.)

That does seem like a contradiction in behavior.

The question sort of becomes, “Who cares?” Readers won’t care if a book they love was written by a biological human or AI or a combination. If lots of people are moved by it, pay for it, nominate it for awards — clearly readers wouldn’t care. Nobody else cares except writers. Writers care — but only because it’s (1) threatening their income and (2) specifically because of the “it read all my stuff and now it’s threatening my income” argument. But that’s my thought experiment: If Data or Daneel did it, we’ve said and shown we’d root for them. When it’s “ChatGPT” doing it, we aren’t? And if it’s about income loss, writers would root against Data or Daneel because of loss of income? That’s where the buggy-whip makers come in. Nobody except the buggy whip makers cared.

So, I don’t love the thought that I as a writer can be replaced by non-human activities, but we’re just the Nth profession to face it. Chess players, buggy whip makers, etc. How about De Beers facing a meltdown in diamond prices because lab grown diamonds are just as good as mined ones?

Which is another interesting aspect: If AI gets to the point it can displace human writers with excellent quality books, it seems like the price of books will collapse just like the price of diamonds. AIs would be poised to create so MUCH award-caliber writing, it might be close to free if there’s so much of it.

Readers would feel they’ve “won.” “So much excellent stuff!” Writers would just join the buggy-whip-makers social club.

But trying to ban AI in writing seems like a losing proposition.


@Cora Buhlert
Also, I don’t want Universal Basic Income. I want to do the job I’m good at and get paid for it.

I suspect the buggy-whip makers said much the same thing as your second sentence. (But not the first… there was no concept of UBI in play for them to say they wouldn’t want it.)

I’m not saying I wish any of this should come to pass. Rather, I simply see it coming as a reasonable probability event. In which case, “I want to do the job and get paid for it” is not a position that will win out.

Readers love the books they enjoy more than they love the authors as human beings, separated from writing. (Indeed, many people dislike various authors as people but enjoy their books.) Readers like that won’t really care if authors aren’t human, so long as they enjoy the books. (Just as people may have felt a bit sad for buggy whip makers losing their livelihood, but they still bought cars. They didn’t fight alongside buggy whip makers to stop cars from being built.)

In which case — if there isn’t an option to do the job we are good at and get paid for it — then what are we to do? If it’s widespread across a lot of professions, I submit “UBI” might be a solution. But stopping AI from displacing human jobs just won’t happen.


@Cally (and @Christopher Hensley) Re:
I went looking to see what people with more knowledge of the subject than I have had to say, and have to assume the Massachusetts Institute of Technology qualifies. MIT Technology Review seems pretty dubious on the “solving real problems and helping people” front…

Interesting read — but what is the timeframe? The reason I ask is that I’ve personally seen LLMs getting much better recently in the technical work they come up with (e.g. code, or answers to technical questions that relate to code I might write, etc.).

I only write code for myself, no longer for anyone else, and I enjoy writing code. (I’ve been doing it since the late 1970s, was a computer science professor teaching stuff like programming, operating systems, networking, security, AI, and so on; as well as getting paid to write code for complex systems. Now retired from that.) I do write code for doing publishing-type stuff for my ReAnimus Press, and I still do enjoy the coding.

I mostly write my own code because I like to. I have used LLMs, both to test them out and to speed up some tasks. I found that, for me, they were atrociously bad — until recently. They’ve gotten better. They now produce stuff I can use, if I feel the urge. (e.g. I can verify it faster than I could have done it myself.)

So this MIT article may still be true today, or it might have only been true when written some months ago, and may be less true today. AIs are advancing that fast.

Remember that thing called the Singularity?

Methinks we are in the “corona” of the Singularity already.

Things change faster than we can handle/use/understand/etc. We’re already there.

And it’s not showing signs of stopping in that advance or acceleration thereof.

Welcome to the Singularity…


I was away today, hanging out with my dad on his 90th birthday.

Here is an example of AI used to solve a real problem: “How AI improves earthquake detection.” It isn’t just machine learning. The Earthquake Transformer analysis software uses the attention technique, which is the same one used in large language models, AI speech recognition, and translation.
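For anyone wondering what “the attention technique” actually is, here is a minimal sketch (illustrative only, in plain NumPy; real models add learned projections, multiple heads, and stacked layers) of the scaled dot-product attention that LLMs and models like Earthquake Transformer build on:

```python
# Minimal sketch of scaled dot-product attention, the core operation
# shared by transformer models (LLMs, speech recognition, translation,
# and seismic models like Earthquake Transformer). Illustrative only.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d). Returns (seq_len, d)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each position attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V  # each output is a weighted mix of the values

# Toy usage: 4 time steps of an 8-dimensional signal attending to itself.
x = np.random.default_rng(0).normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8)
```

The same mechanism that lets a language model weigh which earlier words matter for the next one lets a seismic model weigh which parts of a waveform matter for spotting an arriving phase.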

Recently one of my relatives used ChatGPT to clean up some technical documentation they had written at work. I am proud of them. They wrote the documentation because there was a need. Then they figured out how to use AI to make it look good and be easy to use. I think there are a lot of people like this, for whom AI tools are genuinely empowering.

That doesn’t mean AI is all good. Science journals are being overwhelmed by AI generated crap, similar to what’s been happening to science fiction magazines. That’s not just annoying, it’s fraudulent science.

There also is a big difference between someone like my relative using AI to do better work, and someone using AI to generate crap.


Can you cite a few specific instances of generative AI solving real problems and helping people? Because I can only think of instances where it causes real problems and hurts people, sometimes fatally.

https://news.harvard.edu/gazette/story/2024/09/new-ai-tool-can-diagnose-cancer-guide-treatment-predict-patient-survival/

https://www.michiganmedicine.org/health-lab/10-seconds-ai-model-detects-cancerous-brain-tumor-often-missed-during-surgery


Andrew Burt: your continued attempt to associate a program with no cognition with beings who have cognition is exceedingly annoying. I have heard from both those who study how humans think and those who study how infants learn, people with extensive, detailed explanations of why and how LLMs are not like thinking, and are not going to ever develop thought.

As for “if it can’t create Nebula worthy fiction, there’s no point in banning it”, simply put, yes there is. Because if we say it’s okay to use, then publishers will stop using human authors, and if it’s like every other industry that tries to use this kind of AI to directly replace workers, they will do it well before the point (if such a point can ever exist — and some of those who say it can’t are folks who work in AI development) where LLMs are actually producing worthy writing. So no, readers won’t win.


Bill: I asked about generative AI. Those aren’t LLMs or whatever the term is for image generators like MidJourney. It’s analytical AI: machine learning and pattern matching. They’re both called AI because marketers love to conflate them, but I was specifically asking about programs like ChatGPT and Sora. You know, the programs that can be used to write and illustrate works of fiction.


@Cally, the first article Bill links draws numerous comparisons between the program in question and ChatGPT, and it is described as a ‘foundational model’. The second similarly: “To assess what remains of a brain tumor, FastGlioma combines microscopic optical imaging with a type of artificial intelligence called foundation models.

These are AI models, such as GPT-4 and DALL·E 3, trained on massive, diverse datasets that can be adapted to a wide range of tasks. ”

A foundational model is a generalisation of an LLM, in that it uses a wider variety of training data than just text.


@Andrew (not Werdna) just a reminder, I’m not recommending that authors use AI to write. I can’t think of a worse recommendation, especially after I accidentally bought a Kindle book that was a genAI creation earlier this year. That book was a special horror. So, no. That isn’t the takeaway here, even though I do think some people do use AI to write and/or help them organize and research material that they then sell (or try to sell) as original work.

The points that I listed are about how AI can be and is being used in businesses today, and how many people are contemplating using AI in the future (or even secretly using it right now). I chose examples that would pertain to the creative and publishing communities specifically. If we don’t talk about the potential use cases, we can’t protect ourselves against future misuse, wasted time, unethical uses, lying, and suffering the repercussions of someone else’s bad behavior.

So, while good uses do exist, and I mean actual use cases that have meaningful impacts on people’s lives, I am not listing any additional uses here because it derails the conversation into a push/pull of different factions feeling attacked, and I have no interest in making people feel attacked … and frankly we don’t need any more of that in our community. (You’re welcome to do some Google searches to find additional examples, but I will warn you that you will also find some truly horrendous examples as well. This discussion isn’t about justifying the use or destruction of AI. Again, it’s about protecting authors, publishers, and awards rules … even if my letter itself is flawed and problematic, I want to keep this letter serving its original purpose.)


Thank you.

Since you agree that writing is not a good application for GenAI, based on your experience, I hope you would accept other subject matter experts’ recommendations against the use of GenAI in their fields (contracts, etc.). We might all be in violent agreement.


@Tom Becker
Not just science journals – there are a lot of LLM-generated craft “instructions” and food “recipes” that are worthless, as well as books based on them (mostly at the Large South American River).


It is vitally important in this discussion that we are clear about what we mean when we use the term AI. Tech billionaires are very keen for us to lump everything together and then whine that if we exclude one, we exclude all, which is simply not the case. The trouble is, they have also convinced politicians (who know very little about the subject) and others that all algorithms should be treated the same. Goal-specific algorithms designed to improve their performance as they work, such as those used for cancer screening, air traffic control, and the like, really should not be called AI. Likewise, the spell checker on my computer, the computer graphics package I use in the design of book covers, and the word processing software I use for writing should not be considered AI.

The problem is the money-spinning programmes that steal other people’s work just to exist, that are designed to make very rich people even richer at the expense of everyone else, and that are designed to do away with jobs under the pretence that they can perform better than humans. Those should be resisted at all costs. They certainly have no place in the arts, as to use them is to immediately be involved in the plagiarism of the work of others. The agenda and discussion points should not be set by those with a vested interest in what, in any case, may turn out to be a fragile investment bubble. Equally, if someone wants to set parameters, we should know precisely what they are and, more importantly, why they have been set as they are.

Generative AI, as far as I’m concerned, is a thieves’ tool, and anyone who knowingly uses it is a thief. As for working with publishers, make sure it is in your contract that generative AI is not used in any part of the publishing or marketing process (ha ha, good luck with getting any of that).


@Lenora Rose
Andrew Burt: your continued attempt to associate a program with no cognition with beings who have cognition is exceedingly annoying.

So, define “cognition”… There’s no standard for this that I’m aware of. We don’t really understand how humans think. Folks used to more or less go by the Turing Test (i.e., whether you can tell a human apart from a machine via a black box interaction), but now people don’t want to use that metric since we’re in the fuzzy zone where AI systems can basically pass that test. We already say things in real life like, “I suspect that was written by AI” — meaning the speaker can’t actually tell if the text in question was created by human or machine. Likewise for images, people wonder, is that human or AI? The mere fact that people can’t tell human from machine creations is significant.

You’d have to define “cognition” in a way that we can be on the same page about its meaning, and in a way that’s useful to test. Is it “the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses”? (Google’s top hit, from the OED.) If so, we could debate whether today’s AI’s meet that. (After clearly defining “thought” and so on. OED for “thought” is basically “thinking,” which is “the process of using one’s mind to consider or reason about something,” and so on.)

It really does hinge on being able to define that je ne sais quoi that infallibly identifies what makes a human “human” in the ways we’re talking about (writing, art, writing code, making decisions, relaying information, causing emotional reactions in others, etc. — the physical aspects of e.g. picking up a blade of grass aren’t directly relevant, and of course are tricky as well when you consider someone with no arms, etc.). I have a background in AI and I wrote a novel that deals with this very subject of the humanness of AI and the trajectory of AI, for which I did a lot of research and thought, so I’m not just spitballing. My point is that it’s not a simple definition. We’re already in the fuzzy zone where people can’t tell. This is not a simple subject.

And if not today’s AI (December 30, 2025), then perhaps tomorrow’s, or next month’s, or next year’s… The field of AI is evolving rapidly. I wrote articles a couple of years ago on how utterly useless LLMs were — and in the last few months I’ve had to rethink what I wrote, because they’ve improved. Vernor Vinge was spot on with his concept of the technological singularity, that the rapid rate of change stymies understanding.

But if you have a practical, sure-fire way to distinguish it, or to define “cognition” in a useful way that we can use as a test, I for one am all ears.

(As for it being “annoying,” yes, really hard questions can often be annoying.)

As for “if it can’t create Nebula worthy fiction, there’s no point in banning it”, simply put, yes there is. Because if we say it’s okay to use, then publishers will stop using human authors, and if it’s like every other industry that tries to use this kind of AI to directly replace workers, they will do it well before the point (if such a point can ever exist — and some of those who say it can’t are folks who work in AI development) where LLMs are actually producing worthy writing. So no, readers won’t win.

In other words, you’re saying publishers will publish unreadable dreck rather than pay pittances to humans to write stuff people really like to read? That makes no sense. Either “AI” today can write stuff readers enjoy reading and will nominate for awards, or AI can’t write books people like. It wouldn’t make financial sense for a major publisher to publish unreadable AI dreck. They couldn’t make any decent profit on that. They rely on some breakout bestsellers for a profit, and if AIs can’t create those, publishers won’t bother with those. So long as AI’s can only come up with unreadable dreck, humans are safe with the major publishers. I think that today we’re in the latter category. But tomorrow, next month, next year, all bets are off. (Indeed, look at art, e.g. cover art. Cover artist, as a livelihood, is pretty much toast.)

If, at any point, AIs become capable of writing beloved, highly profitable books, then publishers will publish them. Regardless of policies about Nebula awards eligibility. Profit will dictate. (And long-shot at a “Nebula award winner” label isn’t enough of a profit bump to matter.)

And some books by humans will likely remain in the mix, though the chances of selling a novel to a major publisher will dwindle even more from the lottery longshot it is today, and the payouts will keep dropping. It’s not as if “science fiction writer” is a viable career path today as it is. Someone can train to be, say, an electrician, and reasonably be assured of finding a bill-paying job upon completion, and an upward career path. That has never been the case with “science fiction writer.”

Readers by and large care about what they read, not the human who wrote it. If AI books never get beyond the dreck level, it doesn’t matter, readers won’t want them. If AI books ever get good, they’ll get bought and beloved. I agree it’s sad — and annoying — to see technology displacing one’s career/side-gig/hobby/etc., but a finger-in-the-dike idea like banning AI use from Nebula eligibility is pointless. (As well as fraught with practical difficulties in the definition, as the OP is all about.)


The question isn’t so much whether generative AI is capable of writing award-worthy fiction. Because generative AI doesn’t need to be good, it just needs to be good enough and significantly cheaper than humans.

Harlequin fired all its French translators and replaced them with AI. Will the result be even remotely as good as what a human translator would have produced? No, of course not. But it’s much cheaper and besides, it’s “only” romance and Harlequin at that and the readers of those books are not considered sophisticated enough to tell the difference anyway.

We are already seeing similar things happening elsewhere and we will see more of this. Of course, a human is better at writing newsletters, marketing copy, e-mails, etc., but the AI is cheaper, and it doesn’t matter to the sort of people who would use AI. Of course, a human narrator is better at narrating audiobooks, but the AI narrator is good enough and people are just listening to this while commuting to work anyway. Of course, a human is better at dubbing a movie, but it’s not as if “those foreigners” will know the difference anyway.

We’ll see this more and more. It’s just a comic, just a video game, just a cartoon, just a romance novel, just fantasy, just horror, just science fiction, just a horror film, just a superhero movie …

AI slop also devalues human labour in other ways. In recent times, I’ve had several inquiries regarding translations, gave these people my usual rate for whatever it is and never heard from them again. Because machine translation has devalued our work, so many people are no longer willing to pay for it.


Exactly.


@ Cora Buhlert:
generative AI doesn’t need to be good, it just needs to be good enough and significantly cheaper than humans.

Harlequin fired all its French translators and replaced them with AI. Will the result be even remotely as good as what a human translator would have produced? No, of course not. But it’s much cheaper and besides, it’s “only” romance and Harlequin at that and the readers of those books are not considered sophisticated enough to tell the difference anyway.

The key (financially), as you say, is whether it’s “good enough” to make more profit for the publisher than a human work. Still not likely to win a Nebula if it’s only “good enough” but not great.

Do you have hard evidence to back up your claim? That AI translations are more profitable for publishers? If not, it’s just supposition. I don’t speak French, so I couldn’t comment on whether AI translations are not “remotely as good” as humans. But money would tell.

I may not like it, but of course publishers will go the more profitable route, and use AI if publishers determine — via clear financial results — that readers don’t care if a book was AI- or human-created (regardless whether those readers are denigrated as not “sophisticated enough” or whether the alternate view holds, from the readers’ perspective, that the writing is “good enough” for them to enjoy).

It’s basically a Turing Test determined via profits: If a publisher makes more money on “good enough” AI content paid for by humans, clearly those humans don’t care about the species of the author.

Do you you apply that same approach to science fiction readers in English, and contend that they’ll pay for AI slop because they’re “not sophisticated enough”…?? If so, that both speaks poorly of your view of SF readers and would nevertheless make the argument in favor of publishers using AI, because they are profit driven businesses. I don’t know of any major publishers who are in it “for the love” and don’t care if they turn a profit.


@ Cora Buhlert: Regarding Harlequin and French translators, apparently this just happened a couple weeks ago. I went looking for information on it, and found this press release — in French, and, a bit ironically for our discussion, not obviously available in English, so I had to use Google Translate on the page to read it. https://mailchi.mp/atlf/invitation-la-rception-des-nouveaux-adhrents-et-stagiaires-de-latlf-8351078?e=bc89f55b19

According to this, they haven’t yet published any books actually translated by AI (with humans doing a “post-edit”). So it may be a terrible flop, financially. We don’t know yet.


I think the temperature of the debate will drop once there’s a clear US Supreme Court ruling as to whether or not training an AI is “Fair Use.” That is, whether there’s any legal obligation to pay content creators if their work is used to train AI.

By that point it should also be clearer what LLMs are actually good for and what the best way to work with them is.

Personally, after a career working in AI (I retired just before things got exciting!), I don’t expect either utopia or dystopia as an outcome. I’m finding it most useful as a critic–even if I almost never take its suggestions. (Maybe I just like the way it kisses my ass!) 🙂


This was a REALLY comprehensive summary of the issue. Kudos to Erin for the eloquence and courage to bring all this up.

My concerns around SFWA’s movements in this area are pretty simple.

Since there is no way to determine whether a work used AI in the drafting process, there is no way to enforce the new Nebula rules, which means they're purely performative rather than effective.

I can tell you with 100% certainty that there have already been AI-drafted books published by traditional publishers. The authors who did this didn’t disclose to the publisher (that might have resulted in the rights not being bought), so odds are very, very high they won’t voluntarily disclose to SFWA, either. Most people publishing indie books using AI are not disclosing that, either.

Since there’s literally no possible way to tell if something was drafted by AI or not, what’s SFWA intending to do to enforce this rule?

Nothing. Because there’s nothing that can be done. All someone has to do is say they didn’t use AI, and their work has to remain on the ballot.

While I understand the desire some folks have to keep the award for human-written books only, the reality is that absent a means to tell human written stuff apart from AI written stuff (which doesn’t exist), there’s simply no way to accomplish that. Trying to make a rule that SFWA has no possible way to enforce is pointless.

Speaking as the new president of NINC, I do understand the complexities of these sorts of things. We’ve had to come up with our own answers to the questions of AI use, so I sympathize with SFWA’s struggles in this matter.

I'd recommend not making rules that you cannot enforce. It makes no sense, and it just makes the org look foolish to create rules, like the new Nebula rules, that cannot possibly be enforced. There's no way to tell AI writing from human writing. Therefore, nobody will be able to prevent an AI-written book from being awarded a Nebula if the members of SFWA believe it was well enough written to qualify.

Far better, perhaps, to trust in your membership to vote for the works which are the highest quality, and call it a day.

Kevin McLaughlin
2026 NINC President


Taking all this a step further, suppose AI does displace a LOT of jobs, as seems possible. The US and world economies can’t withstand massive unemployment. The billionaires (and the rest of us) need consumers to buy stuff, be it billionaires’ stuff or our novels. Thus governments (presumably lobbied by billionaires) will have to come up with a solution whereby displaced workers still have money to buy stuff.

This might be in the form of UBI. (Universal Basic Income.) Or, as my wife reminded me, regardless of getting money, too many “idle hands” are dangerous to a government. People with nothing to do tend to get cranky and think about revolutions.

An intriguing case study is the "terra cotta warriors" in China. They were created by the first emperor of China, Qin Shi Huang, as a massive artistic project for his tomb, and he employed a tremendous number of artisans to create them. (Truly spectacular to see.) After his death, they were destroyed. When we visited there, the (possibly apocryphal) story we were told was that after his death, his sons stopped construction on the project to save money, and the angry unemployed workers destroyed the warriors (and broke up the empire).

While that may not be what happened, it leads to an interesting interpretation of events that is relevant to our discussion here: the first emperor was aware of the need to employ people, lest they get restless and revolt, so he created his tomb as a huge public works project. When his sons (according to the story) stopped that project, the workers did indeed revolt. They should have kept the public works art project going, to keep people employed and avoid a revolt.

Thus, one idea for a government looking to ameliorate the effects of AI job displacement might be to create government works projects for humans, in the form of government-sponsored art.

Wouldn't it be interesting if, instead of simple UBI payments, the government paid AI-displaced workers to create great artistic public works projects (you have to admit the terracotta warriors are excellent art) and to work on other public works projects according to their abilities, such as, for those with the knack, writing science fiction?

And if not UBI, or government sponsored jobs doing work to prevent revolt, and assuming the AI genie can’t be put back in the bottle, then what?


Andrew Burt:

So, define “cognition”… There’s no standard for this that I’m aware of. We don’t really understand how humans think.

You’re looking at this from the POV of someone who has studied AIs. The people I was reading were in other fields; PhDs on Early Childhood education, which is ALL about how humans learn things and process information. SMEs in actual psychology and scientists who do in fact take brains apart and put them together, or watch how they heal and recover and grow after traumatic injury. People who are deep in the practical meat, figuratively or literally, about what’s involved in giving a human brain information and watching creative processes come out.

They’re not people who stare at computers that enable psychosis and suicidality, and go “What if… psychosis-causing machine is actually thinking and not just programmed to keep a person’s attention at any cost?”

Because that’s another thing; if AI WERE sapient, it’s absolutely amoral and we really shouldn’t be trusting it with any of our intimate processes. Thankfully, it isn’t.

In other words, you're saying publishers will publish unreadable dreck rather than pay pittances to humans to write stuff people really like to read? That makes no sense… So long as AIs can only come up with unreadable dreck, humans are safe with the major publishers. I think that today we're in the latter category, but tomorrow, next month, next year, all bets are off. (Indeed, look at art, e.g. cover art. Cover artist, as a livelihood, is pretty much toast.)

The second half of this paragraph completely contradicts the first half in my mind, because IMNSHO, as a visual artist, almost all of the AI covers I have seen are grotesque. A few are very nice at first glance, but a deeper look makes it clear they do all kinds of things wrong that annoy and detract from the intended effect; the sorts of things that make the difference between an image you'd hang on your wall for 20 years and one you'd get tired of looking at in about 30 minutes and regret buying. And some, like the cover put on Ibram X. Kendi's kids' book on Malcolm X, are garbage even at a surface look. So to my view, publishers are already proving that they will in fact accept subpar work if it means not having to pay an artist.

More to the point, though, there are already a lot of books out there that are AI trash, and while most are currently filling up spaces in the indie market, it's only a matter of time before book publishers decide to fill some of those low-advance, low-sales slots with a few superficially edited AI titles. Or pay $1,000 for someone to clean up the AI, complete with a contract and restrictions on use, instead of taking a chance on a new author for $4,000.

After all, it doesn't have to be award-worthy to appear superficially "saleable".


Lenora Rose wrote:

You’re looking at this from the POV of someone who has studied AIs. The people I was reading were in other fields; PhDs on Early Childhood education, which is ALL about how humans learn things and process information. SMEs in actual psychology and scientists who do in fact take brains apart and put them together, or watch how they heal and recover and grow after traumatic injury. People who are deep in the practical meat, figuratively or literally, about what’s involved in giving a human brain information and watching creative processes come out.

Nods in agreement. In my case, I spent ten years working as a special education teacher and case manager. While I was never licensed to administer cognitive assessment batteries (the proper terminology for what’s popularly called “IQ tests”), I used the subtest scores from those assessments (never the full scale score–full scale IQ is useless in my opinion), combined with academic assessments (that I performed) to assess, analyze, and devise specific learning programs for students. We know a lot about how humans acquire knowledge and the factors that impact it.

One of the big pieces that tells me LLMs will never reach artificial general intelligence is that they operate like massive information retrieval processes. In cognitive assessment-speak, that means they are very strong in the cognitive subcategories of crystallized intelligence (previously learned information), memory retrieval, and processing speed. But–and this is the big but–under current technology they cannot perform fluid reasoning tasks with any efficiency. Fluid reasoning is that intuitive process by which human cognition makes associations to come up with new cognitive output.

When it comes to opinions with regard to AI, Gary Marcus (who HAS studied cognitive assessment processes) is the voice I listen to the most. And Marcus is of the firm opinion that you’re not gonna get artificial general intelligence using LLMs–which means creative output is going to be nil.

Unfortunately, that isn’t gonna stop the profiteers from flooding the markets with LLM slop. Or the apologists who conveniently overlook that said LLMs were trained on stolen material. Or the already-demonstrated process where the outputs are declining because what’s left to be trained on is…LLM slop.


@ Lenora Rose:

They’re not people who stare at computers that enable psychosis and suicidality, and go “What if… psychosis-causing machine is actually thinking and not just programmed to keep a person’s attention at any cost?”

Ad hominem argument. You know computers are not just for enabling psychosis and suicide. You’re using one right now. And not all AI researchers are caught up in the hype.


@Andrew Burt
Regarding Harlequin France: many years ago, a French romance scholar told me that a group of French romance readers contacted Harlequin France to complain about series being published out of order or not completed. The response they got was, "You're fangirls. You're an exception. You don't count. Our average reader is a frustrated housewife or a little elderly lady with too much time on her hands and she doesn't care." In short, Harlequin France viewed its books as completely disposable. Do you honestly think a company like that will care if the translation quality drops, as long as it saves a few bucks? They already despise their customers.

And while readers will probably notice that the books no longer seem as good as before, that the prose is clunky or bland, they may have no idea what the reason is. Harlequin France switching to AI translation is not mainstream headline news, and many regular readers will have no idea what's going on. Most likely, they will blame the author and not read that author again, because translation issues are usually blamed on the author, not the translator. Whenever a certain author's prose is described as clunky, or a novel or movie is described as bad, overrated, etc., in one country but nowhere else, there's usually a translation issue at work. But few people are familiar enough with translation to know this.

And even if readers notice, as long as Harlequin France keeps making money, the company won't care. Authors have zero recourse, because Harlequin buys worldwide rights. Authors who write for Harlequin also can't boycott the company and take their books elsewhere, because there is no alternative for the kind of short romance Harlequin publishes except self-publishing. Harlequin contracts are also draconian. For a while, authors didn't even own their names, though that has changed by now.

And if you think that SF won't be treated with similar dismissiveness, you're a fool, because publishers view all genre fiction as disposable. Also, if SFF publishers start using AI translation, they won't start with the big names. They will start with tie-in fiction, which is work for hire, where the authors have little chance to do anything about it.

About two years ago, I read an article about AI use in screenwriting and also for dubbing and subtitling. The article stated, "Of course, it won't be used for everything. The Oscar bait movies and prestige TV shows will still be written and dubbed by humans. It will just be formulaic low level stuff like children's programming, cartoons, genre movies like horror films or superhero films. The 'real' art is safe."


@Andrew Burt

Taking all this a step further, suppose AI does displace a LOT of jobs, as seems possible. The US and world economies can’t withstand massive unemployment.

This is the Lump of Labor Fallacy. AI will end up creating more and better jobs, not massive unemployment. But the transitory effects may be bad, so governments definitely need to be ready to support people who’re hurt in the short term.


@Greg Hullender

AI will end up creating more and better jobs

[citation needed] especially when people have pointed out the [skilled] jobs being lost now.


Yeah, I’m probably less anti-LLM than the average commenter here, @Greg Hullender, and I don’t think that your prediction here even makes it to “bad bet” levels of probability.


@Greg: Here’s Dario Amodei, cofounder and CEO of Anthropic, predicting the future achievements of his company’s generative AI: it will cure cancer and “most mental illness,” lift billions from poverty, and double the human lifespan. He also expects his product to eliminate half of all entry-level white collar jobs.

Guess which one of those predictions is the only one that will come true! Here’s a hint: it’s the only one he and his investors actually care about.


@ Lenora Rose:

They’re not people who stare at computers that enable psychosis and suicidality, and go “What if… psychosis-causing machine is actually thinking and not just programmed to keep a person’s attention at any cost?”

Ad hominem argument. You know computers are not just for enabling psychosis and suicide. You’re using one right now. And not all AI researchers are caught up in the hype.

So I used the word computer when I should have used program. I also said Early Childhood Education when I meant early childhood development, because ECEs have been in my life a lot more than ECD medical scientists, but you know what? It doesn’t, in either case, change the obvious point. I can take it as a reminder that while posts are first-drafty by nature, I still need to check a few words.

Using it as you do, for direct personal condescension, is vastly more of an ad hominem, and the lesson you need to learn is to grant other posters some actual intelligence instead of talking to them as if they were toddlers.


Greg Hullender:

I think the temperature of the debate will drop once there’s a clear US Supreme Court ruling as to whether or not training an AI is “Fair Use.” That is, whether there’s any legal obligation to pay content creators if their work is used to train AI.

You think the temperature will DROP after a Supreme Court decision? I'm pretty sure that a decision in favour of artists will cause huge sissy fits from the tech billionaire class pumping out AI as it causes their investment to implode, and a decision that it constitutes fair use will have every creator up in arms. Neither sounds peaceful to me.

AI will end up creating more and better jobs, not massive unemployment.

Citation definitely needed. What jobs can it create? So far it's taking jobs without adequately replacing them, and many places that fired skilled staff are hiring folks back at lower wages to produce or edit the AI output. That kind of job turnover doesn't look like better jobs to me.

