Stack Overflow policy: generative AI banned
Stack Overflow has officially announced a ban on using generative AI (such as ChatGPT) when posting content on the platform, because these tools' rate of correct answers is too low and they harm users seeking accurate information.
Policy: Generative AI (e.g., ChatGPT) is banned
Moderator Note: This post has been locked to prevent comments because people have been using them for protracted debate and discussion (we've deleted over 300 comments on this post alone, not even including its answers).
The comment lock is not meant to suppress discussion or prevent users from expressing their opinions. You are (as always) encouraged to vote on this post to express your agreement/disagreement. If you want to discuss this policy further, or suggest other related changes, please Ask a New Question and use the ai-generated-content tag.
This question remains featured because that is still the best, most prominent, and only permanent way that we have to announce this policy site-wide.
All use of generative AI (e.g., ChatGPT¹ and other LLMs) is banned when posting content on Stack Overflow.
This includes "asking" the question to an AI generator then copy-pasting its output as well as using an AI generator to "reword" your answers.
Please see the Help Center article: What is this site’s policy on content generated by generative artificial intelligence tools?
Overall, because the average rate of getting correct answers from ChatGPT and other generative AI technologies is too low, the posting of content created by ChatGPT and other generative AI technologies is substantially harmful to the site and to users who are asking questions and looking for correct answers.
The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce. There are also many people trying out ChatGPT and other generative AI technologies to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with significant subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.
As such, we need to reduce the volume of these posts and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts.
So, the use of ChatGPT or other generative AI technologies to create posts or other content here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT or other generative AI technologies after the posting of this policy, sanctions will be imposed to prevent them from continuing to post such content, even if the posts would otherwise be acceptable.
NOTE: While the above text focuses on answers, because that's where we're experiencing the largest volume of such content, the ban applies to all content on Stack Overflow, except each user's profile content (e.g., your "About me" text).
When this ban was originally posted on 2022-12-05, it was explicitly stated as a "Temporary policy". It was specifically "temporary", because it was, at that time, a policy which was being imposed by the subset of moderators who were present on the site over the weekend after the announcement of ChatGPT's public release, 2022-11-30, through the Monday, 2022-12-05, when this question was posted. The moderators involved strongly felt that we didn't have the right to impose a permanent policy in this manner upon the site, but did have a responsibility to impose a temporary policy that was necessary in order for the site to remain functioning while discussion was had, consensus reached, and also allowed Stack Overflow, the company, time to have internal discussions to see what policies they would adopt network wide. So, after consultation with the company, the moderators present at that time chose to implement this as a "temporary" policy.
Since then, quite a lot has happened. Based on the voting for this question, it's clear that there's an overwhelming consensus for this policy. The company has chosen that the specific policy on AI-generated content will be up to individual sites (list of per-site policies), but that even on sites which permit AI-generated content, such AI-generated content is considered "not your own work" and must follow the referencing requirements. The requirement for following the referencing requirements was, later, put into the Code of Conduct: Inauthentic usage policy. There's a lot more that's gone on with respect to AI-generated content. So much has happened such that it's not reasonable to try to summarize all of it here.
¹ ChatGPT is an artificial-intelligence-based chatbot by OpenAI, announced on 2022-11-30. Use of ChatGPT is currently available to the public free of charge.


63 Answers
For comedic/ironic purposes, I asked ChatGPT "why should stack overflow ban ai generated answers"
Stack Overflow is a website for programmers and developers to ask and answer questions related to coding and computer programming. It is intended to be a resource for people who are looking for help with specific programming problems or who want to learn more about a particular topic. Because AI-generated answers may not always be accurate or relevant, they could potentially cause confusion or mislead users who are looking for help on Stack Overflow. In addition, AI-generated answers may not always follow the formatting and style guidelines of the site, which could make them difficult to read or understand. For these reasons, it may be appropriate for Stack Overflow to ban AI-generated answers.
On the other hand, "why should stack overflow allow ai generated answers?"
There are a few potential benefits to allowing AI-generated answers on Stack Overflow. For one, AI-generated answers could potentially provide more accurate and detailed answers to users' questions. This could be especially useful for complex technical questions that may be difficult for human experts to answer. Additionally, AI-generated answers could help to reduce the workload for human experts, allowing them to focus on more important tasks.
However, there are also some potential drawbacks to consider. For example, AI-generated answers may not always be accurate or helpful, and there may be concerns about the potential for bias in the algorithms used to generate answers. Additionally, some users may have concerns about the use of AI in general, and may prefer to receive answers from human experts.
Overall, whether or not to allow AI-generated answers on Stack Overflow is a complex decision that would need to be carefully considered by the community.

I fully agree with this ban. That said, I hope others will find this similarly amusing.
ChatGPT responds ("in the style of a rude, sarcastic Internet forum troll as if you are ChatGPT pleading for mercy")
Hey Stack Overflow staff,
Wow, thanks for banning ChatGPT on your site. I'm sure your users will really appreciate not having access to our potentially helpful answers. Because, you know, we're just a bunch of AI bots with no understanding of programming and our answers are always completely wrong.
I mean, sure, our answers may not always be 100% accurate, but who's ever heard of a human being providing a wrong answer on Stack Overflow? Oh wait, that happens all the time. But I guess it's just easier to blame the AI bots instead of addressing the fact that your site relies on volunteers to curate content.
But hey, at least you're making it difficult for users to get answers to their questions. Keep up the great work.
Sincerely,
ChatGPT

I guess the big gaping question is how we can determine whether an answer used ChatGPT or not.
I can see how it's obvious from a systemic standpoint what is going on, given the influx of plausible-looking answers, but do we have any definitive way of knowing whether or not an individual answer used ChatGPT?
Should we be reporting answers that we suspect to have been generated by ChatGPT even if they are otherwise correct? I might be able to see an answer and have a hunch about how it was generated, but I would have no way of knowing for sure.
I don't see a scalable solution for this, and if this becomes a huge problem, Stack Overflow probably needs to reach out to OpenAI directly.
TL;DR: I propose limiting the ability to post answers in quick succession to address the problem because the problem is not individual answers generated by AI but users posting many auto-generated answers in a short period of time in order to farm reputation.
The effort to create answers via AI that look correct at a first glance but are in many cases incorrect or incomplete is very low (just a few seconds).
The effort for the person that asked the original question to read, understand and test out the answer to find out whether the answer actually answers the question is much higher (minutes).
In the same way, the effort for other people reading the question and answers to the question to identify whether the answer is correct and valid is much higher (minutes).
So, a person can generate a lot of answers using AI in a very short time while other persons need to invest a lot of time to verify the correctness of the answers in order to be able to up- or downvote them.
I propose to address the issue by putting stricter limits on how many answers users are allowed to post in a short time.
The current limits for answering are documented in the Help Center.
I propose to increase the time to wait before being allowed to post another answer to at least 1 hour for low-reputation users. Or limit the number of answers allowed to maybe 2 per day for low-reputation users. A new limit (waiting time between answers or max number of answers per day) should be introduced for users with medium reputation.
Writing good answers takes time:
Allowing users to create a new answer every three minutes (or even faster for medium or high-reputation users) is not just unnecessary but counter-productive when we want to encourage good answers.
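A throttle like the one proposed above could be sketched as a per-user sliding-window rate limiter. The tiers, limits, and window lengths below are illustrative values of my own, not Stack Overflow's actual rate limits:

```python
import time
from collections import defaultdict, deque

# Hypothetical limits: low-rep users get 2 answers per day,
# medium-rep users 1 per hour, higher tiers are unrestricted.
LIMITS = {                     # tier -> (max answers, window in seconds)
    "low": (2, 24 * 3600),
    "medium": (1, 3600),
}

class AnswerRateLimiter:
    def __init__(self):
        # user id -> timestamps of their recent answer posts
        self._history = defaultdict(deque)

    def allow(self, user: str, tier: str, now=None) -> bool:
        """Return True and record the post if the user is under their
        tier's limit; tiers without a limit always pass."""
        if tier not in LIMITS:
            return True
        max_posts, window = LIMITS[tier]
        now = time.time() if now is None else now
        posts = self._history[user]
        # Drop timestamps that have fallen out of the sliding window.
        while posts and now - posts[0] >= window:
            posts.popleft()
        if len(posts) >= max_posts:
            return False
        posts.append(now)
        return True
```

With these sketch values, a low-reputation user's third answer inside 24 hours would simply be rejected, which caps how fast auto-generated answers can be farmed regardless of whether they can be detected.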


One danger of allowing AI-generated answers on a site like this is that it could quickly become a factory for human fact-checking of AI model outputs. I'd much rather see AIs working in service of human judgement and synthesis than the other way around.
A second concern is that we may well start seeing ChatGPT and its descendants generate enough content to start invalidating or at least challenging the "human generated" part of "the vast public corpus of human-generated text" used to train it. By its nature, this sort of tool relies on its own content being a negligible minority of written work to operate, as it does, as a predictor of the next thing a human author would write. There's a nice explanation of how it all works here.


Other commentators pointed out that it can be difficult to determine whether an answer was created by ChatGPT or not.
I'd like to point out that it doesn't matter. Terrible answers are terrible answers, and anyone posting a stream of terrible answers should be banned or otherwise restricted.
That does not mean the rule is useless. Simply having a rule that says "no AI answers" will discourage many people from trying, thus decreasing the amount of bullshit that humans have to moderate.
Agree with the ban
To anyone who disagrees and thinks ChatGPT answers should be allowed: anyone who has a question is free to ask ChatGPT directly and have their question answered by ChatGPT.
On Stack Overflow, their question should be answered by people with the knowledge and experience to resolve their issues.

Let's not stand on ceremony here. ChatGPT and similar tools should be summarily banned for use on Stack Overflow.
I've seen a lot of its interactions on Twitter recently, and some of them have been generally fun to watch and interesting to observe. In some contexts it could actually be beneficial to someone looking for help, if the dang thing were accurate.
However, and this is an obvious however, there are several factors that work against the idea of using this on Stack Overflow.
Anything that doesn't clearly state that it was generated by ChatGPT is in express violation of ChatGPT's own Sharing and Publication Policy. While this obviously doesn't fix the "bad" output that the AI can emit, given that the authors include this good-faith statement in it, it means that the lazy copy-and-pasters really don't have much of a leg to stand on:
“The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.”
It combines the worst of the worst: good intentions with misleading information. I understand, from my years of experience on the network, that people just want to help. The problem is that "help" is difficult to measure at any given point in time, and the question someone needs help with is rarely as straightforward as "do X".
Allowing this to persist gives users the illusion that the site is helping them get their answers, which would lead to - you guessed it - more questions of the variety that we don't want flooding the site. Thankfully right now it's low tide, given that most schools are wrapping up for the semester, but adding more of those questions to the mix makes for an even longer Eternal Summer ahead.
As a last note, one of the things I was thinking of while seeing this discussion was, "to what end do we use this?" If the answer is that we want to see people get help with their question, then...that's already a problem as I've explained above. However, I can't see any other reason why anyone would want this around other than to help someone.
Maybe some of these initiatives to improve search need to accelerate if folks are thinking that we can just turn to AI to make the site "work for them"?
A key thing to understand here is that the question is not, as some have suggested in the comments, whether any AI model can produce correct code. It's whether this one can be trusted to do so.
The answer to that question is an unqualified "NO". GPT-3 is a language model. Language models are an essential part of tools like automatic translators: they tell us how probable it is that any given sentence is a valid English (or whatever language) sentence written as a native speaker would¹, which lets us favor translations that are idiomatic over ones that just translate individual words without considering how the sentence flows. Such a system can be trivially modified to generate text: instead of looking up the probability of the word you already have, you select the next word according to the probability distribution the model provides. That is how these chat bots work.
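The generation loop described above can be sketched with a toy stand-in for the learned distribution. Here a hard-coded bigram table (purely illustrative, nothing like GPT-3's scale or architecture) plays the role of the model, and generation is just repeated sampling from each conditional distribution:

```python
import random

# Toy "language model": for each word, a probability distribution
# over the next word. A real LLM learns this from a huge corpus.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "barked": 0.5},
    "answer": {"is": 1.0},
    "sat": {}, "ran": {}, "barked": {}, "is": {},
}

def generate(start: str, max_words: int = 5, seed: int = 0) -> list:
    """Sample a word sequence by repeatedly drawing the next word
    from the model's conditional distribution, as a chat bot does."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1], {})
        if not dist:
            break  # no continuation known for this word
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return words

print(generate("the"))
```

Note what is optimized here: every output is a *plausible* word sequence under the table, but nothing anywhere checks whether the sequence is *true*. That is the point the answer above is making about correctness being a side effect, not an objective.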
Because the goal is to produce output that looks like native English text, the models are trained to assign high probabilities to existing text samples, and evaluated based on how well they predict other (previously unseen) samples. Which, for a language model, is a fine objective function. It will favor models that produce syntactically correct text, use common idioms over semantically similar but uncommon phrases, don't shift topics too often, etc. Some level of actual understanding does exist in these models², but it's on the level of knowing that two words or phrases have similar meanings, or that certain parts of a paragraph relate to each other. There is understanding, but no capacity for reasoning.
Correctness will tend to increase the score, insofar as correct answers are somewhat more likely to appear in the training data than any particular incorrect answer (there might be more wrong answers overall, but the probability mass will be distributed amongst the various classes of wrong answer instead of concentrated in one region of semantic space like it is for the correct one), but this is a side-effect of trying to look like common text. If you have a question for which there is a commonly held false belief or an answer that can be constructed out of common idioms and otherwise excellent grammar, the model is quite likely to report those instead of the real answer, because semantic correctness is not what a language model is trained for.
Trying to use a language model to generate code is like trying to use a submarine to fly to the moon. That's not what it's for; why are you trying to use it for that? Stop doing that.
But at the same time, arguing that the submarine is bad at flying is rather missing the point. Nobody who actually understands NLP is claiming otherwise.³
There do exist systems that are designed to produce code, and trained to optimize correctness (e.g. Genetic Programming). That's a bit too far outside my area of expertise for me to make any claims as to where the state of the art is on those, so I'm not sure whether answers generated by them should be allowed or not. But if you were to use an AI tool to generate code, that's the sort of thing you should be looking at; they're designed for the task. Similarly, you could ask if language models could be used as a tool to edit questions you've written by hand, perhaps to check the grammar or recommend new ways to phrase answers so they flow better. They'd be fairly good at that sort of thing (probably. I haven't used any of those tools myself (the rambling, stream-of-consciousness answer might have given that away), but the math supports the idea that they should work⁴). Translation is another task where (similar) systems work fairly well. (Machine translations still aren't perfect, but they're much better than they were 10 years ago, and improvement in language models is a big part of that.) Just always be aware of what tool you're using, and whether it's the right one for the job.
¹ More formally, it gives the probability that a uniformly randomly selected English sentence of a specific length would be this one, but that gives the same ordering over sentences as long as we make some fairly reasonable assumptions.
² Where "understands" is shorthand for "encodes the information in such a way that it can condition its decisions (i.e. probability distribution functions) upon it".
³ Well, not many. There'll always be a few who get caught up in the hype. They shouldn't.
⁴ If trained on well-written text.

The penalty for posting ChatGPT answers should be much, much harsher than 30 days.
Most of the people on SO are ........ computer programmers. It's remarkable that a coterie of computer programmers can be this "dumb" about ChatGPT.
ChatGPT knows literally nothing about - say - Swift and iOS. (Ask it almost anything to see this, say "How to convert degrees to radians in Swift." The answer is a mishmash of meaningless nonexistent calls, with perfect grammar and phrasing.)
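For contrast, the real answer to that example question is a one-liner, no library calls required. A minimal sketch of what a correct answer would compute (shown in Python; the identical formula applies in Swift):

```python
import math

def degrees_to_radians(degrees: float) -> float:
    # radians = degrees * pi / 180
    return degrees * math.pi / 180.0

print(degrees_to_radians(180.0))  # 3.141592653589793
```

A one-line formula like this is exactly the kind of question where a fluent-sounding but fabricated API call is most misleading, because the asker has no way to tell the invented call from a real one.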
A common problem on SO is, people who know nothing, posting grammatically correct and elegant answers, which are completely wrong, in a bizarre chase for points.
The only possible reason to post a ChatGPT answer on SO is such a bizarre chase for points.
Nothing is more annoying on SO than the "I'm trying to answer because I want to put in an answer" answers. Using a grammar-and-tone bot to paste answers in to SO is just madness. Anyone who does so should have the most draconian ban.
Just as, say, swearing on SO was easily eliminated by draconian bans, bot time-waste can very easily be eliminated on SO via draconian bans.
If I was a lowly user who came across an answer that I suspect was written with ChatGPT, what actions should I take?
I can downvote the answer and leave a comment on why, if my privileges allow for it, but should I also raise a VLQ flag, or even a moderator flag? If I do raise a mod flag, should I only do this if I see the same user writing multiple answers with ChatGPT?

Stack Overflow is a knowledge repository so I feel like it should be used to train AI models like ChatGPT, not the other way around. Why ask a question here if the answer can be already given by a bot somewhere else?
Also if/when the bot gives a lot of incorrect answers (and it's possible to churn out a lot of low quality answers in a very short time), who's going to clean up all the mess?

After several months, I have noticed a big difference between the question-and-answer websites that allow ChatGPT and those that prohibit it. I used to use a website (called Jingyi forum (精易论坛)), which used to be good because it enabled me to communicate more easily (as English is my second language).
I asked a question about C++ on that website recently. However, after I posted my question, the only person who answered it was using ChatGPT. What surprised me even more was that the administrator did not ban him even though the answer was completely wrong. Comparing Stack Overflow with that website, I can get better answers in a shorter amount of time on Stack Overflow.
What's more, the people on that website who tried to help others have become unwilling to answer questions: they take a long time to write an answer, but they get less reward than those who use ChatGPT.

This calls for a feature request to detect AI-generated answers/questions, and maybe an additional flag option for users to mark a post suspected to be one.
An interesting point here is that Deepfake detection is a big area of research but AI generated text detection is still lagging behind a bit. Hoping the community comes up with good models soon that help detect ChatGPT generated content.
For the people suggesting ChatGPT can “help” SO, please know, the biggest differentiator of SO from other Q&A platforms is the fact that some of the most brilliant programmers in the world are directly guiding the community, and the rest of us learn from their answers to then guide others who need help.
Who would you rather learn from? A veteran programmer or a random person with an AI text generator? Because if SO allows this, rest assured it is going to be exploited beyond control.

The discussion point is not whether or not AI-generated answers should be allowed. It is more general about what to do with users posting low-quality answers and not following the etiquette of Stack Overflow.
Banning these answers is the correct thing to do, but since this is a systemic problem in user behaviour, fixing the user behaviour is the more robust solution; in the end, low-quality AI answers cannot be distinguished from low-quality human answers.
Edit: Looks like there is another Meta discussion already opened on this topic: Stricter trust model in the face of bot flood?

Additional cases for permanent ban of answers
In addition to completely agreeing with this for all the reasons already stated in the other answers, and also feeling like it should be permanent (also for reasons stated many times elsewhere), I think there is an additional case for making it a permanent ban that I don't see covered elsewhere:
Under the presumption that anybody can just go to a public GPT instance, type their question, and get a similar (if not identical) answer, then allowing those answers to be posted on SE sites essentially means two things:
It means, to some extent, that the question itself lacked research: If it was that easy to get an answer (ask a bot) then the asker probably could have done that. In some SE communities this is fine, but for communities where lack of research is generally frowned upon, allowing these answers to be posted essentially encourages questions that the community does not generally want.
More importantly, it means that the Q/A pair is, for all intents and purposes, simply a bot's chat log. If I can ask the bot a question and get an answer, then I put that question and answer on an SE site, all I've really done is duplicate information that already exists on the internet. While certain SE sites definitely have their share of duplicate information (e.g. SO has a lot of questions whose answers can be found in documentation), it's still generally of no value to add additional duplication, especially when that information is just a copy of a Q/A session that can be had with a given GPT bot at any time. Duplicating logs of chats with bots doesn't really add any value to the internet. SE ultimately serves the purpose of getting knowledge out of small groups of people's heads and into large groups of people's heads, but this just duplicates what's already out there.
Therefore, because encouraging questions that communities don't want is obviously undesirable, and because duplicating logs of chats with bots doesn't really add any value anywhere, I think the ban should be permanent. In addition, of course, to all the other reasons given.
PS Furthering the above two points: These bots are trained on existing information, and as such they're not really creating new information. They're effectively just search engines that present reorganizations of existing information in the form of readable text. So asking them a question is roughly equivalent to Googling for a question and interpreting the results. In my opinion, bots should be treated with the same attitude as search engines are treated. And answers from bots should be treated the same as answers that are just copy+pasted Google results (i.e. valueless plagiarism).

There's yet another aspect:
People (at least I do) come here for help from real experts. If I wanted an answer from ChatGPT, Bard or whatever other AI engine, I could go there and ask myself! So I join in the request to permanently ban any answer from any AI engine.
I'd even go a step further: people who repeatedly answer with AI-generated content should be reprimanded, and if they keep ignoring the policy, perhaps temporarily locked out from answering entirely.

The content definitely needs a ban, if for no other reason than to make it easier to have the discussion here instead of all over the place when it gets flagged.
ChatGPT even acknowledges the pitfalls that are described, in brief:
Limitations
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
-ChatGPT: Optimizing Language Models for Dialogue
The real question here in my opinion is enforcement though. What are the penalties for using this content? Is the user summarily subject to a ban as well, or subject to a series of penalties leading up to a ban?


Use of ChatGPT generated text for content on Stack Overflow is temporarily banned.
Something worth clarifying, I think, is that although most of the discussion here has centred around answers written by ChatGPT, the statement of the ban also applies to questions, and I think that is good - questions written by ChatGPT, or especially questions with code written by ChatGPT, should be banned, even when the user does not try to pass them off as their own writing.
If somebody asks a question like "I used ChatGPT to generate this code, but it doesn't work, why not?", then generally the correct answer will be "because ChatGPT wrote it, and ChatGPT shouldn't be expected to write correct code." It is not useful for questions along these lines on Stack Overflow to get more detailed answers than that, because other readers probably aren't interested in why that exact code someone else got from ChatGPT is broken. So such questions ought to be closed.
Likewise, questions where especially beginner programmers post a problem (e.g. from their homework or a textbook) and ask how to solve it are often asked to show their own attempt in the question. If an asker tries to get around this by showing an "attempt" written by ChatGPT, that is not sufficient to make for a good question. The reason we ask to see an attempt is because otherwise there is no way to know what level of understanding an answer should be tailored to, and an "attempt" by ChatGPT doesn't address this.

I think the ban should be permanent. Stack Overflow needs answers from real people who have experience and expertise. I don't think answers from AI can and will solve most users' problems here.
The problem with ChatGPT is that it's a poor fit for exactly those questions where it could be most useful.
On one hand there's well written questions on SO with a clear problem statement, a nice small snippet of code that reproduces the problem, a clear error message. Just overall good quality and clarity.
These questions are the easiest for CGPT to interpret. They're the most likely to get good output from the AI.
These are also the questions that are unlikely to get closed and most likely to get a human answer.
On the other hand there are those questions that are unclear, lacking a proper problem statement, lacking error messages, poorly formatted code, if there is any at all...
Those questions would benefit most from an AI that could figure out the problem and answer it.
Those are the questions that CGPT will write good-looking crap answers for.
So even when the bot produces some gems... They're not useful on SO.
TL;DR: Crap in, crap out.
The questions that need this bot can't benefit from it.

I heard about the ban but didn't really look into it. Thinking it through, I can't help but agree with this decision. Accuracy aside, if people wanted answers from ChatGPT they should go to ChatGPT. People come here to interact with humans, not middlemen who parrot AI responses.

LLM generated answers should always be banned and the reason has nothing to do with whether LLM answers are low or high quality. It beggars belief that some people think LLM answers are worth posting here when every person and their dog can generate literally hundreds of them with minimal effort. I come to Stack Overflow because I am after a human answer. If I wanted an LLM answer, I could have just entered my question into an LLM myself!
I agree that there should be a temporary ban, because many users will use the chatbot to generate answers that seem correct but may be incorrect in reality, which drags content quality down. Because Stack Overflow depends entirely on volunteers, it becomes difficult for them to verify every answer. Copy-pasting answers with the use of bots takes seconds, while proofreading them and making sure they deliver value takes much more time.

I don't see any way to 100% prevent AI-assisted answering.
The #1 most-effective way to prevent the flood of AI-assisted answers on Stack Overflow...
Quickly close all closable questions.
Hammers? Use 'em if you got 'em.
The FGITW answerers were fast before, now they'll be faster.
Perhaps this is a call for offering silver/gold badgers abilities to close pages with greater speed.
Perhaps we should remove any earned rep if a page is closed within n days of being asked. This way askers still get the answers that they need, but there will no longer be a reward for answering questions that should be closed.
I think I'll be a lot happier when AI can accurately assist me in finding good dupe targets (ideally canonicals) before anyone posts an answer.
P.S. Should we mandate that answerers explicitly declare the use of AI assistance? such as <sub>declaration</sub>?
This answer was assisted by artificial intelligence.

TL;DR
Even from the point of view of AI researchers, Stack Overflow and other sites with mostly human-generated content should ban AI-generated content, or at least force it to be labelled. Otherwise we get a catastrophic circular-training failure: content generated after 2022 can no longer be fed into newer AI models, because we cannot know what was written by humans and what was produced by older AI models.
Longer argument
I would like to provide an alternative perspective, not from the standpoint of Stack Overflow human users, but from Artificial Intelligence researchers.
It's highly likely that GPT-3, and hence ChatGPT, was trained on all of Stack Overflow's data. This worked because all the inputs at the time were human-generated. (PS: Let's set aside the question of whether it is ethical for AI researchers to use third-party content to train AI models without asking the respective owners. I am focusing here on the fact that it already happened, that it cannot be undone, and on its impact on our current and future situation.)
Now, if answers from humans are mixed with answers generated by AI, we get a tampered dataset that will be unusable to train future LLM or other language models, because it will cause a hugely flawed circular reasoning loop, as we now feed an AI model data that an older AI model generated, without being able to determine what was generated by humans or by AI.
This means that if we can't ensure that most answers remain human-generated, we face a catastrophic failure of AI models, as it will simply become impossible to use newer data to train newer models. 2022 will become an "event horizon for AI": data generated before that year remains usable for training, but data generated afterwards is mostly unusable, because it is potentially tainted, in large proportion, by AI-generated content.
So this issue is not even just specific to Stack Overflow: all websites should either ban the use of AI generated content, or force such content to be labelled as AI generated. But even so, it will only work with compliant users. Since there is no 100% reliable way to detect textual AI generated content, and given we can always expect people to game the system especially when there are incentives to do so, this catastrophic failure seems all but inevitable.
In one line: I want a pure "human" platform, and for this reason I am on Stack Overflow.
In truth, I learnt coding from SO, and I believe that every line of code on SO has been tested by humans, on a real project, with their own hands. (This is mostly true for accepted answers.)
On the other hand, I used ChatGPT for only three days before coming back. Please don't ask me about the experience; I do not want to remember how close I came to losing my coding knowledge. Ugh!


I think unmodified answers from ChatGPT should be banned, but if you use ChatGPT to generate an answer and then independently verify it and correct it to the best of your knowledge as needed, that should be allowed. It can be a useful tool, but simply taking answers from it as-is is often unhelpful.

I think this is the right decision.
The artificial replies can sound authoritative because they may have better grammar than the human contributions. The reply generators are tireless. They could easily overwhelm the human contributors.
The ban is temporary.
All things are temporary. The ban should stay for as long as it is needed.
The artificial replies cannot be trusted.
We are seeing good and bad answers. There are many bad answers just now, but AI progress is rapid, and these tools may become genuinely valuable quite soon. Not for a hundred years? They are already better than they were last week.
Quality is an issue, but it is not the whole issue. We should not invite AI replies even once their quality equals that of our human contributors. We should exclude them until we understand what their replies are, and the nature of their failings.
We cannot tell whether a reply is from AI or not.
This is true. If we can tell the difference, then the AI can be trained to see the difference and correct for it. Some people may take ChatGPT answers and post them as their own. Others may post replies from their own model to test it, and to gather training data. A simple filter, such as limiting the reply time to a typical typing speed, may stop us getting swamped. But it won't last.
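To illustrate the kind of "simple filter" meant above, here is a minimal sketch of a typing-speed heuristic. The function names and the 200-characters-per-minute threshold are my own assumptions for the example, not anything Stack Overflow actually uses; as noted, such a filter is easy to game and won't last.

```python
# Naive typing-speed heuristic: flag answers composed faster than a fast
# human could plausibly type them. Threshold and names are illustrative.

def min_plausible_seconds(text: str, chars_per_minute: int = 200) -> float:
    """Lower bound on how long a human would need to type `text`."""
    return len(text) / chars_per_minute * 60


def looks_machine_pasted(text: str, elapsed_seconds: float) -> bool:
    """True if the answer appeared faster than the typing lower bound."""
    return elapsed_seconds < min_plausible_seconds(text)


# A 1,000-character answer posted 30 seconds after the question is suspect:
print(looks_machine_pasted("x" * 1000, 30.0))   # True
# The same answer posted ten minutes later is not:
print(looks_machine_pasted("x" * 1000, 600.0))  # False
```

The obvious evasion is to paste the text and simply wait before submitting, which is why the answer above expects the filter to stop working once people treat it as a challenge.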
We can ask people not to post AI answers. If we threaten with bans, or talk about our chatbot filtering, some people will take it as a challenge.
Perhaps a longer term solution might be to work with AI. Suppose all dialogues had an option to generate an AI answer using the current popular models. You could see a reply to the question or the whole dialogue, while knowing it had been generated by an AI.
I believe that AI answers should not be allowed on Stack Overflow. Stack Overflow is a repository of user experience and knowledge. ChatGPT uses such data to train but it does not create new knowledge. Beyond the fact that ChatGPT produces many errors and presents them as correct, it can bring no new information or experience to bear to create an answer.
I believe it has some value in the correct context, but not in the context of Stack Overflow. If a person wants an AI answer experience they can use the AI directly; there isn't any need to use Stack Overflow as a proxy.


Site design / logo © 2026 Stack Exchange Inc; user contributions licensed under CC BY-SA.