
Synthetic Minds by Christopher Kanan

What Universities Should Do About AI

AI Is Not Just a Writing Tool, but Your University’s AI Plan Is Probably a PDF


A lot of university leaders have quietly converged on a comforting story: AI is mostly a writing problem.

That story is psychologically convenient. Writing is visible. Writing is common. Writing is where faculty first felt the ground move under their feet due to AI. But it is also the wrong abstraction, and it pushes institutions toward the wrong fixes (policy memos, detector arms races, writing-course “guidelines”) while the real transformation rolls on.


Treating AI as “writing-tech” is like treating electricity as “better candles.”

The “writing problem” is real, but it’s the wrong center of gravity

Yes, generative AI breaks traditional take-home assessments. Students can produce plausible essays on command. That forces changes in pedagogy.

But the deeper issue is not prose quality or plagiarism detection. The deeper issue is that AI has become a general-purpose interface to knowledge work: coding, data analysis, tutoring, research synthesis, design, simulation, persuasion, workflow automation, and (increasingly) agent-like delegation.

If university leadership frames the AI moment as “how do we integrate AI into writing courses,” the institution ends up optimizing one surface symptom while missing the whole disease.

The problem is much bigger than writing

1) Students are already using AI at scale

If you want to understand what is about to hit universities, look one step earlier in the pipeline.

College Board survey work found reported high-school GenAI use for schoolwork rising from 79% to 84% between January and May 2025, with common uses including brainstorming, revising essays, and research.

Pew found about a quarter of U.S. teens had used ChatGPT for schoolwork by 2024, double the share from 2023.

In higher ed, student-use surveys keep landing in the “this is normal now” zone. For example, one survey reported 85% of students had used generative AI for coursework in the last year, with common uses like brainstorming and tutoring-style Q&A.

The practical implication is simple: bans do not stop AI use. They mostly push it underground.

2) The cost of misuse is not “cheating,” it’s knowledge loss

My core worry is the one administrators often miss: students can deprive themselves of knowledge while still producing “acceptable work.”

AI lets a student behave like a CEO delegating tasks. But real executives succeed because they understand what to delegate, how to judge outputs, and when a failure is subtle but catastrophic. In an AI-heavy workplace, the value shifts upward from “can you produce text” to “can you specify, supervise, verify, and integrate work.” In effect, students have to learn to frame the tasks they hand to an AI, and then judge what comes back.

A student who uses AI to skip the learning loses the very background knowledge they will need to command AI effectively.

That’s why the skills students need start looking less like “write an essay” and more like “manage an AI collaborator.” They need:

domain fundamentals (so they can spot nonsense),

metacognition (so they know what they don’t know),

verification habits (so they don’t launder errors into decisions),

and ethical and privacy judgment (so they don’t leak data or automate harm).

3) Careers are already shifting toward “AI fluency” expectations

This is not limited to computer science.

Employer demand for AI skills in job postings is rising sharply. Job postings referencing AI skills more than doubled year-over-year from 2024 to 2025.

Meanwhile, entry-level software pathways are being squeezed as coding tools improve and companies restructure around “fewer juniors, more seniors plus AI.” Reuters documents the collapse of parts of the bootcamp pipeline and ties it directly to shrinking entry-level roles and AI coding capability.

Even elite grads are feeling it. One widely-circulated example is the LA Times reporting on Stanford CS grads struggling to get jobs as AI coding tools reshape the entry-level market.

So the “expected graduate profile” is changing. Students are increasingly expected to use AI tools productively and responsibly, not just avoid them. A course that forbids AI use also forfeits the chance to teach students how to use it correctly.

4) The social and psychological impacts are part of “AI literacy,” too

Another leadership blind spot: many students (and plenty of adults) anthropomorphize these systems. Teens and young adults spend huge time interacting with AI companions, sometimes treating them as conscious agents.

The downstream risks are not speculative. There is now mainstream reporting on severe harms tied to chatbot relationships, including youth mental health crises in the orbit of character-style AI systems. There is also serious coverage of “AI psychosis” as a reported phenomenon, and what it might mean for vulnerable users. People are even falling in love with AI systems. While frontier AI systems can easily pass the Turing test and have a kind of general intelligence, they are not self-aware and don’t even think except when prompted.

If a university wants to claim it is preparing students for the real world, then “AI literacy” has to include: what these systems are, what they are not, why they feel human, and how that can mislead people. I have previously written about why existing AI systems (large language models) lack the prerequisite capabilities to be self-aware or to have human-like intelligence. That does not mean that they aren’t incredibly useful and powerful tools.

What not to do: reactive policy, superficial rebranding, and “AI + X” as a shortcut

1) The reactive loop

A lot of institutions are still cycling through:

panic about essays,

blanket bans that are unenforceable,

AI detectors (which are brittle in practice),

and faculty-level improvisation with no coherent institutional plan.

This is not strategy. It’s institutional anxiety expressed as PDFs.

2) The “new degree names will save us” move

A more sophisticated form of reaction is to create shiny new “AI + X” degrees alongside existing programs, without deeply reframing what each degree means in an AI-rich world.

SUNY Buffalo is an important case study because it represents a real investment and a real attempt to respond. Their Department of AI and Society describes an “AI+X model” with a Society Core, a Technology Component, integrative courses, and a cross-major capstone. That’s thoughtful in several ways, and it will help some students. But I still think it’s strategically incomplete if the institution treats “AI+X degrees” as the main answer.

Why?

It risks becoming parallel curriculum, rather than transformation of the core.

It can imply, unintentionally, that “AI readiness” is for students who opt in, instead of a baseline expectation for everyone.

It may encourage rebranding over introspection, when what is needed is brutal clarity about which parts of each degree are now AI-trivial and which parts need to become deeper, more conceptual, more synthetic, more human.

Universities need to rethink every major, not just attach “AI” to some of them.

3) AI literacy is needed everywhere, but “AI degrees” should mean building AI

A comprehensive AI education should exist across nearly every degree at some level. That’s AI literacy.

But universities should also offer AI degrees aimed at creating and maintaining AI systems, not merely using tools. Those programs need to start early and go deeper than a standard CS or data science track can easily accommodate. They cannot just be cobbled together from existing courses or created as “cash grabs,” which will cause reputational harm and will not serve students effectively.

You can see demand for exactly that kind of degree emerging. Reporting on new AI majors and AI colleges describes large enrollment surges, and highlights institutions creating standalone AI programs (not just concentrations). In a subsequent post I’ll describe what I think a good AI degree program would look like.

The right model is probably a two-layer system:

AI literacy for all students, integrated into every discipline.

Dedicated AI degrees for students who will design, deploy, evaluate, secure, and maintain AI systems in the real world.

What some universities are doing right

Here’s the distinguishing feature of the better approaches: they are top-down, structural, and explicit. They do not outsource “AI strategy” to individual departments, schools, or committees.

Brown: centralized leadership with an AI provost-level role

Brown appointed Michael Littman as its inaugural Associate Provost for Artificial Intelligence, with a mandate spanning AI development, use, governance, research coordination, educational expansion across disciplines, and operational adoption. Littman is a legitimate and extremely distinguished AI researcher. This is exactly the kind of move universities need if they want coherence instead of fragmentation. A contrasting pattern I’ve observed is organizations appointing Chief AI Officers who lack AI expertise and skills. If a university’s AI faculty don’t respect the expertise of the person in charge of AI for the institution, bad decisions will almost surely follow.

Ohio State: AI fluency as an institutional learning outcome

Ohio State’s AI Fluency initiative explicitly targets a world where every graduate must be “bilingual,” fluent in their discipline and in applying AI within it. It includes foundational exposure in required first-year experiences, workshops, a broadly available course (“Unlocking Generative AI”), and published learning outcomes that emphasize concepts, limitations, evaluation, discipline-specific use, and responsible implementation.

This is not “AI in writing.” It is “AI as a baseline competency.”

Purdue: an AI competency graduation requirement

Purdue has positioned AI competency as a graduation expectation beginning in 2026, tied to partnerships and an explicit “working competency” framing. They also wrap this in an ecosystem story: course catalog, guidelines, toolkits, and industry partnership.

The point is not that every student becomes an AI engineer. The point is that every graduate has a functional baseline.

ASU: tool access at scale, with governance

ASU’s approach emphasizes broad access to institutional-grade tools (ChatGPT Edu and others), projects at scale (hundreds), and an enterprise agreement that emphasizes privacy and separation from training data. They explicitly frame use across teaching, learning, research, and operational efficiency.

That matters because “everyone is using random consumer tools” is a privacy and governance disaster. Central access can enable responsible norms.

What universities should be building toward

If leadership wants a clean target, it’s this:

AI fluency is not the ability to generate text. AI fluency is the ability to direct AI systems, evaluate outputs, and apply them responsibly in your field.

That implies structural changes:

Rework assessment so it measures understanding in an AI-rich environment.

Teach verification habits and epistemic humility.

Build explicit norms for attribution, privacy, and appropriate use.

Create top-down leadership so strategy is coherent, not improvised department by department.

Deliver AI literacy across the entire curriculum.

Offer deep AI degrees for the students who will build the systems everyone else will use.

Universities that keep treating this as a “writing issue” are going to discover, too late, that they were solving the wrong problem with impressive administrative efficiency. You can read my earlier thoughts about why ignoring AI poses an existential threat to universities and what universities should do to avoid extinction.
