教師大戰(zhàn)聊天機器人:我在人工智能時代的課堂之旅


我當時是個新手,第一次要應(yīng)對所有課堂上常見的難題。把人工智能也一股腦兒地塞進來,感覺就像在恐慌發(fā)作時猛灌一杯咖啡。

彼得·C·貝克

2026年3月3日星期二 05:00 GMT

兩年前,39歲的我開始接受教師培訓(xùn)。我想教英語——幫助年輕人成為更優(yōu)秀的讀者、作家和思考者,并與他們建立更深層次的文學聯(lián)系。在做了15年的自由撰稿人和小說家之后,我自信自己能夠有所貢獻。但隨著培訓(xùn)的深入,我卻越來越感到迷茫。有一個問題始終縈繞在我心頭,讓我不知所措:我們該如何應(yīng)對人工智能?

眼前的難題是:所有學生現(xiàn)在都能免費使用在線聊天機器人,并能按需生成流暢且相當復(fù)雜的文章,這對英語教學意味著什么?這個問題其實是一系列由來已久的教學難題中最棘手的一個:我們在學校里究竟想做什么?我們應(yīng)該如何去做?我們?nèi)绾闻袛嗍欠癯晒Γ课耶敃r是個新手,第一次面對這一切。把人工智能引入教學,感覺就像在恐慌發(fā)作時猛灌一杯咖啡。

我開始瘋狂地搜尋關(guān)于人工智能和英語課堂的各種觀點:教育學播客、教育學Substacks、教育學YouTube頻道。我的算法推送捕捉到了我的興趣,并開始迎合我的需求,為我提供源源不斷的資訊——其中也包括科技公司鋪天蓋地的廣告——這些內(nèi)容都聲稱能幫助我思考這些迫切的問題,并確保我能為我的學生盡到應(yīng)盡的責任。

我很快意識到,這是一個充滿激烈爭論、甚至常常充滿敵意的世界。一方(簡單來說)是人工智能的反對者:教師和教育專家們認為,人工智能無異于貪婪的科技公司對課堂教學核心活動的生存威脅。他們認為,學生需要學習如何克服困難:閱讀復(fù)雜的文本,構(gòu)建復(fù)雜的論點。他們需要明白,這些過程充滿摩擦和不確定性,他們需要學會接受這一事實,而不是逃避。而一鍵式寫作工具的出現(xiàn),讓逃避變得太容易了。

人工智能反對者分享了一些令人震驚的故事:學生們提交的論文竟然是人工智能生成的,連最簡單的問題都回答不了,或者引用了聊天機器人“憑空想象”出來的根本不存在的資料。他們還發(fā)布了一些研究,表明使用聊天機器人會削弱學生的推理能力,甚至阻礙大腦的生理發(fā)育。他們提出了諸多倫理方面的擔憂,包括人工智能對環(huán)境的影響、聊天機器人對受版權(quán)保護的文字的依賴,以及大型科技公司的寡頭政治傾向。對大多數(shù)反對者來說,解決方案是構(gòu)建一個人工智能無法觸及的課堂。他們討論了轉(zhuǎn)向課堂論文寫作,或許可以采用手寫的方式。他們還探討了恢復(fù)口試和測驗的可行性。

另一邊則是人工智能的擁護者。我說的不是那些瘋狂的科技高管,他們大多是男性,歇斯底里地宣稱人工智能很快就會終結(jié)我們所知的學校教育,或者已經(jīng)意味著讀書是浪費時間。我說的是那些教師和專家,他們常常熱情洋溢地論證,盡管人工智能存在諸多教學風險,但也蘊藏著巨大的潛力。聊天機器人并非作弊機器,而是強大的助教,能夠同時與課堂上的每一位學生互動,確保每個人都能在需要時獲得個性化的反饋,并巧妙地引導(dǎo)每位學生沿著最適合自己的學習路徑,實現(xiàn)最佳學習效果。在擁護者看來,那些反對者本能地排斥人工智能工具,反映出他們對人工智能的潛力缺乏了解;這也對他們的學生造成了損害,因為學生們畢業(yè)時并沒有掌握任何能在大學和未來職業(yè)生涯中發(fā)揮作用的技術(shù)技能。

當我努力理清反對者和支持者之間的爭論,試圖解讀他們各自援引的統(tǒng)計數(shù)據(jù)和學術(shù)研究時,我的焦慮與日俱增。我注意到教師群體(包括我自己)的一個共同點:因為我們對自己的職責無比認真,所以我們常常害怕做錯事:使用無效或已被證偽的教學策略,未能給予學生他們真正需要的東西。我們常常從經(jīng)驗中體會到,優(yōu)秀的教師能夠改變?nèi)松晃覀円仓溃愀獾慕處熗瑯訒粝律羁痰挠∮洠绕涫窃谟⒄Z教學領(lǐng)域,他們往往是教師兼作家凱莉·加拉格爾所說的“閱讀殺手”——扼殺學生對閱讀的美好感受——的罪魁禍首。我們渴望被歸入正確的行列,卻又害怕被歸入錯誤的行列。

我認為,在這種恐懼之下,隱藏著一種更為根本的恐懼:害怕被視為——更不用說害怕自己真的就是——與時代脫節(jié)的失敗者,躲在教室里和孩子們待在一起,因為在這個瞬息萬變的成人世界里,我們無處容身。我對這種恐懼感同身受。我決心不被科技炒作所蒙蔽,但我也不想因為拒絕考慮可能有用的新工具而讓自己陷入困境。

我需要的只是一個初步的結(jié)論。我不需要決定人工智能究竟是邪惡的騙局,還是未來的發(fā)展方向。我也不需要決定人工智能對教育的未來意味著什么。我需要決定的是,人工智能對我即將執(zhí)教的高中英語課意味著什么。我忐忑不安地下載了更多播客,在郵箱里塞滿了Substacks的郵件,還看了更多YouTube視頻,希望通過吸收更多相關(guān)資料,能提高我做出正確判斷的幾率,或者至少能減輕我對犯錯的恐懼。

去年春天,我開始每周花15個小時在芝加哥郊區(qū)一所大型學校觀摩一位資深英語老師的教學。這所學校是那種很多家庭專門為了“好學校”而搬遷的地方。我的觀摩老師——我們姑且稱她為艾米麗——教兩個年齡段的學生:一個是剛上高中的14歲學生,另一個是即將畢業(yè)的18歲學生。我在她的課堂上看到的景象,立刻讓我下定決心要加入那些反對她教學的行列。

我親眼目睹了人工智能與課堂相關(guān)文章中所描述的種種顛覆性影響:完全由人工智能生成的論文;人工智能臆造的引言;師生之間就“究竟什么才是可證明的”展開的緊張對話。我陪著艾米麗批改作業(yè),和她一起為那些模棱兩可的案例而焦慮,試圖區(qū)分學生的胡言亂語和人工智能的胡言亂語,區(qū)分學生的進步和人工智能的潤色。

我之所以成為一名教師,很大程度上是因為我想花時間與年輕人的寫作相處,認真傾聽并給予他們應(yīng)有的尊重。在艾米麗的指導(dǎo)下,我看到了人工智能的存在(甚至是潛在的存在)是如何干擾這一過程的。我體會到了面對一篇論文時,那種獨特的絕望感——不是努力尋找最佳的回應(yīng)方式,而是試圖探究其背后的原因。我還發(fā)現(xiàn),教師們自身也時刻受到人工智能輔助工具的狂轟濫炸,這些工具不僅來自電子郵件和社交媒體廣告,更重要的是,它們還集成在學校的電子郵件和成績管理軟件中。

艾米麗的學生都配備了學校發(fā)的筆記本電腦,她的電腦上裝了一個程序,可以讓她監(jiān)控每個學生的屏幕內(nèi)容;所有學生的屏幕內(nèi)容同時顯示在屏幕上,排列成網(wǎng)格狀,就像一排閉路電視監(jiān)控器。使用這個程序總是讓人感到不安——老大哥就在我身邊——但也總是讓人著迷。有些學生完全不用人工智能,至少在課堂上不用。而另一些學生則一有機會就用,幾乎是下意識地把正在做的問題都輸入進去。至少有一個學生習慣把每個新科目都輸入到ChatGPT里,讓它生成筆記,以便在被點名時可以查閱。我經(jīng)常看到,即使學生們并非有意使用人工智能,他們也會不知不覺地被引導(dǎo)到人工智能的使用中。我習慣了看著學生在谷歌上搜索某個主題(比如“羅密歐與朱麗葉的關(guān)鍵主題”),閱讀現(xiàn)在大多數(shù)谷歌搜索結(jié)果頂部顯示的AI生成的答案,然后點擊“在AI模式下深入探索”——突然間,他們就開始和谷歌的聊天機器人Gemini聊天了,而Gemini總是樂于推銷自己的功能。“我應(yīng)該詳細闡述其中一個或多個主題嗎?我應(yīng)該為一篇關(guān)于這個主題的文章起草一個開頭段落嗎?”

艾米莉告訴我,她現(xiàn)在布置的大部分閱讀作業(yè)都必須在課堂上完成,而且她會朗讀很多內(nèi)容,尤其是在學年伊始。我感到震驚。沒錯,我讀過無數(shù)關(guān)于“當代閱讀危機”的報紙專題報道,但親眼目睹青少年閱讀水平的普遍下降,仍然讓我感到沮喪。當初我決定成為一名教師時,我的腦海中充滿了浪漫的憧憬:我?guī)ьI(lǐng)學生們(“哦,船長,我的船長! ”)與文學的復(fù)雜性及其與生活的聯(lián)系展開較量。在我的憧憬中,閱讀本身大多發(fā)生在課堂之外,在教室的圍墻之外。我的許多學生似乎缺乏自主閱讀的能力——而且,到了寫作的時候,他們中的許多人都會下意識地求助于人工智能——這對我的教師抱負意味著什么?我沮喪地想,我是否選擇了一項注定會被歷史的不可阻擋的力量摧毀的事業(yè)。

但當我看到艾米莉給全班朗讀時,我的心情頓時好了起來。對于一個作家來說,描述所謂的課堂魔力有點像描述性愛;很多時候,這種嘗試只會寫出讓人尷尬又缺乏說服力的句子。然而:我還是覺得有必要告訴你,朗讀時間有時確實充滿魔力。

我到校后不久,低年級的學生們就開始讀《西線無戰(zhàn)事》。起初,學生們都難以置信:我們真的又要讀一整本書嗎?后來,在艾米莉的幫助下,他們逐漸理解了這本書的內(nèi)容:第一次世界大戰(zhàn)、年輕的德國士兵、塹壕戰(zhàn)、天真無邪的喪失、每日與死亡擦肩而過帶來的心理創(chuàng)傷、與后方的隔絕。筆記本電腦和手機都被收了起來。(按照學校規(guī)定,它們都放在教室門口的袋子里。)每個人都知道,他們可以隨時舉手提問或發(fā)表意見。有時,艾米莉會停下來,指出她懷疑學生們感到困惑但又不敢承認的地方,或者他們自己都沒意識到的誤讀,又或者那些可以有多種解讀的句子。日復(fù)一日,在不易察覺的細微變化中,這本書從一本晦澀難懂的巨著變成了一個熟悉的伙伴。

不知何時,學生們不再抱怨,而是開始投入其中:他們渴望知道故事的結(jié)局,為跌宕起伏的情節(jié)驚嘆不已,情不自禁地、充滿感情地思考著書中人物的行為動機。埃里希·瑪麗亞·雷馬克為何要如此描寫?然后,有一天,奇跡發(fā)生了:一群2025年的美國14歲少年,仿佛置身于一部講述1910年代德國19歲青年故事的書中,他們既透過自身生活的視角審視這本書,又透過書中的視角審視自己的生活。我能真切地感受到:教室里彌漫著一種微妙的能量流動,這種能量在學生、老師以及近一個世紀前寫在紙上的文字之間交織碰撞。

我親眼目睹的人工智能種種惡作劇令人沮喪,而我親眼目睹的不使用人工智能的教學卻令人振奮。在觀察期結(jié)束前,艾米麗讓我親自帶領(lǐng)一些閱讀活動,有好幾次我都感到無比暢快。我感覺自己恨不得大聲宣告:我拒絕人工智能——而且我為此感到自豪!

然而,整個夏天,我的疑慮又悄然襲來。盡管在艾米麗的課堂上閱讀課令人振奮,但我知道它并沒有真正解答我關(guān)于人工智能和課堂教學的所有(甚至任何)疑問。我知道秋季我將重返校園,這次是以實習教師的身份,承擔大部分的備課和批改作業(yè)的責任。我需要做出更多決定,其中最核心的是關(guān)于寫作的。考慮到我對聊天機器人的擔憂,我應(yīng)該讓學生寫些什么?何時寫?又該如何寫?

因為我接觸過(并且還在繼續(xù)接觸)大量關(guān)于人工智能和教學的內(nèi)容,所以我能夠在腦海中進行一場截然不同的觀點之間的辯論。

我:“全班一起閱讀,沒有任何人工智能或電子設(shè)備的輔助,這種感覺很棒。我對此非常肯定。我想以此為起點。”

我也是:“但是學生們到底學到了什么?你怎么知道?”

我:“嗯,我得以實時聆聽他們的想法演變過程。”

我也是:“但是每個學生都參與了嗎?”

我:“嗯,沒有。但是之后他們都做了很多寫作練習——在教室里,用手寫——我能夠讀懂那些練習。”

我也是:“讀了他們寫的之后,你真的認為每個學生都學到了他們理論上應(yīng)該學到的所有東西嗎?他們都學到了你想讓他們學到的所有內(nèi)容嗎?”

我:“嗯……我想不是。不是全部。不是所有。”

我也是:“如果學生們在閱讀和討論之后,坐下來寫作時,每個人都能使用一個人工智能聊天機器人,該機器人可以根據(jù)他們現(xiàn)有的理解水平和學習風格提供量身定制的反饋,那會怎么樣?如果作為老師的你能夠訓(xùn)練這個聊天機器人,使其行為與你對作業(yè)和整個課程的目標完全一致,那又會怎么樣?”

我:“嗯,那本來就是我的工作——給他們提供個性化的反饋。”

我也會說:“但是你到底有多少時間做這些?你真的能在每次需要的時候都介入嗎?學生在家寫作的時候怎么辦?作業(yè)截止前一天晚上,他們卻完全搞錯了怎么辦?你為什么不想讓他們知道呢?”

我:(大汗淋漓)

為了盡職調(diào)查,我開始試用人工智能聊天機器人,包括那些專為課堂設(shè)計的,或者帶有某種“學生模式”的機器人。首先,我評估了它們完成最糟糕任務(wù)的能力:我布置了一份作業(yè),并添加了一些簡單的指令——“這篇作業(yè)應(yīng)該像一個15歲學生寫的”、“請?zhí)砑右恍┏R姷钠磳懞驼Z法錯誤”、“不要寫得太流暢”——然后生成了一篇我無法區(qū)分學生寫作水平的文章。在2023年的美好時光里,人們普遍認為老師可以立即識別出機器寫作。但如今,無論好壞,情況已經(jīng)完全不同了。

接下來,我測試了這些聊天機器人一些不那么明顯的惡意用途,比如對草稿進行評論,或者回答關(guān)于作業(yè)的疑問。不同機器人的表現(xiàn)參差不齊,但有些表現(xiàn)非常出色。事實上,它們的表現(xiàn)讓我印象深刻,以至于我開始偶爾把自己的雜志文章草稿發(fā)給這些機器人,時不時地就能收到一些真正有用的即時反饋。坐在電腦前,我仿佛看到一群啦啦隊員在我身后聚集,準備慶祝勝利。

我反復(fù)回想起在艾米麗課堂上一起閱讀的時光,試圖分析究竟是什么讓我覺得如此特別。我想,部分原因在于這項活動如何集中了每個人的注意力。因為所有的筆記本電腦和手機都被收起來了,所以每個人都全神貫注。這真是令人驚嘆。

我開玩笑的。那是在學校。班上同學的注意力時而分散,時而又集中在青少年們不得不考慮的各種事情上:下一節(jié)課的考試;周末的計劃,或者令人擔憂的無所事事;暗戀對象是否也喜歡自己;昨晚聽到父母吵架;還有移民執(zhí)法人員在附近巡邏。但是,多虧了閱讀時間的安排,集中注意力總是觸手可及。學生總能找到重新集中注意力的方法,而不會被明亮、可滾動的屏幕誘惑——那永遠開啟的通往更多干擾的入口——所干擾。

我確信,在學習和科技誘惑之間人為地劃清界限是件好事。我本能地想要盡可能地在他們的寫作過程中也做到這一點。是否有可能設(shè)計出一個能夠提供可靠且有用的寫作反饋的聊天機器人?也許可以。能否控制聊天機器人反饋的頻率,使其不至于成為一種依賴?大概率可以。能否命令聊天機器人不向?qū)W生提供一鍵修改功能?當然可以。但是,每個高中生——忙碌、壓力巨大、對寫作感到焦慮、渴望在晚上或周末結(jié)束學校作業(yè)——都知道,在公共互聯(lián)網(wǎng)上,這些省力的選擇只需輕輕點擊一下鼠標即可獲得。

我無法將聊天機器人從他們的世界中徹底抹去,就像我無法刪除手機里的內(nèi)容一樣。我所能做的,就是決定在多大程度上引導(dǎo)學生使用聊天機器人,又在多大程度上引導(dǎo)他們體驗其他事物。

我:“所以……我想秋季我會盡量減少人工智能的使用。我認為學生最需要的是持續(xù)的閱讀和寫作體驗——包括這些過程中涉及的所有摩擦和不確定性——而不受科技干擾。”

我也是:“但學會應(yīng)對科技帶來的干擾是生活的一部分。而且,他們未來肯定需要人工智能來增強思維能力,成為更有競爭力的勞動者。”

我:“也許吧。但是,如果你還沒學會如何思考,你能增強你的思考能力嗎?我不是經(jīng)常看到硅谷高管的采訪,他們都在嚴格限制自己的孩子上網(wǎng)和使用電子屏幕嗎?”

我也是:“你是不是把自己浪費太多時間在網(wǎng)上這件事,以及你希望如果有人能幫你關(guān)掉這些網(wǎng)站,你就能成為更優(yōu)秀、更成功的作家的想法,投射到別人身上了?”

我:“是的,有可能。”

弗洛伊德認為,教師是“不可能的職業(yè)”之一。你永遠無法宣稱自己取得了完全的成功,甚至無法確切地知道自己所做的事情會產(chǎn)生怎樣的全部效果。(更糟糕的是:“你甚至可以預(yù)先確定自己會得到不盡如人意的結(jié)果。”)整個秋季,我每天都提醒自己這一點,試圖讓自己感覺好一些,因為我對幾乎所有事情都感到深深的不確定。

當我把課堂時間用來閱讀時,感覺很棒。但隨后我又擔心,正因為感覺這么好,我是不是做得太多了,這就好比老師為了健康只吃菠菜一樣。當我讓學生完全在課堂上完成論文時,我感覺自己很了不起,因為我摒棄了科技公司那些腐蝕大腦的捷徑。(伊恩·麥克萊恩飾演的甘道夫,面對著高大可怕的炎魔,堅定地咆哮著“你不能通過!”的畫面,成了我的一個標志性畫面。)

然后,到了晚上,回想白天的種種挑戰(zhàn),我會擔心,把寫作作業(yè)限制在課堂上,是不是讓學生錯過了我最珍視的寫作體驗:那種反復(fù)推敲、修改重組文字的挫敗感與樂趣交織,從草稿到最終定稿的迭代過程,以及作品與生活其他部分相互影響、彼此交融的感受。當我布置更具挑戰(zhàn)性的作業(yè),并給予學生完成這些作業(yè)所需的額外時間——包括必要的自主學習時間——我又會感到欣慰。然而,我的腦海中又會浮現(xiàn)出學生們在家,把我的作業(yè)要求粘貼到 ChatGPT、Gemini、Claude、Copilot 和 Grammarly 等各種軟件里的畫面。

我花了很多時間試圖想出一些跳出固有思維的寫作作業(yè),這些作業(yè)結(jié)構(gòu)精巧、引人入勝,完全不像過去那些僵化的公式化作文,讓學生們沒有理由逃避它們。

想象一下你在好萊塢工作:我們剛剛讀完的這本書要被拍成電影,你需要選擇電影配樂;解釋哪些歌曲與哪些場景相配以及原因,并通過這樣做來表明你理解這些場景的基調(diào)以及它們在整個故事中的作用。

請用你認為經(jīng)常被誤解的、對你來說很重要的事物來代替Binyavanga Wainaina 的諷刺散文《如何描寫非洲》 ,寫出你自己的版本,以此來展示你對 Wainaina 修辭選擇的理解。

我喜歡閱讀這些作業(yè)。我喜歡了解學生們?nèi)绾卫斫馕覀兯x的內(nèi)容。我喜歡聽他們的音樂。我喜歡了解他們對性別的看法、他們的文化背景、他們居住的社區(qū),并記錄我的感想。但這份喜愛并沒有讓我停止擔憂。

誰知道呢——也許聊天機器人能幫上忙。我確信在某些情況下它們確實起到了作用。每次布置作業(yè),我都能發(fā)現(xiàn)有人用聊天機器人作弊。當我提出這個問題時,作弊者往往立刻承認,聲稱是時間緊迫加上沒理解我的要求。我懇求他們:如果你們不明白,就告訴我!但我忍不住想:如果我訓(xùn)練一個聊天機器人,讓它以我認可的方式回答他們的問題呢?會不會少一些人作弊?(我甚至知道到底有多少人作弊了嗎?)他們的寫作水平會不會提高得更快?或者,會不會有更多人,在作弊這條路上,興高采烈地走下去?我想信任他們;但我確信我必須設(shè)定界限。這些決定感覺是不可能的,令人稍感安慰的是,一位喜歡可卡因的奧地利精神分析學家在 1937 年也表達了同樣的觀點。

除了閱讀之外,還有一種課堂活動讓人感覺相對安全,不受這種揮之不去的疑慮影響。那就是我們直接討論人工智能的時候——我會嘗試解釋我對這個主題的看法(包括我的疑慮),同時也會征求同學們的想法。我給高年級的學生發(fā)放了人工智能問卷,引導(dǎo)他們描述自己使用哪些人工智能工具,用于什么用途,使用時長以及感受。一些學生告訴我,他們從未使用過人工智能,也從未想過要使用——因為這讓他們感到毛骨悚然。一些學生表達了對人工智能未來就業(yè)前景的擔憂。還有一些學生描述了他們?nèi)绾问褂昧奶鞕C器人生成學習卡片和復(fù)習題,獲取穿搭建議,編輯社交媒體帖子,用聊天機器人代替谷歌搜索,獲取烹飪建議、運動訓(xùn)練建議、健康建議以及寵物健康建議。

幾乎所有填寫問卷的人都表達了某種擔憂(或者至少意識到了這一點),即人工智能可能會削弱他們的獨立思考能力。我明白,他們中的一些人可能察覺到了我的抵觸情緒,所以說了些我愛聽的話。我也知道,他們中的一些人可能隱瞞了一些他們不想告訴我的事情,比如他們使用聊天機器人來緩解孤獨感。盡管如此,他們對自身認知能力的擔憂依然讓我感到真誠。

然而,學生們是否真正理解原創(chuàng)思維的本質(zhì),從而意識到這種思維方式何時被繞過,這一點并不總是顯而易見的。不止一位學生曾堅定地表示要培養(yǎng)自己的思考能力——但幾行之后,他們又分享了一些“負責任的”人工智能使用案例,在我看來,這些案例恰恰破壞了他們原本希望培養(yǎng)的能力。比如,我會讓人工智能給我一個論點,然后我自己寫論文;我會讓人工智能給我?guī)讉€論點,然后我選一個,讓人工智能幫我擬定提綱;我會讓人工智能寫一個初稿,然后我自己修改,讓它變成原創(chuàng)作品。

只有一位學生表示,他使用人工智能完成了他不想做的作業(yè)。他解釋說,他并非有意冒犯我,只是生活繁忙,“有些老師”習慣布置重復(fù)性作業(yè),他覺得這些作業(yè)不值得花費時間。這位學生的父親在家長會上找到我,告訴我他理解我制定人工智能政策的初衷,但也感到擔憂。在他自己的職業(yè)生涯中,他看到雇主在招聘和晉升討論中非常重視人工智能技能。難道他兒子的教育不應(yīng)該鼓勵他掌握這項技能嗎?

我明顯感覺到,即使是那些最常使用人工智能的學生,對這項技術(shù)的背景知識也極其匱乏。有一天,我突發(fā)奇想,提出給任何能用通俗易懂的語言(不看屏幕)解釋聊天機器人如何生成文本的人一大筆額外的加分。結(jié)果無人能做到。我還分享了一封我收到的來自美國作家協(xié)會的郵件,郵件解釋了如何確定我是否有資格從一起代表圖書作家對人工智能公司Anthropic提起的集體訴訟中獲得賠償。Anthropic開發(fā)了Claude,一些作家認為Claude是他們最喜歡的聊天機器人。我問,Anthropic憑什么要賠償像我這樣的作家?一片沉默。

所以我試著談?wù)撨@件事。感覺有點尷尬。我很快意識到,自己用淺顯易懂的語言解釋聊天機器人文本來源,在分享之后并沒有我想象中那么簡單明了。但感覺也不錯。我感覺到,隨著我們探討關(guān)于世界以及我們在世界中的位置的問題,學生們的注意力——坦白說,也包括我自己的——都更加集中了。

我預(yù)感未來我會尋找更多機會將人工智能引入課堂,但同時對于人工智能工具的使用仍會保持高度謹慎。我希望學生們能夠更好地思考文學作品,沒錯——但也要更好地思考他們接觸到的所有語言,包括廣告、政治演講、報紙評論和社交媒體內(nèi)容。如果這些語言機器將成為他們與世界互動的重要組成部分,我希望他們能夠就這些機器提出問題。我希望他們能夠解釋人工智能公司的商業(yè)模式,這些商業(yè)模式對聊天機器人的行為有何影響,以及低收入工人在聊天機器人輸出中扮演的角色。我希望學生們了解并回應(yīng)那些因與聊天機器人互動而導(dǎo)致自殘、精神錯亂甚至自殺的人的經(jīng)歷。我希望他們知道,許多人工智能高管都曾公開預(yù)測,人工智能的發(fā)展最終將導(dǎo)致地球表面大部分被數(shù)據(jù)中心覆蓋,我想聽聽他們對此的看法。

實習的最后一天,我留下來批改了一大堆低年級學生的作業(yè)。我們花了幾個星期閱讀短篇小說,探討人類與老師、導(dǎo)師和榜樣之間錯綜復(fù)雜的關(guān)系。我沒有讓他們寫作文,而是讓他們從單元學習的內(nèi)容中挑選人物,構(gòu)思原創(chuàng)的情節(jié),將這些人物聯(lián)系起來,并使其與單元的主題相呼應(yīng)。

我允許這些學生在課外時間創(chuàng)作這些故事,并以電子方式提交。但我也讓他們在課堂上繼續(xù)創(chuàng)作,并要求他們與我見面,解釋他們的創(chuàng)作思路。據(jù)我觀察,只有一兩個學生明顯把這項任務(wù)交給了聊天機器人(如果你好奇的話,聊天機器人做得相當不錯)。

總的來說,我對學生們故事的創(chuàng)意和質(zhì)量,以及他們對其他作家作品的深刻理解感到非常欣喜。令我驚訝的是,他們中的許多人都借鑒了一篇在課堂上被普遍認為“太怪異”而遭到冷落的故事:馬克·吐溫的《神秘的陌生人》。在我們讀的版本中(吐溫至少修改過三次),一群年輕人被一個名叫撒旦的天使所蠱惑——他向他們保證,這不是那個撒旦,那是他的叔叔。這個撒旦,不管他是誰,都精通各種酷炫的魔法,起初男孩們覺得這些魔法妙不可言。然而,最終,這卻是一個恐怖故事。盡管撒旦表面上魅力十足,但他看待人類的態(tài)度卻冷漠、蔑視,甚至充滿敵意。年輕人與他接觸越多,就越有可能在不知不覺中吸收他類似的態(tài)度。

很多學生筆下的撒旦都表現(xiàn)得極其相似,簡直就是最新聊天機器人的翻版,這一點顯而易見。撒旦會主動幫角色做作業(yè),潤色他們已經(jīng)完成的作品,讓他們騰出時間去做更輕松愉快的事情。我發(fā)誓,這一切都是他們自發(fā)進行的,完全沒有我的任何提示。盡管我一向比較保守,但我之前從未想到過馬克·吐溫筆下的撒旦還可以有這樣的解讀。

閱讀那些故事的時光令我無比愉悅,也基本擺脫了整個學期以來一直縈繞在我心頭的AI焦慮。這種愉悅最大的威脅,來自我文字處理軟件、郵箱和作業(yè)管理工具中嵌入的AI工具源源不斷地發(fā)出請求。我是否希望機器為我學生的作文打分?幫我評分?還是根據(jù)它檢測到的相似之處進行分類?

我沒有。我想讀讀學生們寫的東西。整個學期我都在跟他們說,寫作是人類創(chuàng)造的禮物,是我們跨越時空了解自己和彼此的一種方式。如果說了這么多,最后我卻把點評他們文章的任務(wù)交給算法,那又意味著什么呢?我把剩下的故事打印出來,關(guān)掉了電腦。

我是否記錄下了人工智能作弊的每一個例子?我肯定沒有,而且我肯定現(xiàn)在有些老師——無論是反對者還是支持者——都在搖頭嘆息我的天真。但我了解我的學生;這不就是我的工作嗎?我在課堂上觀察過他們的草稿進展;我讓他們當面解釋他們的故事——那些奇特的、滑稽的、感人的故事。這一切肯定都有意義。我意識到自己可能是在自欺欺人。但我卻感到出奇的平靜。我做了我認為在這個學期里最合適的事情。在未來的學期里,教學方法肯定會發(fā)生我無法預(yù)料的變化。那也是我的工作。我拿起筆,從那堆稿子里抽出下一篇,開始閱讀。

Teacher v chatbot: my journey into the classroom in the age of AI


Illustration: Jack Purling/The Guardian

I was a newcomer, negotiating all of usual classroom difficulties for the first time. Throwing AI into the mix felt like downing a coffee in the middle of a panic attack

Peter C Baker

Tue 3 Mar 2026 05.00 GMT

Share

Two years ago, at the age of 39, I began training to be a school teacher. I wanted to teach English – to help young people become stronger readers, writers and thinkers, with a deeper connection to literature. After 15 years of working as a freelance writer and as a novelist, I felt confident that I had something to offer. But the further I progressed in my training, the more uncertain I felt. One particular question taunted me for my lack of an answer. What to do about artificial intelligence?

The immediate dilemma: what does it mean for English instruction that all pupils now have access to free online chatbots that can produce fluid, fairly complex prose on demand? This question sits atop a teetering pile of timeless pedagogical quandaries: What are we actually trying to do in school? How should we go about doing it? How do we know if we’ve succeeded? I was a newcomer, negotiating all of this for the first time. Throwing AI into the mix felt like downing a coffee in the middle of a panic attack.

I started frantically seeking out perspectives on AI and the English classroom wherever I could find them: pedagogy podcasts, pedagogy Substacks, pedagogy YouTube channels. My algorithmic feeds picked up on this interest and started catering to it, serving me an apparently endless supply of content – including endless advertising from tech companies – that promised to help me think through these urgent questions and ensure I did right by my students.

I quickly learned that this was a world of heated, often acrimonious, debate. On one side (to simplify a bit) were the AI rejectionists: teachers and education pundits for whom AI was nothing less than an existential assault by rapacious tech companies on the defining activities of the classroom. What students needed, they argued, was to learn how to push themselves through difficulty: to read complex texts and develop complex arguments. They needed to learn that these were processes full of friction and uncertainty, and they needed to learn how to embrace that fact, rather than running away from it. Access to a one-click writing machine made it too easy to run away.

AI rejectionists shared horror stories of students handing in AI-generated papers about which they couldn’t answer the simplest questions, or citing nonexistent sources their chatbots had “hallucinated”. They posted studies suggesting that chatbot use dulled students’ reasoning faculties, or even impeded the physical development of their brain. They raised ethical concerns, including AI’s environmental costs, chatbots’ reliance on copyrighted writing, and the oligarchal leanings of big tech companies. For most rejectionists, the solution was to build a classroom that AI couldn’t touch. They talked about shifting toward in-class essays, perhaps written by hand. They debated the feasibility of reviving oral tests and quizzes.

On the other side were the AI cheerleaders. I’m not talking about their crazy uncles, the mostly male tech execs who spoke maniacally about how AI would soon mean the end of schooling as we knew it, or already meant that reading books was a waste of time. I’m talking about teachers and pundits who argued – often quite passionately – that, for all AI’s pedagogical risks, it also carried great potential. Instead of cheating machines, chatbots could be powerful assistant teachers, able to engage with every student in a classroom simultaneously, making sure everyone got personalised feedback exactly when needed, carefully nudging each student down their particular path to maximum learning. From the cheerleaders’ perspective, the rejectionists’ instinct to shun AI tools represented a lack of understanding about their possibilities; it also did a disservice to their students, who would leave school without having acquired tech skills they could use to their advantage at university and in their future careers.

As I waded through arguments between the rejectionists and the cheerleaders, attempting to parse their duelling deployment of statistics and academic studies, my anxiety increased. I’ve noticed something about teachers, including myself. Because we take our responsibilities so seriously, we often fear doing the “wrong” thing: using ineffective or discredited teaching strategies, failing to give our students what they need. We believe, often from experience, that good teachers can change people’s lives; we know really bad teachers can leave a mark, too, especially in English, where they are often a culprit in what the teacher and writer Kelly Gallagher calls “readicide”: the killing off of good feelings about reading. We long to be in the right category, and dread being in the wrong one.

Beneath this fear, I think, is a more fundamental one: the fear of being seen as – not to mention the fear of actually being – out-of-touch losers, hiding with children in the classroom because there’s nowhere else in the ever-changing adult world we quite fit. I know this fear well. I was resolved not to get suckered by tech hype, but I also didn’t want to sucker myself by refusing to even consider a potentially useful new tool.

All I needed was a provisional ruling. I didn’t need to decide if AI was an evil scam or the future of everything. I didn’t need to decide what AI meant for the future of education, writ large. What I had to decide was what AI meant for the high-school English classes I was on the verge of teaching. I nervously downloaded more podcasts, clogged my inbox with still more Substacks and watched more YouTube videos, hoping that by absorbing more materials on the subject I could increase my chances of getting it right, or at least tamp down my terror of getting it all wrong.

Last spring I started spending 15 hours a week observing a veteran English teacher in a large school in a Chicago suburb: the type of place that families move to specifically “for the schools”. My host teacher – let’s call her Emily – taught two age groups: 14-year-olds just starting high school and 18-year-olds almost done with it. What I saw in her classroom immediately disposed me to join the rejectionists.

I witnessed all the disruptive effects you read about in articles about AI and the classroom: fully AI-generated papers; AI-hallucinated quotes; tense student-teacher conversations about what exactly was provable. I sat with Emily while she marked papers and joined her in stressing over ambiguous cases, trying to sort student nonsense from AI nonsense, student improvement from AI-powered polish.

I’d become a teacher in large part because I wanted to spend time with young people’s writing, honouring it with close attention. Watching over Emily’s shoulder, I saw how AI’s presence (or even its potential presence) interfered with this process. I became acquainted with the unique variety of despair produced by looking at a paper and, rather than figuring out how to best respond to it, trying to divine its origins. I also saw how teachers are themselves constantly bombarded with offers of AI assistance, not just via email and social media advertisements, but also – more, actually – from AI tools integrated into their schools’ email and gradekeeping software.

Emily’s students all had school-issued laptops, and her computer had a program that allowed her to surveil the content of every one of her students’ screens; they all appeared on the screen simultaneously, in a grid that recalled a bank of CCTV monitors. Using this program was always discomfiting – Big Brother, c’est moi – and always transfixing. Some students didn’t use AI at all, at least in class. Others turned to it every chance they got, feeding in whatever question they were working on almost as a reflex. At least one student was in the habit of putting every new subject into ChatGPT, having it generate notes that he could refer to if called on. Often, I saw students getting funnelled toward AI use even when they hadn’t necessarily been looking for it. I got used to watching a student Google a subject (“key themes in Romeo and Juliet”), read the AI-generated answer that now appears atop most Google search results, click “Dive deeper in AI mode” – and suddenly be chatting with Gemini, Google’s chatbot, which was always ready to advertise its own capabilities. “Should I elaborate on one or more of these themes? Should I draft a first paragraph for an essay on the subject?”

Emily told me that most of the reading she assigned now had to happen in class and that she read much of it aloud, especially toward the beginning of the year. I was shocked. Yes, I’d read countless newspaper features on the “contemporary reading crisis” but it was still dismaying to encounter the diminished baseline state of teen reading in the wild. When I decided to become a teacher, my head had been filled with romantic visions in which I led students (“O captain, my captain!”) into battle with literary complexity and its connections to life. In these visions, the reading itself took place mostly off-camera, beyond the walls of the classroom. What did it mean for my teacherly ambitions that so many of my students appeared unequipped to read on their own – and that, when it came time to write, so many of them turned reflexively to AI? I wondered, depressively, if I’d signed up for something that unstoppable forces of history were on the brink of wiping out.

But then I watched Emily read to the class and my spirits lifted. For a writer, describing alleged classroom magic is a bit like describing sex; so often, the attempt produces sentences that are both cringe-inducing and unconvincing. And yet: I feel obliged to tell you that reading time was sometimes magic.

Shortly after I’d arrived, the younger classes started All Quiet on the Western Front. Students began by expressing disbelief: We’re really reading another whole book? Then, with Emily’s help, they got their bearings: first world war, young German soldiers, trench warfare, the loss of innocence, the psychological toll of daily proximity to death, the disconnect from the home front. Laptops were away, as were phones. (Per school policy, they were in pouches by the classroom door.) Everyone knew they could raise a hand any time to ask for clarification or make an observation. Sometimes, Emily stopped to highlight moments that she suspected were producing confusion that students might be afraid to admit to, or misreadings they weren’t even conscious of, or sentences ripe with multiple possibilities for interpretation. Day by day, and mostly in imperceptible micro-movements, the book transformed from an imposing monolith into a familiar companion.

At some point the students stopped complaining and started getting into it: expressing a desire to know how it all turned out, gasping at dramatic turns, wondering aloud, and with feeling, why characters were doing what they were doing. Why had Erich Maria Remarque written it like that? And then, one day, it happened: a room full of American 14-year-olds in 2025 was inside a story about German 19-year-olds in the 1910s, simultaneously viewing the book through the lens of their lives and their lives through the lens of the book. I could feel it on my skin: the room quietly crackling with the crisscrossing lines of energy between students and teacher and words first committed to paper almost a century before.

The AI shenanigans I’d witnessed had been depressing: the AI-free teaching I’d witnessed had been inspiring. Before my observation period ended, Emily let me lead some of the readings myself, and a couple of times I experienced a full-body high. I felt ready to scream it from the rooftops: I’m an AI rejectionist – and proud of it!

Over the summer, though, my doubts came creeping back. As stirring as reading time in Emily’s classroom had been, I knew it hadn’t actually answered all (or any) of my questions about AI and the classroom. I knew that in the fall I would be returning, this time as a student teacher, taking most of the responsibility for lesson planning and marking. I had more decisions to make, centrally about writing. What, given my concerns about chatbots, would I have students write? And when, and how?

Because I’d consumed – and was continuing to consume – so much content devoted to AI and teaching, I was capable of staging an internal debate, in my head, between radically different takes.

Me: “Reading together as a class without any AI or devices felt great. I know that for sure. I want to use that as my starting point.”

Also me: “But what did the students really learn? How do you know?”

Me: “Well, I got to hear their thoughts evolving in real time.”

Also me: “But did every single student participate?”

Me: “Well, no. But they all did a lot of writing afterward – in the classroom, by hand – and I was able to read that.”

Also me: “Having read what they wrote, do you really think every student learned as much as they theoretically could have? Did they all learn everything you wanted them to?”

Me: “Well … I guess not. Not all of them. Not everything.”

Also me: “What if, after your AI-free reading and discussion, when students sat down to write, they each had access to an AI chatbot that could give them feedback tailored exactly to their existing comprehension level and learning style? What if you, the teacher, could train that chatbot, aligning its behaviour precisely to your goals for the assignment and the class overall?”

Me: “Well, that’s already my job – to give them personalised feedback.”

Also me: “But how much time do you have for that? Can you really intervene every single time it would be useful? What about when your students are writing at home? What about when it’s the night before an assignment is due and they’re off to a completely wrong start? Why wouldn’t you want them to know that?”

Me: [sweating profusely]

In the name of due diligence, I started playing around with AI chatbots, including those designed specifically for classrooms, or with some kind of “student mode” included. First, I evaluated their ability to do the Worst Thing: take one of my assignments, add a few simple instructions – “This should sound like was written by a 15-year-old student”, “Please insert a realistic sprinkling of common typos and grammatical errors”, “Don’t make it too smooth” – and generate something I could not distinguish from student writing. In the halcyon days of 2023, it was a reassuring article of faith that machine writing was instantly detectable by a teacher. I can report that, for better or worse, that’s simply no longer the case.

Next I tested these chatbots on less obviously poisonous uses, such as making comments on drafts, or answering clarifying questions about assignments. Performance varied from bot to bot, but some were very good at it. In fact, I was impressed enough that I started occasionally feeding these same bots drafts of my own magazine pieces, now and then getting instant feedback that felt truly useful. Sitting at my computer, I felt an imaginary squad of cheerleaders gathering behind me, ready to claim a victory.

I kept returning to my memories of reading time in Emily’s classroom, trying to analyse what had felt so special. Part of it, I decided, had to do with how the activity structured everyone’s attention. Because all the laptops and phones were away, everyone was fully engaged at all times. It was truly astonishing to see.

I’m kidding. It was school. Some shifting amount of the class’s collective attention was on all the things teenagers have to think about. Next period’s test. Their plans for the weekend, or worrisome lack thereof. Whether their crush liked them back. The fight they heard their parents having the night before. The presence of ICE officers in the neighbourhood. But, thanks to the architecture of reading time, the possibility of paying attention was always close at hand. A student could find their way back to it without being waylaid en route by the temptations of a bright, scrollable screen, an always-on portal to more distractions.

It was good – I was sure of it – to have some enforced separation between the learning and the temptations of tech. My reflex was to enforce, to the extent possible, that same separation on their writing processes. Is it possible to design a chatbot that gives reliably useful writing feedback? Maybe. Can the frequency of chatbot feedback be regulated so that it doesn’t become a crutch? Probably. Can a chatbot be ordered not to offer students one-click rewrites? Yes. But every high-school student – busy, overwhelmed, nervous about writing, eager to be done with school work for the night or weekend – knows that, on the public internet, these labour-saving options sit a mere click away.

I couldn’t wipe chatbots from their world, any more than I can wipe phones. All I could do was decide how much I would steer students toward them and how much I would nudge them toward other experiences.

Me: “So … I think in the fall I’ll try making things as AI-free as possible. I think what the students need most are sustained experiences of reading and writing – with all the friction and uncertainty those processes involve – without tech distractions in the mix.”

Also me: “But learning to deal with tech distractions is part of life. And surely they’ll need AI, in the future, to supercharge their thinking and be competitive workers.”

Me: “Maybe. But can you supercharge your thinking when you haven’t learned how to think yet? Aren’t I always reading interviews with Silicon Valley executives where they describe strictly limiting their own kids’ access to the web and screens?”

Also me: “Any chance you’re projecting some of your own concerns about how much time you waste online, and what a better, more successful writer you want to think you’d be if someone would just turn them off on your behalf?”

Me: “That’s possible, yes.”

Teaching, according to Freud, is one of the “impossible professions”. It is never possible to declare total success, or even know for sure the full effects of what you are doing. (Worse: “One can be sure beforehand of achieving unsatisfying results.”) Through the fall I reminded myself of this idea daily, trying to make myself feel better about how profoundly unsure I felt about almost everything I did.

When I devoted class time to reading, it felt great. But then I worried that because it felt so great I was doing too much of it, the teacherly equivalent of trying to be healthy by eating only spinach. When I had students write their essays entirely in class, I felt virtuous for having banished big tech’s brain-rotting shortcut machine. (The image of Ian McKellen-as-Gandalf, standing firm in the face of the monstrous, towering Balrog, bellowing “YOU SHALL NOT PASS!” became a companion.)

Then, at night, looking over the battles of the day, I would worry that, by confining work for written assignments to class time, I wasn’t exposing students to the very aspects of writing that I valued most: the intertwined frustrations and pleasures of picking apart what you’ve written and reassembling it, the movement from draft to draft, the experience of living with a piece over time, your engagement with it colouring and being coloured by the rest of your life. When I set more ambitious assignments, and gave students the extra time that ambition required – including, by necessity, unsupervised time – I would feel virtuous again. Then my mind’s eye would be invaded by visions of my students at home, pasting my instructions into ChatGPT, into Gemini, into Claude, into Copilot, into Grammarly.

I spent a lot of time trying to come up with outside-the-box writing assignments that were so well constructed – so damn interesting, so not the rigidly formulaic essays of yesteryear – that students would feel no desire to skip them.

Imagine you work in Hollywood: the book we’ve just read is being made into a movie and you have to select the soundtrack; explain which songs go with which scenes and why, and by doing so demonstrate that you understand those scenes’ tone and role in the overarching story.

Write your own version of Binyavanga Wainaina’s satirical essay How to Write About Africa, replacing “Africa” with something important to you that you feel is often misrepresented, and by doing so demonstrate your understanding of Wainaina’s rhetorical choices.

I loved reading these assignments. I loved learning how students understood what we were reading. I loved hearing their music. I loved learning about their relationships to gender, their cultural backgrounds, their neighbourhoods, making notes about my responses. But this love didn’t stop me from worrying.

And who knows – maybe chatbots could have helped. I’m sure in a few cases they did. For every assignment, I caught a few people using them to cheat. When I floated the question, the culprits tended to admit it right away, claiming a combination of time pressure and failure to understand what I’d asked them to do. I implored them: when you don’t understand, just let me know! But I couldn’t help thinking: what if I’d trained a chatbot to answer their questions in ways that I approved? Might fewer of them have done the Worst Thing? (Did I even know how many actually had?) Might their writing have got better, faster? Or would more of them, set at the foot of the garden path to full-blown cheating, have merrily traipsed down it? I wanted to trust them; I felt sure I had to set limits. The decisions felt impossible, and it was of limited consolation that an Austrian psychoanalyst with a fondness for cocaine had said as much in 1937.

Besides reading, one other type of classroom activity felt relatively safe from this hovering cloud of doubts: the times when we talked directly about AI – when I tried to explain my thinking on the subject (including my uncertainty) and also to solicit the class’s thoughts. I gave my older students AI questionnaires, prompting them to describe what AI tools they used for what, how long they’d been using them, and how they felt about it. A few of them told me they’d never used AI and never wanted to – that it creeped them out. Some expressed concern about what it meant for jobs. Others described using chatbots to generate flash cards and test review questions, to get advice on what to wear, to edit their social media posts, as a replacement for Google searches, to get cooking advice, to get athletic training advice, to get health advice, and to get health advice for their pets.

Almost everyone who filled out the questionnaire expressed some fear (or at least recognition) that AI could erode their capacity for original thought. I recognise that some of them, having intuited my rejectionist leanings, might have been telling me what they thought I wanted to hear. I also knew some of them were likely leaving out things they understandably didn’t want to tell me, such as that they used chatbots to alleviate loneliness. Still, their concerns about their own cognitive lives felt genuine.

It wasn’t always clear, though, that the students understood the nature of original thinking well enough to understand when it was being bypassed. More than one expressed firm resolve to develop their own thinking abilities – then, a few lines later, shared examples of “responsible” AI usage that, from my perspective, trashed exactly what they were hoping to cultivate. I’ll have AI give me a thesis statement, but then I’ll write the paper. I’ll have AI give me a few thesis statements, then I’ll pick one and have AI do the outline. I’ll have AI write a first draft, then go in and change things to make it original.

Only one student said that he used AI to complete, start to finish, assigned writing that he didn’t want to do. He meant no offence to me personally, he explained, but his life was busy and “some teachers” were in the habit of giving repetitive assignments that he felt confident weren’t worth his time. This same student’s father approached me at a parents’ night to tell me that, while he understood where I was coming from with my AI policies, he was also worried. In his own professional life, he saw how much employers emphasised AI fluency in discussions about hiring and promotion. Shouldn’t his son’s education be encouraging that fluency?

I got a distinct sense that, even among students who used AI the most, contextual knowledge about the technology was extremely low. One day, I spontaneously offered a much-too-large heap of extra credit to anyone who could produce (without looking at a screen) a plain-language account of how chatbots generate text. No one could. I also shared an email I’d received from the US Authors Guild, explaining how to determine my eligibility for compensation from a class-action lawsuit brought on behalf of book writers against the AI firm Anthropic, creator of Claude, a chatbot some of them had identified as their favourite. On what grounds, I asked, might Anthropic owe writers like me money? Silence.

So I tried to talk about it. It felt a little awkward. My own plain-language explanation of chatbot text provenance was, I quickly realised upon sharing it, not as plain as I’d hoped. But it also felt good. I sensed my students’ attention – and, frankly, my own – slipping into higher gear as we took on questions about the world and our place in it.

I suspect that in the future I’ll be seeking out more opportunities to bring the subject of AI into the classroom, even as I maintain an extreme caution about doing the same with AI tools. I want students to get better at thinking about literature, yes – but also about all the language they encounter, including in advertisements, politicians’ speeches, newspaper op-eds and social media content. If these language machines are going to be a major part of how they’re interfacing with the world, I want them to be able to ask questions about the machinery. I want them to be able to explain the business models of AI companies, what those business models can mean for how chatbots behave, and the role played in chatbot outputs by low-wage workers. I want students to know about, and respond to, the experience of people for whom chatbot interactions end in self-harm, psychosis and suicide. I want them to know that multiple AI executives have openly predicted that AI growth will eventually result in the surface of our planet being mostly covered by data centres, and I want to hear what they think about it.

On my last day of student teaching, I stayed late, grading a pile of my younger students’ work. We’d spent several weeks reading short stories on the complicated relationships we humans have with our teachers, mentors and role models. In place of essays, I’d asked them to write short stories where they plucked characters from across the unit and came up with original scenarios that brought them together in ways that reflected the unit’s themes.

I’d allowed these students to work on their stories outside class, and to submit them digitally. But I also had them work on the stories in class time and made them meet me to describe their choices. Only one or two, that I could tell, had obviously tossed the task over to chatbots (which, if you’re wondering, did a pretty serviceable job).

Overall, I was delighted by the inventiveness and quality of my students’ stories, and the depth of understanding of other authors’ work that they demonstrated. To my surprise, many of them drew on a story that, in class, had been widely dismissed as “too weird”: Mark Twain’s The Mysterious Stranger. In the version we read (Twain re-wrote it at least three times), a group of young men falls under the sway of an angel named Satan – not that Satan, he assures them; that’s his uncle. This Satan, whoever he is, knows all kinds of cool magic, which at first the boys find totally delightful. In the end, though, it’s a horror story. For all Satan’s surface charms, he is revealed to view humanity with a combination of indifference, scorn and hostility. The more the young men interact with him, the more they risk unthinkingly absorbing a similar attitude.

Multiple students had their Satans act in ways that, it was impossible to miss, mirrored the behaviour of the latest chatbots. Satan offered to do characters’ homework, to take work they’d done and make it more polished, to free up their time for more immediately pleasurable activities. They did this, I swear, without any prompting from me. Despite my rejectionist inclinations, this way of looking at Twain’s Satan had never occurred to me.

The hours I spent reading those stories were a joy, and mostly uncomplicated by the AI anxieties that had colonised my mind for so much of the semester. The biggest threat to this joy was the steady stream of solicitations from the AI tool embedded in my word-processing software, from the AI tool embedded in my email inbox, and from the AI tool embedded in my digital assignment-management tool. Did I want the machine to give me notes on my students’ stories? To grade them for me? To put them in categories based on similarities it detected among them?

I didn’t. I wanted to read what my students had written. I’d been telling them all semester that writing was a gift humanity had made for itself, a way for us to know ourselves and each other across space and time. What would it mean if, after all that, I gave over the task of responding to their writing to an algorithm? I printed the remaining stories out and shut my computer.

Did I clock every single instance of AI cheating? I’m sure I didn’t, and I’m sure some teachers out there – rejectionists and cheerleaders alike – are shaking their heads right now at my naivety. But I knew my students; that was the job, wasn’t it? I’d watched their drafts’ progress in class; I’d made them explain their stories – their weird, hilarious, touching stories – to my face. Surely all that counted for something. I was aware of the possibility that I was fooling myself. But I felt surprisingly at peace. I’d done what I thought was right for the semester. In future semesters, the approach will surely change in ways I can’t yet predict. That, too, is the job. I picked up my pen, grabbed the next story from the pile, and began to read.

