
Artificial intelligence tools expand scientists' impact but contract science's focus
An analysis of 41.3 million research papers shows that although artificial intelligence tools markedly boost individual scientists' productivity and career advancement, their widespread adoption has collectively narrowed the range of topics that science studies.
Artificial intelligence tools expand scientists’ impact but contract science’s focus
Nature
(2026)
Abstract
Developments in artificial intelligence (AI) have accelerated scientific discovery1. Alongside recent AI-oriented Nobel prizes2,3,4,5,6,7,8,9, these trends establish the role of AI tools in science10. This advancement raises questions about the influence of AI tools on scientists and science as a whole, and highlights a potential conflict between individual and collective benefits11. To evaluate these questions, we used a pretrained language model to identify AI-augmented research, with an F1-score of 0.875 in validation against expert-labelled data. Using a dataset of 41.3 million research papers across the natural sciences and covering distinct eras of AI, here we show an accelerated adoption of AI tools among scientists and consistent professional advantages associated with AI usage, but a collective narrowing of scientific focus. Scientists who engage in AI-augmented research publish 3.02 times more papers, receive 4.84 times more citations and become research project leaders 1.37 years earlier than those who do not. By contrast, AI adoption shrinks the collective volume of scientific topics studied by 4.63% and decreases scientists’ engagement with one another by 22%. Consequently, adoption of AI in science presents an apparent paradox: an expansion of individual scientists’ impact but a contraction in collective science’s reach, as AI-augmented work moves collectively towards areas richest in data. With reduced follow-on engagement, AI tools seem to automate established fields rather than explore new ones, highlighting a tension between personal advancement and collective scientific progress.
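The F1-score reported above is the harmonic mean of precision and recall on the expert-labelled validation set. A minimal sketch of that calculation, using illustrative labels rather than the study's data:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels
    (1 = AI-augmented paper, 0 = not)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Illustrative expert labels vs. model predictions
print(f1_score([1, 1, 0, 0], [1, 0, 0, 1]))  # 0.5
```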
Data availability
The OpenAlex dataset for research papers and researchers is available at https://docs.openalex.org/download-all-data/openalex-snapshot. The Web of Science dataset for research papers and researchers is available at https://clarivate.com/academia-government/scientific-and-academic-research/research-discovery-and-referencing/web-of-science/web-of-science-core-collection. The Journal Citation Report dataset for the journal quantile is retrieved from https://jcr.clarivate.com/jcr/browse-journals. The author contribution dataset is available at https://zenodo.org/records/6569339. The pre-trained parameters for the BERT language model are available at https://huggingface.co/docs/transformers. The pre-trained parameters for the SPECTER 2.0 text embedding model are available at https://huggingface.co/allenai/specter2. Source data are provided with this paper.
Code availability
Data analysis for this study was conducted in Python 3.11.0. Required packages are NumPy (v.1.26.4), pandas (v.2.2.3), SciPy (v.1.15.2), scikit-learn (v.1.6.1) and matplotlib (v.3.10.1). The t-SNE implementation is imported from scikit-learn. The code developed in this study can be found at https://github.com/tsinghua-fib-lab/AI-Impacts-Science.
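As a minimal illustration of the listed stack, scikit-learn's t-SNE projects high-dimensional paper embeddings to 2-D; the vectors below are random stand-ins for real embeddings, not the study's data.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(20, 32))  # stand-ins for paper text embeddings
coords = TSNE(n_components=2, perplexity=5, init="random",
              random_state=0).fit_transform(embeddings)
print(coords.shape)  # (20, 2)
```

Note that perplexity must be smaller than the number of samples, and fixing `random_state` makes the layout reproducible across runs.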
References
Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023).
Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982).
Hopfield, J. J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl Acad. Sci. USA 81, 3088–3092 (1984).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2012).
Hinton, G. E. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006).
Hinton, G. E. Training products of experts by minimizing contrastive divergence. Neural Comput. 14, 1771–1800 (2002).
Kuhlman, B. et al. Design of a novel globular protein fold with atomic-level accuracy. Science 302, 1364–1368 (2003).
Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).
Gao, J. & Wang, D. Quantifying the use and potential benefits of artificial intelligence in scientific research. Nat. Hum. Behav. 8, 2281–2292 (2024).
Evans, J. A. Electronic publication and the narrowing of science and scholarship. Science 321, 395–399 (2008).
Adıgüzel, T., Kaya, M. H. & Cansu, F. K. Revolutionizing education with AI: exploring the transformative potential of ChatGPT. Contemp. Educat. Technol. 15, ep429 (2023).
Akgun, S. & Greenhow, C. Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI Ethics 2, 431–440 (2022).
Meskó, B. & Topol, E. J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digital Med. 6, 120 (2023).
Loh, H. W. et al. Application of explainable artificial intelligence for healthcare: a systematic review of the last decade (2011–2022). Comput. Methods Prog. Biomed. 226, 107161 (2022).
Ahmed, I., Jeon, G. & Piccialli, F. From artificial intelligence to explainable artificial intelligence in industry 4.0: a survey on what, how, and where. IEEE Trans. Indust. Inform. 18, 5031–5042 (2022).
Varadi, M. et al. AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucl. Acids Res. 50, D439–D444 (2022).
Degrave, J. et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature 602, 414–419 (2022).
Fawzi, A. et al. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 610, 47–53 (2022).
Boiko, D. A., MacKnight, R., Kline, B. & Gomes, G. Autonomous chemical research with large language models. Nature 624, 570–578 (2023).
Stokel-Walker, C. & Van Noorden, R. What ChatGPT and generative AI mean for science. Nature 614, 214–216 (2023).
Gilson, A. et al. How does ChatGPT perform on the United States medical licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Med. Educat. 9, e45312 (2023).
Salimi, A. & Saheb, H. Large language models in ophthalmology scientific writing: ethical considerations blurred lines or not at all? Am. J. Ophthalmol. 254, 177–181 (2023).
Liang, W. et al. Mapping the increasing use of LLMs in scientific papers. In Proc. 1st Conference on Language Modeling (COLM, USA, 2024).
Hwang, T. et al. Can ChatGPT assist authors with abstract writing in medical journals? Evaluating the quality of scientific abstracts generated by ChatGPT and original abstracts. PLoS ONE 19, e0297701 (2024).
Kobak, D., González-Márquez, R., Horvát, E.-Á. & Lause, J. Delving into LLM-assisted writing in biomedical publications through excess vocabulary. Sci. Adv. 11, eadt3813 (2025).
Wojtowicz, Z. & DeDeo, S. Undermining Mental Proof: How AI Can Make Cooperation Harder by Making Thinking Easier Vol. 39, 1592–1600 (2025).
Frank, M. R., Wang, D., Cebrian, M. & Rahwan, I. The evolution of citation graphs in artificial intelligence research. Nat. Mach. Intell. 1, 79–85 (2019).
OpenAlex (OpenAlex, 2025); https://openalex.org/.
Clarivate (Web of Science, 2025); https://www.webofscience.com.
Mongeon, P. & Paul-Hus, A. The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics 106, 213–228 (2016).
Devlin, J. et al. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. 57th Annual Meeting of the Association for Computational Linguistics 4171–4186 (ACL, Italy, 2019).
Wolf, T. et al. Transformers: state-of-the-art natural language processing. In Proc. 58th Annual Meeting of the Association for Computational Linguistics 38–45 (ACL, 2020).
Beltagy, I., Lo, K. & Cohan, A. SciBERT: a pretrained language model for scientific text. In Proc. 57th Annual Meeting of the Association for Computational Linguistics 3613–3618 (ACL, Italy, 2019).
Cohan, A., Feldman, S., Beltagy, I., Downey, D. & Weld, D. S. SPECTER: document-level representation learning using citation-informed transformers. In Proc. 58th Annual Meeting of the Association for Computational Linguistics 2270–2282 (ACL, 2020).
Singh, A., D’Arcy, M., Cohan, A., Downey, D. & Feldman, S. SciRepEval: a multi-format benchmark for scientific document representations. In Proc. 61st Annual Meeting of the Association for Computational Linguistics 5548–5566 (ACL, Canada, 2023).
Landis, J. R. & Koch, G. G. The measurement of observer agreement for categorical data. Biometrics 33, 159–174 (1977).
Fleiss, J. L. Measuring nominal scale agreement among many raters. Psychol. Bull. 76, 378 (1971).
Chu, J. S. & Evans, J. A. Slowed canonical progress in large fields of science. Proc. Natl Acad. Sci. USA 118, e2021636118 (2021).
Journal Citation Reports (Clarivate, 2021); https://jcr.clarivate.com/jcr/home.
Ioannidis, J. P., Boyack, K. W. & Klavans, R. Estimates of the continuously publishing core in the scientific workforce. PloS ONE 9, e101698 (2014).
Kendall, D. G. Birth-and-death processes, and the theory of carcinogenesis. Biometrika 47, 13–21 (1960).
Fortunato, S. et al. Science of science. Science 359, eaao0185 (2018).
Milojević, S. Quantifying the cognitive extent of science. J. Informetrics 9, 962–973 (2015).
McMahan, P. & Evans, J. Ambiguity and engagement. Am. J. Sociol. 124, 860–912 (2018).
Merton, R. K. The Matthew effect in science: the reward and communication systems of science are considered. Science 159, 56–63 (1968).
Borger, J. G. et al. Artificial intelligence takes center stage: exploring the capabilities and implications of ChatGPT and other AI-assisted technologies in scientific research and education. Immunol. Cell Biol. 101, 923–935 (2023).
Lawrence, N. D. & Montgomery, J. Accelerating AI for science: open data science for science. Royal Soc. Open Sci. 11, 231130 (2024).
King, R. D. et al. The automation of science. Science 324, 85–89 (2009).
Burger, B. et al. A mobile robotic chemist. Nature 583, 237–241 (2020).
Krauss, A. Debunking Revolutionary Paradigm Shifts: Evidence of Cumulative Scientific Progress Across Science Vol. 480, 20240141 (The Royal Society, 2024).
Microsoft Academic Graph (Microsoft, 2015); https://www.microsoft.com/en-us/research/project/microsoft-academic-graph.
Open Academic Graph (Aminer, 2020); https://old.aminer.cn/oag-2-1.
Porter, A. & Rafols, I. Is science becoming more interdisciplinary? Measuring and mapping six research fields over time. Scientometrics 81, 719–745 (2009).
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, 533–536 (1986).
LeCun, Y. et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1, 541–551 (1989).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In CVPR'16: Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition 770–778 (2016).
BERT for Sequence Classification (Hugging Face, 2025); https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForSequenceClassification.
Sekara, V. et al. The chaperone effect in scientific publishing. Proc. Natl Acad. Sci. USA 115, 12603–12607 (2018).
Chen, T. & Guestrin, C. XGBoost: a scalable tree boosting system. In KDD'16: Proc. 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 785–794 (2016).
Hill, R. et al. The pivot penalty in research. Nature 642, 999–1006 (2025).
Milojević, S., Radicchi, F. & Walsh, J. P. Changing demographics of scientific careers: the rise of the temporary workforce. Proc. Natl Acad. Sci. USA 115, 12616–12623 (2018).
Xu, F., Wu, L. & Evans, J. Flat teams drive scientific innovation. Proc. Natl Acad. Sci. USA 119, e2200927119 (2022).
Lin, Y., Frey, C. B. & Wu, L. Remote collaboration fuses fewer breakthrough ideas. Nature 623, 987–991 (2023).
Kingman, J. F. C. Poisson Processes Vol. 3 (Clarendon, 1992).
Meisling, T. Discrete-time queuing theory. Operat. Res. 6, 96–105 (1958).
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (grant nos. U23B2030, 23IAA02114 and 62472241), the joint project of Infinigence AI & Tsinghua University, and the Tsinghua University-Toyota Research Institute to Y.L. and F.X. J.E. received support from the Novo Nordisk Foundation (Simulations of Science for Society), the NSF (grant no. 2404109) and the United States Department of Defense (Defense Advanced Research Projects Agency - Modeling and Measuring Scientific Creativity). The funders had no role in study design, data collection, analysis, preparation of the manuscript or the decision to publish.
Author information
Authors and Affiliations
Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, P. R. China
Qianyue Hao, Fengli Xu & Yong Li
Zhongguancun Academy, Beijing, P. R. China
Yong Li
Knowledge Lab and Department of Sociology, University of Chicago, Chicago, IL, USA
James Evans
Santa Fe Institute, Santa Fe, NM, USA
James Evans
Contributions
F.X., Y.L. and J.E. jointly launched this research and designed the research outline. Q.H. analysed the data and prepared the figures. All authors jointly participated in writing and revising the manuscript.
Corresponding authors
Correspondence to Fengli Xu, Yong Li or James Evans.
Ethics declarations
Competing interests
J.E. has a commercial affiliation with Google, but Google had no role in the design, analysis, or decision to publish this study. The authors declare no other competing interests.
Peer review
Peer review information
Nature thanks Giovanni Colavizza, Luis Nunes Amaral, Catherine Shea, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
Extended Data Fig. 1 Illustration for the method of identifying AI usage in research papers with fine-tuned language models.
(a) Structure of our deployed language model, which consists of the tokenizer, the core BERT model, and the linear layer. (b) Procedure of the two-stage model fine-tuning process, where we design specific approaches for constructing positive and negative data at each stage.
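The linear layer in panel (a) maps the model's pooled representation to a two-class probability. A schematic NumPy stand-in for that classification head (not the actual fine-tuned BERT, whose pretrained parameters are linked above) is:

```python
import numpy as np

def ai_probability(pooled_vec, W, b):
    """Schematic classification head: softmax(W @ h + b) over two classes.
    Returns the probability that a paper is AI-augmented (class 1)."""
    logits = W @ pooled_vec + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return (exp / exp.sum())[1]

# With zero weights the head is maximally uncertain: P = 0.5
print(ai_probability(np.zeros(8), np.zeros((2, 8)), np.zeros(2)))  # 0.5
```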
Extended Data Fig. 2 Procedure of accuracy evaluation via expert evaluation.
We randomly sample 1,320 papers and ask three experts to scrutinize the identification result for each paper. We then derive the final expert label for each paper by majority vote among the three experts and validate the language model's output against it. Results indicate strong agreement among experts and high accuracy of our identification results.
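The majority rule described above amounts to taking the most common of the three expert labels; a minimal sketch:

```python
from collections import Counter

def expert_label(labels):
    """Final label for a paper: the label assigned by at least two
    of the three experts (majority vote)."""
    return Counter(labels).most_common(1)[0][0]

print(expert_label([1, 1, 0]))  # 1
```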
Extended Data Fig. 3 Comparison of the total citations of AI and non-AI papers published in different eras.
Results show that AI papers consistently attract more citations over different eras (P < 0.001, n = 27,405,011), indicating a higher academic impact than non-AI papers. 99% CIs are shown as error bars centred at the mean, and the statistical tests use a two-sided t-test.
Source data
Extended Data Fig. 4 Annual publications of researchers adopting AI and their counterparts without AI.
Results show that in all 6 scientific disciplines, researchers adopting AI are more productive than their counterparts without AI (P < 0.001, n = 5,377,346). On average, researchers adopting AI annually publish 3.02 times more papers compared with those not using AI. 99% CIs are shown as error bars centred at the mean, and the statistical tests use a two-sided t-test.
Source data
Extended Data Fig. 5 Scientists’ career role transition.
(a) The career role transition of researchers. We consider the last author of each paper as research project leader and researchers who have been research project leaders as established researchers. Researchers who have yet to lead a research project are junior researchers, and they have two potential role transition pathways in the future: (1) become established researchers (solid arrow), and (2) abandon academia (dashed arrow). (b) Change in the ratio of conceptual work across the research career, before and after becoming an established researcher. The ratio increases rapidly before the role transition to established researchers, while it remains stable and high after that transition. 99% CIs are shown as error bands centred at the mean.
Source data
Extended Data Fig. 6 Team composition of AI and non-AI papers.
(a) AI research is associated with reduced research team sizes, averaging 1.33 fewer scientists (P < 0.001, n = 33,528,469). Specifically, the average number of junior scientists decreased from 2.89 in non-AI teams to 1.99 in AI teams (31.14%), while the number of established scientists decreased from 4.01 to 3.58 (10.77%). (b)-(d) Change in team size, average number of junior researchers, and average number of established researchers. These findings indicate that, within the overall trend of increasing research team sizes, AI adoption primarily contributes to a reduction in the number of junior scientists in teams, while the decrease in the number of established scientists is more moderate. (e) The average career age of team leaders in AI and non-AI papers. (f) The average career age of all involved established researchers in AI and non-AI papers. Results indicate that AI accelerates the transition from junior to established scientists, enabling AI-adopting researchers to become established at a younger age than those without AI. For all panels, 99% CIs are shown as error bars or error bands centred at the mean. All statistical tests use a two-sided t-test.
Source data
Extended Data Fig. 7 Model fitting the role transition time of junior scientists.
(a) (c) (e) Survival functions for the transition from junior to established researcher in (a) biology (n = 625,093), (c) medicine (n = 1,137,076), and (e) physics (n = 120,366). (b) (d) (f) Survival functions for junior researchers leaving academia in (b) biology (n = 625,093), (d) medicine (n = 1,137,076), and (f) physics (n = 120,366). All survival functions are well fit by exponential distributions: the expected time for junior scientists to become established is shorter for those who adopt AI (P < 0.001), whereas the expected time for junior scientists to abandon academia is similar or slightly longer for those who adopt AI. Results indicate that AI not only gives junior scientists the opportunity to become established scientists at a younger age, but also reduces the risk of their exiting academia early. For all panels, 99% CIs are shown as error bars centred at the mean. All statistical tests use a two-sided t-test.
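Under an exponential fit, the maximum-likelihood waiting time is simply the sample mean, and the survival function is S(t) = exp(-t/tau). A minimal sketch with illustrative, uncensored transition times (the study's fitting procedure may differ, for example in handling censored careers):

```python
import numpy as np

def fit_exponential(transition_times):
    """MLE for an exponential waiting time: tau_hat = sample mean.
    Returns tau_hat and the fitted survival function S(t) = exp(-t / tau)."""
    tau = float(np.mean(transition_times))
    return tau, lambda t: float(np.exp(-t / tau))

tau, survival = fit_exponential([2.0, 4.0, 6.0])  # years to become established
print(tau)            # 4.0
print(survival(0.0))  # 1.0
```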
Source data
Extended Data Fig. 8 The knowledge extent of AI and non-AI papers.
Here we visualize the embeddings of a small random sample of 2,000 papers, half AI papers and half non-AI papers. To avoid the randomness introduced by the t-SNE algorithm, we instead take the first two dimensions of the high-dimensional embeddings as a 2-D projection, and we repeat the analysis on 5 different random batches for each field to ensure robustness. As shown by the solid arrows and circular boundaries, the knowledge extent of AI papers is smaller than that of a comparable sample of non-AI papers, consistently across the fields studied in our analysis.
Extended Data Fig. 9 The knowledge extent of AI and non-AI papers in each subfield.
Compared with conventional research, AI research is associated with a shrinkage in the collective knowledge extent of science, where the contraction of knowledge extent can be observed in more than 70% of over two hundred sub-fields (n = 1,000 samples in each subfield). For all subfields, 99% CIs are shown as error bars centred at the mean.
Source data
Extended Data Fig. 10 The Matthew effect in citations to AI and non-AI papers.
In AI research, a small number of superstar papers dominate the field, with approximately 20% of top papers receiving 80% of citations and 50% receiving 95%. This unequal distribution leads to a higher Gini coefficient in citation patterns surrounding AI research (P < 0.001, n = 100 sampled paper groups for each discipline). Such disparity in the recognition of AI papers is consistent across all fields examined. For all panels, 99% CIs are shown as error bars or error bands centred at the mean. All statistical tests use a two-sided t-test.
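The Gini coefficient used here can be computed directly from a sorted citation vector; a minimal sketch with illustrative counts, not the study's data:

```python
import numpy as np

def gini(citations):
    """Gini coefficient of a citation distribution:
    0 = perfectly equal shares, (n-1)/n = one paper takes all citations."""
    x = np.sort(np.asarray(citations, dtype=float))
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return float(2.0 * np.sum(ranks * x) / (n * total) - (n + 1) / n)

print(gini([10, 10, 10, 10]))  # 0.0  (equal shares)
print(gini([0, 0, 0, 40]))     # 0.75 (one superstar paper)
```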
Source data
Supplementary information
Supplementary Information
Supplementary Sections 1–4, Figs. 1–37 and Tables 1–12. These provide further detail and background information, as well as numerous extended analyses and robustness tests for the main results.
Reporting Summary
Peer Review file
Supplementary Data
Source data for Supplementary Figs. 1 and 4–37.
Source data
Source Data Fig. 1.
Source Data Fig. 2.
Source Data Fig. 3.
Source Data Fig. 4.
Source Data Extended Data Fig. 3.
Source Data Extended Data Fig. 4.
Source Data Extended Data Fig. 5.
Source Data Extended Data Fig. 6.
Source Data Extended Data Fig. 7.
Source Data Extended Data Fig. 9.
Source Data Extended Data Fig. 10.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Hao, Q., Xu, F., Li, Y. et al. Artificial intelligence tools expand scientists’ impact but contract science’s focus.
Nature (2026). https://doi.org/10.1038/s41586-025-09922-y
Received: 02 January 2025
Accepted: 14 November 2025
Published: 14 January 2026
Version of record: 14 January 2026
DOI: https://doi.org/10.1038/s41586-025-09922-y
Associated content
AI tools boost individual scientists but could limit research as a whole
AI can turbocharge scientists’ careers — but limit their scope