

In 2026, AI will move from hype to pragmatism

If 2025 was the year AI got a vibe check, 2026 will be the year the tech gets practical. The focus is already shifting away from building ever-larger language models and towards the harder work of making AI usable. In practice, that involves deploying smaller models where they fit, embedding intelligence into physical devices, and designing systems that integrate cleanly into human workflows.

The experts TechCrunch spoke to see 2026 as a year of transition: from brute-force scaling to research into new architectures, from flashy demos to targeted deployments, and from agents that promise autonomy to agents that actually augment how people work.

The party isn’t over, but the industry is starting to sober up.

Scaling laws won’t cut it


In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton’s AlexNet paper showed how AI systems could “learn” to recognize objects in pictures by looking at millions of examples. The approach was computationally expensive, but made possible with GPUs. The result? A decade of hardcore AI research as scientists worked to invent new architectures for different tasks.

That culminated around 2020 when OpenAI launched GPT-3, which showed that simply making the model roughly 100 times bigger unlocked abilities like coding and reasoning without explicit training for those tasks. This marked the transition into what Kian Katanforoosh, CEO and founder of AI agent platform Workera, calls the “age of scaling”: a period defined by the belief that more compute, more data, and larger transformer models would inevitably drive the next major breakthroughs in AI.

Today, many researchers think the AI industry is beginning to exhaust the limits of scaling laws and will once again transition into an age of research.

Yann LeCun, Meta’s former chief AI scientist, has long argued against the over-reliance on scaling, and stressed the need to develop better architectures. And Sutskever said in a recent interview that current models are plateauing and pre-training results have flattened, indicating a need for new ideas.

“I think most likely in the next five years, we are going to find a better architecture that is a significant improvement on transformers,” Katanforoosh said. “And if we don’t, we can’t expect much improvement on the models.”
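For readers who want a feel for why scaling alone runs out of steam, the compute-optimal fits DeepMind reported in its 2022 Chinchilla paper make the diminishing returns concrete. The sketch below uses those published constants purely as an illustration; the numbers are from that paper, not from this article’s sources.

```python
# Illustrative only: Chinchilla-style scaling law L(N, D) = E + A/N^alpha + B/D^beta,
# using the fits reported by Hoffmann et al. (2022). Loss is in nats per token.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in parameters (with plentiful data) buys less and less:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss {loss(n, 1e13):.3f}")
```

Each tenfold increase in parameters shrinks the loss by a smaller amount than the last, while the irreducible term `E` never moves, which is the mathematical version of the plateau Sutskever and Katanforoosh describe.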

Sometimes less is more

Large language models are great at generalizing knowledge, but many experts say the next wave of enterprise AI adoption will be driven by smaller, more agile language models that can be fine-tuned for domain-specific solutions.

“Fine-tuned SLMs will be the big trend and become a staple used by mature AI enterprises in 2026, as the cost and performance advantages will drive usage over out-of-the-box LLMs,” Andy Markus, AT&T’s chief data officer, told TechCrunch. “We’ve already seen businesses increasingly rely on SLMs because, if fine-tuned properly, they match the larger, generalized models in accuracy for enterprise business applications, and are superb in terms of cost and speed.”

We’ve seen this argument before from French open-weight AI startup Mistral: it argues its small models actually perform better than larger models on several benchmarks after fine-tuning.

“The efficiency, cost-effectiveness, and adaptability of SLMs make them ideal for tailored applications where precision is paramount,” said Jon Knisley, an AI strategist at ABBYY, an Austin-based enterprise AI company.

While Markus thinks SLMs will be key in the agentic era, Knisley says the nature of small models means they’re better for deployment on local devices, “a trend accelerated by advancements in edge computing.”
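The cost-and-edge argument for SLMs is, at bottom, a memory argument. A back-of-the-envelope sketch makes it concrete; the model sizes are illustrative picks, and the bytes-per-parameter figures are the standard ones for fp16 weights and 4-bit quantization.

```python
# Back-of-the-envelope memory needed just to hold model weights.
# fp16 stores each parameter in 2 bytes; 4-bit quantization in ~0.5 bytes.
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

for name, n in [("7B SLM", 7e9), ("70B LLM", 70e9)]:
    fp16 = weight_memory_gb(n, 2.0)
    q4 = weight_memory_gb(n, 0.5)
    print(f"{name}: {fp16:.1f} GB in fp16, {q4:.1f} GB at 4-bit")
```

A quantized 7B model needs roughly 3.5 GB for its weights, which fits on a laptop or phone-class device; a 70B model needs about 35 GB even at 4-bit, which still means server-class accelerators — the gap behind Knisley’s point about local deployment.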

Learning through experience


Humans don’t just learn through language; we learn by experiencing how the world works. But LLMs don’t really understand the world; they just predict the next word or idea. That’s why many researchers believe the next big leap will come from world models: AI systems that learn how things move and interact in 3D spaces so they can make predictions and take actions.

Signs that 2026 will be a big year for world models are multiplying. LeCun left Meta to start his own world model lab, and is reportedly seeking a $5 billion valuation. Google’s DeepMind has been plugging away at Genie, and in August launched its latest model that builds real-time interactive general-purpose world models. Alongside demos by startups like Decart and Odyssey, Fei-Fei Li’s World Labs has launched its first commercial world model, Marble. Newcomers like General Intuition in October scored a $134 million seed round to teach agents spatial reasoning, and video generation startup Runway in December released its first world model, GWM-1.

While researchers see long-term potential in robotics and autonomy, the near-term impact is likely to show up first in video games. PitchBook predicts the market for world models in gaming could grow from the $1.2 billion generated between 2022 and 2025 to $276 billion by 2030, driven by the tech’s ability to generate interactive worlds and more lifelike non-player characters.

Pim de Witte, founder of General Intuition, told TechCrunch that virtual environments may not only reshape gaming, but also become critical testing grounds for the next generation of foundation models.

Agentic nation

Agents failed to live up to the hype in 2025, and a big reason is that it’s hard to connect them to the systems where work actually happens. Without a way to access tools and context, most agents stayed trapped in pilot workflows.

Anthropic’s Model Context Protocol (MCP), a “USB-C for AI” that lets AI agents talk to external tools like databases, search engines, and APIs, proved to be the missing connective tissue, and is quickly becoming the standard. OpenAI and Microsoft have publicly embraced MCP, and Anthropic recently donated it to the Linux Foundation’s new Agentic AI Foundation, which aims to help standardize open-source agentic tools. Google has also begun standing up its own managed MCP servers to connect AI agents to its products and services.

With MCP reducing the friction of connecting agents to real systems, 2026 is likely to be the year agentic workflows finally move from demos into day-to-day practice.
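Under the hood, MCP sessions speak JSON-RPC 2.0, with methods such as `tools/list` and `tools/call`. The toy dispatcher below imitates that exchange to show the shape of the pattern; it is a simplified sketch, not the real MCP SDK or full protocol, and the `lookup_order` tool is a made-up example.

```python
import json

# Toy illustration of the JSON-RPC pattern behind MCP-style tool calls.
# Real MCP servers use the official SDKs and a richer schema; this sketch
# only shows the idea: an agent lists tools, then invokes one by name.
TOOLS = {
    "lookup_order": lambda args: {"order_id": args["order_id"], "status": "shipped"},
}

def handle(request_json: str) -> str:
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool(req["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# An agent discovers the available tools, then calls one:
print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
print(handle(json.dumps({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                         "params": {"name": "lookup_order",
                                    "arguments": {"order_id": "A17"}}})))
```

The value of standardizing this exchange is that any agent that speaks the protocol can discover and use any compliant tool server, which is why a shared spec mattered more than any single vendor’s integration.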

Rajeev Dham, a partner at Sapphire Ventures, says these advancements will lead to agent-first solutions taking on “system-of-record roles” across industries.

“As voice agents handle more end-to-end tasks such as intake and customer communication, they’ll also begin to form the underlying core systems,” Dham said. “We’ll see this in a variety of sectors like home services, proptech, and healthcare, as well as horizontal functions such as sales, IT, and support.”

Augmentation, not automation


While more agentic workflows might raise worries that layoffs may follow, Katanforoosh of Workera isn’t so sure that’s the message.

“2026 will be the year of the humans,” he said.

In 2024, seemingly every AI company predicted its technology would automate humans out of jobs. But the tech isn’t there yet, and in an unstable economy, that rhetoric doesn’t play well. Katanforoosh says that next year we’ll realize “AI has not worked as autonomously as we thought,” and the conversation will focus more on how AI is being used to augment human workflows rather than replace them.

“And I think a lot of companies are going to start hiring,” he added, noting that he expects there to be new roles in AI governance, transparency, safety, and data management. “I’m pretty bullish on unemployment averaging under 4% next year.”

“People want to be above the API, not below it, and I think 2026 is an important year for this,” de Witte added.

Getting physical


Advancements in technologies like small models, world models, and edge computing will enable more physical applications of machine learning, experts say.

“Physical AI will hit the mainstream in 2026 as new categories of AI-powered devices, including robotics, AVs, drones and wearables start to enter the market,” Vikram Taneja, head of AT&T Ventures, told TechCrunch.

While autonomous vehicles and robotics are obvious use cases for physical AI that will no doubt continue to grow in 2026, the training and deployment they require is still expensive. Wearables, on the other hand, provide a less expensive wedge with consumer buy-in. Smart glasses like Meta’s Ray-Bans are starting to ship with assistants that can answer questions about what you’re looking at, and new form factors like AI-powered health rings and smartwatches are normalizing always-on, on-body inference.

“Connectivity providers will work to optimize their network infrastructure to support this new wave of devices, and those with flexibility in how they can offer connectivity will be best positioned,” Taneja said.



Senior Reporter

Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.


