ElevenLabs CEO: Voice is the next interface for AI

TechCrunch

ElevenLabs CEO Mati Staniszewski says voice is becoming AI's primary interface, surpassing text and screens. That vision is backed by the company's recent $500 million raise.



ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI – the way people will increasingly interact with machines as models move beyond text and screens.

Speaking at Web Summit in Doha, Staniszewski told TechCrunch that voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech, including emotion and intonation, to working in tandem with the reasoning capabilities of large language models. The result, he argued, is a shift in how people interact with technology.

In the years ahead, he said, “hopefully all our phones will go back in our pockets, and we can immerse ourselves in the real world around us, with voice as the mechanism that controls technology.”

That vision fueled ElevenLabs’s $500 million raise this week at an $11 billion valuation, and it is increasingly shared across the AI industry. OpenAI and Google have both made voice a central focus of their next-generation models, while Apple appears to be quietly building voice-adjacent, always-on technologies through acquisitions like Q.ai. As AI spreads into wearables, cars, and other new hardware, control is becoming less about tapping screens and more about speaking, making voice a key battleground for the next phase of AI development.

Iconiq Capital general partner Seth Pierrepont echoed that view onstage at Web Summit, arguing that while screens will continue to matter for gaming and entertainment, traditional input methods like keyboards are starting to feel “outdated.”

And as AI systems become more agentic, Pierrepont said, the interaction itself will also change, with models gaining guardrails, integrations, and context needed to respond with less explicit prompting from users.

Staniszewski pointed to that agentic shift as one of the biggest changes underway. Rather than spelling out every instruction, he said future voice systems will increasingly rely on persistent memory and context built up over time, making interactions feel more natural and requiring less effort from users.


That evolution, he added, will influence how voice models are deployed. While high-quality audio models have largely lived in the cloud, Staniszewski said ElevenLabs is working toward a hybrid approach that blends cloud and on-device processing, a move aimed at supporting new hardware, including headphones and other wearables, where voice becomes a constant companion rather than a feature users deliberately invoke.

ElevenLabs is already partnering with Meta to bring its voice technology to products including Instagram and Horizon Worlds, the company’s virtual reality platform. Staniszewski said he would also be open to working with Meta on its Ray-Ban smart glasses as voice-driven interfaces expand into new form factors.

But as voice becomes more persistent and embedded in everyday hardware, it raises serious concerns around privacy, surveillance, and how much personal data voice-based systems will store as they move closer to users' daily lives, an area where companies like Google have already faced accusations of abuse.


Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.

You can contact or verify outreach from Rebecca by emailing [email protected] or via encrypted message at rebeccabellan.491 on Signal.


Related articles

  1. ElevenLabs CEO says the voice AI startup's revenue topped $330 million last year

    3 months ago

  2. ElevenLabs raises $500 million from Sequoia Capital at an $11 billion valuation

    3 months ago

  3. Explore the future of voice AI with Mati Staniszewski at TechCrunch Disrupt 2025

    7 months ago

  4. ElevenLabs CEO predicts AI audio models will become commoditized over time

    6 months ago

  5. The future of AI is voice

    Hacker News · 3 months ago