
The author reflects on how, in 2025, AI agents became integral to running his three-person startup, markedly improving how the team ingests and processes information and learns, even though the company has not yet found product-market fit.

Richard's Substack
In 2025, AI Became My Co-Founder
How I'm using AI agents to help me find PMF

I posted this on LinkedIn today. It was a joke. Mostly.

I’d just finished reviewing a recent product experiment with Claude Code, which walked me through the ways we’re still making mistakes in our experimentation loop.
2025 has just come to an end, and my startup still hasn’t found PMF. We’re still getting things wrong. But at the same time, our ability to ingest and process information from the real world has never been better. And more importantly, we’re able to continuously improve the rate at which we’re learning. That’s in large part thanks to the roughly forty AI agents that now help me run my three-person startup. Let me explain how we got here.
My Year in Review
For me, 2025 has been a year of upskilling on product. My professional background is ML/AI engineering in Big Tech. When I started my startup Naptha AI, there was so much to learn about software engineering and architecture that I didn’t think about product at all. In hindsight, we predicted a lot of the technology really well. We had something like MCP internally, about a year before the first Anthropic release. We had something like MCP servers, an MCP registry, even an MCP agent framework. As a result, we managed to fundraise $6M to build the platform for the Internet of Agents. However, by the time we launched, MCP had come out and was getting a lot of traction. We had to scramble to adopt it. The space became hyper-competitive and incumbents moved in. We missed the wave.
That’s when I started reading to analyze what had gone wrong. I reread The Lean Startup (along with various Marty Cagan product books, Demand-Side Sales, and more). I’d first read it coming out of college, about six years earlier. The knowledge was in my head, and I could have recited the theory, but I’d still made every mistake in the book. We built in stealth for too long. We didn’t talk to users. It was painful to read.
At that point, we downsized the team to just five people and adopted a more experimental product-based approach. Throughout 2025, we launched four products, and started work on a fifth. We were among the first to do MCP hosting and agent auth. We created a benchmark for how well agents use libraries and APIs (that one hit the front page of Hacker News). We tried to make agent-guided developer onboarding for devtool companies a thing. In December, we pivoted and started discovery on a new product (which we referred to as Cursor for Knowledge Work or Notion for Agents). We completed our first experiment loop and, as I mentioned, are still making significant mistakes in our approach to finding PMF.
What I continue to realize with every iteration loop is that it’s actually very difficult to run a startup or product operating system faithfully. The hard part isn’t knowing Lean (or whatever else), it’s operationalizing it. There’s so much to keep in your head at one time. You need to be familiar with multiple frameworks (e.g. Lean, OKRs, sales), know at what stage each applies, and how they integrate with each other. Your approach continuously requires deep thinking, discussion, and analysis. But you don’t have time because you’re too busy shipping. You end up skipping steps, doing things out of order, or defining the experiment before the hypothesis. You forget the retrospective. You don’t improve the learning loop itself.
What I Learned about AI
Most people think AI agents are about doing work faster: replacing headcount, automating tasks. That framing misses what’s actually happening. The agents I’m in the loop with 8-10 hours a day don’t do my work. They hold things in mind that I can’t. These aren’t productivity tools. They’re more like external organs for thinking.
Knowing Lean isn’t the bottleneck. The bottleneck is sustaining the cognitive load of actually living inside the framework while also executing. Frameworks require constant vigilance. Humans get tired. The ritual replaces the meaning.
AI agents change the limiting factor. They externalize the vigilance (“are we actually doing what we said we’d do?”). They externalize organizational cognition itself. Suddenly a three-person startup can maintain the learning discipline that used to require a team of managers and consultants to enforce.
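To make "externalized vigilance" concrete, here is a minimal sketch of the kind of check an agent can run for you: auditing an experiment record against the Lean loop's required steps and their order. The step names, record fields, and function are hypothetical illustrations, not Naptha AI's actual tooling.

```python
# Hypothetical sketch: an agent "externalizes vigilance" by checking each
# experiment record against the Lean loop before it counts as complete.
# All names and fields here are illustrative, not a real system's schema.

LEAN_LOOP_STEPS = ["hypothesis", "experiment", "metric", "result", "retrospective"]

def audit_experiment(record: dict) -> list[str]:
    """Return the Lean-loop steps that are missing or were logged out of order.

    'Out of order' means a step was timestamped before the step that should
    precede it, e.g. the experiment was defined before the hypothesis.
    """
    issues = []
    # Flag any canonical step that is absent or empty in the record.
    for step in LEAN_LOOP_STEPS:
        if not record.get(step):
            issues.append(f"missing: {step}")
    # Compare the logged order (by timestamp) against the canonical order.
    logged = sorted(
        (ts, step) for step, ts in record.get("logged_at", {}).items()
        if step in LEAN_LOOP_STEPS
    )
    order = [step for _, step in logged]
    canonical = [step for step in LEAN_LOOP_STEPS if step in order]
    if order != canonical:
        issues.append(f"out of order: {order}")
    return issues

# Example: experiment defined before the hypothesis, metric and retro skipped.
record = {
    "hypothesis": "Devs will adopt agent-guided onboarding",
    "experiment": "Landing page + 10 user interviews",
    "result": "3 of 10 signed up",
    "logged_at": {"experiment": 1, "hypothesis": 2, "result": 3},
}
for issue in audit_experiment(record):
    print(issue)
```

Nothing here is sophisticated; the point is that the discipline lives in the checker rather than in your tired head, and an agent can run it on every loop without getting bored.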
I now believe a startup with just a few people can operate with the strategic rigor of a much larger team. That’s what I’m trying to prove. And in this newsletter, I plan to share that. I’ll show you how agents help me operate: how I strategize and run experiments, and how I decide whether to pivot or persevere. I’ll be transparent about the mistakes I continue to make, and how I use them to continuously iterate on my learning loop. Plus I’ll talk about what I’m learning from other founders doing the same.
Where might this all go? It’s kind of wild to be a founder in the midst of this transformation, and I believe we’re still very early. I sometimes think about how I’ll reflect on this time in a decade, successful or not.
Here’s my prediction. I think we’re moving towards a future where trillions of agents enable millions of independent small organizations to coordinate as effectively as large enterprises. If true, most of the disruption is yet to come.
But first, I have to figure out how to build a PMF machine.
How you can help me
What’s your approach to building products or finding PMF? How faithfully do you think you follow it? Do you ever get feedback on your approach? What’s the last mistake you found or improvement you made? Respond in the comments.