
An experienced web developer shares his personal approach to using AI in Sublime Text, emphasizing conversational code generation and iteration rather than inline autocomplete, to suit his varied tech stack.

How I use AI in Sublime Text


My Editor Situation

In my previous article about Sublime Text (Why I Still Like Sublime Text In 2025), I mentioned all the reasons I still like using Sublime Text over any of the other editors I have tried.

In quick summary, the overall speed/performance, snippets, LSP support, project workspaces, keybinding features, build system, and plugin ecosystem/interface are still unrivaled (in my opinion) when compared across the board with other editors.

Before you mention your favourite editor, I have likely tried it—maybe with the exception of Emacs. Even then, I am unlikely to switch to anything other than maybe Zed.

How I Use My AI

[Video: Sublime SimpleAI demo using a TOML file]

I have been a professional web developer since about 2012, so I have quite a bit of experience. The way I use AI is likely very different from someone who only has a couple of years of experience or is only familiar with TypeScript and React.

During my day job, I do use TypeScript and React most of the time. However, I am also using Astro, Alpine.js, various CI/CD scripts for BitBucket and GitHub Actions, Dockerfiles, raw CSS, and occasionally Twig, Liquid, and PHP via Laravel with and without Livewire.

Outside work projects, I am regularly writing fish shell, Lua and Luau, Python, and GDScript. Keep that in mind regarding what my needs are from AI tools.

I typically do not want or need inline autocomplete driven by AI. "AI autocomplete" is really not for me. I prefer to converse with an AI about what I am trying to do and iterate on a snippet of code until it is what I want. Then I bring it into my editor. I have various ways of doing this, which I will cover later.

Checking my hand-written code, adding comments, writing tests, reviewing PRs, generating fake data: these are all things I think it is great at. They are also chores I struggle to find the time to do myself, so I will delegate those to the robot for 0.01 cents per million tokens.

I am also someone who still reads the docs. Shocking, I know. But I still think the best way to get started in a framework is to read the entire documentation. Yes, read all the official documentation. I recently did this with Astro and it was incredibly helpful.

It is not a quick endeavour, but by reading about every detail of the tool, I believe it embeds something in your subconscious that helps you in ways you may not realize. But I digress.

I will usually try to visit the docs for the thing I am working on to see if there is a solution there before I turn to AI to actually generate the code for me. I think this also primes your prompting to make it more specific and hardened against common pitfalls or outdated LLM knowledge.

On to how I do use it.

In The Editor

Given the proliferation of AI tools, it has never been easier to generate plugins for Sublime. I found an existing plugin that I liked, forked it, and made my own changes. I called the plugin sublime-simpleai and it has two main features: "complete the current line" and "prompt with the current file or selection".

The way this plugin works is that it supports OpenAI-compatible API endpoints and sends one-off requests to them. The plugin itself is very basic.

The reason it is not pinned to a specific API is that I like to bounce around to multiple LLMs depending on the task. I also like to test out new releases when they come out. This helps me do that.
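The one-off request pattern is simple enough to sketch. Below is a minimal, hypothetical illustration (not the actual sublime-simpleai code) of how a request to an OpenAI-compatible chat endpoint might be assembled, with the base URL and model left configurable so it can point at whichever provider you are bouncing to that day:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, system_prompt, user_text):
    """Assemble a one-off request for an OpenAI-compatible /chat/completions endpoint.

    The shape (messages list, model field, Bearer auth) is what compatible
    providers share; base_url and model are whatever service you point at.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Actually sending it is a blocking call, so a plugin would run it off the
# UI thread, then read the reply along these lines:
# with urllib.request.urlopen(req) as resp:
#     answer = json.load(resp)["choices"][0]["message"]["content"]
```

Because every provider speaks the same wire format, switching LLMs is just a matter of changing `base_url` and `model` in the settings.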

One interesting part is that I used Sublime's snippet system for managing the two prompts. This is handy because it allows you to use the $TM_ variables (editor context variables like line position, file name, project name, etc.) in your prompt.

I also insert custom variables that are helpful for writing prompts based on the editor context. One such variable is $SYNTAX, which uses the current syntax that the file is set to instead of a flaky mapping of the file extension to a file type. This takes advantage of how you have your editor set up. Imagine working on a .eslintrc file that you have set to "JSON" syntax. That file should be treated as JSON, and the file extension (of which there is none in this case) has nothing to do with the actual source code.
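To make the idea concrete, here is a hedged sketch (not the plugin's actual implementation) of expanding editor-context variables in a prompt template. Inside Sublime, the `$TM_` variables come from the snippet engine and the syntax name would come from the view's assigned syntax; here the context is just a plain dict:

```python
from string import Template

def expand_prompt(template: str, context: dict) -> str:
    """Expand $VAR placeholders in a prompt template from editor context.

    In Sublime, TM_FILENAME etc. are provided by the snippet engine, and
    SYNTAX would be read from the syntax assigned to the view rather than
    guessed from the file extension.
    """
    # safe_substitute leaves unknown $VARs intact instead of raising
    return Template(template).safe_substitute(context)

# The .eslintrc example: no file extension, but the view's syntax is set
# to JSON, so the prompt correctly says JSON.
prompt = expand_prompt(
    "You are an expert in $SYNTAX. Improve the code in $TM_FILENAME.",
    {"SYNTAX": "JSON", "TM_FILENAME": ".eslintrc"},
)
```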

I even made it so that you can override any of the plugin settings at the project workspace level, including the prompts and which AI service/endpoint and model you are using. This is similar to what you would expect a VSCode-style editor to do if the integration was native.
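The override mechanism amounts to a layered merge: defaults first, then project-level values on top. The setting names below ("endpoint", "model", "prompt") are illustrative, not necessarily the plugin's actual keys:

```python
def effective_settings(defaults: dict, project_overrides: dict) -> dict:
    """Merge plugin defaults with per-project overrides (project wins).

    In Sublime this is the same idea as reading a plugin's .sublime-settings
    file and then applying a block from the .sublime-project file on top.
    """
    return {**defaults, **project_overrides}

defaults = {
    "endpoint": "https://api.example.com/v1",  # hypothetical default service
    "model": "general-model",
    "prompt": "You are a helpful coding assistant.",
}
project = {"model": "code-model"}  # this workspace prefers a different model

settings = effective_settings(defaults, project)
```

Anything the project does not override falls through to the default, so each workspace only declares what differs.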

The two functions work as follows:

Complete the current line

All this does is try to complete the current line, given the entire file as context. The idea is to generate a stand-alone snippet that replaces the current line or selection. I almost never use this.

Prompt with the current file or selection

Using the whole file or a selection, this attaches an additional prompt/message and opens a new "scratch" view (a read-only tab/buffer) with the response from the AI.

The prompt for this one is quite a bit more involved, but I have had very consistent results with it and it gives me what I want most of the time.


[Video: Sublime SimpleAI demo using a TOML file]

[Video: Sublime SimpleAI demo using an HTML file]

This may seem like a pretty crude way to use AI given all the new tools, but this approach allows me to reduce the distance I have to travel to get an AI answer for something I am in the middle of working on. This also puts the tool in my editor of choice in the way I prefer.

That is not the only way I use AI, though. I have a variety of other tools I use in the CLI rather than directly in the editor.

Outside The Editor

In order of how often I use them:

Crush

I use Crush perhaps the most out of the AI terminal tools. The UI is super slick and I like that it is integrated with LSP and MCP to help provide the best experience when working on code. It seems like other tools are also going this way, as Claude Code is now using LSP more.

Vibe

I have been using Vibe since it was released. I really like the speed and it seems well-tuned to following tasks.

OpenCode

I use OpenCode a little less than I used to, but they are constantly adding new features. I actually use their Zen service for Crush—which I’m sure is blasphemy if you know the history between the two.

Goose

I find Goose a little hit-or-miss, but I do find their recipe system to be a nice way to create reusable logic. They had this before "skills" and MCP actually took off.

Gemini CLI

I sometimes lean on Gemini CLI as they have a generous free tier and some cool capabilities with their "extensions" system. Extensions package up different tools (MCP, prompts, etc.) in one package that makes them easy to install and update.

Aider

I used to use Aider quite a bit, but it fell out of favor given the lack of MCP support. There are a few PRs for this, but as of this post, none are in a current (~0.86.0) release.

I’ll toss an honorable mention to gptme and Simon Willison's LLM tool with the llm-cmd plugin. I will often use the llm-cmd plugin to help me write more complicated snippets in the terminal.

Brave "Ask"

I really like the Brave "Ask" search tool. I have used Perplexity and Phind but have since ditched both for Brave. I like the emphasis it puts on search and references; it makes it really easy to find the actual links the results came from.

ChatWise

I use ChatWise as my desktop AI/LLM application. It has a simple UI with a solid UX.

Gemini App Site

I will often use the Gemini App Site with the "canvas" tool turned on. I like using this to generate single HTML files. I actually used this workflow to write a couple of Sublime plugins as well. Very handy.

I have dabbled with Codex and Claude a little bit, but the costs of those models just turn me off. I prefer to use OpenRouter and select models through there, without being tied to a subscription or having to worry about switching tools or accounts. I just switch the model when I need a different one.

Since I am employed at a company that is a Google Workspace customer, I have access to Gemini Pro. I have found it to be more than enough for my needs when I really need to churn on something, especially with Gemini 3.

For The Skeptics

There is clearly a lot of pressure for people to use these AI tools. Maybe it is because they are all losing money and are in a battle for your monthly payment?

There are clearly cases where it can be a productivity booster. As I said above, checking hand-written code, adding comments, writing tests, reviewing PRs, and generating fake data are all chores it handles well, for 0.01 cents per million tokens. And that is on the high side, depending on which model you pick.

So if you have been skeptical or have been struggling to find a place for AI in your workflow, maybe you are just using the wrong tools. The LLM tools are as ubiquitous as NPM packages these days, so I’m sure you can find one that works for you. Or do as I did, and build your own!



James Doyle

I'm a full-stack developer, co-organizer of PHP Vancouver meetup, and winner of a Canadian Developer 30 under 30 award. I'm a huge Open Source advocate and contributor to a lot of projects in my community. When I am not sitting at a computer, I'm trying to perfect some other skill.

