Show HN: RLM Analyzer – AI code analysis using Recursive Language Models (MIT CSAIL research)

Hacker News

RLM Analyzer is a new AI code-analysis tool built by MIT CSAIL researchers. Using Recursive Language Models together with Google's Gemini 3, it can process codebases far beyond typical context limits. The tool offers both a command-line interface (CLI) and a programmatic API, and can integrate with AI coding assistants.


rlm-analyzer

RLM Analyzer

AI-powered code analysis using Recursive Language Models

Analyze any codebase with AI that can process 100x beyond context limits. Powered by Gemini 3 and based on MIT CSAIL research on Recursive Language Models.

Features

Table of Contents

Installation

Global Installation (Recommended for CLI)

Local Installation (For programmatic use)

npx (No installation required)
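The three installation modes above correspond to the usual npm commands. Only the package name `rlm-analyzer` is confirmed by this page; the `-g` and `npx` forms follow standard npm convention:

```shell
# Global install: puts the CLI on your PATH (recommended for command-line use)
npm install -g rlm-analyzer

# Local install: adds the package to the current project for programmatic use
npm install rlm-analyzer

# One-off run without installing anything
npx rlm-analyzer
```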

Quick Start

1. Configure API Key

Get a free API key from Google AI Studio, then:
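A plausible setup sketch, assuming the key is read from an environment variable; the variable name `GEMINI_API_KEY` follows the common convention for Gemini tooling and is an assumption here, not confirmed by this page:

```shell
# Export the key in your shell (or add this line to your shell profile).
# The exact variable name is an assumption; GEMINI_API_KEY is the usual
# convention for Gemini-based tools.
export GEMINI_API_KEY="your-api-key"
```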

2. Analyze Your Code

CLI Reference

Commands

Options

Examples

MCP Server Integration

RLM Analyzer includes an MCP (Model Context Protocol) server for integration with AI coding assistants like Claude Code and Cursor.

Setup with Claude Code

Add to your Claude Code configuration (~/.claude.json or project .mcp.json):
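A minimal sketch of such an entry, using the standard `mcpServers` schema that Claude Code reads; the launch command and arguments for rlm-analyzer's MCP server are assumptions, not the package's documented invocation:

```json
{
  "mcpServers": {
    "rlm-analyzer": {
      "command": "npx",
      "args": ["rlm-analyzer", "mcp"]
    }
  }
}
```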

Available MCP Tools

Example MCP Usage

Once configured, you can use these tools in Claude Code:

Programmatic API

Basic Usage

Factory Functions

Low-Level Orchestrator API

Advanced Features API

Model Configuration

Available Models

Configuration Priority

Model selection follows this priority order:

Using Model Aliases

Environment Variables

Config File

Create ~/.rlm-analyzer/config.json:
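A sketch of what such a config might contain; the field names and the model identifier below are illustrative assumptions, not the package's documented schema:

```json
{
  "apiKey": "your-gemini-api-key",
  "model": "gemini-3-flash"
}
```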

Advanced Features

RLM Analyzer implements cutting-edge techniques from the RLM paper for efficient token usage:

Context Compression (50-70% savings)

Automatically compresses sub-LLM results by extracting key information:
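The package's actual compressor isn't shown on this page. As an illustration of the idea, a minimal sketch that keeps only lines containing likely key findings; the marker list and the sample output are arbitrary placeholders:

```typescript
// Illustrative sketch of key-information extraction (not the package's code):
// keep only lines that look like findings, drop boilerplate to save tokens.
const KEY_MARKERS = ["error", "warning", "todo", "function", "class", "export"];

function compressResult(subLlmOutput: string): string {
  const lines = subLlmOutput.split("\n");
  const kept = lines.filter((line) =>
    KEY_MARKERS.some((m) => line.toLowerCase().includes(m))
  );
  return kept.join("\n");
}

const raw = [
  "Scanning 42 files...",
  "warning: unused variable in src/cli.ts",
  "All imports resolved.",
  "error: missing null check in src/api.ts",
].join("\n");

// Two of the four lines survive, roughly halving the token count.
const compressed = compressResult(raw);
```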

Sliding Window History

Keeps recent turns in full detail while compressing older context:
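A conceptual sketch of that windowing scheme (not the package's code): the last few turns pass through verbatim, and everything older collapses into a single summary turn. A real implementation would summarize with a model; truncation stands in for that here:

```typescript
// Illustrative sliding-window history: keep the last `windowSize` turns in
// full and collapse everything older into one summary turn.
interface Turn {
  role: "user" | "assistant";
  text: string;
}

function slidingWindowHistory(turns: Turn[], windowSize: number): Turn[] {
  if (turns.length <= windowSize) return turns;
  const older = turns.slice(0, turns.length - windowSize);
  const recent = turns.slice(turns.length - windowSize);
  // Stand-in for a real summarizer: truncate each old turn to 40 characters.
  const summary: Turn = {
    role: "assistant",
    text:
      "Summary of earlier turns: " +
      older.map((t) => t.text.slice(0, 40)).join(" | "),
  };
  return [summary, ...recent];
}
```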

Memory Bank

Extracts and stores key findings for later synthesis:

Adaptive Compression

Compression level adjusts based on context usage:
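A tiny sketch of what "adjusts based on context usage" could look like; the thresholds and level names are arbitrary placeholders, not the package's actual values:

```typescript
// Illustrative sketch: pick a compression level from the fraction of the
// context window already in use.
type CompressionLevel = "none" | "light" | "aggressive";

function adaptiveCompressionLevel(contextUsage: number): CompressionLevel {
  if (contextUsage < 0.5) return "none"; // plenty of room: keep full detail
  if (contextUsage < 0.8) return "light"; // getting full: trim verbosity
  return "aggressive"; // near the limit: summaries only
}
```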

Context Rot Detection

Detects when the model loses track of context and injects memory reminders.

Parallel Sub-Agent Execution

Runs multiple sub-LLM queries concurrently for faster analysis.
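Concurrent fan-out of independent sub-queries is typically a `Promise.all` over the chunk list. A self-contained sketch, with a stubbed sub-model call standing in for the real API request:

```typescript
// Illustrative sketch: fan out independent sub-queries concurrently and
// collect the results in order. `querySubLlm` is a stand-in for a real
// sub-model API call.
async function querySubLlm(chunk: string): Promise<string> {
  // Simulated latency; a real implementation would call the model API here.
  await new Promise((resolve) => setTimeout(resolve, 10));
  return `analysis of ${chunk}`;
}

async function analyzeInParallel(chunks: string[]): Promise<string[]> {
  // All requests are in flight at once; results come back in input order.
  return Promise.all(chunks.map((c) => querySubLlm(c)));
}
```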

Iterative Refinement (opt-in)

Multi-pass analysis for quality improvement on complex queries.

Configuration

API Key Storage

Your API key can be stored in multiple locations (checked in order):

File Filtering

Default file extensions analyzed:

Default directories excluded:

Supported Languages

How It Works

RLM Analyzer uses Recursive Language Models (RLMs) to analyze codebases that exceed traditional context limits:

The RLM Approach

This enables analysis of codebases 100x larger than traditional context windows.
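The recursive idea can be sketched as divide-and-conquer over the input: answer directly when the text fits a context budget, otherwise split, recurse, and synthesize the partial answers. This is a conceptual illustration with stubbed model calls, not the package's implementation:

```typescript
// Conceptual sketch of recursive decomposition. The budget and the stubbed
// answer/synthesis functions are placeholders for real model calls.
const CONTEXT_BUDGET = 100; // placeholder "token" budget, in characters here

function answerDirectly(text: string, query: string): string {
  return `findings(${query}): ${text.length} chars scanned`;
}

function synthesize(parts: string[]): string {
  return parts.join("; ");
}

function recursiveAnalyze(text: string, query: string): string {
  if (text.length <= CONTEXT_BUDGET) return answerDirectly(text, query);
  const mid = Math.floor(text.length / 2);
  const left = recursiveAnalyze(text.slice(0, mid), query);
  const right = recursiveAnalyze(text.slice(mid), query);
  return synthesize([left, right]);
}
```

Because each level only ever holds one budget-sized chunk plus the compact partial answers, the total input can exceed any single context window by a large factor.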

Cost Savings with MCP Integration

When used as an MCP tool with Claude Code or Cursor, RLM Analyzer significantly reduces costs by offloading expensive analysis to Gemini's more affordable API.

Pricing Comparison (Jan 2026)

Gemini is 6-30x cheaper per token than Claude.

Real-World Cost Example

Analyzing a 100-file codebase (~500KB, ~125K tokens):

Savings: ~60% on typical analysis tasks.

Larger Codebases (500+ files)

Why It Works

Additional Savings Options

Security

Troubleshooting

"API key not configured"

"No files found to analyze"

Make sure you're in a directory with code files, or specify a directory:
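A hedged example, assuming the CLI binary matches the package name and accepts a directory argument; check the tool's own help output for the actual syntax:

```shell
# Assumed invocation (not confirmed by this page): point the tool at a
# directory that contains source files.
rlm-analyzer ./src
```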

Analysis is slow

Execution errors in verbose mode

Some codebases trigger security filters (e.g., files containing process.env). The analysis will still complete but may take more turns.

MCP server not connecting

TypeScript Types

All types are exported for TypeScript users:

License

MIT

Credits

Based on research from MIT CSAIL:

Contributing

Contributions welcome! Please read our contributing guidelines before submitting PRs.

Support

Install

npm i rlm-analyzer

Repository

github.com/zendizmo/rlm-analyzer

Homepage

github.com/zendizmo/rlm-analyzer#readme

Version

1.3.0

License

MIT

Unpacked Size

316 kB

Total Files

56


Related Articles

  1. GLM Proxy: Convert Anthropic Messages API requests into the Z.ai GLM-4.7 API format

    4 months ago

  2. Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents

    2 months ago

  3. TLDR: Code analysis for AI agents

    3 months ago

  4. Run AI tools such as Cursor, Claude Code, and Codex on your own models with Lynkr

    3 months ago

  5. How AI coding agents work, and what to watch out for when using them

    4 months ago