95% token savings. 155x faster queries. 16 languages. LLMs can't read your entire codebase. TLDR extracts structure, traces dependencies, and gives them exactly what they need.

TLDR: Code Analysis for AI Agents


Give LLMs exactly the code they need. Nothing more.

Your codebase is 100K lines. Claude's context window is 200K tokens. Raw code won't fit—and even if it did, the LLM would drown in irrelevant details.

TLDR extracts structure instead of dumping text. The result: 95% fewer tokens while preserving everything needed to understand and edit code correctly.

How It Works

TLDR builds 5 analysis layers, each answering different questions:

Why layers? Different tasks need different depth:

The daemon keeps the indexes in memory, so queries return in ~100 ms instead of the ~30 seconds a fresh CLI spawn would take.

Architecture

The Semantic Layer: Search by Behavior

The real power comes from combining all 5 layers into searchable embeddings.

Every function gets indexed with:

This gets encoded into 1024-dimensional vectors using bge-large-en-v1.5. The result: search by what code does, not just what it says.

Why this works: Traditional search finds authentication in variable names and comments. Semantic search understands that verify_access_token() performs JWT validation because the call graph and data flow reveal its purpose.
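The ranking step can be sketched with toy vectors. The real index uses 1024-dimensional bge-large-en-v1.5 embeddings stored in FAISS; the function names, dimensions, and numbers below are made up purely for illustration:

```python
import numpy as np

# Toy "embeddings": in TLDR these come from bge-large-en-v1.5 (1024-d);
# here we fake 4-d vectors so the example is self-contained.
index = {
    "verify_access_token": np.array([0.9, 0.1, 0.0, 0.1]),
    "render_homepage":     np.array([0.0, 0.8, 0.5, 0.1]),
    "hash_password":       np.array([0.7, 0.0, 0.1, 0.3]),
}

def search(query_vec, index, k=2):
    """Rank functions by cosine similarity to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(index.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# A query like "where is JWT validated?" embeds near the auth cluster,
# so auth-related functions rank first even without keyword matches.
query = np.array([0.85, 0.05, 0.05, 0.2])
print(search(query, index))  # → ['verify_access_token', 'hash_password']
```

The point of the sketch: ranking happens in vector space, so a behavioral query matches functions whose structure and data flow resemble it, not just functions that share its keywords.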

Setting Up Semantic Search

Embedding dependencies (sentence-transformers, faiss-cpu) are included with pip install llm-tldr. The index is cached in .tldr/cache/semantic.faiss.

Keeping the Index Fresh

The daemon tracks dirty files and auto-rebuilds after 20 changes, but you need to notify it when files change:

Integration options:

Git hook (post-commit):

Editor hook (on save):

Manual rebuild (when needed):

The daemon auto-rebuilds semantic embeddings in the background once the dirty threshold (default: 20 files) is reached.
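As a sketch of the git-hook option: the notify subcommand used below (`tldr notify`) is an assumption for illustration, since the exact CLI invocation is not spelled out here.

```shell
# Install a post-commit hook that tells the daemon which files changed.
# NOTE: 'tldr notify' is a hypothetical subcommand name, used for illustration.
mkdir -p .git/hooks   # normally already present in a git repo
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
git diff --name-only HEAD~1 HEAD | xargs tldr notify
EOF
chmod +x .git/hooks/post-commit
```

Once enough notifications accumulate to cross the dirty threshold, the daemon rebuilds the semantic index in the background.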

The Workflow

Before Reading Code

Before Editing

Before Refactoring

Debugging

Finding Code by Behavior

Quick Setup

1. Install

2. Index Your Project

This builds all analysis layers and starts the daemon. It takes 30-60 seconds for a typical project; after that, queries are instant.

3. Start Using

Real Example: Why This Matters

Scenario: Debug why user is null on line 42.

Without TLDR:

With TLDR:

Output: Only 6 lines that affect line 42:

The bug is obvious. Line 28 uses user without going through the null check path.
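The scenario corresponds to a bug pattern like the following (hypothetical code, not TLDR's actual slice output): one call site guards the lookup result against `None`, the other does not.

```python
# Hypothetical illustration of the bug a slice would surface:
# one code path guards against None, the other skips the check.

def lookup(user_id, db):
    return db.get(user_id)            # returns None for unknown ids

def get_user(user_id, db):
    user = lookup(user_id, db)
    if user is None:                  # guarded path
        user = {"name": "guest"}
    return user

def handle_request(user_id, db):
    user = lookup(user_id, db)        # "line 28": bypasses the None check
    return user["name"]               # "line 42": fails when lookup returned None
```

A slice over the variable at the failing line contains only these few statements, so the missing guard stands out immediately instead of being buried in the rest of the file.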

Command Reference

Exploration

Analysis

Cross-File

Semantic

Diagnostics

Daemon

Supported Languages

Python, TypeScript, JavaScript, Go, Rust, Java, C, C++, Ruby, PHP, C#, Kotlin, Scala, Swift, Lua, Elixir

Language is auto-detected, or you can specify it with --lang.

MCP Integration

For AI tools (Claude Desktop, Claude Code):

Claude Desktop - Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

Claude Code - Add to .claude/settings.json:
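Both entries follow the standard MCP server configuration schema. A minimal sketch for Claude Desktop's `mcpServers` map, where the server name, command, and `mcp` argument are assumptions rather than documented values:

```json
{
  "mcpServers": {
    "tldr": {
      "command": "tldr",
      "args": ["mcp"]
    }
  }
}
```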

Configuration

.tldrignore - Exclude Files

TLDR respects .tldrignore (gitignore syntax) for all commands including tree, structure, search, calls, and semantic indexing:

Default exclusions:

Customize by editing .tldrignore:
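An example `.tldrignore`, using typical entries (these specific patterns are illustrative, not the shipped defaults):

```
node_modules/
dist/
build/
*.min.js
vendor/
```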

CLI Flags:

Settings - Daemon Behavior

Create .tldr/config.json for daemon settings:
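A sketch of what such a file might contain. The key name is an assumption; the only documented value is the default dirty threshold of 20 files:

```json
{
  "dirty_threshold": 20
}
```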

Monorepo Support

For monorepos, create .claude/workspace.json to scope indexing:
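A hypothetical shape for such a file, where the key and paths are illustrative assumptions only:

```json
{
  "packages": ["services/api", "packages/shared"]
}
```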

Performance

Deep Dive

For the full architecture explanation, benchmarks, and advanced workflows:

Full Documentation

License

AGPL-3.0 - See LICENSE file.
