Show HN: Stability First AI – Recovering Memory Without Training Data


This Hacker News "Show HN" post introduces Stability First AI, a project that tackles catastrophic forgetting in neural networks using a Recursive Time architecture, Active Sleep (generative replay), and Temporal LoRA. The project aims to demonstrate the "Lazarus Effect" by recovering memory without retraining.


Solving catastrophic forgetting with a Recursive Time architecture, Active Sleep (generative replay), and Temporal LoRA for GPT-2. Proving the "Lazarus Effect" in neural networks.


Repository: vitali-sialedchyk/stability-first-ai


⏳ Recursive Time & Stability-First AI


A collection of experiments exploring memory, catastrophic forgetting, and temporal modularity in neural networks.

Author: Vitali Sialedchyk

🧠 Core Thesis

Modern AI systems exist in "instantaneous time" — optimizing only for the current data batch. This project implements the Stability-First hypothesis:

Time in an AI system is defined by structural inertia. By treating weight stability as "System Time", we can prevent catastrophic forgetting and achieve modular, reversible learning.
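
One concrete reading of this hypothesis is an EWC-style regularizer in which long-stable weights accumulate "inertia" and become expensive to rewrite. The sketch below is an assumption, not the repository's code: the `StabilityPenalty` class, its gradient-based stability test, and all constants are illustrative.

```python
import torch

class StabilityPenalty:
    """Hypothetical sketch of a "stability-first" regularizer: weights
    that have stayed stable for a long time (old "System Time") are
    pulled back toward their consolidated values on later tasks."""

    def __init__(self, lam=1000.0, decay=0.99):
        self.lam, self.decay = lam, decay
        self.anchors, self.inertia = {}, {}

    def track(self, model):
        """Call after each optimizer step: stable weights gain inertia."""
        for n, p in model.named_parameters():
            if n not in self.inertia:
                self.inertia[n] = torch.zeros_like(p)
            grad = p.grad if p.grad is not None else torch.zeros_like(p)
            stable = (grad.abs() < 1e-4).float()   # low-gradient = inert
            self.inertia[n] = self.decay * self.inertia[n] + (1 - self.decay) * stable

    def consolidate(self, model):
        """Call at a task boundary: freeze the reference point in time."""
        self.anchors = {n: p.detach().clone() for n, p in model.named_parameters()}

    def loss(self, model):
        """Add to the task loss while training the next task."""
        if not self.anchors:
            return torch.tensor(0.0)
        return self.lam * sum(
            (self.inertia[n] * (p - self.anchors[n]) ** 2).sum()
            for n, p in model.named_parameters()
        )
```

Under this reading, a weight's "System Time" is simply how long it has gone without moving, and the penalty makes old time costly to overwrite.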

📂 Project Roadmap

🚀 Quick Start ("Hero" Experiment)

If you want to run just one experiment, choose Temporal LoRA. It demonstrates dynamic context switching in GPT-2.

Watch as the model automatically learns to route "To code or not to code" to the Shakespeare adapter, and "import torch" to the Python adapter.
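
The experiment's code isn't inlined in this README, so here is a minimal sketch of the mechanism under stated assumptions: a frozen base linear layer, one low-rank adapter per temporal epoch, and a learned gate that mixes them per input. `TemporalLoRALinear` and its shapes are illustrative, not the repo's API.

```python
import torch
import torch.nn as nn

class TemporalLoRALinear(nn.Module):
    """Hypothetical sketch: a frozen base layer plus one low-rank
    adapter per "epoch" (e.g. Shakespeare vs. Python), mixed by a
    learned gate for each input."""

    def __init__(self, d_in, d_out, rank=8, n_epochs=2):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)   # backbone stays frozen
        self.base.bias.requires_grad_(False)
        # One (A, B) low-rank pair per temporal epoch.
        self.A = nn.Parameter(torch.randn(n_epochs, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_epochs, rank, d_out))
        self.gate = nn.Linear(d_in, n_epochs)     # routes inputs to epochs

    def forward(self, x):                          # x: (batch, d_in)
        w = torch.softmax(self.gate(x), dim=-1)    # (batch, n_epochs)
        delta = torch.einsum("bi,eir,ero->beo", x, self.A, self.B)
        return self.base(x) + (w.unsqueeze(-1) * delta).sum(dim=1)

layer = TemporalLoRALinear(d_in=768, d_out=768)
print(layer(torch.randn(4, 768)).shape)            # torch.Size([4, 768])
```

In the full GPT-2 experiment the gate would presumably read hidden states, with each adapter trained on one corpus.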

📊 Key Results

1. Lazarus Effect (Latent Reversibility)

We proved that even when model accuracy on Task A drops to 0.00% after training on Task B, knowledge remains encoded in the backbone.

Recovery: 94.65% accuracy recovered with just 50 examples.
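
A minimal probe for this kind of latent reversibility, assuming the usual split into a shared backbone and a task head (the function and argument names below are hypothetical), freezes the backbone after Task B and refits only a fresh linear head on the small Task A sample:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lazarus_probe(backbone, recovery_batch, test_batch, n_classes,
                  steps=200, lr=1e-2):
    """Hypothetical sketch: after Task B drives Task A accuracy to 0%,
    freeze the backbone, refit a fresh linear head on ~50 Task A
    examples, and measure recovery on held-out Task A data."""
    xr, yr = recovery_batch                 # e.g. just 50 Task A examples
    xt, yt = test_batch                     # held-out Task A data
    backbone.eval()
    with torch.no_grad():                   # backbone stays frozen throughout
        fr, ft = backbone(xr), backbone(xt)
    head = nn.Linear(fr.shape[-1], n_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(head(fr), yr).backward()
        opt.step()
    with torch.no_grad():
        acc = (head(ft).argmax(-1) == yt).float().mean().item()
    return acc    # high recovery supports latent (not erased) knowledge
```

If the features were truly destroyed, no 50-example head could recover ~95% accuracy; that is what makes this a test of where the knowledge lives.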

2. Time Mixer Accuracy (GPT-2)

In our Temporal LoRA experiment, the gating network successfully learned to distinguish semantic epochs.

Router accuracy: 100.0% after contrastive calibration.
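
The README doesn't spell out what "contrastive calibration" involves; one standard interpretation, sketched below purely as an assumption, is a supervised InfoNCE-style loss that clusters the gate's features by epoch label before the router is read out:

```python
import torch
import torch.nn.functional as F

def contrastive_calibration_loss(feats, epoch_labels, temperature=0.1):
    """Hypothetical sketch: supervised contrastive loss over gate
    features, pulling same-epoch inputs together and pushing
    different epochs apart."""
    z = F.normalize(feats, dim=-1)                     # (batch, d)
    sim = z @ z.t() / temperature                      # pairwise similarity
    n = z.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (epoch_labels.unsqueeze(0) == epoch_labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))    # drop self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.sum(1) > 0].mean()

# Usage: features from the gate, labels 0 = Shakespeare, 1 = Python.
loss = contrastive_calibration_loss(
    torch.randn(8, 64), torch.tensor([0, 0, 0, 0, 1, 1, 1, 1]))
```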

3. Subjective Time (The Critic)

In experiment #6, we showed how a system can autonomously regulate its learning rate (λ) based on prediction error (Surprise). This mimics dopamine function in the brain.

Result: Lambda dynamically adapts from 1805 (high Surprise) to 2647 (low Surprise).
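
Since λ rises as Surprise falls (1805 → 2647), it behaves like a stability weight rather than a step size. Below is a minimal sketch of a surprise-gated update; the `update_lambda` rule and every constant in it are assumptions, not experiment #6's actual code.

```python
def update_lambda(lam, surprise, base=2500.0, sensitivity=5.0,
                  rate=0.5, lam_min=100.0, lam_max=5000.0):
    """Hypothetical sketch of a surprise-gated stability weight:
    high prediction error (Surprise) lowers lambda (more plasticity),
    low Surprise raises it (more stability), loosely like dopamine
    gating plasticity."""
    target = base / (1.0 + sensitivity * surprise)  # high surprise -> low target
    lam += rate * (target - lam)                    # smooth adaptation
    return max(lam_min, min(lam_max, lam))

lam = 1805.0
for surprise in [0.8, 0.4, 0.1, 0.0]:   # surprise decays as learning settles
    lam = update_lambda(lam, surprise)
print(round(lam))                        # 1915: lambda rose as surprise faded
```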

📁 Project Structure

🚀 Running All Experiments

Project 1: Active Sleep (MNIST)

Result: Task A retention: 96.30% ✅
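
Generative replay ("Active Sleep") can be sketched as follows, assuming a generator trained on Task A and a frozen snapshot of the pre-Task-B model for pseudo-labels; every name here is illustrative rather than the project's API:

```python
import torch
import torch.nn.functional as F

def active_sleep_step(model, old_model, generator, task_b_batch, opt,
                      replay_size=64, replay_weight=1.0):
    """Hypothetical sketch of one generative-replay step: "dream" Task A
    inputs, label them with the pre-Task-B model, and mix that loss
    with the real Task B loss."""
    xb, yb = task_b_batch
    with torch.no_grad():
        # Sample dreams from a frozen generator (e.g. a VAE decoder).
        dreams = generator(torch.randn(replay_size, generator.latent_dim))
        pseudo = old_model(dreams).argmax(dim=-1)  # old model labels its dreams
    opt.zero_grad()
    loss = (F.cross_entropy(model(xb), yb)
            + replay_weight * F.cross_entropy(model(dreams), pseudo))
    loss.backward()
    opt.step()
    return loss.item()
```

The replay term keeps Task A gradients flowing even though no stored Task A data is touched, which is the mechanism that would keep retention high.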

Project 2: Temporal LoRA (GPT-2) 🌟 HERO

Result: Router Accuracy: 100.0% ✅

Project 3: Stability-First (Basic)

Result: Task A retention: 93.52% ✅

Project 4: Stability-First (Reversibility)

Result: Task A retention: 94.65% ✅

Project 5: Recursive-Time (Full Suite)

Result: All methods show 94-95% retention ✅

Project 6: Subjective Time (The Critic)

Result: Lambda adapts dynamically (1805 → 2647) ✅

📈 Results Comparison Table

🎯 Key Takeaways

🔧 Technical Details

Windows Fixes

Dependencies

📚 Documentation

🤝 Citation

If you find this research useful, please use the following citation:

⚖️ License & Commercial Use

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).

Want to use Stability-First AI in your product?
We offer commercial licensing options including support and architectural consulting.
📩 Contact: [email protected] or via GitHub Issues.

See the LICENSE file for full terms and conditions.

🏆 Achievements

Last updated: 2026



Related articles

  1. Show HN: Remember Me AI (full release) – AI memory system at 40x lower cost

    4 months ago

  2. Measuring AI math time horizons without chain of thought (single forward pass)

    Lesswrong · 4 months ago

  3. Breaking the linear barrier: recursive swarms for long-horizon AI engineering

    3 months ago

  4. Building an AI data analyst: the engineering nightmares nobody tells you about

    4 months ago

  5. Building a 10-million-node AI memory system: architecture, failures, and lessons learned

    4 months ago
