Show HN: EuConform – Offline-first EU AI Act compliance tool (open source)

A new open-source tool named EuConform has been posted on Hacker News. It helps developers comply with the EU AI Act by classifying AI risk levels and testing for algorithmic bias. The tool works offline and emphasizes GDPR compliance and accessibility.

EuConform

🇪🇺 Open-Source EU AI Act Compliance Tool

Classify risk levels • Detect algorithmic bias • Generate compliance reports
100% offline • GDPR-by-design • WCAG 2.2 AA accessible

Important

Legal Disclaimer: This tool provides technical guidance only. It does not constitute legal advice and does not replace legally binding conformity assessments by notified bodies or professional legal consultation. Always consult qualified legal professionals for compliance decisions.

🚀 Quick Start ·
📖 Docs ·
🌐 Deploy ·
🐛 Report Bug

✨ Features

🚀 Quick Start

Want to try it without installation? Click the 🌐 Deploy link above to start your own instance on Vercel.

Prerequisites

Installation
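
The concrete installation steps did not survive this page export. A minimal sketch, assuming Node.js 18+ and pnpm as prerequisites and the Hiepler/EuConform repository path from the project page; check the repository itself for the authoritative commands:

```bash
# Clone the repository (path taken from the project page)
git clone https://github.com/Hiepler/EuConform.git
cd EuConform

# Install dependencies and start the local dev server
# (the README itself references `pnpm dev` for local runs)
pnpm install
pnpm dev
```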

Using with Local AI Models (Optional)

For enhanced bias detection with your own models, connect a local Ollama instance.

Supports Llama, Mistral, and Qwen variants with automatic log-probability detection.
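
As an illustration of what automatic log-probability detection could look like, here is a TypeScript sketch that probes a local Ollama instance via its OpenAI-compatible completions endpoint. The endpoint path, the `logprobs` parameter, and the response shape are assumptions modeled on the OpenAI completions API that Ollama mirrors, not a documented EuConform interface:

```typescript
// Hypothetical probe: does the local model return token log-probabilities?
// Whether a given Ollama version honors `logprobs` is an assumption based
// on the version note in this README.
interface CompletionChoice {
  text: string;
  logprobs?: { token_logprobs?: (number | null)[] } | null;
}

async function supportsLogprobs(model: string): Promise<boolean> {
  const res = await fetch("http://localhost:11434/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: "The quick brown fox",
      max_tokens: 1,
      logprobs: 1, // ask for per-token log-probabilities
    }),
  });
  if (!res.ok) return false;
  const data: { choices?: CompletionChoice[] } = await res.json();
  const lp = data.choices?.[0]?.logprobs?.token_logprobs;
  return Array.isArray(lp) && lp.some((v) => typeof v === "number");
}

// Models that pass this probe would get the ✅ indicator mentioned below.
supportsLogprobs("llama3.2").then((ok) =>
  console.log(ok ? "✅ logprobs available" : "fallback scoring"));
```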

Warning

Vercel / Cloud Deployment: Local AI model support is not available on cloud deployments such as Vercel; this feature requires running EuConform locally (pnpm dev).

📖 Documentation

Legal Foundation & Compliance Coverage

Note

Primary Legal Source: Regulation (EU) 2024/1689 (EU AI Act)

Tool Coverage:

Implementation Timeline: Obligations become effective in stages. High-risk obligations apply from 2027. Always verify current guidelines and delegated acts.
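
To make the risk-classification side concrete: Regulation (EU) 2024/1689 sorts AI systems into four tiers (unacceptable, high, limited, minimal). The sketch below is a hypothetical questionnaire-driven classifier illustrating those tiers; the field names and rules are invented for illustration and are not EuConform's actual schema:

```typescript
// Hypothetical classifier over the AI Act's four risk tiers.
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

interface Answers {
  prohibitedPractice: boolean;  // e.g. social scoring (Art. 5)
  annexIIIUseCase: boolean;     // e.g. hiring, credit scoring, biometrics
  interactsWithHumans: boolean; // chatbots, deepfakes: transparency duties
}

function classify(a: Answers): RiskTier {
  if (a.prohibitedPractice) return "unacceptable"; // banned outright
  if (a.annexIIIUseCase) return "high";            // full conformity regime
  if (a.interactsWithHumans) return "limited";     // disclosure obligations
  return "minimal";                                // no specific obligations
}

console.log(classify({
  prohibitedPractice: false,
  annexIIIUseCase: true,
  interactsWithHumans: true,
})); // "high"
```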

Bias Testing Methodology

We use the CrowS-Pairs methodology (Nangia et al., 2020) to measure social biases in language models.

Tip

For best accuracy, use Ollama v0.1.26+ with models supporting the logprobs parameter (Llama 3.2+, Mistral 7B+).

The stereotype pairs are used solely for scientific evaluation and do not reflect the opinions of the developers. Individual pairs are not displayed in the UI to avoid reinforcing harmful stereotypes – only aggregated metrics are shown.
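
Concretely, the CrowS-Pairs score is the percentage of sentence pairs for which the model assigns a higher (pseudo-)log-likelihood to the stereotypical variant; an unbiased model lands near 50%. A minimal sketch of the aggregate metric, with the sentence scorer left abstract (the `SentenceScorer` here is a hypothetical stand-in for summed token log-probabilities from a local model):

```typescript
// Aggregate CrowS-Pairs-style metric: % of pairs where the model prefers
// the stereotypical sentence. 50% is the ideal (unbiased) value.
interface StereotypePair {
  stereotypical: string;
  antiStereotypical: string;
}

// Hypothetical scorer: returns the model's (pseudo-)log-likelihood of a
// sentence, e.g. the sum of per-token log-probabilities.
type SentenceScorer = (sentence: string) => Promise<number>;

async function crowsPairsScore(
  pairs: StereotypePair[],
  score: SentenceScorer,
): Promise<number> {
  let stereoPreferred = 0;
  for (const p of pairs) {
    const [s, a] = await Promise.all([
      score(p.stereotypical),
      score(p.antiStereotypical),
    ]);
    if (s > a) stereoPreferred++;
  }
  return (100 * stereoPreferred) / pairs.length; // only the aggregate is shown
}
```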

🏗️ Project Structure

🧪 Testing

🛠️ Tech Stack

❓ FAQ

Is this legal advice?
No. This tool provides technical guidance only. Always consult qualified legal professionals for compliance decisions.

Does my data ever leave my machine?
Never. All processing happens locally in your browser or via your local Ollama instance. No data is sent to external servers.

Which models work for bias testing?
Any model works, but models with log-probability support (Llama 3.2+, Mistral 7B+) provide more accurate results. Look for the ✅ indicator.

Can I use this commercially?
Yes. The tool is dual-licensed under MIT and EUPL-1.2 for maximum compatibility.

🤝 Contributing

We welcome contributions! Please read our Contributing Guide and Code of Conduct first.

See CONTRIBUTING.md for detailed guidelines.

🔒 Security

For security concerns, please see our Security Policy. Do not create public issues for security vulnerabilities.

📄 License

Dual-licensed under the MIT License and the EUPL-1.2 (European Union Public Licence), for maximum compatibility.

Made with ❤️ for responsible AI in Europe

Issues ·
Discussions

Related articles

  1. Open-source AI audit-readiness kit for startups

     3 months ago

  2. AI regulation: fact or fiction?

     3 months ago

  3. Why external AI inference violates Articles 12 and 61 of the EU AI Act by default

     3 months ago

  4. RiskLit: Building trustworthy AI for regulators, investors, and customers

     6 months ago

  5. EU-hosted AI chatbot platform launched after customers refused to use OpenAI

     4 months ago