AskCodi Blog

Designing a Design Contract for AI

What if AI wasn’t allowed to ignore your design rules? I built a system that makes it impossible.


How I built a system that forces AI to respect design decisions

[Previously: I discovered that AI generates generic UIs because it lacks context, not capability. System prompts and metadata didn’t work. I needed something stronger.]


What Should a UI Intent Look Like?

I started designing what a UI Intent schema should be.

It needed to be:

Comprehensive: Cover all aspects of UI design

Deterministic: The same intent always produces the same derived rules

Explicit: No ambiguity or interpretation

Enforceable: Technical mechanism to block violations

Evolvable: Can change over time with audit trail

Here’s what I came up with:
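The original schema isn't reproduced here, but as a rough sketch of the shape such a contract could take (field names below are illustrative stand-ins, not the exact schema):

```python
# Sketch of a UI Intent contract. Field names are illustrative guesses
# based on the layers described later in this post, not the exact schema.
ui_intent = {
    "version": 1,
    "product_type": "landing_page",   # landing_page | dashboard | saas_app | internal_tool
    "audience": "non_technical",
    "goals": ["convert visitors to trial signups"],
    "design_tone": "professional",    # minimal | professional | playful | bold
    "non_goals": ["no_authentication", "no_admin_panels"],
    "constraints": {"max_primary_ctas": 1, "navigation": "top_only"},
}
```

Every field is explicit and machine-checkable, which is what makes enforcement (rather than mere suggestion) possible.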

This wasn’t just metadata. This was a design contract.

The Enforcement Challenge

The schema was one thing. Enforcement was another.

I had three options:

Option 1: Prompt-Based Enforcement

Include intent in system prompt. Tell AI “You MUST follow this intent”. Hope it listens.

Pros: Easy to implement
Cons: Not reliable, AI can ignore prompts

Option 2: Post-Generation Validation

Let AI generate code. Validate against intent. Reject if violations found.

Pros: Catches violations
Cons: Wastes tokens, slow feedback loop

Option 3: Runtime Enforcement Gate

AI must classify alignment before file operations. Block operations that conflict with intent. Require explicit intent update for conflicts.

Pros: Bulletproof enforcement
Cons: Complex implementation

I chose Option 3. Here’s why:

If the AI wants to create a login page but the intent says "no_authentication", I don’t want to:

Let it generate the code and then reject it (wasteful)

Hope the AI notices the conflict in the prompt (unreliable)

I want a hard gate that stops the operation and says:

“This request conflicts with UI Intent. You cannot create authentication features. You must either:

Adapt the request to align with intent (suggest alternative)

Update the intent explicitly (with reason) before proceeding”

This forces intentionality. The AI can’t accidentally drift. The user sees why requests are blocked. Intent changes are explicit and audited.

The Classification Protocol

Here’s how I implemented enforcement:

Step 1: Before any file operation, AI must classify the request

The AI calls a classification tool that returns structured data with four key fields:

Intent Alignment: One of three states (ALIGNED, PARTIALLY_ALIGNED, or CONFLICTING)

Violations: List of specific rules broken (e.g., “no_authentication”, “product_type_mismatch”)

Reasoning: Human-readable explanation of why this classification was chosen

Recommended Action: Whether to proceed, adapt the request, or update intent first

This isn’t free-text evaluation. It’s machine-readable JSON that the enforcement gate can validate programmatically.

Step 2: Runtime gate validates classification

ALIGNED: Proceed immediately with file operations

PARTIALLY_ALIGNED: Check if change type is in the allowed list (visual refinements yes, backend logic no)

CONFLICTING: Block file operations unless AI first calls the update intent tool with explicit reason

Step 3: Intent updates are versioned

Every intent change is recorded with:

What changed (JSON diff from previous version)

Why it changed (mandatory reason field provided by AI)

Who changed it (user, AI, or system attribution)

When it changed (timestamp with millisecond precision)

This creates an append-only audit trail of design decisions that can never be silently modified.
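A minimal sketch of such an append-only history entry, assuming illustrative field names:

```python
from datetime import datetime, timezone

# Append-only audit trail of intent changes. Field names are illustrative.
history = []

def record_intent_change(diff, reason, changed_by):
    entry = {
        "diff": diff,              # what changed (diff from previous version)
        "reason": reason,          # why it changed (mandatory)
        "changed_by": changed_by,  # who changed it: user | ai | system
        # when it changed, with millisecond precision
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
    }
    history.append(entry)  # append-only: entries are never mutated or removed
    return entry

record_intent_change(
    diff={"non_goals": {"removed": ["no_authentication"]}},
    reason="User explicitly requested user management features.",
    changed_by="ai",
)
```

Because the list only ever grows, rolling back is just replaying history up to an earlier entry.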

The Three Layers of UI Intent

UI Intent ended up having three layers:

Layer 1: Base Intent (User Defined)

Product type, audience, goals. High-level design tone and constraints. Non-goals and forbidden features.

This is the source of truth.

Layer 2: Design Profile (Derived)

Concrete design rules derived from base intent. Layout strategies, component allowlists and blocklists. Typography rules, spacing scales. Anti-patterns and quality checklists.

Deterministic derivation (same intent → same profile).

Layer 3: UI Archetypes (Hard Constraints)

Product-type-specific layout rules. Landing pages vs dashboards vs internal tools. Maximum CTAs, navigation requirements. Forbidden patterns per archetype.

Non-negotiable constraints.

This layering ensures that:

User has control (Layer 1)

AI gets explicit rules (Layer 2)

System prevents nonsense (Layer 3)
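Layer 3 in particular lends itself to a simple lookup table of hard constraints. A sketch, with values invented for illustration:

```python
# Sketch of Layer 3 archetype constraints. The rule values here are
# illustrative, not the actual archetype tables from the system.
UI_ARCHETYPES = {
    "landing_page": {"max_ctas": 1, "navigation": "top_only",
                     "forbidden": ["sidebar", "data_table", "admin_panel"]},
    "dashboard":    {"max_ctas": 3, "navigation": "sidebar",
                     "forbidden": ["hero_section", "testimonials"]},
}

def check_archetype(product_type, component):
    """Hard constraint: reject components forbidden for this archetype."""
    return component not in UI_ARCHETYPES[product_type]["forbidden"]
```

Because the table is data, not prompt text, there is nothing for the AI to reinterpret or negotiate.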

Deriving Concrete Rules from Abstract Intent

The most complex part was deriving concrete design rules from abstract intent.

For example, given a base intent describing a professional marketing landing page for a non-technical audience, the derived design profile should specify concrete, actionable constraints:

Layout Strategy: Linear sectional flow (hero → features → social proof → CTA)

Navigation: Top navigation only, minimal with max 5 items

Density: Spacious section spacing (py-16 on mobile, py-24 on desktop)

Visual Motifs: Structured borders, subtle shadows, no gradients

Allowed Components: Hero sections, feature grids, testimonials, FAQ sections

Forbidden Components: Sidebars, data tables, admin panels

Interaction Rules: Maximum 1 primary CTA, simple forms only

Anti-Patterns: Generic centered div heroes, more than 3 competing CTAs, login/signup flows

Quality Checklist: Hero with clear value prop, single prominent CTA, sectional vertical flow

I built deterministic mapping matrices that translate abstract intent to explicit rules:

Product Type Rules (4 types): Landing pages get spacious sectional layouts. Dashboards get dense grids with sidebars. SaaS apps get hybrid marketing and product sections. Internal tools get compact, function-first layouts.

Design Tone Motifs (4 tones): Minimal gets flat design with minimal shadows. Professional gets subtle shadows and structured borders. Playful gets rounded corners and accent colors. Bold gets high contrast and strong borders.

Experience Level Adaptations (3 levels): Non-technical users get guided flows and plain language. Technical users get dense controls and keyboard shortcuts. Mixed audiences get balanced complexity with progressive disclosure.

Each combination produces specific, actionable rules that the AI must follow. Same intent always produces same profile. Completely deterministic.
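The derivation itself can be as simple as a pure function over lookup tables. A sketch with abbreviated rules (the real matrices cover far more fields):

```python
# Deterministic mapping matrices, abbreviated for illustration.
PRODUCT_TYPE_RULES = {
    "landing_page":  {"layout": "sectional_flow", "density": "spacious", "sidebar": False},
    "dashboard":     {"layout": "dense_grid",     "density": "compact",  "sidebar": True},
    "saas_app":      {"layout": "hybrid",         "density": "balanced", "sidebar": True},
    "internal_tool": {"layout": "function_first", "density": "compact",  "sidebar": False},
}

TONE_MOTIFS = {
    "minimal":      {"shadows": "none",   "borders": "flat"},
    "professional": {"shadows": "subtle", "borders": "structured"},
    "playful":      {"shadows": "soft",   "borders": "rounded"},
    "bold":         {"shadows": "hard",   "borders": "strong"},
}

def derive_profile(intent):
    """Pure function: the same intent always yields the same profile."""
    return {
        **PRODUCT_TYPE_RULES[intent["product_type"]],
        **TONE_MOTIFS[intent["design_tone"]],
    }

profile = derive_profile({"product_type": "landing_page", "design_tone": "professional"})
```

No randomness, no model in the loop: determinism falls out of using plain dictionary lookups.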

Why This Architecture Works

1. Structured Classification (Not Free Text)

The AI can’t give vague responses like “this seems okay” or “maybe violates intent”. It must provide machine-readable JSON with specific violation types.

This makes enforcement programmable.

2. Three Tiers (Not Binary)

ALIGNED gives zero friction for common requests.

PARTIALLY_ALIGNED lets AI adapt within constraints (e.g., “client-side only” instead of backend).

CONFLICTING forces explicit intent update for major changes.

Binary allow/deny would be too restrictive.

3. Append-Only Versioning

Every intent change creates an immutable record. You can audit the entire evolution of design decisions. You can rollback to any previous version.

This is critical for understanding why an app evolved the way it did.

4. Explicit Change Attribution

Every update tracks who initiated it (user, AI, or system). You can see if drift was user-requested or AI-suggested.

This builds trust in the system.

The Tools I Added to AI’s Toolkit

I added two new tools to the AI’s available functions:

Tool 1: classify_intent_alignment

The AI is now required to call this tool before file operations. It can’t skip evaluation or provide vague reasoning.

Tool 2: update_ui_intent

The reason field forces transparency. The AI can’t silently change intent. It must provide justification like:

“User requested authentication. Updating non_goals to remove ‘no_authentication’ because user explicitly needs user management features.”

This makes intent evolution transparent and deliberate.
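Declared as tool definitions, the two functions might look roughly like this (parameter names and structure are my assumption, loosely following an OpenAI-style function schema):

```python
# Sketch of the two tool declarations. Parameter names are assumptions.
TOOLS = [
    {
        "name": "classify_intent_alignment",
        "parameters": {
            "intent_alignment": {"enum": ["ALIGNED", "PARTIALLY_ALIGNED", "CONFLICTING"]},
            "violations": {"type": "array"},
            "reasoning": {"type": "string"},
            "recommended_action": {"enum": ["proceed", "adapt_request", "update_intent_first"]},
        },
    },
    {
        "name": "update_ui_intent",
        "parameters": {
            "changes": {"type": "object"},
            "reason": {"type": "string"},  # mandatory: forces a justification
        },
    },
]
```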

The Enforcement Gate Logic

The runtime validation follows a strict waterfall:

First Check: Read-Only Operations
Read file operations always proceed (they don’t modify state, so no risk).

Second Check: Classification Required
If no classification was provided, block immediately with error: “Must classify intent alignment before file operations.” No exceptions.

Third Check: ALIGNED State
If classification is ALIGNED, proceed immediately.

Fourth Check: PARTIALLY_ALIGNED State
Validate the change type. Only specific types are allowed without intent updates:

visual_refinements (colors, shadows, spacing)

layout_adjustments (grid changes, positioning)

copy_changes (text content only)

component_styling (CSS classes)

Any other change type (backend_logic, authentication, new_capabilities) is auto-reclassified as CONFLICTING and blocked.

Fifth Check: CONFLICTING State
Search recent tool calls for an update_ui_intent call. If found, the conflict was explicitly resolved and operations proceed. If not found, block with error: “Request conflicts with intent. Call update_ui_intent first.”

This gate is non-bypassable. It runs before tool execution, not after. The AI cannot generate conflicting code, even if it tries.
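The five checks above can be sketched as a single waterfall function (my reconstruction of the logic as described; function and field names are assumptions):

```python
# Reconstruction of the five-check enforcement waterfall. Names are assumed.
ALLOWED_PARTIAL_CHANGES = {
    "visual_refinements", "layout_adjustments", "copy_changes", "component_styling",
}

def enforce(operation, classification, recent_tool_calls):
    # 1. Read-only operations always proceed: they can't violate intent.
    if operation["type"] == "read":
        return "proceed"
    # 2. Classification is mandatory before any write. No exceptions.
    if classification is None:
        return "blocked: must classify intent alignment before file operations"
    alignment = classification["intent_alignment"]
    # 3. Aligned requests go through with zero friction.
    if alignment == "ALIGNED":
        return "proceed"
    # 4. Partially aligned: only safe change types pass without an intent update.
    if alignment == "PARTIALLY_ALIGNED":
        if classification.get("change_type") in ALLOWED_PARTIAL_CHANGES:
            return "proceed"
        alignment = "CONFLICTING"  # auto-reclassify anything else
    # 5. Conflicting: require an explicit update_ui_intent call first.
    if any(call["name"] == "update_ui_intent" for call in recent_tool_calls):
        return "proceed"
    return "blocked: request conflicts with intent; call update_ui_intent first"
```

Note the ordering: read operations never pay classification overhead, and the gate runs before tool execution rather than after code generation.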

What I Learned Building This

1. Constraints Are Features

I thought constraints would feel restrictive. They actually feel liberating. When the AI knows exactly what’s allowed, it produces better output faster.

2. Explicitness Beats Cleverness

I could have used ML to infer violations. But explicit rules are better. They’re debuggable. They’re predictable. They’re trustworthy.

3. Auditability Matters

Being able to see why intent changed over time is incredibly valuable. It turns intent drift from a bug into a feature.

4. Structure Enables Automation

Because classification is structured JSON, I can build tools on top of it. Intent debuggers. Diff viewers. Rollback systems. Analytics.

Continue to Part 3: The Implementation

In the next post, I’ll show you how I integrated this into AskCodi, the bugs I hit along the way, and the moment when everything finally clicked.

This is Part 2 of a 5-part series on building UI Intent. I’m Sachin, founder of AskCodi, and I’m sharing how I built a system that forces AI to respect design decisions.

