Show HN: Use-AI - Easily add AI automation to React apps


Use-AI is a new React client/framework, posted on Hacker News, that lets developers easily integrate AI automation into their application frontends. It aims to simplify letting an AI control the user interface.


A React client/framework for easily enabling AI to control your users' frontend.


meetsmore/use-ai


@use-ai


Demo video

Table of Contents

Overview

TodoList.tsx

index.tsx

Installation

Frontend

Server

The use-ai server coordinates between your frontend and AI providers. Choose one of the following methods:

Using docker run:
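A sketch of what this might look like (the image name, port, and environment variables here are assumptions; check the repository for the published image and its required configuration):

```sh
# Illustrative only: image name, port, and env vars may differ.
docker run -d \
  -p 8000:8000 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  ghcr.io/meetsmore/use-ai-server:latest
```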

Using docker-compose:

Create a docker-compose.yml file:
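An illustrative compose file, under the same assumptions about image name and configuration:

```yaml
# Illustrative docker-compose.yml; image name and env vars are assumptions.
services:
  use-ai-server:
    image: ghcr.io/meetsmore/use-ai-server:latest
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```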

Then run:
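With Compose v2 this is:

```sh
docker compose up -d
```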

If you want to integrate the server into your existing application:

See Server > UseAIServer for programmatic usage.

Quick Start

Define your component, and call useAI with some tools.
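A minimal sketch of what this could look like, assuming the useAI/defineTool API described later in this README (exact signatures may differ from the real package):

```tsx
import { useState } from "react";
import { useAI, defineTool } from "@meetsmore-oss/use-ai-client";
import { z } from "zod";

export function TodoList() {
  const [todos, setTodos] = useState<string[]>([]);

  useAI({
    id: "todo-list",
    // Tell the LLM the current component state in a text-friendly way.
    prompt: `The todo list contains: ${todos.join(", ") || "(empty)"}`,
    tools: {
      addTodo: defineTool({
        description: "Add a todo item to the list",
        parameters: z.object({ text: z.string() }),
        execute: ({ text }) => setTodos((prev) => [...prev, text]),
      }),
    },
  });

  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo}>{todo}</li>
      ))}
    </ul>
  );
}
```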

Run the server (see Installation > Server for more options):
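For example, via Docker (image name is an assumption):

```sh
docker run -p 8000:8000 -e OPENAI_API_KEY=$OPENAI_API_KEY \
  ghcr.io/meetsmore/use-ai-server:latest
```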

Start your frontend:
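With whatever dev command your app uses, e.g.:

```sh
npm run dev
```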

Example

If you just want to play with a working example:
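Something like the following should work (the package manager and script names are assumptions; check the repo's contributing docs):

```sh
# Hypothetical commands; the repo may use a different package manager.
git clone https://github.com/meetsmore/use-ai.git
cd use-ai
npm install
npm run dev
```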

Visit http://localhost:3000 to see some examples of use-ai in action.
The example app code is in apps/example.

How it works

(Architecture diagram)

Why?

You can get a large amount of power from use-ai, even by only implementing a handful of tools.
This is partly because use-ai supports MultiTool calls, so the LLM can ask to batch execute tools in one generation step, which the frontend can then do all at once.

For example, with our todo list example, a single request like "delete all completed todos" or "add the ingredients for tonkotsu ramen" can already be fulfilled in one shot. Even with only add, delete, and toggle, you can already unlock quite a lot of power.

Because the tools are all client-side, we don't need to worry about auth for the MCP tools: we are only doing things that the client-side application can already do, since we're invoking client-side code.

📦 Structure

Features

General

AG-UI Protocol

@use-ai partially implements the AG-UI protocol for communication between @meetsmore-oss/use-ai-client and @meetsmore-oss/use-ai-server.

Not all aspects of the AG-UI protocol are implemented yet, but feel free to open a PR to add any parts of the protocol you need.

There are some minor extensions to the protocol:

Message Types:

Client

useAI hook

The fundamental building block for adding AI capabilities to any React component:
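A sketch of the hook in isolation, under the same assumptions about option names as the Quick Start:

```tsx
import { useAI, defineTool } from "@meetsmore-oss/use-ai-client";
import { z } from "zod";

function Counter({ count, setCount }: { count: number; setCount: (n: number) => void }) {
  useAI({
    id: "counter",
    prompt: `The counter value is ${count}.`,
    tools: {
      increment: defineTool({
        description: "Increase the counter by one",
        parameters: z.object({}),
        execute: () => setCount(count + 1),
      }),
    },
  });
  return <span>{count}</span>;
}
```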

UseAIProvider
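Your app is typically wrapped in a UseAIProvider that points at the use-ai server. The prop names below are assumptions; see the package's exported types:

```tsx
import type { ReactNode } from "react";
import { UseAIProvider } from "@meetsmore-oss/use-ai-client";

export function App({ children }: { children: ReactNode }) {
  return (
    <UseAIProvider endpoint="http://localhost:8000">
      {children}
    </UseAIProvider>
  );
}
```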

Component State via prompt

When you call useAI, you can provide a prompt that is used to tell the LLM the state of the component in a text-friendly way.

If tools or prompt change, they will cause useAI to be re-rendered, so the LLM will always have the latest state whenever you invoke it.
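For example (a sketch; `todos` and `tools` are assumed to be defined in the component):

```tsx
// Inside a component; `todos` is assumed local state.
useAI({
  id: "todo-list",
  prompt: `There are ${todos.length} todos, ${
    todos.filter((t) => t.done).length
  } of them completed.`,
  tools,
});
```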

Returning results of a tool to the AI

While prompt is usually enough to reflect the state of a component, a tool call may not update state, or it may trigger side effects whose outcome the AI should know about.

useAI tools can return a result back to the AI:
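A sketch, assuming the tool-definition API shown elsewhere in this README; `uploadTodos` is a hypothetical side-effecting helper:

```tsx
const exportTodos = defineTool({
  description: "Export the todo list and report where it was saved",
  parameters: z.object({}),
  execute: async () => {
    const url = await uploadTodos(todos); // hypothetical side effect
    // The returned value is sent back to the AI as the tool result.
    return { savedTo: url };
  },
});
```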

Tool Definition with Zod safety

When you use defineTool, zod schemas are used to define the input arguments for the tool.
These are used for validation (to ensure the LLM didn't generate nonsense for your arguments).
The types of the callback function are also matched against the types of the zod schema, so you will get TypeScript errors if they don't match.
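A sketch of the pattern (option names are assumptions):

```tsx
const addTodo = defineTool({
  description: "Add a todo item",
  parameters: z.object({
    text: z.string().min(1),
    priority: z.enum(["low", "high"]).optional(),
  }),
  // The callback's argument type is inferred from the zod schema;
  // a mismatch here is a TypeScript error.
  execute: ({ text, priority }) =>
    setTodos((prev) => [...prev, { text, priority }]),
});
```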

MultiTool Use

LLMs can invoke multiple tools at once (return multiple tool calls in a response).
These are handled in order by useAI, but in one batch, which means that you can get bulk-editing functionality just by declaring single-item mutations.

User: "add a shopping list to make tonkotsu ramen"

The AI automatically calls addTodo multiple times for each ingredient, even though you only defined single-item operations.

Multiple Components of the same type

Use the id parameter to differentiate between component instances.

You should use something that the AI can contextually understand, rather than a randomly generated UUID.

Or use the component's id attribute:
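A sketch of both variants (`tools` and `prompt` assumed defined; exact option names may differ):

```tsx
// Two instances of the same component, distinguished by meaningful ids:
useAI({ id: "shopping-list", tools, prompt });
useAI({ id: "work-todo-list", tools, prompt });

// Or derive it from the component's own id prop:
function TodoList({ id }: { id: string }) {
  useAI({ id, tools, prompt });
  // ...
}
```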

Invisible (Provider) components

You may want to expose AI tools from structural components rather than visual ones.
A common use case for this is to provide 'global' tools that are always accessible to the AI on every page, and not bound to a specific component.

You need to tell useAI that the component will not re-render when a tool call happens, by providing the invisible: true argument.
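A sketch of a global, non-visual tool provider (`router` is a hypothetical navigation helper):

```tsx
function GlobalTools() {
  useAI({
    id: "global-navigation",
    invisible: true, // this component does not re-render on tool calls
    tools: {
      navigate: defineTool({
        description: "Navigate to a page",
        parameters: z.object({ path: z.string() }),
        execute: ({ path }) => router.push(path), // hypothetical router
      }),
    },
  });
  return null; // renders nothing
}
```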

Use enabled: false to conditionally disable the hook:
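For example (sketch; `user` and `tools` assumed defined):

```tsx
useAI({
  id: "admin-panel",
  enabled: user.isAdmin, // hook is inactive (enabled: false) for non-admins
  tools,
});
```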

Suggestions

If the user opens a brand new chat, it's helpful to give them a call-to-action prompt that they can use, to understand what they can do with your app using AI.

You can do this using the suggestions argument of useAI:
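For example (sketch):

```tsx
useAI({
  id: "todo-list",
  suggestions: [
    "Add a shopping list for tonkotsu ramen",
    "Delete all completed todos",
  ],
  tools,
});
```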

The UseAIProvider chat selects 4 random suggestions from all mounted components for display on empty chat pages; users can click one to instantly send it as a message.

confirmationRequired

For destructive operations, use confirmationRequired:
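A sketch; whether confirmationRequired is set per tool (as here) or elsewhere is an assumption:

```tsx
const deleteAllTodos = defineTool({
  description: "Delete every todo item",
  parameters: z.object({}),
  confirmationRequired: true, // ask the user in chat before executing
  execute: () => setTodos([]),
});
```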

This will try its best to get the AI to request confirmation from the user via chat before taking action.

Chat History

By default, there is locally stored chat history for up to 20 chats.

The user can switch between them and resume old chats.

If you want chats stored on the server, tied to the user's account, you can provide your own ChatRepository implementation:
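The exact ChatRepository interface is not shown in this README, so the shape below is an assumption: a minimal in-memory implementation illustrating the kind of contract (list, load, save, delete) a server-backed version would fulfil with authenticated fetch calls.

```typescript
// Assumed shape of a chat record and repository; the real interface
// exported by @meetsmore-oss/use-ai-client may differ.
interface Chat {
  id: string;
  title: string;
  messages: unknown[];
}

interface ChatRepository {
  list(): Promise<Chat[]>;
  load(id: string): Promise<Chat | undefined>;
  save(chat: Chat): Promise<void>;
  delete(id: string): Promise<void>;
}

// In-memory reference implementation; a server-backed version would
// replace each method body with a fetch to your own API.
class InMemoryChatRepository implements ChatRepository {
  private chats = new Map<string, Chat>();

  async list(): Promise<Chat[]> {
    return [...this.chats.values()];
  }
  async load(id: string): Promise<Chat | undefined> {
    return this.chats.get(id);
  }
  async save(chat: Chat): Promise<void> {
    this.chats.set(chat.id, chat);
  }
  async delete(id: string): Promise<void> {
    this.chats.delete(id);
  }
}
```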

Error Code Mapping

Errors can occur when using LLM APIs (e.g. rate limiting, overload).
These are defined internally using error codes:

On the client, you will want to show friendly errors to the user.
By default, there are reasonable messages in English, but if you need to localize them into another language, you can pass your own mapping of error codes to strings:
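The real error codes are defined internally by use-ai, so the codes below are illustrative; the pattern is a partial map of code to message with a fallback for unmapped codes.

```typescript
// Illustrative error codes; the real set is defined inside use-ai.
type ErrorCode = "rate_limited" | "overloaded" | "unknown";

const japaneseErrorMessages: Partial<Record<ErrorCode, string>> = {
  rate_limited: "リクエストが多すぎます。しばらくしてからお試しください。",
  overloaded: "AIが混み合っています。後ほどお試しください。",
};

// Fall back to a generic message for unmapped codes.
function messageFor(code: ErrorCode): string {
  return japaneseErrorMessages[code] ?? "エラーが発生しました。";
}
```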

Using the AI directly (without chat UI)

TODO: This needs to be easier, using the client currently is awkward.
User should get a similar interface to useAIWorkflow.

Custom UI

If you don't like the default UI, you can customize both the floating-action-button and the chat UI itself.

You can also disable them by passing null:
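A sketch of both customization and disabling; the prop names are assumptions:

```tsx
// Prop names are assumptions; see the repo for the real ones.
<UseAIProvider
  endpoint="http://localhost:8000"
  fab={<MyFloatingButton />}
  chat={(props) => <MyChatPanel {...props} />}
>
  {children}
</UseAIProvider>

// Disable the built-in UI entirely:
<UseAIProvider endpoint="http://localhost:8000" fab={null} chat={null}>
  {children}
</UseAIProvider>
```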

Slash Commands

Save and reuse common prompts with slash commands:
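A sketch; the shape of a slash command here is an assumption:

```tsx
<UseAIProvider
  endpoint="http://localhost:8000"
  // Assumed shape: named prompts the user can invoke with "/".
  slashCommands={[
    { name: "summarize", prompt: "Summarize the current page." },
    { name: "cleanup", prompt: "Delete all completed todos." },
  ]}
>
  {children}
</UseAIProvider>
```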

Provide custom storage with commandRepository:
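The CommandRepository contract is not shown here, so the interface below is an assumption; an in-memory sketch shows the kind of store a server-backed version would implement.

```typescript
// Assumed CommandRepository contract; the real interface may differ.
interface SlashCommand {
  name: string;
  prompt: string;
}

interface CommandRepository {
  list(): Promise<SlashCommand[]>;
  save(command: SlashCommand): Promise<void>;
  delete(name: string): Promise<void>;
}

// In-memory reference implementation.
class InMemoryCommandRepository implements CommandRepository {
  private commands = new Map<string, SlashCommand>();

  async list(): Promise<SlashCommand[]> {
    return [...this.commands.values()];
  }
  async save(command: SlashCommand): Promise<void> {
    this.commands.set(command.name, command);
  }
  async delete(name: string): Promise<void> {
    this.commands.delete(name);
  }
}
```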

File Upload

Enable file uploads in chat:
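A sketch; the prop name is an assumption:

```tsx
<UseAIProvider endpoint="http://localhost:8000" fileUpload={true}>
  {children}
</UseAIProvider>
```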

Theme Customization

Customize the chat UI appearance:
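A sketch; the theme keys are assumptions:

```tsx
<UseAIProvider
  endpoint="http://localhost:8000"
  // Theme keys are assumptions; check the exported theme type.
  theme={{ primaryColor: "#0f766e", borderRadius: "8px" }}
>
  {children}
</UseAIProvider>
```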

Internationalization

Localize UI strings:
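A sketch; the string keys are assumptions:

```tsx
<UseAIProvider
  endpoint="http://localhost:8000"
  // String keys are assumptions; see the default locale for the real keys.
  strings={{ placeholder: "メッセージを入力...", send: "送信" }}
>
  {children}
</UseAIProvider>
```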

Multi-agent Support

When multiple agents are configured, users can select which agent to use:
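A sketch; how agents are declared on the client is an assumption:

```tsx
<UseAIProvider
  endpoint="http://localhost:8000"
  // Agent list shape is an assumption.
  agents={[
    { id: "default", label: "Assistant" },
    { id: "workflow", label: "Workflow runner" },
  ]}
>
  {children}
</UseAIProvider>
```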

Server

'Batteries included' server

For most use cases, you can just use @meetsmore-oss/use-ai-server as-is, and customize only the environment variables:
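The variable names below are illustrative placeholders, not the documented ones; consult the package docs for the real list:

```sh
# Illustrative only; see the package docs for the real variable names.
OPENAI_API_KEY=sk-...
PORT=8000
```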

UseAIServer

If you want to integrate the use-ai server into your existing server, for example if you don't want to deploy another instance in your infrastructure, or you want to use some capabilities in your existing server, you can use @meetsmore-oss/use-ai-server as a library and run an instance of UseAIServer:
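A sketch of programmatic usage; the constructor options and method names are assumptions, so check the package's exported types for the real API:

```typescript
import { UseAIServer } from "@meetsmore-oss/use-ai-server";

// Constructor options and method names are assumptions.
const server = new UseAIServer({
  apiKey: process.env.OPENAI_API_KEY,
});

server.listen(8000);
```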

External MCPs

use-ai supports providing additional tools using external MCPs, defined by mcpEndpoints.
These MCP endpoints should follow the MCP protocol to return a set of tools when called.

The server will invoke these on start, with a refresh interval to reload them periodically.

To configure these in @meetsmore-oss/use-ai-server, you can use the environment variables:
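The variable names below are illustrative; the real ones are in the package docs:

```sh
# Illustrative variable names for mcpEndpoints configuration.
MCP_ENDPOINTS=https://mcp.example.com/mcp
MCP_REFRESH_INTERVAL_MS=60000
```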

If your MCP tools need auth (e.g. you want to do things on behalf of the user, in the backend), you can use the @meetsmore-oss/use-ai-client mcpHeadersProvider prop to do that:

picomatch is used for patterns, so you can use any picomatch compatible pattern.
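A sketch of the idea; the exact mcpHeadersProvider shape is an assumption, and getUserToken is a hypothetical helper:

```tsx
<UseAIProvider
  endpoint="http://localhost:8000"
  // Assumed shape: picomatch patterns mapped to header factories,
  // matched against the MCP endpoint URL.
  mcpHeadersProvider={{
    "https://mcp.example.com/**": () => ({
      Authorization: `Bearer ${getUserToken()}`, // hypothetical helper
    }),
  }}
>
  {children}
</UseAIProvider>
```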

The flow works like this:

Rate Limiting

UseAIServer supports rate limiting by IP.
This allows you to implement use-ai without auth, and just rely on rate limiting to prevent abuse of your token spend.

You can configure it using environment variables if using @meetsmore-oss/use-ai-server directly:
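The variable names below are illustrative placeholders:

```sh
# Illustrative variable names; see the package docs for the real ones.
RATE_LIMIT_PER_IP=30
RATE_LIMIT_WINDOW_MS=60000
```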

Or you can use arguments to UseAIServer:
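A sketch; the option names are assumptions:

```typescript
import { UseAIServer } from "@meetsmore-oss/use-ai-server";

// Option names are assumptions; check the exported types.
const server = new UseAIServer({
  rateLimit: { maxRequestsPerIp: 30, windowMs: 60_000 },
});
```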

Langfuse

Langfuse is an AI observability platform that provides insights into your AI usage.
The use-ai AISDKAgent supports this out of the box, just set these environment variables:
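Langfuse's SDKs conventionally read these variables (the values are placeholders):

```sh
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASEURL=https://cloud.langfuse.com
```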

Bundled Client Library (optional)

If you have dependency conflicts (e.g. zod 4.0+), you can use the bundled version of @meetsmore-oss/use-ai-client instead:
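For example (the bundled entry point's subpath is an assumption; check the package's "exports" field):

```typescript
// Import the bundled build instead of the regular entry point.
import { useAI } from "@meetsmore-oss/use-ai-client/bundled";
```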

Note that this is much larger (206 KB gzipped) than the unbundled dependency (16 KB gzipped).

Plugins

@meetsmore-oss/use-ai-server has a plugin architecture allowing you to extend the AG-UI protocol and add more handlers.

This is primarily used to avoid polluting the main library with the code used for providing workflow runners (see @meetsmore-oss/use-ai-plugin-workflows).

@meetsmore-oss/use-ai-plugin-workflows

@meetsmore-oss/use-ai-plugin-workflows provides the capability for running workflows using AI workflow engines like Dify.

Only DifyWorkflowRunner is supported for now, but you can write your own Runners very easily (feel free to open a PR).

Because it's awkward to get API keys for workflows from Dify, you can use a mapping of names -> API keys:
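A sketch of what such a mapping might look like as configuration (the variable name and JSON shape are assumptions):

```sh
# Illustrative: a JSON mapping of workflow names to Dify API keys.
DIFY_WORKFLOW_API_KEYS='{"summarize":"app-...","translate":"app-..."}'
```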

@meetsmore-oss/use-ai-plugin-mastra

@meetsmore-oss/use-ai-plugin-mastra provides a MastraWorkflowAgent that runs Mastra workflows as conversational agents.

Set the MASTRA_URL environment variable to configure the Mastra server endpoint.
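For example, pointing at a local Mastra dev server (4111 is Mastra's default dev port):

```sh
MASTRA_URL=http://localhost:4111
```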
