Show HN: RamaLama, a Python SDK for AI containers
RamaLama has released a new Python SDK for its open source system that orchestrates AI with OCI containers, letting users run large language models locally on any hardware.
ramalama-labs/ramalama-sdk

Programmable AI on any device.
Run LLMs locally on any hardware. If you can build a container you can deploy AI.
RamaLama is an open source container orchestration system that makes working with AI simple, straightforward, and familiar by using OCI containers.
RamaLama lets you add AI features to your local application while running entirely on device. It can be used on any device with a suitable container manager, such as Docker or Podman, and supports most model repositories.
Installation
Requirements
pypi
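The install command did not survive extraction. Assuming the SDK is published on PyPI under the repository's name (verify the actual package name on PyPI before running this):

```shell
pip install ramalama-sdk
```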
Quick Start
Python
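The quick-start code block did not survive extraction. Below is a minimal sketch of what usage likely looks like; only the `Model` class and `chat` method are named in this README, while the `ramalama` import path, constructor signature, and model reference are assumptions to verify against the SDK docs.

```python
def ask(question: str) -> str:
    """Run a single-shot chat against a locally served model.

    Hypothetical sketch: the `ramalama` import path, Model constructor
    signature, and model reference are assumptions; only the Model class
    and chat() method are named in this README.
    """
    from ramalama import Model  # assumed import path

    # Pull the model (here from the Ollama registry) and serve it in a
    # local OCI container, then send one prompt.
    model = Model("ollama://smollm:135m")
    return model.chat(question)
```

Calling `ask("What is an OCI container?")` would pull the model on first use and return the model's reply.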
For multi-turn conversations, the chat method accepts an additional history argument, which can also be used to set system prompts.
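A sketch of a multi-turn conversation, assuming the above. The role/content message shape is an assumption borrowed from the common OpenAI-style convention; this README only says that `chat` takes a `history` argument that can carry system prompts.

```python
def with_system_prompt(history, system_prompt):
    """Prepend a system message to a chat history (pure helper).

    The role/content dict shape is an assumption; this README only says
    chat() accepts a `history` argument usable for system prompts.
    """
    return [{"role": "system", "content": system_prompt}] + list(history)


def two_turn_demo():
    """Hypothetical two-turn conversation (sketch, not run here)."""
    from ramalama import Model  # assumed import path

    model = Model("ollama://smollm:135m")  # illustrative model reference
    history = with_system_prompt([], "You are a terse assistant.")

    first = model.chat("Name one container engine.", history=history)
    # Append the exchange so the next turn sees the full conversation.
    history += [
        {"role": "user", "content": "Name one container engine."},
        {"role": "assistant", "content": first},
    ]
    return model.chat("Name another.", history=history)
```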
Models can be pulled from a variety of sources including HuggingFace, Ollama, ModelScope, any OCI registry, local files, and any downloadable URL.
The full suite of supported prefixes can be found below.
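The prefix table did not survive extraction. The prefixes below are inferred from the sources listed above and from RamaLama's model-transport conventions; verify them against the project documentation before relying on them.

```python
# Likely source prefixes, inferred from the sources listed above.
# Verify against the RamaLama documentation; this table is illustrative.
MODEL_PREFIXES = {
    "huggingface://": "Hugging Face Hub (also hf://)",
    "ollama://": "Ollama registry",
    "modelscope://": "ModelScope (also ms://)",
    "oci://": "Any OCI registry",
    "file://": "Local file path",
    "https://": "Direct download URL",
}


def source_of(model_ref: str) -> str:
    """Return the source description for a model reference, if recognized."""
    for prefix, source in MODEL_PREFIXES.items():
        if model_ref.startswith(prefix):
            return source
    return "unknown"
```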
The Model exposes a variety of customization parameters, including base_image, which lets you customize the model container runtime. This is especially useful if you need to run inference on custom hardware that requires a specifically compiled build of llama.cpp, vLLM, etc.
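A sketch of the base_image parameter described above. The parameter name comes from this README; the import path, constructor shape, model reference, and container image reference are all illustrative assumptions.

```python
# Illustrative custom runtime image (not a real published image).
CUSTOM_BASE_IMAGE = "quay.io/example/llama-cpp-cuda:latest"


def custom_runtime_demo():
    """Run inference in a custom runtime container (sketch, not run here).

    Only the base_image parameter is named in this README; everything
    else here is an assumption.
    """
    from ramalama import Model  # assumed import path

    model = Model(
        "huggingface://TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative
        base_image=CUSTOM_BASE_IMAGE,
    )
    return model.chat("Hello")
```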
The Async model API is identical to the sync examples above.
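Since the async examples did not survive extraction, here is a sketch mirroring the sync one above. The README states only that the async API is identical to the sync API; the `AsyncModel` name and awaitable `chat` shown here are assumptions.

```python
import asyncio


async def ask_async(question: str) -> str:
    """Async variant of a single-shot chat (sketch, not run here).

    The AsyncModel name, import path, and awaitable chat() are
    assumptions; this README only says the async API mirrors the sync one.
    """
    from ramalama import AsyncModel  # assumed import path and name

    model = AsyncModel("ollama://smollm:135m")  # illustrative reference
    return await model.chat(question)
```

This would be driven from synchronous code with `asyncio.run(ask_async("What is an OCI container?"))`.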
Documentation
🚧 Other Languages
WIP
Next Steps
Repository Structure
SDKs
Support