Sanctum vs Osaurus
Side-by-side comparison for macOS
Sanctum (AI Score 8.0): Run LLMs locally
Osaurus (AI Score 7.0): LLM server built on MLX
| Metric | Sanctum | Osaurus |
|---|---|---|
| Category | Security & Privacy | Developer Tools |
| AI Score | 8.0 | 7.0 |
| 30-day Installs | 2 | 616 |
| 90-day Installs | 6 | 2.3K |
| 365-day Installs | 44 | 4.5K |
| Version | 1.9.1 | 0.18.9 |
| Auto-updates | No | No |
| Deprecated | No | No |
| GitHub Stars | 2.9K | 4.0K |
| GitHub Forks | 318 | 163 |
| Open Issues | - | 30 |
| License | MIT | MIT |
| Language | PHP | Swift |
| Last GitHub Commit | 2mo ago | 1mo ago |
| First Seen | Oct 6, 2024 | Sep 19, 2025 |
Reviews
Sanctum
Sanctum is a macOS app that allows users to run large language models (LLMs) locally, providing privacy, offline functionality, and efficiency. It's ideal for developers and privacy-conscious users looking to harness AI capabilities without relying on cloud services.
Runs large language models locally with a focus on privacy and efficiency.
Pros
- Enables local execution of LLMs for enhanced privacy
- Efficient and lightweight design
- Open-source with active community support
Cons
- Requires technical knowledge for setup
- Limited feature set compared to cloud-based alternatives
Osaurus
Osaurus is a local LLM server for macOS that lets users run and manage AI models locally or in the cloud. It supports Apple's foundation models and works with Ollama-compatible tooling, making it well suited to developers and AI enthusiasts.
Osaurus provides a runtime environment for AI models, allowing users to run and share models locally or via the cloud.
Pros
- Compatible with Apple's foundation models and Ollama
- Supports both local and cloud-based model execution
- Built using Swift for native macOS integration
Cons
- Lacks auto-update functionality
- Past issues with performance and memory usage
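Local LLM servers in this space commonly expose an OpenAI-style `/v1/chat/completions` HTTP endpoint (the convention Ollama-compatible tools follow). As a rough illustration of how a client would talk to such a server, here is a minimal Python sketch; the port `8080` and model name are placeholder assumptions, not documented defaults of either app, so check the server's own settings before use.

```python
# Sketch: build a request for an OpenAI-style chat-completions endpoint.
# Base URL, port, and model name below are hypothetical placeholders.
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct a POST request for a /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8080", "my-local-model", "Hello!")
print(req.full_url)  # -> http://localhost:8080/v1/chat/completions
# To actually send it (requires a running server):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request format is the de-facto standard for local inference servers, the same client code can usually be pointed at whichever server is running by changing only the base URL and model name.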