Ollama
ollama-app · v0.23.1 • auto-updates
Get up and running with large language models locally
brew install --cask ollama-app

30-day installs: 12.4K
90-day installs: 27.2K
365-day installs: 55.4K

Install trend (chart)
Trust Score 10/10
✓ Open source (MIT) +3
✓ Active development +2
✓ Licensed +1
✓ Auto-updates +1
✓ Not deprecated +1
✓ Established (>1yr) +1
✓ Popular (>1K installs/mo) +1
Version History
- today: 0.23.0 → 0.23.1
- 2d ago: 0.22.1 → 0.23.0
- 5d ago: 0.22.0 → 0.22.1
- 7d ago: 0.21.2 → 0.22.0
- 9d ago: 0.18.0 → 0.21.2
Review
Mar 10, 2026

Ollama enables users to run large language models locally, offering a powerful tool for developers and data scientists. It supports various models and hardware, including AMD GPUs, making it versatile for different computing needs.
Runs large language models locally on your machine.
Maturity: The project is mature and actively maintained, as evidenced by its strong GitHub presence and frequent updates.
Community: Community sentiment is positive, with notable mentions on Hacker News regarding AMD GPU support and new features. Reddit threads cover integrations with other tools and technical questions, indicating a growing niche community.
Pros
- Enables local running of large language models for privacy and bandwidth efficiency.
- Supports multiple models and hardware, including AMD GPUs, broadening its accessibility.
- Active development and strong community support enhance reliability and future potential.
Cons
- Niche appeal, primarily targeting developers and data scientists familiar with local AI setups.
- Setup and management of models may be complex for less technical users.
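The model setup the Cons mention usually amounts to pulling a model and, optionally, customizing it with a Modelfile before running it. A minimal sketch, assuming the base model `llama3.2` from the Ollama library (the model name, parameters, and system prompt here are illustrative, not taken from this page):

```
# Modelfile — defines a custom model on top of a pulled base model
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant."

# Build and run the custom model (shell):
#   ollama create my-assistant -f Modelfile
#   ollama run my-assistant
```

Once the app is installed, `ollama pull <model>` fetches a model and `ollama run <model>` starts an interactive session; the Modelfile step is only needed when you want to bake in parameters or a system prompt.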