Ollama

Open Source MIT

Run large language models locally with ease

Website
ollama.com
Difficulty
● Easy
Last check
2026-02-26

Description

Ollama allows you to run large language models such as Llama, Mistral, Gemma, and many more locally on your own machine. Installation is remarkably simple, and models can be downloaded and started with a single command. Ollama provides a REST API and is compatible with the OpenAI API, making integration into existing applications straightforward.
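The workflow above can be sketched with a few commands (the model name `llama3.2` is only an example; any model from the Ollama library works, and the commands require a running Ollama installation):

```shell
# Download and start a model with a single command (pulls it on first use)
ollama run llama3.2

# Query the native REST API on the default port 11434
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# Or use the OpenAI-compatible endpoint with existing OpenAI clients
curl http://localhost:11434/v1/chat/completions -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Hello!"}]
}'
```

Because of the OpenAI-compatible endpoint, existing applications can usually be pointed at Ollama just by changing the base URL.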

Preview

Ollama Preview

Platforms

Linux Windows macOS Docker

Replaces the following proprietary tools

ChatGPT

Self-Hosting

Ollama can be self-hosted on your own infrastructure. Visit the official documentation for installation instructions.
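A minimal self-hosted setup via Docker might look like the following sketch (using the official `ollama/ollama` image; volume name and port mapping are conventional defaults, so consult the documentation for GPU flags and production settings):

```shell
# Persist downloaded models in a named volume; expose the API on port 11434
docker run -d \
  --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull and run a model inside the container
docker exec -it ollama ollama run llama3.2
```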

Documentation →

Tags

#llm #local #language-models #rest-api #go