Your Local AI Assistant

A powerful command-line interface and REPL for interacting with local Ollama models,
featuring tool execution, screen vision, memory management, and RAG capabilities.

Key Features

Interactive REPL

A modern terminal-based chat interface with rich Markdown support.

Slash Commands

/new                 Start a fresh session context
/history             View the list of past sessions
/model-set <name>    Switch the active Ollama model
/clear               Clear the current screen
/exit                Close the application

Non-Interactive Mode

Execute single-shot tasks directly from your shell.

CLI Args

-p "..."      Prompt to execute immediately
--rag <db>    Use a specific RAG database

# Example
ollama-agent -p "Summarize README.md" --rag docs_db

Deep RAG

Contextual Retrieval Augmented Generation using local vector stores.

Management Commands

/rag-create <name>    Initialize a new knowledge base
/rag-add <path>       Ingest a file or directory
/rag-load <name>      Activate a specific database
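Under the hood, a RAG lookup embeds the query and ranks stored chunks by vector similarity. A minimal sketch of that retrieval principle (illustrative only; ollama-agent's actual vector store and embedding model differ):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": chunk text -> precomputed embedding.
# Real embeddings would come from a model such as nomic-embed-text.
store = {
    "README: install with pipx": [0.9, 0.1, 0.0],
    "README: screen capture syntax": [0.1, 0.8, 0.2],
    "CHANGELOG: bug fixes": [0.2, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(store, key=lambda c: cosine(query_vec, store[c]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['README: install with pipx']
```

The retrieved chunks are then prepended to the prompt so the model can answer from your documents.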

Mem0 Memory

Long-term persistence powered by Mem0 and Qdrant.

The agent automatically stores and retrieves user preferences and context across sessions.

Actions

mem0_add_memory    Tool to explicitly save facts
mem0_search        Tool to recall information
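Conceptually, these two tools behave like a searchable fact store. A simplified sketch (mem0's real backend is Qdrant with semantic search, not the keyword matching used here):

```python
class MemoryStore:
    """Toy stand-in for mem0: save facts, recall them later.
    The real mem0_search is semantic; this sketch matches keywords."""

    def __init__(self):
        self.facts = []

    def add_memory(self, fact: str) -> None:
        """Analogue of mem0_add_memory: persist one fact."""
        self.facts.append(fact)

    def search(self, query: str) -> list[str]:
        """Analogue of mem0_search: return facts matching any query term."""
        terms = query.lower().split()
        return [f for f in self.facts if any(t in f.lower() for t in terms)]

mem = MemoryStore()
mem.add_memory("User prefers concise answers")
mem.add_memory("User's main project lives in ~/code")
print(mem.search("answers"))  # ['User prefers concise answers']
```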

Screen Vision

Give your agent eyes. Capture and analyze your screen contents.

Supported on Linux (X11/Wayland).

Syntax

@dp0    Capture the primary display
@dp1    Capture the second monitor

# Example prompt
"Look at @dp0 and help me fix this error message"
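How `@dpN` references might be pulled out of a prompt can be sketched with a small parser (hypothetical; this is not the tool's actual implementation):

```python
import re

# Matches display references like @dp0, @dp1, ...
DISPLAY_REF = re.compile(r"@dp(\d+)")

def extract_displays(prompt: str) -> tuple[str, list[int]]:
    """Return the prompt with display references stripped,
    plus the list of display indices to capture."""
    displays = [int(m) for m in DISPLAY_REF.findall(prompt)]
    cleaned = DISPLAY_REF.sub("", prompt).strip()
    return cleaned, displays

text, screens = extract_displays("Look at @dp0 and help me fix this error message")
print(screens)  # [0]
```

Each referenced display would then be captured and passed to the model alongside the cleaned prompt text.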

Task Automation

Define reusable workflows in YAML to automate complex queries.

Task Commands

/tasks            List available tasks
/task-run <id>    Execute a specific task
/task-create      Interactive task builder

# ~/.ollama-agent/tasks/analyze.yaml
title: Code Analysis
model: codellama
prompt: |
  Analyze the current file for security vulnerabilities
  and suggest improvements.

MCP Support

Extend capabilities with Model Context Protocol servers.

Edit ~/.ollama-agent/mcp_servers.json to add servers.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
    }
  }
}
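The shape of this file can be sanity-checked with a few lines of stdlib Python; here the example config is inlined rather than read from ~/.ollama-agent/mcp_servers.json:

```python
import json

# Same shape as the mcp_servers.json example above.
config_text = """
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
    }
  }
}
"""

config = json.loads(config_text)
for name, server in config["mcpServers"].items():
    # Each entry describes how to launch one MCP server process.
    print(name, "->", server["command"], *server["args"])
```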

Tool Execution

The agent can interact with your system to perform real-world actions.

Built-in Capabilities

execute_command    Run shell commands securely

# Example prompt
"Find all Python files in src/ modified in the last 24h"
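For a prompt like the one above, the agent generates and runs a shell command through execute_command. An equivalent lookup sketched in plain Python (using a throwaway directory in place of a real src/):

```python
import pathlib
import tempfile
import time

def recent_python_files(root: pathlib.Path, hours: float = 24.0) -> list[pathlib.Path]:
    """Files under root ending in .py modified within the last `hours`."""
    cutoff = time.time() - hours * 3600
    return [p for p in root.rglob("*.py") if p.stat().st_mtime >= cutoff]

# Demo on a temporary directory standing in for src/.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp)
    (src / "fresh.py").write_text("print('hi')\n")
    print([p.name for p in recent_python_files(src)])  # ['fresh.py']
```

In practice the agent would reach the same result with something like find's -name and -mtime filters.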

Getting Started

Requirements

Ensure you have Ollama running with a tool-capable chat model (such as gpt-oss:20b or llama3.1) and an embedding model (such as nomic-embed-text).

ollama pull gpt-oss:20b
ollama pull nomic-embed-text

Installation

pipx install git+https://github.com/arrase/ollama-agent.git

Interactive Mode

Start the chat interface to begin a session.

ollama-agent

One-off Commands

Execute a single prompt directly from your shell.

ollama-agent -p "Find large files in /var/log and summarize them"