
Akasha Docs

Everything you need to install, configure and extend Akasha.

Introduction

Akasha is a privacy-first personal assistant built around a long-running daemon. With the embedded model, you get answers without signing up or calling the cloud. You can add Ollama or OpenAI / OpenRouter when you need larger models — keys stay in your local vault.

Skills extend what the agent can do (calculator, custom workflows, …). Optional channels (Telegram, Slack, Discord) let the same daemon serve multiple interfaces.

💡
Quick start: Jump straight to Installation if you just want to get running.

Key principles

  • Local-first — data directory on your machine; no mandatory cloud account.
  • Explicit trust — tool and path access is deny-by-default until you edit policy.
  • Extensible — skills, plugins, optional channels.
  • Binary distribution — prebuilt releases on this site; the Akasha engine source is not publicly available.

What's new in v0.7.0

Multi-agent orchestration

  • Dedicated orchestrator route — add task_types.orchestrator in llm_router.yaml so decomposition uses its own model; without it, the system route is used (backward compatible).
  • Verified deliverables — for multi-step plans, the orchestrator checks that expected workspace files exist before marking a step done.
  • Deliverable safety — absolute paths and .. escapes are rejected; deliverables stay inside the allowed workspace.
  • Targeted retry — if a deliverable is missing, a focused retry can run to produce the file.
  • Plan trace — .akasha/plan_<task_id>.md in the workspace updates as the plan evolves.
  • Meta-response detection — sub-agents that reply with “ready for phase 2”-style meta text instead of real output trigger an automatic retry.
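
The deliverable-safety rule above (reject absolute paths and `..` escapes) can be sketched as follows. This is an illustrative check, not Akasha's actual code; the function name and workspace layout are assumptions:

```python
from pathlib import Path

def is_safe_deliverable(workspace: str, deliverable: str) -> bool:
    """Illustrative sketch: reject absolute paths and '..' escapes,
    keeping deliverables inside the allowed workspace."""
    p = Path(deliverable)
    if p.is_absolute() or ".." in p.parts:
        return False
    # Resolve against the workspace root and confirm the target stays inside it.
    root = Path(workspace).resolve()
    target = (root / p).resolve()
    return target == root or root in target.parents

print(is_safe_deliverable("/tmp/ws", "report.md"))     # relative, inside: True
print(is_safe_deliverable("/tmp/ws", "../escape.md"))  # '..' escape: False
print(is_safe_deliverable("/tmp/ws", "/etc/passwd"))   # absolute path: False
```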

Security & LLM robustness

  • Path policy — can_read / can_write use proper path-prefix checks (no /data vs /database confusion).
  • RAG / uploads — Tauri document upload aligned with the API (content_base64, mime_type); user RAG uses POST /api/user-rag/documents.
  • Config clamping — very large max_tokens, top_k, num_ctx, num_gpu values in llm_router.yaml are clamped cleanly (no silent truncation oddities).
  • Azure OpenAI — default max_tokens aligned to 4096 like other OpenAI-compatible providers.
  • Logging — response truncation from max_tokens is logged at WARN, not ERROR.
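
The /data vs /database confusion mentioned above comes from naive string-prefix matching. A component-wise check avoids it; this is an illustrative sketch, not Akasha's internal implementation:

```python
import os

def path_has_prefix(path: str, allowed: str) -> bool:
    """Illustrative: compare path components, not raw strings, so an
    allowed prefix of /data does not also authorize /database."""
    path_parts = os.path.normpath(path).split(os.sep)
    allowed_parts = os.path.normpath(allowed).split(os.sep)
    return path_parts[: len(allowed_parts)] == allowed_parts

print("/database/x".startswith("/data"))        # naive check: True (wrong)
print(path_has_prefix("/database/x", "/data"))  # component check: False
print(path_has_prefix("/data/x", "/data"))      # True
```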

Plugins & UI

  • Plugins — status, dynamic routing rules, and reputation reset (per plugin or all) in the UI; HTTP API below.

Earlier major themes (gateway / task OS, policy engine, observability, calendar) remain; full changelog: Releases. Comparison: Compare. Public plugin catalog: Plugins.

API, observability & plugins

When the daemon is running (default port 3876), these HTTP surfaces are useful for monitoring, tooling, and automation:

  • Timeline — inspect recent events for debugging.
  • Metrics — operational counters and health-style signals.
  • Agents — GET /api/agents — loaded agents (exact paths as in your build).

Plugin HTTP API (overview)

Typical endpoints (v0.7+):

  • GET /api/plugins — installed plugins, scores, disabled-by-reputation state.
  • POST /api/plugins/reload — reload plugin manifests without restarting the daemon.
  • POST /api/plugins/reputation/reset — body {"plugin_id":"<id>"} to reset one plugin, or empty body / omit id to reset all.
  • GET /api/plugins/routing_rules — optional ?message=… — debug dynamic routing rules.
  • POST /api/plugins/routing_rules/match — JSON body to test routing match.

These listen on 127.0.0.1 by default — not exposed to the internet unless you deliberately reverse-proxy. For day-to-day use, the TUI Router tab and akasha router metrics cover most needs; use Plugins to browse the public catalog.

💡
Authentication and exact OpenAPI docs may vary by build; treat this section as a capability overview. If you need a formal API spec for integration, request it via GitHub Issues.

Installation

Requirements

  • OS: Windows 10+, macOS 11+, or Ubuntu 20.04+
  • Embedded model by default — no external installation required for core use. Ollama and cloud providers (OpenRouter, OpenAI) are optional; configure them during akasha init if you want to use them.

One-line install

Paste one of these commands in your terminal to download, install, and configure Akasha (daemon auto-start and optional config wizard).

Windows (PowerShell):

powershell -ExecutionPolicy Bypass -c "irm https://raw.githubusercontent.com/azerothl/Akasha_app/main/scripts/get-akasha.ps1 | iex"

Linux / macOS:

curl -sSL https://raw.githubusercontent.com/azerothl/Akasha_app/main/scripts/get-akasha.sh | bash

Download (manual)

Go to the Releases page on this site or GitHub Releases (Akasha_app) — binaries are published there (public mirror). For a complete install, download akasha-full-<os> (CLI + daemon + TUI + desktop bundle + scripts). Extract to e.g. C:\Akasha or ~/Akasha.

Windows: double-click INSTALL.cmd at the root of the extracted folder to run the installer (no PowerShell execution policy prompt), or run .\scripts\setup.ps1. Linux / macOS: run chmod +x scripts/setup.sh && ./scripts/setup.sh.

You get the akasha (or akasha.exe), akasha-daemon, akasha-tui executables, a docs folder with the user guide, and in the full zip a ui subfolder with the desktop app installer.

First run (Linux / macOS)

chmod +x akasha akasha-daemon akasha-tui
./akasha init
./akasha start
./akasha tui   # terminal interface

First run (Windows)

.\akasha.exe init
.\akasha.exe start
.\akasha.exe tui   # terminal interface

Run akasha start from the folder where you extracted the archive so the in-app Doc tab can show the user guide (the daemon serves it from the docs folder in that directory).

First Run

akasha init runs an interactive setup: LLM provider (embedded, Ollama, OpenRouter, OpenAI), vault, optional channels, and optional Docker services (Ollama, TTS/STT, BitNet) if the akasha-models compose directory is found — set AKASHA_MODELS_DIR or use akasha services install --compose-dir … later. Use akasha init --defaults for minimal setup without prompts.

Then start the daemon and open the interface:

akasha start      # background daemon (auto-restart on crash)
akasha tui        # terminal UI: Chat, Scheduled, Router, Doc, Tasks, Calendar, Memory

If you have the desktop app (Akasha UI), launch it; it connects to the daemon on port 3876 (configurable via AKASHA_PORT).

Commands Reference

Daemon

akasha start              # Start daemon (auto-restart)
akasha start --foreground # Run in foreground (logs in terminal)
akasha stop               # Stop daemon

Setup & diagnostic

akasha init              # Interactive setup
akasha init --defaults   # Minimal setup, no prompts
akasha doctor            # Check system and daemon
akasha doctor --fix      # Create data_dir and minimal config if missing
akasha doctor --advice   # Diagnostic advice (daemon must be running)
akasha paths             # Show data_dir and config file paths

Vault (secrets)

akasha vault list        # List stored keys
akasha vault set KEY [value]  # Store a secret
akasha vault get KEY     # Show value (use with care)
akasha vault delete KEY  # Remove a key

Config & router

akasha config models routes   # Models per category
akasha config models set CATEGORY PROVIDER MODEL
akasha config models fetch   # Fetch Ollama model info
akasha config models get [CATEGORY]   # Show models
akasha config env list   # List env vars in akasha.env
akasha config env get KEY / set KEY [value]
akasha config provider list   # List providers in llm_router.yaml
akasha config provider set-ollama [--url URL] [--category CAT] [--model MODEL]
akasha config provider add-openai [--api-key KEY] [--category CAT] [--model MODEL]
akasha config provider add-openrouter [--api-key KEY] [--category CAT] [--model MODEL]
akasha router metrics    # LLM router metrics
akasha router discover   # Discover Ollama instances
akasha router show MODEL # Ollama model info

Plugins

akasha plugin list      # List installed plugins
akasha plugin reload    # Reload plugins (no daemon restart)
akasha plugin install PATH   # Install from directory
akasha plugin uninstall ID  # Uninstall by id
akasha plugin catalog   # Bundled / local catalog of available plugins

The Plugins page on this site lists the community catalog from the Akasha_plugins repository (always up to date with main). Use akasha plugin install with a local path or follow each plugin’s README.

Docker services (optional)

Requires Docker and the akasha-models compose folder. Set AKASHA_MODELS_DIR to that directory, or pass --compose-dir.

akasha services install [--ollama] [--voice] [--bitnet] [--all] [--compose-dir PATH]
akasha services status   # docker compose ps
akasha services stop     # docker compose down

When services are enabled, akasha services install updates llm_router.yaml / voice_router.yaml in the data directory.

Interfaces

akasha tui   # Terminal UI (Chat, Scheduled, Router, Doc, Tasks, Calendar, Memory)

Update

akasha update check    # Check for a new version; prints download URL if available
akasha update install   # Open the latest release download page in the browser

Configuration

All config files live in the data directory. Run akasha paths to see its path (e.g. C:\Users\…\akasha on Windows, ~/akasha on Linux/macOS).

Main files

  • llm_router.yaml — LLM providers (Ollama, OpenAI, OpenRouter, embedded) and models per task type.
  • tools_policy.yaml — Tool permissions: file paths, allowed commands, web search, skill-install hosts (see Tools policy).
  • akasha.env — Persistent env vars (also editable via akasha config env set).
  • connectors.env — Telegram, Slack, Discord activation (see Channels).
  • agent_profile.json — Agent profile: name, role, personality, rules. Editable in Settings → Agent profile (web UI) or by editing the file and restarting the daemon.

LLM router & orchestrator

In llm_router.yaml, task types map categories (e.g. conversation, system) to provider/model. v0.7.0 adds an optional task_types.orchestrator block: when present, the multi-agent orchestrator uses it for decomposition instead of system. If you omit it, behaviour matches older configs (fallback to system). See the engine’s llm_router.example.yaml in the shipped docs for a full example.
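
A hedged sketch of what the optional orchestrator route might look like — the exact key layout, provider names, and model ids here are placeholders; the shipped llm_router.example.yaml is the authoritative format:

```yaml
# llm_router.yaml — illustrative fragment, not a complete file
task_types:
  system:
    provider: embedded
    model: qwen3_0_6b
  orchestrator:        # optional (v0.7.0+); omitted = fallback to system
    provider: ollama
    model: llama3.2
```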

akasha init or akasha doctor --fix creates the data_dir and minimal config files if missing.

Interfaces

The TUI (akasha tui) has 7 tabs: Chat, Scheduled (planned returns — agent responses for recurring/scheduled tasks, separate from the main chat), Router (LLM metrics), Doc (user guide), Tasks, Calendar, Memory. Use Tab to switch; R to refresh Router, Scheduled, or task list; F2 to cycle themes.

The desktop app (Akasha UI, if installed) connects to the same daemon on port 3876 and adds: Settings with Agent profile (name, role, personality, rules), User RAG (My documents — add/remove documents; excerpts are used by the agent in answers), and attachments in Chat (images, PDFs, text). Use keys 1–8 to switch tabs when focus is not in an input field.

Tools policy (tools_policy.yaml)

The tools_policy.yaml file in the data directory controls what the agent can do on your machine:

  • allowed_read_paths / allowed_write_paths — Directories or files allowed for read/write. By default (empty or absent), everything is denied. Example: ["."] for current directory.
  • allowed_commands — Executable names allowed for shell commands (e.g. cargo, npm, git). Commands required by installed skills are added automatically.
  • web_search_enabled — Set to true to allow web search (Brave API) for weather, news, etc. Requires akasha vault set brave_api_key YOUR_KEY or BRAVE_API_KEY env var.
  • allowed_skill_install_hosts — Hosts allowed for skill installation (default: GitHub only). Use ["*"] to allow any HTTPS host.

If the file is missing, akasha doctor --fix creates a minimal one; edit it as needed.
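
Putting the four settings together, a minimal tools_policy.yaml might look like this — all values are illustrative, and the exact spelling of the default skill-install host is an assumption:

```yaml
# tools_policy.yaml — illustrative example; adjust paths to your setup
allowed_read_paths: ["~/projects"]
allowed_write_paths: ["~/projects/akasha-workspace"]
allowed_commands: ["git", "cargo"]
web_search_enabled: false
allowed_skill_install_hosts: ["github.com"]
```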

Channels (Telegram, Slack, Discord, Teams)

Optional connectors let the agent respond on external platforms. Store tokens in the vault and set the matching env var to 1. Activation vars are loaded from connectors.env in the data directory (created or updated by akasha init).

  • Telegram — akasha vault set telegram_bot_token YOUR_TOKEN, then AKASHA_TELEGRAM_ENABLED=1. Message the bot that matches the token (check the daemon log line bot_username). In private chats, plain text or /akasha …; in groups, /akasha@BotUsername …. Chat ID for notifications: send a message to @userinfobot and use the numeric id in AKASHA_TELEGRAM_NOTIFY_CHAT_ID. Only one process may poll Telegram per bot token (no second daemon, no webhook elsewhere). The optional startup ping may fail until you have opened the bot chat at least once.
  • Slack — Vault slack_signing_secret, AKASHA_SLACK_ENABLED=1. Point the Slack slash command to POST /channels/slack/command on your daemon URL.
  • Discord — akasha vault set discord_bot_token YOUR_TOKEN, then AKASHA_DISCORD_ENABLED=1. Prefix: !akasha <message>.
  • Microsoft Teams — AKASHA_TEAMS_ENABLED=1 once your Teams integration is configured (see the vault / deployment docs for your build).

Environment variables

Persist values in data_dir/akasha.env via akasha config env set KEY value (loaded when the daemon starts). Connector flags often live in connectors.env instead.

Daemon & UI

  • AKASHA_PORT — HTTP port (default 3876).
  • AKASHA_DATA_DIR — Config, vault, skills, DB (default %USERPROFILE%\akasha / ~/akasha).
  • AKASHA_LOG — trace | debug | info | warn | error (default info).
  • AKASHA_LANG — TUI language: a value starting with en selects English; anything else selects French. Overrides LANG / LC_ALL.
  • AKASHA_APP_BASE_URL — Site root for api/latest.json (update check). Default: this site’s GitHub Pages URL.
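
These can be set with akasha config env set or written directly into the env file. A sketch of data_dir/akasha.env with illustrative values:

```
# data_dir/akasha.env — example values only
AKASHA_PORT=3876
AKASHA_LOG=debug
AKASHA_LANG=en
```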

LLM & chat behaviour

  • AKASHA_MAX_RESPONSE_TOKENS — Max tokens per chat completion (default 4096).
  • AKASHA_MAX_CONTEXT_TOKENS — Token budget before short-term history compaction (default 8192).
  • AKASHA_SYSTEM_TASK_MAX_TOKENS — Max tokens for system tasks (summaries, extraction). Default 4096; raise to 8192 if “thinking” models return empty outputs.
  • AKASHA_LLM_TIMEOUT_SECS — Total LLM call timeout (default 300). First load of the embedded model can be slow; increase if needed.
  • AKASHA_LLM_STREAM_IDLE_SECS — Max gap between stream chunks (default 60).
  • AKASHA_LLM_FIRST_CHUNK_SECS — Max wait for the first chunk (embedded model load); defaults to min(300, AKASHA_LLM_TIMEOUT_SECS).
  • AKASHA_MAX_TOOL_ROUNDS — Agent tool loop iterations per reply (default 10).
  • AKASHA_EMBEDDED_MODEL — Embedded backend id (e.g. qwen3_0_6b; build-dependent).
  • OLLAMA_HOST — Ollama base URL (default http://localhost:11434) if not in llm_router.yaml.

Orchestration & session limits

  • AKASHA_MAX_CONCURRENT_DELEGATIONS — Parallel sub-tasks (default 15).
  • AKASHA_MAX_PARALLEL_SUBTASKS — Parallel LLM subtask workers (default 4).
  • AKASHA_MAX_COST_PER_SESSION_USD — Stop chat session when estimated LLM cost exceeds this (optional).
  • AKASHA_MAX_TOKENS_PER_SESSION — Stop session when token usage exceeds this (optional).
  • AKASHA_DEGRADED_MODE=1 — Router uses only local providers (e.g. Ollama + embedded).

Channels (also connectors.env)

  • AKASHA_TELEGRAM_ENABLED=1, AKASHA_SLACK_ENABLED=1, AKASHA_DISCORD_ENABLED=1, AKASHA_TEAMS_ENABLED=1
  • AKASHA_TELEGRAM_NOTIFY_CHAT_ID — Chat id for startup notification.

Cluster (advanced)

  • AKASHA_CLUSTER_ENABLED=1 — Multi-node mode (NATS, leader election).
  • AKASHA_NODE_ID — Node id (defaults to hostname).
  • AKASHA_NATS_TLS_CA, AKASHA_NATS_CLIENT_CERT, AKASHA_NATS_CLIENT_KEY — mTLS paths for NATS.

Services & models on disk

  • AKASHA_MODELS_DIR — Directory containing docker-compose.yml for akasha services install (Ollama, voice, BitNet).
  • HF_HOME — Hugging Face cache for embedded weights (default user cache). Point into data_dir to keep everything in one place.

API keys & integrations (common)

  • BRAVE_API_KEY or vault brave_api_key — Web search (with web_search_enabled in tools policy).
  • OPENAI_API_KEY, OPENROUTER_API_KEY — Cloud LLM (also storable in vault).
  • OPENROUTER_SITE_URL, OPENROUTER_APP_TITLE — OpenRouter dashboard metadata.
  • AKASHA_MESSAGE_WEBHOOK_URL — Outbound webhook for the message tool.

Debug & power users

  • AKASHA_AGENT_DEBUG=1 — Verbose agent logging.
  • AKASHA_LOG_LLM_RESPONSE=1 — Log full LLM text at info level.
  • AKASHA_TOOLS_JOURNAL_PATH — Append-only log path for file write tool calls.
  • AKASHA_PLAYWRIGHT_RUNNER — Path to Playwright runner for browser automation.
  • AKASHA_PLAYWRIGHT_AUTO_INSTALL=0 — Disable auto npm install / browser download.
  • AKASHA_GRAPH_EXPAND=1 — Memory graph expansion (when supported).
  • AKASHA_VAULT_MASTER_KEY — Master key for file-based vault (if used).

💡
Windows PowerShell: $env:AKASHA_TELEGRAM_ENABLED="1" · CMD: set AKASHA_TELEGRAM_ENABLED=1

Troubleshooting

  • Doc tab is empty — Start the daemon from the folder where you extracted the archive (the one that contains the docs folder).
  • Empty responses with a “thinking” model — Some cloud models return empty content for system tasks. Increase AKASHA_SYSTEM_TASK_MAX_TOKENS (e.g. akasha config env set AKASHA_SYSTEM_TASK_MAX_TOKENS 8192) and restart the daemon.
  • Daemon won’t start — Run akasha doctor and akasha doctor --fix to create missing data_dir and config files.
  • No responses or timeouts — First load of the embedded model can take several minutes (download + CPU). Increase AKASHA_LLM_TIMEOUT_SECS / AKASHA_LLM_FIRST_CHUNK_SECS if needed. Check akasha doctor or /embedded. Point HF_HOME to a disk with enough space for model cache.
  • Ollama / cloud — Ensure Ollama is running or API keys are in the vault; akasha config models routes and the Router tab show active models.
  • Download link 404 on GitHub — Binaries are mirrored to Akasha_app releases after each upstream release. If a version has notes but no zip files yet, wait for the Update Release Notes workflow to finish or open an issue.

Using Skills

Ask the agent in chat to install a skill from a URL, e.g. “Install the calculator skill from https://github.com/azerothl/Akasha_skills/tree/main/skills/calculator”. The agent will download it and reload skills.

To allow other hosts (GitLab, custom HTTPS), edit tools_policy.yaml in the data_dir and set allowed_skill_install_hosts (e.g. ["*"] for any HTTPS).

If you add or change skill files manually in the data_dir, type /skills reload in the chat to reload without restarting the daemon.

💡
Browse the Skills Library for install URLs and descriptions.

Creating your own skill

If you want to create a skill for your own use (e.g. hosted elsewhere or only in your data_dir), the daemon only needs a valid SKILL.md. The format below is what Akasha expects so the agent can discover and use your skill.

Required file: SKILL.md

  • Filename: SKILL.md (uppercase).
  • Frontmatter (YAML between ---): Required: name (identifier, e.g. my-skill), description (short text: when the agent should use this skill; used for routing). Optional: license, compatibility, metadata (e.g. version: "1.0").
  • Body: Markdown with instructions for the agent (when to use the skill, which tools to call, guidelines, examples). See the Agent Skills specification.

skill.json (for the gallery / catalog)

If you publish the skill in the Skills Library or another tool that reads the catalog, add a skill.json with: id, name, version, description, author, category, tags, icon (e.g. Lucide name), featured, install_url, install_command.
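
A sketch of such a skill.json with the fields listed above — all values here are hypothetical placeholders, and the expected formats of install_url / install_command are assumptions based on the install flow described in this guide:

```json
{
  "id": "my-skill",
  "name": "My Skill",
  "version": "1.0.0",
  "description": "Does X for the user.",
  "author": "you",
  "category": "utilities",
  "tags": ["example"],
  "icon": "calculator",
  "featured": false,
  "install_url": "https://github.com/you/your-skills/tree/main/skills/my-skill",
  "install_command": "Install the skill from https://github.com/you/your-skills/tree/main/skills/my-skill"
}
```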

For a user-only skill (files in data_dir/skills/ or install via URL without listing in the gallery), only SKILL.md is required; the daemon does not need skill.json.

Optional directory structure

references/ (extra docs), scripts/, assets/ — see What are skills?. When installing from GitHub, the daemon can fetch the repo tree; for other hosts, only the file at the install URL (usually SKILL.md) is downloaded.

Installing your skill

Via agent: Ask in chat, e.g. “Install the skill from https://…” The URL must point to a raw SKILL.md or a path where SKILL.md is available.

Manual: Copy your skill folder into data_dir/skills/<name>/, then type /skills reload in chat. For non-GitHub hosts, set allowed_skill_install_hosts in tools_policy.yaml (see Configuration).

Minimal SKILL.md example

---
name: my-skill
description: Use when the user wants to do X. Do Y and Z.
---

# My Skill

When the user asks for X, use the tools A and B. Return the result in format …

Slash commands (in chat)

In Chat (TUI or web), lines starting with / are commands (not sent to the LLM). /help or /? prints the authoritative list for your build. Reference below:

  • /help, /? — List all slash commands.
  • /task create "message" — Create a task (same as sending that message to the agent).
  • /schedule create NAME INTERVAL_SEC "description" — Recurring schedule (interval in seconds).
  • /schedule delete SCHEDULE_ID — Remove a schedule.
  • /stop TASK_ID, /cancel TASK_ID — Cancel a running or queued task.
  • /newsession — New session: clear short-term chat context.
  • /status — Daemon status.
  • /doctor — Health checks (Ollama, vault, embedded model, etc.).
  • /advice — Diagnostic help using RAG + LLM (daemon must be up).
  • /embedded — Embedded local model status (loaded or not).
  • /embedded reload — Unload embedded weights; next request reloads.
  • /metrics — LLM router metrics.
  • /models — All models from configured providers.
  • /models list — Primary + fallback model per category.
  • /models set CATEGORY PROVIDER MODEL — Set category model (e.g. /models set conversation ollama llama3.2); previous primary becomes fallback; saved to llm_router.yaml.
  • /routes — Same routing view as /models list.
  • /config list — Keys in akasha.env.
  • /config get KEY — Show one env value.
  • /config set KEY value — Set persistent env (restart may be needed).
  • /vault list — Vault key names only (no values).
  • /plugins — Installed plugins.
  • /reload — Reload plugins without daemon restart.
  • /skills, /skills list — Installed skills (name + description).
  • /skills reload — Rescan skill folders after add/edit.
  • /skills uninstall <name> — Remove a skill by id.
  • /restart — Restart daemon (when using the supervisor).

Vault: add or delete secrets only via CLI — akasha vault set KEY [value], akasha vault delete KEY — or API DELETE /api/vault with body {"key":"…"}. No slash command writes vault values.

Update API

Akasha checks the following endpoint on startup (if auto-update is enabled):

GET https://azerothl.github.io/Akasha_app/api/latest.json

Response format

{
  "version": "v0.7.0",
  "release_date": "2026-04-02",
  "release_notes_url": "https://azerothl.github.io/Akasha_app/releases.html",
  "download_url": "https://github.com/azerothl/Akasha_app/releases/tag/v0.7.0",
  "changelog": "…",
  "min_supported_version": "0.5.0"
}

Version comparison

Akasha uses semantic versioning (semver). An update is triggered when version in the response is greater than the installed version.

Run akasha update check to see if a newer version is available; the command prints the download link.
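
Akasha's exact comparison lives inside the binary, but the ordering described above can be sketched as follows. This is a hypothetical helper, not Akasha's code; it handles plain x.y.z versions with an optional leading v (as in the update API response) and ignores pre-release tags:

```python
def semver_key(version: str) -> tuple:
    """Parse 'v0.7.0' or '0.7.0' into a comparable (major, minor, patch) tuple."""
    return tuple(int(part) for part in version.lstrip("v").split("."))

def update_available(installed: str, latest: str) -> bool:
    """True when the advertised version is strictly greater than the installed one."""
    return semver_key(latest) > semver_key(installed)

print(update_available("0.6.2", "v0.7.0"))  # True
print(update_available("0.7.0", "v0.7.0"))  # False
```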

Community

This showcase site and its releases are hosted in the Akasha_app repository. You can report issues, suggest improvements, or contribute to the site there.