Stop letting your agents work blind.

/mymir gives coding agents structured, persistent context. Tasks, dependencies, and execution records in a live knowledge base your agents reason from.

Open source · Self-hosted · Context & Project Management Tool · Agent Plugin

Most of us aren’t really writing code anymore — we’re directing agents that do. But those agents have no memory. Every session starts from zero, and engineers spend their time re-explaining what was built, why decisions were made, and what still needs to happen.

That’s not engineering. That’s babysitting.

HOW IT WORKS

From idea to implementation

mymir manages the full lifecycle. Describe your idea and mymir decomposes it into tasks with dependency edges, determines what's ready, and hands your agent the exact context it needs at each stage.

01

Brainstorm

Describe your idea in plain language. A specialized agent explores it with you — scoping features, surfacing edge cases, and shaping the vision before a single task exists.

02

Decompose

The idea breaks into implementable tasks with typed dependency edges. Each task gets a spec, category, and position in the DAG. The context network comes alive.

03

Refine

Sharpen specs with acceptance criteria, architectural decisions, and file paths. Every refinement is captured in the context network for downstream tasks.

04

Plan

Plannable tasks get implementation blueprints with build sequences. Your agent receives upstream decisions, prerequisite specs, and related work automatically.

05

Execute

Ready tasks get full execution context: upstream decisions, file paths, and acceptance criteria. Your agent walks in knowing exactly what to build and why.

06

Track

Execution records capture what was built, what changed, and why. Downstream tasks inherit this knowledge — no manual handoff, no context lost between sessions.
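The lifecycle above hinges on one rule: a task is ready when everything it depends on is done. A minimal sketch of that readiness check in TypeScript — the type and field names here are illustrative, not mymir's actual schema:

```typescript
type Status = "draft" | "planned" | "in_progress" | "done";

interface Task {
  id: string;
  title: string;
  status: Status;
}

// A typed dependency edge: `from` depends on (or relates to) `to`.
interface Edge {
  from: string;
  to: string;
  kind: "depends_on" | "relates_to";
}

// A task is ready when it is planned and every depends_on target is done.
function readyTasks(tasks: Task[], edges: Edge[]): Task[] {
  const byId = new Map(tasks.map((t): [string, Task] => [t.id, t]));
  return tasks.filter((task) => {
    if (task.status !== "planned") return false;
    return edges
      .filter((e) => e.kind === "depends_on" && e.from === task.id)
      .every((e) => byId.get(e.to)?.status === "done");
  });
}
```

With an Ingestion API marked done, a Stream Processor that depends on it becomes ready, while a Time-Series Store that depends on the Stream Processor stays blocked.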

Core Concept

Context Network

A living map of your project that captures not just what was built, but why decisions were made, what was tried and abandoned, and how different parts of the codebase relate to each other.

Tasks with specs, acceptance criteria, and status lifecycle
Typed dependency edges (depends_on, relates_to)
Decisions captured at the point they're made
Execution records that document what was built and why
File paths and implementation details per task
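Conceptually, each node in the network bundles a task's spec with the decisions and execution records attached to it. A sketch of what such a node could look like in TypeScript — the interfaces and field names are illustrative, not mymir's actual schema:

```typescript
// A decision captured at the point it was made.
interface Decision {
  summary: string; // what was decided, and why
  date: string;    // when it was made
}

// A record of what an agent actually built for a task.
interface ExecutionRecord {
  built: string;     // what was built
  changed: string[]; // files that changed
  why: string;       // rationale, including what was tried and abandoned
}

// One node in the context network.
interface TaskNode {
  id: string;
  spec: string;
  acceptanceCriteria: string[];
  files: string[];
  decisions: Decision[];
  records: ExecutionRecord[];
}

// Example node, loosely based on the Stream Processor task shown
// later on this page.
const streamProcessor: TaskNode = {
  id: "stream-processor",
  spec: "Consume events, apply window aggregations, write to the store.",
  acceptanceCriteria: ["Handle backpressure and dead-letter queue"],
  files: ["src/services/stream-processor/consumer.ts"],
  decisions: [{ summary: "Use Kafka Streams over Flink", date: "Apr 06" }],
  records: [],
};
```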

Core Concept

Context Retrieval Interface

The layer that lets agents query and use project knowledge at the right moment, so they walk into every session already knowing the story so far.

Stage-aware: plannable tasks get specs, ready tasks get full execution context
Upstream decisions, file paths, and acceptance criteria bundled automatically
Token-optimized context blocks agents can reason from directly
Critical path analysis — finds unblocked tasks on the shortest path
Auto-invocation via /mymir skill when project intent is detected
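Stage-awareness can be pictured as a small bundling step: a plannable task gets just its spec, while a ready task gets the full execution bundle. A hedged sketch, with illustrative types and field names rather than mymir's actual retrieval API:

```typescript
// Minimal node shape for this sketch (illustrative, not mymir's schema).
interface Node {
  id: string;
  spec: string;
  acceptanceCriteria: string[];
  files: string[];
  decisions: string[];
}

// Plannable tasks get just the spec; ready tasks also get upstream
// decisions, file paths, and acceptance criteria, flattened into a
// compact context block an agent can reason from directly.
function bundleContext(
  task: Node,
  upstream: Node[],
  stage: "plannable" | "ready",
): string {
  const lines = [`# ${task.id}`, task.spec];
  if (stage === "ready") {
    for (const up of upstream) {
      lines.push(...up.decisions.map((d) => `decision (${up.id}): ${d}`));
    }
    lines.push(...task.files.map((f) => `file: ${f}`));
    lines.push(...task.acceptanceCriteria.map((c) => `accept: ${c}`));
  }
  return lines.join("\n");
}
```

The same task thus yields two different context blocks depending on where it sits in the lifecycle, which is what keeps token usage proportional to what the agent actually needs at that moment.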

CAPABILITIES

How it ships

6 MCP Tools

project, task, edge, query, context, analyze — native Model Context Protocol.

3 Specialized Agents

Brainstorm shapes ideas. Decompose breaks them down. Manage tracks progress.

Auto-Invocation

The /mymir skill detects project intent. Your agent knows when to use context.

Web UI + CLI

Structure and Graph views in the browser. Full terminal workflow via Claude Code.
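On the wire, the six tools are exposed through the Model Context Protocol's standard `tools/call` JSON-RPC method. A sketch of what invoking the `task` tool might look like — the `arguments` payload here is hypothetical, not mymir's documented parameter set:

```typescript
// JSON-RPC 2.0 request shape used by MCP tool invocations.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    // The six tools this plugin exposes.
    name: "project" | "task" | "edge" | "query" | "context" | "analyze";
    arguments: Record<string, unknown>;
  };
}

// Hypothetical call: mark a task as in progress.
const call: McpToolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "task",
    arguments: { action: "update", taskId: "stream-processor", status: "in_progress" },
  },
};
```

Because the tools speak plain MCP, any MCP-capable client — not only Claude Code — can drive the same knowledge base.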

Structure meets visibility

A two-panel workspace where your task list lives alongside full detail views. Refine specs, track progress, and review execution records — all without switching contexts.

1.0 Workspace

Views: Structure · Graph
Progress: 29% (4 done · 2 active · 6 planned · 14 total)
Foundation 3/3
Data Pipeline 1/3
Query & Alerting 0/3
API & Realtime 0/2
Frontend 0/2
Quality 0/1

Task · Metrics Platform

Stream Processor

Category: Data Pipeline · Tags: kafka, real-time
Status lifecycle: Draft → Planned → In Progress → Done

Description

Implement a stream processing pipeline that consumes events from the Ingestion API, applies transformations and aggregations, and writes to the Time-Series Store.

Acceptance Criteria

Consume events from Ingestion API message queue
Apply configurable window aggregations (1m, 5m, 1h)
Write aggregated metrics to Time-Series Store
Handle backpressure and dead-letter queue

Decisions

Use Kafka Streams over Flink for lower operational overhead (Apr 06)
Tumbling windows instead of sliding — simpler semantics, sufficient for dashboard resolution (Apr 05)
DLQ writes to separate Kafka topic, not PostgreSQL — avoids coupling to main DB (Apr 06)

Relationships

depends_on · Ingestion API (done)
blocks · Time-Series Store (planned)
blocks · Query Engine (planned)

Files

src/services/stream-processor/
src/services/stream-processor/consumer.ts
src/services/stream-processor/aggregator.ts
src/services/stream-processor/dlq-handler.ts
src/config/kafka.ts

GET STARTED

Up and running in minutes

Self-hosted. Open source. Your data stays yours.

setup
$ git clone git@github.com:FrkAk/mymir.git
$ cd mymir && bun install
$ cp .env.local.example .env.local
$ docker compose up -d
$ bun run db:setup
$ bun run dev
claude code plugin
$ cd mcp && bun install
$ claude --plugin-dir ./mcp
Ready
6 MCP tools loaded
3 specialized agents ready
/mymir skill active

Built for the agent-native era

We believe everyone should have access to tools that help them build better things. Self-hosted is free and always will be. A hosted version is coming for teams who want the experience without the setup.

AGPL-3.0 · Self-Hosted · MCP Native · Claude Code Plugin

We're building mymir using mymir — everything described here is something we're living in real time.