PlanToCode
Review and merge implementation plans

PlanToCode turns tasks into structured implementation plans you can read, compare, and edit before any agent runs. Generate multiple drafts, merge the best approach, and hand off to Claude Code or your terminal with full context.

Source available on GitHub

This site is the front page for the PlanToCode repository. Browse the code, docs, and architecture from here.

BSL 1.1 · GitHub stars
The hosted app uses managed model access; bring-your-own-key (BYOK) is available only when self-hosting. Self-hosting guide
Plan review · Multi-model planning · Plan merge · Mobile plan reader · Source available (BSL)

Plan-first workflow overview

A short walkthrough from task intake to plan review, merge instructions, and execution handoff.

  • Run multiple plan drafts against the same file context
  • Merge the best ideas with explicit instructions
  • Review and edit plans on desktop or mobile
  • Hand off to Claude Code or a local terminal while logs stay in PlanToCode

Plan review before execution

Plans are artifacts you can review, edit, and approve before any agent runs. Logs and history keep changes traceable.

File-by-file plans with exact paths

Implementation plans break down changes by file and operation so scope is explicit.

Review, edit, approve

Plans can be revised, annotated, and approved on desktop or mobile. Every revision is preserved.

Execution handoff

Approved plans are handed to terminals or agent CLIs with full context while logs stay in PlanToCode.

Plan review workflow in the app

See how file discovery, multi-model planning, merge instructions, and execution handoff keep agent work transparent and traceable.


Multi-model plan drafts

Run the same task through multiple models and compare drafts before merging or execution.

  • Plan jobs include selected file contents + directory tree
  • Explicit file operations with exact paths
  • Structured plan metadata captured per draft
  • Merge prompt uses <source_plans> and <user_instructions>
  • Final plan stored alongside source drafts
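As a sketch, the merge prompt assembly might look like the following; the <source_plans> and <user_instructions> tags come from the bullet above, but the function name and the per-plan layout inside the tags are assumptions, not the repository's actual code:

```rust
// Hypothetical sketch of merge-prompt assembly. Tag names come from the
// page; the function signature and inner <plan> wrapper are illustrative.
fn build_merge_prompt(source_plans: &[String], user_instructions: &str) -> String {
    let mut prompt = String::from("<source_plans>\n");
    for (i, plan) in source_plans.iter().enumerate() {
        // Number each draft so merge instructions can reference it.
        prompt.push_str(&format!("<plan id=\"{}\">\n{}\n</plan>\n", i + 1, plan));
    }
    prompt.push_str("</source_plans>\n<user_instructions>\n");
    prompt.push_str(user_instructions);
    prompt.push_str("\n</user_instructions>");
    prompt
}
```

Keeping the drafts and instructions in separate tagged sections lets the merge model attribute each idea back to its source plan.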

Plan merge instructions

Provide merge guidance, keep source traceability, and store the merged plan alongside its inputs.

  • Source plans pulled by job ID
  • Merge instructions stored in metadata
  • File contents + directory tree add context
  • Merged plan stored alongside inputs
  • Mobile voice dictation for merge instructions

Claude Code handoff

Copy buttons format plan context for Claude Code, Cursor, or custom CLIs so execution stays grounded in the current plan.

  • Templates sourced from task model config
  • Placeholders resolved against the active plan
  • Preferred CLI settings include Claude Code, Cursor, Codex, Gemini, or custom
  • Actions keep handoff consistent with the active plan
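Resolving template placeholders against the active plan could be sketched like this; the {{placeholder}} syntax, function name, and variable names are illustrative assumptions, not the app's actual template API:

```rust
use std::collections::HashMap;

// Illustrative placeholder resolution: keys like {{plan}} in a handoff
// template are replaced with values from the active plan. The double-brace
// syntax is an assumption.
fn resolve_template(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        out = out.replace(&format!("{{{{{key}}}}}"), *value);
    }
    out
}
```

A copy button would run a resolution step like this against the active plan before placing the text on the clipboard, so the pasted context always matches what was approved.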

File discovery pipeline

A four-stage Rust workflow: LLM-assisted root selection, regex filtering, relevance scoring, and extended path finding to build a focused file set.

  • Root folder selection uses the directory tree and task prompt
  • Regex filter generates pattern groups and applies git ls-files
  • Relevance scoring chunks file contents with token estimates
  • Extended path finder expands context with file + tree data
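The four stages above could be modeled as a simple sequence; the type and function names here are illustrative, not the repository's actual Rust code:

```rust
// Sketch of the four-stage discovery pipeline as an ordered sequence.
// Names and structure are assumptions based on the stage list above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum DiscoveryStage {
    RootSelection,    // LLM picks root folders from the tree + task prompt
    RegexFilter,      // pattern groups applied over `git ls-files` output
    RelevanceScoring, // chunked file contents scored with token estimates
    ExtendedPaths,    // expand context with related file + tree data
}

fn next_stage(stage: DiscoveryStage) -> Option<DiscoveryStage> {
    use DiscoveryStage::*;
    match stage {
        RootSelection => Some(RegexFilter),
        RegexFilter => Some(RelevanceScoring),
        RelevanceScoring => Some(ExtendedPaths),
        ExtendedPaths => None, // pipeline complete: focused file set built
    }
}
```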

Plan history and logs

Plans, merge drafts, and job outputs are stored in SQLite so you can review what happened before and after execution.

  • Workflow stages stored as job records
  • Plan drafts and merges persisted per session
  • Terminal output logged alongside plan artifacts
  • Session history survives restarts
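A minimal schema sketch for this kind of persistence might look like the following; the page names SQLite and (later) a background_jobs table, but the columns shown here are assumptions for illustration:

```rust
// Hypothetical SQLite schema sketch. Only the database engine and the
// background_jobs table name come from the page; columns are assumed.
const SCHEMA: &str = "
CREATE TABLE IF NOT EXISTS background_jobs (
    id          INTEGER PRIMARY KEY,
    session_id  TEXT NOT NULL,      -- groups jobs per planning session
    stage       TEXT NOT NULL,      -- e.g. 'plan_draft', 'plan_merge'
    status      TEXT NOT NULL,      -- created/queued/.../completed
    response    TEXT,               -- job output, e.g. a plan draft
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);";
```

Storing drafts, merges, and terminal output as rows keyed by session is what lets history survive restarts and stay reviewable after execution.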

System prompts and model control

See and edit system prompts, choose models per task, and understand exactly what is sent.

  • Per-task allowed models and defaults
  • System prompts served by the server API
  • Project-level prompt overrides in project_system_prompts
  • Local key_value_store for runtime preferences

Background job monitoring

Rust job processors stream progress and state transitions to the UI while persisting job history in SQLite.

  • Created, queued, preparing, running, completed/failed/canceled
  • Streaming updates via Tauri events
  • Token usage captured per run
  • Cancel long-running jobs
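The lifecycle above can be sketched as a small state machine; the transition rules here are assumptions for illustration, not the actual job processor:

```rust
// Job lifecycle sketch matching the states listed above. Transition
// rules are assumed: forward progression plus cancel from any live state.
#[derive(Debug, Clone, Copy, PartialEq)]
enum JobState { Created, Queued, Preparing, Running, Completed, Failed, Canceled }

fn can_transition(from: JobState, to: JobState) -> bool {
    use JobState::*;
    matches!(
        (from, to),
        (Created, Queued)
            | (Queued, Preparing)
            | (Preparing, Running)
            | (Running, Completed)
            | (Running, Failed)
            // Cancellation is allowed from any non-terminal state.
            | (Created | Queued | Preparing | Running, Canceled)
    )
}
```

A guard like this is what lets the UI offer "cancel" only while a job is live and treat completed/failed/canceled as terminal.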

Optional screen recording analysis

Screen recordings can be sent to the /api/llm/video/analyze endpoint with a focus prompt and FPS hint to generate analysis summaries.

  • Multipart upload includes durationMs and framerate
  • Model format is provider/model (google/* required)
  • Usage and cost recorded per job
  • Summary stored in background_jobs response and can be applied to the task description
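Request validation for this endpoint might be sketched as follows; the endpoint path, the durationMs/framerate fields, and the google/* model requirement come from the page, while the struct shape and validation logic are assumptions:

```rust
// Hypothetical request shape for /api/llm/video/analyze. Field names
// mirror the page's durationMs/framerate; validation logic is assumed.
struct VideoAnalyzeRequest {
    model: String,        // provider/model format, e.g. "google/<model>"
    duration_ms: u64,     // sent as durationMs in the multipart upload
    framerate: f32,       // FPS hint for frame sampling
    focus_prompt: String, // what the analysis should pay attention to
}

fn validate(req: &VideoAnalyzeRequest) -> Result<(), String> {
    if !req.model.starts_with("google/") {
        return Err("video analysis requires a google/* model".into());
    }
    if req.duration_ms == 0 {
        return Err("durationMs must be positive".into());
    }
    if req.framerate <= 0.0 {
        return Err("framerate hint must be positive".into());
    }
    Ok(())
}
```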

Usage and cost ledger

Server-side usage entries and job metadata capture model usage across providers.

  • Per-job token and cost metadata
  • Provider-aware usage entries
  • Billing endpoints expose usage summaries
  • Usage history for model spend

Ready to review plans before agents run?

Download the desktop app to try multi-model planning, plan merge, and execution handoff.

Transparency and control

System prompts, source code, and self-hosting details are visible and documented.

System prompts you can read

Default prompts are stored in the repo and server database so you can inspect them and override per project in the app.

Prompt types docs ->

Source available (BSL 1.1)

The full system is on GitHub under the Business Source License so you can audit the architecture.

View GitHub repo ->

Self-hosting and BYOK

Run the server yourself to control provider routing and supply your own API keys.

Server setup guide ->

Workflow questions

Common questions about the planning pipeline, data flow, and execution handoff.

Does PlanToCode call external LLM providers?
Yes. Planning, merge, transcription, and analysis run through LLM providers. The hosted app uses managed provider access; self-hosting lets you supply your own keys.

What data leaves my machine?
Only the task prompt and the files or excerpts you select are sent. Local project state, terminal logs, and plan drafts remain in the SQLite database unless you explicitly export them.

Can I see exactly which files a plan will change?
Yes. Plans are structured around explicit file paths and operations (create, modify, delete) so you can review scope before execution.