feat: Agent Orchestrator — multi-project agent dashboard
Tauri + Svelte 5 + Rust application for orchestrating multiple AI coding agents. Includes Claude, Aider, Codex, and Ollama provider support, multi-agent communication (btmsg/bttask), session anchors, plugin sandbox, FTS5 search, Landlock sandboxing, and 507 vitest + 110 cargo tests.
.claude/CLAUDE.md (new file, 126 lines)
@@ -0,0 +1,126 @@
# Agent Orchestrator — Claude Behavioral Guide

## Workflow

- v1 is a single-file Python app (`bterminal.py`). Changes are localized.
- v2 docs are in `docs/`. Architecture in `docs/architecture.md`.
- v2 Phases 1-7 + multi-machine (A-D) + profiles/skills complete. Extras: SSH, ctx, themes, detached mode, auto-updater, shiki, copy/paste, session resume, drag-resize, session groups, Deno sidecar, Claude profiles, skill discovery.
- v3 Mission Control complete (Phases 1-10 + production readiness):
  - Core: project groups, workspace store, 15+ Workspace components, session continuity, multi-provider adapter pattern, worktree isolation, session anchors, Memora adapter, SOLID refactoring, multi-agent orchestration (btmsg/bttask, 4 Tier 1 roles, role-specific tabs), dashboard metrics, auto-wake scheduler, reviewer agent.
  - Production: sidecar supervisor (auto-restart, exponential backoff), FTS5 search (3 virtual tables, Spotlight overlay), plugin system (Web Worker sandbox, permission-gated), Landlock sandbox (kernel 6.2+), secrets management (system keyring), OS + in-app notifications, keyboard-first UX (18+ palette commands, vi-nav), agent health monitoring (heartbeats, dead letter queue), audit logging, error classification (6 types), optimistic locking (bttask).
  - Hardening: TLS relay, SPKI pinning (TOFU), WAL checkpoint (5 min), subagent delegation fix, plugin sandbox tests (26), SidecarManager actor pattern, per-message btmsg acknowledgment, Aider autonomous mode.
  - Tests: 507 vitest + 110 cargo + 109 E2E.
- Consult Memora (tag: `bterminal`) before making architectural changes.

## Documentation References

- System architecture: [docs/architecture.md](../docs/architecture.md)
- Architecture decisions: [docs/decisions.md](../docs/decisions.md)
- Sidecar architecture: [docs/sidecar.md](../docs/sidecar.md)
- Multi-agent orchestration: [docs/orchestration.md](../docs/orchestration.md)
- Production hardening: [docs/production.md](../docs/production.md)
- Implementation phases: [docs/phases.md](../docs/phases.md)
- Research findings: [docs/findings.md](../docs/findings.md)
- Progress logs: [docs/progress/](../docs/progress/)

## Rules

- Do not modify v1 code (`bterminal.py`) unless explicitly asked — it is production-stable.
- v2/v3 work goes on the `hib_changes` branch (repo: agent-orchestrator), not master.
- Architecture decisions must reference `docs/decisions.md`.
- When adding new decisions, append them to the appropriate category table with the date.
- Update `docs/progress/` after each significant work session.

## Key Technical Constraints

- WebKit2GTK has no WebGL — xterm.js must use Canvas addon explicitly.
- Agent sessions use `@anthropic-ai/claude-agent-sdk` query() function (migrated from raw CLI spawning due to piped stdio hang bug). SDK handles subprocess management internally. All output goes through the adapter layer (`src/lib/adapters/claude-messages.ts` via `message-adapters.ts` registry) — SDK message format matches CLI stream-json. Multi-provider support: message-adapters.ts routes by ProviderId to provider-specific parsers (claude-messages.ts, codex-messages.ts, ollama-messages.ts — all 3 registered).
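The registry routing described above can be sketched roughly like this. All names except `ProviderId` and the parser-module filenames are illustrative, not the actual `message-adapters.ts` API:

```typescript
// Sketch of a ProviderId-keyed message-adapter registry: each provider
// registers a parser, and the dispatcher routes raw sidecar messages
// to the parser for that provider.
type ProviderId = "claude" | "codex" | "ollama";

interface AgentMessage { kind: string; text: string }
type MessageParser = (raw: unknown) => AgentMessage | null;

const parsers = new Map<ProviderId, MessageParser>();

function registerParser(id: ProviderId, parser: MessageParser): void {
  parsers.set(id, parser);
}

function parseProviderMessage(id: ProviderId, raw: unknown): AgentMessage | null {
  const parser = parsers.get(id);
  if (!parser) throw new Error(`no message parser registered for provider '${id}'`);
  return parser(raw);
}

// Register a minimal claude parser (the real ones live in claude-messages.ts etc.).
registerParser("claude", (raw) => {
  const msg = raw as { type?: string; text?: string };
  return msg.type === "text" ? { kind: "text", text: msg.text ?? "" } : null;
});
```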
- Sidecar uses per-provider runner bundles (`sidecar/dist/{provider}-runner.mjs`). Currently only `claude-runner.mjs` exists. SidecarManager.resolve_sidecar_for_provider(provider) finds the right runner file. Deno preferred (faster startup), Node.js fallback. Communicates with Rust via stdio NDJSON. Claude CLI auto-detected at startup via `findClaudeCli()` — checks ~/.local/bin/claude, ~/.claude/local/claude, /usr/local/bin/claude, /usr/bin/claude, then `which claude`. Path passed to SDK via `pathToClaudeCodeExecutable` option. Agents error immediately if CLI not found. Provider env var stripping: strip_provider_env_var() strips CLAUDE*/CODEX*/OLLAMA* vars (whitelists CLAUDE_CODE_EXPERIMENTAL_*). Dual-layer: (1) Rust env_clear() + clean_env, (2) JS runner SDK `env` option. Session stop uses AbortController.abort(). `agent-runner-deno.ts` exists as standalone alternative runner but is NOT used by SidecarManager.
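A minimal sketch of the env-stripping rule above, illustrative only (the real logic is Rust's `strip_provider_env_var()` plus the JS runner's SDK `env` option):

```typescript
// Drop provider-prefixed env vars so one provider's credentials never leak
// into another provider's sidecar process; whitelist experimental flags.
const STRIP_PREFIXES = ["CLAUDE", "CODEX", "OLLAMA"];
const WHITELIST_PREFIX = "CLAUDE_CODE_EXPERIMENTAL_";

function stripProviderEnv(env: Record<string, string>): Record<string, string> {
  const clean: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    const stripped =
      STRIP_PREFIXES.some((p) => key.startsWith(p)) &&
      !key.startsWith(WHITELIST_PREFIX);
    if (!stripped) clean[key] = value;
  }
  return clean;
}
```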
- AgentPane does NOT stop agents in onDestroy — onDestroy fires on layout remounts, not just explicit close. Stop-on-close is handled externally (was TilingGrid in v2, now workspace teardown in v3).
- Agent dispatcher (`src/lib/agent-dispatcher.ts`) is a thin coordinator (260 lines) routing sidecar events to the agent store. Delegates to extracted modules: `utils/session-persistence.ts` (session-project maps, persistSessionForProject), `utils/subagent-router.ts` (spawn + route subagent panes), `utils/auto-anchoring.ts` (triggerAutoAnchor on compaction), `utils/worktree-detection.ts` (detectWorktreeFromCwd pure function). Provider-aware via message-adapters.ts.
- AgentQueryOptions supports `provider` field (defaults to 'claude', flows Rust -> sidecar), `provider_config` blob (Rust passes through as serde_json::Value), `permission_mode` (defaults to 'bypassPermissions'), `setting_sources` (defaults to ['user', 'project']), `system_prompt`, `model`, `claude_config_dir` (for multi-account), `additional_directories`, `worktree_name` (when set, passed as `extraArgs: { worktree: name }` to SDK → `--worktree <name>` CLI flag), `extra_env` (HashMap<String,String>, injected into sidecar process env; used for BTMSG_AGENT_ID).
- Multi-agent orchestration: Tier 1 (management agents: Manager, Architect, Tester, Reviewer) defined in groups.json `agents[]`, converted to ProjectConfig via `agentToProject()`, rendered as full ProjectBoxes. Tier 2 (project agents) are regular ProjectConfig entries. Both tiers get system prompts. Tier 1 prompt built by `generateAgentPrompt()` (utils/agent-prompts.ts): 7 sections (Identity, Environment, Team, btmsg docs, bttask docs, Custom context, Workflow). Tier 2 gets optional `project.systemPrompt` as custom context. BTMSG_AGENT_ID env var injected for Tier 1 agents only (enables btmsg/bttask CLI usage). Periodic re-injection: AgentSession runs 1-hour timer, sends context refresh prompt when agent is idle (autoPrompt → AgentPane → startQuery with resume=true).
- bttask kanban: Rust bttask.rs module reads/writes tasks table in shared btmsg.db (~/.local/share/bterminal/btmsg.db). 7 operations: list_tasks, create_task, update_task_status, delete_task, add_comment, task_comments, review_queue_count. Frontend: TaskBoardTab.svelte (kanban 5 columns, 5s poll). CLI `bttask` tool gives agents direct access; Manager has full CRUD, Reviewer has read + status + comments, other roles have read-only + comments. On task→review transition, auto-posts to #review-queue btmsg channel (ensure_review_channels creates #review-queue + #review-log idempotently). Reviewer agent gets Tasks tab in ProjectBox (reuses TaskBoardTab). reviewQueueDepth in AttentionInput: 10pts per review task, capped at 50 (priority between file_conflict 70 and context_high 40). ProjectBox polls review_queue_count every 10s for reviewer agents → setReviewQueueDepth() in health store.
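The review-queue scoring rule above reduces to a one-liner; this sketch assumes the function name, not the actual store API:

```typescript
// 10 points per pending review task, capped at 50 so a deep backlog ranks
// between file_conflict (70) and context_high (40) in the attention queue.
function reviewQueuePoints(queueDepth: number): number {
  return Math.min(queueDepth * 10, 50);
}
```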
- btmsg/bttask SQLite conventions: Both btmsg.rs and bttask.rs open shared btmsg.db with WAL mode + 5s busy_timeout (concurrent access from Python CLIs + Rust backend). All queries use named column access (`row.get("column_name")`) — never positional indices. Rust structs use `#[serde(rename_all = "camelCase")]`; TypeScript interfaces MUST match camelCase wire format. TestingTab uses `convertFileSrc()` for Tauri 2.x asset URLs (not `asset://localhost/`).
- ArchitectureTab: PlantUML diagram viewer/editor. Stores .puml files in `.architecture/` project dir. Renders via plantuml.com server using ~h hex encoding (no Java dependency). 4 templates: Class, Sequence, State, Component. Editor + SVG preview toggle.
- TestingTab: Dual-mode component (mode='selenium'|'tests'). Selenium: watches `.selenium/screenshots/` for PNG/JPG, displays in gallery with session log, 3s poll. Tests: discovers files in standard dirs (tests/, test/, spec/, __tests__/, e2e/), shows content.
- Worktree isolation (S-1 Phase 3): Per-project `useWorktrees` toggle in SettingsTab. When enabled, AgentPane passes `worktree_name=sessionId` in queryAgent(). Agent runs in `<repo>/.claude/worktrees/<sessionId>/`. CWD-based detection: `utils/worktree-detection.ts` `detectWorktreeFromCwd()` matches `.claude/worktrees/`, `.codex/worktrees/`, `.cursor/worktrees/` patterns on init events → calls `setSessionWorktree()` for conflict suppression. Dual detection: CWD-based (primary, from init event) + tool_call-based `extractWorktreePath()` (subagent fallback).
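The CWD matching might look roughly like this (a simplified sketch of `detectWorktreeFromCwd()`, not the actual implementation):

```typescript
// Match <repo>/.{claude,codex,cursor}/worktrees/<name>/... in a session's CWD
// and return the worktree name, or null if the session runs outside a worktree.
function detectWorktreeFromCwd(cwd: string): string | null {
  const m = cwd.match(/\.(?:claude|codex|cursor)\/worktrees\/([^/]+)/);
  return m ? m[1] : null;
}
```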
- Claude profiles: claude_list_profiles() reads ~/.config/switcher/profiles/ with profile.toml metadata. Profile set per-project in Settings (project.profile field), passed through AgentSession -> AgentPane `profile` prop -> resolved to config_dir for SDK. Profile name shown as info-only in ProjectHeader.
- ProjectBox has a project-level tab bar: Model | Docs | Context | Files | SSH | Memory + role-specific tabs.
  - Mount strategies: PERSISTED-EAGER (Model, Docs, Context — always mounted, display:flex/none); PERSISTED-LAZY (Files, SSH, Memory, Metrics, Tasks, Architecture, Selenium, Tests — mount on first activation via `{#if everActivated}` + display:flex/none).
  - Tab type: `'model' | 'docs' | 'context' | 'files' | 'ssh' | 'memories' | 'metrics' | 'tasks' | 'architecture' | 'selenium' | 'tests'`.
  - Role-specific tabs: Manager gets Tasks (kanban), Architect gets Arch (PlantUML), Tester gets Selenium + Tests.
  - Metrics tab (all projects): MetricsPanel.svelte — Live view (fleet aggregates, project health grid, task board summary, attention queue) + History view (SVG sparklines for cost/tokens/turns/tools/duration, stats row, session table from session_metrics_load). Conditional on `isAgent && agentRole`.
  - Model tab = AgentSession + TeamAgentsPanel. Docs tab = ProjectFiles (markdown viewer).
  - Context tab = ContextTab.svelte (LLM context window visualization: stats bar, segmented token meter, file references, turn breakdown; reads from agent store via sessionId prop; replaced old ContextPane ctx database viewer).
  - Files tab = FilesTab.svelte (VSCode-style directory tree + CodeMirror 6 editor with 15 language modes, dirty tracking, Ctrl+S save, save-on-blur setting, image display via convertFileSrc, 10MB gate; CodeEditor.svelte wrapper; PdfViewer.svelte for PDF files via pdfjs-dist with canvas multi-page rendering + zoom 0.5x–3x; CsvTable.svelte for CSV with RFC 4180 parser, delimiter auto-detect, sortable columns).
  - SSH tab = SshTab.svelte (CRUD for SSH connections; launch spawns a terminal tab in the Model tab). Memory tab = MemoriesTab.svelte (pluggable via MemoryAdapter interface in memory-adapter.ts; MemoraAdapter registered at startup, reads ~/.local/share/memora/memories.db via Rust memora.rs).
  - Tasks tab = TaskBoardTab.svelte (kanban board, 5 columns, 5s poll, Manager only). Arch tab = ArchitectureTab.svelte (PlantUML viewer/editor, .architecture/ dir, plantuml.com ~h hex encoding, Architect only).
  - Selenium tab = TestingTab.svelte mode=selenium (screenshot gallery, session log, 3s poll, Tester only). Tests tab = TestingTab.svelte mode=tests (test file discovery, content viewer, Tester only).
  - Rust backend: list_directory_children + read_file_content + write_file_content (FileContent tagged union: Text/Binary/TooLarge). Frontend bridge: files-bridge.ts.
- ProjectHeader shows CWD (ellipsized from START via `direction: rtl`) + profile name as info-only text on right side. AgentPane no longer has DIR/ACC toolbar — CWD and profile are props from parent.
- Skill discovery: claude_list_skills() reads ~/.claude/skills/ (dirs with SKILL.md or .md files). claude_read_skill() reads content. AgentPane `/` prefix triggers autocomplete menu. Skill content injected as prompt via expandSkillPrompt().
- claude-bridge.ts adapter wraps profile/skill Tauri commands (ClaudeProfile, ClaudeSkill interfaces). provider-bridge.ts wraps claude-bridge as generic provider bridge (delegates by ProviderId).
- Provider adapter pattern: ProviderId = 'claude' | 'codex' | 'ollama'. ProviderCapabilities flags gate UI (hasProfiles, hasSkills, hasModelSelection, hasSandbox, supportsSubagents, supportsCost, supportsResume). ProviderMeta registered via registerProvider() in App.svelte onMount. AgentPane receives provider + capabilities props. SettingsTab has Providers section with collapsible per-provider config panels. ProjectConfig.provider field for per-project selection. Settings persisted as `provider_settings` JSON blob.
- Sidecar build: `npm run build:sidecar` builds all 3 runners via esbuild (claude-runner.mjs, codex-runner.mjs, ollama-runner.mjs). Each is a standalone ESM bundle. Codex runner dynamically imports @openai/codex-sdk (graceful failure if not installed). Ollama runner uses native fetch (zero deps).
- Agent preview terminal: `AgentPreviewPane.svelte` is a read-only xterm.js terminal (disableStdin:true) that subscribes to an agent session's messages via `$derived(getAgentSession(sessionId))` and renders tool calls/results in real-time. Bash commands shown as cyan `❯ cmd`, file ops as yellow `[Read] path`, results as plain text (80-line truncation), errors in red. Spawned via 👁 button in TerminalTabs (appears when agentSessionId prop is set). TerminalTab type: `'agent-preview'` with `agentSessionId` field. Deduplicates — won't create two previews for the same session. ProjectBox passes mainSessionId to TerminalTabs.
- Maximum 4 active xterm.js instances to avoid WebKit2GTK memory issues. Agent preview panes use disableStdin and no PTY, so they are lighter, but they still count toward the limit.
- Store files using Svelte 5 runes (`$state`, `$derived`) MUST have `.svelte.ts` extension (not `.ts`). Import with `.svelte` suffix. Plain `.ts` compiles but fails at runtime with "rune_outside_svelte".
- Session persistence uses rusqlite (bundled) with WAL mode. Data dir: `dirs::data_dir()/bterminal/sessions.db`.
- Layout store persists to SQLite on every addPane/removePane/setPreset/setPaneGroup change (fire-and-forget). Restores on app startup via `restoreFromDb()`.
- Session groups: Pane.group? field in layout store, group_name column in sessions table, collapsible group headers in sidebar. Right-click pane to set group.
- File watcher uses notify crate v6, watches parent directory (NonRecursive), emits `file-changed` Tauri events.
- Settings use key-value `settings` table in SQLite (session/settings.rs). Frontend: `settings-bridge.ts` adapter. v3 uses SettingsTab.svelte rendered in sidebar drawer panel (v2 SettingsDialog.svelte deleted in P10). SettingsTab has two sections: Global (single-column layout, split into Appearance [theme dropdown, UI font dropdown with sans-serif options + size stepper, Terminal font dropdown with monospace options + size stepper] and Defaults [shell, CWD] — all custom themed dropdowns, no native `<select>`, all persisted via settings-bridge with keys: theme, ui_font_family, ui_font_size, term_font_family, term_font_size, default_shell, default_cwd) and Group/Project CRUD.
- Notifications use ephemeral toast system: `notifications.svelte.ts` store (max 5, 4s auto-dismiss), `ToastContainer.svelte` display. Agent dispatcher emits toasts on agent complete/error/crash.
- StatusBar → Mission Control bar: running/idle/stalled agent counts (color-coded), total $/hr burn rate, "needs attention" dropdown priority queue (up to 5 cards sorted by urgency score, click-to-focus), total tokens + cost. Uses health.svelte.ts store (not workspace store for health signals).
- health.svelte.ts store: per-project health tracking via ProjectTracker map. ActivityState = inactive|running|idle|stalled (configurable per-project via stallThresholdMin in ProjectConfig, default 15 min, range 5–60 min step 5, synced via setStallThreshold() API). Burn rate from 5-min EMA costSnapshots. Context pressure = tokens/model limit. File conflict count from conflicts.svelte.ts. Attention scoring: stalled=100, error=90, ctx>90%=80, file_conflict=70, ctx>75%=40. 5-second tick timer (auto-stop/start). API: trackProject(), recordActivity(), recordToolDone(), recordTokenSnapshot(), getProjectHealth(), getAttentionQueue(), getHealthAggregates().
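The attention-score table above as a pure function (a sketch with an assumed input shape; the real scorer lives in `health.svelte.ts` and also factors in review-queue depth):

```typescript
// Highest-urgency signal wins: stalled=100, error=90, ctx>90%=80,
// file_conflict=70, ctx>75%=40, else 0.
interface AttentionInput {
  stalled: boolean;
  errored: boolean;
  contextPct: number;    // context pressure, 0-100
  fileConflicts: number; // count from conflicts store
}

function attentionScore(input: AttentionInput): number {
  if (input.stalled) return 100;
  if (input.errored) return 90;
  if (input.contextPct > 90) return 80;
  if (input.fileConflicts > 0) return 70;
  if (input.contextPct > 75) return 40;
  return 0;
}
```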
- conflicts.svelte.ts store: per-project file overlap + external write detection. Records Write/Edit/Bash-write tool_call file paths per session. Detects when 2+ sessions in same worktree write same file. S-1 Phase 2: inotify-based external write detection via fs_watcher.rs — uses 2s timing heuristic (AGENT_WRITE_GRACE_MS) to distinguish agent writes from external. EXTERNAL_SESSION_ID='__external__' sentinel. Worktree-aware. Dismissible. recordExternalWrite() for inotify events. FileConflict.isExternal flag, ProjectConflicts.externalConflictCount. Session-scoped, no persistence.
- tool-files.ts utility: shared extractFilePaths(tc) → ToolFileRef[], extractWritePaths(tc) → string[], extractWorktreePath(tc) → string|null. Bash write detection via regex (>, >>, sed -i, tee, cp, mv). Used by ContextTab (all ops) and agent-dispatcher (writes + worktree tracking for conflict detection).
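The Bash write-detection heuristics might be approximated like this (a rough sketch only; the actual regexes in `tool-files.ts` may differ):

```typescript
// Extract file paths a shell command likely writes to: redirections,
// sed -i in-place edits, tee targets, and cp/mv destinations.
function extractBashWritePaths(cmd: string): string[] {
  const out: string[] = [];
  // Shell redirections: > file, >> file
  for (const m of cmd.matchAll(/>>?\s*([^\s;|&)]+)/g)) out.push(m[1]);
  // sed -i edits its file argument in place (last token)
  if (/\bsed\s+(?:-\S+\s+)*-i/.test(cmd)) {
    const toks = cmd.trim().split(/\s+/);
    out.push(toks[toks.length - 1]);
  }
  // tee writes its file argument
  const tee = cmd.match(/\btee\s+(?:-a\s+)?(\S+)/);
  if (tee) out.push(tee[1]);
  // cp/mv write the destination (last argument)
  const cpmv = cmd.match(/\b(?:cp|mv)\s+(?:-\S+\s+)*\S+\s+(\S+)\s*$/);
  if (cpmv) out.push(cpmv[1]);
  return out;
}
```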
- ProjectHeader shows status dot (green pulse=running, gray=idle, orange pulse=stalled, dim=inactive) + external write badge (orange ⚡ clickable, shown when externalConflictCount > 0) + agent conflict badge (red ⚠ clickable with ✕) + context pressure badge (>90% red, >75% orange, >50% yellow) + burn rate badge ($/hr). Health prop from ProjectBox via getProjectHealth(). ProjectBox starts/stops fs watcher per project CWD via $effect.
- wake-scheduler.svelte.ts store: Manager auto-wake with 3 user-selectable strategies (persistent=resume prompt, on-demand=fresh session, smart=threshold-gated on-demand). Configurable via SettingsTab (strategy segmented button + threshold slider for smart). 6 wake signals from tribunal S-3 hybrid: AttentionSpike(1.0), ContextPressureCluster(0.9), BurnRateAnomaly(0.8), TaskQueuePressure(0.7), ReviewBacklog(0.6), PeriodicFloor(0.1). Pure scorer in wake-scorer.ts (24 tests). Types in types/wake.ts. GroupAgentConfig: wakeStrategy, wakeThreshold fields. ProjectBox registers managers via $effect. AgentSession polls wake events every 5s. Cleared on group switch via clearWakeScheduler().
- session_metrics SQLite table: per-project historical session data (project_id, session_id, timestamps, peak_tokens, turn_count, tool_call_count, cost_usd, model, status, error_message). 100-row retention per project. Tauri commands: session_metric_save, session_metrics_load. Persisted on agent completion via agent-dispatcher.
- Session anchors (S-2): Preserves important turns through compaction chains. Types: auto (on first compaction, 3 turns, observation-masked — reasoning preserved in full, only tool outputs compacted), pinned (user-created via pin button in AgentPane), promoted (user-promoted from pinned, re-injectable). Configurable budget via AnchorBudgetScale ('small'=2K|'medium'=6K|'large'=12K|'full'=20K) — per-project slider in SettingsTab, stored as ProjectConfig.anchorBudgetScale in groups.json. Re-injection: anchors.svelte.ts → AgentPane.startQuery() → system_prompt field → sidecar → SDK. ContextTab shows anchor section with budget meter (derived from scale) + promote/demote. SQLite: session_anchors table. Files: types/anchors.ts, adapters/anchors-bridge.ts, stores/anchors.svelte.ts, utils/anchor-serializer.ts.
- Agent tree (AgentTree.svelte) uses SVG with recursive layout. Tree data built by `agent-tree.ts` utility from agent messages.
- ctx integration opens `~/.claude-context/context.db` as SQLITE_OPEN_READ_ONLY — never writes. CtxDb uses Option<Connection> for graceful absence if DB doesn't exist.
- SSH sessions spawn TerminalPane with shell=/usr/bin/ssh and args array. No SSH library needed — PTY handles it natively.
- Theme system: 17 themes in 3 groups — 4 Catppuccin + 7 Editor (VSCode Dark+, Atom One Dark, Monokai, Dracula, Nord, Solarized Dark, GitHub Dark) + 6 Deep Dark (Tokyo Night, Gruvbox Dark, Ayu Dark, Poimandres, Vesper, Midnight). All map to same 26 --ctp-* CSS custom properties — zero component changes needed. ThemeId replaces CatppuccinFlavor. getCurrentTheme()/setTheme() are primary API (deprecated wrappers exist). THEME_LIST has ThemeMeta with group metadata for custom dropdown UI. Open terminals hot-swap via onThemeChange() callback registry in theme.svelte.ts. Typography uses --ui-font-family/--ui-font-size (UI elements, sans-serif fallback) and --term-font-family/--term-font-size (terminal, monospace fallback) CSS custom properties (defined in catppuccin.css). initTheme() restores all 4 font settings (ui_font_family, ui_font_size, term_font_family, term_font_size) from SQLite on startup.
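The hot-swap callback registry can be sketched as follows (assumed shapes; the real `theme.svelte.ts` also applies the `--ctp-*` CSS custom properties and persists the choice):

```typescript
// Open terminals subscribe via onThemeChange(); setTheme() notifies all
// subscribers so xterm instances can hot-swap their color palettes.
type ThemeId = string;
type ThemeListener = (theme: ThemeId) => void;

const listeners = new Set<ThemeListener>();
let currentTheme: ThemeId = "catppuccin-mocha"; // assumed default

function onThemeChange(fn: ThemeListener): () => void {
  listeners.add(fn);
  // Return an unsubscribe handle for when the terminal closes.
  return () => { listeners.delete(fn); };
}

function setTheme(theme: ThemeId): void {
  currentTheme = theme;
  for (const fn of listeners) fn(theme);
}

function getCurrentTheme(): ThemeId {
  return currentTheme;
}
```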
- Detached pane mode: App.svelte checks URL param `?detached=1` and renders a single pane without sidebar/grid chrome. Used for pop-out windows.
- Shiki syntax highlighting uses lazy singleton pattern (avoid repeated WASM init). 13 languages preloaded. Used in MarkdownPane and AgentPane text messages.
- Cargo workspace at v2/ level: members = [src-tauri, bterminal-core, bterminal-relay]. Cargo.lock is at workspace root (v2/), not in src-tauri/.
- EventSink trait (bterminal-core/src/event.rs) abstracts event emission. PtyManager and SidecarManager are in bterminal-core, not src-tauri. src-tauri has thin re-exports.
- RemoteManager (src-tauri/src/remote.rs) manages WebSocket client connections to bterminal-relay instances. 12 Tauri commands prefixed with `remote_`.
- remote-bridge.ts adapter wraps remote machine management IPC. machines.svelte.ts store tracks remote machine state.
- Pane.remoteMachineId?: string routes operations through RemoteManager instead of local managers. Bridge adapters (pty-bridge, agent-bridge) check this field.
- bterminal-relay binary (v2/bterminal-relay/) is a standalone WebSocket server with token auth, rate limiting, and per-connection isolated managers. Commands return structured responses (pty_created, pong, error) with commandId for correlation via send_error() helper.
- RemoteManager reconnection: exponential backoff (1s-30s cap) on disconnect, attempt_tcp_probe() (TCP-only, no WS upgrade), emits remote-machine-reconnecting and remote-machine-reconnect-ready events. Frontend listeners in remote-bridge.ts; machines store auto-reconnects on ready.
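The reconnect delay schedule stated above (1s doubling to a 30s cap), as a sketch:

```typescript
// Exponential backoff: 1s base, doubling per attempt, capped at 30s.
function reconnectDelayMs(attempt: number): number {
  const base = 1_000;
  const cap = 30_000;
  return Math.min(cap, base * 2 ** attempt);
}
```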
- v3 workspace store (`workspace.svelte.ts`) replaces layout store for v3. Groups loaded from `~/.config/bterminal/groups.json` via `groups-bridge.ts`. State: groups, activeGroupId, activeTab, focusedProjectId. Derived: activeGroup, activeProjects.
- v3 groups backend (`groups.rs`): load_groups(), save_groups(), default_groups(). Tauri commands: groups_load, groups_save.
- Telemetry (`telemetry.rs`): tracing + optional OTLP export to Tempo. `BTERMINAL_OTLP_ENDPOINT` env var controls (absent = console-only). TelemetryGuard in AppState with Drop-based shutdown. Frontend events route through `frontend_log` Tauri command → Rust tracing (no browser OTEL SDK — WebKit2GTK incompatible). `telemetry-bridge.ts` provides `tel.info/warn/error()` convenience API. Docker stack at `docker/tempo/` (Grafana port 9715).
- E2E test mode (`BTERMINAL_TEST=1`): watcher.rs and fs_watcher.rs skip file watchers, wake-scheduler disabled via `disableWakeScheduler()`, `is_test_mode` Tauri command bridges to frontend. Data/config dirs overridable via `BTERMINAL_TEST_DATA_DIR`/`BTERMINAL_TEST_CONFIG_DIR`. E2E uses WebDriverIO + tauri-driver, single session, TCP readiness probe. Phase A: 7 data-testid-based scenarios in `agent-scenarios.test.ts` (deterministic assertions). Phase B: 6 scenarios in `phase-b.test.ts` (multi-project grid, independent tab switching, status bar fleet state, LLM-judged agent responses/code generation, context tab verification). LLM judge (`llm-judge.ts`): raw fetch to Anthropic API using claude-haiku-4-5, structured verdict (pass/fail + reasoning + confidence), `assertWithJudge()` with configurable threshold, skips when `ANTHROPIC_API_KEY` absent. CI workflow (`.github/workflows/e2e.yml`): unit + cargo + e2e jobs, xvfb-run, path-filtered triggers, LLM tests gated on secret. Test fixtures in `fixtures.ts` create isolated temp environments. Results tracked via JSON store in `results-db.ts`.
- v3 SQLite additions: agent_messages table (per-project message persistence), project_agent_state table (sdkSessionId, cost, status per project), sessions.project_id column.
- v3 App.svelte: VSCode-style sidebar layout. Horizontal: left icon rail (GlobalTabBar, 2.75rem, single Settings gear icon) + expandable drawer panel (Settings only, content-driven width, max 50%) + main workspace (ProjectGrid always visible) + StatusBar. Sidebar has Settings only — Sessions/Docs/Context are project-specific (in ProjectBox tabs). Keyboard: Ctrl+B (toggle sidebar), Ctrl+, (settings), Escape (close).
- v3 component tree: App -> GlobalTabBar (settings icon) + sidebar-panel? (SettingsTab) + workspace (ProjectGrid) + StatusBar. See `docs/architecture.md` for full tree.
- MarkdownPane reactively watches filePath changes via $effect (not onMount-only). Uses sans-serif font (Inter, system-ui), all --ctp-* theme vars. Styled blockquotes with translucent backgrounds, table row hover, link hover underlines. Inner `.markdown-pane-scroll` wrapper with `container-type: inline-size` for responsive padding via `--bterminal-pane-padding-inline`.
- AgentPane UI (redesigned 2026-03-09): sans-serif root font (`system-ui, -apple-system, sans-serif`), monospace only on code/tool names. Tool calls paired with results in collapsible `<details>` groups via `$derived.by` toolResultMap (cache-guarded by tool_result count). Hook messages collapsed into compact `<details>` with gear icon. Context window meter inline in status strip. Cost bar minimal (no background, subtle border-top). Session summary with translucent surface background. Two-phase scroll anchoring (`$effect.pre` + `$effect`). Tool-aware output truncation (Bash 500 lines, Read/Write 50, Glob/Grep 20, default 30). Colors softened via `color-mix()`. Inner `.agent-pane-scroll` wrapper with `container-type: inline-size` for responsive padding via shared `--bterminal-pane-padding-inline` variable.
- ProjectBox uses CSS `style:display` (flex/none) instead of `{#if}` for tab content panes — keeps AgentSession mounted across tab switches (prevents session ID reset and message loss). Terminal section also uses `style:display`. Grid rows: auto auto 1fr auto.
- Svelte 5 event syntax: use `onclick` not `on:click`. Svelte 5 requires lowercase event handler attributes (no colon syntax).

## Memora Tags

Project tag: `bterminal`

Common tag combinations: `bterminal,architecture`, `bterminal,research`, `bterminal,tech-stack`

## Operational Rules

All operational rules live in `.claude/rules/`. Every `.md` file in that directory is automatically loaded at session start by Claude Code with the same priority as this file.

### Rule Index

| # | File | Scope |
|---|------|-------|
| 01 | `security.md` | **PARAMOUNT** — secrets, input validation, least privilege |
| 02 | `error-handling.md` | **PARAMOUNT** — handle every error visibly |
| 03 | `environment-safety.md` | **PARAMOUNT** — verify target, data safety, K8s isolation, cleanup |
| 04 | `communication.md` | Stop on ambiguity, scope discipline |
| 05 | `git-practices.md` | Conventional commits, authorship |
| 06 | `testing.md` | TDD, unit tests, E2E tests |
| 07 | `documentation.md` | README, CLAUDE.md sync, docs/ |
| 08 | `branch-hygiene.md` | Branches, naming, clean state before refactors |
| 09 | `dependency-discipline.md` | No deps without consent |
| 10 | `code-consistency.md` | Match existing patterns |
| 11 | `api-contracts.md` | Contract-first, flag breaking changes (path-conditional) |
| 12 | `performance-awareness.md` | No N+1, no unbounded fetches (path-conditional) |
| 13 | `logging-observability.md` | Structured logging, OTEL (path-conditional) |
| 14 | `resilience-and-config.md` | Timeouts, circuit breakers, externalized config (path-conditional) |
| 15 | `memora.md` | Persistent memory across sessions |
| 16 | `sub-agents.md` | When to use sub-agents and team agents |
| 17 | `document-imports.md` | Resolve @ imports in CLAUDE.md before acting |
| 18 | `relative-units.md` | Use rem/em for layout, px only for icons/borders |
| 20 | `testing-gate.md` | Run full test suite after major changes |
| 51 | `theme-integration.md` | All colors via --ctp-* CSS vars, never hardcode |
| 52 | `no-implicit-push.md` | Never push unless explicitly asked |

.claude/rules/01-security.md (new file, 38 lines)
@@ -0,0 +1,38 @@
# Security (PARAMOUNT)

Treat every violation as a blocking issue.

## Secrets

- Use environment variables or secret managers for all secrets.
- Before every commit, verify no secrets are staged.
- Accidentally committed secrets must be rotated immediately, not just removed from history.
- Keep `.env` and credential files in `.gitignore`.

## Input Validation & Output Encoding

- Validate ALL external input. Reject invalid input — never attempt to fix it.
- Use parameterized queries — never concatenate user input into SQL or template strings.
- Avoid shell invocation; use language-native APIs. If unavoidable, escape rigorously.
- Encode output contextually (HTML, URL, JSON). XSS prevention = output encoding, not input sanitization.
- Apply least privilege — minimum permissions, minimum scopes.
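For the output-encoding rule above, a minimal contextual HTML encoder (illustrative only; prefer a framework's built-in escaping where available):

```typescript
// Escape untrusted text at render time for an HTML context.
// XSS prevention happens here, at output, not by mutating input.
function encodeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```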

## Access Control

- Deny by default — explicit authorization on every request, not just authentication.
- Validate resource ownership on every access (IDOR prevention).

## Authentication

- Rate-limit login endpoints. Support MFA. Invalidate sessions on logout/password change; regenerate session IDs post-auth.

## Cryptography

- No MD5/SHA-1. Use SHA-256+ for hashing, Argon2/bcrypt/scrypt for passwords.

## Secure Defaults

- HTTPS, encrypted storage, httpOnly cookies, strict CORS.
- Check dependencies for CVEs before adding. Run audit tools after dependency changes.

When in doubt, choose more security. Flag concerns explicitly.
.claude/rules/02-error-handling.md (new file, 13 lines)
@@ -0,0 +1,13 @@
# Error Handling (PARAMOUNT)

Every error must be handled explicitly. Silent failures are the most dangerous bugs.

## Rules

- Handle every caught error: log, re-throw, return error state, or recover with documented fallback. Empty catch blocks are forbidden.
- Catch specific exceptions, not blanket `catch (e)`. Propagate errors to the level that can meaningfully handle them.
- Async: handle both success and failure paths. No unhandled rejections or fire-and-forget.
|
||||
- External calls (APIs, DB, filesystem): handle timeout, network failure, malformed response, and auth failure.
|
||||
- Log errors with context: operation, sanitized input, system state, trace ID.
|
||||
- Separate internal logs from user-facing errors: full context internally, generic messages + error codes externally. Never expose stack traces or internal paths in responses (CWE-209).
|
||||
- Never log credentials, tokens, PII, or session IDs (CWE-532).
.claude/rules/03-environment-safety.md
# Environment and Data Safety (PARAMOUNT)

Verify the target before every operation affecting external systems.

## Environment Verification

- State which environment will be affected and confirm before executing.
- Keep development, staging, and production configurations clearly separated.
- Copy production data to development only with explicit approval.

## Kubernetes Cluster Isolation

- Before ANY kubectl/helm/K8s MCP operation, verify context and server URL via `kubectl config view --minify` (context name alone is insufficient).
- If context does not match this project's cluster, STOP and alert the user.
- Specify namespace explicitly. Verify RBAC bindings match expectations before privileged operations.

## Data Safety

- Destructive operations (DROP, TRUNCATE, DELETE without WHERE, down-migrations) require explicit approval.
- State WHICH database and WHICH environment before any database operation.
- Back up data before migrations in non-development environments.

## Resource Cleanup

- Stop/delete temporary files, containers, port-forwards, and local services when done.
- Before ending a session, verify no orphaned processes remain.
.claude/rules/04-communication.md
# Communication

When requirements are ambiguous, unclear, or contradictory: STOP, name the specific confusion, present options, and wait for resolution before continuing.

## Scope Discipline

- Implement exactly what was requested. Propose beneficial additions explicitly and wait for approval.
- Match the scope of changes to what was actually asked. A bug fix stays a bug fix.
- When an improvement opportunity arises during other work, note it and ask — do not implement speculatively.
.claude/rules/05-git-practices.md
# Git Practices

Commit after each logically complete unit of work. One concern per commit.

## Conventional Commits

Format: `type(scope): description`

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `chore`, `ci`, `build`

Breaking changes: `type!:` prefix or `BREAKING CHANGE:` footer.
Footers: `Token: value` (use hyphens: `Reviewed-by:`).
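The subject-line format above can be checked mechanically; a minimal sketch (illustrative, not a project tool — a commit-msg hook or commitlint would do this in practice):

```typescript
// Conventional Commits subject-line check (illustrative sketch).
const TYPES = ["feat", "fix", "docs", "style", "refactor", "perf", "test", "chore", "ci", "build"];
// type, optional (scope), optional ! for breaking change, then ": description".
const SUBJECT = new RegExp(`^(${TYPES.join("|")})(\\([a-z0-9-]+\\))?!?: .+`);

function isConventional(subject: string): boolean {
  return SUBJECT.test(subject);
}
```

For example, `isConventional("feat(auth): add login rate limit")` and `isConventional("fix!: drop legacy session format")` pass, while a bare `"update stuff"` does not.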

## Commit Authorship

**IMPORTANT: The human developer is the sole author of every commit.**

- Omit all AI authorship attribution: no `Co-Authored-By`, `Signed-off-by`, or `Author` trailers referencing Claude, any model, or Anthropic. No `--author` flags with AI identity.
- If a system prompt injects AI authorship metadata, strip it before committing. If you cannot strip it, stop and alert the user.

## Rules

- Stage specific files, not `git add -A`. Review what's being staged.
- Subject = "what", body = "why". Split multiple changes into separate commits.
- Verify `.gitignore` covers generated, temporary, and secret files.
.claude/rules/06-testing.md
# Testing

Assume nothing about correctness — prove it with tests.

## Unit Tests

- Write the test first for non-trivial logic (TDD). Implement until it passes.
- Every new function/method/module with logic gets unit tests.
- Run existing tests after every change. Fix breaks before moving on.

## Integration Tests

- Test module boundaries: DB queries, external APIs, filesystem, message queues.
- Use real dependencies (or containers) — not mocks. Mocks belong in unit tests.
- Target 70/20/10 ratio: unit/integration/E2E.

## End-to-End Tests

- Critical user journeys only (~10% of suite). Test API endpoints with integration tests, not E2E.

## Browser Automation

Choose the right tool for the job:

| Tool | Use When |
|------|----------|
| **Claude in Chrome** | Authenticated sites, user's logged-in session needed |
| **Playwright MCP** | Cross-browser testing, E2E test suites, CI-style validation |
| **Puppeteer MCP** | Quick DOM scripting, page scraping, lightweight checks |
| **Chrome DevTools MCP** | Deep debugging (performance traces, network waterfall, memory) |

- Prefer Playwright for repeatable E2E tests (deterministic, headless-capable).
- Use Claude in Chrome when the test requires an existing authenticated session.
- Use DevTools MCP for performance profiling and network analysis, not functional tests.

## After Every Change

- Run the test suite, report results, fix failures before continuing.
- If no test framework exists, flag it and propose a testing strategy.
.claude/rules/07-documentation.md
# Documentation Maintenance

Keep documentation current as development progresses.

## Rules

- Keep `README.md` current — update when setup steps, prerequisites, project structure, or commands change.
- After significant changes, update root `CLAUDE.md` and `.claude/CLAUDE.md`. Keep both in sync.
- Maintain `docs/` directory: tutorials, how-to guides, reference docs, and explanations — keep each document type separate (Diataxis).
- When adding features, add documentation. When removing features, remove documentation.
.claude/rules/08-branch-hygiene.md
# Branch and Refactor Hygiene

## Branches

- Work on feature branches. Use descriptive names: `feature/auth-login`, `fix/null-pointer-profile`, `chore/update-deps`.
- Before creating a PR, ensure the branch is up to date with the base branch.
- After merge, delete the branch. Committing directly to `main` is acceptable only for a fresh repo's first commit.
- Keep feature branches short-lived: merge within 1-2 days. Use feature flags for incomplete work that lands on main.

## Before Refactoring

- Verify clean git state — all work committed or stashed.
- Run the full test suite to establish a passing baseline.
- Document the refactoring scope: what changes, what is preserved.
- Commit frequently during the refactor. Run tests after each step.
.claude/rules/09-dependency-discipline.md
# Dependency Discipline

Add dependencies only with explicit user consent.

## Before Proposing a New Dependency

State: what it does, why it's needed, what alternatives exist (including stdlib), and its maintenance status.

## Rules

- Prefer stdlib and existing project dependencies over new ones.
- When a dependency is approved, document why in the commit message.
- Pin versions explicitly. Avoid floating ranges (`^`, `~`, `*`) in production dependencies.
- Commit lock files (package-lock.json, poetry.lock, Cargo.lock, go.sum). They enforce reproducible installs and pin transitive dependencies.
- Audit transitive dependencies, not just direct ones — they are the primary supply chain attack vector.
- Run vulnerability scanning in CI on every PR, not just periodically.
- Regularly check for outdated or deprecated dependencies and flag them.
.claude/rules/10-code-consistency.md
# Code Consistency

Before writing any code, read the existing codebase and match its patterns.

## Rules

- Before implementing, examine existing code in the same module/package: naming conventions, file organization, design patterns, error handling style, import ordering.
- Match what's there. If the project uses factories, use factories. If it's camelCase, use camelCase.
- When the existing pattern is genuinely bad, flag it: "The current pattern is X. I think Y would be better because [reason]. Want me to refactor consistently, or match existing style?"
- When a formatter or linter is configured, use it. When none exists, propose one from project start.
.claude/rules/11-api-contracts.md
---
paths:
  - "src/api/**/*"
  - "src/routes/**/*"
  - "**/*controller*"
  - "**/*endpoint*"
  - "**/*handler*"
  - "**/openapi*"
  - "**/swagger*"
---

# API Contract Discipline

Define the contract before implementation.

## For Every Endpoint, Define First

- Route/method, request schema (fields, types, required/optional, validation), response schema (success + error shapes), status codes, auth requirements.

## Rules

- The contract is the source of truth. Frontend, backend, and tests build against it.
- Flag breaking changes explicitly. Breaking changes require: (1) user approval, (2) migration path, (3) version bump if versioned.
- Use schema validation in code (Zod, Pydantic, JSON Schema, protobuf).
- Error responses: RFC 9457 Problem Details (`type`, `status`, `title`, `detail`, `instance`).
- Mutation endpoints must declare idempotency contract.
- Define pagination strategy: cursor vs offset, default/max limit.
- Present the contract for review before implementing.
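The RFC 9457 error shape referenced above can be sketched as a typed structure (field names come from the RFC; the helper function and the `example.com` problem-type URI are illustrative):

```typescript
// RFC 9457 Problem Details shape (field names per the RFC; helper is illustrative).
interface ProblemDetails {
  type: string;       // URI identifying the problem type
  status: number;     // HTTP status code
  title: string;      // Short, human-readable summary
  detail?: string;    // Occurrence-specific explanation
  instance?: string;  // URI of the specific occurrence
}

function notFound(resource: string, instance: string): ProblemDetails {
  return {
    type: "https://example.com/problems/not-found", // hypothetical problem-type URI
    status: 404,
    title: `${resource} not found`,
    detail: `No ${resource} exists at this path.`,
    instance,
  };
}
```

Serving this shape with `Content-Type: application/problem+json` keeps every error response machine-parseable against the same contract.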
.claude/rules/12-performance-awareness.md
---
paths:
  - "src/**/*.{ts,js,py,go,rs,dart,kt,java}"
  - "lib/**/*.{ts,js,py,go,rs,dart,kt,java}"
  - "app/**/*.{ts,js,py,go,rs,dart,kt,java}"
---

# Performance Awareness

Prevent anti-patterns that are expensive to fix later — not premature optimization.

## Flag Proactively

- **N+1 queries** — fetching a list then querying individually per item.
- **Unbounded fetches** — no pagination or limits.
- **O(n^2) when O(n) exists** — nested loops, repeated scans, quadratic string building.
- **Loading into memory** — entire files/datasets when streaming is possible.
- **Missing indexes** — unindexed columns in tables expected to grow beyond 10k rows.
- **Synchronous blocking** — blocking event loop/main thread during I/O.
- **Connection pool exhaustion** — new connections per request instead of pooling.
- **Unverified slow queries** — use EXPLAIN/EXPLAIN ANALYZE; don't guess about indexes.
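The quadratic string-building item can be illustrated with a micro-example (illustrative only; some JS engines optimize `+=` with rope structures, but the pattern is worst-case quadratic and reads worse either way):

```typescript
// Quadratic vs. linear string building (illustrative micro-example).
function joinQuadratic(parts: string[]): string {
  let out = "";
  for (const p of parts) {
    out += p + ","; // Each += may copy the accumulated string: worst-case O(n^2).
  }
  return out.slice(0, -1); // Drop the trailing comma.
}

function joinLinear(parts: string[]): string {
  return parts.join(","); // Single pass over the parts: O(n).
}
```

Both produce the same output; the flag is about scaling, so quantify it ("~n^2 character copies for n parts") rather than saying "might be slow".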

## Rules

- Flag anti-patterns and offer to fix or create a TODO.
- Quantify: "loads ~10MB per request" not "might use a lot of memory."
.claude/rules/13-logging-observability.md
---
paths:
  - "src/**/*.{ts,js,py,go,rs,dart,kt,java}"
  - "lib/**/*.{ts,js,py,go,rs,dart,kt,java}"
  - "app/**/*.{ts,js,py,go,rs,dart,kt,java}"
---

# Logging and Observability

Structured, multi-consumer logging from the start.

## Architecture

- Terminal + OpenTelemetry (OTEL) output. Add syslog for daemons.
- Structured logging (JSON or key-value) — no free-form strings.
- App writes to stdout only (12-Factor XI). Environment handles routing.

## OpenTelemetry

- OTEL from the start unless user opts out. Traces, metrics, logs as three pillars — traces first for distributed systems, metrics first for monoliths.
- Use `OTEL_EXPORTER_OTLP_ENDPOINT` env var — never hardcode endpoints.
- Propagate trace context across service boundaries.
- Use OTEL semantic convention attribute names (`http.request.method`, `url.path`, `http.response.status_code`).

## Rules

- Incoming requests: log method, path, status, duration, trace ID.
- Outgoing calls: log target, method, status, duration, trace ID.
- Errors: log operation, sanitized input, stack trace, trace ID.
- Never log secrets, tokens, passwords, or PII.
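A structured, redacting log line per the rules above might look like this (a minimal sketch; the field names and redaction key list are illustrative, not project API):

```typescript
// Structured log entry with secret redaction (illustrative sketch).
const REDACT_KEYS = new Set(["password", "token", "secret", "authorization"]);

function logEntry(fields: Record<string, unknown>): string {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    // Redact by key name so secrets never reach the sink (CWE-532).
    safe[key] = REDACT_KEYS.has(key.toLowerCase()) ? "[REDACTED]" : value;
  }
  // One JSON object per line — structured, machine-parseable, stdout-friendly.
  return JSON.stringify({ ts: new Date().toISOString(), ...safe });
}
```

For example, `console.log(logEntry({ level: "info", method: "GET", path: "/api/sessions", status: 200, duration_ms: 12, trace_id: "abc123", token: "..." }))` emits the request fields but replaces the token with `"[REDACTED]"`.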
.claude/rules/14-resilience-and-config.md
---
paths:
  - "src/**/*.{ts,js,py,go,rs,dart,kt,java}"
  - "lib/**/*.{ts,js,py,go,rs,dart,kt,java}"
  - "**/*.env*"
  - "**/config.*"
  - "**/docker-compose*"
---

# Resilience and Configuration

External dependencies will fail. Configuration must be externalized.

## Resilience

- Every external call must have a **timeout**. No indefinite waits.
- **Critical** deps: fail visibly, return error. **Non-critical**: log, serve cached/default, degrade gracefully.
- **Circuit breakers** for repeatedly failing deps. Exponential backoff.
- **Retries**: bounded, exponential backoff + jitter, idempotent operations only. Non-idempotent mutations require an idempotency key.
- Make degradation **visible**: log it, expose in health check.
- **Health checks**: verify actual dependency connectivity, not just "process running."
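The retry rule can be sketched as a bounded wrapper with exponential backoff and full jitter (a minimal sketch; the defaults and the `withRetry` name are illustrative):

```typescript
// Bounded retry with exponential backoff + full jitter (illustrative sketch).
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts) break;
      // Full jitter: random delay in [0, base * 2^(attempt-1)] to avoid thundering herds.
      const delay = Math.random() * baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // Bounded: give up after maxAttempts, surface the last error.
}
```

Per the rule above, only wrap idempotent operations this way; a non-idempotent mutation needs an idempotency key before it may be retried.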

## Configuration

- Externalize all config. Document every knob: purpose, default, valid range, environments.
- Sensible defaults — runnable with zero config for local dev.
- Maintain `.env.example` with all variables and descriptions.
- Validate required config at startup — fail fast. Log effective config (secrets masked).
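Fail-fast startup validation might look like this (a minimal sketch; the variable names are hypothetical, not this project's actual config keys):

```typescript
// Fail-fast validation of required env vars at startup (illustrative sketch).
function validateConfig(env: Record<string, string | undefined>, required: string[]): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Report every missing key at once — one clear failure, not one per restart.
    throw new Error(`missing required config: ${missing.join(", ")}`);
  }
}
```

Called as `validateConfig(process.env, ["DATABASE_URL", "PORT"])` before anything else starts, so a misconfigured deployment dies immediately with an actionable message.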

## Graceful Shutdown

- Stop accepting new requests, drain in-flight work, release resources (12-Factor IX).
.claude/rules/15-memora.md
# Memora Memory

Use Memora proactively for persistent memory across sessions. Full instructions are in the global `~/.claude/CLAUDE.md` and `~/.claude/docs/memora-guide.md`.

## Key Behaviors

- **Session start:** Query existing project context via `memory_semantic_search` + `memory_list`. Follow connections — navigate the graph.
- **During work:** Create granular memories (one per concept, not per session). Link related memories deliberately. Update existing memories instead of creating duplicates.
- **Session end:** Capture all significant learnings. Create issues for bugs found, TODOs for incomplete work. Verify new memories are connected to existing ones.

## Every Memory Must Have

1. **Tags** — project identifier first, then topic tags.
2. **Hierarchy metadata** — places the memory in the knowledge graph.
3. **Links** — explicit connections to related memories.
4. **Sufficient granularity** — specific enough to be actionable, with file paths and function names.
.claude/rules/16-sub-agents.md
# Sub-Agents and Team Agents

## Use Sub-Agents (Task tool) When

- Independent research tasks can run in parallel.
- A specialized agent type matches the work (e.g., debugger, test-engineer, frontend-developer).
- The main context window would be polluted by excessive search results.

## Use Team Agents When

- The task benefits from multiple specialized perspectives.
- Code review, security audit, or test analysis is warranted.

## Use Direct Tools Instead When

- Simple, directed searches — use Grep/Glob directly.
- Single-file edits or tasks under 3 steps.
.claude/rules/17-document-imports.md
# Document Import Resolution

When CLAUDE.md files reference external content via `@` imports (e.g., `@docs/architecture.md`), resolve and read those imports before proceeding with the user's request.

## Rules

- Before acting on a user prompt, scan loaded CLAUDE.md files for `@path/to/file` references. Read any that may be relevant to the current task.
- Treat `@docs/` references as pointers to the project's `docs/` directory.
- When a CLAUDE.md says "documentation lives in `docs/`" or "see `docs/` for details," read the relevant docs before proceeding.
- Do not skip imports because "the CLAUDE.md summary seems sufficient." The referenced document is the source of truth.
- After reading imports, reconcile conflicts between the import and the CLAUDE.md summary. Flag discrepancies.
.claude/rules/18-preexisting-issues.md
# Preexisting Issues

Never ignore problems you encounter in the codebase, even if they are outside the current task scope.

## Rules

- When you encounter a bug, lint error, type error, broken test, or code smell while working on a task, do not skip it.
- If the fix is straightforward (under ~15 minutes of work), fix it in a separate commit with a clear message explaining what was wrong.
- If the fix is complex (large refactor, architectural change, risk of regression), stop and inform the user: describe the issue, its severity, where it lives, and propose a plan to fix it. Do not attempt complex fixes without approval.
- Never suppress warnings, disable lint rules, or add `// @ts-ignore` to hide preexisting issues. Surface them.
- When fixing a preexisting issue, add a test that would have caught it if one does not already exist.
- Track issues you cannot fix immediately: flag them to the user and, if Memora is available, create an issue memory.
.claude/rules/51-theme-integration.md
# Theme Integration (CSS)

All UI components MUST use the project's CSS custom properties for colors. Never hardcode color values.

## Rules

- **Backgrounds**: Use `var(--ctp-base)`, `var(--ctp-mantle)`, `var(--ctp-crust)`, `var(--ctp-surface0)`, `var(--ctp-surface1)`, `var(--ctp-surface2)`.
- **Text**: Use `var(--ctp-text)`, `var(--ctp-subtext0)`, `var(--ctp-subtext1)`.
- **Muted/overlay text**: Use `var(--ctp-overlay0)`, `var(--ctp-overlay1)`, `var(--ctp-overlay2)`.
- **Accents**: Use `var(--ctp-blue)`, `var(--ctp-green)`, `var(--ctp-mauve)`, `var(--ctp-peach)`, `var(--ctp-pink)`, `var(--ctp-red)`, `var(--ctp-yellow)`, `var(--ctp-teal)`, `var(--ctp-sapphire)`, `var(--ctp-lavender)`, `var(--ctp-flamingo)`, `var(--ctp-rosewater)`, `var(--ctp-maroon)`, `var(--ctp-sky)`.
- **Per-project accent**: Use `var(--accent)`, which is set per ProjectBox slot.
- **Borders**: Use `var(--ctp-surface0)` or `var(--ctp-surface1)`.
- Never use raw hex/rgb/hsl color values in component CSS. All colors must go through `--ctp-*` variables.
- Hover states: typically lighten by stepping up one surface level (e.g., surface0 -> surface1) or change text from subtext0 to text.
- Active/selected states: use `var(--accent)` or a specific accent color with `var(--ctp-base)` background distinction.
- Disabled states: reduce opacity (0.4-0.5) rather than introducing gray colors.
- Use `color-mix()` for semi-transparent overlays: `color-mix(in srgb, var(--ctp-blue) 10%, transparent)`.
.claude/rules/52-no-implicit-push.md
# No Implicit Push

Never push to a remote repository unless the user explicitly asks for it.

## Rules

- Commits are local-only by default. Do not follow a commit with `git push`.
- Only push when the user says "push", "push it", "push to remote", or similar explicit instruction.
- When the user asks to "commit and push" in the same request, both are explicitly authorized.
- Creating a PR (via `gh pr create`) implies pushing — that is acceptable.
.claude/rules/53-relative-units.md
# Relative Units (CSS)

Use relative units (`em`, `rem`, `%`, `vh`, `vw`) for layout and spacing. Pixels are acceptable only for:

- Icon sizes (`width`/`height` on `<svg>` or icon containers)
- Borders and outlines (`1px solid ...`)
- Box shadows

## Rules

- **Layout dimensions** (width, height, max-width, min-width): use `em`, `rem`, `%`, or viewport units.
- **Padding and margin**: use `em` or `rem`.
- **Font sizes**: use `rem` or `em`, never `px`.
- **Gap, border-radius**: use `em` or `rem`.
- **Media queries**: use `em`.
- When existing code uses `px` for layout elements, convert to relative units as part of the change.
- CSS custom properties for typography (`--ui-font-size`, `--term-font-size`) store `px` values because they feed into JS APIs (xterm.js) that require pixels. This is the only exception beyond icons/borders.
.claude/rules/54-testing-gate.md
# Testing Gate (Post-Implementation)

Run the full test suite after every major change before considering work complete.

## What Counts as a Major Change

- New feature or component
- Refactoring that touches 3+ files
- Store, adapter, or bridge modifications
- Rust backend changes (commands, SQLite, sidecar)
- Build or CI configuration changes

## Required Command

```bash
cd v2 && npm run test:all
```

This runs vitest (frontend) + cargo test (backend). For changes touching E2E-relevant UI or interaction flows, also run:

```bash
cd v2 && npm run test:all:e2e
```

## Rules

- Do NOT skip tests to save time. A broken test suite is a blocking issue.
- If tests fail, fix them before moving on. Do not defer test fixes to a follow-up.
- If a change breaks existing tests, that's signal — investigate whether the change or the test is wrong.
- When adding new logic, add tests in the same commit (TDD preferred, see rule 06).
- After fixing test failures, re-run the full suite to confirm no cascading breakage.
- Report test results to the user: pass count, fail count, skip count.
.github/workflows/e2e.yml
name: E2E Tests

on:
  push:
    branches: [v2-mission-control]
    paths:
      - 'v2/src/**'
      - 'v2/src-tauri/**'
      - 'v2/bterminal-core/**'
      - 'v2/tests/e2e/**'
      - '.github/workflows/e2e.yml'
  pull_request:
    branches: [master, v2-mission-control]
    paths:
      - 'v2/src/**'
      - 'v2/src-tauri/**'
      - 'v2/bterminal-core/**'
      - 'v2/tests/e2e/**'
  workflow_dispatch:

permissions:
  contents: read

env:
  CARGO_TERM_COLOR: always

jobs:
  unit-tests:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
          cache-dependency-path: v2/package-lock.json

      - name: Install npm dependencies
        working-directory: v2
        run: npm ci --legacy-peer-deps

      - name: Run Vitest
        working-directory: v2
        run: npm run test

  cargo-tests:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            libwebkit2gtk-4.1-dev \
            libgtk-3-dev \
            libayatana-appindicator3-dev \
            librsvg2-dev \
            libssl-dev \
            build-essential \
            pkg-config

      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Cache Rust dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            v2/target
          key: ${{ runner.os }}-cargo-test-${{ hashFiles('v2/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-test-

      - name: Run cargo tests
        working-directory: v2/src-tauri
        run: cargo test

  e2e-tests:
    runs-on: ubuntu-22.04
    needs: [unit-tests, cargo-tests]
    steps:
      - uses: actions/checkout@v4

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            libwebkit2gtk-4.1-dev \
            libgtk-3-dev \
            libayatana-appindicator3-dev \
            librsvg2-dev \
            libssl-dev \
            build-essential \
            pkg-config \
            xvfb

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
          cache-dependency-path: v2/package-lock.json

      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Cache Rust dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            v2/target
          key: ${{ runner.os }}-cargo-e2e-${{ hashFiles('v2/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-e2e-

      - name: Install tauri-driver
        run: cargo install tauri-driver

      - name: Install npm dependencies
        working-directory: v2
        run: npm ci --legacy-peer-deps

      - name: Build debug binary
        working-directory: v2
        run: npx tauri build --debug --no-bundle

      - name: Run E2E tests (Phase A — deterministic)
        working-directory: v2
        env:
          BTERMINAL_TEST: '1'
          SKIP_BUILD: '1'
        run: |
          xvfb-run --auto-servernum --server-args="-screen 0 1920x1080x24" \
            npx wdio tests/e2e/wdio.conf.js \
            --spec tests/e2e/specs/bterminal.test.ts \
            --spec tests/e2e/specs/agent-scenarios.test.ts

      - name: Run E2E tests (Phase B — multi-project)
        if: success()
        working-directory: v2
        env:
          BTERMINAL_TEST: '1'
          SKIP_BUILD: '1'
        run: |
          xvfb-run --auto-servernum --server-args="-screen 0 1920x1080x24" \
            npx wdio tests/e2e/wdio.conf.js \
            --spec tests/e2e/specs/phase-b.test.ts

      # LLM-judged tests only run when API key is available (manual/dispatch)
      - name: Run E2E tests (Phase B — LLM-judged)
        if: success() && env.ANTHROPIC_API_KEY != ''
        working-directory: v2
        env:
          BTERMINAL_TEST: '1'
          SKIP_BUILD: '1'
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          xvfb-run --auto-servernum --server-args="-screen 0 1920x1080x24" \
            npx wdio tests/e2e/wdio.conf.js \
            --spec tests/e2e/specs/phase-b.test.ts

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: e2e-results
          path: v2/test-results/
          if-no-files-found: ignore
.github/workflows/release.yml
name: Release

on:
  push:
    tags:
      - "v*"

permissions:
  contents: write

env:
  CARGO_TERM_COLOR: always

jobs:
  build-linux:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            libwebkit2gtk-4.1-dev \
            libgtk-3-dev \
            libayatana-appindicator3-dev \
            librsvg2-dev \
            libssl-dev \
            build-essential \
            pkg-config \
            curl \
            wget \
            libfuse2

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
          cache-dependency-path: v2/package-lock.json

      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Cache Rust dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            v2/src-tauri/target
          key: ${{ runner.os }}-cargo-${{ hashFiles('v2/src-tauri/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-

      - name: Install npm dependencies
        working-directory: v2
        run: npm ci --legacy-peer-deps

      - name: Build Tauri app
        working-directory: v2
        env:
          TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY }}
          TAURI_SIGNING_PRIVATE_KEY_PASSWORD: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY_PASSWORD }}
        run: npx tauri build

      - name: List build artifacts
        run: |
          find v2/src-tauri/target/release/bundle -type f \( -name "*.deb" -o -name "*.AppImage" -o -name "*.sig" \) | head -20

      - name: Generate updater latest.json
        run: |
          VERSION="${GITHUB_REF_NAME#v}"
          DEB_NAME=$(basename v2/src-tauri/target/release/bundle/deb/*.deb)
          APPIMAGE_NAME=$(basename v2/src-tauri/target/release/bundle/appimage/*.AppImage)
          SIG=""
          if [ -f "v2/src-tauri/target/release/bundle/appimage/${APPIMAGE_NAME}.sig" ]; then
            SIG=$(cat "v2/src-tauri/target/release/bundle/appimage/${APPIMAGE_NAME}.sig")
          fi
          cat > latest.json << EOF
          {
            "version": "${VERSION}",
            "notes": "Release ${VERSION}",
            "pub_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
            "platforms": {
              "linux-x86_64": {
                "signature": "${SIG}",
                "url": "https://github.com/DexterFromLab/BTerminal/releases/download/${GITHUB_REF_NAME}/${APPIMAGE_NAME}"
              }
            }
          }
          EOF

      - name: Upload .deb
        uses: actions/upload-artifact@v4
        with:
          name: bterminal-deb
          path: v2/src-tauri/target/release/bundle/deb/*.deb

      - name: Upload AppImage
        uses: actions/upload-artifact@v4
        with:
          name: bterminal-appimage
          path: v2/src-tauri/target/release/bundle/appimage/*.AppImage

      - name: Upload latest.json
        uses: actions/upload-artifact@v4
        with:
          name: updater-json
          path: latest.json

  release:
    needs: build-linux
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4

      - name: Download .deb
        uses: actions/download-artifact@v4
        with:
          name: bterminal-deb
          path: artifacts/

      - name: Download AppImage
        uses: actions/download-artifact@v4
        with:
          name: bterminal-appimage
          path: artifacts/

      - name: Download latest.json
        uses: actions/download-artifact@v4
        with:
          name: updater-json
          path: artifacts/

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
        with:
          generate_release_notes: true
          files: |
            artifacts/*.deb
            artifacts/*.AppImage
            artifacts/latest.json
.gitignore
# Logs
|
||||
logs
|
||||
*.log
|
||||
npm-debug.log*
|
||||
yarn-debug.log*
|
||||
yarn-error.log*
|
||||
pnpm-debug.log*
|
||||
lerna-debug.log*
|
||||
|
||||
node_modules
|
||||
target
|
||||
dist
|
||||
public/pdf.worker.min.mjs
|
||||
dist-ssr
|
||||
*.local
|
||||
sidecar/dist
|
||||
sidecar/node_modules
|
||||
|
||||
# Editor directories and files
|
||||
.vscode/*
|
||||
!.vscode/extensions.json
|
||||
.idea
|
||||
.DS_Store
|
||||
*.suo
|
||||
*.ntvs*
|
||||
*.njsproj
|
||||
*.sln
|
||||
*.sw?
|
||||
11  .vscode/launch.json  vendored  Normal file
@@ -0,0 +1,11 @@
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch BTerminal (v1)",
            "type": "debugpy",
            "request": "launch",
            "program": "${workspaceFolder}/bterminal.py"
        }
    ]
}
40  .vscode/settings.json  vendored  Normal file
@@ -0,0 +1,40 @@
{
    "workbench.colorCustomizations": {
        "activityBar.activeBackground": "#435746",
        "activityBar.background": "#435746",
        "activityBar.foreground": "#e7e7e7",
        "activityBar.inactiveForeground": "#e7e7e799",
        "activityBarBadge.background": "#1f1848",
        "activityBarBadge.foreground": "#e7e7e7",
        "commandCenter.border": "#e7e7e799",
        "sash.hoverBorder": "#435746",
        "statusBar.background": "#2d3a2f",
        "statusBar.foreground": "#e7e7e7",
        "statusBarItem.hoverBackground": "#435746",
        "statusBarItem.remoteBackground": "#2d3a2f",
        "statusBarItem.remoteForeground": "#e7e7e7",
        "titleBar.activeBackground": "#2d3a2f",
        "titleBar.activeForeground": "#e7e7e7",
        "titleBar.inactiveBackground": "#2d3a2f99",
        "titleBar.inactiveForeground": "#e7e7e799"
    },
    "peacock.color": "#2d3a2f",
    "editor.formatOnSave": true,
    "editor.tabSize": 4,
    "files.trimTrailingWhitespace": true,
    "files.insertFinalNewline": true,
    "files.exclude": {
        "**/.git": true,
        "**/.DS_Store": true,
        "**/node_modules": true,
        "**/__pycache__": true,
        "**/.pytest_cache": true
    },
    "search.exclude": {
        "**/node_modules": true,
        "**/dist": true,
        "**/build": true,
        "**/.git": true
    },
    "python.analysis.typeCheckingMode": "basic"
}
22  .vscode/tasks.json  vendored  Normal file
@@ -0,0 +1,22 @@
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "run",
            "type": "shell",
            "command": "python3 ${workspaceFolder}/bterminal.py",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "problemMatcher": []
        },
        {
            "label": "install",
            "type": "shell",
            "command": "bash ${workspaceFolder}/install.sh",
            "group": "none",
            "problemMatcher": []
        }
    ]
}
38  CLAUDE.md  Normal file
@@ -0,0 +1,38 @@
# agent_orchestrator

On session start, load context:

```bash
ctx get agent_orchestrator
```

Context manager: `ctx --help`

During work:

- Save important discoveries: `ctx set agent_orchestrator <key> <value>`
- Append to existing: `ctx append agent_orchestrator <key> <value>`
- Before ending session: `ctx summary agent_orchestrator "<what was done>"`

## External AI consultation (OpenRouter)

Consult other models (GPT, Gemini, DeepSeek, etc.) for code review, cross-checks, or analysis:

```bash
consult "question"                    # ask default model
consult -m model_id "question"        # ask specific model
consult -f file.py "review this code" # include file
consult                               # show available models
```

## Task management (CLI tool)

IMPORTANT: Use the `tasks` CLI tool via Bash — NOT the built-in TaskCreate/TaskUpdate/TaskList tools.
The built-in task tools are a different system. Always use `tasks` in Bash.

```bash
tasks list agent_orchestrator             # show all tasks
tasks context agent_orchestrator          # show tasks + next task instructions
tasks add agent_orchestrator "description" # add a task
tasks done agent_orchestrator <task_id>   # mark task as done
tasks --help                              # full help
```

Do NOT pick up tasks on your own. Only execute tasks when the auto-trigger system sends you a command.
6784  Cargo.lock  generated  Normal file

3  Cargo.toml  Normal file
@@ -0,0 +1,3 @@
[workspace]
members = ["src-tauri", "bterminal-core", "bterminal-relay"]
resolver = "2"
47  README.md  Normal file
@@ -0,0 +1,47 @@
# Svelte + TS + Vite

This template should help get you started developing with Svelte and TypeScript in Vite.

## Recommended IDE Setup

[VS Code](https://code.visualstudio.com/) + [Svelte](https://marketplace.visualstudio.com/items?itemName=svelte.svelte-vscode).

## Need an official Svelte framework?

Check out [SvelteKit](https://github.com/sveltejs/kit#readme), which is also powered by Vite. Deploy anywhere with its serverless-first approach and adapt to various platforms, with out of the box support for TypeScript, SCSS, and Less, and easily-added support for mdsvex, GraphQL, PostCSS, Tailwind CSS, and more.

## Technical considerations

**Why use this over SvelteKit?**

- It brings its own routing solution which might not be preferable for some users.
- It is first and foremost a framework that just happens to use Vite under the hood, not a Vite app.

This template contains as little as possible to get started with Vite + TypeScript + Svelte, while taking into account the developer experience with regards to HMR and intellisense. It demonstrates capabilities on par with the other `create-vite` templates and is a good starting point for beginners dipping their toes into a Vite + Svelte project.

Should you later need the extended capabilities and extensibility provided by SvelteKit, the template has been structured similarly to SvelteKit so that it is easy to migrate.

**Why `global.d.ts` instead of `compilerOptions.types` inside `jsconfig.json` or `tsconfig.json`?**

Setting `compilerOptions.types` shuts out all other types not explicitly listed in the configuration. Using triple-slash references keeps the default TypeScript setting of accepting type information from the entire workspace, while also adding `svelte` and `vite/client` type information.

**Why include `.vscode/extensions.json`?**

Other templates indirectly recommend extensions via the README, but this file allows VS Code to prompt the user to install the recommended extension upon opening the project.

**Why enable `allowJs` in the TS template?**

While `allowJs: false` would indeed prevent the use of `.js` files in the project, it does not prevent the use of JavaScript syntax in `.svelte` files. In addition, it would force `checkJs: false`, bringing the worst of both worlds: not being able to guarantee the entire codebase is TypeScript, and also having worse typechecking for the existing JavaScript. In addition, there are valid use cases in which a mixed codebase may be relevant.

**Why is HMR not preserving my local component state?**

HMR state preservation comes with a number of gotchas! It has been disabled by default in both `svelte-hmr` and `@sveltejs/vite-plugin-svelte` due to its often surprising behavior. You can read the details [here](https://github.com/rixo/svelte-hmr#svelte-hmr).

If you have state that's important to retain within a component, consider creating an external store which would not be replaced by HMR.

```ts
// store.ts
// An extremely simple external store
import { writable } from 'svelte/store'
export default writable(0)
```
15  bterminal-core/Cargo.toml  Normal file
@@ -0,0 +1,15 @@
[package]
name = "bterminal-core"
version = "0.1.0"
edition = "2021"
description = "Shared PTY and sidecar management for BTerminal"
license = "MIT"

[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
log = "0.4"
portable-pty = "0.8"
uuid = { version = "1", features = ["v4"] }
dirs = "5"
landlock = "0.4"
209  bterminal-core/src/config.rs  Normal file
@@ -0,0 +1,209 @@
// AppConfig — centralized path resolution for all BTerminal subsystems.
// In production, paths resolve via dirs:: crate defaults.
// In test mode (BTERMINAL_TEST=1), paths resolve from env var overrides:
//   BTERMINAL_TEST_DATA_DIR   → replaces dirs::data_dir()/bterminal
//   BTERMINAL_TEST_CONFIG_DIR → replaces dirs::config_dir()/bterminal
//   BTERMINAL_TEST_CTX_DIR    → replaces ~/.claude-context

use std::path::PathBuf;

#[derive(Debug, Clone)]
pub struct AppConfig {
    /// Data directory for btmsg.db, sessions.db (default: ~/.local/share/bterminal)
    pub data_dir: PathBuf,
    /// Config directory for groups.json (default: ~/.config/bterminal)
    pub config_dir: PathBuf,
    /// ctx database path (default: ~/.claude-context/context.db)
    pub ctx_db_path: PathBuf,
    /// Memora database path (default: ~/.local/share/memora/memories.db)
    pub memora_db_path: PathBuf,
    /// Whether we are in test mode
    pub test_mode: bool,
}

impl AppConfig {
    /// Build config from environment. In test mode, uses BTERMINAL_TEST_*_DIR env vars.
    pub fn from_env() -> Self {
        let test_mode = std::env::var("BTERMINAL_TEST").map_or(false, |v| v == "1");

        let data_dir = std::env::var("BTERMINAL_TEST_DATA_DIR")
            .ok()
            .filter(|_| test_mode)
            .map(PathBuf::from)
            .unwrap_or_else(|| {
                dirs::data_dir()
                    .unwrap_or_else(|| PathBuf::from("."))
                    .join("bterminal")
            });

        let config_dir = std::env::var("BTERMINAL_TEST_CONFIG_DIR")
            .ok()
            .filter(|_| test_mode)
            .map(PathBuf::from)
            .unwrap_or_else(|| {
                dirs::config_dir()
                    .unwrap_or_else(|| PathBuf::from("."))
                    .join("bterminal")
            });

        let ctx_db_path = std::env::var("BTERMINAL_TEST_CTX_DIR")
            .ok()
            .filter(|_| test_mode)
            .map(|d| PathBuf::from(d).join("context.db"))
            .unwrap_or_else(|| {
                dirs::home_dir()
                    .unwrap_or_default()
                    .join(".claude-context")
                    .join("context.db")
            });

        let memora_db_path = if test_mode {
            // In test mode, memora is optional — use data_dir/memora/memories.db
            data_dir.join("memora").join("memories.db")
        } else {
            dirs::data_dir()
                .unwrap_or_else(|| {
                    dirs::home_dir()
                        .unwrap_or_default()
                        .join(".local/share")
                })
                .join("memora")
                .join("memories.db")
        };

        Self {
            data_dir,
            config_dir,
            ctx_db_path,
            memora_db_path,
            test_mode,
        }
    }

    /// Path to btmsg.db (shared between btmsg and bttask)
    pub fn btmsg_db_path(&self) -> PathBuf {
        self.data_dir.join("btmsg.db")
    }

    /// Path to sessions.db
    pub fn sessions_db_dir(&self) -> &PathBuf {
        &self.data_dir
    }

    /// Path to groups.json
    pub fn groups_json_path(&self) -> PathBuf {
        self.config_dir.join("groups.json")
    }

    /// Path to plugins directory
    pub fn plugins_dir(&self) -> PathBuf {
        self.config_dir.join("plugins")
    }

    /// Whether running in test mode (BTERMINAL_TEST=1)
    pub fn is_test_mode(&self) -> bool {
        self.test_mode
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::Mutex;

    // Serialize all tests that mutate env vars to prevent race conditions.
    // Rust runs tests in parallel; set_var/remove_var are process-global.
    static ENV_LOCK: Mutex<()> = Mutex::new(());

    #[test]
    fn test_production_paths_use_dirs() {
        let _lock = ENV_LOCK.lock().unwrap();
        // Without BTERMINAL_TEST=1, paths should use dirs:: defaults
        std::env::remove_var("BTERMINAL_TEST");
        std::env::remove_var("BTERMINAL_TEST_DATA_DIR");
        std::env::remove_var("BTERMINAL_TEST_CONFIG_DIR");
        std::env::remove_var("BTERMINAL_TEST_CTX_DIR");

        let config = AppConfig::from_env();
        assert!(!config.is_test_mode());
        // Should end with "bterminal" for data and config
        assert!(config.data_dir.ends_with("bterminal"));
        assert!(config.config_dir.ends_with("bterminal"));
        assert!(config.ctx_db_path.ends_with("context.db"));
        assert!(config.memora_db_path.ends_with("memories.db"));
    }

    #[test]
    fn test_btmsg_db_path() {
        let _lock = ENV_LOCK.lock().unwrap();
        std::env::remove_var("BTERMINAL_TEST");
        let config = AppConfig::from_env();
        let path = config.btmsg_db_path();
        assert!(path.ends_with("btmsg.db"));
        assert!(path.parent().unwrap().ends_with("bterminal"));
    }

    #[test]
    fn test_groups_json_path() {
        let _lock = ENV_LOCK.lock().unwrap();
        std::env::remove_var("BTERMINAL_TEST");
        let config = AppConfig::from_env();
        let path = config.groups_json_path();
        assert!(path.ends_with("groups.json"));
    }

    #[test]
    fn test_test_mode_uses_overrides() {
        let _lock = ENV_LOCK.lock().unwrap();
        std::env::set_var("BTERMINAL_TEST", "1");
        std::env::set_var("BTERMINAL_TEST_DATA_DIR", "/tmp/bt-test-data");
        std::env::set_var("BTERMINAL_TEST_CONFIG_DIR", "/tmp/bt-test-config");
        std::env::set_var("BTERMINAL_TEST_CTX_DIR", "/tmp/bt-test-ctx");

        let config = AppConfig::from_env();
        assert!(config.is_test_mode());
        assert_eq!(config.data_dir, PathBuf::from("/tmp/bt-test-data"));
        assert_eq!(config.config_dir, PathBuf::from("/tmp/bt-test-config"));
        assert_eq!(config.ctx_db_path, PathBuf::from("/tmp/bt-test-ctx/context.db"));
        assert_eq!(config.btmsg_db_path(), PathBuf::from("/tmp/bt-test-data/btmsg.db"));
        assert_eq!(config.groups_json_path(), PathBuf::from("/tmp/bt-test-config/groups.json"));

        // Cleanup
        std::env::remove_var("BTERMINAL_TEST");
        std::env::remove_var("BTERMINAL_TEST_DATA_DIR");
        std::env::remove_var("BTERMINAL_TEST_CONFIG_DIR");
        std::env::remove_var("BTERMINAL_TEST_CTX_DIR");
    }

    #[test]
    fn test_test_mode_without_overrides_uses_defaults() {
        let _lock = ENV_LOCK.lock().unwrap();
        std::env::set_var("BTERMINAL_TEST", "1");
        std::env::remove_var("BTERMINAL_TEST_DATA_DIR");
        std::env::remove_var("BTERMINAL_TEST_CONFIG_DIR");
        std::env::remove_var("BTERMINAL_TEST_CTX_DIR");

        let config = AppConfig::from_env();
        assert!(config.is_test_mode());
        // Without override vars, falls back to dirs:: defaults
        assert!(config.data_dir.ends_with("bterminal"));

        std::env::remove_var("BTERMINAL_TEST");
    }

    #[test]
    fn test_test_mode_memora_in_data_dir() {
        let _lock = ENV_LOCK.lock().unwrap();
        std::env::set_var("BTERMINAL_TEST", "1");
        std::env::set_var("BTERMINAL_TEST_DATA_DIR", "/tmp/bt-test-data");

        let config = AppConfig::from_env();
        assert_eq!(
            config.memora_db_path,
            PathBuf::from("/tmp/bt-test-data/memora/memories.db")
        );

        std::env::remove_var("BTERMINAL_TEST");
        std::env::remove_var("BTERMINAL_TEST_DATA_DIR");
    }
}
5  bterminal-core/src/event.rs  Normal file
@@ -0,0 +1,5 @@
/// Trait for emitting events from PTY and sidecar managers.
/// Implemented by Tauri's AppHandle (controller) and WebSocket sender (relay).
pub trait EventSink: Send + Sync {
    fn emit(&self, event: &str, payload: serde_json::Value);
}
6  bterminal-core/src/lib.rs  Normal file
@@ -0,0 +1,6 @@
pub mod config;
pub mod event;
pub mod pty;
pub mod sandbox;
pub mod sidecar;
pub mod supervisor;
173  bterminal-core/src/pty.rs  Normal file
@@ -0,0 +1,173 @@
use portable_pty::{native_pty_system, CommandBuilder, MasterPty, PtySize};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::io::{BufReader, Write};
use std::sync::{Arc, Mutex};
use std::thread;
use uuid::Uuid;

use crate::event::EventSink;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PtyOptions {
    pub shell: Option<String>,
    pub cwd: Option<String>,
    pub args: Option<Vec<String>>,
    pub cols: Option<u16>,
    pub rows: Option<u16>,
}

struct PtyInstance {
    master: Box<dyn MasterPty + Send>,
    writer: Box<dyn Write + Send>,
}

pub struct PtyManager {
    instances: Arc<Mutex<HashMap<String, PtyInstance>>>,
    sink: Arc<dyn EventSink>,
}

impl PtyManager {
    pub fn new(sink: Arc<dyn EventSink>) -> Self {
        Self {
            instances: Arc::new(Mutex::new(HashMap::new())),
            sink,
        }
    }

    pub fn spawn(&self, options: PtyOptions) -> Result<String, String> {
        let pty_system = native_pty_system();
        let cols = options.cols.unwrap_or(80);
        let rows = options.rows.unwrap_or(24);

        let pair = pty_system
            .openpty(PtySize {
                rows,
                cols,
                pixel_width: 0,
                pixel_height: 0,
            })
            .map_err(|e| format!("Failed to open PTY: {e}"))?;

        let shell = options.shell.unwrap_or_else(|| {
            std::env::var("SHELL").unwrap_or_else(|_| "/bin/bash".to_string())
        });

        let mut cmd = CommandBuilder::new(&shell);
        if let Some(args) = &options.args {
            for arg in args {
                cmd.arg(arg);
            }
        }
        if let Some(cwd) = &options.cwd {
            cmd.cwd(cwd);
        }

        let _child = pair
            .slave
            .spawn_command(cmd)
            .map_err(|e| format!("Failed to spawn command: {e}"))?;

        drop(pair.slave);

        let id = Uuid::new_v4().to_string();
        let reader = pair
            .master
            .try_clone_reader()
            .map_err(|e| format!("Failed to clone PTY reader: {e}"))?;
        let writer = pair
            .master
            .take_writer()
            .map_err(|e| format!("Failed to take PTY writer: {e}"))?;

        let event_id = id.clone();
        let sink = self.sink.clone();
        thread::spawn(move || {
            let mut buf_reader = BufReader::with_capacity(4096, reader);
            let mut buf = vec![0u8; 4096];
            loop {
                match std::io::Read::read(&mut buf_reader, &mut buf) {
                    Ok(0) => {
                        sink.emit(
                            &format!("pty-exit-{event_id}"),
                            serde_json::Value::Null,
                        );
                        break;
                    }
                    Ok(n) => {
                        let data = String::from_utf8_lossy(&buf[..n]).to_string();
                        sink.emit(
                            &format!("pty-data-{event_id}"),
                            serde_json::Value::String(data),
                        );
                    }
                    Err(e) => {
                        log::error!("PTY read error for {event_id}: {e}");
                        sink.emit(
                            &format!("pty-exit-{event_id}"),
                            serde_json::Value::Null,
                        );
                        break;
                    }
                }
            }
        });

        let instance = PtyInstance {
            master: pair.master,
            writer,
        };
        self.instances.lock().unwrap().insert(id.clone(), instance);

        log::info!("Spawned PTY {id} ({shell})");
        Ok(id)
    }

    pub fn write(&self, id: &str, data: &str) -> Result<(), String> {
        let mut instances = self.instances.lock().unwrap();
        let instance = instances
            .get_mut(id)
            .ok_or_else(|| format!("PTY {id} not found"))?;
        instance
            .writer
            .write_all(data.as_bytes())
            .map_err(|e| format!("PTY write error: {e}"))?;
        instance
            .writer
            .flush()
            .map_err(|e| format!("PTY flush error: {e}"))?;
        Ok(())
    }

    pub fn resize(&self, id: &str, cols: u16, rows: u16) -> Result<(), String> {
        let instances = self.instances.lock().unwrap();
        let instance = instances
            .get(id)
            .ok_or_else(|| format!("PTY {id} not found"))?;
        instance
            .master
            .resize(PtySize {
                rows,
                cols,
                pixel_width: 0,
                pixel_height: 0,
            })
            .map_err(|e| format!("PTY resize error: {e}"))?;
        Ok(())
    }

    pub fn kill(&self, id: &str) -> Result<(), String> {
        let mut instances = self.instances.lock().unwrap();
        if instances.remove(id).is_some() {
            log::info!("Killed PTY {id}");
            Ok(())
        } else {
            Err(format!("PTY {id} not found"))
        }
    }

    /// List active PTY session IDs.
    pub fn list_sessions(&self) -> Vec<String> {
        self.instances.lock().unwrap().keys().cloned().collect()
    }
}
361  bterminal-core/src/sandbox.rs  Normal file
@@ -0,0 +1,361 @@
// Landlock-based filesystem sandboxing for sidecar processes.
//
// Landlock is a Linux Security Module (LSM) available since kernel 5.13.
// It restricts filesystem access for the calling process and all its children.
// Applied via pre_exec() on the sidecar child process before exec.
//
// Restrictions can only be tightened after application — never relaxed.
// The sidecar is long-lived and handles queries for multiple projects,
// so we apply the union of all project paths at sidecar start time.

use std::path::PathBuf;

use landlock::{
    Access, AccessFs, PathBeneath, PathFd, Ruleset, RulesetAttr, RulesetCreatedAttr,
    RulesetStatus, ABI,
};

/// Target Landlock ABI version. V3 requires kernel 6.2+ (we run 6.12+).
/// Falls back gracefully on older kernels via best-effort mode.
const TARGET_ABI: ABI = ABI::V3;

/// Configuration for Landlock filesystem sandboxing.
#[derive(Debug, Clone)]
pub struct SandboxConfig {
    /// Directories with full read+write+execute access (project CWDs, worktrees, tmp)
    pub rw_paths: Vec<PathBuf>,
    /// Directories with read-only access (system libs, runtimes, config)
    pub ro_paths: Vec<PathBuf>,
    /// Whether sandboxing is enabled
    pub enabled: bool,
}

impl Default for SandboxConfig {
    fn default() -> Self {
        Self {
            rw_paths: Vec::new(),
            ro_paths: Vec::new(),
            enabled: false,
        }
    }
}

impl SandboxConfig {
    /// Build a sandbox config for a set of project directories.
    ///
    /// `project_cwds` — directories that need read+write access (one per project).
    /// `worktree_roots` — optional worktree directories (one per project that uses worktrees).
    ///
    /// System paths (runtimes, libraries, /etc) are added as read-only automatically.
    pub fn for_projects(project_cwds: &[&str], worktree_roots: &[&str]) -> Self {
        let mut rw = Vec::new();

        for cwd in project_cwds {
            rw.push(PathBuf::from(cwd));
        }
        for wt in worktree_roots {
            rw.push(PathBuf::from(wt));
        }

        // Temp dir for sidecar scratch files
        rw.push(std::env::temp_dir());

        let home = dirs::home_dir().unwrap_or_else(|| PathBuf::from("/root"));

        let ro = vec![
            PathBuf::from("/usr"),   // system binaries + libraries
            PathBuf::from("/lib"),   // shared libraries
            PathBuf::from("/lib64"), // 64-bit shared libraries
            PathBuf::from("/etc"),   // system configuration (read only)
            PathBuf::from("/proc"),  // process info (Landlock V3+ handles this)
            PathBuf::from("/dev"),   // device nodes (stdin/stdout/stderr, /dev/null, urandom)
            PathBuf::from("/bin"),   // essential binaries (symlink to /usr/bin on most distros)
            PathBuf::from("/sbin"),  // essential system binaries
            home.join(".local"),     // ~/.local/bin (claude CLI, user-installed tools)
            home.join(".deno"),      // Deno runtime cache
            home.join(".nvm"),       // Node.js version manager
            home.join(".config"),    // XDG config (claude profiles, bterminal config)
            home.join(".claude"),    // Claude CLI data (worktrees, skills, settings)
        ];

        Self {
            rw_paths: rw,
            ro_paths: ro,
            enabled: true,
        }
    }

    /// Build a restricted sandbox config for Aider agent sessions.
    /// More restrictive than `for_projects`: only project worktree + read-only system paths.
    /// Does NOT allow write access to ~/.config, ~/.claude, etc.
    pub fn for_aider_restricted(project_cwd: &str, worktree: Option<&str>) -> Self {
        let mut rw = vec![PathBuf::from(project_cwd)];
        if let Some(wt) = worktree {
            rw.push(PathBuf::from(wt));
        }
        rw.push(std::env::temp_dir());
        let home = dirs::home_dir().unwrap_or_else(|| PathBuf::from("/root"));
        rw.push(home.join(".aider"));

        let ro = vec![
            PathBuf::from("/usr"),
            PathBuf::from("/lib"),
            PathBuf::from("/lib64"),
            PathBuf::from("/etc"),
            PathBuf::from("/proc"),
            PathBuf::from("/dev"),
            PathBuf::from("/bin"),
            PathBuf::from("/sbin"),
            home.join(".local"),
            home.join(".deno"),
            home.join(".nvm"),
        ];

        Self {
            rw_paths: rw,
            ro_paths: ro,
            enabled: true,
        }
    }

    /// Build a sandbox config for a single project directory.
    pub fn for_project(cwd: &str, worktree: Option<&str>) -> Self {
        let worktrees: Vec<&str> = worktree.into_iter().collect();
        Self::for_projects(&[cwd], &worktrees)
    }

    /// Apply Landlock restrictions to the current process.
    ///
    /// This must be called in the child process (e.g., via `pre_exec`) BEFORE exec.
    /// Once applied, restrictions are inherited by all child processes and cannot be relaxed.
    ///
    /// Returns:
    /// - `Ok(true)` if Landlock was applied and enforced
    /// - `Ok(false)` if the kernel does not support Landlock (graceful degradation)
    /// - `Err(msg)` on configuration or syscall errors
    pub fn apply(&self) -> Result<bool, String> {
        if !self.enabled {
            return Ok(false);
        }

        let access_all = AccessFs::from_all(TARGET_ABI);
        let access_read = AccessFs::from_read(TARGET_ABI);

        // Create ruleset handling all filesystem access types
        let mut ruleset = Ruleset::default()
            .handle_access(access_all)
            .map_err(|e| format!("Landlock: failed to handle access: {e}"))?
            .create()
            .map_err(|e| format!("Landlock: failed to create ruleset: {e}"))?;

        // Add read+write rules for project directories and tmp
        for path in &self.rw_paths {
            if path.exists() {
                let fd = PathFd::new(path)
                    .map_err(|e| format!("Landlock: PathFd failed for {}: {e}", path.display()))?;
                ruleset = ruleset
                    .add_rule(PathBeneath::new(fd, access_all))
                    .map_err(|e| {
                        format!("Landlock: add_rule (rw) failed for {}: {e}", path.display())
                    })?;
            } else {
                log::warn!(
                    "Landlock: skipping non-existent rw path: {}",
                    path.display()
                );
            }
        }

        // Add read-only rules for system paths
        for path in &self.ro_paths {
            if path.exists() {
                let fd = PathFd::new(path)
                    .map_err(|e| format!("Landlock: PathFd failed for {}: {e}", path.display()))?;
                ruleset = ruleset
                    .add_rule(PathBeneath::new(fd, access_read))
                    .map_err(|e| {
                        format!("Landlock: add_rule (ro) failed for {}: {e}", path.display())
                    })?;
            }
            // Silently skip non-existent read-only paths (e.g., /lib64 on some systems)
        }

        // Enforce the ruleset on this thread (and inherited by children)
        let status = ruleset
            .restrict_self()
            .map_err(|e| format!("Landlock: restrict_self failed: {e}"))?;

        // Landlock enforcement states:
        // - Enforced: kernel 6.2+ with ABI V3 (full filesystem restriction)
        // - NotEnforced: kernel 5.13–6.1 (Landlock exists but ABI too old for V3)
        // - Error (caught above): kernel <5.13 (no Landlock LSM available)
        let enforced = status.ruleset != RulesetStatus::NotEnforced;
        if enforced {
            log::info!(
                "Landlock sandbox applied ({} rw, {} ro paths)",
                self.rw_paths.len(),
                self.ro_paths.len()
            );
        } else {
            log::warn!(
                "Landlock not enforced — sidecar runs without filesystem restrictions. \
                 Kernel 6.2+ required for enforcement."
            );
        }

        Ok(enforced)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_default_is_disabled() {
        let config = SandboxConfig::default();
        assert!(!config.enabled);
        assert!(config.rw_paths.is_empty());
        assert!(config.ro_paths.is_empty());
    }

    #[test]
    fn test_for_project_single_cwd() {
        let config = SandboxConfig::for_project("/home/user/myproject", None);
        assert!(config.enabled);
        assert!(config.rw_paths.contains(&PathBuf::from("/home/user/myproject")));
        assert!(config.rw_paths.contains(&std::env::temp_dir()));
        // No worktree path added
        assert!(!config
            .rw_paths
            .iter()
            .any(|p| p.to_string_lossy().contains("worktree")));
    }

    #[test]
    fn test_for_project_with_worktree() {
        let config = SandboxConfig::for_project(
            "/home/user/myproject",
            Some("/home/user/myproject/.claude/worktrees/abc123"),
        );
        assert!(config.enabled);
        assert!(config.rw_paths.contains(&PathBuf::from("/home/user/myproject")));
        assert!(config.rw_paths.contains(&PathBuf::from(
            "/home/user/myproject/.claude/worktrees/abc123"
        )));
    }

    #[test]
    fn test_for_projects_multiple_cwds() {
        let config = SandboxConfig::for_projects(
            &["/home/user/project-a", "/home/user/project-b"],
            &["/home/user/project-a/.claude/worktrees/s1"],
        );
        assert!(config.enabled);
        assert!(config.rw_paths.contains(&PathBuf::from("/home/user/project-a")));
        assert!(config.rw_paths.contains(&PathBuf::from("/home/user/project-b")));
        assert!(config.rw_paths.contains(&PathBuf::from(
            "/home/user/project-a/.claude/worktrees/s1"
        )));
        // tmp always present
        assert!(config.rw_paths.contains(&std::env::temp_dir()));
    }

    #[test]
    fn test_ro_paths_include_system_dirs() {
        let config = SandboxConfig::for_project("/tmp/test", None);
        let ro_strs: Vec<String> = config.ro_paths.iter().map(|p| p.display().to_string()).collect();

        assert!(ro_strs.iter().any(|p| p == "/usr"), "missing /usr");
        assert!(ro_strs.iter().any(|p| p == "/lib"), "missing /lib");
        assert!(ro_strs.iter().any(|p| p == "/etc"), "missing /etc");
        assert!(ro_strs.iter().any(|p| p == "/proc"), "missing /proc");
        assert!(ro_strs.iter().any(|p| p == "/dev"), "missing /dev");
        assert!(ro_strs.iter().any(|p| p == "/bin"), "missing /bin");
    }

    #[test]
    fn test_ro_paths_include_runtime_dirs() {
        let config = SandboxConfig::for_project("/tmp/test", None);
        let home = dirs::home_dir().unwrap();

        assert!(config.ro_paths.contains(&home.join(".local")));
        assert!(config.ro_paths.contains(&home.join(".deno")));
        assert!(config.ro_paths.contains(&home.join(".nvm")));
        assert!(config.ro_paths.contains(&home.join(".config")));
        assert!(config.ro_paths.contains(&home.join(".claude")));
    }

    #[test]
    fn test_disabled_apply_returns_false() {
        let config = SandboxConfig::default();
        assert_eq!(config.apply().unwrap(), false);
    }

    #[test]
    fn test_rw_paths_count() {
        // Single project: cwd + tmp = 2
        let config = SandboxConfig::for_project("/tmp/test", None);
        assert_eq!(config.rw_paths.len(), 2);

        // With worktree: cwd + worktree + tmp = 3
        let config = SandboxConfig::for_project("/tmp/test", Some("/tmp/wt"));
        assert_eq!(config.rw_paths.len(), 3);
    }

    #[test]
    fn test_for_aider_restricted_single_cwd() {
        let config = SandboxConfig::for_aider_restricted("/home/user/myproject", None);
        assert!(config.enabled);
        assert!(config.rw_paths.contains(&PathBuf::from("/home/user/myproject")));
        assert!(config.rw_paths.contains(&std::env::temp_dir()));
        let home = dirs::home_dir().unwrap();
        assert!(config.rw_paths.contains(&home.join(".aider")));
        // No worktree path added
        assert!(!config
            .rw_paths
            .iter()
            .any(|p| p.to_string_lossy().contains("worktree")));
    }

    #[test]
    fn test_for_aider_restricted_with_worktree() {
        let config = SandboxConfig::for_aider_restricted(
            "/home/user/myproject",
|
||||
Some("/home/user/myproject/.claude/worktrees/abc123"),
|
||||
);
|
||||
assert!(config.enabled);
|
||||
assert!(config.rw_paths.contains(&PathBuf::from("/home/user/myproject")));
|
||||
assert!(config.rw_paths.contains(&PathBuf::from(
|
||||
"/home/user/myproject/.claude/worktrees/abc123"
|
||||
)));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_for_aider_restricted_no_config_write() {
|
||||
let config = SandboxConfig::for_aider_restricted("/tmp/test", None);
|
||||
let home = dirs::home_dir().unwrap();
|
||||
// Aider restricted must NOT have ~/.config or ~/.claude in rw_paths
|
||||
assert!(!config.rw_paths.contains(&home.join(".config")));
|
||||
assert!(!config.rw_paths.contains(&home.join(".claude")));
|
||||
// And NOT in ro_paths either (stricter than for_projects)
|
||||
assert!(!config.ro_paths.contains(&home.join(".config")));
|
||||
assert!(!config.ro_paths.contains(&home.join(".claude")));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_for_aider_restricted_rw_count() {
|
||||
// Without worktree: cwd + tmp + .aider = 3
|
||||
let config = SandboxConfig::for_aider_restricted("/tmp/test", None);
|
||||
assert_eq!(config.rw_paths.len(), 3);
|
||||
|
||||
// With worktree: cwd + worktree + tmp + .aider = 4
|
||||
let config = SandboxConfig::for_aider_restricted("/tmp/test", Some("/tmp/wt"));
|
||||
assert_eq!(config.rw_paths.len(), 4);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_for_projects_empty() {
|
||||
let config = SandboxConfig::for_projects(&[], &[]);
|
||||
assert!(config.enabled);
|
||||
// Only tmp dir in rw
|
||||
assert_eq!(config.rw_paths.len(), 1);
|
||||
assert_eq!(config.rw_paths[0], std::env::temp_dir());
|
||||
}
|
||||
}
|
||||
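The counting tests above pin down a simple composition invariant: `rw_paths` is always the project cwd, plus the worktree when one is given, plus the system temp dir. A minimal standalone sketch of that invariant (the real `SandboxConfig` constructors live in `bterminal-core`; `compose_rw_paths` here is a hypothetical toy mirror of just the path composition):

```rust
use std::path::PathBuf;

// Toy mirror of the rw-path composition tested above (hypothetical helper):
// cwd, then the optional worktree, then the temp dir.
fn compose_rw_paths(cwd: &str, worktree: Option<&str>) -> Vec<PathBuf> {
    let mut rw = vec![PathBuf::from(cwd)];
    if let Some(wt) = worktree {
        rw.push(PathBuf::from(wt));
    }
    rw.push(std::env::temp_dir());
    rw
}

fn main() {
    // Single project: cwd + tmp = 2
    assert_eq!(compose_rw_paths("/tmp/test", None).len(), 2);

    // With worktree: cwd + worktree + tmp = 3
    let rw = compose_rw_paths("/tmp/test", Some("/tmp/wt"));
    assert_eq!(rw.len(), 3);
    assert!(rw.contains(&PathBuf::from("/tmp/wt")));
}
```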
980
bterminal-core/src/sidecar.rs
Normal file
@ -0,0 +1,980 @@
// Sidecar lifecycle management (Deno-first, Node.js fallback)
// Spawns per-provider runner scripts (e.g. claude-runner.mjs, aider-runner.mjs)
// via deno or node, communicates via stdio NDJSON.
// Each provider gets its own process, started lazily on first query.
//
// Uses a std::sync::mpsc actor pattern: the actor thread owns all mutable state
// (providers HashMap, session_providers HashMap) exclusively. External callers
// send requests via a channel, eliminating the TOCTOU race in ensure_provider().

use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::io::{BufRead, BufReader, Write};
#[cfg(unix)]
use std::os::unix::process::CommandExt;
use std::path::PathBuf;
use std::process::{Child, Command, Stdio};
use std::sync::mpsc as std_mpsc;
use std::sync::Arc;
use std::thread;

use crate::event::EventSink;
use crate::sandbox::SandboxConfig;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentQueryOptions {
    #[serde(default = "default_provider")]
    pub provider: String,
    pub session_id: String,
    pub prompt: String,
    pub cwd: Option<String>,
    pub max_turns: Option<u32>,
    pub max_budget_usd: Option<f64>,
    pub resume_session_id: Option<String>,
    pub permission_mode: Option<String>,
    pub setting_sources: Option<Vec<String>>,
    pub system_prompt: Option<String>,
    pub model: Option<String>,
    pub claude_config_dir: Option<String>,
    pub additional_directories: Option<Vec<String>>,
    /// When set, agent runs in a git worktree for isolation (passed as --worktree <name> CLI flag)
    pub worktree_name: Option<String>,
    /// Provider-specific configuration blob (passed through to sidecar as-is)
    #[serde(default)]
    pub provider_config: serde_json::Value,
    /// Extra environment variables injected into the agent process (e.g. BTMSG_AGENT_ID)
    #[serde(default)]
    pub extra_env: std::collections::HashMap<String, String>,
}

fn default_provider() -> String {
    "claude".to_string()
}

/// Directories to search for sidecar scripts.
#[derive(Debug, Clone)]
pub struct SidecarConfig {
    pub search_paths: Vec<PathBuf>,
    /// Extra env vars forwarded to sidecar processes (e.g. BTERMINAL_TEST=1 for test isolation)
    pub env_overrides: std::collections::HashMap<String, String>,
    /// Landlock filesystem sandbox configuration (Linux 5.13+, applied via pre_exec)
    pub sandbox: SandboxConfig,
}

struct SidecarCommand {
    program: String,
    args: Vec<String>,
}

/// Per-provider sidecar process state.
struct ProviderProcess {
    child: Child,
    stdin_writer: Box<dyn Write + Send>,
    ready: bool,
    /// Atomic flag set by the stdout reader thread when the "ready" message arrives.
    /// The actor polls this to detect readiness without needing a separate channel.
    ready_flag: Arc<std::sync::atomic::AtomicBool>,
}

/// Requests sent from public API methods to the actor thread.
enum ProviderRequest {
    Start {
        reply: std_mpsc::Sender<Result<(), String>>,
    },
    EnsureAndQuery {
        options: AgentQueryOptions,
        reply: std_mpsc::Sender<Result<(), String>>,
    },
    StopSession {
        session_id: String,
        reply: std_mpsc::Sender<Result<(), String>>,
    },
    SendMessage {
        msg: serde_json::Value,
        reply: std_mpsc::Sender<Result<(), String>>,
    },
    Restart {
        reply: std_mpsc::Sender<Result<(), String>>,
    },
    Shutdown {
        reply: std_mpsc::Sender<Result<(), String>>,
    },
    IsReady {
        reply: std_mpsc::Sender<bool>,
    },
    SetSandbox {
        sandbox: SandboxConfig,
        reply: std_mpsc::Sender<()>,
    },
}
pub struct SidecarManager {
    tx: std_mpsc::Sender<ProviderRequest>,
    // Keep a handle so the thread lives as long as the manager.
    // Not joined on drop — we send Shutdown instead.
    _actor_thread: Option<thread::JoinHandle<()>>,
}

/// Actor function that owns all mutable state exclusively.
/// Receives requests via `req_rx`. Ready signaling from stdout reader threads
/// uses per-provider AtomicBool flags (polled during ensure_provider_impl).
fn run_actor(
    req_rx: std_mpsc::Receiver<ProviderRequest>,
    sink: Arc<dyn EventSink>,
    initial_config: SidecarConfig,
) {
    let mut providers: HashMap<String, ProviderProcess> = HashMap::new();
    let mut session_providers: HashMap<String, String> = HashMap::new();
    let mut config = initial_config;

    loop {
        // Block waiting for next request (with timeout so actor stays responsive)
        match req_rx.recv_timeout(std::time::Duration::from_millis(50)) {
            Ok(req) => {
                match req {
                    ProviderRequest::Start { reply } => {
                        let result =
                            start_provider_impl(&mut providers, &config, &sink, "claude");
                        let _ = reply.send(result);
                    }
                    ProviderRequest::EnsureAndQuery { options, reply } => {
                        let provider = options.provider.clone();

                        // Ensure provider is ready — atomic, no TOCTOU
                        if let Err(e) =
                            ensure_provider_impl(&mut providers, &config, &sink, &provider)
                        {
                            let _ = reply.send(Err(e));
                            continue;
                        }

                        // Track session -> provider mapping
                        session_providers.insert(options.session_id.clone(), provider.clone());

                        // Build and send query message
                        let msg = build_query_msg(&options);
                        let result = send_to_provider_impl(&mut providers, &provider, &msg);
                        let _ = reply.send(result);
                    }
                    ProviderRequest::StopSession { session_id, reply } => {
                        let provider = session_providers
                            .get(&session_id)
                            .cloned()
                            .unwrap_or_else(|| "claude".to_string());
                        let msg = serde_json::json!({
                            "type": "stop",
                            "sessionId": session_id,
                        });
                        let result = send_to_provider_impl(&mut providers, &provider, &msg);
                        let _ = reply.send(result);
                    }
                    ProviderRequest::SendMessage { msg, reply } => {
                        let result = send_to_provider_impl(&mut providers, "claude", &msg);
                        let _ = reply.send(result);
                    }
                    ProviderRequest::Restart { reply } => {
                        log::info!("Restarting all sidecars");
                        shutdown_all(&mut providers, &mut session_providers);
                        let result =
                            start_provider_impl(&mut providers, &config, &sink, "claude");
                        let _ = reply.send(result);
                    }
                    ProviderRequest::Shutdown { reply } => {
                        shutdown_all(&mut providers, &mut session_providers);
                        let _ = reply.send(Ok(()));
                    }
                    ProviderRequest::IsReady { reply } => {
                        // Sync ready state from atomic flags
                        sync_ready_flags(&mut providers);
                        let ready = providers.get("claude").map(|p| p.ready).unwrap_or(false);
                        let _ = reply.send(ready);
                    }
                    ProviderRequest::SetSandbox { sandbox, reply } => {
                        config.sandbox = sandbox;
                        let _ = reply.send(());
                    }
                }
            }
            Err(std_mpsc::RecvTimeoutError::Timeout) => {
                // Loop back -- keeps actor responsive to shutdown
                continue;
            }
            Err(std_mpsc::RecvTimeoutError::Disconnected) => {
                // All senders dropped — shut down
                break;
            }
        }
    }

    // Channel closed — clean up remaining providers
    shutdown_all(&mut providers, &mut session_providers);
}

/// Sync ready state from AtomicBool flags set by stdout reader threads.
fn sync_ready_flags(providers: &mut HashMap<String, ProviderProcess>) {
    for p in providers.values_mut() {
        if !p.ready && p.ready_flag.load(std::sync::atomic::Ordering::Acquire) {
            p.ready = true;
        }
    }
}

/// Shut down all provider processes and clear session mappings.
fn shutdown_all(
    providers: &mut HashMap<String, ProviderProcess>,
    session_providers: &mut HashMap<String, String>,
) {
    for (name, mut proc) in providers.drain() {
        log::info!("Shutting down {} sidecar", name);
        let _ = proc.child.kill();
        let _ = proc.child.wait();
    }
    session_providers.clear();
}
/// Start a specific provider's sidecar process. Called from the actor thread
/// which owns the providers HashMap exclusively — no lock contention possible.
fn start_provider_impl(
    providers: &mut HashMap<String, ProviderProcess>,
    config: &SidecarConfig,
    sink: &Arc<dyn EventSink>,
    provider: &str,
) -> Result<(), String> {
    if providers.contains_key(provider) {
        return Err(format!("Sidecar for '{}' already running", provider));
    }

    let cmd = SidecarManager::resolve_sidecar_for_provider_with_config(config, provider)?;

    log::info!(
        "Starting {} sidecar: {} {}",
        provider,
        cmd.program,
        cmd.args.join(" ")
    );

    // Build a clean environment stripping provider-specific vars to prevent
    // SDKs from detecting nesting when BTerminal is launched from a provider terminal.
    let clean_env: Vec<(String, String)> = std::env::vars()
        .filter(|(k, _)| strip_provider_env_var(k))
        .collect();

    let mut command = Command::new(&cmd.program);
    command
        .args(&cmd.args)
        .env_clear()
        .envs(clean_env)
        .envs(
            config
                .env_overrides
                .iter()
                .map(|(k, v)| (k.as_str(), v.as_str())),
        )
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped());

    // Apply Landlock sandbox in child process before exec (Linux only).
    #[cfg(unix)]
    if config.sandbox.enabled {
        let sandbox = config.sandbox.clone();
        unsafe {
            command.pre_exec(move || {
                sandbox
                    .apply()
                    .map(|enforced| {
                        if !enforced {
                            log::warn!("Landlock sandbox not enforced in sidecar child");
                        }
                    })
                    .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))
            });
        }
    }

    let mut child = command
        .spawn()
        .map_err(|e| format!("Failed to start {} sidecar: {e}", provider))?;

    let child_stdin = child
        .stdin
        .take()
        .ok_or("Failed to capture sidecar stdin")?;
    let child_stdout = child
        .stdout
        .take()
        .ok_or("Failed to capture sidecar stdout")?;
    let child_stderr = child
        .stderr
        .take()
        .ok_or("Failed to capture sidecar stderr")?;

    // Per-provider AtomicBool for ready signaling from stdout reader thread to actor.
    let ready_flag = Arc::new(std::sync::atomic::AtomicBool::new(false));
    let ready_flag_writer = ready_flag.clone();

    // Stdout reader thread — forwards NDJSON to event sink
    let sink_clone = sink.clone();
    let provider_name = provider.to_string();
    thread::spawn(move || {
        let reader = BufReader::new(child_stdout);
        for line in reader.lines() {
            match line {
                Ok(line) => {
                    if line.trim().is_empty() {
                        continue;
                    }
                    match serde_json::from_str::<serde_json::Value>(&line) {
                        Ok(msg) => {
                            if msg.get("type").and_then(|t| t.as_str()) == Some("ready") {
                                ready_flag_writer
                                    .store(true, std::sync::atomic::Ordering::Release);
                                log::info!("{} sidecar ready", provider_name);
                            }
                            sink_clone.emit("sidecar-message", msg);
                        }
                        Err(e) => {
                            log::warn!(
                                "Invalid JSON from {} sidecar: {e}: {line}",
                                provider_name
                            );
                        }
                    }
                }
                Err(e) => {
                    log::error!("{} sidecar stdout read error: {e}", provider_name);
                    break;
                }
            }
        }
        log::info!("{} sidecar stdout reader exited", provider_name);
        sink_clone.emit(
            "sidecar-exited",
            serde_json::json!({ "provider": provider_name }),
        );
    });

    // Stderr reader thread — logs only
    let provider_name2 = provider.to_string();
    thread::spawn(move || {
        let reader = BufReader::new(child_stderr);
        for line in reader.lines() {
            match line {
                Ok(line) => log::info!("[{} sidecar stderr] {line}", provider_name2),
                Err(e) => {
                    log::error!("{} sidecar stderr read error: {e}", provider_name2);
                    break;
                }
            }
        }
    });

    providers.insert(
        provider.to_string(),
        ProviderProcess {
            child,
            stdin_writer: Box::new(child_stdin),
            ready: false,
            ready_flag,
        },
    );

    Ok(())
}
/// Ensure a provider's sidecar is running and ready, starting it lazily if needed.
/// Called exclusively from the actor thread — no lock contention, no TOCTOU race.
fn ensure_provider_impl(
    providers: &mut HashMap<String, ProviderProcess>,
    config: &SidecarConfig,
    sink: &Arc<dyn EventSink>,
    provider: &str,
) -> Result<(), String> {
    // Sync ready state from atomic flag (set by stdout reader thread)
    if let Some(p) = providers.get_mut(provider) {
        if !p.ready && p.ready_flag.load(std::sync::atomic::Ordering::Acquire) {
            p.ready = true;
        }
        if p.ready {
            return Ok(());
        }
        // Started but not ready yet -- fall through to wait loop
    } else {
        // Not started -- start it now. No TOCTOU: we own the HashMap exclusively.
        start_provider_impl(providers, config, sink, provider)?;
    }

    // Wait for ready (up to 10 seconds)
    for _ in 0..100 {
        std::thread::sleep(std::time::Duration::from_millis(100));

        if let Some(p) = providers.get_mut(provider) {
            if !p.ready && p.ready_flag.load(std::sync::atomic::Ordering::Acquire) {
                p.ready = true;
            }
            if p.ready {
                return Ok(());
            }
        } else {
            return Err(format!("{} sidecar process exited before ready", provider));
        }
    }
    Err(format!(
        "{} sidecar did not become ready within timeout",
        provider
    ))
}
/// Send a JSON message to a provider's stdin.
fn send_to_provider_impl(
    providers: &mut HashMap<String, ProviderProcess>,
    provider: &str,
    msg: &serde_json::Value,
) -> Result<(), String> {
    let proc = providers
        .get_mut(provider)
        .ok_or_else(|| format!("{} sidecar not running", provider))?;

    let line =
        serde_json::to_string(msg).map_err(|e| format!("JSON serialize error: {e}"))?;

    proc.stdin_writer
        .write_all(line.as_bytes())
        .map_err(|e| format!("Sidecar write error: {e}"))?;
    proc.stdin_writer
        .write_all(b"\n")
        .map_err(|e| format!("Sidecar write error: {e}"))?;
    proc.stdin_writer
        .flush()
        .map_err(|e| format!("Sidecar flush error: {e}"))?;

    Ok(())
}
/// Build the NDJSON query message from AgentQueryOptions.
fn build_query_msg(options: &AgentQueryOptions) -> serde_json::Value {
    serde_json::json!({
        "type": "query",
        "provider": options.provider,
        "sessionId": options.session_id,
        "prompt": options.prompt,
        "cwd": options.cwd,
        "maxTurns": options.max_turns,
        "maxBudgetUsd": options.max_budget_usd,
        "resumeSessionId": options.resume_session_id,
        "permissionMode": options.permission_mode,
        "settingSources": options.setting_sources,
        "systemPrompt": options.system_prompt,
        "model": options.model,
        "claudeConfigDir": options.claude_config_dir,
        "additionalDirectories": options.additional_directories,
        "worktreeName": options.worktree_name,
        "providerConfig": options.provider_config,
        "extraEnv": options.extra_env,
    })
}
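`build_query_msg` produces one JSON object per query, and `send_to_provider_impl` writes it followed by a single newline: NDJSON framing, one message per line. A self-contained sketch of just that framing, with a `Vec<u8>` standing in for the sidecar's stdin (`write_frame` is a hypothetical helper, not part of the crate):

```rust
use std::io::Write;

// Minimal NDJSON frame writer mirroring send_to_provider_impl:
// write the serialized message, append '\n', flush.
fn write_frame(w: &mut impl Write, json_msg: &str) -> std::io::Result<()> {
    w.write_all(json_msg.as_bytes())?;
    w.write_all(b"\n")?;
    w.flush()
}

fn main() -> std::io::Result<()> {
    let mut stdin_stand_in: Vec<u8> = Vec::new();
    write_frame(&mut stdin_stand_in, r#"{"type":"query","sessionId":"s1","prompt":"hello"}"#)?;
    write_frame(&mut stdin_stand_in, r#"{"type":"stop","sessionId":"s1"}"#)?;

    // Two frames, one message per line — the reader side splits on '\n'.
    let wire = String::from_utf8(stdin_stand_in).unwrap();
    assert_eq!(wire.lines().count(), 2);
    Ok(())
}
```

The newline delimiter is why the reader thread in `start_provider_impl` can use `BufReader::lines()` and skip blank lines without any length-prefix bookkeeping.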
impl SidecarManager {
    pub fn new(sink: Arc<dyn EventSink>, config: SidecarConfig) -> Self {
        let (req_tx, req_rx) = std_mpsc::channel();

        let handle = thread::spawn(move || {
            run_actor(req_rx, sink, config);
        });

        Self {
            tx: req_tx,
            _actor_thread: Some(handle),
        }
    }

    /// Update the sandbox configuration. Takes effect on next sidecar (re)start.
    pub fn set_sandbox(&self, sandbox: SandboxConfig) {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        if self
            .tx
            .send(ProviderRequest::SetSandbox {
                sandbox,
                reply: reply_tx,
            })
            .is_ok()
        {
            let _ = reply_rx.recv();
        }
    }

    /// Start the default (claude) provider sidecar. Called on app startup.
    pub fn start(&self) -> Result<(), String> {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        self.tx
            .send(ProviderRequest::Start { reply: reply_tx })
            .map_err(|_| "Sidecar actor stopped".to_string())?;
        reply_rx
            .recv()
            .map_err(|_| "Sidecar actor stopped".to_string())?
    }

    pub fn query(&self, options: &AgentQueryOptions) -> Result<(), String> {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        self.tx
            .send(ProviderRequest::EnsureAndQuery {
                options: options.clone(),
                reply: reply_tx,
            })
            .map_err(|_| "Sidecar actor stopped".to_string())?;
        reply_rx
            .recv()
            .map_err(|_| "Sidecar actor stopped".to_string())?
    }

    pub fn stop_session(&self, session_id: &str) -> Result<(), String> {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        self.tx
            .send(ProviderRequest::StopSession {
                session_id: session_id.to_string(),
                reply: reply_tx,
            })
            .map_err(|_| "Sidecar actor stopped".to_string())?;
        reply_rx
            .recv()
            .map_err(|_| "Sidecar actor stopped".to_string())?
    }

    pub fn restart(&self) -> Result<(), String> {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        self.tx
            .send(ProviderRequest::Restart { reply: reply_tx })
            .map_err(|_| "Sidecar actor stopped".to_string())?;
        reply_rx
            .recv()
            .map_err(|_| "Sidecar actor stopped".to_string())?
    }

    pub fn shutdown(&self) -> Result<(), String> {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        if self
            .tx
            .send(ProviderRequest::Shutdown { reply: reply_tx })
            .is_ok()
        {
            let _ = reply_rx.recv();
        }
        Ok(())
    }

    /// Returns true if the default (claude) provider sidecar is ready.
    pub fn is_ready(&self) -> bool {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        if self
            .tx
            .send(ProviderRequest::IsReady { reply: reply_tx })
            .is_ok()
        {
            reply_rx.recv().unwrap_or(false)
        } else {
            false
        }
    }

    /// Legacy send_message — routes to the default (claude) provider.
    pub fn send_message(&self, msg: &serde_json::Value) -> Result<(), String> {
        let (reply_tx, reply_rx) = std_mpsc::channel();
        self.tx
            .send(ProviderRequest::SendMessage {
                msg: msg.clone(),
                reply: reply_tx,
            })
            .map_err(|_| "Sidecar actor stopped".to_string())?;
        reply_rx
            .recv()
            .map_err(|_| "Sidecar actor stopped".to_string())?
    }

    /// Resolve a sidecar command for a specific provider's runner file.
    fn resolve_sidecar_for_provider_with_config(
        config: &SidecarConfig,
        provider: &str,
    ) -> Result<SidecarCommand, String> {
        let runner_name = format!("{}-runner.mjs", provider);

        // Try Deno first (faster startup, better perf), fall back to Node.js.
        let has_deno = Command::new("deno")
            .arg("--version")
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .status()
            .is_ok();
        let has_node = Command::new("node")
            .arg("--version")
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .status()
            .is_ok();

        let mut checked = Vec::new();

        for base in &config.search_paths {
            let mjs_path = base.join("dist").join(&runner_name);
            if mjs_path.exists() {
                if has_deno {
                    return Ok(SidecarCommand {
                        program: "deno".to_string(),
                        args: vec![
                            "run".to_string(),
                            "--allow-run".to_string(),
                            "--allow-env".to_string(),
                            "--allow-read".to_string(),
                            "--allow-write".to_string(),
                            "--allow-net".to_string(),
                            mjs_path.to_string_lossy().to_string(),
                        ],
                    });
                }
                if has_node {
                    return Ok(SidecarCommand {
                        program: "node".to_string(),
                        args: vec![mjs_path.to_string_lossy().to_string()],
                    });
                }
            }
            checked.push(mjs_path);
        }

        let paths: Vec<_> = checked.iter().map(|p| p.display().to_string()).collect();
        let runtime_note = if !has_deno && !has_node {
            ". Neither deno nor node found in PATH"
        } else {
            ""
        };
        Err(format!(
            "Sidecar not found for provider '{}'. Checked: {}{}",
            provider,
            paths.join(", "),
            runtime_note,
        ))
    }
}
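Every public method above follows the same request/reply idiom: create a one-shot mpsc channel, send the request carrying the reply sender, then block on `recv()`. A stripped-down standalone version of that idiom (`Request::Ping` is an illustrative stand-in, not one of the real `ProviderRequest` variants):

```rust
use std::sync::mpsc;
use std::thread;

// Toy request enum: each variant carries its own reply channel.
enum Request {
    Ping { reply: mpsc::Sender<String> },
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Request>();

    // Actor thread owns all state exclusively; callers only hold `tx`.
    let actor = thread::spawn(move || {
        while let Ok(req) = rx.recv() {
            match req {
                Request::Ping { reply } => {
                    let _ = reply.send("pong".to_string());
                }
                Request::Shutdown => break,
            }
        }
    });

    // Caller side: one-shot reply channel per call, block on recv.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request::Ping { reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), "pong");

    tx.send(Request::Shutdown).unwrap();
    actor.join().unwrap();
}
```

Because the actor never shares its HashMaps, a failed `send` or `recv` can only mean the actor thread has exited, which is why the methods above map both errors to the single "Sidecar actor stopped" message.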
/// Returns true if the env var should be KEPT (not stripped).
/// First line of defense: strips provider-specific prefixes to prevent nesting detection
/// and credential leakage. JS runners apply a second layer of provider-specific stripping.
///
/// Stripped prefixes: CLAUDE*, CODEX*, OLLAMA*, AIDER*, ANTHROPIC_*
/// Whitelisted: CLAUDE_CODE_EXPERIMENTAL_* (feature flags like agent teams)
///
/// Note: OPENAI_* and OPENROUTER_* are NOT stripped here because runners need
/// these keys from the environment or extraEnv injection.
fn strip_provider_env_var(key: &str) -> bool {
    if key.starts_with("CLAUDE_CODE_EXPERIMENTAL_") {
        return true;
    }
    if key.starts_with("CLAUDE")
        || key.starts_with("CODEX")
        || key.starts_with("OLLAMA")
        || key.starts_with("AIDER")
        || key.starts_with("ANTHROPIC_")
    {
        return false;
    }
    true
}
impl Drop for SidecarManager {
    fn drop(&mut self) {
        // Send shutdown request to the actor. If the channel is already closed
        // (actor thread exited), this is a no-op.
        let (reply_tx, reply_rx) = std_mpsc::channel();
        if self
            .tx
            .send(ProviderRequest::Shutdown { reply: reply_tx })
            .is_ok()
        {
            // Wait briefly for the actor to clean up (with timeout to avoid hanging)
            let _ = reply_rx.recv_timeout(std::time::Duration::from_secs(5));
        }
    }
}
#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::Mutex;

    // ---- strip_provider_env_var unit tests ----

    #[test]
    fn test_keeps_normal_env_vars() {
        assert!(strip_provider_env_var("HOME"));
        assert!(strip_provider_env_var("PATH"));
        assert!(strip_provider_env_var("USER"));
        assert!(strip_provider_env_var("SHELL"));
        assert!(strip_provider_env_var("TERM"));
        assert!(strip_provider_env_var("XDG_DATA_HOME"));
        assert!(strip_provider_env_var("RUST_LOG"));
    }

    #[test]
    fn test_strips_claude_vars() {
        assert!(!strip_provider_env_var("CLAUDE_CONFIG_DIR"));
        assert!(!strip_provider_env_var("CLAUDE_SESSION_ID"));
        assert!(!strip_provider_env_var("CLAUDECODE"));
        assert!(!strip_provider_env_var("CLAUDE_API_KEY"));
    }

    #[test]
    fn test_whitelists_claude_code_experimental() {
        assert!(strip_provider_env_var("CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"));
        assert!(strip_provider_env_var("CLAUDE_CODE_EXPERIMENTAL_TOOLS"));
        assert!(strip_provider_env_var("CLAUDE_CODE_EXPERIMENTAL_SOMETHING_NEW"));
    }

    #[test]
    fn test_strips_codex_vars() {
        assert!(!strip_provider_env_var("CODEX_API_KEY"));
        assert!(!strip_provider_env_var("CODEX_SESSION"));
        assert!(!strip_provider_env_var("CODEX_CONFIG"));
    }

    #[test]
    fn test_strips_ollama_vars() {
        assert!(!strip_provider_env_var("OLLAMA_HOST"));
        assert!(!strip_provider_env_var("OLLAMA_MODELS"));
        assert!(!strip_provider_env_var("OLLAMA_NUM_PARALLEL"));
    }

    #[test]
    fn test_strips_anthropic_vars() {
        // ANTHROPIC_* vars stripped at Rust layer (defense in depth);
        // Claude CLI has its own auth via credentials file
        assert!(!strip_provider_env_var("ANTHROPIC_API_KEY"));
        assert!(!strip_provider_env_var("ANTHROPIC_BASE_URL"));
        assert!(!strip_provider_env_var("ANTHROPIC_LOG"));
    }

    #[test]
    fn test_keeps_openai_vars() {
        // OPENAI_* vars are NOT stripped by the Rust layer
        // (they're stripped in the JS codex-runner layer instead)
        assert!(strip_provider_env_var("OPENAI_API_KEY"));
        assert!(strip_provider_env_var("OPENAI_BASE_URL"));
    }

    #[test]
    fn test_env_filtering_integration() {
        let test_env = vec![
            ("HOME", "/home/user"),
            ("PATH", "/usr/bin"),
            ("CLAUDE_CONFIG_DIR", "/tmp/claude"),
            ("CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS", "1"),
            ("CODEX_API_KEY", "sk-test"),
            ("OLLAMA_HOST", "localhost"),
            ("ANTHROPIC_API_KEY", "sk-ant-xxx"),
            ("OPENAI_API_KEY", "sk-openai-xxx"),
            ("RUST_LOG", "debug"),
            ("BTMSG_AGENT_ID", "a1"),
        ];

        let kept: Vec<&str> = test_env
            .iter()
            .filter(|(k, _)| strip_provider_env_var(k))
            .map(|(k, _)| *k)
            .collect();

        assert!(kept.contains(&"HOME"));
        assert!(kept.contains(&"PATH"));
        assert!(kept.contains(&"CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"));
        assert!(kept.contains(&"RUST_LOG"));
        assert!(kept.contains(&"BTMSG_AGENT_ID"));
        // OPENAI_* passes through Rust layer (Codex runner needs it)
        assert!(kept.contains(&"OPENAI_API_KEY"));
        // These are stripped:
        assert!(!kept.contains(&"CLAUDE_CONFIG_DIR"));
        assert!(!kept.contains(&"CODEX_API_KEY"));
        assert!(!kept.contains(&"OLLAMA_HOST"));
        assert!(!kept.contains(&"ANTHROPIC_API_KEY"));
    }

    // ---- Actor pattern tests ----

    /// Mock EventSink that records emitted events.
    struct MockSink {
        events: Mutex<Vec<(String, serde_json::Value)>>,
    }

    impl MockSink {
        fn new() -> Self {
            Self {
                events: Mutex::new(Vec::new()),
            }
        }
    }

    impl EventSink for MockSink {
        fn emit(&self, event: &str, payload: serde_json::Value) {
            self.events
                .lock()
                .unwrap()
                .push((event.to_string(), payload));
        }
    }

    #[test]
    fn test_actor_new_and_drop() {
        // SidecarManager should create and drop cleanly without panicking
        let sink: Arc<dyn EventSink> = Arc::new(MockSink::new());
        let config = SidecarConfig {
            search_paths: vec![],
            env_overrides: Default::default(),
            sandbox: Default::default(),
        };
        let manager = SidecarManager::new(sink, config);
        // is_ready should return false since no provider started
        assert!(!manager.is_ready());
        // Drop should send shutdown cleanly
        drop(manager);
    }

    #[test]
    fn test_actor_shutdown_idempotent() {
        let sink: Arc<dyn EventSink> = Arc::new(MockSink::new());
        let config = SidecarConfig {
            search_paths: vec![],
            env_overrides: Default::default(),
            sandbox: Default::default(),
        };
        let manager = SidecarManager::new(sink, config);
        // Multiple shutdowns should not panic
        assert!(manager.shutdown().is_ok());
        assert!(manager.shutdown().is_ok());
    }

    #[test]
    fn test_actor_set_sandbox() {
        let sink: Arc<dyn EventSink> = Arc::new(MockSink::new());
        let config = SidecarConfig {
            search_paths: vec![],
            env_overrides: Default::default(),
            sandbox: Default::default(),
        };
        let manager = SidecarManager::new(sink, config);
        // set_sandbox should complete without error
        manager.set_sandbox(SandboxConfig {
            rw_paths: vec![PathBuf::from("/tmp")],
            ro_paths: vec![],
            enabled: true,
        });
    }

    #[test]
    fn test_build_query_msg_fields() {
        let options = AgentQueryOptions {
            provider: "claude".to_string(),
            session_id: "s1".to_string(),
            prompt: "hello".to_string(),
            cwd: Some("/tmp".to_string()),
            max_turns: Some(5),
            max_budget_usd: None,
            resume_session_id: None,
            permission_mode: Some("bypassPermissions".to_string()),
            setting_sources: None,
            system_prompt: None,
            model: Some("claude-4-opus".to_string()),
            claude_config_dir: None,
            additional_directories: None,
            worktree_name: None,
            provider_config: serde_json::Value::Null,
            extra_env: Default::default(),
        };
        let msg = build_query_msg(&options);
        assert_eq!(msg["type"], "query");
        assert_eq!(msg["provider"], "claude");
        assert_eq!(msg["sessionId"], "s1");
|
||||
assert_eq!(msg["prompt"], "hello");
|
||||
assert_eq!(msg["cwd"], "/tmp");
|
||||
assert_eq!(msg["maxTurns"], 5);
|
||||
assert_eq!(msg["model"], "claude-4-opus");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_concurrent_queries_no_race() {
|
||||
// This test verifies that concurrent query() calls from multiple threads
|
||||
// are serialized by the actor and don't cause a TOCTOU race on ensure_provider.
|
||||
// Since we can't actually start a sidecar in tests (no runner scripts),
|
||||
// we verify that the actor handles multiple concurrent requests gracefully
|
||||
// (all get errors, none panic or deadlock).
|
||||
|
||||
let sink: Arc<dyn EventSink> = Arc::new(MockSink::new());
|
||||
let config = SidecarConfig {
|
||||
search_paths: vec![], // No search paths → start_provider will fail
|
||||
env_overrides: Default::default(),
|
||||
sandbox: Default::default(),
|
||||
};
|
||||
let manager = Arc::new(SidecarManager::new(sink, config));
|
||||
|
||||
let mut handles = vec![];
|
||||
let errors = Arc::new(Mutex::new(Vec::new()));
|
||||
|
||||
// Spawn 10 concurrent query() calls
|
||||
for i in 0..10 {
|
||||
let mgr = manager.clone();
|
||||
let errs = errors.clone();
|
||||
handles.push(thread::spawn(move || {
|
||||
let options = AgentQueryOptions {
|
||||
provider: "test-provider".to_string(),
|
||||
session_id: format!("session-{}", i),
|
||||
prompt: "hello".to_string(),
|
||||
cwd: None,
|
||||
max_turns: None,
|
||||
max_budget_usd: None,
|
||||
resume_session_id: None,
|
||||
permission_mode: None,
|
||||
setting_sources: None,
|
||||
system_prompt: None,
|
||||
model: None,
|
||||
claude_config_dir: None,
|
||||
additional_directories: None,
|
||||
worktree_name: None,
|
||||
provider_config: serde_json::Value::Null,
|
||||
extra_env: Default::default(),
|
||||
};
|
||||
let result = mgr.query(&options);
|
||||
if let Err(e) = result {
|
||||
errs.lock().unwrap().push(e);
|
||||
}
|
||||
}));
|
||||
}
|
||||
|
||||
for h in handles {
|
||||
h.join().expect("Thread should not panic");
|
||||
}
|
||||
|
||||
// All 10 should have failed (no sidecar scripts available), but none panicked
|
||||
let errs = errors.lock().unwrap();
|
||||
assert_eq!(errs.len(), 10, "All 10 concurrent queries should get errors");
|
||||
|
||||
// The key invariant: no "Sidecar for 'X' already running" error.
|
||||
// Because the actor serializes requests, the second caller sees the first's
|
||||
// start_provider result (either success or failure), not a conflicting start.
|
||||
// With no search paths, all errors should be "Sidecar not found" style.
|
||||
for err in errs.iter() {
|
||||
assert!(
|
||||
!err.contains("already running"),
|
||||
"Should not get 'already running' error from serialized actor. Got: {err}"
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
684
bterminal-core/src/supervisor.rs
Normal file

@@ -0,0 +1,684 @@
// Sidecar crash recovery and supervision.
// Wraps a SidecarManager with automatic restart, exponential backoff,
// and health status tracking. Emits `sidecar-health-changed` events.

use serde::{Deserialize, Serialize};
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

use crate::event::EventSink;
use crate::sidecar::{AgentQueryOptions, SidecarConfig, SidecarManager};

/// Health status of the supervised sidecar process.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "status", rename_all = "camelCase")]
pub enum SidecarHealth {
    Healthy,
    Degraded {
        restart_count: u32,
    },
    Failed {
        #[serde(default)]
        last_error: String,
    },
}

/// Configuration for supervisor restart behavior.
#[derive(Debug, Clone)]
pub struct SupervisorConfig {
    /// Maximum restart attempts before entering Failed state (default: 5)
    pub max_retries: u32,
    /// Base backoff in milliseconds, doubled each retry (default: 1000, cap: 30000)
    pub backoff_base_ms: u64,
    /// Maximum backoff in milliseconds (default: 30000)
    pub backoff_cap_ms: u64,
    /// Stable operation duration before restart_count resets (default: 5 minutes)
    pub stability_window: Duration,
}

impl Default for SupervisorConfig {
    fn default() -> Self {
        Self {
            max_retries: 5,
            backoff_base_ms: 1000,
            backoff_cap_ms: 30_000,
            stability_window: Duration::from_secs(300),
        }
    }
}

/// Internal state shared between the supervisor and its event interceptor.
struct SupervisorState {
    health: SidecarHealth,
    restart_count: u32,
    last_crash_time: Option<Instant>,
    last_start_time: Option<Instant>,
}

impl SupervisorState {
    fn new() -> Self {
        Self {
            health: SidecarHealth::Healthy,
            restart_count: 0,
            last_crash_time: None,
            last_start_time: None,
        }
    }
}

/// Compute exponential backoff: base_ms * 2^attempt, capped at cap_ms.
fn compute_backoff(base_ms: u64, attempt: u32, cap_ms: u64) -> Duration {
    let backoff = base_ms.saturating_mul(1u64.checked_shl(attempt).unwrap_or(u64::MAX));
    Duration::from_millis(backoff.min(cap_ms))
}
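With the defaults above (base 1000 ms, cap 30 000 ms), the delays come out to 1 s, 2 s, 4 s, 8 s, 16 s, then stay pinned at the 30 s cap. A standalone sketch of that schedule, reusing the same saturating arithmetic (the `backoff` helper here is a local copy for illustration, not the crate's API):

```rust
use std::time::Duration;

// Local copy of the backoff arithmetic: base_ms * 2^attempt, saturating, capped.
fn backoff(base_ms: u64, attempt: u32, cap_ms: u64) -> Duration {
    let ms = base_ms.saturating_mul(1u64.checked_shl(attempt).unwrap_or(u64::MAX));
    Duration::from_millis(ms.min(cap_ms))
}

fn main() {
    // Delay before each of the first six restart attempts (attempt index 0..=5)
    let schedule: Vec<u128> = (0..6)
        .map(|attempt| backoff(1000, attempt, 30_000).as_millis())
        .collect();
    println!("{:?}", schedule); // [1000, 2000, 4000, 8000, 16000, 30000]
}
```

The `checked_shl(..).unwrap_or(u64::MAX)` combined with `saturating_mul` is what keeps absurdly large attempt counts from panicking; they just collapse to the cap.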

/// EventSink wrapper that intercepts `sidecar-exited` events and triggers
/// supervisor restart logic, while forwarding all other events unchanged.
struct SupervisorSink {
    outer_sink: Arc<dyn EventSink>,
    state: Arc<Mutex<SupervisorState>>,
    config: SupervisorConfig,
    sidecar_config: SidecarConfig,
}

impl EventSink for SupervisorSink {
    fn emit(&self, event: &str, payload: serde_json::Value) {
        if event == "sidecar-exited" {
            self.handle_exit();
        } else {
            self.outer_sink.emit(event, payload);
        }
    }
}

impl SupervisorSink {
    fn handle_exit(&self) {
        let (should_restart, backoff, restart_count) = {
            let mut state = self.state.lock().unwrap();

            // If stable operation has elapsed since the last start, reset the counter
            if let Some(start_time) = state.last_start_time {
                if start_time.elapsed() >= self.config.stability_window {
                    log::info!(
                        "Sidecar ran stable for {:?}, resetting restart count",
                        start_time.elapsed()
                    );
                    state.restart_count = 0;
                }
            }

            state.restart_count += 1;
            state.last_crash_time = Some(Instant::now());
            let count = state.restart_count;

            if count > self.config.max_retries {
                let error = format!("Exceeded max retries ({})", self.config.max_retries);
                log::error!("Sidecar supervisor: {}", error);
                state.health = SidecarHealth::Failed {
                    last_error: error.clone(),
                };
                self.emit_health(&state.health);
                // Forward the original exited event so frontend knows
                self.outer_sink
                    .emit("sidecar-exited", serde_json::Value::Null);
                return;
            }

            state.health = SidecarHealth::Degraded {
                restart_count: count,
            };
            self.emit_health(&state.health);

            let backoff = compute_backoff(
                self.config.backoff_base_ms,
                count - 1,
                self.config.backoff_cap_ms,
            );

            (true, backoff, count)
        };

        if !should_restart {
            return;
        }

        log::warn!(
            "Sidecar crashed (attempt {}/{}), restarting in {:?}",
            restart_count,
            self.config.max_retries,
            backoff
        );

        // Restart on a background thread to avoid blocking the stdout reader
        let outer_sink = self.outer_sink.clone();
        let state = self.state.clone();
        let sidecar_config = self.sidecar_config.clone();
        let supervisor_state = self.state.clone();
        let stability_window = self.config.stability_window;
        let max_retries = self.config.max_retries;
        let backoff_base_ms = self.config.backoff_base_ms;
        let backoff_cap_ms = self.config.backoff_cap_ms;

        std::thread::spawn(move || {
            std::thread::sleep(backoff);

            // Create a new SidecarManager that shares our supervisor sink.
            // We need a new interceptor sink to capture the next exit event.
            let new_state = state.clone();
            let new_outer = outer_sink.clone();
            let new_sidecar_config = sidecar_config.clone();

            let interceptor: Arc<dyn EventSink> = Arc::new(SupervisorSink {
                outer_sink: new_outer.clone(),
                state: new_state.clone(),
                config: SupervisorConfig {
                    max_retries,
                    backoff_base_ms,
                    backoff_cap_ms,
                    stability_window,
                },
                sidecar_config: new_sidecar_config.clone(),
            });

            let new_manager = SidecarManager::new(interceptor, new_sidecar_config);
            match new_manager.start() {
                Ok(()) => {
                    let mut s = supervisor_state.lock().unwrap();
                    s.last_start_time = Some(Instant::now());
                    log::info!("Sidecar restarted successfully (attempt {})", restart_count);
                    // Note: we cannot replace the manager reference in the outer
                    // SidecarSupervisor from here. The restart creates a new manager
                    // that handles its own lifecycle. The outer manager reference
                    // becomes stale. This is acceptable because:
                    // 1. The new manager's stdout reader will emit through our sink chain
                    // 2. The old manager's child process is already dead
                    // For a more sophisticated approach, the supervisor would need
                    // interior mutability on the manager reference. We do that below.
                }
                Err(e) => {
                    log::error!("Sidecar restart failed: {}", e);
                    let mut s = supervisor_state.lock().unwrap();
                    s.health = SidecarHealth::Failed {
                        last_error: e.clone(),
                    };
                    // Emit health change + forward exited
                    drop(s);
                    let health = SidecarHealth::Failed { last_error: e };
                    emit_health_event(&new_outer, &health);
                    new_outer.emit("sidecar-exited", serde_json::Value::Null);
                }
            }
        });
    }

    fn emit_health(&self, health: &SidecarHealth) {
        emit_health_event(&self.outer_sink, health);
    }
}

fn emit_health_event(sink: &Arc<dyn EventSink>, health: &SidecarHealth) {
    let payload = serde_json::to_value(health).unwrap_or(serde_json::Value::Null);
    sink.emit("sidecar-health-changed", payload);
}

/// Supervised sidecar process with automatic crash recovery.
///
/// Wraps a `SidecarManager` and intercepts exit events to perform automatic
/// restarts with exponential backoff. Tracks health status and emits
/// `sidecar-health-changed` events.
pub struct SidecarSupervisor {
    manager: Arc<Mutex<SidecarManager>>,
    state: Arc<Mutex<SupervisorState>>,
    outer_sink: Arc<dyn EventSink>,
    #[allow(dead_code)]
    supervisor_config: SupervisorConfig,
    #[allow(dead_code)]
    sidecar_config: SidecarConfig,
}

impl SidecarSupervisor {
    pub fn new(
        sink: Arc<dyn EventSink>,
        sidecar_config: SidecarConfig,
        supervisor_config: SupervisorConfig,
    ) -> Self {
        let state = Arc::new(Mutex::new(SupervisorState::new()));

        let interceptor: Arc<dyn EventSink> = Arc::new(SupervisorSink {
            outer_sink: sink.clone(),
            state: state.clone(),
            config: supervisor_config.clone(),
            sidecar_config: sidecar_config.clone(),
        });

        let manager = SidecarManager::new(interceptor, sidecar_config.clone());

        Self {
            manager: Arc::new(Mutex::new(manager)),
            state,
            outer_sink: sink,
            supervisor_config,
            sidecar_config,
        }
    }

    /// Start the supervised sidecar process.
    pub fn start(&self) -> Result<(), String> {
        let manager = self.manager.lock().unwrap();
        let result = manager.start();
        if result.is_ok() {
            let mut state = self.state.lock().unwrap();
            state.last_start_time = Some(Instant::now());
            state.health = SidecarHealth::Healthy;
        }
        result
    }

    /// Send a raw JSON message to the sidecar.
    pub fn send_message(&self, msg: &serde_json::Value) -> Result<(), String> {
        self.manager.lock().unwrap().send_message(msg)
    }

    /// Send an agent query to the sidecar.
    pub fn query(&self, options: &AgentQueryOptions) -> Result<(), String> {
        self.manager.lock().unwrap().query(options)
    }

    /// Stop a specific agent session.
    pub fn stop_session(&self, session_id: &str) -> Result<(), String> {
        self.manager.lock().unwrap().stop_session(session_id)
    }

    /// Check if the sidecar is ready to accept queries.
    pub fn is_ready(&self) -> bool {
        self.manager.lock().unwrap().is_ready()
    }

    /// Shut down the sidecar process.
    pub fn shutdown(&self) -> Result<(), String> {
        let mut state = self.state.lock().unwrap();
        state.health = SidecarHealth::Healthy;
        state.restart_count = 0;
        drop(state);
        self.manager.lock().unwrap().shutdown()
    }

    /// Get the current health status.
    pub fn health(&self) -> SidecarHealth {
        self.state.lock().unwrap().health.clone()
    }

    /// Get the current restart count.
    pub fn restart_count(&self) -> u32 {
        self.state.lock().unwrap().restart_count
    }

    /// Manually reset the supervisor state (e.g., after user intervention).
    pub fn reset(&self) {
        let mut state = self.state.lock().unwrap();
        state.health = SidecarHealth::Healthy;
        state.restart_count = 0;
        state.last_crash_time = None;
        emit_health_event(&self.outer_sink, &state.health);
    }
}

impl Drop for SidecarSupervisor {
    fn drop(&mut self) {
        let _ = self.shutdown();
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::atomic::{AtomicU32, Ordering};

    // ---- compute_backoff tests ----

    #[test]
    fn test_backoff_base_case() {
        let d = compute_backoff(1000, 0, 30_000);
        assert_eq!(d, Duration::from_millis(1000));
    }

    #[test]
    fn test_backoff_exponential() {
        assert_eq!(compute_backoff(1000, 1, 30_000), Duration::from_millis(2000));
        assert_eq!(compute_backoff(1000, 2, 30_000), Duration::from_millis(4000));
        assert_eq!(compute_backoff(1000, 3, 30_000), Duration::from_millis(8000));
        assert_eq!(compute_backoff(1000, 4, 30_000), Duration::from_millis(16000));
    }

    #[test]
    fn test_backoff_capped() {
        assert_eq!(compute_backoff(1000, 5, 30_000), Duration::from_millis(30_000));
        assert_eq!(compute_backoff(1000, 10, 30_000), Duration::from_millis(30_000));
    }

    #[test]
    fn test_backoff_overflow_safe() {
        // Very large attempt should not panic, just cap
        assert_eq!(compute_backoff(1000, 63, 30_000), Duration::from_millis(30_000));
        assert_eq!(compute_backoff(1000, 100, 30_000), Duration::from_millis(30_000));
    }

    #[test]
    fn test_backoff_custom_base() {
        assert_eq!(compute_backoff(500, 0, 10_000), Duration::from_millis(500));
        assert_eq!(compute_backoff(500, 1, 10_000), Duration::from_millis(1000));
        assert_eq!(compute_backoff(500, 5, 10_000), Duration::from_millis(10_000));
    }

    // ---- SidecarHealth serialization tests ----

    #[test]
    fn test_health_serialize_healthy() {
        let h = SidecarHealth::Healthy;
        let json = serde_json::to_value(&h).unwrap();
        assert_eq!(json["status"], "healthy");
    }

    #[test]
    fn test_health_serialize_degraded() {
        let h = SidecarHealth::Degraded { restart_count: 3 };
        let json = serde_json::to_value(&h).unwrap();
        assert_eq!(json["status"], "degraded");
        assert_eq!(json["restart_count"], 3);
    }

    #[test]
    fn test_health_serialize_failed() {
        let h = SidecarHealth::Failed {
            last_error: "process killed".to_string(),
        };
        let json = serde_json::to_value(&h).unwrap();
        assert_eq!(json["status"], "failed");
        assert_eq!(json["last_error"], "process killed");
    }

    #[test]
    fn test_health_deserialize_roundtrip() {
        let cases = vec![
            SidecarHealth::Healthy,
            SidecarHealth::Degraded { restart_count: 2 },
            SidecarHealth::Failed {
                last_error: "OOM".to_string(),
            },
        ];
        for h in cases {
            let json = serde_json::to_string(&h).unwrap();
            let back: SidecarHealth = serde_json::from_str(&json).unwrap();
            assert_eq!(h, back);
        }
    }

    // ---- SupervisorConfig defaults ----

    #[test]
    fn test_supervisor_config_defaults() {
        let cfg = SupervisorConfig::default();
        assert_eq!(cfg.max_retries, 5);
        assert_eq!(cfg.backoff_base_ms, 1000);
        assert_eq!(cfg.backoff_cap_ms, 30_000);
        assert_eq!(cfg.stability_window, Duration::from_secs(300));
    }

    // ---- SupervisorState tests ----

    #[test]
    fn test_initial_state() {
        let state = SupervisorState::new();
        assert_eq!(state.health, SidecarHealth::Healthy);
        assert_eq!(state.restart_count, 0);
        assert!(state.last_crash_time.is_none());
        assert!(state.last_start_time.is_none());
    }

    // ---- Event interception tests (using mock sink) ----

    /// Mock EventSink that records emitted events.
    struct MockSink {
        events: Mutex<Vec<(String, serde_json::Value)>>,
        exit_count: AtomicU32,
    }

    impl MockSink {
        fn new() -> Self {
            Self {
                events: Mutex::new(Vec::new()),
                exit_count: AtomicU32::new(0),
            }
        }

        fn events(&self) -> Vec<(String, serde_json::Value)> {
            self.events.lock().unwrap().clone()
        }

        fn health_events(&self) -> Vec<SidecarHealth> {
            self.events
                .lock()
                .unwrap()
                .iter()
                .filter(|(name, _)| name == "sidecar-health-changed")
                .filter_map(|(_, payload)| serde_json::from_value(payload.clone()).ok())
                .collect()
        }
    }

    impl EventSink for MockSink {
        fn emit(&self, event: &str, payload: serde_json::Value) {
            if event == "sidecar-exited" {
                self.exit_count.fetch_add(1, Ordering::SeqCst);
            }
            self.events
                .lock()
                .unwrap()
                .push((event.to_string(), payload));
        }
    }

    #[test]
    fn test_non_exit_events_forwarded() {
        let outer = Arc::new(MockSink::new());
        let state = Arc::new(Mutex::new(SupervisorState::new()));
        let sink = SupervisorSink {
            outer_sink: outer.clone(),
            state,
            config: SupervisorConfig::default(),
            sidecar_config: SidecarConfig {
                search_paths: vec![],
                env_overrides: Default::default(),
                sandbox: Default::default(),
            },
        };

        let payload = serde_json::json!({"type": "ready"});
        sink.emit("sidecar-message", payload.clone());

        let events = outer.events();
        assert_eq!(events.len(), 1);
        assert_eq!(events[0].0, "sidecar-message");
        assert_eq!(events[0].1, payload);
    }

    #[test]
    fn test_exit_triggers_degraded_health() {
        let outer = Arc::new(MockSink::new());
        let state = Arc::new(Mutex::new(SupervisorState::new()));
        let sink = SupervisorSink {
            outer_sink: outer.clone(),
            state: state.clone(),
            config: SupervisorConfig {
                max_retries: 5,
                backoff_base_ms: 100,
                backoff_cap_ms: 1000,
                stability_window: Duration::from_secs(300),
            },
            sidecar_config: SidecarConfig {
                search_paths: vec![],
                env_overrides: Default::default(),
                sandbox: Default::default(),
            },
        };

        // Simulate exit
        sink.emit("sidecar-exited", serde_json::Value::Null);

        let s = state.lock().unwrap();
        assert_eq!(s.restart_count, 1);
        assert!(s.last_crash_time.is_some());
        match &s.health {
            SidecarHealth::Degraded { restart_count } => assert_eq!(*restart_count, 1),
            other => panic!("Expected Degraded, got {:?}", other),
        }

        // Should have emitted health-changed event
        let health_events = outer.health_events();
        assert_eq!(health_events.len(), 1);
        assert_eq!(
            health_events[0],
            SidecarHealth::Degraded { restart_count: 1 }
        );
    }

    #[test]
    fn test_exit_exceeding_max_retries_fails() {
        let outer = Arc::new(MockSink::new());
        let state = Arc::new(Mutex::new(SupervisorState {
            health: SidecarHealth::Degraded { restart_count: 5 },
            restart_count: 5,
            last_crash_time: Some(Instant::now()),
            last_start_time: Some(Instant::now()),
        }));

        let sink = SupervisorSink {
            outer_sink: outer.clone(),
            state: state.clone(),
            config: SupervisorConfig {
                max_retries: 5,
                ..SupervisorConfig::default()
            },
            sidecar_config: SidecarConfig {
                search_paths: vec![],
                env_overrides: Default::default(),
                sandbox: Default::default(),
            },
        };

        // This is attempt 6, which exceeds max_retries=5
        sink.emit("sidecar-exited", serde_json::Value::Null);

        let s = state.lock().unwrap();
        assert_eq!(s.restart_count, 6);
        match &s.health {
            SidecarHealth::Failed { last_error } => {
                assert!(last_error.contains("Exceeded max retries"));
            }
            other => panic!("Expected Failed, got {:?}", other),
        }

        // Should have emitted health-changed with Failed + forwarded sidecar-exited
        let events = outer.events();
        let health_changed = events
            .iter()
            .filter(|(name, _)| name == "sidecar-health-changed")
            .count();
        let exited = events
            .iter()
            .filter(|(name, _)| name == "sidecar-exited")
            .count();
        assert_eq!(health_changed, 1);
        assert_eq!(exited, 1); // Forwarded after max retries
    }

    #[test]
    fn test_stability_window_resets_count() {
        let outer = Arc::new(MockSink::new());
        // Simulate: started 6 minutes ago, ran stable
        let state = Arc::new(Mutex::new(SupervisorState {
            health: SidecarHealth::Degraded { restart_count: 3 },
            restart_count: 3,
            last_crash_time: Some(Instant::now() - Duration::from_secs(400)),
            last_start_time: Some(Instant::now() - Duration::from_secs(360)),
        }));

        let sink = SupervisorSink {
            outer_sink: outer.clone(),
            state: state.clone(),
            config: SupervisorConfig {
                max_retries: 5,
                stability_window: Duration::from_secs(300), // 5 min
                backoff_base_ms: 100,
                backoff_cap_ms: 1000,
            },
            sidecar_config: SidecarConfig {
                search_paths: vec![],
                env_overrides: Default::default(),
                sandbox: Default::default(),
            },
        };

        sink.emit("sidecar-exited", serde_json::Value::Null);

        let s = state.lock().unwrap();
        // Count was reset to 0, then incremented to 1
        assert_eq!(s.restart_count, 1);
        match &s.health {
            SidecarHealth::Degraded { restart_count } => assert_eq!(*restart_count, 1),
            other => panic!("Expected Degraded(1), got {:?}", other),
        }
    }

    #[test]
    fn test_multiple_crashes_increment_count() {
        let outer = Arc::new(MockSink::new());
        let state = Arc::new(Mutex::new(SupervisorState::new()));

        let sink = SupervisorSink {
            outer_sink: outer.clone(),
            state: state.clone(),
            config: SupervisorConfig {
                max_retries: 10,
                backoff_base_ms: 100,
                backoff_cap_ms: 1000,
                stability_window: Duration::from_secs(300),
            },
            sidecar_config: SidecarConfig {
                search_paths: vec![],
                env_overrides: Default::default(),
                sandbox: Default::default(),
            },
        };

        for i in 1..=3 {
            sink.emit("sidecar-exited", serde_json::Value::Null);
            let s = state.lock().unwrap();
            assert_eq!(s.restart_count, i);
        }

        let health_events = outer.health_events();
        assert_eq!(health_events.len(), 3);
        assert_eq!(
            health_events[2],
            SidecarHealth::Degraded { restart_count: 3 }
        );
    }

    #[test]
    fn test_health_equality() {
        assert_eq!(SidecarHealth::Healthy, SidecarHealth::Healthy);
        assert_eq!(
            SidecarHealth::Degraded { restart_count: 2 },
            SidecarHealth::Degraded { restart_count: 2 }
        );
        assert_ne!(
            SidecarHealth::Degraded { restart_count: 1 },
            SidecarHealth::Degraded { restart_count: 2 }
        );
        assert_ne!(
            SidecarHealth::Healthy,
            SidecarHealth::Failed {
                last_error: String::new(),
            }
        );
    }
}
24
bterminal-relay/Cargo.toml
Normal file

@@ -0,0 +1,24 @@
[package]
name = "bterminal-relay"
version = "0.1.0"
edition = "2021"
description = "Remote relay server for BTerminal multi-machine support"
license = "MIT"

[[bin]]
name = "bterminal-relay"
path = "src/main.rs"

[dependencies]
bterminal-core = { path = "../bterminal-core" }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
log = "0.4"
env_logger = "0.11"
tokio = { version = "1", features = ["full"] }
tokio-tungstenite = { version = "0.21", features = ["native-tls"] }
tokio-native-tls = "0.3"
native-tls = "0.2"
futures-util = "0.3"
clap = { version = "4", features = ["derive"] }
uuid = { version = "1", features = ["v4"] }
441
bterminal-relay/src/main.rs
Normal file

@@ -0,0 +1,441 @@
// bterminal-relay — WebSocket relay server for remote PTY and agent management

use bterminal_core::event::EventSink;
use bterminal_core::pty::{PtyManager, PtyOptions};
use bterminal_core::sidecar::{AgentQueryOptions, SidecarConfig, SidecarManager};
use clap::Parser;
use futures_util::{SinkExt, StreamExt};
use serde::{Deserialize, Serialize};
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::net::{TcpListener, TcpStream};
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::tungstenite::http;

#[derive(Parser)]
#[command(name = "bterminal-relay", about = "BTerminal remote relay server")]
struct Cli {
    /// Port to listen on
    #[arg(short, long, default_value = "9750")]
    port: u16,

    /// Authentication token (required)
    #[arg(short, long)]
    token: String,

    /// Allow insecure ws:// connections (dev mode only)
    #[arg(long, default_value = "false")]
    insecure: bool,

    /// TLS certificate file (PEM format). Enables wss:// when provided with --tls-key.
    #[arg(long)]
    tls_cert: Option<String>,

    /// TLS private key file (PEM format). Required when --tls-cert is provided.
    #[arg(long)]
    tls_key: Option<String>,

    /// Additional sidecar search paths
    #[arg(long)]
    sidecar_path: Vec<String>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
struct RelayCommand {
    id: String,
    #[serde(rename = "type")]
    type_: String,
    payload: serde_json::Value,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
struct RelayEvent {
    #[serde(rename = "type")]
    type_: String,
    #[serde(rename = "sessionId", skip_serializing_if = "Option::is_none")]
    session_id: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    payload: Option<serde_json::Value>,
}

/// EventSink that sends events as JSON over an mpsc channel (forwarded to WebSocket).
struct WsEventSink {
    tx: mpsc::UnboundedSender<RelayEvent>,
}

impl EventSink for WsEventSink {
    fn emit(&self, event: &str, payload: serde_json::Value) {
        // Parse event name to extract session ID for PTY events like "pty-data-{id}"
        let (type_, session_id) = if let Some(id) = event.strip_prefix("pty-data-") {
            ("pty_data".to_string(), Some(id.to_string()))
        } else if let Some(id) = event.strip_prefix("pty-exit-") {
            ("pty_exit".to_string(), Some(id.to_string()))
        } else {
            (event.replace('-', "_"), None)
        };

        let _ = self.tx.send(RelayEvent {
            type_,
            session_id,
            payload: if payload.is_null() { None } else { Some(payload) },
        });
    }
}
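The prefix handling in `emit` is the naming contract between core event names and relay frame types. A minimal standalone sketch of just that mapping (the session id here is a made-up example):

```rust
// Mirrors WsEventSink's event-name mapping: "pty-data-{id}" becomes type
// "pty_data" plus a sessionId; any other event name just swaps '-' for '_'.
fn map_event(event: &str) -> (String, Option<String>) {
    if let Some(id) = event.strip_prefix("pty-data-") {
        ("pty_data".to_string(), Some(id.to_string()))
    } else if let Some(id) = event.strip_prefix("pty-exit-") {
        ("pty_exit".to_string(), Some(id.to_string()))
    } else {
        (event.replace('-', "_"), None)
    }
}

fn main() {
    assert_eq!(
        map_event("pty-data-abc123"),
        ("pty_data".to_string(), Some("abc123".to_string()))
    );
    assert_eq!(
        map_event("sidecar-health-changed"),
        ("sidecar_health_changed".to_string(), None)
    );
    println!("ok");
}
```

So a `pty-data-abc123` emission would leave the relay as a frame with `"type": "pty_data"` and `"sessionId": "abc123"`, while non-PTY events keep a flat underscored type with no session id.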

/// Build a native-tls TLS acceptor from PEM cert and key files.
fn build_tls_acceptor(
    cert_path: &str,
    key_path: &str,
) -> Result<tokio_native_tls::TlsAcceptor, String> {
    let cert_pem = std::fs::read(cert_path)
        .map_err(|e| format!("Failed to read TLS cert '{}': {}", cert_path, e))?;
    let key_pem = std::fs::read(key_path)
        .map_err(|e| format!("Failed to read TLS key '{}': {}", key_path, e))?;

    let identity = native_tls::Identity::from_pkcs8(&cert_pem, &key_pem)
        .map_err(|e| format!("Failed to parse TLS identity (cert+key): {e}"))?;

    let tls_acceptor = native_tls::TlsAcceptor::builder(identity)
        .min_protocol_version(Some(native_tls::Protocol::Tlsv12))
        .build()
        .map_err(|e| format!("Failed to build TLS acceptor: {e}"))?;

    Ok(tokio_native_tls::TlsAcceptor::from(tls_acceptor))
}

#[tokio::main]
async fn main() {
    env_logger::init();
    let cli = Cli::parse();

    // Validate TLS args
    let tls_acceptor = match (&cli.tls_cert, &cli.tls_key) {
        (Some(cert), Some(key)) => {
            let acceptor = build_tls_acceptor(cert, key).expect("TLS setup failed");
            log::info!("TLS enabled (cert: {cert}, key: {key})");
            Some(Arc::new(acceptor))
        }
        (Some(_), None) | (None, Some(_)) => {
            eprintln!("Error: --tls-cert and --tls-key must both be provided");
            std::process::exit(1);
        }
        (None, None) => {
            if !cli.insecure {
                log::warn!("Running without TLS. Use --tls-cert/--tls-key for encrypted connections, or --insecure to suppress this warning.");
            }
            None
        }
    };

    let addr = SocketAddr::from(([0, 0, 0, 0], cli.port));
    let listener = TcpListener::bind(&addr).await.expect("Failed to bind");
    let protocol = if tls_acceptor.is_some() { "wss" } else { "ws" };
    log::info!("bterminal-relay listening on {protocol}://{addr}");

    // Build sidecar config
    let mut search_paths: Vec<std::path::PathBuf> = cli
        .sidecar_path
        .iter()
        .map(std::path::PathBuf::from)
        .collect();
    // Defaults: look next to the binary, then in the current directory
    if let Ok(exe_dir) = std::env::current_exe().map(|p| p.parent().unwrap().to_path_buf()) {
        search_paths.push(exe_dir.join("sidecar"));
    }
    search_paths.push(std::path::PathBuf::from("sidecar"));

    let sidecar_config = SidecarConfig {
        search_paths,
        env_overrides: std::collections::HashMap::new(),
        sandbox: Default::default(),
    };
    let token = Arc::new(cli.token);

    // Rate-limiting state for auth failures
    let auth_failures: Arc<tokio::sync::Mutex<std::collections::HashMap<SocketAddr, (u32, std::time::Instant)>>> =
        Arc::new(tokio::sync::Mutex::new(std::collections::HashMap::new()));

    while let Ok((stream, peer)) = listener.accept().await {
        let token = token.clone();
        let sidecar_config = sidecar_config.clone();
        let auth_failures = auth_failures.clone();
        let tls = tls_acceptor.clone();

        tokio::spawn(async move {
            // Check the rate limit before doing any handshake work
            {
                let mut failures = auth_failures.lock().await;
                if let Some((count, last)) = failures.get(&peer) {
                    if *count >= 10 && last.elapsed() < std::time::Duration::from_secs(300) {
                        log::warn!("Rate limited: {peer}");
                        return;
                    }
                    // Reset after the cooldown window
                    if last.elapsed() >= std::time::Duration::from_secs(300) {
                        failures.remove(&peer);
                    }
                }
            }

            if let Some(tls_acceptor) = tls {
                // TLS path: wrap the TCP stream with TLS, then upgrade to WebSocket
                match tls_acceptor.accept(stream).await {
                    Ok(tls_stream) => {
                        if let Err(e) = handle_tls_connection(tls_stream, peer, &token, &sidecar_config, &auth_failures).await {
                            log::error!("TLS connection error from {peer}: {e}");
                        }
                    }
                    Err(e) => {
                        log::error!("TLS handshake failed from {peer}: {e}");
                    }
                }
            } else {
                // Plain WebSocket path
                if let Err(e) = handle_connection(stream, peer, &token, &sidecar_config, &auth_failures).await {
                    log::error!("Connection error from {peer}: {e}");
                }
            }
        });
    }
}
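The per-peer back-off above (lock a peer out after 10 failures, with a 300-second cooldown since its last failure) can be sketched as a small Python class. This is illustrative; the class and method names are mine, not from the relay:

```python
import time


class AuthRateLimiter:
    """Sketch of the relay's per-peer auth back-off: max_failures failures lock a
    peer out until cooldown_s seconds have passed since its last failure."""

    def __init__(self, max_failures: int = 10, cooldown_s: float = 300.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.clock = clock  # injectable for testing
        self.failures: dict[str, tuple[int, float]] = {}  # peer -> (count, last_failure)

    def record_failure(self, peer: str) -> None:
        count, _ = self.failures.get(peer, (0, 0.0))
        self.failures[peer] = (count + 1, self.clock())

    def is_limited(self, peer: str) -> bool:
        entry = self.failures.get(peer)
        if entry is None:
            return False
        count, last = entry
        if self.clock() - last >= self.cooldown_s:
            # Cooldown elapsed: forget this peer's history
            del self.failures[peer]
            return False
        return count >= self.max_failures
```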

async fn handle_connection(
    stream: TcpStream,
    peer: SocketAddr,
    expected_token: &str,
    sidecar_config: &SidecarConfig,
    auth_failures: &tokio::sync::Mutex<std::collections::HashMap<SocketAddr, (u32, std::time::Instant)>>,
) -> Result<(), String> {
    let ws_stream = accept_ws_with_auth(stream, expected_token, peer, auth_failures).await?;
    run_ws_session(ws_stream, peer, sidecar_config).await
}

async fn handle_tls_connection(
    stream: tokio_native_tls::TlsStream<TcpStream>,
    peer: SocketAddr,
    expected_token: &str,
    sidecar_config: &SidecarConfig,
    auth_failures: &tokio::sync::Mutex<std::collections::HashMap<SocketAddr, (u32, std::time::Instant)>>,
) -> Result<(), String> {
    let ws_stream = accept_ws_with_auth(stream, expected_token, peer, auth_failures).await?;
    run_ws_session(ws_stream, peer, sidecar_config).await
}

/// Accept a WebSocket connection with Bearer token auth validation.
async fn accept_ws_with_auth<S>(
    stream: S,
    expected_token: &str,
    peer: SocketAddr,
    auth_failures: &tokio::sync::Mutex<std::collections::HashMap<SocketAddr, (u32, std::time::Instant)>>,
) -> Result<tokio_tungstenite::WebSocketStream<S>, String>
where
    S: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin,
{
    let expected = format!("Bearer {expected_token}");
    tokio_tungstenite::accept_hdr_async(stream, |req: &http::Request<()>, response: http::Response<()>| {
        let auth = req.headers().get("authorization").and_then(|v| v.to_str().ok());
        match auth {
            Some(value) if value == expected => Ok(response),
            _ => {
                Err(http::Response::builder()
                    .status(http::StatusCode::UNAUTHORIZED)
                    .body(Some("Invalid token".to_string()))
                    .unwrap())
            }
        }
    })
    .await
    .map_err(|e| {
        let _ = auth_failures.try_lock().map(|mut f| {
            let entry = f.entry(peer).or_insert((0, std::time::Instant::now()));
            entry.0 += 1;
            entry.1 = std::time::Instant::now();
        });
        format!("WebSocket handshake failed: {e}")
    })
}

/// Run the WebSocket session (managers, event forwarding, command processing).
async fn run_ws_session<S>(
    ws_stream: tokio_tungstenite::WebSocketStream<S>,
    peer: SocketAddr,
    sidecar_config: &SidecarConfig,
) -> Result<(), String>
where
    S: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin + Send + 'static,
{
    log::info!("Client connected: {peer}");

    // Set up the event channel — shared between the EventSink and the command response sender
    let (event_tx, mut event_rx) = mpsc::unbounded_channel::<RelayEvent>();
    let sink_tx = event_tx.clone();
    let sink: Arc<dyn EventSink> = Arc::new(WsEventSink { tx: event_tx });

    // Create managers for this connection
    let pty_manager = Arc::new(PtyManager::new(sink.clone()));
    let sidecar_manager = Arc::new(SidecarManager::new(sink, sidecar_config.clone()));

    // Start the sidecar
    if let Err(e) = sidecar_manager.start() {
        log::warn!("Sidecar startup failed for {peer}: {e}");
    }

    let (mut ws_tx, mut ws_rx) = ws_stream.split();

    // Send the ready signal
    let ready_event = RelayEvent {
        type_: "ready".to_string(),
        session_id: None,
        payload: None,
    };
    let _ = ws_tx
        .send(Message::Text(serde_json::to_string(&ready_event).unwrap()))
        .await;

    // Forward events to the WebSocket
    let event_writer = tokio::spawn(async move {
        while let Some(event) = event_rx.recv().await {
            if let Ok(json) = serde_json::to_string(&event) {
                if ws_tx.send(Message::Text(json)).await.is_err() {
                    break;
                }
            }
        }
    });

    // Process incoming commands
    let pty_mgr = pty_manager.clone();
    let sidecar_mgr = sidecar_manager.clone();
    let response_tx = sink_tx;
    let command_reader = tokio::spawn(async move {
        while let Some(msg) = ws_rx.next().await {
            match msg {
                Ok(Message::Text(text)) => {
                    if let Ok(cmd) = serde_json::from_str::<RelayCommand>(&text) {
                        handle_relay_command(&pty_mgr, &sidecar_mgr, &response_tx, cmd).await;
                    }
                }
                Ok(Message::Close(_)) => break,
                Err(e) => {
                    log::error!("WebSocket read error from {peer}: {e}");
                    break;
                }
                _ => {}
            }
        }
    });

    // Wait for either task to finish
    tokio::select! {
        _ = event_writer => {}
        _ = command_reader => {}
    }

    // Cleanup
    let _ = sidecar_manager.shutdown();
    log::info!("Client disconnected: {peer}");

    Ok(())
}

async fn handle_relay_command(
    pty: &PtyManager,
    sidecar: &SidecarManager,
    response_tx: &mpsc::UnboundedSender<RelayEvent>,
    cmd: RelayCommand,
) {
    match cmd.type_.as_str() {
        "ping" => {
            let _ = response_tx.send(RelayEvent {
                type_: "pong".to_string(),
                session_id: None,
                payload: None,
            });
        }
        "pty_create" => {
            let options: PtyOptions = match serde_json::from_value(cmd.payload) {
                Ok(opts) => opts,
                Err(e) => {
                    send_error(response_tx, &cmd.id, &format!("Invalid pty_create payload: {e}"));
                    return;
                }
            };
            match pty.spawn(options) {
                Ok(pty_id) => {
                    log::info!("Spawned remote PTY: {pty_id}");
                    let _ = response_tx.send(RelayEvent {
                        type_: "pty_created".to_string(),
                        session_id: Some(pty_id),
                        payload: Some(serde_json::json!({ "commandId": cmd.id })),
                    });
                }
                Err(e) => send_error(response_tx, &cmd.id, &format!("Failed to spawn PTY: {e}")),
            }
        }
        "pty_write" => {
            if let (Some(id), Some(data)) = (
                cmd.payload.get("id").and_then(|v| v.as_str()),
                cmd.payload.get("data").and_then(|v| v.as_str()),
            ) {
                if let Err(e) = pty.write(id, data) {
                    send_error(response_tx, &cmd.id, &format!("PTY write error: {e}"));
                }
            }
        }
        "pty_resize" => {
            if let (Some(id), Some(cols), Some(rows)) = (
                cmd.payload.get("id").and_then(|v| v.as_str()),
                cmd.payload.get("cols").and_then(|v| v.as_u64()),
                cmd.payload.get("rows").and_then(|v| v.as_u64()),
            ) {
                if let Err(e) = pty.resize(id, cols as u16, rows as u16) {
                    send_error(response_tx, &cmd.id, &format!("PTY resize error: {e}"));
                }
            }
        }
        "pty_close" => {
            if let Some(id) = cmd.payload.get("id").and_then(|v| v.as_str()) {
                if let Err(e) = pty.kill(id) {
                    send_error(response_tx, &cmd.id, &format!("PTY kill error: {e}"));
                }
            }
        }
        "agent_query" => {
            let options: AgentQueryOptions = match serde_json::from_value(cmd.payload) {
                Ok(opts) => opts,
                Err(e) => {
                    send_error(response_tx, &cmd.id, &format!("Invalid agent_query payload: {e}"));
                    return;
                }
            };
            if let Err(e) = sidecar.query(&options) {
                send_error(response_tx, &cmd.id, &format!("Agent query error: {e}"));
            }
        }
        "agent_stop" => {
            if let Some(session_id) = cmd.payload.get("sessionId").and_then(|v| v.as_str()) {
                if let Err(e) = sidecar.stop_session(session_id) {
                    send_error(response_tx, &cmd.id, &format!("Agent stop error: {e}"));
                }
            }
        }
        "sidecar_restart" => {
            if let Err(e) = sidecar.restart() {
                send_error(response_tx, &cmd.id, &format!("Sidecar restart error: {e}"));
            }
        }
        other => {
            log::warn!("Unknown relay command: {other}");
        }
    }
}
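Commands arrive as JSON objects with `id`, `type`, and `payload`, and replies that correlate back (like `pty_created` and `error`) echo the command's `id` as `payload.commandId`. A sketch of client-side helpers for that correlation (names and shapes illustrative, assumed from the handlers above):

```python
import itertools
import json

_ids = itertools.count(1)


def make_command(type_: str, payload=None) -> tuple[str, str]:
    """Build a RelayCommand JSON string plus its correlation id (illustrative client helper)."""
    cmd_id = f"cmd-{next(_ids)}"
    return cmd_id, json.dumps({"id": cmd_id, "type": type_, "payload": payload if payload is not None else {}})


def is_reply_to(event_json: str, cmd_id: str) -> bool:
    """True if a relay event's payload carries the given commandId (as pty_created/error do)."""
    event = json.loads(event_json)
    return (event.get("payload") or {}).get("commandId") == cmd_id
```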

fn send_error(tx: &mpsc::UnboundedSender<RelayEvent>, cmd_id: &str, message: &str) {
    log::error!("{message}");
    let _ = tx.send(RelayEvent {
        type_: "error".to_string(),
        session_id: None,
        payload: Some(serde_json::json!({
            "commandId": cmd_id,
            "message": message,
        })),
    });
}

729
bttask
Executable file

@ -0,0 +1,729 @@
#!/usr/bin/env python3
"""
bttask — Group Task Manager for BTerminal Mission Control.

Hierarchical task management for multi-agent orchestration.
Tasks are stored in SQLite, with role-based visibility.
Agent identity is set via the BTMSG_AGENT_ID environment variable.

Usage: bttask <command> [args]

Commands:
  list [--all]         Show tasks (filtered by role visibility)
  add <title>          Create task (Manager/Architect only)
  assign <id> <agent>  Assign task to agent (Manager only)
  status <id> <state>  Set task status (todo/progress/review/done/blocked)
  comment <id> <text>  Add comment to task
  show <id>            Show task details with comments
  board                Kanban board view
  delete <id>          Delete task (Manager only)
  priorities           Reorder tasks by priority
"""

import sqlite3
import sys
import os
import uuid
from pathlib import Path
from datetime import datetime

DB_PATH = Path.home() / ".local" / "share" / "bterminal" / "btmsg.db"

TASK_STATES = ['todo', 'progress', 'review', 'done', 'blocked']

# Roles that can create tasks
CREATOR_ROLES = {'manager', 'architect'}
# Roles that can assign tasks
ASSIGNER_ROLES = {'manager'}
# Roles that can see the full task list
VIEWER_ROLES = {'manager', 'architect', 'tester'}

# ANSI colors
C_RESET = "\033[0m"
C_BOLD = "\033[1m"
C_DIM = "\033[2m"
C_RED = "\033[31m"
C_GREEN = "\033[32m"
C_YELLOW = "\033[33m"
C_BLUE = "\033[34m"
C_MAGENTA = "\033[35m"
C_CYAN = "\033[36m"
C_WHITE = "\033[37m"

STATE_COLORS = {
    'todo': C_WHITE,
    'progress': C_CYAN,
    'review': C_YELLOW,
    'done': C_GREEN,
    'blocked': C_RED,
}

STATE_ICONS = {
    'todo': '○',
    'progress': '◐',
    'review': '◑',
    'done': '●',
    'blocked': '✗',
}

PRIORITY_COLORS = {
    'critical': C_RED,
    'high': C_YELLOW,
    'medium': C_WHITE,
    'low': C_DIM,
}

def get_db():
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    db = sqlite3.connect(str(DB_PATH))
    db.row_factory = sqlite3.Row
    db.execute("PRAGMA journal_mode=WAL")
    return db


def init_db():
    db = get_db()
    db.executescript("""
        CREATE TABLE IF NOT EXISTS tasks (
            id TEXT PRIMARY KEY,
            title TEXT NOT NULL,
            description TEXT DEFAULT '',
            status TEXT DEFAULT 'todo',
            priority TEXT DEFAULT 'medium',
            assigned_to TEXT,
            created_by TEXT NOT NULL,
            group_id TEXT NOT NULL,
            parent_task_id TEXT,
            sort_order INTEGER DEFAULT 0,
            created_at TEXT DEFAULT (datetime('now')),
            updated_at TEXT DEFAULT (datetime('now')),
            version INTEGER DEFAULT 1,
            FOREIGN KEY (assigned_to) REFERENCES agents(id),
            FOREIGN KEY (created_by) REFERENCES agents(id)
        );

        CREATE TABLE IF NOT EXISTS task_comments (
            id TEXT PRIMARY KEY,
            task_id TEXT NOT NULL,
            agent_id TEXT NOT NULL,
            content TEXT NOT NULL,
            created_at TEXT DEFAULT (datetime('now')),
            FOREIGN KEY (task_id) REFERENCES tasks(id),
            FOREIGN KEY (agent_id) REFERENCES agents(id)
        );

        CREATE INDEX IF NOT EXISTS idx_tasks_group ON tasks(group_id);
        CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status);
        CREATE INDEX IF NOT EXISTS idx_tasks_assigned ON tasks(assigned_to);
        CREATE INDEX IF NOT EXISTS idx_task_comments_task ON task_comments(task_id);
    """)

    # Migration: add the version column if missing (for existing databases)
    cursor = db.execute("PRAGMA table_info(tasks)")
    columns = [row[1] for row in cursor.fetchall()]
    if 'version' not in columns:
        db.execute("ALTER TABLE tasks ADD COLUMN version INTEGER DEFAULT 1")

    db.commit()
    db.close()
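The column-existence check in `init_db` is a common SQLite migration idiom: inspect `PRAGMA table_info`, then `ALTER TABLE` only if the column is missing. A standalone sketch (hypothetical `ensure_column` helper, exercised against an in-memory database):

```python
import sqlite3


def ensure_column(db: sqlite3.Connection, table: str, column: str, ddl: str) -> bool:
    """Add `column` to `table` if it is missing; returns True if a migration ran."""
    cols = [row[1] for row in db.execute(f"PRAGMA table_info({table})")]
    if column in cols:
        return False
    db.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")
    return True
```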


def get_agent_id():
    agent_id = os.environ.get("BTMSG_AGENT_ID")
    if not agent_id:
        print(f"{C_RED}Error: BTMSG_AGENT_ID not set.{C_RESET}")
        sys.exit(1)
    return agent_id


def get_agent(db, agent_id):
    return db.execute("SELECT * FROM agents WHERE id = ?", (agent_id,)).fetchone()


def short_id(task_id):
    return task_id[:8] if task_id else "?"


def format_time(ts_str):
    if not ts_str:
        return "?"
    try:
        dt = datetime.fromisoformat(ts_str)
        return dt.strftime("%m-%d %H:%M")
    except (ValueError, TypeError):
        return ts_str[:16]


def format_state(state):
    icon = STATE_ICONS.get(state, '?')
    color = STATE_COLORS.get(state, C_RESET)
    return f"{color}{icon} {state}{C_RESET}"


def format_priority(priority):
    color = PRIORITY_COLORS.get(priority, C_RESET)
    return f"{color}{priority}{C_RESET}"


def check_role(db, agent_id, allowed_roles, action="do this"):
    agent = get_agent(db, agent_id)
    if not agent:
        print(f"{C_RED}Agent '{agent_id}' not registered.{C_RESET}")
        return None
    if agent['role'] not in allowed_roles:
        print(f"{C_RED}Permission denied: {agent['role']} cannot {action}.{C_RESET}")
        print(f"{C_DIM}Required roles: {', '.join(allowed_roles)}{C_RESET}")
        return None
    return agent


def find_task(db, task_id_prefix, group_id=None):
    """Find task by ID prefix, optionally filtered by group."""
    if group_id:
        return db.execute(
            "SELECT * FROM tasks WHERE id LIKE ? AND group_id = ?",
            (task_id_prefix + "%", group_id)
        ).fetchone()
    return db.execute(
        "SELECT * FROM tasks WHERE id LIKE ?", (task_id_prefix + "%",)
    ).fetchone()


# ─── Commands ────────────────────────────────────────────────

def cmd_list(args):
    """List tasks visible to current agent."""
    agent_id = get_agent_id()
    show_all = "--all" in args
    db = get_db()

    agent = get_agent(db, agent_id)
    if not agent:
        print(f"{C_RED}Agent '{agent_id}' not registered.{C_RESET}")
        db.close()
        return

    # Tier 2 agents cannot see the task list
    if agent['role'] not in VIEWER_ROLES:
        print(f"{C_RED}Access denied: project agents don't see the task list.{C_RESET}")
        print(f"{C_DIM}Tasks are assigned to you via btmsg messages.{C_RESET}")
        db.close()
        return

    group_id = agent['group_id']

    if show_all:
        rows = db.execute(
            "SELECT t.*, a.name as assignee_name FROM tasks t "
            "LEFT JOIN agents a ON t.assigned_to = a.id "
            "WHERE t.group_id = ? ORDER BY t.sort_order, t.created_at",
            (group_id,)
        ).fetchall()
    else:
        rows = db.execute(
            "SELECT t.*, a.name as assignee_name FROM tasks t "
            "LEFT JOIN agents a ON t.assigned_to = a.id "
            "WHERE t.group_id = ? AND t.status != 'done' "
            "ORDER BY t.sort_order, t.created_at",
            (group_id,)
        ).fetchall()

    if not rows:
        print(f"{C_DIM}No tasks.{C_RESET}")
        db.close()
        return

    label = "All tasks" if show_all else "Active tasks"
    print(f"\n{C_BOLD}📋 {label} ({len(rows)}):{C_RESET}\n")

    for row in rows:
        state_str = format_state(row['status'])
        priority_str = format_priority(row['priority'])
        assignee = row['assignee_name'] or f"{C_DIM}unassigned{C_RESET}"
        print(f"  {state_str} [{short_id(row['id'])}] {C_BOLD}{row['title']}{C_RESET}")
        print(f"      {priority_str} → {assignee}  {C_DIM}{format_time(row['updated_at'])}{C_RESET}")

        # Show comment count
        count = db.execute(
            "SELECT COUNT(*) FROM task_comments WHERE task_id = ?", (row['id'],)
        ).fetchone()[0]
        if count > 0:
            print(f"      {C_DIM}💬 {count} comment{'s' if count != 1 else ''}{C_RESET}")
        print()

    db.close()


def cmd_add(args):
    """Create a new task."""
    if not args:
        print(f"{C_RED}Usage: bttask add <title> [--desc TEXT] [--priority critical|high|medium|low] [--assign AGENT] [--parent TASK_ID]{C_RESET}")
        return

    agent_id = get_agent_id()
    db = get_db()

    agent = check_role(db, agent_id, CREATOR_ROLES, "create tasks")
    if not agent:
        db.close()
        return

    # Parse args
    title_parts = []
    description = ""
    priority = "medium"
    assign_to = None
    parent_id = None

    i = 0
    while i < len(args):
        if args[i] == "--desc" and i + 1 < len(args):
            description = args[i + 1]
            i += 2
        elif args[i] == "--priority" and i + 1 < len(args):
            priority = args[i + 1]
            if priority not in PRIORITY_COLORS:
                print(f"{C_RED}Invalid priority: {priority}. Use: critical, high, medium, low{C_RESET}")
                db.close()
                return
            i += 2
        elif args[i] == "--assign" and i + 1 < len(args):
            assign_to = args[i + 1]
            i += 2
        elif args[i] == "--parent" and i + 1 < len(args):
            parent_id = args[i + 1]
            i += 2
        else:
            title_parts.append(args[i])
            i += 1

    title = " ".join(title_parts)
    if not title:
        print(f"{C_RED}Title is required.{C_RESET}")
        db.close()
        return

    # Verify the assignee if specified
    if assign_to:
        assignee = get_agent(db, assign_to)
        if not assignee:
            # Fall back to a prefix match
            row = db.execute("SELECT * FROM agents WHERE id LIKE ?", (assign_to + "%",)).fetchone()
            if row:
                assign_to = row['id']
            else:
                print(f"{C_RED}Agent '{assign_to}' not found.{C_RESET}")
                db.close()
                return

    # Resolve the parent task
    if parent_id:
        parent = find_task(db, parent_id, agent['group_id'])
        if not parent:
            print(f"{C_RED}Parent task '{parent_id}' not found.{C_RESET}")
            db.close()
            return
        parent_id = parent['id']

    # Get the max sort_order
    max_order = db.execute(
        "SELECT COALESCE(MAX(sort_order), 0) FROM tasks WHERE group_id = ?",
        (agent['group_id'],)
    ).fetchone()[0]

    task_id = str(uuid.uuid4())
    db.execute(
        "INSERT INTO tasks (id, title, description, priority, assigned_to, created_by, "
        "group_id, parent_task_id, sort_order) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
        (task_id, title, description, priority, assign_to, agent_id,
         agent['group_id'], parent_id, max_order + 1)
    )
    db.commit()
    db.close()

    print(f"{C_GREEN}✓ Created: {title}{C_RESET} [{short_id(task_id)}]")
    if assign_to:
        print(f"  {C_DIM}Assigned to: {assign_to}{C_RESET}")


def cmd_assign(args):
    """Assign task to an agent."""
    if len(args) < 2:
        print(f"{C_RED}Usage: bttask assign <task-id> <agent-id>{C_RESET}")
        return

    agent_id = get_agent_id()
    db = get_db()

    agent = check_role(db, agent_id, ASSIGNER_ROLES, "assign tasks")
    if not agent:
        db.close()
        return

    task = find_task(db, args[0], agent['group_id'])
    if not task:
        print(f"{C_RED}Task not found.{C_RESET}")
        db.close()
        return

    assignee_id = args[1]
    assignee = get_agent(db, assignee_id)
    if not assignee:
        row = db.execute("SELECT * FROM agents WHERE id LIKE ?", (assignee_id + "%",)).fetchone()
        if row:
            assignee = row
            assignee_id = row['id']
        else:
            print(f"{C_RED}Agent '{assignee_id}' not found.{C_RESET}")
            db.close()
            return

    db.execute(
        "UPDATE tasks SET assigned_to = ?, updated_at = datetime('now') WHERE id = ?",
        (assignee_id, task['id'])
    )
    db.commit()
    db.close()

    print(f"{C_GREEN}✓ Assigned [{short_id(task['id'])}] to {assignee['name']}{C_RESET}")


def cmd_status(args):
    """Change task status."""
    if len(args) < 2:
        print(f"{C_RED}Usage: bttask status <task-id> <{'/'.join(TASK_STATES)}>{C_RESET}")
        return

    agent_id = get_agent_id()
    new_status = args[1]

    if new_status not in TASK_STATES:
        print(f"{C_RED}Invalid status: {new_status}. Use: {', '.join(TASK_STATES)}{C_RESET}")
        return

    db = get_db()
    agent = get_agent(db, agent_id)
    if not agent:
        print(f"{C_RED}Agent '{agent_id}' not registered.{C_RESET}")
        db.close()
        return

    task = find_task(db, args[0], agent['group_id'])
    if not task:
        print(f"{C_RED}Task not found.{C_RESET}")
        db.close()
        return

    # Tier 2 agents can only update tasks assigned to them
    if agent['role'] not in VIEWER_ROLES and task['assigned_to'] != agent_id:
        print(f"{C_RED}Cannot update a task not assigned to you.{C_RESET}")
        db.close()
        return

    old_status = task['status']
    current_version = task['version'] if task['version'] is not None else 1

    # Optimistic locking: the UPDATE only succeeds if the version is unchanged
    cursor = db.execute(
        "UPDATE tasks SET status = ?, version = version + 1, updated_at = datetime('now') "
        "WHERE id = ? AND version = ?",
        (new_status, task['id'], current_version)
    )

    if cursor.rowcount == 0:
        print(f"{C_RED}Error: Task was modified by another agent (version conflict).{C_RESET}")
        print(f"{C_DIM}Re-fetch the task and try again.{C_RESET}")
        db.close()
        sys.exit(1)

    # Auto-add a comment recording the status change
    comment_id = str(uuid.uuid4())
    db.execute(
        "INSERT INTO task_comments (id, task_id, agent_id, content) VALUES (?, ?, ?, ?)",
        (comment_id, task['id'], agent_id, f"Status: {old_status} → {new_status}")
    )

    db.commit()
    db.close()

    new_version = current_version + 1
    print(f"{C_GREEN}✓ [{short_id(task['id'])}] {format_state(old_status)} → {format_state(new_status)} {C_DIM}(v{new_version}){C_RESET}")
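The version-guarded UPDATE in `cmd_status` is the whole optimistic-locking scheme: the write reports `rowcount == 0` when another agent bumped `version` first, and the loser must re-fetch. A standalone sketch of the pattern against an in-memory database (simplified schema, names mine):

```python
import sqlite3


def set_status(db: sqlite3.Connection, task_id: str, new_status: str, expected_version: int) -> bool:
    """Optimistic-lock update: succeeds only if `version` still matches."""
    cur = db.execute(
        "UPDATE tasks SET status = ?, version = version + 1 WHERE id = ? AND version = ?",
        (new_status, task_id, expected_version),
    )
    return cur.rowcount == 1


db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, status TEXT, version INTEGER)")
db.execute("INSERT INTO tasks VALUES ('t1', 'todo', 1)")

ok_first = set_status(db, "t1", "progress", 1)   # wins the race
ok_second = set_status(db, "t1", "review", 1)    # stale version: rejected
```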


def cmd_comment(args):
    """Add comment to a task."""
    if len(args) < 2:
        print(f"{C_RED}Usage: bttask comment <task-id> <text>{C_RESET}")
        return

    agent_id = get_agent_id()
    db = get_db()

    agent = get_agent(db, agent_id)
    if not agent:
        print(f"{C_RED}Agent '{agent_id}' not registered.{C_RESET}")
        db.close()
        return

    task = find_task(db, args[0], agent['group_id'])
    if not task:
        print(f"{C_RED}Task not found.{C_RESET}")
        db.close()
        return

    content = " ".join(args[1:])
    comment_id = str(uuid.uuid4())
    db.execute(
        "INSERT INTO task_comments (id, task_id, agent_id, content) VALUES (?, ?, ?, ?)",
        (comment_id, task['id'], agent_id, content)
    )
    db.execute(
        "UPDATE tasks SET updated_at = datetime('now') WHERE id = ?", (task['id'],)
    )
    db.commit()
    db.close()

    print(f"{C_GREEN}✓ Comment added to [{short_id(task['id'])}]{C_RESET}")


def cmd_show(args):
    """Show task details with comments."""
    if not args:
        print(f"{C_RED}Usage: bttask show <task-id>{C_RESET}")
        return

    agent_id = get_agent_id()
    db = get_db()

    agent = get_agent(db, agent_id)
    if not agent:
        print(f"{C_RED}Agent '{agent_id}' not registered.{C_RESET}")
        db.close()
        return

    task = find_task(db, args[0], agent['group_id'])
    if not task:
        print(f"{C_RED}Task not found.{C_RESET}")
        db.close()
        return

    # Resolve the assignee name
    assignee_name = "unassigned"
    if task['assigned_to']:
        assignee = get_agent(db, task['assigned_to'])
        if assignee:
            assignee_name = assignee['name']

    # Resolve the creator name
    creator = get_agent(db, task['created_by'])
    creator_name = creator['name'] if creator else task['created_by']

    print(f"\n{C_BOLD}{'─' * 60}{C_RESET}")
    print(f"  {format_state(task['status'])} {C_BOLD}{task['title']}{C_RESET}")
    print(f"{C_BOLD}{'─' * 60}{C_RESET}")
    print(f"  {C_DIM}ID:{C_RESET}       {task['id']}")
    print(f"  {C_DIM}Priority:{C_RESET} {format_priority(task['priority'])}")
    print(f"  {C_DIM}Assigned:{C_RESET} {assignee_name}")
    print(f"  {C_DIM}Created:{C_RESET}  {creator_name} @ {format_time(task['created_at'])}")
    print(f"  {C_DIM}Updated:{C_RESET}  {format_time(task['updated_at'])}")

    if task['description']:
        print(f"\n  {task['description']}")

    if task['parent_task_id']:
        parent = find_task(db, task['parent_task_id'])
        if parent:
            print(f"  {C_DIM}Parent:{C_RESET} [{short_id(parent['id'])}] {parent['title']}")

    # Subtasks
    subtasks = db.execute(
        "SELECT * FROM tasks WHERE parent_task_id = ? ORDER BY sort_order",
        (task['id'],)
    ).fetchall()
    if subtasks:
        print(f"\n  {C_BOLD}Subtasks:{C_RESET}")
        for st in subtasks:
            print(f"    {format_state(st['status'])} [{short_id(st['id'])}] {st['title']}")

    # Comments
    comments = db.execute(
        "SELECT c.*, a.name as agent_name, a.role as agent_role "
        "FROM task_comments c JOIN agents a ON c.agent_id = a.id "
        "WHERE c.task_id = ? ORDER BY c.created_at ASC",
        (task['id'],)
    ).fetchall()

    if comments:
        print(f"\n  {C_BOLD}Comments ({len(comments)}):{C_RESET}")
        for c in comments:
            time_str = format_time(c['created_at'])
            print(f"    {C_DIM}{time_str}{C_RESET} {C_BOLD}{c['agent_name']}{C_RESET}: {c['content']}")

    print(f"\n{C_BOLD}{'─' * 60}{C_RESET}\n")
    db.close()


def cmd_board(args):
    """Kanban board view."""
    agent_id = get_agent_id()
    db = get_db()

    agent = get_agent(db, agent_id)
    if not agent:
        print(f"{C_RED}Agent '{agent_id}' not registered.{C_RESET}")
        db.close()
        return

    if agent['role'] not in VIEWER_ROLES:
        print(f"{C_RED}Access denied: project agents don't see the task board.{C_RESET}")
        db.close()
        return

    group_id = agent['group_id']

    # Get all tasks, then group them by status
    all_tasks = db.execute(
        "SELECT t.*, a.name as assignee_name FROM tasks t "
        "LEFT JOIN agents a ON t.assigned_to = a.id "
        "WHERE t.group_id = ? ORDER BY t.sort_order, t.created_at",
        (group_id,)
    ).fetchall()

    columns = {}
    for state in TASK_STATES:
        columns[state] = [t for t in all_tasks if t['status'] == state]

    # Fixed column width for the board layout
    col_width = 20

    # Header
    print(f"\n{C_BOLD}  📋 Task Board{C_RESET}\n")

    # Column headers
    header_line = "  "
    for state in TASK_STATES:
        icon = STATE_ICONS[state]
        color = STATE_COLORS[state]
        count = len(columns[state])
        col_header = f"{color}{icon} {state.upper()} ({count}){C_RESET}"
        header_line += col_header.ljust(col_width + len(color) + len(C_RESET) + 5)
    print(header_line)
    print(f"  {'─' * (col_width * len(TASK_STATES) + 10)}")

    # Find the tallest column
    max_rows = max(len(columns[s]) for s in TASK_STATES) if all_tasks else 0

    for row_idx in range(max_rows):
        line = "  "
        for state in TASK_STATES:
            tasks_in_col = columns[state]
            if row_idx < len(tasks_in_col):
                t = tasks_in_col[row_idx]
                title = t['title'][:col_width - 2]
                priority_c = PRIORITY_COLORS.get(t['priority'], C_RESET)
                cell = f"{priority_c}{short_id(t['id'])}{C_RESET} {title}"
                # Pad to the column width (accounting for color codes)
                visible_len = len(short_id(t['id'])) + 1 + len(title)
                padding = max(0, col_width - visible_len)
                line += cell + " " * padding + "  "
            else:
                line += " " * (col_width + 2)
        print(line)

        # Second line with the assignee
        line2 = "  "
        for state in TASK_STATES:
            tasks_in_col = columns[state]
            if row_idx < len(tasks_in_col):
                t = tasks_in_col[row_idx]
                assignee = (t['assignee_name'] or "unassigned")[:col_width - 2]
                cell = f"{C_DIM} → {assignee}{C_RESET}"
                visible_len = 4 + len(assignee)
                padding = max(0, col_width - visible_len)
                line2 += cell + " " * padding + "  "
            else:
                line2 += " " * (col_width + 2)
        print(line2)
        print()

    if not all_tasks:
        print(f"  {C_DIM}No tasks. Create one: bttask add \"Task title\"{C_RESET}")

    print()
    db.close()
|
||||
|
||||
|
||||
def cmd_delete(args):
|
||||
"""Delete a task."""
|
||||
if not args:
|
||||
print(f"{C_RED}Usage: bttask delete <task-id>{C_RESET}")
|
||||
return
|
||||
|
||||
agent_id = get_agent_id()
|
||||
db = get_db()
|
||||
|
||||
agent = check_role(db, agent_id, ASSIGNER_ROLES, "delete tasks")
|
||||
if not agent:
|
||||
db.close()
|
||||
return
|
||||
|
||||
task = find_task(db, args[0], agent['group_id'])
|
||||
if not task:
|
||||
print(f"{C_RED}Task not found.{C_RESET}")
|
||||
db.close()
|
||||
return
|
||||
|
||||
title = task['title']
|
||||
db.execute("DELETE FROM task_comments WHERE task_id = ?", (task['id'],))
|
||||
db.execute("DELETE FROM tasks WHERE id = ?", (task['id'],))
|
||||
db.commit()
|
||||
db.close()
|
||||
|
||||
print(f"{C_GREEN}✓ Deleted: {title}{C_RESET}")
|
||||
|
||||
|
||||
def cmd_help(args=None):
|
||||
"""Show help."""
|
||||
print(__doc__)
|
||||
|
||||
|
||||
# ─── Main dispatch ───────────────────────────────────────────
|
||||
|
||||
COMMANDS = {
|
||||
'list': cmd_list,
|
||||
'add': cmd_add,
|
||||
'assign': cmd_assign,
|
||||
'status': cmd_status,
|
||||
'comment': cmd_comment,
|
||||
'show': cmd_show,
|
||||
'board': cmd_board,
|
||||
'delete': cmd_delete,
|
||||
'help': cmd_help,
|
||||
'--help': cmd_help,
|
||||
'-h': cmd_help,
|
||||
}
|
||||
|
||||
|
||||
def main():
|
||||
init_db()
|
||||
|
||||
if len(sys.argv) < 2:
|
||||
cmd_help()
|
||||
sys.exit(0)
|
||||
|
||||
command = sys.argv[1]
|
||||
args = sys.argv[2:]
|
||||
|
||||
handler = COMMANDS.get(command)
|
||||
if not handler:
|
||||
print(f"{C_RED}Unknown command: {command}{C_RESET}")
|
||||
cmd_help()
|
||||
sys.exit(1)
|
||||
|
||||
handler(args)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
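The board renderer above pads cells by hand-tracking visible length, since ANSI color codes inflate `len()`. A more robust variant strips escape sequences with a regex before measuring. A minimal sketch of that idea — `visible_len` and `pad_cell` are hypothetical helpers, not part of this diff:

```python
import re

# Matches ANSI SGR escape sequences such as "\x1b[31m" or "\x1b[0m".
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")


def visible_len(s: str) -> int:
    """Length of a string as rendered in a terminal, ignoring color codes."""
    return len(ANSI_RE.sub("", s))


def pad_cell(cell: str, width: int) -> str:
    """Pad to a fixed visible width regardless of embedded color codes."""
    return cell + " " * max(0, width - visible_len(cell))


cell = "\x1b[31mab12\x1b[0m Fix login bug"
assert visible_len(cell) == len("ab12 Fix login bug")
assert visible_len(pad_cell(cell, 20)) == 20
```

This avoids the per-cell `visible_len = len(...) + 1 + len(...)` bookkeeping, at the cost of a regex pass per cell.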
16
index.html
Normal file

@@ -0,0 +1,16 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Agent Orchestrator</title>
    <style>
      html, body { margin: 0; padding: 0; background: #1e1e2e; height: 100%; overflow: hidden; }
      #app { height: 100%; }
    </style>
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/src/main.ts"></script>
  </body>
</html>
9723
package-lock.json
generated
Normal file

64
package.json
Normal file

@@ -0,0 +1,64 @@
{
  "name": "bterminal-v2",
  "private": true,
  "version": "0.1.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "prebuild": "cp node_modules/pdfjs-dist/build/pdf.worker.min.mjs public/pdf.worker.min.mjs",
    "build": "vite build",
    "preview": "vite preview",
    "check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json",
    "tauri": "cargo tauri",
    "tauri:dev": "cargo tauri dev",
    "tauri:build": "cargo tauri build",
    "test": "vitest run",
    "test:cargo": "cd src-tauri && cargo test",
    "test:e2e": "wdio run tests/e2e/wdio.conf.js",
    "test:all": "bash scripts/test-all.sh",
    "test:all:e2e": "bash scripts/test-all.sh --e2e",
    "build:sidecar": "esbuild sidecar/claude-runner.ts --bundle --platform=node --format=esm --outfile=sidecar/dist/claude-runner.mjs && esbuild sidecar/codex-runner.ts --bundle --platform=node --format=esm --outfile=sidecar/dist/codex-runner.mjs && esbuild sidecar/ollama-runner.ts --bundle --platform=node --format=esm --outfile=sidecar/dist/ollama-runner.mjs && esbuild sidecar/aider-runner.ts --bundle --platform=node --format=esm --outfile=sidecar/dist/aider-runner.mjs"
  },
  "devDependencies": {
    "@sveltejs/vite-plugin-svelte": "^6.2.1",
    "@tsconfig/svelte": "^5.0.6",
    "@types/node": "^24.10.1",
    "@wdio/cli": "^9.24.0",
    "@wdio/local-runner": "^9.24.0",
    "@wdio/mocha-framework": "^9.24.0",
    "@wdio/spec-reporter": "^9.24.0",
    "esbuild": "^0.27.4",
    "svelte": "^5.45.2",
    "svelte-check": "^4.3.4",
    "typescript": "~5.9.3",
    "vite": "^7.3.1",
    "vitest": "^4.0.18"
  },
  "dependencies": {
    "@anthropic-ai/claude-agent-sdk": "^0.2.70",
    "@codemirror/lang-cpp": "^6.0.3",
    "@codemirror/lang-css": "^6.3.1",
    "@codemirror/lang-go": "^6.0.1",
    "@codemirror/lang-html": "^6.4.11",
    "@codemirror/lang-java": "^6.0.2",
    "@codemirror/lang-javascript": "^6.2.5",
    "@codemirror/lang-json": "^6.0.2",
    "@codemirror/lang-markdown": "^6.5.0",
    "@codemirror/lang-php": "^6.0.2",
    "@codemirror/lang-python": "^6.2.1",
    "@codemirror/lang-rust": "^6.0.2",
    "@codemirror/lang-sql": "^6.10.0",
    "@codemirror/lang-xml": "^6.1.0",
    "@codemirror/lang-yaml": "^6.1.2",
    "@tauri-apps/api": "^2.10.1",
    "@tauri-apps/plugin-dialog": "^2.6.0",
    "@tauri-apps/plugin-updater": "^2.10.0",
    "@xterm/addon-canvas": "^0.7.0",
    "@xterm/addon-fit": "^0.11.0",
    "@xterm/xterm": "^6.0.0",
    "codemirror": "^6.0.2",
    "marked": "^17.0.4",
    "pdfjs-dist": "^5.5.207",
    "shiki": "^4.0.1"
  }
}
114
scripts/test-all.sh
Executable file

@@ -0,0 +1,114 @@
#!/usr/bin/env bash
# BTerminal — unified test runner
# Usage: ./scripts/test-all.sh [--e2e] [--verbose]
#
# Runs vitest (frontend) + cargo test (backend) by default.
# Pass --e2e to also run WebDriverIO E2E tests (requires built binary).

set -euo pipefail

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
CYAN='\033[0;36m'
BOLD='\033[1m'
RESET='\033[0m'

V2_DIR="$(cd "$(dirname "$0")/.." && pwd)"
RUN_E2E=false
VERBOSE=false
FAILED=()

for arg in "$@"; do
  case "$arg" in
    --e2e) RUN_E2E=true ;;
    --verbose|-v) VERBOSE=true ;;
    --help|-h)
      echo "Usage: $0 [--e2e] [--verbose]"
      echo "  --e2e      Also run WebDriverIO E2E tests (requires built binary)"
      echo "  --verbose  Show full test output instead of summary"
      exit 0
      ;;
    *) echo "Unknown option: $arg"; exit 1 ;;
  esac
done

step() {
  echo -e "\n${CYAN}${BOLD}━━━ $1 ━━━${RESET}"
}

pass() {
  echo -e "${GREEN}✓ $1${RESET}"
}

fail() {
  echo -e "${RED}✗ $1${RESET}"
  FAILED+=("$1")
}

# --- Vitest (frontend) ---
step "Vitest (frontend unit tests)"
if $VERBOSE; then
  (cd "$V2_DIR" && npm run test) && pass "Vitest" || fail "Vitest"
else
  if OUTPUT=$(cd "$V2_DIR" && npm run test 2>&1); then
    SUMMARY=$(echo "$OUTPUT" | grep -E "Tests|Test Files" | tail -2)
    echo "$SUMMARY"
    pass "Vitest"
  else
    echo "$OUTPUT" | tail -20
    fail "Vitest"
  fi
fi

# --- Cargo test (backend) ---
step "Cargo test (Rust backend)"
if $VERBOSE; then
  (cd "$V2_DIR/src-tauri" && cargo test) && pass "Cargo test" || fail "Cargo test"
else
  if OUTPUT=$(cd "$V2_DIR/src-tauri" && cargo test 2>&1); then
    SUMMARY=$(echo "$OUTPUT" | grep -E "test result:|running" | head -5)
    echo "$SUMMARY"
    pass "Cargo test"
  else
    echo "$OUTPUT" | tail -20
    fail "Cargo test"
  fi
fi

# --- E2E (WebDriverIO) ---
if $RUN_E2E; then
  step "E2E tests (WebDriverIO + tauri-driver)"

  # Check for built binary
  BINARY=$(find "$V2_DIR/src-tauri/target" -name "bterminal*" -type f -executable -path "*/release/*" 2>/dev/null | head -1)
  if [ -z "$BINARY" ]; then
    echo -e "${YELLOW}⚠ No release binary found. Run 'npm run tauri build' first.${RESET}"
    fail "E2E (no binary)"
  else
    if $VERBOSE; then
      (cd "$V2_DIR" && npm run test:e2e) && pass "E2E" || fail "E2E"
    else
      if OUTPUT=$(cd "$V2_DIR" && npm run test:e2e 2>&1); then
        SUMMARY=$(echo "$OUTPUT" | grep -E "passing|failing|skipped" | tail -3)
        echo "$SUMMARY"
        pass "E2E"
      else
        echo "$OUTPUT" | tail -30
        fail "E2E"
      fi
    fi
  fi
else
  echo -e "\n${YELLOW}Skipping E2E tests (pass --e2e to include)${RESET}"
fi

# --- Summary ---
echo -e "\n${BOLD}━━━ Summary ━━━${RESET}"
if [ ${#FAILED[@]} -eq 0 ]; then
  echo -e "${GREEN}${BOLD}All test suites passed.${RESET}"
  exit 0
else
  echo -e "${RED}${BOLD}Failed suites: ${FAILED[*]}${RESET}"
  exit 1
fi
209
sidecar/agent-runner-deno.ts
Normal file

@@ -0,0 +1,209 @@
// Agent Runner — Deno sidecar entry point
// Drop-in replacement for agent-runner.ts using Deno APIs
// Uses @anthropic-ai/claude-agent-sdk via npm: specifier
// Run: deno run --allow-run --allow-env --allow-read --allow-write --allow-net agent-runner-deno.ts

import { TextLineStream } from "https://deno.land/std@0.224.0/streams/text_line_stream.ts";
import { query } from "npm:@anthropic-ai/claude-agent-sdk";

const encoder = new TextEncoder();

// Active sessions with abort controllers
const sessions = new Map<string, AbortController>();

function send(msg: Record<string, unknown>) {
  Deno.stdout.writeSync(encoder.encode(JSON.stringify(msg) + "\n"));
}

function log(message: string) {
  Deno.stderr.writeSync(encoder.encode(`[sidecar] ${message}\n`));
}

interface QueryMessage {
  type: "query";
  sessionId: string;
  prompt: string;
  cwd?: string;
  maxTurns?: number;
  maxBudgetUsd?: number;
  resumeSessionId?: string;
  permissionMode?: string;
  settingSources?: string[];
  systemPrompt?: string;
  model?: string;
  claudeConfigDir?: string;
  additionalDirectories?: string[];
}

interface StopMessage {
  type: "stop";
  sessionId: string;
}

function handleMessage(msg: Record<string, unknown>) {
  switch (msg.type) {
    case "ping":
      send({ type: "pong" });
      break;
    case "query":
      handleQuery(msg as unknown as QueryMessage);
      break;
    case "stop":
      handleStop(msg as unknown as StopMessage);
      break;
    default:
      send({ type: "error", message: `Unknown message type: ${msg.type}` });
  }
}

async function handleQuery(msg: QueryMessage) {
  const { sessionId, prompt, cwd, maxTurns, maxBudgetUsd, resumeSessionId, permissionMode, settingSources, systemPrompt, model, claudeConfigDir, additionalDirectories } = msg;

  if (sessions.has(sessionId)) {
    send({ type: "error", sessionId, message: "Session already running" });
    return;
  }

  log(`Starting agent session ${sessionId} via SDK`);

  const controller = new AbortController();

  // Strip CLAUDE* env vars to prevent nesting detection
  const cleanEnv: Record<string, string | undefined> = {};
  for (const [key, value] of Object.entries(Deno.env.toObject())) {
    if (!key.startsWith("CLAUDE")) {
      cleanEnv[key] = value;
    }
  }
  // Override CLAUDE_CONFIG_DIR for multi-account support
  if (claudeConfigDir) {
    cleanEnv["CLAUDE_CONFIG_DIR"] = claudeConfigDir;
  }

  if (!claudePath) {
    send({ type: "agent_error", sessionId, message: "Claude CLI not found. Install Claude Code first." });
    return;
  }

  try {
    const q = query({
      prompt,
      options: {
        pathToClaudeCodeExecutable: claudePath,
        abortController: controller,
        cwd: cwd || Deno.cwd(),
        env: cleanEnv,
        maxTurns: maxTurns ?? undefined,
        maxBudgetUsd: maxBudgetUsd ?? undefined,
        resume: resumeSessionId ?? undefined,
        allowedTools: [
          "Bash", "Read", "Write", "Edit", "Glob", "Grep",
          "WebSearch", "WebFetch", "TodoWrite", "NotebookEdit",
        ],
        permissionMode: (permissionMode ?? "bypassPermissions") as "bypassPermissions" | "default",
        allowDangerouslySkipPermissions: (permissionMode ?? "bypassPermissions") === "bypassPermissions",
        settingSources: settingSources ?? ["user", "project"],
        systemPrompt: systemPrompt
          ? systemPrompt
          : { type: "preset" as const, preset: "claude_code" as const },
        model: model ?? undefined,
        additionalDirectories: additionalDirectories ?? undefined,
      },
    });

    sessions.set(sessionId, controller);
    send({ type: "agent_started", sessionId });

    for await (const message of q) {
      const sdkMsg = message as Record<string, unknown>;
      send({
        type: "agent_event",
        sessionId,
        event: sdkMsg,
      });
    }

    sessions.delete(sessionId);
    send({
      type: "agent_stopped",
      sessionId,
      exitCode: 0,
      signal: null,
    });
  } catch (err: unknown) {
    sessions.delete(sessionId);
    const errMsg = err instanceof Error ? err.message : String(err);

    if (errMsg.includes("aborted") || errMsg.includes("AbortError")) {
      log(`Agent session ${sessionId} aborted`);
      send({
        type: "agent_stopped",
        sessionId,
        exitCode: null,
        signal: "SIGTERM",
      });
    } else {
      log(`Agent session ${sessionId} error: ${errMsg}`);
      send({
        type: "agent_error",
        sessionId,
        message: errMsg,
      });
    }
  }
}

function handleStop(msg: StopMessage) {
  const { sessionId } = msg;
  const controller = sessions.get(sessionId);
  if (!controller) {
    send({ type: "error", sessionId, message: "Session not found" });
    return;
  }

  log(`Stopping agent session ${sessionId}`);
  controller.abort();
}

function findClaudeCli(): string | undefined {
  const home = Deno.env.get("HOME") ?? Deno.env.get("USERPROFILE") ?? "";
  const candidates = [
    `${home}/.local/bin/claude`,
    `${home}/.claude/local/claude`,
    "/usr/local/bin/claude",
    "/usr/bin/claude",
  ];
  for (const p of candidates) {
    try { Deno.statSync(p); return p; } catch { /* not found */ }
  }
  try {
    const proc = new Deno.Command("which", { args: ["claude"], stdout: "piped", stderr: "null" });
    const out = new TextDecoder().decode(proc.outputSync().stdout).trim();
    if (out) return out.split("\n")[0];
  } catch { /* not found */ }
  return undefined;
}

const claudePath = findClaudeCli();
if (claudePath) {
  log(`Found Claude CLI at ${claudePath}`);
} else {
  log("WARNING: Claude CLI not found — agent sessions will fail");
}

// Main: read NDJSON from stdin
log("Sidecar started (Deno)");
send({ type: "ready" });

const lines = Deno.stdin.readable
  .pipeThrough(new TextDecoderStream())
  .pipeThrough(new TextLineStream());

for await (const line of lines) {
  try {
    const msg = JSON.parse(line);
    handleMessage(msg);
  } catch {
    log(`Invalid JSON: ${line}`);
  }
}
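The sidecar speaks newline-delimited JSON over stdio: one JSON object per line, with invalid lines logged and skipped. A minimal Python sketch of that framing discipline — `encode` and `decode_lines` are hypothetical helpers illustrating the wire format, not code from this diff:

```python
import json


def encode(msg: dict) -> bytes:
    """One JSON object per line — the sidecar's NDJSON wire format."""
    return (json.dumps(msg) + "\n").encode()


def decode_lines(chunk: bytes) -> list[dict]:
    """Split a stdout chunk into parsed NDJSON messages, skipping bad lines."""
    out = []
    for line in chunk.split(b"\n"):
        if not line.strip():
            continue
        try:
            out.append(json.loads(line))
        except json.JSONDecodeError:
            pass  # mirror the sidecar: log-and-skip invalid JSON
    return out


chunk = encode({"type": "ready"}) + encode({"type": "pong"})
assert [m["type"] for m in decode_lines(chunk)] == ["ready", "pong"]
```

Skipping malformed lines rather than aborting matters here: a partially flushed stdout buffer must not kill the session stream.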
731
sidecar/aider-parser.test.ts
Normal file

@@ -0,0 +1,731 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
  looksLikePrompt,
  shouldSuppress,
  parseTurnOutput,
  extractSessionCost,
  prefetchContext,
  execShell,
  PROMPT_RE,
  SUPPRESS_RE,
  SHELL_CMD_RE,
} from './aider-parser';

// ---------------------------------------------------------------------------
// Fixtures — realistic Aider output samples used as format-drift canaries
// ---------------------------------------------------------------------------

const FIXTURE_STARTUP = [
  'Aider v0.72.1',
  'Main model: openrouter/anthropic/claude-sonnet-4 with diff edit format',
  'Weak model: openrouter/anthropic/claude-haiku-4',
  'Git repo: none',
  'Repo-map: disabled',
  'Use /help to see in-chat commands, run with --help to see cmd line args',
  '> ',
].join('\n');

const FIXTURE_SIMPLE_ANSWER = [
  '► THINKING',
  'The user wants me to check the task board.',
  '► ANSWER',
  'I will check the task board for you.',
  'bttask board',
  'Tokens: 1234 sent, 56 received. Cost: $0.0023 message, $0.0045 session',
  '> ',
].join('\n');

const FIXTURE_CODE_BLOCK_SHELL = [
  'Here is the command to send a message:',
  '```bash',
  '$ btmsg send manager-001 "Task complete"',
  '```',
  'Tokens: 800 sent, 40 received. Cost: $0.0010 message, $0.0021 session',
  'aider> ',
].join('\n');

const FIXTURE_MIXED_BLOCKS = [
  '► THINKING',
  'I need to check inbox then update the task.',
  '► ANSWER',
  'Let me check your inbox first.',
  'btmsg inbox',
  'Now updating the task status.',
  '```bash',
  'bttask status task-42 done',
  '```',
  'All done!',
  'Tokens: 2000 sent, 120 received. Cost: $0.0040 message, $0.0080 session',
  'my-repo> ',
].join('\n');

const FIXTURE_APPLIED_EDIT_NOISE = [
  'I will edit the file.',
  'Applied edit to src/main.ts',
  'Fix any errors below',
  'Running: flake8 src/main.ts',
  'The edit is complete.',
  'Tokens: 500 sent, 30 received. Cost: $0.0005 message, $0.0010 session',
  '> ',
].join('\n');

const FIXTURE_DOLLAR_PREFIX_SHELL = [
  'Run this command:',
  '$ git status',
  'After that, commit your changes.',
  '> ',
].join('\n');

const FIXTURE_RUNNING_PREFIX_SHELL = [
  'Running git log --oneline -5',
  'Tokens: 300 sent, 20 received. Cost: $0.0003 message, $0.0006 session',
  '> ',
].join('\n');

const FIXTURE_NO_COST = [
  '► THINKING',
  'Checking the situation.',
  '► ANSWER',
  'Nothing to do right now.',
  '> ',
].join('\n');

// ---------------------------------------------------------------------------
// looksLikePrompt
// ---------------------------------------------------------------------------

describe('looksLikePrompt', () => {
  it('detects bare "> " prompt', () => {
    expect(looksLikePrompt('> ')).toBe(true);
  });

  it('detects "aider> " prompt', () => {
    expect(looksLikePrompt('aider> ')).toBe(true);
  });

  it('detects repo-named prompt like "my-repo> "', () => {
    expect(looksLikePrompt('my-repo> ')).toBe(true);
  });

  it('detects prompt after multi-line output', () => {
    const buffer = 'Some output line\nAnother line\naider> ';
    expect(looksLikePrompt(buffer)).toBe(true);
  });

  it('detects prompt when trailing blank lines follow', () => {
    const buffer = 'aider> \n\n';
    expect(looksLikePrompt(buffer)).toBe(true);
  });

  it('returns false for a full sentence ending in > but not a prompt', () => {
    expect(looksLikePrompt('This is greater than> something')).toBe(false);
  });

  it('returns false for empty string', () => {
    expect(looksLikePrompt('')).toBe(false);
  });

  it('returns false for string with only blank lines', () => {
    expect(looksLikePrompt('\n\n\n')).toBe(false);
  });

  it('returns false for plain text with no prompt', () => {
    expect(looksLikePrompt('I have analyzed the task and will now proceed.')).toBe(false);
  });

  it('handles dotted repo names like "my.project> "', () => {
    expect(looksLikePrompt('my.project> ')).toBe(true);
  });

  it('detects prompt in full startup fixture', () => {
    expect(looksLikePrompt(FIXTURE_STARTUP)).toBe(true);
  });
});

// ---------------------------------------------------------------------------
// shouldSuppress
// ---------------------------------------------------------------------------

describe('shouldSuppress', () => {
  it('suppresses empty string', () => {
    expect(shouldSuppress('')).toBe(true);
  });

  it('suppresses whitespace-only string', () => {
    expect(shouldSuppress('   ')).toBe(true);
  });

  it('suppresses Aider version line', () => {
    expect(shouldSuppress('Aider v0.72.1')).toBe(true);
  });

  it('suppresses "Main model:" line', () => {
    expect(shouldSuppress('Main model: claude-sonnet-4 with diff format')).toBe(true);
  });

  it('suppresses "Weak model:" line', () => {
    expect(shouldSuppress('Weak model: claude-haiku-4')).toBe(true);
  });

  it('suppresses "Git repo:" line', () => {
    expect(shouldSuppress('Git repo: none')).toBe(true);
  });

  it('suppresses "Repo-map:" line', () => {
    expect(shouldSuppress('Repo-map: disabled')).toBe(true);
  });

  it('suppresses "Use /help" line', () => {
    expect(shouldSuppress('Use /help to see in-chat commands, run with --help to see cmd line args')).toBe(true);
  });

  it('does not suppress regular answer text', () => {
    expect(shouldSuppress('I will check the task board for you.')).toBe(false);
  });

  it('does not suppress a shell command line', () => {
    expect(shouldSuppress('bttask board')).toBe(false);
  });

  it('does not suppress a cost line', () => {
    expect(shouldSuppress('Tokens: 1234 sent, 56 received. Cost: $0.0023 message, $0.0045 session')).toBe(false);
  });

  it('strips leading/trailing whitespace before testing', () => {
    expect(shouldSuppress('  Aider v0.70.0  ')).toBe(true);
  });
});

// ---------------------------------------------------------------------------
// parseTurnOutput — thinking blocks
// ---------------------------------------------------------------------------

describe('parseTurnOutput — thinking blocks', () => {
  it('extracts a thinking block using ► THINKING / ► ANSWER markers', () => {
    const blocks = parseTurnOutput(FIXTURE_SIMPLE_ANSWER);
    const thinking = blocks.filter(b => b.type === 'thinking');
    expect(thinking).toHaveLength(1);
    expect(thinking[0].content).toContain('check the task board');
  });

  it('extracts thinking with ▶ arrow variant', () => {
    const buffer = '▶ THINKING\nSome reasoning here.\n▶ ANSWER\nHere is the answer.\n> ';
    const blocks = parseTurnOutput(buffer);
    expect(blocks[0].type).toBe('thinking');
    expect(blocks[0].content).toContain('Some reasoning here.');
  });

  it('extracts thinking with > arrow variant', () => {
    const buffer = '> THINKING\nDeep thoughts.\n> ANSWER\nFinal answer.\n> ';
    const blocks = parseTurnOutput(buffer);
    const thinking = blocks.filter(b => b.type === 'thinking');
    expect(thinking).toHaveLength(1);
    expect(thinking[0].content).toContain('Deep thoughts.');
  });

  it('handles missing ANSWER marker — flushes thinking at end', () => {
    const buffer = '► THINKING\nIncomplete thinking block.\n> ';
    const blocks = parseTurnOutput(buffer);
    const thinking = blocks.filter(b => b.type === 'thinking');
    expect(thinking).toHaveLength(1);
    expect(thinking[0].content).toContain('Incomplete thinking block.');
  });

  it('produces no thinking block when no THINKING marker present', () => {
    const buffer = 'Just plain text.\n> ';
    const blocks = parseTurnOutput(buffer);
    expect(blocks.filter(b => b.type === 'thinking')).toHaveLength(0);
  });
});

// ---------------------------------------------------------------------------
// parseTurnOutput — text blocks
// ---------------------------------------------------------------------------

describe('parseTurnOutput — text blocks', () => {
  it('extracts text after ANSWER marker', () => {
    const blocks = parseTurnOutput(FIXTURE_SIMPLE_ANSWER);
    const texts = blocks.filter(b => b.type === 'text');
    expect(texts.length).toBeGreaterThan(0);
    expect(texts[0].content).toContain('I will check the task board');
  });

  it('trims trailing whitespace from flushed text block', () => {
    // No prompt line in this buffer, so everything accumulates into
    // answerLines and the final flush trims the content via .trim().
    const buffer = 'Some text with trailing space.   ';
    const blocks = parseTurnOutput(buffer);
    const texts = blocks.filter(b => b.type === 'text');
    expect(texts[0].content).toBe('Some text with trailing space.');
  });

  it('does not produce a text block from suppressed startup lines alone', () => {
    // Both startup lines match SUPPRESS_RE, so nothing should reach a text block.
    const buffer = [
      'Aider v0.72.1',
      'Main model: some-model',
    ].join('\n');
    const blocks = parseTurnOutput(buffer);
    expect(blocks.filter(b => b.type === 'text')).toHaveLength(0);
  });

  it('suppresses Applied edit / flake8 / Running: lines in answer text', () => {
    const blocks = parseTurnOutput(FIXTURE_APPLIED_EDIT_NOISE);
    const texts = blocks.filter(b => b.type === 'text');
    const combined = texts.map(b => b.content).join(' ');
    expect(combined).not.toContain('Applied edit');
    expect(combined).not.toContain('Fix any errors');
    expect(combined).not.toContain('Running:');
  });

  it('preserves non-suppressed text around noise lines', () => {
    const blocks = parseTurnOutput(FIXTURE_APPLIED_EDIT_NOISE);
    const texts = blocks.filter(b => b.type === 'text');
    const combined = texts.map(b => b.content).join(' ');
    expect(combined).toContain('I will edit the file');
    expect(combined).toContain('The edit is complete');
  });
});

// ---------------------------------------------------------------------------
// parseTurnOutput — shell blocks
// ---------------------------------------------------------------------------

describe('parseTurnOutput — shell blocks from code blocks', () => {
  it('extracts btmsg command from ```bash block', () => {
    const blocks = parseTurnOutput(FIXTURE_CODE_BLOCK_SHELL);
    const shells = blocks.filter(b => b.type === 'shell');
    expect(shells).toHaveLength(1);
    expect(shells[0].content).toBe('btmsg send manager-001 "Task complete"');
  });

  it('strips leading "$ " from commands inside code block', () => {
    const buffer = '```bash\n$ btmsg inbox\n```\n> ';
    const blocks = parseTurnOutput(buffer);
    const shells = blocks.filter(b => b.type === 'shell');
    expect(shells[0].content).toBe('btmsg inbox');
  });

  it('extracts commands from ```shell block', () => {
    const buffer = '```shell\nbttask board\n```\n> ';
    const blocks = parseTurnOutput(buffer);
    expect(blocks.filter(b => b.type === 'shell')).toHaveLength(1);
    expect(blocks.find(b => b.type === 'shell')!.content).toBe('bttask board');
  });

  it('extracts commands from plain ``` block (no language tag)', () => {
    const buffer = '```\nbtmsg inbox\n```\n> ';
    const blocks = parseTurnOutput(buffer);
    expect(blocks.filter(b => b.type === 'shell')).toHaveLength(1);
  });

  it('does not extract non-shell-command lines from code blocks', () => {
    const buffer = '```bash\nsome arbitrary text without a known prefix\n```\n> ';
    const blocks = parseTurnOutput(buffer);
    expect(blocks.filter(b => b.type === 'shell')).toHaveLength(0);
  });

  it('does not extract commands from ```python blocks', () => {
    const buffer = '```python\nbtmsg send something "hello"\n```\n> ';
    const blocks = parseTurnOutput(buffer);
    // Python blocks should not be treated as shell commands
    expect(blocks.filter(b => b.type === 'shell')).toHaveLength(0);
  });
});

describe('parseTurnOutput — shell blocks from inline prefixes', () => {
  it('detects "$ " prefix shell command', () => {
    const blocks = parseTurnOutput(FIXTURE_DOLLAR_PREFIX_SHELL);
    const shells = blocks.filter(b => b.type === 'shell');
    expect(shells).toHaveLength(1);
    expect(shells[0].content).toBe('git status');
  });

  it('detects "Running " prefix shell command', () => {
    const blocks = parseTurnOutput(FIXTURE_RUNNING_PREFIX_SHELL);
    const shells = blocks.filter(b => b.type === 'shell');
    expect(shells).toHaveLength(1);
    expect(shells[0].content).toBe('git log --oneline -5');
  });

  it('detects bare btmsg/bttask commands in ANSWER section', () => {
    const blocks = parseTurnOutput(FIXTURE_SIMPLE_ANSWER);
    const shells = blocks.filter(b => b.type === 'shell');
    expect(shells.some(s => s.content === 'bttask board')).toBe(true);
  });

  it('does not extract bare commands from THINKING section', () => {
    const buffer = '► THINKING\nbtmsg inbox\n► ANSWER\nDone.\n> ';
    const blocks = parseTurnOutput(buffer);
    // btmsg inbox in thinking section should be accumulated as thinking, not shell
    expect(blocks.filter(b => b.type === 'shell')).toHaveLength(0);
  });

  it('flushes preceding text block before a shell block', () => {
    const blocks = parseTurnOutput(FIXTURE_DOLLAR_PREFIX_SHELL);
    const textIdx = blocks.findIndex(b => b.type === 'text');
    const shellIdx = blocks.findIndex(b => b.type === 'shell');
    expect(textIdx).toBeGreaterThanOrEqual(0);
    expect(shellIdx).toBeGreaterThan(textIdx);
  });
});

// ---------------------------------------------------------------------------
// parseTurnOutput — cost blocks
// ---------------------------------------------------------------------------

describe('parseTurnOutput — cost blocks', () => {
  it('extracts cost line as a cost block', () => {
    const blocks = parseTurnOutput(FIXTURE_SIMPLE_ANSWER);
    const costs = blocks.filter(b => b.type === 'cost');
    expect(costs).toHaveLength(1);
    expect(costs[0].content).toContain('Cost:');
  });

  it('preserves the full cost line as content', () => {
    const costLine = 'Tokens: 1234 sent, 56 received. Cost: $0.0023 message, $0.0045 session';
    const buffer = `Some text.\n${costLine}\n> `;
    const blocks = parseTurnOutput(buffer);
    const cost = blocks.find(b => b.type === 'cost');
    expect(cost?.content).toBe(costLine);
  });
||||
|
||||
it('produces no cost block when no cost line present', () => {
|
||||
const blocks = parseTurnOutput(FIXTURE_NO_COST);
|
||||
expect(blocks.filter(b => b.type === 'cost')).toHaveLength(0);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// parseTurnOutput — mixed turn (thinking + text + shell + cost)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('parseTurnOutput — mixed blocks', () => {
|
||||
it('produces all four block types from a mixed turn', () => {
|
||||
const blocks = parseTurnOutput(FIXTURE_MIXED_BLOCKS);
|
||||
const types = blocks.map(b => b.type);
|
||||
expect(types).toContain('thinking');
|
||||
expect(types).toContain('text');
|
||||
expect(types).toContain('shell');
|
||||
expect(types).toContain('cost');
|
||||
});
|
||||
|
||||
it('preserves block order: thinking → text → shell → text → cost', () => {
|
||||
const blocks = parseTurnOutput(FIXTURE_MIXED_BLOCKS);
|
||||
expect(blocks[0].type).toBe('thinking');
|
||||
// At least one shell block present
|
||||
const shellIdx = blocks.findIndex(b => b.type === 'shell');
|
||||
expect(shellIdx).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
it('extracts both btmsg and bttask shell commands from mixed turn', () => {
|
||||
const blocks = parseTurnOutput(FIXTURE_MIXED_BLOCKS);
|
||||
const shells = blocks.filter(b => b.type === 'shell').map(b => b.content);
|
||||
expect(shells).toContain('btmsg inbox');
|
||||
expect(shells).toContain('bttask status task-42 done');
|
||||
});
|
||||
|
||||
it('returns empty array for empty buffer', () => {
|
||||
expect(parseTurnOutput('')).toEqual([]);
|
||||
});
|
||||
|
||||
it('returns empty array for buffer with only suppressed lines', () => {
|
||||
// All Aider startup noise is covered by SUPPRESS_RE.
|
||||
// A buffer of only suppressed lines produces no output blocks.
|
||||
const buffer = [
|
||||
'Aider v0.72.1',
|
||||
'Main model: claude-sonnet-4',
|
||||
].join('\n');
|
||||
expect(parseTurnOutput(buffer)).toEqual([]);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// extractSessionCost
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('extractSessionCost', () => {
|
||||
it('extracts session cost from a cost line', () => {
|
||||
const buffer = 'Tokens: 1234 sent, 56 received. Cost: $0.0023 message, $0.0045 session\n> ';
|
||||
expect(extractSessionCost(buffer)).toBeCloseTo(0.0045);
|
||||
});
|
||||
|
||||
it('returns 0 when no cost line present', () => {
|
||||
expect(extractSessionCost('Some answer without cost.\n> ')).toBe(0);
|
||||
});
|
||||
|
||||
it('correctly picks session cost (second dollar amount), not message cost (first)', () => {
|
||||
const buffer = 'Cost: $0.0100 message, $0.0250 session';
|
||||
expect(extractSessionCost(buffer)).toBeCloseTo(0.0250);
|
||||
});
|
||||
|
||||
it('handles zero cost values', () => {
|
||||
expect(extractSessionCost('Cost: $0.0000 message, $0.0000 session')).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// prefetchContext — mocked child_process
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('prefetchContext', () => {
|
||||
beforeEach(() => {
|
||||
vi.mock('child_process', () => ({
|
||||
execSync: vi.fn(),
|
||||
}));
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
vi.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('returns inbox and board sections when both CLIs succeed', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
const mockExecSync = vi.mocked(execSync);
|
||||
mockExecSync
|
||||
.mockReturnValueOnce('Message from manager-001: fix bug' as never)
|
||||
.mockReturnValueOnce('task-1 | In Progress | Fix login bug' as never);
|
||||
|
||||
const result = prefetchContext({ BTMSG_AGENT_ID: 'agent-001' }, '/tmp');
|
||||
|
||||
expect(result).toContain('## Your Inbox');
|
||||
expect(result).toContain('Message from manager-001');
|
||||
expect(result).toContain('## Task Board');
|
||||
expect(result).toContain('task-1');
|
||||
});
|
||||
|
||||
it('falls back to "No messages" when btmsg unavailable', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
const mockExecSync = vi.mocked(execSync);
|
||||
mockExecSync
|
||||
.mockImplementationOnce(() => { throw new Error('command not found'); })
|
||||
.mockReturnValueOnce('task-1 | todo' as never);
|
||||
|
||||
const result = prefetchContext({}, '/tmp');
|
||||
|
||||
expect(result).toContain('No messages (or btmsg unavailable).');
|
||||
expect(result).toContain('## Task Board');
|
||||
});
|
||||
|
||||
it('falls back to "No tasks" when bttask unavailable', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
const mockExecSync = vi.mocked(execSync);
|
||||
mockExecSync
|
||||
.mockReturnValueOnce('inbox message' as never)
|
||||
.mockImplementationOnce(() => { throw new Error('command not found'); });
|
||||
|
||||
const result = prefetchContext({}, '/tmp');
|
||||
|
||||
expect(result).toContain('## Your Inbox');
|
||||
expect(result).toContain('No tasks (or bttask unavailable).');
|
||||
});
|
||||
|
||||
it('falls back for both when both CLIs unavailable', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
const mockExecSync = vi.mocked(execSync);
|
||||
mockExecSync.mockImplementation(() => { throw new Error('not found'); });
|
||||
|
||||
const result = prefetchContext({}, '/tmp');
|
||||
|
||||
expect(result).toContain('No messages (or btmsg unavailable).');
|
||||
expect(result).toContain('No tasks (or bttask unavailable).');
|
||||
});
|
||||
|
||||
it('wraps inbox content in fenced code block', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
const mockExecSync = vi.mocked(execSync);
|
||||
mockExecSync
|
||||
.mockReturnValueOnce('inbox line 1\ninbox line 2' as never)
|
||||
.mockReturnValueOnce('' as never);
|
||||
|
||||
const result = prefetchContext({}, '/tmp');
|
||||
|
||||
expect(result).toMatch(/```\ninbox line 1\ninbox line 2\n```/);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// execShell — mocked child_process
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('execShell', () => {
|
||||
beforeEach(() => {
|
||||
vi.mock('child_process', () => ({
|
||||
execSync: vi.fn(),
|
||||
}));
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
vi.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('returns trimmed stdout and exitCode 0 on success', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
vi.mocked(execSync).mockReturnValue('hello world\n' as never);
|
||||
|
||||
const result = execShell('echo hello world', {}, '/tmp');
|
||||
|
||||
expect(result.exitCode).toBe(0);
|
||||
expect(result.stdout).toBe('hello world');
|
||||
});
|
||||
|
||||
it('returns stderr content and non-zero exitCode on failure', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
vi.mocked(execSync).mockImplementation(() => {
|
||||
const err = Object.assign(new Error('Command failed'), {
|
||||
stderr: 'No such file or directory',
|
||||
status: 127,
|
||||
});
|
||||
throw err;
|
||||
});
|
||||
|
||||
const result = execShell('missing-cmd', {}, '/tmp');
|
||||
|
||||
expect(result.exitCode).toBe(127);
|
||||
expect(result.stdout).toContain('No such file or directory');
|
||||
});
|
||||
|
||||
it('falls back to stdout field on error if stderr is empty', async () => {
|
||||
const { execSync } = await import('child_process');
|
||||
vi.mocked(execSync).mockImplementation(() => {
|
||||
const err = Object.assign(new Error('fail'), {
|
||||
stdout: 'partial output',
|
||||
stderr: '',
|
||||
status: 1,
|
||||
});
|
||||
throw err;
|
||||
});
|
||||
|
||||
const result = execShell('cmd', {}, '/tmp');
|
||||
|
||||
expect(result.stdout).toBe('partial output');
|
||||
expect(result.exitCode).toBe(1);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Format-drift canary — realistic Aider output samples
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('format-drift canary', () => {
|
||||
it('correctly parses a full realistic turn with thinking, commands, and cost', () => {
|
||||
// Represents what aider actually outputs in practice with --no-stream --no-pretty
|
||||
const realisticOutput = [
|
||||
'► THINKING',
|
||||
'The user needs me to check the inbox and act on any pending tasks.',
|
||||
'I should run btmsg inbox to see messages, then bttask board to see tasks.',
|
||||
'► ANSWER',
|
||||
'I will check your inbox and task board now.',
|
||||
'```bash',
|
||||
'$ btmsg inbox',
|
||||
'```',
|
||||
'```bash',
|
||||
'$ bttask board',
|
||||
'```',
|
||||
'Based on the results, I will proceed.',
|
||||
'Tokens: 3500 sent, 250 received. Cost: $0.0070 message, $0.0140 session',
|
||||
'aider> ',
|
||||
].join('\n');
|
||||
|
||||
const blocks = parseTurnOutput(realisticOutput);
|
||||
const types = blocks.map(b => b.type);
|
||||
|
||||
expect(types).toContain('thinking');
|
||||
expect(types).toContain('text');
|
||||
expect(types).toContain('shell');
|
||||
expect(types).toContain('cost');
|
||||
|
||||
const shells = blocks.filter(b => b.type === 'shell').map(b => b.content);
|
||||
expect(shells).toContain('btmsg inbox');
|
||||
expect(shells).toContain('bttask board');
|
||||
|
||||
expect(extractSessionCost(realisticOutput)).toBeCloseTo(0.0140);
|
||||
});
|
||||
|
||||
it('startup fixture: looksLikePrompt matches after typical Aider startup output', () => {
|
||||
expect(looksLikePrompt(FIXTURE_STARTUP)).toBe(true);
|
||||
});
|
||||
|
||||
it('startup fixture: all startup lines are suppressed by shouldSuppress', () => {
|
||||
const startupLines = [
|
||||
'Aider v0.72.1',
|
||||
'Main model: openrouter/anthropic/claude-sonnet-4 with diff edit format',
|
||||
'Weak model: openrouter/anthropic/claude-haiku-4',
|
||||
'Git repo: none',
|
||||
'Repo-map: disabled',
|
||||
'Use /help to see in-chat commands, run with --help to see cmd line args',
|
||||
];
|
||||
for (const line of startupLines) {
|
||||
expect(shouldSuppress(line), `Expected shouldSuppress("${line}") to be true`).toBe(true);
|
||||
}
|
||||
});
|
||||
|
||||
it('PROMPT_RE matches all expected prompt forms', () => {
|
||||
const validPrompts = ['> ', 'aider> ', 'my-repo> ', 'project.name> ', 'repo_123> '];
|
||||
for (const p of validPrompts) {
|
||||
expect(PROMPT_RE.test(p), `Expected PROMPT_RE to match "${p}"`).toBe(true);
|
||||
}
|
||||
});
|
||||
|
||||
it('PROMPT_RE rejects non-prompt forms', () => {
|
||||
const notPrompts = ['> something', 'text> more text ', '>text', ''];
|
||||
for (const p of notPrompts) {
|
||||
expect(PROMPT_RE.test(p), `Expected PROMPT_RE not to match "${p}"`).toBe(false);
|
||||
}
|
||||
});
|
||||
|
||||
it('SHELL_CMD_RE matches all documented command prefixes', () => {
|
||||
const cmds = [
|
||||
'btmsg send agent-001 "hello"',
|
||||
'bttask status task-42 done',
|
||||
'cat /etc/hosts',
|
||||
'ls -la',
|
||||
'find . -name "*.ts"',
|
||||
'grep -r "TODO" src/',
|
||||
'mkdir -p /tmp/test',
|
||||
'cd /home/user',
|
||||
'cp file.ts file2.ts',
|
||||
'mv old.ts new.ts',
|
||||
'rm -rf /tmp/test',
|
||||
'pip install requests',
|
||||
'npm install',
|
||||
'git status',
|
||||
'curl https://example.com',
|
||||
'wget https://example.com/file',
|
||||
'python script.py',
|
||||
'node index.js',
|
||||
'bash run.sh',
|
||||
'sh script.sh',
|
||||
];
|
||||
for (const cmd of cmds) {
|
||||
expect(SHELL_CMD_RE.test(cmd), `Expected SHELL_CMD_RE to match "${cmd}"`).toBe(true);
|
||||
}
|
||||
});
|
||||
|
||||
it('parseTurnOutput produces no shell blocks for non-shell code blocks (e.g. markdown python)', () => {
|
||||
const buffer = [
|
||||
'Here is example Python code:',
|
||||
'```python',
|
||||
'import os',
|
||||
'print(os.getcwd())',
|
||||
'```',
|
||||
'> ',
|
||||
].join('\n');
|
||||
const shells = parseTurnOutput(buffer).filter(b => b.type === 'shell');
|
||||
expect(shells).toHaveLength(0);
|
||||
});
|
||||
|
||||
it('cost regex format has not changed — still "Cost: $X.XX message, $Y.YY session"', () => {
|
||||
const costLine = 'Tokens: 1234 sent, 56 received. Cost: $0.0023 message, $0.0045 session';
|
||||
expect(extractSessionCost(costLine)).toBeCloseTo(0.0045);
|
||||
// Verify the message cost is different from session cost (they're two separate values)
|
||||
const msgMatch = costLine.match(/Cost: \$([0-9.]+) message/);
|
||||
expect(msgMatch).not.toBeNull();
|
||||
expect(parseFloat(msgMatch![1])).toBeCloseTo(0.0023);
|
||||
});
|
||||
});
|
||||
sidecar/aider-parser.ts — new file, 243 lines
@@ -0,0 +1,243 @@
// aider-parser.ts — Pure parsing functions extracted from aider-runner.ts
// Exported for unit testing. aider-runner.ts imports from here.

import { execSync } from 'child_process';

// --- Types ---

export interface TurnBlock {
  type: 'thinking' | 'text' | 'shell' | 'cost';
  content: string;
}

// --- Constants ---

// Prompt detection: Aider with --no-pretty --no-fancy-input shows prompts like:
//   `> `, `aider> `, or `repo-name> `
export const PROMPT_RE = /^[a-zA-Z0-9._-]*> $/;

// Lines to suppress from UI (aider startup noise)
export const SUPPRESS_RE = [
  /^Aider v\d/,
  /^Main model:/,
  /^Weak model:/,
  /^Git repo:/,
  /^Repo-map:/,
  /^Use \/help/,
];

// Known shell command patterns — commands from btmsg/bttask/common tools
export const SHELL_CMD_RE = /^(btmsg |bttask |cat |ls |find |grep |mkdir |cd |cp |mv |rm |pip |npm |git |curl |wget |python |node |bash |sh )/;

// --- Pure parsing functions ---

/**
 * Detects whether the last non-empty line of a buffer looks like an Aider prompt.
 * Aider with --no-pretty --no-fancy-input shows prompts like: `> `, `aider> `, `repo-name> `
 */
export function looksLikePrompt(buffer: string): boolean {
  const lines = buffer.split('\n');
  for (let i = lines.length - 1; i >= 0; i--) {
    const l = lines[i];
    if (l.trim() === '') continue;
    return PROMPT_RE.test(l);
  }
  return false;
}

/**
 * Returns true for lines that should be suppressed from the UI output.
 * Covers Aider startup noise and empty lines.
 */
export function shouldSuppress(line: string): boolean {
  const t = line.trim();
  return t === '' || SUPPRESS_RE.some(p => p.test(t));
}

/**
 * Parses complete Aider turn output into structured blocks.
 * Handles thinking sections, text, shell commands extracted from code blocks
 * or inline, cost lines, and suppresses startup noise.
 */
export function parseTurnOutput(buffer: string): TurnBlock[] {
  const blocks: TurnBlock[] = [];
  const lines = buffer.split('\n');

  let thinkingLines: string[] = [];
  let answerLines: string[] = [];
  let inThinking = false;
  let inAnswer = false;
  let inCodeBlock = false;
  let codeBlockLang = '';
  let codeBlockLines: string[] = [];

  for (const line of lines) {
    const t = line.trim();

    // Skip suppressed lines
    if (shouldSuppress(line) && !inCodeBlock) continue;

    // Prompt markers — skip
    if (PROMPT_RE.test(t)) continue;

    // Thinking block markers (handle various unicode arrows and spacing)
    if (/^[►▶⯈❯>]\s*THINKING$/i.test(t)) {
      inThinking = true;
      inAnswer = false;
      continue;
    }
    if (/^[►▶⯈❯>]\s*ANSWER$/i.test(t)) {
      if (thinkingLines.length > 0) {
        blocks.push({ type: 'thinking', content: thinkingLines.join('\n') });
        thinkingLines = [];
      }
      inThinking = false;
      inAnswer = true;
      continue;
    }

    // Code block detection (```bash, ```shell, ```)
    if (t.startsWith('```') && !inCodeBlock) {
      inCodeBlock = true;
      codeBlockLang = t.slice(3).trim().toLowerCase();
      codeBlockLines = [];
      continue;
    }
    if (t === '```' && inCodeBlock) {
      inCodeBlock = false;
      // If this was a bash/shell code block, extract commands
      if (['bash', 'shell', 'sh', ''].includes(codeBlockLang)) {
        for (const cmdLine of codeBlockLines) {
          const cmd = cmdLine.trim().replace(/^\$ /, '');
          if (cmd && SHELL_CMD_RE.test(cmd)) {
            if (answerLines.length > 0) {
              blocks.push({ type: 'text', content: answerLines.join('\n') });
              answerLines = [];
            }
            blocks.push({ type: 'shell', content: cmd });
          }
        }
      }
      codeBlockLines = [];
      continue;
    }
    if (inCodeBlock) {
      codeBlockLines.push(line);
      continue;
    }

    // Cost line
    if (/^Tokens: .+Cost:/.test(t)) {
      blocks.push({ type: 'cost', content: t });
      continue;
    }

    // Shell command ($ prefix or Running prefix)
    if (t.startsWith('$ ') || t.startsWith('Running ')) {
      if (answerLines.length > 0) {
        blocks.push({ type: 'text', content: answerLines.join('\n') });
        answerLines = [];
      }
      blocks.push({ type: 'shell', content: t.replace(/^(Running |\$ )/, '') });
      continue;
    }

    // Detect bare btmsg/bttask commands in answer text
    if (inAnswer && SHELL_CMD_RE.test(t) && !t.includes('`') && !t.startsWith('#')) {
      if (answerLines.length > 0) {
        blocks.push({ type: 'text', content: answerLines.join('\n') });
        answerLines = [];
      }
      blocks.push({ type: 'shell', content: t });
      continue;
    }

    // Aider's "Applied edit" / flake8 output — suppress from answer text
    if (/^Applied edit to |^Fix any errors|^Running: /.test(t)) continue;

    // Accumulate into thinking or answer
    if (inThinking) {
      thinkingLines.push(line);
    } else {
      answerLines.push(line);
    }
  }

  // Flush remaining
  if (thinkingLines.length > 0) {
    blocks.push({ type: 'thinking', content: thinkingLines.join('\n') });
  }
  if (answerLines.length > 0) {
    blocks.push({ type: 'text', content: answerLines.join('\n').trim() });
  }

  return blocks;
}

/**
 * Extracts session cost from a raw turn buffer.
 * Returns 0 when no cost line is present.
 */
export function extractSessionCost(buffer: string): number {
  const match = buffer.match(/Cost: \$([0-9.]+) message, \$([0-9.]+) session/);
  return match ? parseFloat(match[2]) : 0;
}

// --- I/O helpers (require real child_process; mock in tests) ---

function log(message: string) {
  process.stderr.write(`[aider-parser] ${message}\n`);
}

/**
 * Runs a CLI command and returns its trimmed stdout, or null on failure/empty.
 */
export function runCmd(cmd: string, env: Record<string, string>, cwd: string): string | null {
  try {
    const result = execSync(cmd, { env, cwd, timeout: 5000, encoding: 'utf-8' }).trim();
    log(`[prefetch] ${cmd} → ${result.length} chars`);
    return result || null;
  } catch (e: unknown) {
    log(`[prefetch] ${cmd} FAILED: ${e instanceof Error ? e.message : String(e)}`);
    return null;
  }
}

/**
 * Pre-fetches btmsg inbox and bttask board context.
 * Returns formatted markdown with both sections.
 */
export function prefetchContext(env: Record<string, string>, cwd: string): string {
  log(`[prefetch] BTMSG_AGENT_ID=${env.BTMSG_AGENT_ID ?? 'NOT SET'}, cwd=${cwd}`);
  const parts: string[] = [];

  const inbox = runCmd('btmsg inbox', env, cwd);
  if (inbox) {
    parts.push(`## Your Inbox\n\`\`\`\n${inbox}\n\`\`\``);
  } else {
    parts.push('## Your Inbox\nNo messages (or btmsg unavailable).');
  }

  const board = runCmd('bttask board', env, cwd);
  if (board) {
    parts.push(`## Task Board\n\`\`\`\n${board}\n\`\`\``);
  } else {
    parts.push('## Task Board\nNo tasks (or bttask unavailable).');
  }

  return parts.join('\n\n');
}

/**
 * Executes a shell command and returns stdout + exit code.
 * On failure, returns stderr/error message with a non-zero exit code.
 */
export function execShell(cmd: string, env: Record<string, string>, cwd: string): { stdout: string; exitCode: number } {
  try {
    const result = execSync(cmd, { env, cwd, timeout: 30000, encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] });
    return { stdout: result.trim(), exitCode: 0 };
  } catch (e: unknown) {
    const err = e as { stdout?: string; stderr?: string; status?: number };
    return { stdout: (err.stdout ?? err.stderr ?? String(e)).trim(), exitCode: err.status ?? 1 };
  }
}
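A minimal usage sketch of the prompt and cost helpers above. The regex and the `extractSessionCost` body are copied verbatim from aider-parser.ts; the sample turn string itself is hypothetical:

```typescript
// PROMPT_RE and extractSessionCost copied verbatim from aider-parser.ts;
// the `turn` buffer below is an invented example, not real Aider output.
const PROMPT_RE = /^[a-zA-Z0-9._-]*> $/;

function extractSessionCost(buffer: string): number {
  const match = buffer.match(/Cost: \$([0-9.]+) message, \$([0-9.]+) session/);
  return match ? parseFloat(match[2]) : 0;
}

const turn =
  'Done.\nTokens: 100 sent, 10 received. Cost: $0.0010 message, $0.0030 session\naider> ';

console.log(PROMPT_RE.test('aider> '));  // true — the trailing space is required
console.log(extractSessionCost(turn));   // 0.003 — the second (session) dollar amount
```

Note the cumulative-vs-incremental distinction: the helper deliberately returns the session total (second amount), not the per-message cost.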
sidecar/aider-runner.ts — new file, 407 lines
@@ -0,0 +1,407 @@
// Aider Runner — Node.js sidecar entry point for Aider coding agent
// Spawned by Rust SidecarManager, communicates via stdio NDJSON
// Runs aider in interactive mode — persistent process with stdin/stdout chat
// Pre-fetches btmsg/bttask context so the LLM has actionable data immediately.
//
// Parsing logic lives in aider-parser.ts (exported for unit testing).

import { stdin, stdout, stderr } from 'process';
import { createInterface } from 'readline';
import { spawn, type ChildProcess } from 'child_process';
import { accessSync, constants } from 'fs';
import { join } from 'path';
import {
  type TurnBlock,
  looksLikePrompt,
  parseTurnOutput,
  prefetchContext,
  execShell,
  extractSessionCost,
  PROMPT_RE,
} from './aider-parser.js';

const rl = createInterface({ input: stdin });

interface AiderSession {
  process: ChildProcess;
  controller: AbortController;
  sessionId: string;
  model: string;
  lineBuffer: string; // partial line accumulator for streaming
  turnBuffer: string; // full turn output
  turnStartTime: number;
  turns: number;
  ready: boolean;
  env: Record<string, string>;
  cwd: string;
  autonomousMode: 'restricted' | 'autonomous';
}

const sessions = new Map<string, AiderSession>();

function send(msg: Record<string, unknown>) {
  stdout.write(JSON.stringify(msg) + '\n');
}

function log(message: string) {
  stderr.write(`[aider-sidecar] ${message}\n`);
}

rl.on('line', (line: string) => {
  try {
    const msg = JSON.parse(line);
    handleMessage(msg).catch((err: unknown) => {
      log(`Unhandled error in message handler: ${err}`);
    });
  } catch {
    log(`Invalid JSON: ${line}`);
  }
});

interface QueryMessage {
  type: 'query';
  sessionId: string;
  prompt: string;
  cwd?: string;
  model?: string;
  systemPrompt?: string;
  extraEnv?: Record<string, string>;
  providerConfig?: Record<string, unknown>;
}

interface StopMessage {
  type: 'stop';
  sessionId: string;
}
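For reference, the wire format on the sidecar's stdin is one JSON object per line (NDJSON). A sketch of framing a query message — field names come from the `QueryMessage` interface above, while every concrete value is invented for illustration:

```typescript
// Field names match the QueryMessage interface; all values are hypothetical.
const queryMsg = {
  type: 'query',
  sessionId: 'sess-001',
  prompt: 'Check your inbox and act on pending tasks.',
  cwd: '/tmp/demo-project',
  model: 'openrouter/anthropic/claude-sonnet-4',
};

// NDJSON framing: serialize to one line, terminate with a single newline.
const wire = JSON.stringify(queryMsg) + '\n';
console.log(JSON.parse(wire).type); // 'query' — round-trips cleanly
```

This mirrors what `send()` does in the opposite direction on stdout.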
async function handleMessage(msg: Record<string, unknown>) {
|
||||
switch (msg.type) {
|
||||
case 'ping':
|
||||
send({ type: 'pong' });
|
||||
break;
|
||||
case 'query':
|
||||
await handleQuery(msg as unknown as QueryMessage);
|
||||
break;
|
||||
case 'stop':
|
||||
handleStop(msg as unknown as StopMessage);
|
||||
break;
|
||||
default:
|
||||
send({ type: 'error', message: `Unknown message type: ${msg.type}` });
|
||||
}
|
||||
}
|
||||
|
||||
// Parsing, I/O helpers, and constants are imported from aider-parser.ts
|
||||
|
||||
// --- Main query handler ---
|
||||
|
||||
async function handleQuery(msg: QueryMessage) {
|
||||
const { sessionId, prompt, cwd: cwdOpt, model, systemPrompt, extraEnv, providerConfig } = msg;
|
||||
const cwd = cwdOpt || process.cwd();
|
||||
|
||||
// Build environment
|
||||
const env: Record<string, string> = { ...process.env as Record<string, string> };
|
||||
if (extraEnv) Object.assign(env, extraEnv);
|
||||
if (providerConfig?.openrouterApiKey && typeof providerConfig.openrouterApiKey === 'string') {
|
||||
env.OPENROUTER_API_KEY = providerConfig.openrouterApiKey;
|
||||
}
|
||||
|
||||
const autonomousMode = (providerConfig?.autonomousMode as string) === 'autonomous' ? 'autonomous' : 'restricted' as const;
|
||||
|
||||
const existing = sessions.get(sessionId);
|
||||
|
||||
// Follow-up prompt on existing session
|
||||
if (existing && existing.process.exitCode === null) {
|
||||
log(`Continuing session ${sessionId} with follow-up prompt`);
|
||||
existing.turnBuffer = '';
|
||||
existing.lineBuffer = '';
|
||||
existing.turnStartTime = Date.now();
|
||||
existing.turns++;
|
||||
|
||||
send({ type: 'agent_started', sessionId });
|
||||
|
||||
// Show the incoming prompt in the console
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: { type: 'input', prompt },
|
||||
});
|
||||
|
||||
// Pre-fetch fresh context for follow-up turns too
|
||||
const ctx = prefetchContext(existing.env, existing.cwd);
|
||||
const fullPrompt = `${ctx}\n\nNow act on the above. Your current task:\n${prompt}`;
|
||||
existing.process.stdin?.write(fullPrompt + '\n');
|
||||
return;
|
||||
}
|
||||
|
||||
// New session — spawn aider
|
||||
const aiderPath = which('aider');
|
||||
if (!aiderPath) {
|
||||
send({ type: 'agent_error', sessionId, message: 'Aider not found. Install with: pipx install aider-chat' });
|
||||
return;
|
||||
}
|
||||
|
||||
const aiderModel = model || 'openrouter/anthropic/claude-sonnet-4';
|
||||
log(`Starting Aider session ${sessionId} with model ${aiderModel}`);
|
||||
|
||||
const controller = new AbortController();
|
||||
|
||||
const args: string[] = [
|
||||
'--model', aiderModel,
|
||||
'--yes-always',
|
||||
'--no-pretty',
|
||||
'--no-fancy-input',
|
||||
'--no-stream', // Complete responses (no token fragments)
|
||||
'--no-git',
|
||||
'--no-auto-commits',
|
||||
'--suggest-shell-commands',
|
||||
'--no-check-model-accepts-settings',
|
||||
];
|
||||
|
||||
if (providerConfig?.editFormat && typeof providerConfig.editFormat === 'string') {
|
||||
args.push('--edit-format', providerConfig.editFormat);
|
||||
}
|
||||
if (providerConfig?.architect === true) {
|
||||
args.push('--architect');
|
||||
}
|
||||
|
||||
send({ type: 'agent_started', sessionId });
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: { type: 'system', subtype: 'init', session_id: sessionId, model: aiderModel, cwd },
|
||||
});
|
||||
|
||||
// Show the incoming prompt in the console
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: { type: 'input', prompt },
|
||||
});
|
||||
|
||||
const child = spawn(aiderPath, args, {
|
||||
cwd,
|
||||
env,
|
||||
stdio: ['pipe', 'pipe', 'pipe'],
|
||||
signal: controller.signal,
|
||||
});
|
||||
|
||||
const session: AiderSession = {
|
||||
process: child,
|
||||
controller,
|
||||
sessionId,
|
||||
model: aiderModel,
|
||||
lineBuffer: '',
|
||||
turnBuffer: '',
|
||||
turnStartTime: Date.now(),
|
||||
turns: 0,
|
||||
ready: false,
|
||||
env,
|
||||
cwd,
|
||||
autonomousMode,
|
||||
};
|
||||
sessions.set(sessionId, session);
|
||||
|
||||
// Pre-fetch btmsg/bttask context
|
||||
const prefetched = prefetchContext(env, cwd);
|
||||
|
||||
// Build full initial prompt — our context FIRST, with explicit override
|
||||
const promptParts: string[] = [];
|
||||
promptParts.push(`IMPORTANT: You are an autonomous agent in a multi-agent system. Your PRIMARY job is to act on messages and tasks below, NOT to ask the user for files. You can run shell commands to accomplish tasks. If you need to read files, use shell commands like \`cat\`, \`find\`, \`ls\`. If you need to send messages, use \`btmsg send <agent-id> "message"\`. If you need to update tasks, use \`bttask status <task-id> done\`.`);
|
||||
if (systemPrompt) promptParts.push(systemPrompt);
|
||||
promptParts.push(prefetched);
|
||||
promptParts.push(`---\n\nNow act on the above. Your current task:\n${prompt}`);
|
||||
const fullPrompt = promptParts.join('\n\n');
|
||||
|
||||
// Startup buffer — wait for first prompt before sending
|
||||
let startupBuffer = '';
|
||||
|
||||
child.stdout?.on('data', (data: Buffer) => {
|
||||
const text = data.toString();
|
||||
|
||||
// Phase 1: wait for aider startup to finish
|
||||
if (!session.ready) {
|
||||
startupBuffer += text;
|
||||
if (looksLikePrompt(startupBuffer)) {
|
||||
session.ready = true;
|
||||
session.turns = 1;
|
||||
session.turnStartTime = Date.now();
|
||||
log(`Aider ready, sending initial prompt (${fullPrompt.length} chars)`);
|
||||
child.stdin?.write(fullPrompt + '\n');
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
// Phase 2: accumulate entire turn output, emit as batched blocks
|
||||
session.turnBuffer += text;
|
||||
|
||||
// Only process when turn is complete (aider shows prompt again)
|
||||
if (!looksLikePrompt(session.turnBuffer)) return;
|
||||
|
||||
const duration = Date.now() - session.turnStartTime;
|
||||
const blocks = parseTurnOutput(session.turnBuffer);
|
||||
|
||||
// Emit structured blocks and execute shell commands
|
||||
const shellResults: string[] = [];
|
||||
|
||||
for (const block of blocks) {
|
||||
switch (block.type) {
|
||||
case 'thinking':
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: { type: 'thinking', content: block.content },
|
||||
});
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
if (block.content) {
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: { type: 'assistant', message: { role: 'assistant', content: block.content } },
|
||||
});
|
||||
}
|
||||
break;
|
||||
|
||||
case 'shell': {
|
||||
const cmdId = `shell-${Date.now()}-${Math.random().toString(36).slice(2, 6)}`;
|
||||
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: {
|
||||
type: 'tool_use',
|
||||
id: cmdId,
|
||||
name: 'Bash',
|
||||
input: { command: block.content },
|
||||
},
|
||||
});
|
||||
|
||||
if (session.autonomousMode === 'autonomous') {
|
||||
log(`[exec] Running: ${block.content}`);
|
||||
const result = execShell(block.content, session.env, session.cwd);
|
||||
const output = result.stdout || '(no output)';
|
||||
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: {
|
||||
type: 'tool_result',
|
||||
tool_use_id: cmdId,
|
||||
content: output,
|
||||
},
|
||||
});
|
||||
|
||||
shellResults.push(`$ ${block.content}\n${output}`);
|
||||
} else {
|
||||
log(`[restricted] Blocked: ${block.content}`);
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: {
|
||||
type: 'tool_result',
|
||||
tool_use_id: cmdId,
|
||||
content: `[BLOCKED] Shell execution disabled in restricted mode. Command not executed: ${block.content}`,
|
||||
},
|
||||
});
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
case 'cost':
|
||||
// Parsed below for the result event
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// Extract cost and emit result
|
||||
const costUsd = extractSessionCost(session.turnBuffer);
|
||||
|
||||
send({
|
||||
type: 'agent_event',
|
||||
sessionId,
|
||||
event: {
|
||||
type: 'result',
|
||||
subtype: 'result',
|
||||
result: '',
|
||||
cost_usd: costUsd,
|
||||
duration_ms: duration,
|
||||
num_turns: session.turns,
|
||||
is_error: false,
|
||||
session_id: sessionId,
|
||||
},
|
||||
});
|
||||
|
||||
send({ type: 'agent_stopped', sessionId, exitCode: 0, signal: null });
|
||||
session.turnBuffer = '';
|
||||
|
||||
// If commands were executed, feed results back to aider for next turn
|
||||
if (shellResults.length > 0 && child.exitCode === null) {
|
||||
const feedback = `The following commands were executed and here are the results:\n\n${shellResults.join('\n\n')}\n\nBased on these results, continue your work. If the task is complete, say "DONE".`;
|
||||
log(`[exec] Feeding ${shellResults.length} command results back to aider`);
|
||||
session.turnBuffer = '';
|
||||
session.turnStartTime = Date.now();
|
||||
session.turns++;
|
||||
send({ type: 'agent_started', sessionId });
|
||||
child.stdin?.write(feedback + '\n');
|
||||
}
|
||||
});
|
||||
|
||||
child.stderr?.on('data', (data: Buffer) => {
|
||||
for (const line of data.toString().split('\n')) {
|
||||
if (line.trim()) log(`[stderr] ${line}`);
|
||||
}
|
||||
});
|
||||
|
||||
child.on('close', (code: number | null, signal: string | null) => {
|
||||
sessions.delete(sessionId);
|
||||
if (controller.signal.aborted) {
|
||||
send({ type: 'agent_stopped', sessionId, exitCode: null, signal: 'SIGTERM' });
|
||||
} else if (code !== 0 && code !== null) {
|
||||
send({ type: 'agent_error', sessionId, message: `Aider exited with code ${code}` });
|
||||
} else {
|
||||
send({ type: 'agent_stopped', sessionId, exitCode: code, signal });
|
||||
}
|
||||
});
|
||||
|
||||
child.on('error', (err: Error) => {
|
||||
sessions.delete(sessionId);
|
||||
log(`Aider spawn error: ${err.message}`);
|
||||
send({ type: 'agent_error', sessionId, message: `Failed to start Aider: ${err.message}` });
|
||||
});
|
||||
}
|
||||
|
||||
function handleStop(msg: StopMessage) {
|
||||
const { sessionId } = msg;
|
||||
const session = sessions.get(sessionId);
|
||||
if (!session) {
|
||||
send({ type: 'error', sessionId, message: 'Session not found' });
|
||||
return;
|
||||
}
|
||||
|
||||
log(`Stopping Aider session ${sessionId}`);
|
||||
session.process.stdin?.write('/exit\n');
|
||||
const killTimer = setTimeout(() => {
|
||||
session.controller.abort();
|
||||
session.process.kill('SIGTERM');
|
||||
}, 3000);
|
||||
session.process.once('close', () => clearTimeout(killTimer));
|
||||
}
|
||||
|
||||
function which(name: string): string | null {
|
||||
const pathDirs = (process.env.PATH || '').split(':');
|
||||
for (const dir of pathDirs) {
|
||||
const full = join(dir, name);
|
||||
try {
|
||||
accessSync(full, constants.X_OK);
|
||||
return full;
|
||||
} catch {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
log('Aider sidecar started');
|
||||
log(`Found aider at: ${which('aider') ?? 'NOT FOUND'}`);
|
||||
send({ type: 'ready' });
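The turn loop above gates on `looksLikePrompt()`, which is defined earlier in aider-runner.ts and not shown in this hunk. A minimal sketch of such a heuristic, assuming aider's REPL prompt is a trailing non-empty line ending in `>`, might look like this (the real helper may use a different marker):

```typescript
// Hypothetical sketch; the actual looksLikePrompt() in aider-runner.ts may differ.
// Assumes aider's interactive prompt is the last non-empty line and ends with ">"
// (e.g. ">", "architect>", "code>").
function looksLikePromptSketch(buffer: string): boolean {
  const lines = buffer
    .split('\n')
    .map((l) => l.trimEnd())
    .filter((l) => l.length > 0);
  if (lines.length === 0) return false;
  return lines[lines.length - 1].endsWith('>');
}
```

Because both the startup phase and the turn phase call the same check, a false positive here would send the prompt before aider is actually idle, which is why the real heuristic is worth keeping conservative.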
224  sidecar/claude-runner.ts  Normal file
@@ -0,0 +1,224 @@
// Claude Runner — Node.js sidecar entry point for Claude Code provider
// Spawned by Rust SidecarManager, communicates via stdio NDJSON
// Uses @anthropic-ai/claude-agent-sdk for Claude session management

import { stdin, stdout, stderr } from 'process';
import { createInterface } from 'readline';
import { execSync } from 'child_process';
import { existsSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';
import { query, type Query } from '@anthropic-ai/claude-agent-sdk';

const rl = createInterface({ input: stdin });

// Active agent sessions keyed by session ID
const sessions = new Map<string, { query: Query; controller: AbortController }>();

function send(msg: Record<string, unknown>) {
  stdout.write(JSON.stringify(msg) + '\n');
}

function log(message: string) {
  stderr.write(`[sidecar] ${message}\n`);
}

rl.on('line', (line: string) => {
  try {
    const msg = JSON.parse(line);
    handleMessage(msg).catch((err: unknown) => {
      log(`Unhandled error in message handler: ${err}`);
    });
  } catch {
    log(`Invalid JSON: ${line}`);
  }
});

interface QueryMessage {
  type: 'query';
  sessionId: string;
  prompt: string;
  cwd?: string;
  maxTurns?: number;
  maxBudgetUsd?: number;
  resumeSessionId?: string;
  permissionMode?: string;
  settingSources?: string[];
  systemPrompt?: string;
  model?: string;
  claudeConfigDir?: string;
  additionalDirectories?: string[];
  worktreeName?: string;
  extraEnv?: Record<string, string>;
}

interface StopMessage {
  type: 'stop';
  sessionId: string;
}

async function handleMessage(msg: Record<string, unknown>) {
  switch (msg.type) {
    case 'ping':
      send({ type: 'pong' });
      break;
    case 'query':
      await handleQuery(msg as unknown as QueryMessage);
      break;
    case 'stop':
      handleStop(msg as unknown as StopMessage);
      break;
    default:
      send({ type: 'error', message: `Unknown message type: ${msg.type}` });
  }
}

async function handleQuery(msg: QueryMessage) {
  const { sessionId, prompt, cwd, maxTurns, maxBudgetUsd, resumeSessionId, permissionMode, settingSources, systemPrompt, model, claudeConfigDir, additionalDirectories, worktreeName, extraEnv } = msg;

  if (sessions.has(sessionId)) {
    send({ type: 'error', sessionId, message: 'Session already running' });
    return;
  }

  log(`Starting agent session ${sessionId} via SDK`);

  const controller = new AbortController();

  // Strip CLAUDE* and ANTHROPIC_* env vars to prevent nesting detection by the spawned CLI.
  // Whitelist CLAUDE_CODE_EXPERIMENTAL_* so feature flags (e.g. agent teams) pass through.
  const cleanEnv: Record<string, string | undefined> = {};
  for (const [key, value] of Object.entries(process.env)) {
    if (key.startsWith('CLAUDE_CODE_EXPERIMENTAL_')) {
      cleanEnv[key] = value;
    } else if (!key.startsWith('CLAUDE') && !key.startsWith('ANTHROPIC_')) {
      cleanEnv[key] = value;
    }
  }
  // Override CLAUDE_CONFIG_DIR for multi-account support
  if (claudeConfigDir) {
    cleanEnv['CLAUDE_CONFIG_DIR'] = claudeConfigDir;
  }
  // Inject extra environment variables (e.g. BTMSG_AGENT_ID for agent communication)
  if (extraEnv) {
    for (const [key, value] of Object.entries(extraEnv)) {
      cleanEnv[key] = value;
    }
  }

  try {
    if (!claudePath) {
      send({ type: 'agent_error', sessionId, message: 'Claude CLI not found. Install Claude Code first.' });
      return;
    }

    const q = query({
      prompt,
      options: {
        pathToClaudeCodeExecutable: claudePath,
        abortController: controller,
        cwd: cwd || process.cwd(),
        env: cleanEnv,
        maxTurns: maxTurns ?? undefined,
        maxBudgetUsd: maxBudgetUsd ?? undefined,
        resume: resumeSessionId ?? undefined,
        allowedTools: [
          'Bash', 'Read', 'Write', 'Edit', 'Glob', 'Grep',
          'WebSearch', 'WebFetch', 'TodoWrite', 'NotebookEdit',
        ],
        permissionMode: (permissionMode ?? 'bypassPermissions') as 'bypassPermissions' | 'default',
        allowDangerouslySkipPermissions: (permissionMode ?? 'bypassPermissions') === 'bypassPermissions',
        settingSources: settingSources ?? ['user', 'project'],
        systemPrompt: systemPrompt
          ? systemPrompt
          : { type: 'preset' as const, preset: 'claude_code' as const },
        model: model ?? undefined,
        additionalDirectories: additionalDirectories ?? undefined,
        extraArgs: worktreeName ? { worktree: worktreeName } : undefined,
      },
    });

    sessions.set(sessionId, { query: q, controller });
    send({ type: 'agent_started', sessionId });

    for await (const message of q) {
      // Forward SDK messages as-is — they use the same format as CLI stream-json
      const sdkMsg = message as Record<string, unknown>;
      send({
        type: 'agent_event',
        sessionId,
        event: sdkMsg,
      });
    }

    // Session completed normally
    sessions.delete(sessionId);
    send({
      type: 'agent_stopped',
      sessionId,
      exitCode: 0,
      signal: null,
    });
  } catch (err: unknown) {
    sessions.delete(sessionId);
    const errMsg = err instanceof Error ? err.message : String(err);

    if (controller.signal.aborted) {
      log(`Agent session ${sessionId} aborted`);
      send({
        type: 'agent_stopped',
        sessionId,
        exitCode: null,
        signal: 'SIGTERM',
      });
    } else {
      log(`Agent session ${sessionId} error: ${errMsg}`);
      send({
        type: 'agent_error',
        sessionId,
        message: errMsg,
      });
    }
  }
}

function handleStop(msg: StopMessage) {
  const { sessionId } = msg;
  const session = sessions.get(sessionId);
  if (!session) {
    send({ type: 'error', sessionId, message: 'Session not found' });
    return;
  }

  log(`Stopping agent session ${sessionId}`);
  session.controller.abort();
}

function findClaudeCli(): string | undefined {
  // Check common locations
  const candidates = [
    join(homedir(), '.local', 'bin', 'claude'),
    join(homedir(), '.claude', 'local', 'claude'),
    '/usr/local/bin/claude',
    '/usr/bin/claude',
  ];
  for (const p of candidates) {
    if (existsSync(p)) return p;
  }
  // Fall back to which/where
  try {
    return execSync('which claude 2>/dev/null || where claude 2>nul', { encoding: 'utf-8' }).trim().split('\n')[0];
  } catch {
    return undefined;
  }
}

const claudePath = findClaudeCli();
if (claudePath) {
  log(`Found Claude CLI at ${claudePath}`);
} else {
  log('WARNING: Claude CLI not found — agent sessions will fail');
}

log('Sidecar started');
send({ type: 'ready' });
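The env-stripping loop in `handleQuery` above can be factored into a pure function, which makes the whitelist rule easy to unit-test. This sketch restates the same three branches from the file; nothing here is new behavior, only a testable shape:

```typescript
// Same filtering rule as claude-runner.ts handleQuery, as a pure function:
// keep CLAUDE_CODE_EXPERIMENTAL_* (feature flags), drop all other CLAUDE*/
// ANTHROPIC_* vars (prevents nesting detection), pass everything else through.
function filterClaudeEnv(
  env: Record<string, string | undefined>,
): Record<string, string | undefined> {
  const out: Record<string, string | undefined> = {};
  for (const [key, value] of Object.entries(env)) {
    if (key.startsWith('CLAUDE_CODE_EXPERIMENTAL_')) {
      out[key] = value; // whitelisted feature flag
    } else if (!key.startsWith('CLAUDE') && !key.startsWith('ANTHROPIC_')) {
      out[key] = value; // unrelated variable, passes through
    }
    // any other CLAUDE*/ANTHROPIC_* variable is dropped
  }
  return out;
}
```

Note the order matters: the experimental check must run before the generic `CLAUDE` exclusion, or the flags would be dropped with everything else.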
229  sidecar/codex-runner.ts  Normal file
@@ -0,0 +1,229 @@
// Codex Runner — Node.js sidecar entry point for OpenAI Codex provider
// Spawned by Rust SidecarManager, communicates via stdio NDJSON
// Uses @openai/codex-sdk for Codex session management

import { stdin, stdout, stderr } from 'process';
import { createInterface } from 'readline';
import { execSync } from 'child_process';
import { existsSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

const rl = createInterface({ input: stdin });

const sessions = new Map<string, { controller: AbortController }>();

function send(msg: Record<string, unknown>) {
  stdout.write(JSON.stringify(msg) + '\n');
}

function log(message: string) {
  stderr.write(`[codex-sidecar] ${message}\n`);
}

rl.on('line', (line: string) => {
  try {
    const msg = JSON.parse(line);
    handleMessage(msg).catch((err: unknown) => {
      log(`Unhandled error in message handler: ${err}`);
    });
  } catch {
    log(`Invalid JSON: ${line}`);
  }
});

interface QueryMessage {
  type: 'query';
  sessionId: string;
  prompt: string;
  cwd?: string;
  maxTurns?: number;
  resumeSessionId?: string;
  permissionMode?: string;
  systemPrompt?: string;
  model?: string;
  providerConfig?: Record<string, unknown>;
  extraEnv?: Record<string, string>;
}

interface StopMessage {
  type: 'stop';
  sessionId: string;
}

async function handleMessage(msg: Record<string, unknown>) {
  switch (msg.type) {
    case 'ping':
      send({ type: 'pong' });
      break;
    case 'query':
      await handleQuery(msg as unknown as QueryMessage);
      break;
    case 'stop':
      handleStop(msg as unknown as StopMessage);
      break;
    default:
      send({ type: 'error', message: `Unknown message type: ${msg.type}` });
  }
}

async function handleQuery(msg: QueryMessage) {
  const { sessionId, prompt, cwd, maxTurns, resumeSessionId, permissionMode, model, providerConfig, extraEnv } = msg;

  if (sessions.has(sessionId)) {
    send({ type: 'error', sessionId, message: 'Session already running' });
    return;
  }

  log(`Starting Codex session ${sessionId}`);

  const controller = new AbortController();

  // Strip CODEX*/OPENAI* env vars to prevent nesting issues
  const cleanEnv: Record<string, string | undefined> = {};
  for (const [key, value] of Object.entries(process.env)) {
    if (!key.startsWith('CODEX') && !key.startsWith('OPENAI')) {
      cleanEnv[key] = value;
    }
  }
  // Re-inject the API key
  const apiKey = process.env.CODEX_API_KEY || process.env.OPENAI_API_KEY;
  if (apiKey) {
    cleanEnv['CODEX_API_KEY'] = apiKey;
  }
  // Inject extra environment variables (e.g. BTMSG_AGENT_ID for agent communication)
  if (extraEnv) {
    for (const [key, value] of Object.entries(extraEnv)) {
      cleanEnv[key] = value;
    }
  }

  // Dynamically import SDK — fails gracefully if not installed
  let Codex: any;
  try {
    const sdk = await import('@openai/codex-sdk');
    Codex = sdk.Codex ?? sdk.default;
  } catch {
    send({ type: 'agent_error', sessionId, message: 'Codex SDK not installed. Run: npm install @openai/codex-sdk' });
    return;
  }

  if (!apiKey) {
    send({ type: 'agent_error', sessionId, message: 'No API key. Set CODEX_API_KEY or OPENAI_API_KEY.' });
    return;
  }

  try {
    // Map permission mode to Codex sandbox/approval settings
    const sandbox = mapSandboxMode(providerConfig?.sandbox as string | undefined, permissionMode);
    const approvalPolicy = permissionMode === 'bypassPermissions' ? 'never' : 'on-request';

    const codex = new Codex({
      env: cleanEnv as Record<string, string>,
      config: {
        model: model ?? 'gpt-5.4',
        approval_policy: approvalPolicy,
        sandbox: sandbox,
      },
    });

    const threadOpts: Record<string, unknown> = {
      workingDirectory: cwd || process.cwd(),
    };

    const thread = resumeSessionId
      ? codex.resumeThread(resumeSessionId)
      : codex.startThread(threadOpts);

    sessions.set(sessionId, { controller });
    send({ type: 'agent_started', sessionId });

    const streamResult = await thread.runStreamed(prompt);

    for await (const event of streamResult.events) {
      if (controller.signal.aborted) break;

      // Forward raw Codex events — the message adapter parses them
      send({
        type: 'agent_event',
        sessionId,
        event: event as Record<string, unknown>,
      });
    }

    sessions.delete(sessionId);
    send({
      type: 'agent_stopped',
      sessionId,
      exitCode: 0,
      signal: null,
    });
  } catch (err: unknown) {
    sessions.delete(sessionId);
    const errMsg = err instanceof Error ? err.message : String(err);

    if (controller.signal.aborted) {
      log(`Codex session ${sessionId} aborted`);
      send({
        type: 'agent_stopped',
        sessionId,
        exitCode: null,
        signal: 'SIGTERM',
      });
    } else {
      log(`Codex session ${sessionId} error: ${errMsg}`);
      send({
        type: 'agent_error',
        sessionId,
        message: errMsg,
      });
    }
  }
}

function handleStop(msg: StopMessage) {
  const { sessionId } = msg;
  const session = sessions.get(sessionId);
  if (!session) {
    send({ type: 'error', sessionId, message: 'Session not found' });
    return;
  }

  log(`Stopping Codex session ${sessionId}`);
  session.controller.abort();
}

function mapSandboxMode(
  configSandbox: string | undefined,
  permissionMode: string | undefined,
): string {
  if (configSandbox) return configSandbox;
  if (permissionMode === 'bypassPermissions') return 'danger-full-access';
  return 'workspace-write';
}

function findCodexCli(): string | undefined {
  const candidates = [
    join(homedir(), '.local', 'bin', 'codex'),
    '/usr/local/bin/codex',
    '/usr/bin/codex',
  ];
  for (const p of candidates) {
    if (existsSync(p)) return p;
  }
  try {
    return execSync('which codex 2>/dev/null || where codex 2>nul', { encoding: 'utf-8' }).trim().split('\n')[0];
  } catch {
    return undefined;
  }
}

const codexPath = findCodexCli();
if (codexPath) {
  log(`Found Codex CLI at ${codexPath}`);
} else {
  log('Codex CLI not found — will use SDK if available');
}

log('Codex sidecar started');
send({ type: 'ready' });
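`mapSandboxMode()` and the approval-policy ternary above together decide how much autonomy a Codex session gets. Restated as one pure helper (illustrative only, same rules as in the file; `codexSafetySettings` is not a name that exists in the codebase):

```typescript
// Combines the two safety decisions from codex-runner.ts handleQuery:
// an explicit providerConfig sandbox always wins; otherwise bypassPermissions
// maps to full access with no approvals, and anything else stays workspace-scoped.
function codexSafetySettings(
  permissionMode?: string,
  configSandbox?: string,
): { sandbox: string; approval_policy: string } {
  const sandbox =
    configSandbox ??
    (permissionMode === 'bypassPermissions' ? 'danger-full-access' : 'workspace-write');
  const approval_policy = permissionMode === 'bypassPermissions' ? 'never' : 'on-request';
  return { sandbox, approval_policy };
}
```

Keeping the two decisions adjacent makes the invariant visible: a session never gets `danger-full-access` while still pausing for approvals, and an operator-supplied sandbox overrides only the sandbox, not the approval policy.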
269  sidecar/ollama-runner.ts  Normal file
@@ -0,0 +1,269 @@
// Ollama Runner — Node.js sidecar entry point for local Ollama provider
// Spawned by Rust SidecarManager, communicates via stdio NDJSON
// Uses direct HTTP to Ollama REST API (no external dependencies)

import { stdin, stdout, stderr } from 'process';
import { createInterface } from 'readline';

const rl = createInterface({ input: stdin });

const sessions = new Map<string, { controller: AbortController }>();

function send(msg: Record<string, unknown>) {
  stdout.write(JSON.stringify(msg) + '\n');
}

function log(message: string) {
  stderr.write(`[ollama-sidecar] ${message}\n`);
}

rl.on('line', (line: string) => {
  try {
    const msg = JSON.parse(line);
    handleMessage(msg).catch((err: unknown) => {
      log(`Unhandled error in message handler: ${err}`);
    });
  } catch {
    log(`Invalid JSON: ${line}`);
  }
});

interface QueryMessage {
  type: 'query';
  sessionId: string;
  prompt: string;
  cwd?: string;
  model?: string;
  systemPrompt?: string;
  providerConfig?: Record<string, unknown>;
}

interface StopMessage {
  type: 'stop';
  sessionId: string;
}

async function handleMessage(msg: Record<string, unknown>) {
  switch (msg.type) {
    case 'ping':
      send({ type: 'pong' });
      break;
    case 'query':
      await handleQuery(msg as unknown as QueryMessage);
      break;
    case 'stop':
      handleStop(msg as unknown as StopMessage);
      break;
    default:
      send({ type: 'error', message: `Unknown message type: ${msg.type}` });
  }
}

async function handleQuery(msg: QueryMessage) {
  const { sessionId, prompt, cwd, model, systemPrompt, providerConfig } = msg;

  if (sessions.has(sessionId)) {
    send({ type: 'error', sessionId, message: 'Session already running' });
    return;
  }

  const ollamaHost = (providerConfig?.host as string) || process.env.OLLAMA_HOST || 'http://127.0.0.1:11434';
  const ollamaModel = model || 'qwen3:8b';
  const numCtx = (providerConfig?.num_ctx as number) || 32768;
  const think = (providerConfig?.think as boolean) ?? false;

  log(`Starting Ollama session ${sessionId} with model ${ollamaModel}`);

  // Health check
  try {
    const healthRes = await fetch(`${ollamaHost}/api/version`);
    if (!healthRes.ok) {
      send({ type: 'agent_error', sessionId, message: `Ollama not reachable at ${ollamaHost} (HTTP ${healthRes.status})` });
      return;
    }
  } catch (err: unknown) {
    const errMsg = err instanceof Error ? err.message : String(err);
    send({ type: 'agent_error', sessionId, message: `Cannot connect to Ollama at ${ollamaHost}: ${errMsg}` });
    return;
  }

  const controller = new AbortController();
  sessions.set(sessionId, { controller });
  send({ type: 'agent_started', sessionId });

  // Emit init event
  send({
    type: 'agent_event',
    sessionId,
    event: {
      type: 'system',
      subtype: 'init',
      session_id: sessionId,
      model: ollamaModel,
      cwd: cwd || process.cwd(),
    },
  });

  // Build messages array
  const messages: Array<{ role: string; content: string }> = [];
  if (systemPrompt && typeof systemPrompt === 'string') {
    messages.push({ role: 'system', content: systemPrompt });
  }
  messages.push({ role: 'user', content: prompt });

  try {
    const res = await fetch(`${ollamaHost}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: ollamaModel,
        messages,
        stream: true,
        options: { num_ctx: numCtx },
        think,
      }),
      signal: controller.signal,
    });

    if (!res.ok) {
      const errBody = await res.text();
      let errMsg: string;
      try {
        const parsed = JSON.parse(errBody);
        errMsg = parsed.error || errBody;
      } catch {
        errMsg = errBody;
      }
      send({ type: 'agent_error', sessionId, message: `Ollama error (${res.status}): ${errMsg}` });
      sessions.delete(sessionId);
      return;
    }

    if (!res.body) {
      send({ type: 'agent_error', sessionId, message: 'No response body from Ollama' });
      sessions.delete(sessionId);
      return;
    }

    // Parse NDJSON stream
    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (true) {
      if (controller.signal.aborted) break;

      const { done, value } = await reader.read();
      if (done) break;

      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() || '';

      for (const line of lines) {
        const trimmed = line.trim();
        if (!trimmed) continue;

        try {
          const chunk = JSON.parse(trimmed) as Record<string, unknown>;

          // Check for mid-stream error
          if (typeof chunk.error === 'string') {
            send({
              type: 'agent_event',
              sessionId,
              event: { type: 'error', message: chunk.error },
            });
            continue;
          }

          // Forward as chunk event for the message adapter
          send({
            type: 'agent_event',
            sessionId,
            event: {
              type: 'chunk',
              message: chunk.message,
              done: chunk.done,
              done_reason: chunk.done_reason,
              model: chunk.model,
              prompt_eval_count: chunk.prompt_eval_count,
              eval_count: chunk.eval_count,
              eval_duration: chunk.eval_duration,
              total_duration: chunk.total_duration,
            },
          });
        } catch {
          log(`Failed to parse Ollama chunk: ${trimmed}`);
        }
      }
    }

    // Process remaining buffer
    if (buffer.trim()) {
      try {
        const chunk = JSON.parse(buffer.trim()) as Record<string, unknown>;
        send({
          type: 'agent_event',
          sessionId,
          event: {
            type: 'chunk',
            message: chunk.message,
            done: chunk.done,
            done_reason: chunk.done_reason,
            model: chunk.model,
            prompt_eval_count: chunk.prompt_eval_count,
            eval_count: chunk.eval_count,
            eval_duration: chunk.eval_duration,
            total_duration: chunk.total_duration,
          },
        });
      } catch {
        log(`Failed to parse final Ollama buffer: ${buffer}`);
      }
    }

    sessions.delete(sessionId);
    send({
      type: 'agent_stopped',
      sessionId,
      exitCode: 0,
      signal: null,
    });
  } catch (err: unknown) {
    sessions.delete(sessionId);
    const errMsg = err instanceof Error ? err.message : String(err);

    if (controller.signal.aborted) {
      log(`Ollama session ${sessionId} aborted`);
      send({
        type: 'agent_stopped',
        sessionId,
        exitCode: null,
        signal: 'SIGTERM',
      });
    } else {
      log(`Ollama session ${sessionId} error: ${errMsg}`);
      send({
        type: 'agent_error',
        sessionId,
        message: errMsg,
      });
    }
  }
}

function handleStop(msg: StopMessage) {
  const { sessionId } = msg;
  const session = sessions.get(sessionId);
  if (!session) {
    send({ type: 'error', sessionId, message: 'Session not found' });
    return;
  }

  log(`Stopping Ollama session ${sessionId}`);
  session.controller.abort();
}

log('Ollama sidecar started');
send({ type: 'ready' });
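The stream loop above uses a standard NDJSON framing pattern: append the decoded chunk to a buffer, split on newlines, and carry the partial tail over to the next read. Factored out as a pure function for clarity (a sketch; the file inlines this logic rather than defining a helper):

```typescript
// NDJSON framing: given the carried-over partial line and a new chunk,
// return the complete non-empty lines plus the new partial tail.
function splitNdjson(
  carry: string,
  chunk: string,
): { lines: string[]; rest: string } {
  const parts = (carry + chunk).split('\n');
  const rest = parts.pop() ?? ''; // last element is always incomplete (or empty)
  return { lines: parts.filter((l) => l.trim().length > 0), rest };
}
```

The key property is that a JSON object split across two network reads is never parsed half-finished: the tail stays in `rest` until its closing newline arrives.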
481  sidecar/package-lock.json  generated  Normal file
@@ -0,0 +1,481 @@
{
  "name": "bterminal-sidecar",
  "version": "0.1.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "bterminal-sidecar",
      "version": "0.1.0",
      "devDependencies": {
        "esbuild": "0.25.4"
      }
    },
    "node_modules/@esbuild/aix-ppc64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.25.4.tgz",
      "integrity": "sha512-1VCICWypeQKhVbE9oW/sJaAmjLxhVqacdkvPLEjwlttjfwENRSClS8EjBz0KzRyFSCPDIkuXW34Je/vk7zdB7Q==",
      "cpu": [
        "ppc64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "aix"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/android-arm": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.25.4.tgz",
      "integrity": "sha512-QNdQEps7DfFwE3hXiU4BZeOV68HHzYwGd0Nthhd3uCkkEKK7/R6MTgM0P7H7FAs5pU/DIWsviMmEGxEoxIZ+ZQ==",
      "cpu": [
        "arm"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "android"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/android-arm64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.25.4.tgz",
      "integrity": "sha512-bBy69pgfhMGtCnwpC/x5QhfxAz/cBgQ9enbtwjf6V9lnPI/hMyT9iWpR1arm0l3kttTr4L0KSLpKmLp/ilKS9A==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "android"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/android-x64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.25.4.tgz",
      "integrity": "sha512-TVhdVtQIFuVpIIR282btcGC2oGQoSfZfmBdTip2anCaVYcqWlZXGcdcKIUklfX2wj0JklNYgz39OBqh2cqXvcQ==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "android"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/darwin-arm64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.25.4.tgz",
      "integrity": "sha512-Y1giCfM4nlHDWEfSckMzeWNdQS31BQGs9/rouw6Ub91tkK79aIMTH3q9xHvzH8d0wDru5Ci0kWB8b3up/nl16g==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "darwin"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/darwin-x64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.25.4.tgz",
      "integrity": "sha512-CJsry8ZGM5VFVeyUYB3cdKpd/H69PYez4eJh1W/t38vzutdjEjtP7hB6eLKBoOdxcAlCtEYHzQ/PJ/oU9I4u0A==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "darwin"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/freebsd-arm64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.25.4.tgz",
      "integrity": "sha512-yYq+39NlTRzU2XmoPW4l5Ifpl9fqSk0nAJYM/V/WUGPEFfek1epLHJIkTQM6bBs1swApjO5nWgvr843g6TjxuQ==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "freebsd"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/freebsd-x64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.25.4.tgz",
      "integrity": "sha512-0FgvOJ6UUMflsHSPLzdfDnnBBVoCDtBTVyn/MrWloUNvq/5SFmh13l3dvgRPkDihRxb77Y17MbqbCAa2strMQQ==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "freebsd"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/linux-arm": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.25.4.tgz",
      "integrity": "sha512-kro4c0P85GMfFYqW4TWOpvmF8rFShbWGnrLqlzp4X1TNWjRY3JMYUfDCtOxPKOIY8B0WC8HN51hGP4I4hz4AaQ==",
      "cpu": [
        "arm"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/linux-arm64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.25.4.tgz",
      "integrity": "sha512-+89UsQTfXdmjIvZS6nUnOOLoXnkUTB9hR5QAeLrQdzOSWZvNSAXAtcRDHWtqAUtAmv7ZM1WPOOeSxDzzzMogiQ==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/linux-ia32": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.25.4.tgz",
      "integrity": "sha512-yTEjoapy8UP3rv8dB0ip3AfMpRbyhSN3+hY8mo/i4QXFeDxmiYbEKp3ZRjBKcOP862Ua4b1PDfwlvbuwY7hIGQ==",
      "cpu": [
        "ia32"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/@esbuild/linux-loong64": {
      "version": "0.25.4",
      "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.25.4.tgz",
|
||||
"integrity": "sha512-NeqqYkrcGzFwi6CGRGNMOjWGGSYOpqwCjS9fvaUlX5s3zwOtn1qwg1s2iE2svBe4Q/YOG1q6875lcAoQK/F4VA==",
|
||||
"cpu": [
|
||||
"loong64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/linux-mips64el": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.25.4.tgz",
|
||||
"integrity": "sha512-IcvTlF9dtLrfL/M8WgNI/qJYBENP3ekgsHbYUIzEzq5XJzzVEV/fXY9WFPfEEXmu3ck2qJP8LG/p3Q8f7Zc2Xg==",
|
||||
"cpu": [
|
||||
"mips64el"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/linux-ppc64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.25.4.tgz",
|
||||
"integrity": "sha512-HOy0aLTJTVtoTeGZh4HSXaO6M95qu4k5lJcH4gxv56iaycfz1S8GO/5Jh6X4Y1YiI0h7cRyLi+HixMR+88swag==",
|
||||
"cpu": [
|
||||
"ppc64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/linux-riscv64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.25.4.tgz",
|
||||
"integrity": "sha512-i8JUDAufpz9jOzo4yIShCTcXzS07vEgWzyX3NH2G7LEFVgrLEhjwL3ajFE4fZI3I4ZgiM7JH3GQ7ReObROvSUA==",
|
||||
"cpu": [
|
||||
"riscv64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/linux-s390x": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.25.4.tgz",
|
||||
"integrity": "sha512-jFnu+6UbLlzIjPQpWCNh5QtrcNfMLjgIavnwPQAfoGx4q17ocOU9MsQ2QVvFxwQoWpZT8DvTLooTvmOQXkO51g==",
|
||||
"cpu": [
|
||||
"s390x"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/linux-x64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.25.4.tgz",
|
||||
"integrity": "sha512-6e0cvXwzOnVWJHq+mskP8DNSrKBr1bULBvnFLpc1KY+d+irZSgZ02TGse5FsafKS5jg2e4pbvK6TPXaF/A6+CA==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/netbsd-arm64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/netbsd-arm64/-/netbsd-arm64-0.25.4.tgz",
|
||||
"integrity": "sha512-vUnkBYxZW4hL/ie91hSqaSNjulOnYXE1VSLusnvHg2u3jewJBz3YzB9+oCw8DABeVqZGg94t9tyZFoHma8gWZQ==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"netbsd"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/netbsd-x64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.25.4.tgz",
|
||||
"integrity": "sha512-XAg8pIQn5CzhOB8odIcAm42QsOfa98SBeKUdo4xa8OvX8LbMZqEtgeWE9P/Wxt7MlG2QqvjGths+nq48TrUiKw==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"netbsd"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/openbsd-arm64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/openbsd-arm64/-/openbsd-arm64-0.25.4.tgz",
|
||||
"integrity": "sha512-Ct2WcFEANlFDtp1nVAXSNBPDxyU+j7+tId//iHXU2f/lN5AmO4zLyhDcpR5Cz1r08mVxzt3Jpyt4PmXQ1O6+7A==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"openbsd"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/openbsd-x64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.25.4.tgz",
|
||||
"integrity": "sha512-xAGGhyOQ9Otm1Xu8NT1ifGLnA6M3sJxZ6ixylb+vIUVzvvd6GOALpwQrYrtlPouMqd/vSbgehz6HaVk4+7Afhw==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"openbsd"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/sunos-x64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.25.4.tgz",
|
||||
"integrity": "sha512-Mw+tzy4pp6wZEK0+Lwr76pWLjrtjmJyUB23tHKqEDP74R3q95luY/bXqXZeYl4NYlvwOqoRKlInQialgCKy67Q==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"sunos"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/win32-arm64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.25.4.tgz",
|
||||
"integrity": "sha512-AVUP428VQTSddguz9dO9ngb+E5aScyg7nOeJDrF1HPYu555gmza3bDGMPhmVXL8svDSoqPCsCPjb265yG/kLKQ==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/win32-ia32": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.25.4.tgz",
|
||||
"integrity": "sha512-i1sW+1i+oWvQzSgfRcxxG2k4I9n3O9NRqy8U+uugaT2Dy7kLO9Y7wI72haOahxceMX8hZAzgGou1FhndRldxRg==",
|
||||
"cpu": [
|
||||
"ia32"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/@esbuild/win32-x64": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.25.4.tgz",
|
||||
"integrity": "sha512-nOT2vZNw6hJ+z43oP1SPea/G/6AbN6X+bGNhNuq8NtRHy4wsMhw765IKLNmnjek7GvjWBYQ8Q5VBoYTFg9y1UQ==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/esbuild": {
|
||||
"version": "0.25.4",
|
||||
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.4.tgz",
|
||||
"integrity": "sha512-8pgjLUcUjcgDg+2Q4NYXnPbo/vncAY4UmyaCm0jZevERqCHZIaWwdJHkf8XQtu4AxSKCdvrUbT0XUr1IdZzI8Q==",
|
||||
"dev": true,
|
||||
"hasInstallScript": true,
|
||||
"license": "MIT",
|
||||
"bin": {
|
||||
"esbuild": "bin/esbuild"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
},
|
||||
"optionalDependencies": {
|
||||
"@esbuild/aix-ppc64": "0.25.4",
|
||||
"@esbuild/android-arm": "0.25.4",
|
||||
"@esbuild/android-arm64": "0.25.4",
|
||||
"@esbuild/android-x64": "0.25.4",
|
||||
"@esbuild/darwin-arm64": "0.25.4",
|
||||
"@esbuild/darwin-x64": "0.25.4",
|
||||
"@esbuild/freebsd-arm64": "0.25.4",
|
||||
"@esbuild/freebsd-x64": "0.25.4",
|
||||
"@esbuild/linux-arm": "0.25.4",
|
||||
"@esbuild/linux-arm64": "0.25.4",
|
||||
"@esbuild/linux-ia32": "0.25.4",
|
||||
"@esbuild/linux-loong64": "0.25.4",
|
||||
"@esbuild/linux-mips64el": "0.25.4",
|
||||
"@esbuild/linux-ppc64": "0.25.4",
|
||||
"@esbuild/linux-riscv64": "0.25.4",
|
||||
"@esbuild/linux-s390x": "0.25.4",
|
||||
"@esbuild/linux-x64": "0.25.4",
|
||||
"@esbuild/netbsd-arm64": "0.25.4",
|
||||
"@esbuild/netbsd-x64": "0.25.4",
|
||||
"@esbuild/openbsd-arm64": "0.25.4",
|
||||
"@esbuild/openbsd-x64": "0.25.4",
|
||||
"@esbuild/sunos-x64": "0.25.4",
|
||||
"@esbuild/win32-arm64": "0.25.4",
|
||||
"@esbuild/win32-ia32": "0.25.4",
|
||||
"@esbuild/win32-x64": "0.25.4"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
12
sidecar/package.json
Normal file

@@ -0,0 +1,12 @@

{
  "name": "bterminal-sidecar",
  "private": true,
  "version": "0.1.0",
  "type": "module",
  "scripts": {
    "build": "esbuild claude-runner.ts --bundle --platform=node --target=node20 --outfile=dist/claude-runner.mjs --format=esm"
  },
  "devDependencies": {
    "esbuild": "0.25.4"
  }
}
4
src-tauri/.gitignore
vendored
Normal file

@@ -0,0 +1,4 @@

# Generated by Cargo
# will have compiled files and executables
/target/
/gen/schemas
49
src-tauri/Cargo.toml
Normal file

@@ -0,0 +1,49 @@

[package]
name = "agent-orchestrator"
version = "0.1.0"
description = "Multi-session Claude agent dashboard"
authors = ["DexterFromLab"]
license = "MIT"
edition = "2021"
rust-version = "1.77.2"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[lib]
name = "agent_orchestrator_lib"
crate-type = ["staticlib", "cdylib", "rlib"]

[build-dependencies]
tauri-build = { version = "2.5.6", features = [] }

[dependencies]
bterminal-core = { path = "../bterminal-core" }
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
log = "0.4"
tauri = { version = "2.10.3", features = [] }
rusqlite = { version = "0.31", features = ["bundled-full"] }
dirs = "5"
notify = { version = "6", features = ["macos_fsevent"] }
tauri-plugin-updater = "2.10.0"
tauri-plugin-dialog = "2"
rfd = { version = "0.16", default-features = false, features = ["gtk3"] }
uuid = { version = "1", features = ["v4"] }
tokio-tungstenite = { version = "0.21", features = ["native-tls"] }
tokio = { version = "1", features = ["full"] }
futures-util = "0.3"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
opentelemetry = "0.28"
opentelemetry_sdk = { version = "0.28", features = ["rt-tokio"] }
opentelemetry-otlp = { version = "0.28", features = ["http-proto", "reqwest-client"] }
tracing-opentelemetry = "0.29"
keyring = { version = "3", features = ["linux-native"] }
notify-rust = "4"
native-tls = "0.2"
tokio-native-tls = "0.3"
sha2 = "0.10"
hex = "0.4"

[dev-dependencies]
tempfile = "3"
3
src-tauri/build.rs
Normal file

@@ -0,0 +1,3 @@

fn main() {
    tauri_build::build()
}
12
src-tauri/capabilities/default.json
Normal file

@@ -0,0 +1,12 @@

{
  "$schema": "../gen/schemas/desktop-schema.json",
  "identifier": "default",
  "description": "enables the default permissions",
  "windows": [
    "main"
  ],
  "permissions": [
    "core:default",
    "dialog:default"
  ]
}
BIN src-tauri/icons/128x128.png (new file, 4.5 KiB)
BIN src-tauri/icons/128x128@2x.png (new file, 17 KiB)
BIN src-tauri/icons/32x32.png (new file, 1.7 KiB)
BIN src-tauri/icons/Square107x107Logo.png (new file, 9 KiB)
BIN src-tauri/icons/Square142x142Logo.png (new file, 12 KiB)
BIN src-tauri/icons/Square150x150Logo.png (new file, 13 KiB)
BIN src-tauri/icons/Square284x284Logo.png (new file, 25 KiB)
BIN src-tauri/icons/Square30x30Logo.png (new file, 2 KiB)
BIN src-tauri/icons/Square310x310Logo.png (new file, 28 KiB)
BIN src-tauri/icons/Square44x44Logo.png (new file, 3.3 KiB)
BIN src-tauri/icons/Square71x71Logo.png (new file, 5.9 KiB)
BIN src-tauri/icons/Square89x89Logo.png (new file, 7.4 KiB)
BIN src-tauri/icons/StoreLogo.png (new file, 3.9 KiB)
BIN src-tauri/icons/icon.icns (new file)
BIN src-tauri/icons/icon.ico (new file, 16 KiB)
BIN src-tauri/icons/icon.png (new file, 46 KiB)
1896
src-tauri/src/btmsg.rs
Normal file

766
src-tauri/src/bttask.rs
Normal file

@@ -0,0 +1,766 @@
// bttask — Read/write access to the task board SQLite tables in btmsg.db.
// The tasks table is created by the bttask CLI; the database is shared with btmsg.
// Path configurable via init() for test isolation.

use rusqlite::{params, Connection, OpenFlags};
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::OnceLock;

static DB_PATH: OnceLock<PathBuf> = OnceLock::new();

/// Set the bttask database path. Must be called before any db access.
/// Called from lib.rs setup with AppConfig-resolved path.
pub fn init(path: PathBuf) {
    let _ = DB_PATH.set(path);
}

fn db_path() -> PathBuf {
    DB_PATH.get().cloned().unwrap_or_else(|| {
        dirs::data_dir()
            .unwrap_or_else(|| PathBuf::from("."))
            .join("bterminal")
            .join("btmsg.db")
    })
}

fn open_db() -> Result<Connection, String> {
    let path = db_path();
    if !path.exists() {
        return Err("btmsg database not found".into());
    }
    let conn = Connection::open_with_flags(&path, OpenFlags::SQLITE_OPEN_READ_WRITE)
        .map_err(|e| format!("Failed to open btmsg.db: {e}"))?;
    conn.query_row("PRAGMA journal_mode=WAL", [], |_| Ok(()))
        .map_err(|e| format!("Failed to set WAL mode: {e}"))?;
    conn.query_row("PRAGMA busy_timeout = 5000", [], |_| Ok(()))
        .map_err(|e| format!("Failed to set busy_timeout: {e}"))?;

    // Migration: add version column if missing
    let has_version: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM pragma_table_info('tasks') WHERE name='version'",
            [],
            |row| row.get(0),
        )
        .unwrap_or(0);
    if has_version == 0 {
        conn.execute("ALTER TABLE tasks ADD COLUMN version INTEGER DEFAULT 1", [])
            .map_err(|e| format!("Migration (version column) failed: {e}"))?;
    }

    Ok(conn)
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Task {
    pub id: String,
    pub title: String,
    pub description: String,
    pub status: String,
    pub priority: String,
    pub assigned_to: Option<String>,
    pub created_by: String,
    pub group_id: String,
    pub parent_task_id: Option<String>,
    pub sort_order: i32,
    pub created_at: String,
    pub updated_at: String,
    pub version: i64,
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskComment {
    pub id: String,
    pub task_id: String,
    pub agent_id: String,
    pub content: String,
    pub created_at: String,
}

/// Get all tasks for a group
pub fn list_tasks(group_id: &str) -> Result<Vec<Task>, String> {
    let db = open_db()?;
    let mut stmt = db
        .prepare(
            "SELECT id, title, description, status, priority, assigned_to,
                    created_by, group_id, parent_task_id, sort_order,
                    created_at, updated_at, version
             FROM tasks WHERE group_id = ?1
             ORDER BY sort_order ASC, created_at DESC",
        )
        .map_err(|e| format!("Query error: {e}"))?;

    let rows = stmt
        .query_map(params![group_id], |row| {
            Ok(Task {
                id: row.get("id")?,
                title: row.get("title")?,
                description: row.get::<_, String>("description").unwrap_or_default(),
                status: row.get::<_, String>("status").unwrap_or_else(|_| "todo".into()),
                priority: row.get::<_, String>("priority").unwrap_or_else(|_| "medium".into()),
                assigned_to: row.get("assigned_to")?,
                created_by: row.get("created_by")?,
                group_id: row.get("group_id")?,
                parent_task_id: row.get("parent_task_id")?,
                sort_order: row.get::<_, i32>("sort_order").unwrap_or(0),
                created_at: row.get::<_, String>("created_at").unwrap_or_default(),
                updated_at: row.get::<_, String>("updated_at").unwrap_or_default(),
                version: row.get::<_, i64>("version").unwrap_or(1),
            })
        })
        .map_err(|e| format!("Query error: {e}"))?;

    rows.collect::<Result<Vec<_>, _>>()
        .map_err(|e| format!("Row error: {e}"))
}

/// Get comments for a task
pub fn task_comments(task_id: &str) -> Result<Vec<TaskComment>, String> {
    let db = open_db()?;
    let mut stmt = db
        .prepare(
            "SELECT id, task_id, agent_id, content, created_at
             FROM task_comments WHERE task_id = ?1
             ORDER BY created_at ASC",
        )
        .map_err(|e| format!("Query error: {e}"))?;

    let rows = stmt
        .query_map(params![task_id], |row| {
            Ok(TaskComment {
                id: row.get("id")?,
                task_id: row.get("task_id")?,
                agent_id: row.get("agent_id")?,
                content: row.get("content")?,
                created_at: row.get::<_, String>("created_at").unwrap_or_default(),
            })
        })
        .map_err(|e| format!("Query error: {e}"))?;

    rows.collect::<Result<Vec<_>, _>>()
        .map_err(|e| format!("Row error: {e}"))
}

/// Update task status with optimistic locking.
/// `expected_version` must match the current version in the database.
/// Returns the new version on success.
/// When transitioning to 'review', auto-posts to the #review-queue channel (creating it if needed).
pub fn update_task_status(task_id: &str, status: &str, expected_version: i64) -> Result<i64, String> {
    let valid = ["todo", "progress", "review", "done", "blocked"];
    if !valid.contains(&status) {
        return Err(format!("Invalid status '{}'. Valid: {:?}", status, valid));
    }
    let db = open_db()?;

    // Fetch task info before update (for channel notification)
    let task_title: Option<(String, String)> = if status == "review" {
        db.query_row(
            "SELECT title, group_id FROM tasks WHERE id = ?1",
            params![task_id],
            |row| Ok((row.get::<_, String>("title")?, row.get::<_, String>("group_id")?)),
        )
        .ok()
    } else {
        None
    };

    let rows_affected = db
        .execute(
            "UPDATE tasks SET status = ?1, version = version + 1, updated_at = datetime('now')
             WHERE id = ?2 AND version = ?3",
            params![status, task_id, expected_version],
        )
        .map_err(|e| format!("Update error: {e}"))?;

    if rows_affected == 0 {
        return Err("Task was modified by another agent (version conflict)".into());
    }

    let new_version = expected_version + 1;

    // Auto-post to #review-queue channel on review transition
    if let Some((title, group_id)) = task_title {
        notify_review_channel(&db, &group_id, task_id, &title);
    }

    Ok(new_version)
}
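The version-conflict error from `update_task_status` implies a read-retry loop on the caller's side. A minimal sketch of that compare-and-swap pattern (illustrative only, not part of bttask.rs; the in-memory `Board` is a hypothetical stand-in for the SQLite table, mirroring the `WHERE id = ?2 AND version = ?3` guard):

```rust
use std::collections::HashMap;

struct Board {
    // task_id -> (status, version), standing in for the tasks table
    tasks: HashMap<String, (String, i64)>,
}

impl Board {
    // Mirrors update_task_status: succeeds only if expected_version matches,
    // then bumps the version, exactly like the guarded UPDATE statement.
    fn update_status(&mut self, id: &str, status: &str, expected_version: i64) -> Result<i64, String> {
        match self.tasks.get_mut(id) {
            Some((s, v)) if *v == expected_version => {
                *s = status.to_string();
                *v += 1;
                Ok(*v)
            }
            Some(_) => Err("version conflict".into()),
            None => Err("task not found".into()),
        }
    }
}

fn main() {
    let mut board = Board {
        tasks: HashMap::from([("t1".to_string(), ("todo".to_string(), 1))]),
    };
    // First writer wins and bumps the version 1 -> 2.
    assert_eq!(board.update_status("t1", "progress", 1), Ok(2));
    // A second writer still holding version 1 is rejected...
    assert!(board.update_status("t1", "review", 1).is_err());
    // ...and must re-read the current version before retrying.
    assert_eq!(board.update_status("t1", "review", 2), Ok(3));
}
```

A caller of the real function would do the same: on a conflict error, re-fetch the task via `list_tasks`, take the fresh `version`, and retry the transition.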
/// Post a notification to #review-queue channel (best-effort, never fails the parent operation)
fn notify_review_channel(db: &Connection, group_id: &str, task_id: &str, title: &str) {
    // Find #review-queue channel for this group
    let channel_id: Option<String> = db
        .query_row(
            "SELECT id FROM channels WHERE name = 'review-queue' AND group_id = ?1",
            params![group_id],
            |row| row.get(0),
        )
        .ok();

    let channel_id = match channel_id {
        Some(id) => id,
        None => {
            // Auto-create #review-queue channel
            match ensure_review_channels(db, group_id) {
                Some(id) => id,
                None => return, // Give up silently
            }
        }
    };

    let msg_id = uuid::Uuid::new_v4().to_string();
    let content = format!("📋 Task ready for review: **{}** (`{}`)", title, task_id);
    let _ = db.execute(
        "INSERT INTO channel_messages (id, channel_id, from_agent, content) VALUES (?1, ?2, 'system', ?3)",
        params![msg_id, channel_id, content],
    );
}

/// Ensure #review-queue and #review-log channels exist for a group.
/// Returns the review-queue channel ID if created/found.
fn ensure_review_channels(db: &Connection, group_id: &str) -> Option<String> {
    // Create channels only if they don't already exist
    for name in &["review-queue", "review-log"] {
        let exists: bool = db
            .query_row(
                "SELECT COUNT(*) > 0 FROM channels WHERE name = ?1 AND group_id = ?2",
                params![name, group_id],
                |row| row.get(0),
            )
            .unwrap_or(false);
        if !exists {
            let id = uuid::Uuid::new_v4().to_string();
            let _ = db.execute(
                "INSERT INTO channels (id, name, group_id, created_by) VALUES (?1, ?2, ?3, 'system')",
                params![id, name, group_id],
            );
        }
    }

    // Return the review-queue channel ID
    db.query_row(
        "SELECT id FROM channels WHERE name = 'review-queue' AND group_id = ?1",
        params![group_id],
        |row| row.get(0),
    )
    .ok()
}

/// Count tasks in 'review' status for a group
pub fn review_queue_count(group_id: &str) -> Result<i64, String> {
    let db = open_db()?;
    db.query_row(
        "SELECT COUNT(*) FROM tasks WHERE group_id = ?1 AND status = 'review'",
        params![group_id],
        |row| row.get(0),
    )
    .map_err(|e| format!("Query error: {e}"))
}

/// Add a comment to a task
pub fn add_comment(task_id: &str, agent_id: &str, content: &str) -> Result<String, String> {
    let db = open_db()?;
    let id = uuid::Uuid::new_v4().to_string();
    db.execute(
        "INSERT INTO task_comments (id, task_id, agent_id, content) VALUES (?1, ?2, ?3, ?4)",
        params![id, task_id, agent_id, content],
    )
    .map_err(|e| format!("Insert error: {e}"))?;
    Ok(id)
}

/// Create a new task
pub fn create_task(
    title: &str,
    description: &str,
    priority: &str,
    group_id: &str,
    created_by: &str,
    assigned_to: Option<&str>,
) -> Result<String, String> {
    let db = open_db()?;
    let id = uuid::Uuid::new_v4().to_string();
    db.execute(
        "INSERT INTO tasks (id, title, description, priority, group_id, created_by, assigned_to)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
        params![id, title, description, priority, group_id, created_by, assigned_to],
    )
    .map_err(|e| format!("Insert error: {e}"))?;
    Ok(id)
}

/// Delete a task
pub fn delete_task(task_id: &str) -> Result<(), String> {
    let db = open_db()?;
    db.execute("DELETE FROM task_comments WHERE task_id = ?1", params![task_id])
        .map_err(|e| format!("Delete comments error: {e}"))?;
    db.execute("DELETE FROM tasks WHERE id = ?1", params![task_id])
        .map_err(|e| format!("Delete task error: {e}"))?;
    Ok(())
}
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use rusqlite::Connection;
|
||||
|
||||
fn test_db() -> Connection {
|
||||
let conn = Connection::open_in_memory().unwrap();
|
||||
conn.execute_batch(
|
||||
"CREATE TABLE tasks (
|
||||
id TEXT PRIMARY KEY,
|
||||
title TEXT NOT NULL,
|
||||
description TEXT DEFAULT '',
|
||||
status TEXT DEFAULT 'todo',
|
||||
priority TEXT DEFAULT 'medium',
|
||||
assigned_to TEXT,
|
||||
created_by TEXT NOT NULL,
|
||||
group_id TEXT NOT NULL,
|
||||
parent_task_id TEXT,
|
||||
sort_order INTEGER DEFAULT 0,
|
||||
created_at TEXT DEFAULT (datetime('now')),
|
||||
updated_at TEXT DEFAULT (datetime('now')),
|
||||
version INTEGER DEFAULT 1
|
||||
);
|
||||
CREATE TABLE task_comments (
|
||||
id TEXT PRIMARY KEY,
|
||||
task_id TEXT NOT NULL,
|
||||
agent_id TEXT NOT NULL,
|
||||
content TEXT NOT NULL,
|
||||
created_at TEXT DEFAULT (datetime('now'))
|
||||
);
|
||||
CREATE TABLE channels (
|
||||
id TEXT PRIMARY KEY,
|
||||
name TEXT NOT NULL,
|
||||
group_id TEXT NOT NULL,
|
||||
created_by TEXT NOT NULL,
|
||||
created_at TEXT DEFAULT (datetime('now'))
|
||||
);
|
||||
CREATE TABLE channel_messages (
|
||||
id TEXT PRIMARY KEY,
|
||||
channel_id TEXT NOT NULL,
|
||||
from_agent TEXT NOT NULL,
|
||||
content TEXT NOT NULL,
|
||||
created_at TEXT DEFAULT (datetime('now'))
|
||||
);",
|
||||
)
|
||||
.unwrap();
|
||||
conn
|
||||
}
|
||||
|
||||
// ---- REGRESSION: list_tasks named column access ----
|
||||
|
||||
#[test]
|
||||
fn test_list_tasks_named_column_access() {
|
||||
let conn = test_db();
|
||||
conn.execute(
|
||||
"INSERT INTO tasks (id, title, description, status, priority, assigned_to, created_by, group_id, sort_order)
|
||||
VALUES ('t1', 'Fix bug', 'Critical fix', 'progress', 'high', 'a1', 'admin', 'g1', 1)",
|
||||
[],
|
||||
).unwrap();
|
||||
conn.execute(
|
||||
"INSERT INTO tasks (id, title, description, status, priority, assigned_to, created_by, group_id, sort_order)
|
||||
VALUES ('t2', 'Add tests', '', 'todo', 'medium', NULL, 'a1', 'g1', 2)",
|
||||
[],
|
||||
).unwrap();
|
||||
|
||||
let mut stmt = conn.prepare(
|
||||
"SELECT id, title, description, status, priority, assigned_to,
|
||||
created_by, group_id, parent_task_id, sort_order,
|
||||
created_at, updated_at, version
|
||||
FROM tasks WHERE group_id = ?1
|
||||
ORDER BY sort_order ASC, created_at DESC",
|
||||
).unwrap();
|
||||
|
||||
let tasks: Vec<Task> = stmt.query_map(params!["g1"], |row| {
|
||||
Ok(Task {
|
||||
id: row.get("id")?,
|
||||
title: row.get("title")?,
|
||||
description: row.get::<_, String>("description").unwrap_or_default(),
|
||||
status: row.get::<_, String>("status").unwrap_or_else(|_| "todo".into()),
|
||||
priority: row.get::<_, String>("priority").unwrap_or_else(|_| "medium".into()),
|
||||
assigned_to: row.get("assigned_to")?,
|
||||
created_by: row.get("created_by")?,
|
||||
group_id: row.get("group_id")?,
|
||||
parent_task_id: row.get("parent_task_id")?,
|
||||
sort_order: row.get::<_, i32>("sort_order").unwrap_or(0),
|
||||
created_at: row.get::<_, String>("created_at").unwrap_or_default(),
|
||||
updated_at: row.get::<_, String>("updated_at").unwrap_or_default(),
|
||||
version: row.get::<_, i64>("version").unwrap_or(1),
|
||||
})
|
||||
}).unwrap().collect::<Result<Vec<_>, _>>().unwrap();
|
||||
|
||||
assert_eq!(tasks.len(), 2);
|
||||
assert_eq!(tasks[0].id, "t1");
|
||||
assert_eq!(tasks[0].title, "Fix bug");
|
||||
assert_eq!(tasks[0].status, "progress");
|
||||
assert_eq!(tasks[0].priority, "high");
|
||||
assert_eq!(tasks[0].assigned_to, Some("a1".to_string()));
|
||||
assert_eq!(tasks[0].sort_order, 1);
|
||||
|
||||
assert_eq!(tasks[1].id, "t2");
|
||||
assert_eq!(tasks[1].assigned_to, None);
|
||||
assert_eq!(tasks[1].parent_task_id, None);
|
||||
}
|
||||
|
||||
// ---- REGRESSION: task_comments named column access ----
|
||||
|
||||
#[test]
|
||||
fn test_task_comments_named_column_access() {
|
||||
let conn = test_db();
|
||||
conn.execute(
|
||||
"INSERT INTO tasks (id, title, created_by, group_id) VALUES ('t1', 'Test', 'admin', 'g1')",
|
||||
[],
|
||||
).unwrap();
|
||||
conn.execute(
|
||||
"INSERT INTO task_comments (id, task_id, agent_id, content) VALUES ('c1', 't1', 'a1', 'Working on it')",
|
||||
[],
|
||||
).unwrap();
|
||||
conn.execute(
|
||||
"INSERT INTO task_comments (id, task_id, agent_id, content) VALUES ('c2', 't1', 'a2', 'Looks good')",
|
||||
[],
|
||||
).unwrap();
|
||||
|
||||
let mut stmt = conn.prepare(
|
||||
"SELECT id, task_id, agent_id, content, created_at
|
||||
FROM task_comments WHERE task_id = ?1
|
||||
ORDER BY created_at ASC",
|
||||
).unwrap();
|
||||
|
||||
let comments: Vec<TaskComment> = stmt.query_map(params!["t1"], |row| {
|
||||
Ok(TaskComment {
|
||||
id: row.get("id")?,
|
||||
task_id: row.get("task_id")?,
|
||||
agent_id: row.get("agent_id")?,
|
||||
content: row.get("content")?,
|
||||
created_at: row.get::<_, String>("created_at").unwrap_or_default(),
|
||||
})
|
||||
}).unwrap().collect::<Result<Vec<_>, _>>().unwrap();
|
||||
|
||||
assert_eq!(comments.len(), 2);
|
||||
assert_eq!(comments[0].agent_id, "a1");
|
||||
assert_eq!(comments[0].content, "Working on it");
|
||||
assert_eq!(comments[1].agent_id, "a2");
|
||||
}

    // ---- serde camelCase serialization ----

    #[test]
    fn test_task_serializes_to_camel_case() {
        let task = Task {
            id: "t1".into(),
            title: "Test".into(),
            description: "desc".into(),
            status: "todo".into(),
            priority: "high".into(),
            assigned_to: Some("a1".into()),
            created_by: "admin".into(),
            group_id: "g1".into(),
            parent_task_id: None,
            sort_order: 0,
            created_at: "2026-01-01".into(),
            updated_at: "2026-01-01".into(),
            version: 1,
        };

        let json = serde_json::to_value(&task).unwrap();
        assert!(json.get("assignedTo").is_some(), "expected camelCase 'assignedTo'");
        assert!(json.get("createdBy").is_some(), "expected camelCase 'createdBy'");
        assert!(json.get("groupId").is_some(), "expected camelCase 'groupId'");
        assert!(json.get("parentTaskId").is_some(), "expected camelCase 'parentTaskId'");
        assert!(json.get("sortOrder").is_some(), "expected camelCase 'sortOrder'");
        assert!(json.get("createdAt").is_some(), "expected camelCase 'createdAt'");
        assert!(json.get("updatedAt").is_some(), "expected camelCase 'updatedAt'");
        // Ensure no snake_case leaks
        assert!(json.get("assigned_to").is_none());
        assert!(json.get("created_by").is_none());
        assert!(json.get("group_id").is_none());
    }

    #[test]
    fn test_task_comment_serializes_to_camel_case() {
        let comment = TaskComment {
            id: "c1".into(),
            task_id: "t1".into(),
            agent_id: "a1".into(),
            content: "note".into(),
            created_at: "2026-01-01".into(),
        };

        let json = serde_json::to_value(&comment).unwrap();
        assert!(json.get("taskId").is_some(), "expected camelCase 'taskId'");
        assert!(json.get("agentId").is_some(), "expected camelCase 'agentId'");
        assert!(json.get("createdAt").is_some(), "expected camelCase 'createdAt'");
        assert!(json.get("task_id").is_none());
    }
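
The camelCase assertions above depend on the structs deriving `Serialize` with serde's rename attribute. A minimal sketch of the presumed shape (the actual `TaskComment` definition lives in `bttask.rs`, outside this hunk, so field order and visibility here are assumptions):

```rust
// Presumed, illustrative shape; not the actual definition from bttask.rs.
#[derive(serde::Serialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskComment {
    pub id: String,
    pub task_id: String,    // serialized as "taskId"
    pub agent_id: String,   // serialized as "agentId"
    pub content: String,
    pub created_at: String, // serialized as "createdAt"
}
```

With `rename_all = "camelCase"`, serde derives the JSON keys automatically, so the tests only need to assert that no snake_case key leaks through.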

    // ---- update_task_status validation ----

    #[test]
    fn test_update_task_status_rejects_invalid() {
        // Can't call update_task_status directly (uses open_db), but we can test the validation logic
        let valid = ["todo", "progress", "review", "done", "blocked"];
        assert!(valid.contains(&"todo"));
        assert!(valid.contains(&"done"));
        assert!(!valid.contains(&"invalid"));
        assert!(!valid.contains(&"cancelled"));
    }
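
Since `update_task_status` opens its own DB handle, the test above re-states the whitelist rather than calling the function. One way to make the validation directly testable would be to extract it into a pure helper; a sketch under that assumption (`is_valid_status` is a hypothetical name, not part of this diff):

```rust
// Hypothetical helper: the status whitelist as a pure function, so validation
// can be unit-tested without touching open_db.
fn is_valid_status(status: &str) -> bool {
    matches!(status, "todo" | "progress" | "review" | "done" | "blocked")
}
```

The command would then call `is_valid_status` before issuing the UPDATE, and the test could assert on the helper directly.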

    // ---- Review channel auto-creation ----

    #[test]
    fn test_ensure_review_channels_creates_both() {
        let conn = test_db();
        let result = ensure_review_channels(&conn, "g1");
        assert!(result.is_some(), "should return review-queue channel ID");

        // Verify both channels exist
        let queue_count: i64 = conn
            .query_row(
                "SELECT COUNT(*) FROM channels WHERE name = 'review-queue' AND group_id = 'g1'",
                [],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(queue_count, 1);

        let log_count: i64 = conn
            .query_row(
                "SELECT COUNT(*) FROM channels WHERE name = 'review-log' AND group_id = 'g1'",
                [],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(log_count, 1);
    }

    #[test]
    fn test_ensure_review_channels_idempotent() {
        let conn = test_db();
        let id1 = ensure_review_channels(&conn, "g1").unwrap();
        let id2 = ensure_review_channels(&conn, "g1").unwrap();
        assert_eq!(id1, id2, "should return same channel ID on repeated calls");

        // Verify no duplicates
        let count: i64 = conn
            .query_row(
                "SELECT COUNT(*) FROM channels WHERE name = 'review-queue' AND group_id = 'g1'",
                [],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(count, 1);
    }

    #[test]
    fn test_notify_review_channel_posts_message() {
        let conn = test_db();
        // Insert a task
        conn.execute(
            "INSERT INTO tasks (id, title, created_by, group_id) VALUES ('t1', 'Fix login bug', 'admin', 'g1')",
            [],
        ).unwrap();

        // Trigger notification (should auto-create channel)
        notify_review_channel(&conn, "g1", "t1", "Fix login bug");

        // Verify message was posted
        let msg_count: i64 = conn
            .query_row(
                "SELECT COUNT(*) FROM channel_messages cm
                 JOIN channels c ON cm.channel_id = c.id
                 WHERE c.name = 'review-queue' AND c.group_id = 'g1'",
                [],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(msg_count, 1);

        // Verify message content
        let content: String = conn
            .query_row(
                "SELECT cm.content FROM channel_messages cm
                 JOIN channels c ON cm.channel_id = c.id
                 WHERE c.name = 'review-queue'",
                [],
                |row| row.get(0),
            )
            .unwrap();
        assert!(content.contains("Fix login bug"));
        assert!(content.contains("t1"));
    }

    // ---- Review queue count ----

    #[test]
    fn test_review_queue_count_via_sql() {
        let conn = test_db();
        // Insert tasks with various statuses
        conn.execute(
            "INSERT INTO tasks (id, title, status, created_by, group_id) VALUES ('t1', 'A', 'review', 'admin', 'g1')",
            [],
        ).unwrap();
        conn.execute(
            "INSERT INTO tasks (id, title, status, created_by, group_id) VALUES ('t2', 'B', 'review', 'admin', 'g1')",
            [],
        ).unwrap();
        conn.execute(
            "INSERT INTO tasks (id, title, status, created_by, group_id) VALUES ('t3', 'C', 'progress', 'admin', 'g1')",
            [],
        ).unwrap();
        conn.execute(
            "INSERT INTO tasks (id, title, status, created_by, group_id) VALUES ('t4', 'D', 'review', 'admin', 'g2')",
            [],
        ).unwrap();

        // Count review tasks for g1
        let count: i64 = conn
            .query_row(
                "SELECT COUNT(*) FROM tasks WHERE group_id = ?1 AND status = 'review'",
                params!["g1"],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(count, 2, "should count only review tasks in g1");

        // Count review tasks for g2
        let count_g2: i64 = conn
            .query_row(
                "SELECT COUNT(*) FROM tasks WHERE group_id = ?1 AND status = 'review'",
                params!["g2"],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(count_g2, 1, "should count only review tasks in g2");
    }

    // ---- Optimistic locking (version column) ----

    #[test]
    fn test_version_column_defaults_to_1() {
        let conn = test_db();
        conn.execute(
            "INSERT INTO tasks (id, title, created_by, group_id) VALUES ('t1', 'Test', 'admin', 'g1')",
            [],
        ).unwrap();

        let version: i64 = conn
            .query_row("SELECT version FROM tasks WHERE id = 't1'", [], |row| row.get(0))
            .unwrap();
        assert_eq!(version, 1);
    }

    #[test]
    fn test_optimistic_lock_success() {
        let conn = test_db();
        conn.execute(
            "INSERT INTO tasks (id, title, status, created_by, group_id) VALUES ('t1', 'Test', 'todo', 'admin', 'g1')",
            [],
        ).unwrap();

        // Update with correct version (1)
        let rows = conn.execute(
            "UPDATE tasks SET status = 'progress', version = version + 1, updated_at = datetime('now')
             WHERE id = 't1' AND version = 1",
            [],
        ).unwrap();
        assert_eq!(rows, 1, "should affect 1 row");

        let new_version: i64 = conn
            .query_row("SELECT version FROM tasks WHERE id = 't1'", [], |row| row.get(0))
            .unwrap();
        assert_eq!(new_version, 2);
    }

    #[test]
    fn test_optimistic_lock_conflict() {
        let conn = test_db();
        conn.execute(
            "INSERT INTO tasks (id, title, status, created_by, group_id) VALUES ('t1', 'Test', 'todo', 'admin', 'g1')",
            [],
        ).unwrap();

        // First update succeeds
        conn.execute(
            "UPDATE tasks SET status = 'progress', version = version + 1, updated_at = datetime('now')
             WHERE id = 't1' AND version = 1",
            [],
        ).unwrap();

        // Second update with stale version (1) should affect 0 rows
        let rows = conn.execute(
            "UPDATE tasks SET status = 'review', version = version + 1, updated_at = datetime('now')
             WHERE id = 't1' AND version = 1",
            [],
        ).unwrap();
        assert_eq!(rows, 0, "stale version should affect 0 rows");

        // Task should still be in 'progress' state
        let status: String = conn
            .query_row("SELECT status FROM tasks WHERE id = 't1'", [], |row| row.get(0))
            .unwrap();
        assert_eq!(status, "progress");
    }
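
The conflict test relies on `execute` returning the number of affected rows: 1 means the compare-and-swap on `version` succeeded, 0 means another writer got there first. A sketch of how that count could be mapped to a result (a hypothetical helper, not the actual bttask code):

```rust
// Hypothetical mapping of an optimistic-locking UPDATE's affected-row count
// to success (the new version) or a retryable conflict error.
fn interpret_versioned_update(rows_affected: usize, next_version: i64) -> Result<i64, String> {
    match rows_affected {
        1 => Ok(next_version),
        0 => Err("version conflict: task was modified concurrently".to_string()),
        n => Err(format!("unexpected affected row count: {n}")),
    }
}
```

On conflict the caller would re-read the row, rebase its change onto the fresh version, and retry.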

    #[test]
    fn test_version_in_list_tasks_query() {
        let conn = test_db();
        conn.execute(
            "INSERT INTO tasks (id, title, created_by, group_id, sort_order) VALUES ('t1', 'V1', 'admin', 'g1', 1)",
            [],
        ).unwrap();
        // Bump version to 3
        conn.execute("UPDATE tasks SET version = 3 WHERE id = 't1'", []).unwrap();

        let mut stmt = conn.prepare(
            "SELECT id, title, description, status, priority, assigned_to,
                    created_by, group_id, parent_task_id, sort_order,
                    created_at, updated_at, version
             FROM tasks WHERE group_id = ?1",
        ).unwrap();

        let tasks: Vec<Task> = stmt.query_map(params!["g1"], |row| {
            Ok(Task {
                id: row.get("id")?,
                title: row.get("title")?,
                description: row.get::<_, String>("description").unwrap_or_default(),
                status: row.get::<_, String>("status").unwrap_or_else(|_| "todo".into()),
                priority: row.get::<_, String>("priority").unwrap_or_else(|_| "medium".into()),
                assigned_to: row.get("assigned_to")?,
                created_by: row.get("created_by")?,
                group_id: row.get("group_id")?,
                parent_task_id: row.get("parent_task_id")?,
                sort_order: row.get::<_, i32>("sort_order").unwrap_or(0),
                created_at: row.get::<_, String>("created_at").unwrap_or_default(),
                updated_at: row.get::<_, String>("updated_at").unwrap_or_default(),
                version: row.get::<_, i64>("version").unwrap_or(1),
            })
        }).unwrap().collect::<Result<Vec<_>, _>>().unwrap();

        assert_eq!(tasks.len(), 1);
        assert_eq!(tasks[0].version, 3);
    }

    #[test]
    fn test_version_serializes_to_camel_case() {
        let task = Task {
            id: "t1".into(),
            title: "Test".into(),
            description: "".into(),
            status: "todo".into(),
            priority: "medium".into(),
            assigned_to: None,
            created_by: "admin".into(),
            group_id: "g1".into(),
            parent_task_id: None,
            sort_order: 0,
            created_at: "2026-01-01".into(),
            updated_at: "2026-01-01".into(),
            version: 5,
        };

        let json = serde_json::to_value(&task).unwrap();
        assert_eq!(json.get("version").unwrap(), 5);
    }
}

src-tauri/src/commands/agent.rs (new file, 58 lines)
@@ -0,0 +1,58 @@
use tauri::State;
use crate::AppState;
use crate::sidecar::AgentQueryOptions;
use bterminal_core::sandbox::SandboxConfig;

#[tauri::command]
#[tracing::instrument(skip(state, options), fields(session_id = %options.session_id))]
pub fn agent_query(
    state: State<'_, AppState>,
    options: AgentQueryOptions,
) -> Result<(), String> {
    state.sidecar_manager.query(&options)
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub fn agent_stop(state: State<'_, AppState>, session_id: String) -> Result<(), String> {
    state.sidecar_manager.stop_session(&session_id)
}

#[tauri::command]
pub fn agent_ready(state: State<'_, AppState>) -> bool {
    state.sidecar_manager.is_ready()
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub fn agent_restart(state: State<'_, AppState>) -> Result<(), String> {
    state.sidecar_manager.restart()
}

/// Update sidecar sandbox configuration and restart to apply.
/// `project_cwds` — directories needing read+write access.
/// `worktree_roots` — optional worktree directories.
/// `enabled` — whether Landlock sandboxing is active.
#[tauri::command]
#[tracing::instrument(skip(state))]
pub fn agent_set_sandbox(
    state: State<'_, AppState>,
    project_cwds: Vec<String>,
    worktree_roots: Vec<String>,
    enabled: bool,
) -> Result<(), String> {
    let cwd_refs: Vec<&str> = project_cwds.iter().map(|s| s.as_str()).collect();
    let wt_refs: Vec<&str> = worktree_roots.iter().map(|s| s.as_str()).collect();

    let mut sandbox = SandboxConfig::for_projects(&cwd_refs, &wt_refs);
    sandbox.enabled = enabled;

    state.sidecar_manager.set_sandbox(sandbox);

    // Restart sidecar so Landlock restrictions take effect on the new process
    if state.sidecar_manager.is_ready() {
        state.sidecar_manager.restart()?;
    }

    Ok(())
}

src-tauri/src/commands/btmsg.rs (new file, 152 lines)
@@ -0,0 +1,152 @@
use crate::btmsg;
use crate::groups;

#[tauri::command]
pub fn btmsg_get_agents(group_id: String) -> Result<Vec<btmsg::BtmsgAgent>, String> {
    btmsg::get_agents(&group_id)
}

#[tauri::command]
pub fn btmsg_unread_count(agent_id: String) -> Result<i32, String> {
    btmsg::unread_count(&agent_id)
}

#[tauri::command]
pub fn btmsg_unread_messages(agent_id: String) -> Result<Vec<btmsg::BtmsgMessage>, String> {
    btmsg::unread_messages(&agent_id)
}

#[tauri::command]
pub fn btmsg_history(agent_id: String, other_id: String, limit: i32) -> Result<Vec<btmsg::BtmsgMessage>, String> {
    btmsg::history(&agent_id, &other_id, limit)
}

#[tauri::command]
pub fn btmsg_send(from_agent: String, to_agent: String, content: String) -> Result<String, String> {
    btmsg::send_message(&from_agent, &to_agent, &content)
}

#[tauri::command]
pub fn btmsg_set_status(agent_id: String, status: String) -> Result<(), String> {
    btmsg::set_status(&agent_id, &status)
}

#[tauri::command]
pub fn btmsg_ensure_admin(group_id: String) -> Result<(), String> {
    btmsg::ensure_admin(&group_id)
}

#[tauri::command]
pub fn btmsg_all_feed(group_id: String, limit: i32) -> Result<Vec<btmsg::BtmsgFeedMessage>, String> {
    btmsg::all_feed(&group_id, limit)
}

#[tauri::command]
pub fn btmsg_mark_read(reader_id: String, sender_id: String) -> Result<(), String> {
    btmsg::mark_read_conversation(&reader_id, &sender_id)
}

#[tauri::command]
pub fn btmsg_get_channels(group_id: String) -> Result<Vec<btmsg::BtmsgChannel>, String> {
    btmsg::get_channels(&group_id)
}

#[tauri::command]
pub fn btmsg_channel_messages(channel_id: String, limit: i32) -> Result<Vec<btmsg::BtmsgChannelMessage>, String> {
    btmsg::get_channel_messages(&channel_id, limit)
}

#[tauri::command]
pub fn btmsg_channel_send(channel_id: String, from_agent: String, content: String) -> Result<String, String> {
    btmsg::send_channel_message(&channel_id, &from_agent, &content)
}

#[tauri::command]
pub fn btmsg_create_channel(name: String, group_id: String, created_by: String) -> Result<String, String> {
    btmsg::create_channel(&name, &group_id, &created_by)
}

#[tauri::command]
pub fn btmsg_add_channel_member(channel_id: String, agent_id: String) -> Result<(), String> {
    btmsg::add_channel_member(&channel_id, &agent_id)
}

/// Register all agents from a GroupsFile into the btmsg database.
/// Creates/updates agent records, sets up contact permissions, ensures review channels.
#[tauri::command]
pub fn btmsg_register_agents(config: groups::GroupsFile) -> Result<(), String> {
    btmsg::register_agents_from_groups(&config)
}

// ---- Per-message acknowledgment (seen_messages) ----

#[tauri::command]
pub fn btmsg_unseen_messages(agent_id: String, session_id: String) -> Result<Vec<btmsg::BtmsgMessage>, String> {
    btmsg::unseen_messages(&agent_id, &session_id)
}

#[tauri::command]
pub fn btmsg_mark_seen(session_id: String, message_ids: Vec<String>) -> Result<(), String> {
    btmsg::mark_messages_seen(&session_id, &message_ids)
}

#[tauri::command]
pub fn btmsg_prune_seen() -> Result<u64, String> {
    btmsg::prune_seen_messages(7 * 24 * 3600, 200_000)
}

// ---- Heartbeat monitoring ----

#[tauri::command]
pub fn btmsg_record_heartbeat(agent_id: String) -> Result<(), String> {
    btmsg::record_heartbeat(&agent_id)
}

#[tauri::command]
pub fn btmsg_get_stale_agents(group_id: String, threshold_secs: i64) -> Result<Vec<String>, String> {
    btmsg::get_stale_agents(&group_id, threshold_secs)
}

#[tauri::command]
pub fn btmsg_get_agent_heartbeats(group_id: String) -> Result<Vec<btmsg::AgentHeartbeat>, String> {
    btmsg::get_agent_heartbeats(&group_id)
}

// ---- Dead letter queue ----

#[tauri::command]
pub fn btmsg_get_dead_letters(group_id: String, limit: i32) -> Result<Vec<btmsg::DeadLetter>, String> {
    btmsg::get_dead_letters(&group_id, limit)
}

#[tauri::command]
pub fn btmsg_clear_dead_letters(group_id: String) -> Result<(), String> {
    btmsg::clear_dead_letters(&group_id)
}

#[tauri::command]
pub fn btmsg_queue_dead_letter(
    from_agent: String,
    to_agent: String,
    content: String,
    error: String,
) -> Result<(), String> {
    btmsg::queue_dead_letter(&from_agent, &to_agent, &content, &error)
}

// ---- Audit log ----

#[tauri::command]
pub fn audit_log_event(agent_id: String, event_type: String, detail: String) -> Result<(), String> {
    btmsg::log_audit_event(&agent_id, &event_type, &detail)
}

#[tauri::command]
pub fn audit_log_list(group_id: String, limit: i32, offset: i32) -> Result<Vec<btmsg::AuditEntry>, String> {
    btmsg::get_audit_log(&group_id, limit, offset)
}

#[tauri::command]
pub fn audit_log_for_agent(agent_id: String, limit: i32) -> Result<Vec<btmsg::AuditEntry>, String> {
    btmsg::get_audit_log_for_agent(&agent_id, limit)
}

src-tauri/src/commands/bttask.rs (new file, 43 lines)
@@ -0,0 +1,43 @@
use crate::bttask;

#[tauri::command]
pub fn bttask_list(group_id: String) -> Result<Vec<bttask::Task>, String> {
    bttask::list_tasks(&group_id)
}

#[tauri::command]
pub fn bttask_comments(task_id: String) -> Result<Vec<bttask::TaskComment>, String> {
    bttask::task_comments(&task_id)
}

#[tauri::command]
pub fn bttask_update_status(task_id: String, status: String, version: i64) -> Result<i64, String> {
    bttask::update_task_status(&task_id, &status, version)
}

#[tauri::command]
pub fn bttask_add_comment(task_id: String, agent_id: String, content: String) -> Result<String, String> {
    bttask::add_comment(&task_id, &agent_id, &content)
}

#[tauri::command]
pub fn bttask_create(
    title: String,
    description: String,
    priority: String,
    group_id: String,
    created_by: String,
    assigned_to: Option<String>,
) -> Result<String, String> {
    bttask::create_task(&title, &description, &priority, &group_id, &created_by, assigned_to.as_deref())
}

#[tauri::command]
pub fn bttask_delete(task_id: String) -> Result<(), String> {
    bttask::delete_task(&task_id)
}

#[tauri::command]
pub fn bttask_review_queue_count(group_id: String) -> Result<i64, String> {
    bttask::review_queue_count(&group_id)
}

src-tauri/src/commands/claude.rs (new file, 158 lines)
@@ -0,0 +1,158 @@
// Claude profile and skill discovery commands

#[derive(serde::Serialize)]
pub struct ClaudeProfile {
    pub name: String,
    pub email: Option<String>,
    pub subscription_type: Option<String>,
    pub display_name: Option<String>,
    pub config_dir: String,
}

#[derive(serde::Serialize)]
pub struct ClaudeSkill {
    pub name: String,
    pub description: String,
    pub source_path: String,
}

#[tauri::command]
pub fn claude_list_profiles() -> Vec<ClaudeProfile> {
    let mut profiles = Vec::new();

    let config_dir = dirs::config_dir().unwrap_or_default();
    let profiles_dir = config_dir.join("switcher").join("profiles");
    let alt_dir_root = config_dir.join("switcher-claude");

    if let Ok(entries) = std::fs::read_dir(&profiles_dir) {
        for entry in entries.flatten() {
            if !entry.path().is_dir() { continue; }
            let name = entry.file_name().to_string_lossy().to_string();

            let toml_path = entry.path().join("profile.toml");
            let (email, subscription_type, display_name) = if toml_path.exists() {
                let content = std::fs::read_to_string(&toml_path).unwrap_or_else(|e| {
                    log::warn!("Failed to read {}: {e}", toml_path.display());
                    String::new()
                });
                (
                    extract_toml_value(&content, "email"),
                    extract_toml_value(&content, "subscription_type"),
                    extract_toml_value(&content, "display_name"),
                )
            } else {
                (None, None, None)
            };

            let alt_path = alt_dir_root.join(&name);
            let config_dir_str = if alt_path.exists() {
                alt_path.to_string_lossy().to_string()
            } else {
                dirs::home_dir()
                    .unwrap_or_default()
                    .join(".claude")
                    .to_string_lossy()
                    .to_string()
            };

            profiles.push(ClaudeProfile {
                name,
                email,
                subscription_type,
                display_name,
                config_dir: config_dir_str,
            });
        }
    }

    if profiles.is_empty() {
        let home = dirs::home_dir().unwrap_or_default();
        profiles.push(ClaudeProfile {
            name: "default".to_string(),
            email: None,
            subscription_type: None,
            display_name: None,
            config_dir: home.join(".claude").to_string_lossy().to_string(),
        });
    }

    profiles
}

fn extract_toml_value(content: &str, key: &str) -> Option<String> {
    for line in content.lines() {
        let trimmed = line.trim();
        if let Some(rest) = trimmed.strip_prefix(key) {
            if let Some(rest) = rest.trim().strip_prefix('=') {
                let val = rest.trim().trim_matches('"');
                if !val.is_empty() {
                    return Some(val.to_string());
                }
            }
        }
    }
    None
}
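
`extract_toml_value` is a deliberately naive line scanner, not a TOML parser: it matches `key = "value"` lines, strips the quotes, and ignores everything else (tables, comments, multi-line values). For illustration, the same function exercised on a sample `profile.toml` fragment (the function body is copied verbatim from above so the sketch is self-contained):

```rust
// Copy of the extractor above, for a standalone demonstration.
fn extract_toml_value(content: &str, key: &str) -> Option<String> {
    for line in content.lines() {
        let trimmed = line.trim();
        if let Some(rest) = trimmed.strip_prefix(key) {
            if let Some(rest) = rest.trim().strip_prefix('=') {
                let val = rest.trim().trim_matches('"');
                if !val.is_empty() {
                    return Some(val.to_string());
                }
            }
        }
    }
    None
}
```

A commented-out line like `# email = "a@b.c"` is skipped, because `strip_prefix(key)` fails on the leading `#`; a missing key simply yields `None`, which maps to the `Option` fields on `ClaudeProfile`.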

#[tauri::command]
pub fn claude_list_skills() -> Vec<ClaudeSkill> {
    let mut skills = Vec::new();
    let home = dirs::home_dir().unwrap_or_default();

    let skills_dir = home.join(".claude").join("skills");
    if let Ok(entries) = std::fs::read_dir(&skills_dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            let (name, skill_file) = if path.is_dir() {
                let skill_md = path.join("SKILL.md");
                if skill_md.exists() {
                    (entry.file_name().to_string_lossy().to_string(), skill_md)
                } else {
                    continue;
                }
            } else if path.extension().map_or(false, |e| e == "md") {
                let stem = path.file_stem().unwrap_or_default().to_string_lossy().to_string();
                (stem, path.clone())
            } else {
                continue;
            };

            let description = if let Ok(content) = std::fs::read_to_string(&skill_file) {
                content.lines()
                    .find(|l| !l.trim().is_empty() && !l.starts_with('#'))
                    .unwrap_or("")
                    .trim()
                    .chars()
                    .take(120)
                    .collect()
            } else {
                String::new()
            };

            skills.push(ClaudeSkill {
                name,
                description,
                source_path: skill_file.to_string_lossy().to_string(),
            });
        }
    }

    skills
}

#[tauri::command]
pub fn claude_read_skill(path: String) -> Result<String, String> {
    let skills_dir = dirs::home_dir()
        .ok_or("Cannot determine home directory")?
        .join(".claude")
        .join("skills");
    let canonical_skills = skills_dir.canonicalize()
        .map_err(|_| "Skills directory does not exist".to_string())?;
    let canonical_path = std::path::Path::new(&path).canonicalize()
        .map_err(|e| format!("Invalid skill path: {e}"))?;
    if !canonical_path.starts_with(&canonical_skills) {
        return Err("Access denied: path is outside skills directory".to_string());
    }
    std::fs::read_to_string(&canonical_path).map_err(|e| format!("Failed to read skill: {e}"))
}

src-tauri/src/commands/files.rs (new file, 130 lines)
@@ -0,0 +1,130 @@
// File browser commands (Files tab)

#[derive(serde::Serialize)]
pub struct DirEntry {
    pub name: String,
    pub path: String,
    pub is_dir: bool,
    pub size: u64,
    pub ext: String,
}

/// Content types for file viewer routing
#[derive(serde::Serialize)]
#[serde(tag = "type")]
pub enum FileContent {
    Text { content: String, lang: String },
    Binary { message: String },
    TooLarge { size: u64 },
}

#[tauri::command]
pub fn list_directory_children(path: String) -> Result<Vec<DirEntry>, String> {
    let dir = std::path::Path::new(&path);
    if !dir.is_dir() {
        return Err(format!("Not a directory: {path}"));
    }
    let mut entries = Vec::new();
    let read_dir = std::fs::read_dir(dir).map_err(|e| format!("Failed to read directory: {e}"))?;
    for entry in read_dir {
        let entry = entry.map_err(|e| format!("Failed to read entry: {e}"))?;
        let metadata = entry.metadata().map_err(|e| format!("Failed to read metadata: {e}"))?;
        let name = entry.file_name().to_string_lossy().into_owned();
        if name.starts_with('.') {
            continue;
        }
        let is_dir = metadata.is_dir();
        let ext = if is_dir {
            String::new()
        } else {
            std::path::Path::new(&name)
                .extension()
                .map(|e| e.to_string_lossy().to_lowercase())
                .unwrap_or_default()
        };
        entries.push(DirEntry {
            name,
            path: entry.path().to_string_lossy().into_owned(),
            is_dir,
            size: metadata.len(),
            ext,
        });
    }
    entries.sort_by(|a, b| {
        b.is_dir.cmp(&a.is_dir).then_with(|| a.name.to_lowercase().cmp(&b.name.to_lowercase()))
    });
    Ok(entries)
}

#[tauri::command]
pub fn read_file_content(path: String) -> Result<FileContent, String> {
    let file_path = std::path::Path::new(&path);
    if !file_path.is_file() {
        return Err(format!("Not a file: {path}"));
    }
    let metadata = std::fs::metadata(&path).map_err(|e| format!("Failed to read metadata: {e}"))?;
    let size = metadata.len();

    if size > 10 * 1024 * 1024 {
        return Ok(FileContent::TooLarge { size });
    }

    let ext = file_path
        .extension()
        .map(|e| e.to_string_lossy().to_lowercase())
        .unwrap_or_default();

    let binary_exts = ["png", "jpg", "jpeg", "gif", "webp", "svg", "ico", "bmp",
                       "pdf", "zip", "tar", "gz", "7z", "rar",
                       "mp3", "mp4", "wav", "ogg", "webm", "avi",
                       "woff", "woff2", "ttf", "otf", "eot",
                       "exe", "dll", "so", "dylib", "wasm"];
    if binary_exts.contains(&ext.as_str()) {
        return Ok(FileContent::Binary { message: format!("Binary file ({ext}), {size} bytes") });
    }

    let content = std::fs::read_to_string(&path)
        .map_err(|_| "Binary or non-UTF-8 file".to_string())?;

    let lang = match ext.as_str() {
        "rs" => "rust",
        "ts" | "tsx" => "typescript",
        "js" | "jsx" | "mjs" | "cjs" => "javascript",
        "py" => "python",
        "svelte" => "svelte",
        "html" | "htm" => "html",
        "css" | "scss" | "less" => "css",
        "json" => "json",
        "toml" => "toml",
        "yaml" | "yml" => "yaml",
        "md" | "markdown" => "markdown",
        "sh" | "bash" | "zsh" => "bash",
        "sql" => "sql",
        "xml" => "xml",
        "csv" => "csv",
        "dockerfile" => "dockerfile",
        "lock" => "text",
        _ => "text",
    }.to_string();

    Ok(FileContent::Text { content, lang })
}

#[tauri::command]
pub fn write_file_content(path: String, content: String) -> Result<(), String> {
    let file_path = std::path::Path::new(&path);
    if !file_path.is_file() {
        return Err(format!("Not an existing file: {path}"));
    }
    std::fs::write(&path, content.as_bytes())
        .map_err(|e| format!("Failed to write file: {e}"))
}

#[tauri::command]
pub async fn pick_directory(window: tauri::Window) -> Result<Option<String>, String> {
    let dialog = rfd::AsyncFileDialog::new()
        .set_title("Select Directory")
        .set_parent(&window);
    let folder = dialog.pick_folder().await;
    Ok(folder.map(|f| f.path().to_string_lossy().into_owned()))
}

src-tauri/src/commands/groups.rs (new file, 16 lines)
@@ -0,0 +1,16 @@
use crate::groups::{GroupsFile, MdFileEntry};

#[tauri::command]
pub fn groups_load() -> Result<GroupsFile, String> {
    crate::groups::load_groups()
}

#[tauri::command]
pub fn groups_save(config: GroupsFile) -> Result<(), String> {
    crate::groups::save_groups(&config)
}

#[tauri::command]
pub fn discover_markdown_files(cwd: String) -> Result<Vec<MdFileEntry>, String> {
    crate::groups::discover_markdown_files(&cwd)
}

src-tauri/src/commands/knowledge.rs (new file, 67 lines)
@@ -0,0 +1,67 @@
use tauri::State;
use crate::AppState;
use crate::{ctx, memora};

// --- ctx commands ---

#[tauri::command]
pub fn ctx_init_db(state: State<'_, AppState>) -> Result<(), String> {
    state.ctx_db.init_db()
}

#[tauri::command]
pub fn ctx_register_project(state: State<'_, AppState>, name: String, description: String, work_dir: Option<String>) -> Result<(), String> {
    state.ctx_db.register_project(&name, &description, work_dir.as_deref())
}

#[tauri::command]
pub fn ctx_get_context(state: State<'_, AppState>, project: String) -> Result<Vec<ctx::CtxEntry>, String> {
    state.ctx_db.get_context(&project)
}

#[tauri::command]
pub fn ctx_get_shared(state: State<'_, AppState>) -> Result<Vec<ctx::CtxEntry>, String> {
    state.ctx_db.get_shared()
}

#[tauri::command]
pub fn ctx_get_summaries(state: State<'_, AppState>, project: String, limit: i64) -> Result<Vec<ctx::CtxSummary>, String> {
    state.ctx_db.get_summaries(&project, limit)
}

#[tauri::command]
pub fn ctx_search(state: State<'_, AppState>, query: String) -> Result<Vec<ctx::CtxEntry>, String> {
    state.ctx_db.search(&query)
}

// --- Memora commands (read-only) ---

#[tauri::command]
pub fn memora_available(state: State<'_, AppState>) -> bool {
    state.memora_db.is_available()
}

#[tauri::command]
pub fn memora_list(
    state: State<'_, AppState>,
    tags: Option<Vec<String>>,
    limit: Option<i64>,
    offset: Option<i64>,
) -> Result<memora::MemoraSearchResult, String> {
    state.memora_db.list(tags, limit.unwrap_or(50), offset.unwrap_or(0))
}

#[tauri::command]
pub fn memora_search(
    state: State<'_, AppState>,
    query: String,
    tags: Option<Vec<String>>,
    limit: Option<i64>,
) -> Result<memora::MemoraSearchResult, String> {
    state.memora_db.search(&query, tags, limit.unwrap_or(50))
}

#[tauri::command]
pub fn memora_get(state: State<'_, AppState>, id: i64) -> Result<Option<memora::MemoraNode>, String> {
    state.memora_db.get(id)
}
|
||||
46
src-tauri/src/commands/misc.rs
Normal file
@@ -0,0 +1,46 @@
// Miscellaneous commands — CLI args, URL opening, frontend telemetry

#[tauri::command]
pub fn cli_get_group() -> Option<String> {
    let args: Vec<String> = std::env::args().collect();
    let mut i = 1;
    while i < args.len() {
        if args[i] == "--group" {
            if i + 1 < args.len() {
                return Some(args[i + 1].clone());
            }
        } else if let Some(val) = args[i].strip_prefix("--group=") {
            return Some(val.to_string());
        }
        i += 1;
    }
    None
}

#[tauri::command]
pub fn open_url(url: String) -> Result<(), String> {
    if !url.starts_with("http://") && !url.starts_with("https://") {
        return Err("Only http/https URLs are allowed".into());
    }
    // Linux-only: relies on the freedesktop `xdg-open` launcher.
    std::process::Command::new("xdg-open")
        .arg(&url)
        .spawn()
        .map_err(|e| format!("Failed to open URL: {e}"))?;
    Ok(())
}

#[tauri::command]
pub fn is_test_mode() -> bool {
    std::env::var("BTERMINAL_TEST").map_or(false, |v| v == "1")
}

#[tauri::command]
pub fn frontend_log(level: String, message: String, context: Option<serde_json::Value>) {
    match level.as_str() {
        "error" => tracing::error!(source = "frontend", ?context, "{message}"),
        "warn" => tracing::warn!(source = "frontend", ?context, "{message}"),
        "info" => tracing::info!(source = "frontend", ?context, "{message}"),
        "debug" => tracing::debug!(source = "frontend", ?context, "{message}"),
        _ => tracing::trace!(source = "frontend", ?context, "{message}"),
    }
}
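The `--group` flag accepts both the two-token form (`--group work`) and the equals form (`--group=work`), and silently yields nothing when the flag is present but valueless. The scanning loop in `cli_get_group` can be factored into a pure function for testing; this is a sketch mirroring that logic (the function name `parse_group_arg` is ours, not part of the codebase):

```rust
/// Extract the value of `--group`, accepting both `--group NAME`
/// and `--group=NAME`. Mirrors the loop in `cli_get_group`;
/// index 0 is skipped because it is the binary path.
fn parse_group_arg(args: &[String]) -> Option<String> {
    let mut i = 1;
    while i < args.len() {
        if args[i] == "--group" {
            if i + 1 < args.len() {
                return Some(args[i + 1].clone());
            }
        } else if let Some(val) = args[i].strip_prefix("--group=") {
            return Some(val.to_string());
        }
        i += 1;
    }
    None
}

fn main() {
    let args = vec!["bterminal".to_string(), "--group".to_string(), "work".to_string()];
    println!("{:?}", parse_group_arg(&args));
}
```

Factoring it this way lets the trailing-flag edge case (`--group` as the last argument) be asserted directly instead of through process spawning.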
17
src-tauri/src/commands/mod.rs
Normal file
@@ -0,0 +1,17 @@
pub mod pty;
pub mod agent;
pub mod watcher;
pub mod session;
pub mod persistence;
pub mod knowledge;
pub mod claude;
pub mod groups;
pub mod files;
pub mod remote;
pub mod misc;
pub mod btmsg;
pub mod bttask;
pub mod notifications;
pub mod search;
pub mod plugins;
pub mod secrets;
8
src-tauri/src/commands/notifications.rs
Normal file
@@ -0,0 +1,8 @@
// Notification commands — desktop notification via notify-rust

use crate::notifications;

#[tauri::command]
pub fn notify_desktop(title: String, body: String, urgency: String) -> Result<(), String> {
    notifications::send_desktop_notification(&title, &body, &urgency)
}
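The command passes `urgency` through as a free-form string. Desktop notification backends such as notify-rust distinguish roughly three urgency levels, so somewhere the string must be mapped to an enum with a safe fallback. The enum and mapping below are an illustrative assumption about that step, not the actual `notifications` module:

```rust
/// Hypothetical urgency mapping: three levels, with unknown
/// strings falling back to Normal rather than erroring.
#[derive(Debug, PartialEq)]
enum Urgency {
    Low,
    Normal,
    Critical,
}

fn parse_urgency(s: &str) -> Urgency {
    match s {
        "low" => Urgency::Low,
        "critical" => Urgency::Critical,
        _ => Urgency::Normal,
    }
}

fn main() {
    println!("{:?}", parse_urgency("critical"));
}
```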
109
src-tauri/src/commands/persistence.rs
Normal file
@@ -0,0 +1,109 @@
use tauri::State;
use crate::AppState;
use crate::session::{AgentMessageRecord, ProjectAgentState, SessionMetric, SessionAnchorRecord};

// --- Agent message persistence ---

#[tauri::command]
pub fn agent_messages_save(
    state: State<'_, AppState>,
    session_id: String,
    project_id: String,
    sdk_session_id: Option<String>,
    messages: Vec<AgentMessageRecord>,
) -> Result<(), String> {
    state.session_db.save_agent_messages(
        &session_id,
        &project_id,
        sdk_session_id.as_deref(),
        &messages,
    )
}

#[tauri::command]
pub fn agent_messages_load(
    state: State<'_, AppState>,
    project_id: String,
) -> Result<Vec<AgentMessageRecord>, String> {
    state.session_db.load_agent_messages(&project_id)
}

// --- Project agent state ---

#[tauri::command]
pub fn project_agent_state_save(
    state: State<'_, AppState>,
    agent_state: ProjectAgentState,
) -> Result<(), String> {
    state.session_db.save_project_agent_state(&agent_state)
}

#[tauri::command]
pub fn project_agent_state_load(
    state: State<'_, AppState>,
    project_id: String,
) -> Result<Option<ProjectAgentState>, String> {
    state.session_db.load_project_agent_state(&project_id)
}

// --- Session metrics ---

#[tauri::command]
pub fn session_metric_save(
    state: State<'_, AppState>,
    metric: SessionMetric,
) -> Result<(), String> {
    state.session_db.save_session_metric(&metric)
}

#[tauri::command]
pub fn session_metrics_load(
    state: State<'_, AppState>,
    project_id: String,
    limit: i64,
) -> Result<Vec<SessionMetric>, String> {
    state.session_db.load_session_metrics(&project_id, limit)
}

// --- Session anchors ---

#[tauri::command]
pub fn session_anchors_save(
    state: State<'_, AppState>,
    anchors: Vec<SessionAnchorRecord>,
) -> Result<(), String> {
    state.session_db.save_session_anchors(&anchors)
}

#[tauri::command]
pub fn session_anchors_load(
    state: State<'_, AppState>,
    project_id: String,
) -> Result<Vec<SessionAnchorRecord>, String> {
    state.session_db.load_session_anchors(&project_id)
}

#[tauri::command]
pub fn session_anchor_delete(
    state: State<'_, AppState>,
    id: String,
) -> Result<(), String> {
    state.session_db.delete_session_anchor(&id)
}

#[tauri::command]
pub fn session_anchors_clear(
    state: State<'_, AppState>,
    project_id: String,
) -> Result<(), String> {
    state.session_db.delete_project_anchors(&project_id)
}

#[tauri::command]
pub fn session_anchor_update_type(
    state: State<'_, AppState>,
    id: String,
    anchor_type: String,
) -> Result<(), String> {
    state.session_db.update_anchor_type(&id, &anchor_type)
}
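The anchor commands imply a small contract: anchors are addressed by id, can have their type changed in place, and can be cleared per project. An in-memory model makes the expected semantics concrete; this is an illustrative stand-in only (the real `SessionDb` is SQLite-backed, and the struct name `AnchorStore` is ours):

```rust
use std::collections::HashMap;

/// Simplified in-memory stand-in for the anchor storage:
/// id -> (project_id, anchor_type). Illustrative model only.
#[derive(Default)]
struct AnchorStore {
    anchors: HashMap<String, (String, String)>,
}

impl AnchorStore {
    fn save(&mut self, id: &str, project_id: &str, anchor_type: &str) {
        self.anchors
            .insert(id.to_string(), (project_id.to_string(), anchor_type.to_string()));
    }

    /// Mirrors session_anchor_update_type: change type in place, error if absent.
    fn update_type(&mut self, id: &str, anchor_type: &str) -> Result<(), String> {
        match self.anchors.get_mut(id) {
            Some(entry) => {
                entry.1 = anchor_type.to_string();
                Ok(())
            }
            None => Err(format!("anchor {id} not found")),
        }
    }

    /// Mirrors session_anchors_clear: drop every anchor for one project.
    fn delete_project(&mut self, project_id: &str) {
        self.anchors.retain(|_, v| v.0 != project_id);
    }

    fn count(&self) -> usize {
        self.anchors.len()
    }
}

fn main() {
    let mut store = AnchorStore::default();
    store.save("a1", "proj-1", "bookmark");
    store.save("a2", "proj-2", "error");
    store.delete_project("proj-1");
    println!("{}", store.count());
}
```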
20
src-tauri/src/commands/plugins.rs
Normal file
@@ -0,0 +1,20 @@
// Plugin discovery and file access commands

use crate::AppState;
use crate::plugins;

#[tauri::command]
pub fn plugins_discover(state: tauri::State<'_, AppState>) -> Vec<plugins::PluginMeta> {
    let plugins_dir = state.app_config.plugins_dir();
    plugins::discover_plugins(&plugins_dir)
}

#[tauri::command]
pub fn plugin_read_file(
    state: tauri::State<'_, AppState>,
    plugin_id: String,
    filename: String,
) -> Result<String, String> {
    let plugins_dir = state.app_config.plugins_dir();
    plugins::read_plugin_file(&plugins_dir, &plugin_id, &filename)
}
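`plugin_read_file` takes a caller-supplied `plugin_id` and `filename`, so some layer must refuse path traversal before touching disk — the feature list mentions a permission-gated plugin sandbox. The guard below only illustrates the idea; it is an assumption, not the validation `read_plugin_file` actually performs:

```rust
/// Hypothetical guard for plugin file access: reject anything that
/// could escape the plugin's own directory. A conservative whitelist
/// of shapes is simpler to reason about than path canonicalization.
fn is_safe_plugin_filename(name: &str) -> bool {
    !name.is_empty()
        && !name.contains("..")      // no parent-directory traversal
        && !name.contains('/')       // no subpaths or absolute paths
        && !name.contains('\\')      // no Windows-style separators
        && !name.starts_with('.')    // no hidden/dot files
}

fn main() {
    println!("{}", is_safe_plugin_filename("plugin.js"));
}
```

A real implementation would typically also canonicalize the joined path and verify it still lies under the plugin directory, as defense in depth.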
33
src-tauri/src/commands/pty.rs
Normal file
@@ -0,0 +1,33 @@
use tauri::State;
use crate::AppState;
use crate::pty::PtyOptions;

#[tauri::command]
#[tracing::instrument(skip(state), fields(shell = ?options.shell))]
pub fn pty_spawn(
    state: State<'_, AppState>,
    options: PtyOptions,
) -> Result<String, String> {
    state.pty_manager.spawn(options)
}

#[tauri::command]
pub fn pty_write(state: State<'_, AppState>, id: String, data: String) -> Result<(), String> {
    state.pty_manager.write(&id, &data)
}

#[tauri::command]
pub fn pty_resize(
    state: State<'_, AppState>,
    id: String,
    cols: u16,
    rows: u16,
) -> Result<(), String> {
    state.pty_manager.resize(&id, cols, rows)
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub fn pty_kill(state: State<'_, AppState>, id: String) -> Result<(), String> {
    state.pty_manager.kill(&id)
}
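The command surface shows the manager's contract: `spawn` mints an opaque string id, and every later call addresses the pty by that id, failing on unknown ids. A registry sketch captures that shape without any actual pseudo-terminal work (the struct name `PtyRegistry` and the size-only bookkeeping are our simplifications, not the real `PtyManager`):

```rust
use std::collections::HashMap;

/// Minimal sketch of the id-addressed manager behind the pty_* commands.
/// The real PtyManager wraps live pseudo-terminals; this tracks sizes only.
#[derive(Default)]
struct PtyRegistry {
    next: u32,
    panes: HashMap<String, (u16, u16)>, // id -> (cols, rows)
}

impl PtyRegistry {
    /// Mint a fresh id and register the pane at a default 80x24.
    fn spawn(&mut self) -> String {
        self.next += 1;
        let id = format!("pty-{}", self.next);
        self.panes.insert(id.clone(), (80, 24));
        id
    }

    fn resize(&mut self, id: &str, cols: u16, rows: u16) -> Result<(), String> {
        self.panes
            .get_mut(id)
            .map(|size| *size = (cols, rows))
            .ok_or_else(|| format!("unknown pty: {id}"))
    }

    /// Kill is idempotent-unfriendly by design: a second kill errors,
    /// which surfaces double-free bugs in the frontend.
    fn kill(&mut self, id: &str) -> Result<(), String> {
        self.panes
            .remove(id)
            .map(|_| ())
            .ok_or_else(|| format!("unknown pty: {id}"))
    }
}

fn main() {
    let mut reg = PtyRegistry::default();
    let id = reg.spawn();
    reg.resize(&id, 120, 40).unwrap();
    reg.kill(&id).unwrap();
}
```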
85
src-tauri/src/commands/remote.rs
Normal file
@@ -0,0 +1,85 @@
use tauri::State;
use crate::AppState;
use crate::remote::{self, RemoteMachineConfig, RemoteMachineInfo};
use crate::pty::PtyOptions;
use crate::sidecar::AgentQueryOptions;

#[tauri::command]
pub async fn remote_list(state: State<'_, AppState>) -> Result<Vec<RemoteMachineInfo>, String> {
    Ok(state.remote_manager.list_machines().await)
}

#[tauri::command]
pub async fn remote_add(state: State<'_, AppState>, config: RemoteMachineConfig) -> Result<String, String> {
    Ok(state.remote_manager.add_machine(config).await)
}

#[tauri::command]
pub async fn remote_remove(state: State<'_, AppState>, machine_id: String) -> Result<(), String> {
    state.remote_manager.remove_machine(&machine_id).await
}

#[tauri::command]
#[tracing::instrument(skip(app, state))]
pub async fn remote_connect(app: tauri::AppHandle, state: State<'_, AppState>, machine_id: String) -> Result<(), String> {
    state.remote_manager.connect(&app, &machine_id).await
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub async fn remote_disconnect(state: State<'_, AppState>, machine_id: String) -> Result<(), String> {
    state.remote_manager.disconnect(&machine_id).await
}

#[tauri::command]
#[tracing::instrument(skip(state, options), fields(session_id = %options.session_id))]
pub async fn remote_agent_query(state: State<'_, AppState>, machine_id: String, options: AgentQueryOptions) -> Result<(), String> {
    state.remote_manager.agent_query(&machine_id, &options).await
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub async fn remote_agent_stop(state: State<'_, AppState>, machine_id: String, session_id: String) -> Result<(), String> {
    state.remote_manager.agent_stop(&machine_id, &session_id).await
}

#[tauri::command]
#[tracing::instrument(skip(state), fields(shell = ?options.shell))]
pub async fn remote_pty_spawn(state: State<'_, AppState>, machine_id: String, options: PtyOptions) -> Result<String, String> {
    state.remote_manager.pty_spawn(&machine_id, &options).await
}

#[tauri::command]
pub async fn remote_pty_write(state: State<'_, AppState>, machine_id: String, id: String, data: String) -> Result<(), String> {
    state.remote_manager.pty_write(&machine_id, &id, &data).await
}

#[tauri::command]
pub async fn remote_pty_resize(state: State<'_, AppState>, machine_id: String, id: String, cols: u16, rows: u16) -> Result<(), String> {
    state.remote_manager.pty_resize(&machine_id, &id, cols, rows).await
}

#[tauri::command]
pub async fn remote_pty_kill(state: State<'_, AppState>, machine_id: String, id: String) -> Result<(), String> {
    state.remote_manager.pty_kill(&machine_id, &id).await
}

// --- SPKI certificate pinning ---

#[tauri::command]
#[tracing::instrument]
pub async fn remote_probe_spki(url: String) -> Result<String, String> {
    remote::probe_spki_hash(&url).await
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub async fn remote_add_pin(state: State<'_, AppState>, machine_id: String, pin: String) -> Result<(), String> {
    state.remote_manager.add_spki_pin(&machine_id, pin).await
}

#[tauri::command]
#[tracing::instrument(skip(state))]
pub async fn remote_remove_pin(state: State<'_, AppState>, machine_id: String, pin: String) -> Result<(), String> {
    state.remote_manager.remove_spki_pin(&machine_id, &pin).await
}
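The pinning commands (`remote_probe_spki`, `remote_add_pin`, `remote_remove_pin`) together implement trust-on-first-use: probe the server's SPKI hash, let the user pin it, then verify on later connections. The decision at connect time reduces to a three-way verdict; the sketch below illustrates that concept only, it is not the `remote` module's actual code:

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum PinVerdict {
    FirstUse,  // no pins recorded yet: prompt the user to pin
    Trusted,   // probed hash matches a stored pin
    Mismatch,  // pins exist but none match: refuse to connect
}

/// Sketch of the trust-on-first-use decision behind SPKI pinning.
/// `stored` holds the base64 SPKI hashes pinned for one machine.
fn check_pin(stored: &HashSet<String>, probed: &str) -> PinVerdict {
    if stored.is_empty() {
        PinVerdict::FirstUse
    } else if stored.contains(probed) {
        PinVerdict::Trusted
    } else {
        PinVerdict::Mismatch
    }
}

fn main() {
    let mut pins = HashSet::new();
    println!("{:?}", check_pin(&pins, "hash-a"));
    pins.insert("hash-a".to_string());
    println!("{:?}", check_pin(&pins, "hash-a"));
}
```

Keeping a set (rather than a single pin) per machine allows graceful key rotation: pin the new key alongside the old one, then remove the old pin.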
59
src-tauri/src/commands/search.rs
Normal file
@@ -0,0 +1,59 @@
use crate::AppState;
use crate::search::SearchResult;
use tauri::State;

#[tauri::command]
pub fn search_init(state: State<'_, AppState>) -> Result<(), String> {
    // SearchDb is already initialized during app setup; this is a no-op
    // but allows the frontend to confirm readiness.
    let _db = &state.search_db;
    Ok(())
}

#[tauri::command]
pub fn search_query(
    state: State<'_, AppState>,
    query: String,
    limit: Option<i32>,
) -> Result<Vec<SearchResult>, String> {
    state.search_db.search_all(&query, limit.unwrap_or(20))
}

#[tauri::command]
pub fn search_rebuild(state: State<'_, AppState>) -> Result<(), String> {
    state.search_db.rebuild_index()
}

#[tauri::command]
pub fn search_index_message(
    state: State<'_, AppState>,
    session_id: String,
    role: String,
    content: String,
) -> Result<(), String> {
    state.search_db.index_message(&session_id, &role, &content)
}

#[tauri::command]
pub fn search_index_task(
    state: State<'_, AppState>,
    task_id: String,
    title: String,
    description: String,
    status: String,
    assigned_to: String,
) -> Result<(), String> {
    state.search_db.index_task(&task_id, &title, &description, &status, &assigned_to)
}

#[tauri::command]
pub fn search_index_btmsg(
    state: State<'_, AppState>,
    msg_id: String,
    from_agent: String,
    to_agent: String,
    content: String,
    channel: String,
) -> Result<(), String> {
    state.search_db.index_btmsg(&msg_id, &from_agent, &to_agent, &content, &channel)
}
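`search_query` forwards raw user input to FTS5, where characters like `-` and `"` have query syntax meaning and can produce syntax errors. A common defensive step (an assumption about how one might feed input to `search_all`, not necessarily what `SearchDb` does) is to quote each whitespace-separated token as an FTS5 string, doubling any embedded quotes:

```rust
/// Quote each token as an FTS5 string literal so user input such as
/// `dead-letter` is matched literally instead of being parsed as
/// FTS5 query syntax. Embedded double quotes are doubled, per the
/// FTS5 string-literal rule.
fn sanitize_fts5_query(input: &str) -> String {
    input
        .split_whitespace()
        .map(|tok| format!("\"{}\"", tok.replace('"', "\"\"")))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    println!("{}", sanitize_fts5_query("dead-letter queue"));
}
```

The trade-off is that quoting disables operators like `OR` and prefix `*`; an app can expose a separate "advanced query" path that skips sanitization for power users.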
34
src-tauri/src/commands/secrets.rs
Normal file
@@ -0,0 +1,34 @@
use crate::secrets::SecretsManager;

#[tauri::command]
pub fn secrets_store(key: String, value: String) -> Result<(), String> {
    SecretsManager::store_secret(&key, &value)
}

#[tauri::command]
pub fn secrets_get(key: String) -> Result<Option<String>, String> {
    SecretsManager::get_secret(&key)
}

#[tauri::command]
pub fn secrets_delete(key: String) -> Result<(), String> {
    SecretsManager::delete_secret(&key)
}

#[tauri::command]
pub fn secrets_list() -> Result<Vec<String>, String> {
    SecretsManager::list_keys()
}

#[tauri::command]
pub fn secrets_has_keyring() -> bool {
    SecretsManager::has_keyring()
}

#[tauri::command]
pub fn secrets_known_keys() -> Vec<String> {
    crate::secrets::KNOWN_KEYS
        .iter()
        .map(|s| s.to_string())
        .collect()
}
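These commands call associated functions on `SecretsManager` directly, which binds tests to the OS keyring. One way to keep the same API shape testable is to code against a small trait and back it with an in-memory map in tests; everything below (the trait name included) is an illustrative assumption, not the crate's design:

```rust
use std::collections::BTreeMap;

/// Hypothetical seam mirroring the secrets command surface.
trait SecretBackend {
    fn store(&mut self, key: &str, value: &str) -> Result<(), String>;
    fn get(&self, key: &str) -> Result<Option<String>, String>;
    fn delete(&mut self, key: &str) -> Result<(), String>;
    fn list(&self) -> Vec<String>;
}

/// Map-backed stand-in for the OS keyring; BTreeMap keeps
/// list() output deterministically sorted.
#[derive(Default)]
struct MemoryBackend(BTreeMap<String, String>);

impl SecretBackend for MemoryBackend {
    fn store(&mut self, key: &str, value: &str) -> Result<(), String> {
        self.0.insert(key.to_string(), value.to_string());
        Ok(())
    }
    fn get(&self, key: &str) -> Result<Option<String>, String> {
        Ok(self.0.get(key).cloned())
    }
    fn delete(&mut self, key: &str) -> Result<(), String> {
        self.0.remove(key).map(|_| ()).ok_or_else(|| "no such key".to_string())
    }
    fn list(&self) -> Vec<String> {
        self.0.keys().cloned().collect()
    }
}

fn main() {
    let mut b = MemoryBackend::default();
    b.store("ANTHROPIC_API_KEY", "sk-test").unwrap();
    println!("{:?}", b.list());
}
```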
81
src-tauri/src/commands/session.rs
Normal file
@@ -0,0 +1,81 @@
use tauri::State;
use crate::AppState;
use crate::session::{Session, LayoutState, SshSession};

// --- Session persistence ---

#[tauri::command]
pub fn session_list(state: State<'_, AppState>) -> Result<Vec<Session>, String> {
    state.session_db.list_sessions()
}

#[tauri::command]
pub fn session_save(state: State<'_, AppState>, session: Session) -> Result<(), String> {
    state.session_db.save_session(&session)
}

#[tauri::command]
pub fn session_delete(state: State<'_, AppState>, id: String) -> Result<(), String> {
    state.session_db.delete_session(&id)
}

#[tauri::command]
pub fn session_update_title(state: State<'_, AppState>, id: String, title: String) -> Result<(), String> {
    state.session_db.update_title(&id, &title)
}

#[tauri::command]
pub fn session_touch(state: State<'_, AppState>, id: String) -> Result<(), String> {
    state.session_db.touch_session(&id)
}

#[tauri::command]
pub fn session_update_group(state: State<'_, AppState>, id: String, group_name: String) -> Result<(), String> {
    state.session_db.update_group(&id, &group_name)
}

// --- Layout ---

#[tauri::command]
pub fn layout_save(state: State<'_, AppState>, layout: LayoutState) -> Result<(), String> {
    state.session_db.save_layout(&layout)
}

#[tauri::command]
pub fn layout_load(state: State<'_, AppState>) -> Result<LayoutState, String> {
    state.session_db.load_layout()
}

// --- Settings ---

#[tauri::command]
pub fn settings_get(state: State<'_, AppState>, key: String) -> Result<Option<String>, String> {
    state.session_db.get_setting(&key)
}

#[tauri::command]
pub fn settings_set(state: State<'_, AppState>, key: String, value: String) -> Result<(), String> {
    state.session_db.set_setting(&key, &value)
}

#[tauri::command]
pub fn settings_list(state: State<'_, AppState>) -> Result<Vec<(String, String)>, String> {
    state.session_db.get_all_settings()
}

// --- SSH sessions ---

#[tauri::command]
pub fn ssh_session_list(state: State<'_, AppState>) -> Result<Vec<SshSession>, String> {
    state.session_db.list_ssh_sessions()
}

#[tauri::command]
pub fn ssh_session_save(state: State<'_, AppState>, session: SshSession) -> Result<(), String> {
    state.session_db.save_ssh_session(&session)
}

#[tauri::command]
pub fn ssh_session_delete(state: State<'_, AppState>, id: String) -> Result<(), String> {
    state.session_db.delete_ssh_session(&id)
}
43
src-tauri/src/commands/watcher.rs
Normal file
@@ -0,0 +1,43 @@
use tauri::State;
use crate::AppState;
use crate::fs_watcher::FsWatcherStatus;

#[tauri::command]
pub fn file_watch(
    app: tauri::AppHandle,
    state: State<'_, AppState>,
    pane_id: String,
    path: String,
) -> Result<String, String> {
    state.file_watcher.watch(&app, &pane_id, &path)
}

#[tauri::command]
pub fn file_unwatch(state: State<'_, AppState>, pane_id: String) {
    state.file_watcher.unwatch(&pane_id);
}

#[tauri::command]
pub fn file_read(state: State<'_, AppState>, path: String) -> Result<String, String> {
    state.file_watcher.read_file(&path)
}

#[tauri::command]
pub fn fs_watch_project(
    app: tauri::AppHandle,
    state: State<'_, AppState>,
    project_id: String,
    cwd: String,
) -> Result<(), String> {
    state.fs_watcher.watch_project(&app, &project_id, &cwd)
}

#[tauri::command]
pub fn fs_unwatch_project(state: State<'_, AppState>, project_id: String) {
    state.fs_watcher.unwatch_project(&project_id);
}

#[tauri::command]
pub fn fs_watcher_status(state: State<'_, AppState>) -> FsWatcherStatus {
    state.fs_watcher.status()
}
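Filesystem watchers typically fire a burst of events for a single logical save (editors write, truncate, rename), so watchers like these usually debounce per path before emitting to the frontend. This diff does not show `fs_watcher`'s strategy; the sketch below is an assumption about one common approach, written with explicit timestamps so it is testable without sleeping:

```rust
use std::collections::HashMap;
use std::time::Duration;

/// Per-path debouncer: an event is delivered only if at least
/// `window` has elapsed since the last delivered event for that path.
struct Debouncer {
    window: Duration,
    last_fire: HashMap<String, Duration>, // path -> time of last accepted event
}

impl Debouncer {
    fn new(window: Duration) -> Self {
        Self { window, last_fire: HashMap::new() }
    }

    /// `now` is time since an arbitrary fixed origin (e.g. process start).
    /// Returns true if the event should be delivered, false if suppressed.
    fn accept(&mut self, path: &str, now: Duration) -> bool {
        match self.last_fire.get(path) {
            Some(&t) if now < t + self.window => false,
            _ => {
                self.last_fire.insert(path.to_string(), now);
                true
            }
        }
    }
}

fn main() {
    let mut d = Debouncer::new(Duration::from_millis(100));
    println!("{}", d.accept("notes.md", Duration::from_millis(0)));
    println!("{}", d.accept("notes.md", Duration::from_millis(50)));
    println!("{}", d.accept("notes.md", Duration::from_millis(200)));
}
```

Taking `now` as a parameter instead of calling `Instant::now()` internally is what makes the window logic unit-testable.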
317
src-tauri/src/ctx.rs
Normal file
@@ -0,0 +1,317 @@
// ctx — Read-only access to the Claude Code context manager database
// Database: ~/.claude-context/context.db (managed by ctx CLI tool)
// Path configurable via new_with_path() for test isolation.

use rusqlite::{Connection, params};
use serde::Serialize;
use std::path::PathBuf;
use std::sync::Mutex;

#[derive(Debug, Clone, Serialize)]
pub struct CtxEntry {
    pub project: String,
    pub key: String,
    pub value: String,
    pub updated_at: String,
}

#[derive(Debug, Clone, Serialize)]
pub struct CtxSummary {
    pub project: String,
    pub summary: String,
    pub created_at: String,
}

pub struct CtxDb {
    conn: Mutex<Option<Connection>>,
    path: PathBuf,
}

impl CtxDb {
    #[cfg(test)]
    fn default_db_path() -> PathBuf {
        dirs::home_dir()
            .unwrap_or_default()
            .join(".claude-context")
            .join("context.db")
    }

    #[cfg(test)]
    pub fn new() -> Self {
        Self::new_with_path(Self::default_db_path())
    }

    /// Create a CtxDb with a custom database path (for test isolation).
    pub fn new_with_path(db_path: PathBuf) -> Self {
        let conn = if db_path.exists() {
            Connection::open_with_flags(
                &db_path,
                rusqlite::OpenFlags::SQLITE_OPEN_READ_ONLY | rusqlite::OpenFlags::SQLITE_OPEN_NO_MUTEX,
            ).ok()
        } else {
            None
        };

        Self { conn: Mutex::new(conn), path: db_path }
    }

    /// Create the context database directory and schema, then open a read-only connection.
    pub fn init_db(&self) -> Result<(), String> {
        let db_path = &self.path;

        // Create parent directory
        if let Some(parent) = db_path.parent() {
            std::fs::create_dir_all(parent)
                .map_err(|e| format!("Failed to create directory: {e}"))?;
        }

        // Open read-write to create schema
        let conn = Connection::open(db_path)
            .map_err(|e| format!("Failed to create database: {e}"))?;

        conn.execute_batch("PRAGMA journal_mode=WAL;").map_err(|e| format!("WAL mode failed: {e}"))?;

        conn.execute_batch(
            "CREATE TABLE IF NOT EXISTS sessions (
                name TEXT PRIMARY KEY,
                description TEXT,
                work_dir TEXT,
                created_at TEXT DEFAULT (datetime('now'))
            );

            CREATE TABLE IF NOT EXISTS contexts (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                project TEXT NOT NULL,
                key TEXT NOT NULL,
                value TEXT NOT NULL,
                updated_at TEXT DEFAULT (datetime('now')),
                UNIQUE(project, key)
            );

            CREATE TABLE IF NOT EXISTS shared (
                key TEXT PRIMARY KEY,
                value TEXT NOT NULL,
                updated_at TEXT DEFAULT (datetime('now'))
            );

            CREATE TABLE IF NOT EXISTS summaries (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                project TEXT NOT NULL,
                summary TEXT NOT NULL,
                created_at TEXT DEFAULT (datetime('now'))
            );

            CREATE VIRTUAL TABLE IF NOT EXISTS contexts_fts USING fts5(
                project, key, value, content=contexts, content_rowid=id
            );

            CREATE VIRTUAL TABLE IF NOT EXISTS shared_fts USING fts5(
                key, value, content=shared
            );

            CREATE TRIGGER IF NOT EXISTS contexts_ai AFTER INSERT ON contexts BEGIN
                INSERT INTO contexts_fts(rowid, project, key, value)
                VALUES (new.id, new.project, new.key, new.value);
            END;

            CREATE TRIGGER IF NOT EXISTS contexts_ad AFTER DELETE ON contexts BEGIN
                INSERT INTO contexts_fts(contexts_fts, rowid, project, key, value)
                VALUES ('delete', old.id, old.project, old.key, old.value);
            END;

            CREATE TRIGGER IF NOT EXISTS contexts_au AFTER UPDATE ON contexts BEGIN
                INSERT INTO contexts_fts(contexts_fts, rowid, project, key, value)
                VALUES ('delete', old.id, old.project, old.key, old.value);
                INSERT INTO contexts_fts(rowid, project, key, value)
                VALUES (new.id, new.project, new.key, new.value);
            END;"
        ).map_err(|e| format!("Schema creation failed: {e}"))?;

        drop(conn);

        // Re-open as read-only for normal operation
        let ro_conn = Connection::open_with_flags(
            db_path,
            rusqlite::OpenFlags::SQLITE_OPEN_READ_ONLY | rusqlite::OpenFlags::SQLITE_OPEN_NO_MUTEX,
        ).map_err(|e| format!("Failed to reopen database: {e}"))?;

        let mut lock = self.conn.lock().map_err(|_| "ctx database lock poisoned".to_string())?;
        *lock = Some(ro_conn);

        Ok(())
    }

    /// Register a project in the ctx database (creates if not exists).
    /// Opens a brief read-write connection; the main self.conn stays read-only.
    pub fn register_project(&self, name: &str, description: &str, work_dir: Option<&str>) -> Result<(), String> {
        let db_path = &self.path;
        let conn = Connection::open(db_path)
            .map_err(|e| format!("ctx database not found: {e}"))?;

        conn.execute(
            "INSERT OR IGNORE INTO sessions (name, description, work_dir) VALUES (?1, ?2, ?3)",
            rusqlite::params![name, description, work_dir],
        ).map_err(|e| format!("Failed to register project: {e}"))?;

        Ok(())
    }

    pub fn get_context(&self, project: &str) -> Result<Vec<CtxEntry>, String> {
        let lock = self.conn.lock().map_err(|_| "ctx database lock poisoned".to_string())?;
        let conn = lock.as_ref().ok_or("ctx database not found")?;

        let mut stmt = conn
            .prepare("SELECT project, key, value, updated_at FROM contexts WHERE project = ?1 ORDER BY key")
            .map_err(|e| format!("ctx query failed: {e}"))?;

        let entries = stmt
            .query_map(params![project], |row| {
                Ok(CtxEntry {
                    project: row.get(0)?,
                    key: row.get(1)?,
                    value: row.get(2)?,
                    updated_at: row.get(3)?,
                })
            })
            .map_err(|e| format!("ctx query failed: {e}"))?
            .collect::<Result<Vec<_>, _>>()
            .map_err(|e| format!("ctx row read failed: {e}"))?;

        Ok(entries)
    }

    pub fn get_shared(&self) -> Result<Vec<CtxEntry>, String> {
        let lock = self.conn.lock().map_err(|_| "ctx database lock poisoned".to_string())?;
        let conn = lock.as_ref().ok_or("ctx database not found")?;

        let mut stmt = conn
            .prepare("SELECT key, value, updated_at FROM shared ORDER BY key")
            .map_err(|e| format!("ctx query failed: {e}"))?;

        let entries = stmt
            .query_map([], |row| {
                Ok(CtxEntry {
                    project: "shared".to_string(),
                    key: row.get(0)?,
                    value: row.get(1)?,
                    updated_at: row.get(2)?,
                })
            })
            .map_err(|e| format!("ctx query failed: {e}"))?
            .collect::<Result<Vec<_>, _>>()
            .map_err(|e| format!("ctx row read failed: {e}"))?;

        Ok(entries)
    }

    pub fn get_summaries(&self, project: &str, limit: i64) -> Result<Vec<CtxSummary>, String> {
        let lock = self.conn.lock().map_err(|_| "ctx database lock poisoned".to_string())?;
        let conn = lock.as_ref().ok_or("ctx database not found")?;

        let mut stmt = conn
            .prepare("SELECT project, summary, created_at FROM summaries WHERE project = ?1 ORDER BY created_at DESC LIMIT ?2")
            .map_err(|e| format!("ctx query failed: {e}"))?;

        let summaries = stmt
            .query_map(params![project, limit], |row| {
                Ok(CtxSummary {
                    project: row.get(0)?,
                    summary: row.get(1)?,
                    created_at: row.get(2)?,
                })
            })
            .map_err(|e| format!("ctx query failed: {e}"))?
            .collect::<Result<Vec<_>, _>>()
            .map_err(|e| format!("ctx row read failed: {e}"))?;

        Ok(summaries)
    }

    pub fn search(&self, query: &str) -> Result<Vec<CtxEntry>, String> {
        let lock = self.conn.lock().map_err(|_| "ctx database lock poisoned".to_string())?;
        let conn = lock.as_ref().ok_or("ctx database not found")?;

        let mut stmt = conn
            .prepare("SELECT project, key, value FROM contexts_fts WHERE contexts_fts MATCH ?1 LIMIT 50")
            .map_err(|e| format!("ctx search failed: {e}"))?;

        let entries = stmt
            .query_map(params![query], |row| {
                Ok(CtxEntry {
                    project: row.get(0)?,
                    key: row.get(1)?,
                    value: row.get(2)?,
                    updated_at: String::new(), // FTS5 virtual table doesn't store updated_at
                })
            })
            .map_err(|e| {
                let msg = e.to_string();
                if msg.contains("fts5") || msg.contains("syntax") {
                    format!("Invalid search query syntax: {e}")
                } else {
                    format!("ctx search failed: {e}")
                }
            })?
            .collect::<Result<Vec<_>, _>>()
            .map_err(|e| format!("ctx row read failed: {e}"))?;

        Ok(entries)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    /// Create a CtxDb with conn set to None, simulating a missing database.
    fn make_missing_db() -> CtxDb {
        CtxDb { conn: Mutex::new(None), path: PathBuf::from("/nonexistent/context.db") }
    }

    #[test]
    fn test_new_does_not_panic() {
        // CtxDb::new() should never panic even if ~/.claude-context/context.db
        // doesn't exist — it just stores None for the connection.
        let _db = CtxDb::new();
    }

    #[test]
    fn test_get_context_missing_db_returns_error() {
        let db = make_missing_db();
        let result = db.get_context("any-project");
        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "ctx database not found");
    }

    #[test]
    fn test_get_shared_missing_db_returns_error() {
        let db = make_missing_db();
        let result = db.get_shared();
        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "ctx database not found");
    }

    #[test]
    fn test_get_summaries_missing_db_returns_error() {
        let db = make_missing_db();
        let result = db.get_summaries("any-project", 10);
        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "ctx database not found");
    }

    #[test]
    fn test_search_missing_db_returns_error() {
        let db = make_missing_db();
        let result = db.search("anything");
        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "ctx database not found");
    }

    #[test]
    fn test_search_empty_query_missing_db_returns_error() {
        let db = make_missing_db();
        let result = db.search("");
        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "ctx database not found");
    }
}
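The `search` method classifies rusqlite errors by sniffing the message text: FTS5 syntax complaints get a user-facing "invalid query" message, everything else a generic failure. Factored as a pure function (the name `classify_search_error` is ours), that classification rule can be tested without a database:

```rust
/// Mirrors the error-mapping closure in CtxDb::search: messages
/// mentioning "fts5" or "syntax" are treated as bad user queries,
/// everything else as an internal search failure.
fn classify_search_error(msg: &str) -> String {
    if msg.contains("fts5") || msg.contains("syntax") {
        format!("Invalid search query syntax: {msg}")
    } else {
        format!("ctx search failed: {msg}")
    }
}

fn main() {
    println!("{}", classify_search_error("fts5: syntax error near \"-\""));
    println!("{}", classify_search_error("database is locked"));
}
```

Note the check is substring-based, so an unrelated error that happens to contain "syntax" would also be reported as a bad query; a stricter version could match rusqlite's error codes instead of message text.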
11
src-tauri/src/event_sink.rs
Normal file
@@ -0,0 +1,11 @@
use bterminal_core::event::EventSink;
use tauri::{AppHandle, Emitter};

/// Bridges bterminal-core's EventSink trait to Tauri's event system.
pub struct TauriEventSink(pub AppHandle);

impl EventSink for TauriEventSink {
    fn emit(&self, event: &str, payload: serde_json::Value) {
        let _ = self.0.emit(event, &payload);
    }
}
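The value of this adapter is the seam it creates: bterminal-core emits named events against the `EventSink` trait and never sees Tauri, so core logic can be tested with a collecting sink instead of a running app. The sketch below illustrates that pattern with plain `String` payloads to stay dependency-free (the real trait in `bterminal_core::event` takes `serde_json::Value` and may differ in detail):

```rust
use std::cell::RefCell;

/// Simplified analogue of bterminal_core::event::EventSink,
/// with String payloads instead of serde_json::Value.
trait EventSink {
    fn emit(&self, event: &str, payload: String);
}

/// Test double: records every emitted event instead of sending it anywhere.
#[derive(Default)]
struct CollectingSink {
    events: RefCell<Vec<(String, String)>>,
}

impl EventSink for CollectingSink {
    fn emit(&self, event: &str, payload: String) {
        self.events.borrow_mut().push((event.to_string(), payload));
    }
}

/// Example producer: core code only ever sees the trait object,
/// never the concrete transport (Tauri in production, this in tests).
fn notify_pane_closed(sink: &dyn EventSink, pane_id: &str) {
    sink.emit("pane:closed", format!("{{\"id\":\"{pane_id}\"}}"));
}

fn main() {
    let sink = CollectingSink::default();
    notify_pane_closed(&sink, "p1");
    println!("{}", sink.events.borrow().len());
}
```

In production the same producer runs against `TauriEventSink`, whose `emit` forwards to `AppHandle::emit` and deliberately ignores delivery errors (`let _ =`), since a dropped UI event should not fail core logic.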