feat: canonical SQL DDL + schema validator + migration tool

- schema/canonical.sql: 29 tables across 3 databases, CHECK constraints,
  foreign keys, 11 indexes, WAL mode, schema_version tracking
- tools/validate-schema.ts: applies DDL to in-memory SQLite, extracts
  PRAGMA table_info + sqlite_master metadata as JSON
- tools/migrate-db.ts: CLI for Tauri→Electrobun data migration with
  atomic transaction, version fencing, INSERT OR IGNORE
- docs/SWITCHING.md: migration guide with prerequisites and troubleshooting
Author: Hibryda
Date: 2026-03-22 03:33:15 +01:00
Parent: 0f75cb8e32
Commit: 631fc2efc8
4 changed files with 718 additions and 0 deletions

docs/SWITCHING.md (new file, 91 lines)
@@ -0,0 +1,91 @@
# Switching from Tauri to Electrobun
This guide covers migrating your data when switching from the Tauri v2/v3 build to the Electrobun build of AGOR.
## Overview
Both stacks use SQLite for persistence but store databases in different locations with slightly different schemas. The `migrate-db` tool copies data from a Tauri source database into an Electrobun target database using the canonical schema as the contract.
**This is a one-way migration.** The source database is opened read-only and never modified. The target database receives copies of all compatible data. Running the tool multiple times is safe (uses `INSERT OR IGNORE`).
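The idempotency hinges on `INSERT OR IGNORE`, which silently skips any row whose primary key already exists in the target. A minimal illustration with the `sqlite3` CLI, using the `settings` table shape from the canonical schema:

```shell
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT NOT NULL);"
sqlite3 "$db" "INSERT OR IGNORE INTO settings VALUES ('theme','dark');"
# Second insert with the same key is silently ignored -- existing row wins
sqlite3 "$db" "INSERT OR IGNORE INTO settings VALUES ('theme','light');"
sqlite3 "$db" "SELECT value FROM settings WHERE key='theme';"  # prints: dark
rm -f "$db"
```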
## Prerequisites
- **Bun** >= 1.0 (ships with bun:sqlite)
- A working Tauri installation with existing data in `~/.local/share/agor/sessions.db`
- The Electrobun build installed (creates `~/.config/agor/settings.db` on first run)
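A quick pre-flight check can confirm the prerequisites before you start (a sketch using the paths listed in this guide; adjust `SRC` if `XDG_DATA_HOME` relocates the Tauri data directory on your system):

```shell
# Pre-flight: verify Bun is on PATH and the source database exists
SRC="$HOME/.local/share/agor/sessions.db"
bun --version                      # needs >= 1.0
if test -f "$SRC"; then echo "source db found: $SRC"; else echo "source db missing"; fi
```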
## Database Locations
| Database | Tauri path | Electrobun path |
|----------|-----------|-----------------|
| Settings + sessions | `~/.local/share/agor/sessions.db` | `~/.config/agor/settings.db` |
| btmsg + bttask | `~/.local/share/agor/btmsg.db` | `~/.local/share/agor/btmsg.db` (shared) |
| FTS5 search | `~/.local/share/agor/search.db` | `~/.local/share/agor/search.db` (shared) |
The btmsg and search databases are already shared between stacks -- no migration needed for those.
## Steps
### 1. Stop both applications
Close the Tauri app and the Electrobun app if either is running. SQLite WAL mode handles concurrent reads, but stopping both avoids partial writes.
### 2. Back up your data
```bash
cp ~/.local/share/agor/sessions.db ~/.local/share/agor/sessions.db.bak
cp ~/.config/agor/settings.db ~/.config/agor/settings.db.bak 2>/dev/null || true  # target may not exist yet
```
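Optionally, confirm the backup is byte-identical before proceeding (`cmp` exits zero only when the files match):

```shell
cmp ~/.local/share/agor/sessions.db ~/.local/share/agor/sessions.db.bak \
  && echo "backup verified"
```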
### 3. Run the migration
```bash
# Migrate settings/sessions from Tauri to Electrobun
bun tools/migrate-db.ts \
--from ~/.local/share/agor/sessions.db \
--to ~/.config/agor/settings.db
```
The tool will:
- Open the source database read-only
- Apply the canonical schema to the target if needed
- Copy all matching tables using `INSERT OR IGNORE` (existing rows are preserved)
- Report per-table row counts
- Write a version fence to `schema_version`
### 4. Verify
Launch the Electrobun app and confirm your projects, settings, and session history appear correctly.
### 5. (Optional) Validate the schema
```bash
bun tools/validate-schema.ts | jq '.tableCount'
# Should print 29 -- the number of tables defined in canonical.sql
```
## Version Fence
After migration, the target database's `schema_version` table contains:
| Column | Value |
|--------|-------|
| `version` | `1` |
| `migration_source` | `migrate-db` |
| `migration_timestamp` | ISO-8601 timestamp of the migration |
This fence prevents accidental re-application of older schemas and provides an audit trail.
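You can inspect the fence directly with the `sqlite3` CLI (a sketch; target path as listed in the table above):

```shell
# Read the version fence written by migrate-db
sqlite3 ~/.config/agor/settings.db \
  "SELECT version, migration_source, migration_timestamp FROM schema_version;"
```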
## Troubleshooting
**"Source database not found"** -- Verify the Tauri data directory path. On some systems, `XDG_DATA_HOME` may override `~/.local/share`.
**"Schema application failed"** -- The canonical.sql file may be out of sync with a newer database. Pull the latest version and retry.
**"Migration failed on table X"** -- The migration runs inside a single transaction. If any table fails, all changes are rolled back. Check the error message for column mismatches, which typically mean the canonical schema needs updating.
## What is NOT Migrated
- **FTS5 search indexes** -- These are virtual tables that cannot be copied via `SELECT *`. Rebuild the index from the Electrobun app (Ctrl+Shift+F, then rebuild).
- **Layout state** -- The Electrobun UI uses a different layout system. Your layout preferences will reset.
- **SSH key files** -- Only the SSH connection metadata (host, port, username) is migrated. Private key files remain on disk at their original paths.

schema/canonical.sql (new file, 299 lines)
@@ -0,0 +1,299 @@
-- canonical.sql — Authoritative DDL for all AGOR SQLite databases.
-- Both Tauri (Rust) and Electrobun (Bun) stacks MUST match this schema.
-- DBs: settings.db, btmsg.db, search.db. See docs/SWITCHING.md.
PRAGMA journal_mode = WAL;
PRAGMA foreign_keys = ON;
PRAGMA busy_timeout = 5000;
-- ── VERSION TRACKING ──────────────────────────────────────────────────
-- Schema version fence. One row per database file.
CREATE TABLE IF NOT EXISTS schema_version (
version INTEGER NOT NULL,
migration_source TEXT, -- e.g. 'tauri', 'electrobun', 'migrate-db'
migration_timestamp TEXT -- ISO-8601 when last migrated
);
-- ── settings.db TABLES ────────────────────────────────────────────────
-- Key-value application settings (theme, fonts, shell, cwd, etc.)
CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT NOT NULL);
-- Project configurations stored as JSON blobs.
CREATE TABLE IF NOT EXISTS projects (id TEXT PRIMARY KEY, config TEXT NOT NULL);
-- Workspace groups (sidebar tabs).
CREATE TABLE IF NOT EXISTS groups (
id TEXT PRIMARY KEY, name TEXT NOT NULL,
icon TEXT NOT NULL, position INTEGER NOT NULL
);
-- User-created custom themes.
CREATE TABLE IF NOT EXISTS custom_themes (
id TEXT PRIMARY KEY, name TEXT NOT NULL,
palette TEXT NOT NULL -- JSON blob (color map)
);
-- User-customized keyboard shortcuts.
CREATE TABLE IF NOT EXISTS keybindings (id TEXT PRIMARY KEY, chord TEXT NOT NULL);
-- Agent sessions per project (provider-agnostic).
CREATE TABLE IF NOT EXISTS agent_sessions (
project_id TEXT NOT NULL,
session_id TEXT PRIMARY KEY,
provider TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'idle'
CHECK (status IN ('idle','running','error','stopped','completed')),
cost_usd REAL NOT NULL DEFAULT 0,
input_tokens INTEGER NOT NULL DEFAULT 0,
output_tokens INTEGER NOT NULL DEFAULT 0,
model TEXT NOT NULL DEFAULT '',
error TEXT,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_agent_sessions_project ON agent_sessions(project_id);
-- Individual agent messages within a session.
CREATE TABLE IF NOT EXISTS agent_messages (
session_id TEXT NOT NULL,
msg_id TEXT NOT NULL,
role TEXT NOT NULL,
content TEXT NOT NULL DEFAULT '',
tool_name TEXT,
tool_input TEXT,
timestamp INTEGER NOT NULL,
cost_usd REAL NOT NULL DEFAULT 0,
input_tokens INTEGER NOT NULL DEFAULT 0,
output_tokens INTEGER NOT NULL DEFAULT 0,
PRIMARY KEY (session_id, msg_id),
FOREIGN KEY (session_id) REFERENCES agent_sessions(session_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_agent_messages_session ON agent_messages(session_id, timestamp);
-- Historical session metrics (cost, tokens, turns) per project.
CREATE TABLE IF NOT EXISTS session_metrics (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL,
session_id TEXT NOT NULL,
start_time INTEGER NOT NULL,
end_time INTEGER NOT NULL,
peak_tokens INTEGER DEFAULT 0,
turn_count INTEGER DEFAULT 0,
tool_call_count INTEGER DEFAULT 0,
cost_usd REAL DEFAULT 0,
model TEXT,
status TEXT NOT NULL,
error_message TEXT
);
CREATE INDEX IF NOT EXISTS idx_session_metrics_project ON session_metrics(project_id);
-- Session anchors — preserved turns surviving compaction.
CREATE TABLE IF NOT EXISTS session_anchors (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
message_id TEXT NOT NULL,
anchor_type TEXT NOT NULL
CHECK (anchor_type IN ('auto','pinned','promoted')),
content TEXT NOT NULL,
estimated_tokens INTEGER NOT NULL,
turn_index INTEGER NOT NULL DEFAULT 0,
created_at INTEGER NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_session_anchors_project ON session_anchors(project_id);
-- Remote relay machine connections.
CREATE TABLE IF NOT EXISTS remote_machines (
id TEXT PRIMARY KEY,
label TEXT NOT NULL,
url TEXT NOT NULL,
token TEXT NOT NULL,
auto_connect INTEGER NOT NULL DEFAULT 0,
spki_pins TEXT NOT NULL DEFAULT '[]', -- JSON array of SPKI hashes
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
-- Legacy v2 layout state (single-row).
CREATE TABLE IF NOT EXISTS layout_state (
id INTEGER PRIMARY KEY CHECK (id = 1),
preset TEXT NOT NULL DEFAULT '1-col', pane_ids TEXT NOT NULL DEFAULT '[]'
);
-- Legacy v2 terminal sessions.
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
type TEXT NOT NULL,
title TEXT NOT NULL,
shell TEXT,
cwd TEXT,
args TEXT,
created_at INTEGER NOT NULL,
last_used_at INTEGER NOT NULL,
group_name TEXT DEFAULT '',
project_id TEXT DEFAULT ''
);
-- SSH connection profiles.
CREATE TABLE IF NOT EXISTS ssh_sessions (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
host TEXT NOT NULL,
port INTEGER NOT NULL DEFAULT 22,
username TEXT NOT NULL,
key_file TEXT DEFAULT '',
folder TEXT DEFAULT '',
color TEXT DEFAULT '#89b4fa',
created_at INTEGER NOT NULL,
last_used_at INTEGER NOT NULL
);
-- ── btmsg.db TABLES — inter-agent messaging & task board ──────────────
-- Registered agents (Tier 1 management + Tier 2 project).
CREATE TABLE IF NOT EXISTS agents (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
role TEXT NOT NULL,
group_id TEXT NOT NULL,
tier INTEGER NOT NULL DEFAULT 2,
model TEXT,
cwd TEXT,
system_prompt TEXT,
status TEXT DEFAULT 'stopped',
last_active_at TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
-- Agent-to-agent visibility ACL.
CREATE TABLE IF NOT EXISTS contacts (
agent_id TEXT NOT NULL, contact_id TEXT NOT NULL,
PRIMARY KEY (agent_id, contact_id)
);
-- Direct messages between agents.
CREATE TABLE IF NOT EXISTS messages (
id TEXT PRIMARY KEY,
from_agent TEXT NOT NULL,
to_agent TEXT NOT NULL,
content TEXT NOT NULL,
read INTEGER DEFAULT 0,
reply_to TEXT,
group_id TEXT NOT NULL,
sender_group_id TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_messages_to ON messages(to_agent, read);
CREATE INDEX IF NOT EXISTS idx_messages_from ON messages(from_agent);
-- Broadcast channels within a group.
CREATE TABLE IF NOT EXISTS channels (
id TEXT PRIMARY KEY, name TEXT NOT NULL, group_id TEXT NOT NULL,
created_by TEXT NOT NULL, created_at TEXT DEFAULT (datetime('now'))
);
-- Channel membership (many-to-many).
CREATE TABLE IF NOT EXISTS channel_members (
channel_id TEXT NOT NULL, agent_id TEXT NOT NULL,
joined_at TEXT DEFAULT (datetime('now')),
PRIMARY KEY (channel_id, agent_id)
);
-- Messages posted to channels.
CREATE TABLE IF NOT EXISTS channel_messages (
id TEXT PRIMARY KEY,
channel_id TEXT NOT NULL,
from_agent TEXT NOT NULL,
content TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_channel_messages ON channel_messages(channel_id, created_at);
-- Agent liveness heartbeats (unix epoch seconds).
CREATE TABLE IF NOT EXISTS heartbeats (agent_id TEXT PRIMARY KEY, timestamp INTEGER NOT NULL);
-- Undeliverable messages (recipient not found, etc.)
CREATE TABLE IF NOT EXISTS dead_letter_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
from_agent TEXT NOT NULL,
to_agent TEXT NOT NULL,
content TEXT NOT NULL,
error TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
-- Audit trail for agent actions.
CREATE TABLE IF NOT EXISTS audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT NOT NULL,
event_type TEXT NOT NULL,
detail TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
-- Per-session message acknowledgment (prevents re-processing).
CREATE TABLE IF NOT EXISTS seen_messages (
session_id TEXT NOT NULL, message_id TEXT NOT NULL,
seen_at INTEGER NOT NULL DEFAULT (unixepoch()),
PRIMARY KEY (session_id, message_id)
);
CREATE INDEX IF NOT EXISTS idx_seen_messages_session ON seen_messages(session_id);
-- Kanban task board.
CREATE TABLE IF NOT EXISTS tasks (
id TEXT PRIMARY KEY,
title TEXT NOT NULL,
description TEXT DEFAULT '',
status TEXT DEFAULT 'todo'
CHECK (status IN ('todo','progress','review','done','blocked')),
priority TEXT DEFAULT 'medium'
CHECK (priority IN ('low','medium','high')),
assigned_to TEXT,
created_by TEXT NOT NULL,
group_id TEXT NOT NULL,
parent_task_id TEXT,
sort_order INTEGER DEFAULT 0,
created_at TEXT DEFAULT (datetime('now')),
updated_at TEXT DEFAULT (datetime('now')),
version INTEGER DEFAULT 1 -- optimistic locking
);
CREATE INDEX IF NOT EXISTS idx_tasks_group ON tasks(group_id);
CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status);
-- Task discussion comments.
CREATE TABLE IF NOT EXISTS task_comments (
id TEXT PRIMARY KEY,
task_id TEXT NOT NULL,
agent_id TEXT NOT NULL,
content TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_task_comments_task ON task_comments(task_id);
-- ── search.db TABLES — FTS5 full-text search ─────────────────────────
-- Agent message search index.
CREATE VIRTUAL TABLE IF NOT EXISTS search_messages USING fts5(
session_id,
role,
content,
timestamp
);
-- Task search index.
CREATE VIRTUAL TABLE IF NOT EXISTS search_tasks USING fts5(
task_id,
title,
description,
status,
assigned_to
);
-- Inter-agent message search index.
CREATE VIRTUAL TABLE IF NOT EXISTS search_btmsg USING fts5(
message_id,
from_agent,
to_agent,
content,
channel_name
);

tools/migrate-db.ts (new file, 219 lines)
@@ -0,0 +1,219 @@
#!/usr/bin/env bun
/**
* migrate-db.ts — Migrate AGOR data from a Tauri (source) database to an
* Electrobun (target) database using the canonical schema.
*
* Usage:
* bun tools/migrate-db.ts --from <source.db> --to <target.db>
* bun tools/migrate-db.ts --from ~/.local/share/agor/sessions.db \
* --to ~/.config/agor/settings.db
*
* Behavior:
* - Opens source DB read-only (never modifies it).
* - Creates/opens target DB, applies canonical.sql if schema_version absent.
* - Copies rows for every table present in BOTH source and target.
* - Wraps all inserts in a single transaction (atomic rollback on failure).
* - Reports per-table row counts.
* - Writes migration fence to schema_version in target.
*/
import { Database } from "bun:sqlite";
import { readFileSync, existsSync } from "fs";
import { join, resolve } from "path";
// ── CLI args ──────────────────────────────────────────────────────────────────
function usage(): never {
console.error("Usage: bun tools/migrate-db.ts --from <source.db> --to <target.db>");
process.exit(1);
}
const args = process.argv.slice(2);
let fromPath = "";
let toPath = "";
for (let i = 0; i < args.length; i++) {
if (args[i] === "--from" && args[i + 1]) fromPath = args[++i];
else if (args[i] === "--to" && args[i + 1]) toPath = args[++i];
else if (args[i] === "--help" || args[i] === "-h") usage();
}
if (!fromPath || !toPath) usage();
fromPath = resolve(fromPath);
toPath = resolve(toPath);
if (!existsSync(fromPath)) {
console.error(`Source database not found: ${fromPath}`);
process.exit(1);
}
// ── Load canonical DDL ────────────────────────────────────────────────────────
const schemaPath = join(import.meta.dir, "..", "schema", "canonical.sql");
let ddl: string;
try {
ddl = readFileSync(schemaPath, "utf-8");
} catch (err) {
console.error(`Failed to read canonical schema: ${err}`);
process.exit(1);
}
// ── Open databases ────────────────────────────────────────────────────────────
const sourceDb = new Database(fromPath, { readonly: true });
const targetDb = new Database(toPath);
// Apply pragmas to target
targetDb.exec("PRAGMA journal_mode = WAL");
targetDb.exec("PRAGMA foreign_keys = OFF"); // Disable during migration for insert order flexibility
targetDb.exec("PRAGMA busy_timeout = 5000");
// Apply canonical schema to target if needed
const hasVersion = (() => {
try {
const row = targetDb
.query<{ cnt: number }, []>("SELECT COUNT(*) AS cnt FROM schema_version")
.get();
return (row?.cnt ?? 0) > 0;
} catch {
return false;
}
})();
if (!hasVersion) {
console.log("Applying canonical schema to target database...");
targetDb.exec(ddl);
}
// ── Discover migratable tables ────────────────────────────────────────────────
/** Get regular (non-virtual, non-internal) table names from a database. */
function getTableNames(db: Database): Set<string> {
const rows = db
.prepare(
`SELECT name FROM sqlite_master
WHERE type = 'table'
AND sql NOT LIKE 'CREATE VIRTUAL TABLE%'
AND name NOT LIKE 'sqlite_%'
AND name NOT LIKE '%_content'
AND name NOT LIKE '%_data'
AND name NOT LIKE '%_idx'
AND name NOT LIKE '%_config'
AND name NOT LIKE '%_docsize'
ORDER BY name`,
)
.all() as Array<{ name: string }>;
return new Set(rows.map((r) => r.name));
}
const sourceTables = getTableNames(sourceDb);
const targetTables = getTableNames(targetDb);
// Only migrate tables present in both source and target
const migratable = [...sourceTables].filter((t) => targetTables.has(t));
if (migratable.length === 0) {
console.log("No overlapping tables found between source and target.");
sourceDb.close();
targetDb.close();
process.exit(0);
}
console.log(`\nMigrating ${migratable.length} tables from:`);
console.log(` source: ${fromPath}`);
console.log(` target: ${toPath}\n`);
// ── Migrate data ──────────────────────────────────────────────────────────────
interface MigrationResult {
table: string;
rows: number;
skipped: boolean;
error?: string;
}
const results: MigrationResult[] = [];
const migrate = targetDb.transaction(() => {
for (const table of migratable) {
// Skip schema_version — we write our own fence
if (table === "schema_version") {
results.push({ table, rows: 0, skipped: true });
continue;
}
try {
// Read all rows from source
const rows = sourceDb.prepare(`SELECT * FROM "${table}"`).all();
if (rows.length === 0) {
results.push({ table, rows: 0, skipped: false });
continue;
}
// Get column names from the first row
const columns = Object.keys(rows[0] as Record<string, unknown>);
const placeholders = columns.map(() => "?").join(", ");
const colList = columns.map((c) => `"${c}"`).join(", ");
const insertStmt = targetDb.prepare(
`INSERT OR IGNORE INTO "${table}" (${colList}) VALUES (${placeholders})`,
);
let count = 0;
for (const row of rows) {
const values = columns.map((c) => (row as Record<string, unknown>)[c] ?? null);
insertStmt.run(...values);
count++;
}
results.push({ table, rows: count, skipped: false });
} catch (err) {
const msg = err instanceof Error ? err.message : String(err);
results.push({ table, rows: 0, skipped: false, error: msg });
throw new Error(`Migration failed on table '${table}': ${msg}`);
}
}
});
try {
migrate();
} catch (err) {
console.error(`\nMIGRATION ROLLED BACK: ${err}`);
sourceDb.close();
targetDb.close();
process.exit(1);
}
// ── Write version fence ───────────────────────────────────────────────────────
const timestamp = new Date().toISOString();
targetDb.exec("DELETE FROM schema_version");
targetDb
.prepare(
"INSERT INTO schema_version (version, migration_source, migration_timestamp) VALUES (?, ?, ?)",
)
.run(1, "migrate-db", timestamp);
// Re-enable foreign keys
targetDb.exec("PRAGMA foreign_keys = ON");
// ── Report ────────────────────────────────────────────────────────────────────
console.log("Table Rows Status");
console.log("─".repeat(50));
let totalRows = 0;
for (const r of results) {
const status = r.skipped ? "skipped" : r.error ? `ERROR: ${r.error}` : "ok";
const rowStr = String(r.rows).padStart(8);
console.log(`${r.table.padEnd(25)}${rowStr} ${status}`);
totalRows += r.rows;
}
console.log("─".repeat(50));
console.log(`Total: ${totalRows} rows migrated across ${results.filter((r) => !r.skipped && !r.error).length} tables`);
console.log(`Version fence: v1 at ${timestamp}`);
sourceDb.close();
targetDb.close();

tools/validate-schema.ts (new file, 109 lines)
@@ -0,0 +1,109 @@
#!/usr/bin/env bun
/**
* validate-schema.ts — Apply canonical.sql to an in-memory SQLite database
* and extract structural metadata for CI comparison.
*
* Usage: bun tools/validate-schema.ts
* Output: JSON to stdout with tables, columns, indexes, and schema version.
*/
import { Database } from "bun:sqlite";
import { readFileSync } from "fs";
import { join } from "path";
// ── Load canonical DDL ────────────────────────────────────────────────────────
const schemaPath = join(import.meta.dir, "..", "schema", "canonical.sql");
let ddl: string;
try {
ddl = readFileSync(schemaPath, "utf-8");
} catch (err) {
console.error(`Failed to read ${schemaPath}: ${err}`);
process.exit(1);
}
// ── Apply to in-memory DB ─────────────────────────────────────────────────────
const db = new Database(":memory:");
try {
db.exec(ddl);
} catch (err) {
console.error(`Schema application failed: ${err}`);
process.exit(1);
}
// ── Extract metadata ──────────────────────────────────────────────────────────
interface ColumnInfo {
cid: number;
name: string;
type: string;
notnull: number;
dflt_value: string | null;
pk: number;
}
interface TableMeta {
name: string;
type: string; // 'table' | 'virtual'
columns: ColumnInfo[];
indexes: string[];
}
// Get all tables from sqlite_master. SQLite records virtual tables with
// type = 'table' as well, so virtual-ness is detected from the CREATE
// statement rather than the type column.
const masterRows = db
.prepare(
`SELECT name, sql FROM sqlite_master
WHERE type = 'table'
AND name NOT LIKE 'sqlite_%'
AND name NOT LIKE '%_content'
AND name NOT LIKE '%_data'
AND name NOT LIKE '%_idx'
AND name NOT LIKE '%_config'
AND name NOT LIKE '%_docsize'
ORDER BY name`,
)
.all() as Array<{ name: string; sql: string | null }>;
const tables: TableMeta[] = [];
for (const { name, sql } of masterRows) {
// Get column info (some virtual tables don't support table_info)
let columns: ColumnInfo[] = [];
try {
columns = db
.prepare(`PRAGMA table_info('${name}')`)
.all() as ColumnInfo[];
} catch {
// Fall through with an empty column list
}
// Get indexes for this table
const indexRows = db
.prepare(
`SELECT name FROM sqlite_master
WHERE type = 'index' AND tbl_name = ?
ORDER BY name`,
)
.all(name) as Array<{ name: string }>;
tables.push({
name,
type: /CREATE VIRTUAL TABLE/i.test(sql ?? "") ? "virtual" : "table",
columns,
indexes: indexRows.map((r) => r.name),
});
}
}
// ── Output ────────────────────────────────────────────────────────────────────
const output = {
schemaFile: "schema/canonical.sql",
version: 1,
tableCount: tables.length,
tables,
};
console.log(JSON.stringify(output, null, 2));
db.close();