Cartero now has an operator deck, not just a rewrite.
The platform has moved beyond the first Go migration. Cartero now ships a write-capable local admin surface, stable JSON output across the CLI, delivery profiles, launch schedules, richer findings correlation, a deeper first-party plugin catalog, and stronger end-to-end verification.
$ cartero serve --addr 127.0.0.1:8080
$ open http://127.0.0.1:8080/ops
$ cartero --json workspace status
$ cartero profile create --name dry-run-preview ...
$ cartero schedule create --name q2-finance ...
$ cartero finding import --file scans/nuclei.jsonl
One workspace, three real front doors.
CLI for repeatable runs
Preview, validation, import, migration, export, planning, and plugin inspection all stay scriptable from the terminal.
Browser-based operator deck
cartero serve now exposes a write-capable deck at /ops with audience import, findings import, clone import, report export, and legacy migration.
Stable JSON API contract
The CLI gained --json, and the same workspace data is exposed under /api/... for dashboards, local tooling, and CI glue.
Embedded planning layer
Delivery profiles and launch schedules now live inside SQLite next to campaign snapshots, events, imports, and findings.
Correlated findings model
Imported scanner results are deduplicated, categorized, status-tracked, linked to campaigns, and exported with the rest of the workspace.
Portable local runtime
Everything still centers on .cartero/cartero.sqlite. No MongoDB, no daemon, and no extra service choreography.
Plan, operate, correlate, and verify from the same local state.
Seed the workspace
Initialize SQLite, sync plugin manifests, seed template content, and prepare default planning profiles.
cartero workspace init
Open the deck
Launch the local admin surface and use the browser to import audiences, normalize findings, create delivery profiles, and schedule launches.
cartero serve --addr 127.0.0.1:8080
Bring in external signals
CSV, JSON, JSONL, and SARIF imports land in the same workspace used for campaign history, telemetry, and exports.
cartero finding import --file scans/nuclei.jsonl --source nightly-nuclei
Attach delivery intent
Define reusable profiles, connect them to schedules, and carry launch planning forward as workspace data instead of external notes.
cartero schedule create --name q2-finance --profile dry-run-preview ...
Keep the contract testable
Store tests, CLI JSON tests, HTTP flow tests, go vet, and smoke runs all now cover the expanded operator surface.
go test ./...
go vet ./...
bash ./scripts/smoke.sh
Same workspace, same model, different entry points.
cartero --json workspace status
Machine-readable state for scripts and CI.
/ops
Write-capable operator deck for local workflows.
/api/workspace
Structured workspace stats, doctor output, and plugin discovery.
/api/profiles + /api/schedules
Portable launch metadata with no extra service layer.
{
  "ok": true,
  "action": "workspace.get",
  "data": {
    "stats": {
      "template_count": 10,
      "profile_count": 2,
      "schedule_count": 1
    },
    "plugins": {
      "manifests": 9
    }
  }
}
The browser deck, CLI JSON output, and report exports now speak the same shape instead of forcing operators to scrape terminal text.
More first-party packs, broader local coverage.
The built-in template pack is larger now, with more locales, departments, and scenarios. Delivery profiles are seeded alongside manifests, so the operator deck comes up with a usable planning baseline instead of an empty shell.
The bigger surface now has matching tests.
Store coverage
Profiles, schedules, deduplicated findings, and planning stats are asserted directly against the SQLite model.
CLI JSON coverage
Regression tests now cover structured success and failure output, including planning commands and validation failures.
HTTP flow coverage
The admin surface is exercised with browser-style request tests for the deck, uploads, profile creation, schedule creation, exports, and workspace API reads.
Smoke path
The smoke script now runs the new planning commands and JSON paths in addition to the older preview, import, and migration flows.
Modern runtime, preserved history.
Legacy Bolt workspaces
Old embedded Bolt state still imports automatically into SQLite the first time a workspace is opened.
Legacy Mongo exports
People, hits, and credential export files can be brought forward through a one-way migration path and reviewed from the operator deck.
Redacted carry-forward
Historical credential artifacts are converted into redacted findings so review value survives without dragging raw values into the modern platform.