Section 9 Labs / GitHub Pages

Cartero now has an operator deck, not just a rewrite.

The platform has moved beyond the first Go migration. Cartero now ships a write-capable local admin surface, stable JSON output across the CLI, delivery profiles, launch schedules, richer findings correlation, a deeper first-party plugin catalog, and stronger end-to-end verification.

Current surface
$ cartero serve --addr 127.0.0.1:8080
$ open /ops
$ cartero --json workspace status
$ cartero profile create --name dry-run-preview ...
$ cartero schedule create --name q2-finance ...
$ cartero finding import --file scans/nuclei.jsonl
0 runtime services to provision
9 first-party plugin manifests
10 seeded template scenarios
4 finding import formats
Product surface

One workspace, three real front doors.

01

CLI for repeatable runs

Preview, validation, import, migration, export, planning, and plugin inspection all stay scriptable from the terminal.

02

Browser-based operator deck

cartero serve now exposes a write-capable deck at /ops with audience import, findings import, clone import, report export, and legacy migration.

03

Stable JSON API contract

The CLI gained --json, and the same workspace data is exposed under /api/... for dashboards, local tooling, and CI glue.

04

Embedded planning layer

Delivery profiles and launch schedules now live inside SQLite next to campaign snapshots, events, imports, and findings.
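The planning rows can be pictured as two small record types plus a referential check; a minimal sketch in Go, where every field name (`SendRate`, `DryRun`, `LaunchAt`, and so on) is an assumption rather than Cartero's actual schema:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// DeliveryProfile and LaunchSchedule are hypothetical shapes for the
// planning data stored next to campaign snapshots; field names assumed.
type DeliveryProfile struct {
	Name     string
	SendRate int // messages per hour
	DryRun   bool
}

type LaunchSchedule struct {
	Name     string
	Profile  string // references DeliveryProfile.Name
	LaunchAt time.Time
}

// validateSchedule checks that a schedule points at a known profile,
// the kind of integrity check an embedded planning layer needs.
func validateSchedule(s LaunchSchedule, profiles map[string]DeliveryProfile) error {
	if _, ok := profiles[s.Profile]; !ok {
		return errors.New("unknown delivery profile: " + s.Profile)
	}
	return nil
}

func main() {
	profiles := map[string]DeliveryProfile{
		"dry-run-preview": {Name: "dry-run-preview", DryRun: true},
	}
	s := LaunchSchedule{Name: "q2-finance", Profile: "dry-run-preview", LaunchAt: time.Now()}
	fmt.Println(validateSchedule(s, profiles) == nil) // prints true
}
```

Keeping both record types in the same SQLite file as campaign history is what lets a schedule reference a profile without any external service.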

05

Correlated findings model

Imported scanner results are deduplicated, categorized, status-tracked, linked to campaigns, and exported with the rest of the workspace.
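One plausible way to picture the deduplication step is a stable fingerprint over the identity fields of each finding; a sketch, assuming a hash-based key (the real model's fields and dedup rule may differ):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Finding is a hypothetical imported scanner result; the real model
// also carries status, category, and campaign links.
type Finding struct {
	Source string
	Name   string
	Target string
}

// fingerprint hashes the fields that identify "the same finding seen
// again", so re-imports collapse onto one record.
func fingerprint(f Finding) string {
	h := sha256.Sum256([]byte(f.Source + "\x00" + f.Name + "\x00" + f.Target))
	return hex.EncodeToString(h[:])
}

// dedupe keeps the first occurrence of each fingerprint.
func dedupe(in []Finding) []Finding {
	seen := map[string]bool{}
	var out []Finding
	for _, f := range in {
		fp := fingerprint(f)
		if !seen[fp] {
			seen[fp] = true
			out = append(out, f)
		}
	}
	return out
}

func main() {
	in := []Finding{
		{"nightly-nuclei", "exposed-panel", "intranet.example"},
		{"nightly-nuclei", "exposed-panel", "intranet.example"}, // duplicate
		{"nightly-nuclei", "weak-tls", "mail.example"},
	}
	fmt.Println(len(dedupe(in))) // prints 2
}
```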

06

Portable local runtime

Everything still centers on .cartero/cartero.sqlite. No MongoDB, no daemon, and no extra service choreography.

Operator workflow

Plan, operate, correlate, and verify from the same local state.

Bootstrap

Seed the workspace

Initialize SQLite, sync plugin manifests, seed template content, and prepare default planning profiles.

cartero workspace init
Operate

Open the deck

Launch the local admin surface and use the browser to import audiences, normalize findings, create delivery profiles, and schedule launches.

cartero serve --addr 127.0.0.1:8080
Correlate

Bring in external signals

CSV, JSON, JSONL, and SARIF imports land in the same workspace used for campaign history, telemetry, and exports.

cartero finding import --file scans/nuclei.jsonl --source nightly-nuclei
Plan

Attach delivery intent

Define reusable profiles, connect them to schedules, and carry launch planning forward as workspace data instead of external notes.

cartero schedule create --name q2-finance --profile dry-run-preview ...
Verify

Keep the contract testable

Store tests, CLI JSON tests, HTTP flow tests, go vet, and smoke runs all now cover the expanded operator surface.

go test ./...
go vet ./...
bash ./scripts/smoke.sh
Shared contract

Same workspace, same model, different entry points.

CLI and browser stay aligned
Terminal: cartero --json workspace status

Machine-readable state for scripts and CI.

Browser: /ops

Write-capable operator deck for local workflows.

HTTP: /api/workspace

Structured workspace stats, doctor output, and plugin discovery.

Planning: /api/profiles + /api/schedules

Portable launch metadata with no extra service layer.

Expanded web surface
{
  "ok": true,
  "action": "workspace.get",
  "data": {
    "stats": {
      "template_count": 10,
      "profile_count": 2,
      "schedule_count": 1
    },
    "plugins": {
      "manifests": 9
    }
  }
}

The browser deck, CLI JSON output, and report exports now speak the same shape instead of forcing operators to scrape terminal text.

Plugin and content catalog

More first-party packs, broader local coverage.

local-preview · preview.render
template-library · campaign.template
clone-importer · campaign.import
analytics-export · results.export
audience-sync · audience.sync
engagement-recorder · events.ingest
findings-correlator · findings.correlate
delivery-profiles · delivery.profile
schedule-planner · schedule.plan

The built-in template pack is larger now, with more locales, departments, and scenarios. Delivery profiles are seeded alongside manifests, so the operator deck comes up with a usable planning baseline instead of an empty shell.

10 seeded templates
2 default delivery profiles
3 planning-aware surfaces
Verification

The bigger surface now has matching tests.

Store coverage

Profiles, schedules, deduplicated findings, and planning stats are asserted directly against the SQLite model.

CLI JSON coverage

Regression tests now cover structured success and failure output, including planning commands and validation failures.

HTTP flow coverage

The admin surface is exercised with browser-style request tests for the deck, uploads, profile creation, schedule creation, exports, and workspace API reads.

Smoke path

The smoke script now runs the new planning commands and JSON paths in addition to the older preview, import, and migration flows.

Migration path

Modern runtime, preserved history.

Legacy Bolt workspaces

Old embedded Bolt state still imports automatically into SQLite the first time a workspace is opened.

Legacy Mongo exports

People, hits, and credential export files can be brought forward through a one-way migration path and reviewed from the operator deck.

Redacted carry-forward

Historical credential artifacts are converted into redacted findings so review value survives without dragging raw values into the modern platform.
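The redaction step can be sketched as a small pure function; the policy below (keep a two-character prefix plus the length) is an illustrative assumption, not Cartero's actual rule:

```go
package main

import (
	"fmt"
	"strings"
)

// redactCredential keeps enough shape for review (a short prefix and
// the original length) while discarding the raw secret value.
func redactCredential(value string) string {
	if value == "" {
		return ""
	}
	keep := 2
	if len(value) < keep {
		keep = len(value)
	}
	return value[:keep] + strings.Repeat("*", len(value)-keep)
}

func main() {
	fmt.Println(redactCredential("hunter2")) // prints hu*****
}
```

Storing only the redacted form in the findings table is what lets historical review survive the migration without carrying raw credentials forward.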