# Report Chat — Interactive Q&A on Your Last Run
Report Chat lets you ask concise, natural-language questions about a processing run. It works out of the box with your latest run and can reconstruct a minimal context even if no LLM context was saved, using the per-file processing log CSV and the generated PDF report. The experience is optimized for discoverability:

- Defaults to your latest run via the database (no paths required)
- If you decline using the latest, it presents a styled, readable table of recent runs and lets you pick one by number
- Robust to auto-backups and renames; paths are resolved automatically
- Clear, friendly messages and clean cancel behavior
Note: Report Chat uses an LLM to answer questions about your run context. You need a valid `OPENAI_API_KEY` (or compatible) to chat. The separate `report create` command can still generate deterministic methods text without an API key.
## Quick Start
- The CLI finds your latest run in the database
- It resolves any backup move recorded for that run
- It tries to load `llm_reports/context.json`; if not found, it reconstructs a minimal context from `*_processing_log.csv` (per-file log) and `*_autoclean_report.pdf` (standard report)
- It starts an interactive prompt for your Q&A
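A minimal quick-start sketch, assuming the `autocleaneeg-pipeline` entry point named under See Also (adjust the command name to your install):

```bash
# An OpenAI-compatible key is required for chat
export OPENAI_API_KEY=sk-...

# Chat about the latest run recorded in the workspace database
autocleaneeg-pipeline report chat
```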
## Prerequisites
- A completed (or failed) run stored in the database
- Standard outputs on disk for that run:
  - Per-file processing log CSV (saved under `reports/run_reports/`, with a copy in `exports/`)
  - PDF report under `reports/run_reports/`
- For LLM chat: `OPENAI_API_KEY` (and optionally `OPENAI_BASE_URL` for compatible providers)
You do NOT need to have enabled `ai_reporting`; chat reconstructs context from always-present outputs when a saved LLM context is missing.
## Default Behavior (Latest Run)
When you run `report chat` with no flags:
- The CLI initializes the database path to your workspace output directory.
- It queries the latest run (by `created_at`/`id`) from the `pipeline_runs` table.
- If a directory backup was recorded for that run, it applies the backup-aware resolver to locate moved folders.
- It prompts: “Use latest run?” with a summary of run id, time, and task.
  - Yes/Enter → chat starts on that run.
  - No → you’re shown a rich, interactive table of recent runs.
- To chat about a specific saved context instead, pass `--context-json` (see below).
## Interactive Run Selector
If you decline using the latest run, a readable table is presented (newest first). Columns:

- `#`: index to select
- `Run ID`: short form of the unique id
- `Created`: timestamp
- `Task`: task/class name
- `File`: original input file name
- `Status`: e.g., `completed`, `failed`
- `Success`: `Yes` or `No`
- `Backup`: `Yes` if a directory backup was recorded for this run
- `Artifacts`: count of output artifacts (from `json_summary` when present)
## How Context Is Built

Report Chat answers strictly from a JSON context. It obtains that context in one of two ways:

- Preferred: use `llm_reports/context.json` if it exists (created by `report create` or during automatic LLM reporting if enabled)
- Fallback reconstruction: a minimal but sufficient context is built from:
  - The per-file `*_processing_log.csv` (always written after runs)
  - The `*_autoclean_report.pdf` path, for reference
  - The run record (for `run_id`, `task`, `input_file`)

The reconstructed context includes, where derivable from the per-file log:

- `filter_params`: high/low-pass, notch freqs/widths
- `epochs`: epoch limits and totals (kept + rejected when available)
- `ica` (best-effort): total components and removed indices
- `resample_hz`, `durations_s`, `n_channels`
- `figures.autoclean_report_pdf`: the PDF path if present
- `notes`: includes flags when present in the CSV

The reconstruction is designed to be robust, using only always-present outputs. If `llm_reports/context.json` exists, it is used as-is.
## Backup-Aware Path Resolution
If AutoClean detected an existing directory and created a timestamped backup before your latest run, that move is recorded in the database under the current run’s metadata. Report Chat uses that record to resolve moved folders automatically when locating artifacts; see `docs/directory-backup-resolution.md` for details.
## Manual Context Mode (Optional)
If you want to chat about a specific context JSON you saved elsewhere, pass `--context-json` with its path. The file should follow the `RunContext` schema (nested dicts are accepted; they will be coerced automatically). This bypasses database and filesystem discovery.
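A hedged sketch of this mode. The field names mirror those listed under “How Context Is Built,” but treat the keys and values as illustrative rather than the definitive `RunContext` schema:

```bash
# Write a small context JSON by hand (illustrative keys and example values)
cat > my_context.json <<'EOF'
{
  "run_id": "example-run-id",
  "task": "RestingState",
  "input_file": "sub-01_task-rest_eeg.set",
  "resample_hz": 250,
  "n_channels": 129,
  "notes": ["illustrative example values"]
}
EOF

# Chat against that context, bypassing database and filesystem discovery
autocleaneeg-pipeline report chat --context-json my_context.json
```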
## Environment & Security
- `OPENAI_API_KEY` — required to chat (OpenAI-compatible provider)
- `OPENAI_BASE_URL` — optional; set for compatible self-hosted endpoints
- Report Chat does not persist your chat content. It submits a short prompt to the LLM with the context, and displays the LLM’s JSON answer.
- The separate `report create` command writes an `llm_trace.jsonl` (with hashed user content) for transparency. Chat does not append to this trace.
- Chat is not gated by the compliance authentication decorator. If your deployment requires extra gating, please coordinate with admins.
## Examples
- Default (Latest Run)
- Pick Another Run
- Manual Context JSON
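Hedged command sketches for the three cases above, again assuming the `autocleaneeg-pipeline report chat` entry point:

```bash
# Default (Latest Run): confirm the "Use latest run?" prompt with Enter
autocleaneeg-pipeline report chat

# Pick Another Run: same command; answer "No" at the prompt to open the
# interactive run selector, then choose a row by its # index
autocleaneeg-pipeline report chat

# Manual Context JSON: bypass discovery with a saved context file
autocleaneeg-pipeline report chat --context-json path/to/context.json
```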
## Troubleshooting
Common issues and remedies:

- “Could not locate latest run context or reconstruct from outputs.”
  - Ensure there is at least one run in the DB (process a file via the pipeline)
  - Ensure your per-file processing log CSV and PDF are present for that run
  - If using manual JSON mode, verify the path and format
- Missing API key
  - Set `OPENAI_API_KEY` in your shell: `export OPENAI_API_KEY=sk-...`
- Interactive table “style” errors
  - Chat now uses AutoClean’s themed console; styles like `title`/`header` are defined. If you see style errors, update the CLI to the latest build.
- Graceful cancel
  - Press Enter at the selection prompt to cancel without error (exit code 0).
## FAQ
Does chat require `ai_reporting` to be enabled?
No. Chat reconstructs context from the per-file log and PDF if no LLM context exists. The `ai_reporting` flag controls whether LLM-backed summaries are written automatically at the end of a run; chat is independent.
Where do answers come from?
The LLM answers using only the JSON context (either `llm_reports/context.json` or the reconstructed context). The prompt instructs the model not to speculate beyond the context.
Is chat content stored?
No. The CLI does not persist chat content. If you need auditability, use `report create`, which writes `llm_trace.jsonl` with hashed prompts and model metadata.
What if the run’s folders were auto-backed up?
The system records the backup in the DB and resolves moved paths automatically when locating artifacts. See Backup-Aware Path Resolution above.
Can I change the model or temperature?
Not via CLI flags yet. You can set `OPENAI_BASE_URL` and rely on the defaults (`gpt-4o-mini`, `temperature=0.0`). If you’d like model/temperature flags, open an issue.
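For a compatible provider, the environment-variable route looks roughly like this sketch (the endpoint URL is a placeholder; the `gpt-4o-mini` / `temperature=0.0` defaults still apply):

```bash
# Point the OpenAI-compatible client at a self-hosted endpoint (placeholder URL)
export OPENAI_BASE_URL=https://llm.example.internal/v1
export OPENAI_API_KEY=sk-...   # whatever credential your provider expects
autocleaneeg-pipeline report chat
```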
## See Also
- `autocleaneeg-pipeline report create` — Generate methods + executive summary + QC narrative (writes trace, requires API key for LLM parts)
- `docs/directory-backup-resolution.md` — How path resolution works with auto-backups (Phase 1/2 plan)