`reports/llm/` folder and are built from the processing log and
PDF report.
By default, AI reporting is disabled. Set `"ai_reporting": true` in the task config and provide an `OPENAI_API_KEY` to enable it.
## Files Produced
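As a reference, a minimal task-config fragment enabling the feature might look like the following; only the `ai_reporting` key is documented here, and any surrounding fields in your configuration are unaffected:

```json
{
  "ai_reporting": true
}
```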
| File | Purpose |
|---|---|
| `context.json` | Serialized run context used to produce all text. |
| `methods.md` | Deterministic methods paragraph created without any API calls. |
| `executive_summary.md` | Study-ready summary produced by the LLM (requires API key). |
| `qc_narrative.md` | LLM-generated quality-control narrative and recommendations (requires API key). |
| `llm_trace.jsonl` | Hash-based trace of prompts and results for compliance. |
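Because `llm_trace.jsonl` is JSON Lines, it is easy to inspect programmatically. A minimal sketch, assuming each line is one JSON object (the field names inside each record are not specified here):

```python
import json
from pathlib import Path


def read_trace(path):
    """Parse a JSON Lines trace file into a list of dict records.

    Assumes each non-empty line of llm_trace.jsonl is a single JSON
    object; the record schema itself is an assumption, not documented.
    """
    records = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line:  # skip blank lines defensively
            records.append(json.loads(line))
    return records
```

This keeps the whole file in memory, which is fine for per-run traces; switch to streaming line-by-line if traces grow large.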
## Enabling the Feature
- Add `"ai_reporting": true` to your task configuration or workspace template.
- Ensure an `OPENAI_API_KEY` is available in the environment.
- Run the pipeline as usual; reports appear under `reports/llm/` (or a subfolder keyed by the subject base name).
## CLI Usage
You can regenerate reports or chat about a run from the command line. `report create` always writes `context.json` and `methods.md`. If an API key is present, it also generates `executive_summary.md` and `qc_narrative.md`.
## When to Use
- Share short summaries with collaborators.
- Capture deterministic methods text for manuscripts.
- Quickly review quality metrics without opening the full PDF.
Missing API keys or expected input files never break your run; the
pipeline simply skips LLM outputs.
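The skip-don't-fail behavior described above can be sketched as a simple guard; the config key matches the docs, but the function name and its use are illustrative, not the pipeline's actual implementation:

```python
import os


def llm_enabled(config):
    """Return True only when AI reporting is switched on AND a key exists.

    Illustrative guard: when this returns False, a pipeline following the
    behavior above would skip the LLM outputs rather than raise an error.
    """
    return bool(config.get("ai_reporting")) and bool(
        os.environ.get("OPENAI_API_KEY")
    )
```

With a guard like this, `context.json` and `methods.md` can still be produced deterministically while `executive_summary.md` and `qc_narrative.md` are skipped.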