AutoCleanEEG can optionally generate textual summaries using a large language model. These reports live under the task-root reports/llm/ folder and are built from the processing log and PDF report.
AI reporting is disabled by default. To enable it, set "ai_reporting": true in your task configuration and make an OPENAI_API_KEY available in the environment.
Privacy Considerations for LLM Reporting

For 100% privacy, use a local language model instead of cloud-based APIs. When using OpenAI's API, ensure your setup includes zero data retention and a Business Associate Agreement (BAA) if handling Protected Health Information (PHI). Note that most data sent is deidentified and consists of aggregate processing metrics, not raw EEG data.

Files Produced

  • context.json: Serialized run context used to produce all text.
  • methods.md: Deterministic methods paragraph created without any API calls.
  • executive_summary.md: Study-ready summary produced by the LLM (requires API key).
  • qc_narrative.md: LLM-generated quality-control narrative and recommendations (requires API key).
  • llm_trace.jsonl: Hash-based trace of prompts and results for compliance.
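As a concrete illustration, a single llm_trace.jsonl line might look like the sketch below. The field names are assumptions for illustration only, not the pipeline's documented schema; the hash values are elided:

{"artifact": "executive_summary.md", "prompt_sha256": "…", "result_sha256": "…"}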

Enabling the Feature

  1. Add "ai_reporting": true to your task configuration or workspace template (see the sketch after this list).
  2. Ensure an OPENAI_API_KEY is available in the environment.
  3. Run the pipeline as usual – reports appear under reports/llm/ (or a subfolder keyed by the subject base name).
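A minimal sketch of steps 1 and 2, assuming a JSON task configuration and a POSIX shell (your template format and surrounding keys may differ):

{
  "ai_reporting": true
}

export OPENAI_API_KEY="..."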

CLI Usage

You can regenerate reports or chat about a run from the command line:
autocleaneeg-pipeline report create --run-id demo --context-json ./context.json \
  --out-dir ./reports
autocleaneeg-pipeline report chat --context-json ./context.json
report create always writes context.json and methods.md. If an API key is present, it also generates executive_summary.md and qc_narrative.md.
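Assuming the flags above, the resulting layout under ./reports would look roughly like this; the two LLM files appear only when an API key is present:

reports/
  context.json
  methods.md
  executive_summary.md
  qc_narrative.md
  llm_trace.jsonl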

When to Use

  • Share short summaries with collaborators.
  • Capture deterministic methods text for manuscripts.
  • Quickly review quality metrics without opening the full PDF.
A missing API key or expected input file never breaks your run; the pipeline simply skips the LLM outputs.
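Because the LLM outputs are optional, downstream scripts should not assume they exist. Below is a small shell sketch of one way to fall back to the deterministic methods text; the paths assume the default reports/llm/ location:

#!/bin/sh
# Prefer the LLM summary; fall back to the deterministic methods paragraph.
if [ -f reports/llm/executive_summary.md ]; then
  cat reports/llm/executive_summary.md
else
  cat reports/llm/methods.md   # always written, even without an API key
fi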