## Artifact snapshot
| Property | Value |
|---|---|
| File | `prompts.jsonl` |
| Location | `runs/<label>/<episode_id>/prompts.jsonl` |
| Schema | `$schema_name: "prompt"`, `$schema_version: "1.1.0"` |
| Kind | attachment in `manifest.json` |
| Opt-in | `prompt_provenance_enabled` (bool) |
| Mode | `prompt_provenance_mode` (`full`, `hash_only`, `redacted`) |
## Record fields (schema 1.1.0)
- Core: `$schema_name`, `$schema_version`, `episode_id`, `phase` (`observe`, `interpret`, `plan`, `governance`, `act`, `reflect`, `learn`), `agent_id`, `fingerprint` (`sha256:<hex>` of the normalized prompt), `timestamp` (ISO-8601), `mode` (`full`/`hash_only`/`redacted`)
- Linkage: `event_id` (from `events.jsonl`, optional), `outcome_event_id` (optional)
- Content (stored only in `full` mode): `rendered`, `template`, `variables`, `template_id`
- Metadata: `role` (`system`/`user`/`assistant`/`tool`), `kind` (`system`, `reasoning`, `governance`, etc.), `model`, `tags` (mirrors episode tags)
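To make the field layout concrete, the sketch below builds one `hash_only` record and serializes it as a JSONL line. All values (`ep-001`, `agent-1`, the model name) are illustrative placeholders, not output from a real run; only the field names and the `sha256:<hex>` fingerprint convention come from the schema above.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(normalized_prompt: str) -> str:
    # "sha256:<hex>" of the normalized prompt text, per the schema
    return "sha256:" + hashlib.sha256(normalized_prompt.encode("utf-8")).hexdigest()

record = {
    "$schema_name": "prompt",
    "$schema_version": "1.1.0",
    "episode_id": "ep-001",        # illustrative value
    "phase": "plan",
    "agent_id": "agent-1",         # illustrative value
    "fingerprint": fingerprint("You are a planner. Goal: ..."),
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "mode": "hash_only",           # content fields omitted in this mode
    "role": "system",
    "kind": "reasoning",
    "model": "example-model",      # illustrative value
    "tags": [],
}

line = json.dumps(record)  # one line of prompts.jsonl
```

Note that in `hash_only` mode the record carries no `rendered`, `template`, or `variables` keys at all; only the fingerprint ties it back to the prompt content.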
## Storage modes
| Mode | `rendered`/`template` | `variables` | `fingerprint` | Use |
|---|---|---|---|---|
| `full` | Stored | Stored | Stored | Debug/research |
| `hash_only` | Omitted | Omitted | Stored | Production (minimal PII) |
| `redacted` | `"__redacted__"` | Omitted | Stored | Compliance-sensitive |
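The table reduces to a small transformation over a full record. The following sketch shows that logic; `apply_mode` is a hypothetical name for illustration, not the recorder's actual API.

```python
def apply_mode(record: dict, mode: str) -> dict:
    """Strip or redact content fields per the storage-mode table."""
    out = dict(record)
    out["mode"] = mode
    if mode == "full":
        return out                        # rendered/template/variables kept
    out.pop("variables", None)            # omitted in hash_only and redacted
    if mode == "hash_only":
        out.pop("rendered", None)         # content omitted entirely
        out.pop("template", None)
    else:  # redacted
        out["rendered"] = "__redacted__"  # placeholder instead of content
        out["template"] = "__redacted__"
    return out
```

In every mode the `fingerprint` survives, so records from different modes remain comparable and deduplicable by prompt identity.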
Implementation: `noesis/runtime/prompt_recorder.py`.
## Enable and configure
- Python: `ns.set(prompt_provenance_enabled=True, prompt_provenance_mode="hash_only")`
- Env: set `NOESIS_PROMPT_PROVENANCE_ENABLED=true` and `NOESIS_PROMPT_PROVENANCE_MODE=hash_only`
- TOML: `prompt_provenance_enabled = true` and `prompt_provenance_mode = "hash_only"`
- Session: `SessionBuilder(...).with_config(prompt_provenance_enabled=True, prompt_provenance_mode="full")`
Once enabled, `prompts.jsonl` is written. Writes are blocked after `manifest.json` is finalized (artifact immutability).
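One way to picture the immutability rule is as a guard on the episode directory; this is purely illustrative (the real finalization check may use manifest state rather than file existence):

```python
from pathlib import Path

def can_append(episode_dir: Path) -> bool:
    # Illustrative check: once manifest.json exists (i.e. the episode is
    # finalized), prompts.jsonl is immutable and writes must be rejected.
    return not (episode_dir / "manifest.json").exists()
```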
## Integration points (current coverage)
`PromptRecorder` is wired into:

- `plan` (direction planner)
- `governance` (pre-act governor)
- `reflect` (reflection)
- `interpret` (when intuition uses the LLM)
## Reading prompts
There is no dedicated helper yet; read the JSONL directly. Join records back to `events.jsonl` via `event_id`, or filter by `phase` to analyze prompt usage.
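A minimal reading sketch along those lines, assuming only the file layout and field names documented above (the helper names are hypothetical):

```python
import json
from collections import Counter
from pathlib import Path

def load_prompts(episode_dir: Path) -> list[dict]:
    """Parse runs/<label>/<episode_id>/prompts.jsonl into a list of records."""
    path = episode_dir / "prompts.jsonl"
    with path.open() as f:
        return [json.loads(line) for line in f if line.strip()]

def by_phase(records: list[dict]) -> Counter:
    """Count prompt records per phase (plan, governance, reflect, ...)."""
    return Counter(r.get("phase") for r in records)

def for_event(records: list[dict], event_id: str) -> list[dict]:
    """Join back to events.jsonl via the optional event_id field."""
    return [r for r in records if r.get("event_id") == event_id]
```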
