The summary.json artifact provides a high-level view of episode outcomes, metrics, and flags. It’s designed for quick access to key information without parsing the full event timeline.

Schema overview

{
  "schema_version": "1.2.0",
  "episode_id": "ep_2024_abc123_s0",
  "task": "Draft release notes for v1.2.0",
  "started_at": "2024-01-15T10:30:00Z",
  "duration_sec": 5.0,
  "flags": { ... },
  "metrics": { ... },
  "insight": { ... },
  "tags": { ... }
}

Root fields

schema_version (string, required): Version of the summary schema. Currently "1.2.0".
episode_id (string, required): Unique episode identifier.
task (string, required): The original task or goal.
started_at (string, required): ISO 8601 start timestamp.
duration_sec (number, required): Total episode duration in seconds.
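
As a quick sanity check, the required root fields can be validated before any downstream processing. A minimal sketch using a plain dict shaped like the example above; the field set mirrors this table, and nothing here is part of the noesis API:

```python
# Required root fields, per the schema above.
REQUIRED_ROOT_FIELDS = {"schema_version", "episode_id", "task", "started_at", "duration_sec"}

def missing_root_fields(summary: dict) -> set:
    """Return the required root fields absent from a summary dict."""
    return REQUIRED_ROOT_FIELDS - set(summary)

summary = {
    "schema_version": "1.2.0",
    "episode_id": "ep_2024_abc123_s0",
    "task": "Draft release notes for v1.2.0",
    "started_at": "2024-01-15T10:30:00Z",
    "duration_sec": 5.0,
}
print(missing_root_fields(summary))  # set()
```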

Flags

Configuration flags for the episode run.
{
  "flags": {
    "intuition": true,
    "mode": "meta",
    "direction": {
      "applied": 2,
      "vetoed": 0,
      "policy": "SafetyPolicy@1.0",
      "threshold": 0.75,
      "last_diff": ["plan.steps[0].parameters"]
    }
  }
}
flags.intuition (boolean, required): Whether intuition was enabled.
flags.mode (string, required): Execution mode, either "meta" or "minimal".
flags.direction (object, optional): Direction statistics (meta mode only).
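
Since flags.direction is present only in meta mode, read it defensively. A sketch over a plain dict copied from the example above (not a noesis API call):

```python
flags = {
    "intuition": True,
    "mode": "meta",
    "direction": {"applied": 2, "vetoed": 0, "policy": "SafetyPolicy@1.0", "threshold": 0.75},
}

# "direction" is absent in minimal mode, so default to an empty dict.
direction = flags.get("direction", {})
interventions = direction.get("applied", 0) + direction.get("vetoed", 0)
print(f"Policy interventions: {interventions}")  # Policy interventions: 2
```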

Metrics

Core execution metrics.
{
  "metrics": {
    "success": 1,
    "plan_count": 2,
    "act_count": 3,
    "reflect_count": 1,
    "veto_count": 0,
    "learn_proposals": 1,
    "learn_applied": 0,
    "latencies": {
      "first_action_ms": 150,
      "total_ms": 5000,
      "planning_ms": 500,
      "execution_ms": 4000
    }
  }
}
metrics.success (number, required): Success indicator: 1 for success, 0 for failure.
metrics.plan_count (number, required): Number of planning iterations.
metrics.act_count (number, required): Number of actions executed.
metrics.reflect_count (number, required): Number of reflection passes.
metrics.veto_count (number, required): Number of policy vetoes.
metrics.learn_proposals (number, optional): Number of learning proposals generated.
metrics.learn_applied (number, optional): Number of learning proposals applied.
metrics.latencies (object, optional): Timing metrics in milliseconds.
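
The latency fields can be combined to see where the time went. A sketch on the example values above; "overhead" is our label, not part of the schema (it is simply total time minus planning and execution):

```python
metrics = {
    "success": 1,
    "latencies": {"first_action_ms": 150, "total_ms": 5000, "planning_ms": 500, "execution_ms": 4000},
}

lat = metrics["latencies"]
# planning_ms and execution_ms are optional, so default them to 0.
overhead_ms = lat["total_ms"] - lat.get("planning_ms", 0) - lat.get("execution_ms", 0)
print(f"Overhead: {overhead_ms}ms ({overhead_ms / lat['total_ms']:.0%} of total)")  # Overhead: 500ms (10% of total)
```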

Insight

Advanced insight metrics (meta mode only; may be absent in minimal mode).
{
  "insight": {
    "metrics": {
      "plan_adherence": 0.95,
      "tool_coverage": 1.0,
      "branching_factor": 2,
      "plan_revisions": 1
    }
  }
}
insight.metrics (object, optional): Computed insight metrics.
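
Because the insight block may be absent in minimal mode, chained .get() calls keep reads safe. A sketch with an illustrative 0.8 adherence threshold (the threshold is our choice, not part of the schema):

```python
summary = {"insight": {"metrics": {"plan_adherence": 0.95, "tool_coverage": 1.0}}}

# Each .get() tolerates a missing level, so this also works in minimal mode.
adherence = summary.get("insight", {}).get("metrics", {}).get("plan_adherence")
if adherence is not None and adherence < 0.8:
    print("Warning: execution diverged from the plan")
```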

Tags

User-provided metadata.
{
  "tags": {
    "environment": "staging",
    "team": "platform",
    "priority": "high"
  }
}
tags (object, optional): Key-value pairs of user-provided metadata.
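
Tags are free-form, which makes them handy for grouping summaries. A sketch over plain dicts (the tag names mirror the example above; none of this is a noesis API):

```python
summaries = [
    {"episode_id": "ep_a", "tags": {"environment": "staging", "team": "platform"}},
    {"episode_id": "ep_b", "tags": {"environment": "production"}},
    {"episode_id": "ep_c"},  # "tags" itself may be absent
]

staging = [
    s["episode_id"] for s in summaries
    if s.get("tags", {}).get("environment") == "staging"
]
print(staging)  # ['ep_a']
```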

Complete example

{
  "schema_version": "1.2.0",
  "episode_id": "ep_2024_abc123_s0",
  "task": "Draft release notes for v1.2.0",
  "started_at": "2024-01-15T10:30:00Z",
  "duration_sec": 5.0,
  "flags": {
    "intuition": true,
    "mode": "meta",
    "direction": {
      "applied": 1,
      "vetoed": 0,
      "policy": "SafetyPolicy@1.0",
      "threshold": 0.75,
      "last_diff": ["plan.steps[0].description"]
    }
  },
  "metrics": {
    "success": 1,
    "plan_count": 2,
    "act_count": 3,
    "reflect_count": 1,
    "veto_count": 0,
    "learn_proposals": 0,
    "learn_applied": 0,
    "latencies": {
      "first_action_ms": 150,
      "total_ms": 5000
    }
  },
  "insight": {
    "metrics": {
      "plan_adherence": 0.95,
      "tool_coverage": 1.0,
      "branching_factor": 2,
      "plan_revisions": 1
    }
  },
  "tags": {
    "environment": "staging",
    "team": "platform"
  }
}

Reading summaries

Python

import noesis as ns

summary = ns.summary.read("ep_abc123")

# Basic info
print(f"Task: {summary['task']}")
print(f"Success: {summary['metrics']['success']}")

# Metrics
metrics = summary['metrics']
print(f"Actions: {metrics['act_count']}")
print(f"Vetoes: {metrics['veto_count']}")
print(f"First action: {metrics['latencies']['first_action_ms']}ms")

# Insight (if available)
insight = summary.get('insight', {}).get('metrics', {})
print(f"Plan adherence: {insight.get('plan_adherence', 'N/A')}")

CLI

# Human-readable
noesis show ep_abc123

# Raw JSON
noesis show ep_abc123 -j

# Extract specific fields
noesis show ep_abc123 -j | jq '.metrics.success'

File access

jq . runs/demo/ep_abc123/summary.json

Use cases

Check success rate

import noesis as ns

episodes = ns.list_runs(limit=100)
successes = sum(
    ns.summary.read(ep['episode_id'])['metrics']['success']
    for ep in episodes
)
rate = successes / len(episodes) if episodes else 0.0
print(f"Success rate: {rate:.2%}")

Find vetoed episodes

import noesis as ns

episodes = ns.list_runs(limit=100)
vetoed = [
    ep for ep in episodes
    if ns.summary.read(ep['episode_id'])['metrics']['veto_count'] > 0
]
print(f"Vetoed episodes: {len(vetoed)}")

Analyze latencies

import noesis as ns
import statistics

episodes = ns.list_runs(limit=100)
latencies = []
for ep in episodes:
    ms = ns.summary.read(ep['episode_id'])['metrics']['latencies'].get('first_action_ms')
    if ms is not None:  # skip episodes without a recorded first action
        latencies.append(ms)

if latencies:
    print(f"Median first action: {statistics.median(latencies):.0f}ms")
if len(latencies) >= 2:  # quantiles() needs at least two data points
    print(f"P95 first action: {statistics.quantiles(latencies, n=20)[-1]:.0f}ms")
