Decision Lifecycle
Decisions are not permanent. The world changes — requirements shift, load increases, the team grows, dependencies update. quint-code treats every decision as having a shelf life.
The lifecycle
- Pending — decision recorded, not yet implemented (no measurement)
- Shipped — implemented and measured (verdict: accepted/partial/failed)
- Active — shipped, R_eff ≥ 0.5, not expired
- Stale — R_eff < 0.5, or valid_until expired, or code drift detected
- Superseded — replaced by a newer decision
- Deprecated — no longer relevant (archived)
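The states above, and plausible transitions between them, can be sketched as a small state machine. The state names follow the list; the transition map is an illustrative assumption, not quint-code's actual internals:

```python
from enum import Enum

class DecisionState(Enum):
    PENDING = "pending"        # recorded, not yet implemented
    SHIPPED = "shipped"        # implemented and measured
    ACTIVE = "active"          # R_eff >= 0.5 and not expired
    STALE = "stale"            # R_eff < 0.5, expired, or code drift
    SUPERSEDED = "superseded"  # replaced by a newer decision
    DEPRECATED = "deprecated"  # archived, no longer relevant

# Assumed transition map for illustration; the real tool may allow other paths.
TRANSITIONS = {
    DecisionState.PENDING: {DecisionState.SHIPPED, DecisionState.DEPRECATED},
    DecisionState.SHIPPED: {DecisionState.ACTIVE, DecisionState.STALE},
    DecisionState.ACTIVE: {DecisionState.STALE, DecisionState.SUPERSEDED,
                           DecisionState.DEPRECATED},
    DecisionState.STALE: {DecisionState.ACTIVE, DecisionState.SUPERSEDED,
                          DecisionState.DEPRECATED},
    DecisionState.SUPERSEDED: set(),
    DecisionState.DEPRECATED: set(),
}

def can_transition(src: DecisionState, dst: DecisionState) -> bool:
    return dst in TRANSITIONS[src]
```

A stale decision can become active again (for example, after a waive extends its validity), while superseded and deprecated are terminal in this sketch.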
Evidence and R_eff
Every decision can have evidence attached — test results, benchmarks, user feedback, measurements. Each evidence item has:
- Verdict — supports, weakens, or refutes the decision
- Congruence Level (CL) — how relevant is this evidence to this context?
  - CL3: Same project, same context — full weight
  - CL2: Similar project, similar stack — 0.1 penalty
  - CL1: Different context — 0.4 penalty
  - CL0: Opposed context — 0.9 penalty
- Expiry date — when this evidence is no longer trustworthy
R_eff (effective reliability) is computed as the minimum across all evidence scores, with CL penalties applied. This is the weakest link principle: the chain is as strong as its weakest link.
- R_eff ≥ 0.5 — healthy, decision is trustworthy
- R_eff < 0.5 — degraded, needs review
- R_eff < 0.3 — AT RISK, needs immediate attention
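A minimal sketch of the weakest-link computation, assuming CL penalties are subtracted from each evidence item's base reliability score and expired evidence is excluded (the exact scoring formula quint-code uses may differ):

```python
from dataclasses import dataclass
from datetime import date

# CL penalties from the list above (assumed to subtract from the base score).
CL_PENALTY = {3: 0.0, 2: 0.1, 1: 0.4, 0: 0.9}

@dataclass
class Evidence:
    reliability: float  # base trust score in [0, 1]
    cl: int             # congruence level, 0..3
    expires: date       # evidence is ignored after this date

def r_eff(evidence: list[Evidence], today: date) -> float:
    """Weakest link: minimum over non-expired evidence, after CL penalties."""
    scores = [
        max(0.0, e.reliability - CL_PENALTY[e.cl])
        for e in evidence
        if e.expires >= today
    ]
    return min(scores) if scores else 0.0

def health(r: float) -> str:
    if r >= 0.5:
        return "healthy"
    if r >= 0.3:
        return "degraded"
    return "AT RISK"
```

For example, strong CL3 evidence (0.9) combined with CL2 evidence at 0.8 yields R_eff = min(0.9, 0.8 − 0.1) = 0.7: the penalized item is the weakest link.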
Code drift detection
After a decision is implemented, the agent snapshots the hashes of the affected files as a baseline. On every /q-refresh scan, quint-code recomputes the hashes and detects changes:
- MODIFIED — file changed since baseline (may or may not invalidate the decision)
- FILE MISSING — file was deleted or moved (likely needs decision update)
- No drift — file unchanged
Drift is a signal, not an automatic invalidation. The agent reads the diff and judges whether the change is material (broke an invariant) or cosmetic (comments, formatting).
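The snapshot-and-compare step can be sketched like this, using SHA-256 (quint-code's actual hash algorithm and baseline storage are not specified here):

```python
import hashlib
from pathlib import Path

def snapshot(paths: list[str]) -> dict[str, str]:
    """Baseline: hash each affected file at implementation time."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_drift(baseline: dict[str, str]) -> dict[str, str]:
    """Re-hash each file in the baseline and classify it."""
    report = {}
    for path, old_hash in baseline.items():
        f = Path(path)
        if not f.exists():
            report[path] = "FILE MISSING"   # deleted or moved
        elif hashlib.sha256(f.read_bytes()).hexdigest() != old_hash:
            report[path] = "MODIFIED"       # changed since baseline
        else:
            report[path] = "no drift"       # unchanged
    return report
```

Note that hashing only answers "did this change?"; deciding whether a MODIFIED file actually invalidates the decision is the agent's judgment call, as described above.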
Module coverage
/q-status shows module coverage — which parts of your codebase are governed by
decisions and which are blind spots:
## Module Coverage (9 modules, 77% governed)
✓ src/internal/artifact — 3 decisions
✓ src/internal/codebase — 2 decisions
✗ src/assurance — no decisions (blind)
✗ src/cmd/indexer — no decisions (blind)
Blind modules are parts of your architecture with no formal engineering decisions. Not necessarily bad — but worth knowing about.
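Coverage itself is just a count of modules with at least one decision. A sketch (module names and the data shape are illustrative):

```python
def module_coverage(decisions_per_module: dict[str, int]) -> tuple[list[str], float]:
    """Return the blind modules and the governed fraction."""
    blind = [m for m, n in decisions_per_module.items() if n == 0]
    governed = 1 - len(blind) / len(decisions_per_module)
    return blind, governed
```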
Refreshing decisions
/q-refresh supports the following actions:
| Action | What it does | When to use |
|---|---|---|
| scan | Find all stale artifacts + code drift | Routine health check |
| waive | Extend validity with justification | Decision still valid, just expired |
| reopen | Start new problem cycle from old decision | Conditions changed, need to reconsider |
| supersede | Replace with a different artifact | New decision replaces old one |
| deprecate | Archive as no longer relevant | Decision obsolete, nothing replaces it |
| reconcile | Find notes that overlap with decisions | Clean up duplicate knowledge |
The two cycles
Anatoly Levenchuk's systems engineering methodology describes two interconnected cycles that feed into each other:
The observation cycle
Notice what changed → characterize the situation → measure → identify problems → pick the right problem to work on.
quint-code supports this with:
- Drift detection — watches for code changes under existing decisions
- R_eff degradation — evidence expires, trust scores drop, stale decisions surface
- Module coverage — shows which parts of the architecture have no decisions (blind spots)
- Cross-project recall — surfaces related decisions from other projects when framing new problems
- Goldilocks problem selection — /q-problems shows readiness signals to help pick the right problem
The decision cycle
Define comparison criteria → generate variants → fair comparison → select → implement → measure impact → feed results back into the observation cycle.
quint-code supports this with:
- Frame → Char → Explore → Compare → Decide — the full command cycle with focused prompts at each step
- Adversarial verification — challenges decisions before recording
- Baseline + Measure — snapshots implementation, records whether acceptance criteria were met
- Evidence supersession — newer measurements replace older ones, R_eff reflects current state
- Failed measurement → reopen — when implementation doesn't meet criteria, the system suggests reopening the problem
Closing the loop
The two cycles are connected: the results of implementing a decision (measurements, evidence, drift) become inputs to the observation cycle. Stale decisions trigger re-evaluation. Failed implementations create new problems. The goal is that this happens naturally — you don't need to remember to check, the system surfaces what needs attention.
This isn't fully automatic today. Some steps require the agent to be proactive, and some require your judgment. But each release adds more mechanisms to close the loop tighter.
Next
- All commands — complete reference
- Key concepts — R_eff, CL, WLNK explained