Automation
Automation in FactHarbor amplifies human capability but never replaces human oversight.
All automated outputs require human review before publication.
This chapter defines:
- What must remain human-only
- What AI (AKEL) can draft
- What can be fully automated
- How automation evolves through POC → Beta 0 → Release 1.0
Manual vs Automated Responsibilities
Human-Only Tasks
These tasks require human judgment, ethical reasoning, or contextual interpretation:
- Definition of key terms in claims
- Approval or rejection of scenarios
- Interpretation of evidence in context
- Final verdict approval
- Governance decisions and dispute resolution
- High-risk domain oversight
- Ethical boundary decisions (especially medical, political, psychological)
Semi-Automated (AI Draft → Human Review)
AKEL can draft these, but humans must refine and approve them (a minimal sketch of the draft-and-review workflow follows this list):
- Scenario structures (definitions, assumptions, context)
- Evaluation methods
- Evidence relevance suggestions
- Reliability hints
- Verdict reasoning chains
- Uncertainty and limitations
- Scenario comparison explanations
- Suggestions for merging or splitting scenarios
- Draft public summaries
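The defining property of this tier is that AKEL output stays a draft until a human acts on it. The sketch below shows one way such a draft-and-review record could look; the names (Draft, DraftStatus, approve, publishable) and the status values are illustrative assumptions, not FactHarbor's actual data model.

```python
# Minimal sketch of the AI-draft -> human-review workflow (illustrative names only).
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class DraftStatus(Enum):
    AKEL_DRAFTED = "akel_drafted"      # produced by AKEL, not yet reviewed
    HUMAN_APPROVED = "human_approved"  # reviewer accepted, possibly after edits
    HUMAN_REJECTED = "human_rejected"  # reviewer discarded the draft


@dataclass
class Draft:
    kind: str                          # e.g. "scenario_structure", "verdict_reasoning"
    content: str
    status: DraftStatus = DraftStatus.AKEL_DRAFTED
    reviewer: Optional[str] = None

    def approve(self, reviewer: str, edited_content: Optional[str] = None) -> None:
        # Approval may carry human edits; either way the reviewer is recorded.
        if edited_content is not None:
            self.content = edited_content
        self.reviewer = reviewer
        self.status = DraftStatus.HUMAN_APPROVED

    def reject(self, reviewer: str) -> None:
        self.reviewer = reviewer
        self.status = DraftStatus.HUMAN_REJECTED


def publishable(draft: Draft) -> bool:
    # Nothing in this tier reaches publication without explicit human approval.
    return draft.status is DraftStatus.HUMAN_APPROVED
```

The publishable check is the single gate that enforces the rule stated at the start of this chapter: no automated output is published without human review.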
Fully Automated Structural Tasks
These tasks require no human interpretation while they run, although their outputs, like all automated outputs, remain subject to review before publication:
- Claim normalization
- Duplicate & cluster detection (vector embeddings)
- Evidence metadata extraction
- Basic reliability heuristics
- Contradiction detection
- Re-evaluation triggers
- Batch layout generation (diagrams, summaries)
- Federation integrity checks
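As a concrete illustration of the embedding-based duplicate check, the sketch below compares a new claim's vector against already-stored claim vectors using cosine similarity. The embedding step is assumed to happen elsewhere (e.g. a sentence-embedding model), and the claim-ID dictionary and the 0.9 threshold are illustrative assumptions, not FactHarbor's actual pipeline.

```python
# Minimal sketch of embedding-based duplicate detection (illustrative threshold).
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_near_duplicates(new_vec: np.ndarray,
                         existing: dict[str, np.ndarray],
                         threshold: float = 0.9) -> list[str]:
    """Return IDs of stored claims whose embedding is close enough to the
    new claim's embedding to be flagged as a potential duplicate."""
    return [claim_id for claim_id, vec in existing.items()
            if cosine_similarity(new_vec, vec) >= threshold]
```

Clustering works the same way in principle: claims whose pairwise similarity exceeds the threshold are grouped together.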
Automation Roadmap
The degree of automation increases as the platform matures from POC through Beta 0 to Release 1.0.
POC (Low Automation)
Automated
- Claim normalization (see the sketch after this subsection)
- Light scenario templates
- Evidence metadata extraction
- Simple verdict drafts (internal only)
Human
- All scenario definitions
- Evidence interpretation
- Verdict creation
- Governance
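At POC stage, claim normalization can be as simple as deterministic text cleanup. The sketch below is a minimal, rule-based version; the specific rules (Unicode normalization, case-folding, whitespace collapsing, punctuation cleanup) are illustrative assumptions, not FactHarbor's actual rules.

```python
# Minimal rule-based claim normalization (illustrative rules only).
import re
import unicodedata


def normalize_claim(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)  # unify Unicode representations
    text = text.strip().lower()                 # trim and case-fold
    text = re.sub(r"[“”]", '"', text)           # straighten curly quotes
    text = re.sub(r"\s+", " ", text)            # collapse runs of whitespace
    return text.rstrip(".!?")                   # drop trailing sentence punctuation


# Two surface forms of the same claim map to one normalized form.
assert normalize_claim("  The Earth  is FLAT. ") == normalize_claim("the earth is flat")
```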
Beta 0 (Medium Automation)
Automated
- Detailed scenario drafts
- Evidence reliability scoring (see the sketch after this subsection)
- Cross-scenario comparisons
- Contradiction detection (local + remote nodes)
- Internal Truth Landscape drafts
Human
- Scenario approval
- Final verdict validation
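One plausible shape for the automated reliability score is a weighted heuristic over a few signals such as source type, recency, and corroboration. The sketch below is exactly that kind of heuristic; the signal set, category scores, and weights are assumptions for illustration, not FactHarbor's actual model.

```python
# Minimal sketch of a heuristic evidence-reliability hint (illustrative weights).
from datetime import date
from typing import Optional

SOURCE_TYPE_SCORE = {
    "peer_reviewed": 0.9,
    "government": 0.8,
    "news_outlet": 0.6,
    "blog": 0.3,
    "social_media": 0.2,
}


def reliability_hint(source_type: str,
                     published: date,
                     corroborating_sources: int,
                     today: Optional[date] = None) -> float:
    base = SOURCE_TYPE_SCORE.get(source_type, 0.4)      # unknown types get a neutral-low base
    age_years = ((today or date.today()) - published).days / 365.0
    recency = max(0.0, 1.0 - 0.05 * age_years)          # mild decay of ~5% per year
    corroboration = min(1.0, 0.5 + 0.1 * corroborating_sources)
    return round(0.5 * base + 0.2 * recency + 0.3 * corroboration, 2)
```

Whatever the exact weights, the output is only a hint: a human reviewer validates or overrides it before it influences a verdict.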
Release 1.0 (High Automation)
Automated
- Full scenario generation (definitions, assumptions, boundaries)
- Evidence relevance scoring and ranking
- Bayesian verdict scoring across scenario sets (a worked sketch follows this subsection)
- Multi-scenario summary generation
- Anomaly detection across nodes
- AKEL-assisted federated synchronization
Human
- Final approval of all scenarios and verdicts
- Ethical decisions
- Oversight and conflict resolution
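Bayesian scoring over a scenario set can be stated compactly: each scenario starts with a prior, each evidence item contributes a likelihood per scenario, and the posterior is the normalized product. The sketch below works one such example; the scenario names, priors, and likelihood values are illustrative assumptions, and the posterior is only a draft signal pending human approval.

```python
# Minimal sketch of Bayesian scoring across a scenario set (illustrative numbers).

def posterior_over_scenarios(priors: dict[str, float],
                             likelihoods: list[dict[str, float]]) -> dict[str, float]:
    """Bayes' rule: posterior(s) is proportional to prior(s) times the product
    of P(evidence_i | s) over all evidence items, normalized across scenarios."""
    scores = dict(priors)
    for evidence in likelihoods:
        for scenario in scores:
            scores[scenario] *= evidence[scenario]
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}


# Two hypothetical scenarios for one claim, scored against two evidence items.
priors = {"scenario_A": 0.5, "scenario_B": 0.5}
evidence_likelihoods = [
    {"scenario_A": 0.8, "scenario_B": 0.3},   # evidence item 1
    {"scenario_A": 0.6, "scenario_B": 0.4},   # evidence item 2
]
print(posterior_over_scenarios(priors, evidence_likelihoods))
# -> roughly {'scenario_A': 0.8, 'scenario_B': 0.2}
```

In this toy example the two evidence items shift an even prior to roughly 0.8 versus 0.2 in favor of scenario A; the human reviewer still decides whether that posterior becomes a verdict.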
Automation Levels
Level 0 — Human-Centric (POC)
AI is purely advisory; nothing is auto-published.
Level 1 — Assisted (Beta 0)
AI drafts structures; humans approve each part.
Level 2 — Structured (Release 1.0)
AI produces near-complete drafts; humans refine.
Level 3 — Distributed Intelligence (Future)
Nodes exchange embeddings, contradiction alerts, and scenario templates.
Humans still approve everything.
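The levels above can be expressed as a simple configuration gate: the level determines which task outputs may be applied without per-item review, while scenario and verdict approval stay human at every level. The sketch below shows one way to encode that; the task names and the gating table are illustrative assumptions, not FactHarbor's actual configuration.

```python
# Minimal sketch of gating automated tasks by automation level (illustrative table).
from enum import IntEnum


class AutomationLevel(IntEnum):
    HUMAN_CENTRIC = 0             # POC: AI purely advisory
    ASSISTED = 1                  # Beta 0: AI drafts, humans approve each part
    STRUCTURED = 2                # Release 1.0: near-complete drafts, humans refine
    DISTRIBUTED_INTELLIGENCE = 3  # Future: cross-node exchange, humans still approve


# Minimum level at which a task's output may be applied without per-item review.
# Scenario validity and verdict approval are deliberately absent: always human.
AUTO_APPLY_FROM_LEVEL = {
    "claim_normalization": AutomationLevel.ASSISTED,
    "duplicate_detection": AutomationLevel.ASSISTED,
    "evidence_metadata_extraction": AutomationLevel.ASSISTED,
    "scenario_draft_generation": AutomationLevel.STRUCTURED,
}


def may_auto_apply(task: str, level: AutomationLevel) -> bool:
    required = AUTO_APPLY_FROM_LEVEL.get(task)
    return required is not None and level >= required
```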
Automation Matrix
Always Human
- Final verdict approval
- Scenario validity
- Ethical decisions
- Dispute resolution
Mostly AI (Human Validation Needed)
- Claim normalization
- Clustering
- Evidence metadata
- Reliability heuristics
- Scenario drafts
- Contradiction detection
Mixed
- Definitions of ambiguous terms
- Boundary choices
- Assumption evaluation
- Evidence selection
- Verdict reasoning