Wiki source code of Automation
Version 3.1 by Robert Schaub on 2025/12/12 08:32
= Automation =

Automation in FactHarbor amplifies human capability but never replaces human oversight.
All automated outputs require human review before publication.

This chapter defines:
* What must remain human-only
* What AI (AKEL) can draft
* What can be fully automated
* How automation evolves through POC → Beta 0 → Release 1.0

== POC v1 (Fully Automated "Text to Truth Landscape") ==

The goal of POC v1 is to validate the automated reasoning capabilities of the data model without human intervention.

=== Workflow ===

1. **Input**: User pastes a block of raw text.
2. **Deep Analysis (Background)**: The system autonomously performs the full pipeline **before** displaying the text:
** Extraction & Normalization
** Scenario & Sub-query generation
** Evidence retrieval & Verdict computation
3. **Visualization (Extraction & Marking)**: The system displays the text with claims extracted and marked.
** **Verdict-Based Coloring**: The extraction highlights (e.g. orange/green) are chosen **according to the computed verdict** for each claim.
4. **Inspection**: User clicks a highlighted claim to see the **Reasoning Trail**, showing exactly which evidence and sub-queries led to that verdict.

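The four steps above can be sketched as a minimal pipeline. This is an illustrative toy, not the FactHarbor implementation: the function names, the sentence-splitting "extraction", and the verdict heuristic are all invented placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    verdict: str = "unverified"                 # e.g. "supported", "contested"
    trail: list = field(default_factory=list)   # reasoning trail behind the verdict

# Verdict-based highlight colors (illustrative mapping)
COLORS = {"supported": "green", "contested": "orange", "unverified": "grey"}

def extract_claims(text: str) -> list[Claim]:
    # Placeholder extraction: treat each sentence as one normalized claim.
    return [Claim(s.strip()) for s in text.split(".") if s.strip()]

def analyze(claim: Claim) -> Claim:
    # Placeholder for scenario generation, evidence retrieval, verdict computation.
    claim.trail.append("sub-query placeholder")
    claim.verdict = "supported" if "is" in claim.text else "contested"
    return claim

def render(claims: list[Claim]) -> str:
    # Step 3: mark each claim with its verdict-based color.
    return " ".join(f"[{COLORS[c.verdict]}]{c.text}[/]" for c in claims)

claims = [analyze(c) for c in extract_claims("Water is wet. The moon hosts a colony.")]
print(render(claims))  # [green]Water is wet[/] [orange]The moon hosts a colony[/]
```

Clicking a highlight in the real UI would then surface `claim.trail`, the step-4 Reasoning Trail.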
=== Technical Scope ===

* **Fully Automated**: No human-in-the-loop for this phase.
* **Structured Sub-Queries**: Logic is generated by decomposing claims into the FactHarbor data model.
* **Latency**: v1 prioritizes reasoning accuracy over real-time speed.

----

== Manual vs Automated Responsibilities ==

=== Human-Only Tasks ===

These require human judgment, ethics, or contextual interpretation:

* Definition of key terms in claims
* Approval or rejection of scenarios
* Interpretation of evidence in context
* Final verdict approval
* Governance decisions and dispute resolution
* High-risk domain oversight
* Ethical boundary decisions (especially medical, political, psychological)

=== Semi-Automated (AI Draft → Human Review) ===

AKEL can draft these, but humans must refine and approve:

* Scenario structures (definitions, assumptions, context)
* Evaluation methods
* Evidence relevance suggestions
* Reliability hints
* Verdict reasoning chains
* Uncertainty and limitations
* Scenario comparison explanations
* Suggestions for merging or splitting scenarios
* Draft public summaries

=== Fully Automated Structural Tasks ===

These require no human interpretation:

* Claim normalization
* Duplicate & cluster detection (vector embeddings)
* Evidence metadata extraction
* Basic reliability heuristics
* Contradiction detection
* Re-evaluation triggers
* Batch layout generation (diagrams, summaries)
* Federation integrity checks

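As one illustration, duplicate and cluster detection over vector embeddings can be reduced to a cosine-similarity threshold. This is a minimal sketch: the greedy clustering strategy, the toy 2-D embeddings, and the 0.95 threshold are assumptions for the example, not FactHarbor's actual model.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def cluster_duplicates(embeddings: dict[str, list[float]],
                       threshold: float = 0.95) -> list[set[str]]:
    """Greedy single-pass clustering: a claim joins the first cluster whose
    representative it matches above the threshold, else it starts a new cluster."""
    clusters: list[tuple[list[float], set[str]]] = []
    for claim_id, vec in embeddings.items():
        for rep, members in clusters:
            if cosine(vec, rep) >= threshold:
                members.add(claim_id)
                break
        else:
            clusters.append((vec, {claim_id}))
    return [members for _, members in clusters]

# Toy 2-D embeddings: c1 and c2 point the same way, c3 is orthogonal.
emb = {"c1": [1.0, 0.0], "c2": [0.99, 0.01], "c3": [0.0, 1.0]}
print(cluster_duplicates(emb))  # two clusters: {c1, c2} and {c3}
```

A production version would use approximate nearest-neighbor search instead of the quadratic scan, but the threshold decision is the same.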
== Automation Roadmap ==

Automation increases with maturity.

=== POC (Low Automation) ===
* **Automated**: Claim normalization, light scenario templates, metadata extraction, internal drafts.
* **Human**: All scenario definitions, evidence interpretation, verdict creation, governance.

=== Beta 0 (Medium Automation) ===
* **Automated**: Detailed scenario drafts, evidence reliability scoring, cross-scenario comparisons, contradiction detection.
* **Human**: Scenario approval, final verdict validation.

=== Release 1.0 (High Automation) ===
* **Automated**: Full scenario generation, evidence relevance ranking, Bayesian verdict scoring, anomaly detection, federation sync.
* **Human**: Final approval, ethical decisions, oversight.

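Bayesian verdict scoring, as targeted for Release 1.0, can be sketched as sequential odds updates, one likelihood ratio per piece of evidence. The numbers below are illustrative, not FactHarbor's calibrated model.

```python
def bayesian_verdict(prior: float, likelihood_ratios: list[float]) -> float:
    """Update P(claim is true) by multiplying the prior odds with each
    evidence likelihood ratio P(evidence | true) / P(evidence | false)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Neutral prior, two supporting pieces of evidence (LR > 1), one opposing (LR < 1).
posterior = bayesian_verdict(0.5, [4.0, 2.0, 0.5])
print(round(posterior, 3))  # 0.8
```

The resulting posterior is still only a draft score: per the roadmap, the final verdict remains subject to human approval.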
== Automation Levels ==

* **Level 0 — Human-Centric (POC)**: AI is purely advisory; nothing is auto-published.
* **Level 1 — Assisted (Beta 0)**: AI drafts structures; humans approve each part.
* **Level 2 — Structured (Release 1.0)**: AI produces near-complete drafts; humans refine.
* **Level 3 — Distributed Intelligence (Future)**: Nodes exchange embeddings and alerts; humans still approve.

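A deployment could encode these levels as an explicit publication gate. The names below are invented for the sketch; the level semantics follow the list above, and the key invariant is that every level still requires human approval to publish.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    HUMAN_CENTRIC = 0   # POC: AI is purely advisory
    ASSISTED = 1        # Beta 0: AI drafts, humans approve each part
    STRUCTURED = 2      # Release 1.0: near-complete drafts, humans refine
    DISTRIBUTED = 3     # Future: federated signals, humans still approve

def ai_draft_scope(level: AutomationLevel) -> str:
    # How much of the draft the AI may produce at each level.
    return {
        AutomationLevel.HUMAN_CENTRIC: "advisory notes only",
        AutomationLevel.ASSISTED: "structural drafts",
        AutomationLevel.STRUCTURED: "near-complete drafts",
        AutomationLevel.DISTRIBUTED: "drafts plus federated alerts",
    }[level]

def may_publish(level: AutomationLevel, human_approved: bool) -> bool:
    # At every level, publication requires human approval; the level only
    # widens the AI's drafting scope, never the right to publish.
    return human_approved

print(may_publish(AutomationLevel.STRUCTURED, human_approved=False))  # False
```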
== Automation Matrix ==

* **Always Human**: Final verdict, scenario validity, ethics, disputes.
* **Mostly AI**: Normalization, clustering, metadata, heuristics, alerts.
* **Mixed**: Definitions, boundaries, assumptions, reasoning.

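In code, the matrix amounts to a responsibility lookup with a safe default. The task identifiers are illustrative; the routing values mirror the three rows above.

```python
# Responsibility routing per the matrix above (task names illustrative).
MATRIX = {
    "final_verdict": "human", "scenario_validity": "human",
    "ethics": "human", "disputes": "human",
    "normalization": "ai", "clustering": "ai", "metadata": "ai",
    "heuristics": "ai", "alerts": "ai",
    "definitions": "mixed", "boundaries": "mixed",
    "assumptions": "mixed", "reasoning": "mixed",
}

def route(task: str) -> str:
    # Unknown tasks default to human review, the safe fallback.
    return MATRIX.get(task, "human")

print(route("clustering"), route("definitions"), route("unlisted_task"))
# ai mixed human
```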
== Diagram References ==

{{include reference="FactHarbor.Specification.Diagrams.Automation Roadmap.WebHome"/}}

{{include reference="FactHarbor.Specification.Diagrams.Automation Level.WebHome"/}}

{{include reference="FactHarbor.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}