Changes for page Automation
Last modified by Robert Schaub on 2025/12/24 20:34
Summary

* Page properties (1 modified, 0 added, 0 removed)

Details: Page properties (Content)
= Automation =

Automation in FactHarbor amplifies human capability while implementing risk-based oversight.

This chapter defines:

* Risk-based publication model
* Quality gates for AI-generated content
* What must remain human-only
* What AI (AKEL) can draft and publish
* What can be fully automated
* How automation evolves through POC → Beta 0 → Release 1.0

== POC v1 (AI-Generated Publication Demonstration) ==

The goal of POC v1 is to validate the automated reasoning capabilities and demonstrate AI-generated content publication.

=== Workflow ===

1. **Input**: User pastes a block of raw text.
1. **Deep Analysis (Background)**: The system autonomously performs the full pipeline **before** displaying the text:
* Extraction & Normalisation
* Scenario & Sub-query generation
* Evidence retrieval with **contradiction search**
* Quality gate validation
* Verdict computation
1. **Visualisation (Extraction & Marking)**: The system displays the text with claims extracted and marked.
* **Verdict-Based Coloring**: The extraction highlights (e.g. Orange/Green) are chosen **according to the computed verdict** for each claim.
* **AI-Generated Label**: Clear indication that content is AI-produced.
1. **Inspection**: User clicks a highlighted claim to see the **Reasoning Trail**, showing exactly which evidence and sub-queries led to that verdict.
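The background pipeline above can be sketched as a single analysis pass that runs before anything is rendered. Everything below is illustrative, not the FactHarbor API: the `Claim` shape, function names, and the placeholder verdict rule are assumptions standing in for real evidence scoring.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    verdict: str = "unverified"                      # drives the highlight colour per claim
    trail: list[str] = field(default_factory=list)   # reasoning trail shown on click

def analyse(raw_text: str) -> list[Claim]:
    """Sketch of the POC v1 background pipeline (runs before display)."""
    # 1. Extraction & normalisation (naive sentence split, for illustration only)
    claims = [Claim(s.strip()) for s in raw_text.split(".") if s.strip()]
    for claim in claims:
        # 2. Sub-query generation, including the mandatory contradiction search
        claim.trail.append(f"sub-query: evidence FOR '{claim.text}'")
        claim.trail.append(f"sub-query: evidence AGAINST '{claim.text}'")
        # 3-5. Evidence retrieval, quality gates, verdict computation are stubbed:
        # a placeholder rule stands in for real evidence-based scoring.
        claim.verdict = "supported" if len(claim.text.split()) > 3 else "uncertain"
    return claims

COLOURS = {"supported": "green", "uncertain": "orange", "unverified": "grey"}

for c in analyse("Water boils at 100 degrees Celsius at sea level. Cats fly."):
    print(c.text, "->", c.verdict, f"({COLOURS[c.verdict]})")
```

The point of the sketch is the ordering: verdicts and trails exist before visualisation, so the UI only maps verdicts to colours and reveals the stored trail on click.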
=== Technical Scope ===

* **AI-Generated Publication**: Content published as Mode 2 (AI-Generated, no prior human review)
* **Quality Gates Active**: All automated quality checks enforced
* **Contradiction Search Demonstrated**: Shows counter-evidence and reservation detection
* **Risk Tier Classification**: POC shows tier assignment (demo purposes)
* **No Human Approval Gate**: Demonstrates scalable AI publication
* **Structured Sub-Queries**: Logic generated by decomposing claims into the FactHarbor data model

----

== Publication Model ==

FactHarbor implements a risk-based publication model with three modes:

=== Mode 1: Draft-Only ===

* Failed quality gates
* High-risk content pending expert review
* Internal review queue only

=== Mode 2: AI-Generated (Public) ===

* Passed all quality gates
* Risk tier B or C
* Clear AI-generated labeling
* Users can request human review

=== Mode 3: Human-Reviewed ===

* Validated by human reviewers/experts
* "Human-Reviewed" status badge
* Required for Tier A content publication

See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed publication mode descriptions.

----

== Risk Tiers and Automation Levels ==

=== Tier A (High Risk) ===

* **Domains**: Medical, legal, elections, safety, security
* **Automation**: AI can draft; human review required for "Human-Reviewed" status
* **AI publication**: Allowed with prominent disclaimers and warnings
* **Audit rate**: Recommended 30-50%

=== Tier B (Medium Risk) ===

* **Domains**: Complex policy, science, causality claims
* **Automation**: AI can draft and publish (Mode 2)
* **Human review**: Optional, audit-based
* **Audit rate**: Recommended 10-20%

=== Tier C (Low Risk) ===

* **Domains**: Definitions, established facts, historical data
* **Automation**: AI publication by default
* **Human review**: On request or via sampling
* **Audit rate**: Recommended 5-10%

----

== Human-Only Tasks ==

These require human judgment and cannot be automated:

* **Ethical boundary decisions** (especially medical, political, psychological harm assessment)
* **Dispute resolution** between conflicting expert opinions
* **Governance policy** setting and enforcement
* **Final authority** on Tier A "Human-Reviewed" status
* **Audit system oversight** and quality standard definition
* **Risk tier policy** adjustments based on societal context

----

== AI-Draft with Audit (Semi-Automated) ==

AKEL drafts these; humans validate via sampling audits:

* **Scenario structures** (definitions, assumptions, context)
* **Evaluation methods** and reasoning chains
* **Evidence relevance** assessment and ranking
* **Reliability scoring** and source evaluation
* **Verdict reasoning** with uncertainty quantification
* **Contradiction and reservation** identification
* **Scenario comparison** explanations
* **Public summaries** and accessibility text

Most Tier B and C content remains in AI-draft status unless:

* Users request human review
* Audits identify errors
* High engagement triggers review
* Community flags issues

----

== Fully Automated Structural Tasks ==

These require no human interpretation:

* **Claim normalization** (canonical form generation)
* **Duplicate detection** (vector embeddings, clustering)
* **Evidence metadata extraction** (dates, authors, publication info)
* **Basic reliability heuristics** (source reputation scoring)
* **Contradiction detection** (conflicting statements across sources)
* **Re-evaluation triggers** (new evidence, source updates)
* **Layout generation** (diagrams, summaries, UI presentation)
* **Federation integrity checks** (cross-node data validation)

----

== Quality Gates (Automated) ==

Before AI-draft publication (Mode 2), content must pass:

1. **Source Quality Gate**
 * Primary sources verified
 * Citations complete and accessible
 * Source reliability scored

2. **Contradiction Search Gate** (MANDATORY)
 * Counter-evidence actively sought
 * Reservations and limitations identified
 * Bubble detection (echo chambers, conspiracy theories)
 * Diverse perspective verification

3. **Uncertainty Quantification Gate**
 * Confidence scores calculated
 * Limitations stated
 * Data gaps disclosed

4. **Structural Integrity Gate**
 * No hallucinations detected
 * Logic chain valid
 * References verifiable

See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed quality gate specifications.

----
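Together, the quality gates and risk tiers determine a publication mode. The sketch below is one possible mapping consistent with the rules in this chapter; the gate names mirror the list above, but the function, its signature, and the Tier A "with warnings" label are illustrative assumptions, not specified behaviour.

```python
# Gate names follow the Quality Gates section; the checks themselves are external.
GATES = ("source_quality", "contradiction_search", "uncertainty", "structural_integrity")

def publication_mode(gate_results: dict, tier: str, human_reviewed: bool = False) -> str:
    """Map quality-gate outcomes and risk tier to one of the three publication modes."""
    if not all(gate_results.get(g, False) for g in GATES):
        return "Mode 1: Draft-Only"            # any failed gate keeps content in draft
    if human_reviewed:
        return "Mode 3: Human-Reviewed"        # validated by human reviewers/experts
    if tier == "A":
        # Tier A may be AI-published, but only with prominent disclaimers.
        return "Mode 2: AI-Generated (Public, with warnings)"
    return "Mode 2: AI-Generated (Public)"     # Tier B/C with all gates passed

passed = {g: True for g in GATES}
print(publication_mode(passed, tier="B"))
print(publication_mode({**passed, "contradiction_search": False}, tier="C"))
```

Note that a failed gate dominates everything else: even Tier C content cannot reach Mode 2 without passing the mandatory contradiction search.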
== Audit System ==

Instead of reviewing all AI output, systematic sampling audits ensure quality:

=== Stratified Sampling ===

Sampling is stratified by:

* Risk tier (A > B > C sampling rates)
* Confidence scores (low confidence → more audits)
* Traffic/engagement (popular content audited more)
* Novelty (new topics/claim types prioritized)
* User flags and disagreement signals

=== Continuous Improvement Loop ===

Audit findings improve:

* Query templates
* Source reliability weights
* Contradiction detection algorithms
* Risk tier assignment rules
* Bubble detection heuristics

=== Transparency ===

* Audit statistics published
* Accuracy rates by tier reported
* System improvements documented

----

== Automation Roadmap ==

Automation capabilities increase with system maturity while maintaining quality oversight.

=== POC (Current Focus) ===

**Automated:**
* Claim normalization
* Scenario template generation
* Evidence metadata extraction
* Simple verdict drafts
* **AI-generated publication** (Mode 2, with quality gates)
* **Contradiction search**
* **Risk tier assignment**

**Human:**
* High-risk content validation (Tier A)
* Sampling audits across all tiers
* Quality standard refinement
* Governance decisions

=== Beta 0 (Enhanced Automation) ===
**Automated:**
* Detailed scenario generation
* Advanced evidence reliability scoring
* Cross-scenario comparisons
* Multi-source contradiction detection
* Internal Truth Landscape generation
* **Increased AI-draft coverage** (more Tier B content)

**Human:**
* Tier A final approval
* Audit sampling (continued)
* Expert validation of complex domains
* Quality improvement oversight

=== Release 1.0 (High Automation) ===

**Automated:**
* Full scenario generation (comprehensive)
* Bayesian verdict scoring across scenarios
* Multi-scenario summary generation
* Anomaly detection across federated nodes
* AKEL-assisted cross-node synchronization
* **Most Tier B and all Tier C** auto-published

**Human:**
* Tier A oversight (still required)
* Strategic audits (lower sampling rates, higher value)
* Ethical decisions and policy
* Conflict resolution

----

== Automation Levels Diagram ==

{{include reference="Test.FactHarborV09.Specification.Diagrams.Automation Level.WebHome"/}}

----

== Automation Roadmap Diagram ==

{{include reference="Test.FactHarborV09.Specification.Diagrams.Automation Roadmap.WebHome"/}}

----

== Manual vs Automated Matrix ==

{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}

----

== Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
* [[Governance>>FactHarbor.Organisation.Governance]]
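The stratified sampling described in the Audit System section can be sketched as a per-item audit probability. The tier base rates below are midpoints of the recommended ranges in the Risk Tiers section; the confidence and flag weights are illustrative assumptions, not specified values.

```python
import random

# Midpoints of the recommended audit ranges (Tier A: 30-50%, B: 10-20%, C: 5-10%)
BASE_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}

def audit_probability(tier: str, confidence: float, user_flags: int) -> float:
    """Combine risk tier, model confidence, and user flags into a sampling probability."""
    p = BASE_RATE[tier]
    p += (1.0 - confidence) * 0.2   # low-confidence output is audited more (assumed weight)
    p += min(user_flags, 5) * 0.05  # each user flag adds 5 points, capped (assumed weight)
    return min(p, 1.0)

def select_for_audit(tier: str, confidence: float, user_flags: int, rng=random.random) -> bool:
    """Draw one Bernoulli sample to decide whether this item enters the audit queue."""
    return rng() < audit_probability(tier, confidence, user_flags)

print(audit_probability("C", confidence=0.9, user_flags=0))  # near the Tier C base rate
print(audit_probability("A", confidence=0.4, user_flags=2))  # high-risk, low-confidence item
```

The capped, additive form keeps the ordering A > B > C intact while still letting low confidence or community flags pull any item into an audit.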