Changes for page Automation
Last modified by Robert Schaub on 2025/12/24 20:34
From version 7.3, edited by Robert Schaub on 2025/12/16 20:26
Change comment: Update document after refactoring.
Summary
Page properties (2 modified, 0 added, 0 removed)
Details
Page properties

Parent
@@ -1,1 +1,1 @@
-FactHarbor.Archive.FactHarbor V0\.9\.18.Specification.WebHome
+FactHarbor.Specification.WebHome

Content
@@ -1,18 +1,17 @@
 = Automation =
 
-Automation in FactHarbor amplifies human capability while implementing risk-based oversight.
+Automation in FactHarbor amplifies human capability but never replaces human oversight.
+All automated outputs require human review before publication.
 
 This chapter defines:
-* Risk-based publication model
-* Quality gates for AI-generated content
 * What must remain human-only
-* What AI (AKEL) can draft and publish
+* What AI (AKEL) can draft
 * What can be fully automated
 * How automation evolves through POC → Beta 0 → Release 1.0
 
-== POC v1 (AI-Generated Publication Demonstration) ==
+== POC v1 (Fully Automated "Text to Truth Landscape") ==
 
-The goal of POC v1 is to validate the automated reasoning capabilities and demonstrate AI-generated content publication.
+The goal of POC v1 is to validate the automated reasoning capabilities of the data model without human intervention.
 
 === Workflow ===
 
@@ -20,252 +20,93 @@
 1. **Deep Analysis (Background)**: The system autonomously performs the full pipeline **before** displaying the text:
 * Extraction & Normalisation
 * Scenario & Sub-query generation
-* Evidence retrieval with **contradiction search**
-* Quality gate validation
-* Verdict computation
+* Evidence retrieval & Verdict computation
 1. **Visualisation (Extraction & Marking)**: The system displays the text with claims extracted and marked.
 * **Verdict-Based Coloring**: The extraction highlights (e.g. Orange/Green) are chosen **according to the computed verdict** for each claim.
-* **AI-Generated Label**: Clear indication that content is AI-produced
 1. **Inspection**: User clicks a highlighted claim to see the **Reasoning Trail**, showing exactly which evidence and sub-queries led to that verdict.
 
 === Technical Scope ===
 
-* **AI-Generated Publication**: Content published as Mode 2 (AI-Generated, no prior human review)
-* **Quality Gates Active**: All automated quality checks enforced
-* **Contradiction Search Demonstrated**: Shows counter-evidence and reservation detection
-* **Risk Tier Classification**: POC shows tier assignment (demo purposes)
-* **No Human Approval Gate**: Demonstrates scalable AI publication
-* **Structured Sub-Queries**: Logic generated by decomposing claims into the FactHarbor data model
+* **Fully Automated**: No human-in-the-loop for this phase.
+* **Structured Sub-Queries**: Logic is generated by decomposing claims into the FactHarbor data model.
+* **Latency**: Focus on accuracy of reasoning over real-time speed for v1.
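The deep-analysis workflow above keeps the same stage order in both revisions: extract and normalise claims, generate scenarios and sub-queries, retrieve evidence, compute a verdict, then colour each highlight from that verdict and expose the Reasoning Trail on click. A minimal sketch of that flow, assuming hypothetical class and function names since neither revision defines an API here:

{{code language="python"}}
# Illustrative sketch only -- class and function names are assumptions,
# not the actual FactHarbor data model or AKEL API.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str
    supports: bool                      # True = supporting, False = contradicting/reservation

@dataclass
class Claim:
    text: str
    normalized: str = ""
    sub_queries: list[str] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)
    verdict: str = "unverified"

VERDICT_COLORS = {"supported": "green", "contested": "orange", "unverified": "grey"}

def retrieve_evidence(sub_queries: list[str]) -> list[Evidence]:
    # Placeholder retrieval; a real node would query its indexed sources here.
    return [Evidence(source="example.org", supports=True)]

def deep_analysis(claim: Claim) -> Claim:
    """Step 1: runs in the background, before anything is displayed."""
    claim.normalized = " ".join(claim.text.split())             # extraction & normalisation
    claim.sub_queries = [f"evidence for: {claim.normalized}"]   # scenario & sub-query generation
    claim.evidence = retrieve_evidence(claim.sub_queries)       # evidence retrieval
    supporting = sum(1 for e in claim.evidence if e.supports)
    contradicting = len(claim.evidence) - supporting
    claim.verdict = "supported" if supporting > contradicting else "contested"  # verdict computation
    return claim

def render_highlight(claim: Claim) -> str:
    """Step 2: visualisation -- highlight colour follows the computed verdict."""
    return f"<mark class='{VERDICT_COLORS[claim.verdict]}'>{claim.text}</mark>"

if __name__ == "__main__":
    claim = deep_analysis(Claim("The ozone layer is recovering."))
    print(render_highlight(claim))
    # Step 3 (inspection) would expose claim.sub_queries and claim.evidence as the Reasoning Trail.
{{/code}}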
 
 ----
 
-== Publication Model ==
+== Manual vs Automated Responsibilities ==
 
-FactHarbor implements a risk-based publication model with three modes:
+=== Human-Only Tasks ===
 
-=== Mode 1: Draft-Only ===
-* Failed quality gates
-* High-risk content pending expert review
-* Internal review queue only
+These require human judgment, ethics, or contextual interpretation:
 
-=== Mode 2: AI-Generated (Public) ===
-* Passed all quality gates
-* Risk tier B or C
-* Clear AI-generated labeling
-* Users can request human review
+* Definition of key terms in claims
+* Approval or rejection of scenarios
+* Interpretation of evidence in context
+* Final verdict approval
+* Governance decisions and dispute resolution
+* High-risk domain oversight
+* Ethical boundary decisions (especially medical, political, psychological)
 
-=== Mode 3: Human-Reviewed ===
-* Validated by human reviewers/experts
-* "Human-Reviewed" status badge
-* Required for Tier A content publication
+=== Semi-Automated (AI Draft → Human Review) ===
 
-See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed publication mode descriptions.
+AKEL can draft these, but humans must refine/approve:
 
-----
+* Scenario structures (definitions, assumptions, context)
+* Evaluation methods
+* Evidence relevance suggestions
+* Reliability hints
+* Verdict reasoning chains
+* Uncertainty and limitations
+* Scenario comparison explanations
+* Suggestions for merging or splitting scenarios
+* Draft public summaries
 
-== Risk Tiers and Automation Levels ==
+=== Fully Automated Structural Tasks ===
 
-=== Tier A (High Risk) ===
-* **Domains**: Medical, legal, elections, safety, security
-* **Automation**: AI can draft, human review required for "Human-Reviewed" status
-* **AI publication**: Allowed with prominent disclaimers and warnings
-* **Audit rate**: Recommendation: 30-50%
-
-=== Tier B (Medium Risk) ===
-* **Domains**: Complex policy, science, causality claims
-* **Automation**: AI can draft and publish (Mode 2)
-* **Human review**: Optional, audit-based
-* **Audit rate**: Recommendation: 10-20%
-
-=== Tier C (Low Risk) ===
-* **Domains**: Definitions, established facts, historical data
-* **Automation**: AI publication default
-* **Human review**: On request or via sampling
-* **Audit rate**: Recommendation: 5-10%
-
-----
-
-== Human-Only Tasks ==
-
-These require human judgment and cannot be automated:
-
-* **Ethical boundary decisions** (especially medical, political, psychological harm assessment)
-* **Dispute resolution** between conflicting expert opinions
-* **Governance policy** setting and enforcement
-* **Final authority** on Tier A "Human-Reviewed" status
-* **Audit system oversight** and quality standard definition
-* **Risk tier policy** adjustments based on societal context
-
-----
-
-== AI-Draft with Audit (Semi-Automated) ==
-
-AKEL drafts these; humans validate via sampling audits:
-
-* **Scenario structures** (definitions, assumptions, context)
-* **Evaluation methods** and reasoning chains
-* **Evidence relevance** assessment and ranking
-* **Reliability scoring** and source evaluation
-* **Verdict reasoning** with uncertainty quantification
-* **Contradiction and reservation** identification
-* **Scenario comparison** explanations
-* **Public summaries** and accessibility text
-
-Most Tier B and C content remains in AI-draft status unless:
-* Users request human review
-* Audits identify errors
-* High engagement triggers review
-* Community flags issues
-
-----
-
-== Fully Automated Structural Tasks ==
-
 These require no human interpretation:
 
-* **Claim normalization** (canonical form generation)
-* **Duplicate detection** (vector embeddings, clustering)
-* **Evidence metadata extraction** (dates, authors, publication info)
-* **Basic reliability heuristics** (source reputation scoring)
-* **Contradiction detection** (conflicting statements across sources)
-* **Re-evaluation triggers** (new evidence, source updates)
-* **Layout generation** (diagrams, summaries, UI presentation)
-* **Federation integrity checks** (cross-node data validation)
-
-----
-
-== Quality Gates (Automated) ==
-
-Before AI-draft publication (Mode 2), content must pass:
-
-1. **Source Quality Gate**
- * Primary sources verified
- * Citations complete and accessible
- * Source reliability scored
-
-2. **Contradiction Search Gate** (MANDATORY)
- * Counter-evidence actively sought
- * Reservations and limitations identified
- * Bubble detection (echo chambers, conspiracy theories)
- * Diverse perspective verification
-
-3. **Uncertainty Quantification Gate**
- * Confidence scores calculated
- * Limitations stated
- * Data gaps disclosed
-
-4. **Structural Integrity Gate**
- * No hallucinations detected
- * Logic chain valid
- * References verifiable
-
-See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed quality gate specifications.
-
-----
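The removed revision couples the three publication modes to the four automated quality gates: a draft is published as Mode 2 only when every gate passes, it stays in Mode 1 otherwise, and Mode 3 is reached only through human review. A minimal sketch of that gate-to-mode decision, with the gate predicates and ClaimDraft fields invented for illustration:

{{code language="python"}}
# Illustrative sketch only -- the gate predicates and ClaimDraft fields are
# assumptions, not the AKEL quality-gate implementation.
from dataclasses import dataclass

@dataclass
class ClaimDraft:
    sources_verified: bool            # Source Quality Gate input
    counter_evidence_searched: bool   # Contradiction Search Gate input (mandatory)
    limitations_stated: bool          # Uncertainty Quantification Gate input
    references_resolve: bool          # Structural Integrity Gate input

QUALITY_GATES = {
    "source_quality": lambda d: d.sources_verified,
    "contradiction_search": lambda d: d.counter_evidence_searched,
    "uncertainty_quantification": lambda d: d.limitations_stated,
    "structural_integrity": lambda d: d.references_resolve,
}

def publication_mode(draft: ClaimDraft) -> str:
    """Mode 2 only if every automated gate passes; Mode 3 is reached solely via human review."""
    failed = [name for name, gate in QUALITY_GATES.items() if not gate(draft)]
    return "Mode 1: Draft-Only" if failed else "Mode 2: AI-Generated (Public)"

print(publication_mode(ClaimDraft(True, True, True, True)))    # Mode 2: AI-Generated (Public)
print(publication_mode(ClaimDraft(True, False, True, True)))   # Mode 1: Draft-Only
{{/code}}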
-
-== Audit System ==
-
-Instead of reviewing all AI output, systematic sampling audits ensure quality:
-
-=== Stratified Sampling ===
-* Risk tier (A > B > C sampling rates)
-* Confidence scores (low confidence → more audits)
-* Traffic/engagement (popular content audited more)
-* Novelty (new topics/claim types prioritized)
-* User flags and disagreement signals
-
-=== Continuous Improvement Loop ===
-Audit findings improve:
-* Query templates
-* Source reliability weights
-* Contradiction detection algorithms
-* Risk tier assignment rules
-* Bubble detection heuristics
-
-=== Transparency ===
-* Audit statistics published
-* Accuracy rates by tier reported
-* System improvements documented
-
-----
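The removed Audit System section replaces full review with stratified sampling: the audit probability rises with risk tier, low confidence, traffic, novelty and user flags, using the tier rates recommended earlier on this page (roughly 30-50% for A, 10-20% for B, 5-10% for C). A sketch of such a sampling rule, taking the midpoints of those ranges as base rates; the boost factors are invented for illustration:

{{code language="python"}}
# Illustrative sketch only -- base rates come from the tier recommendations in the
# earlier revision of this page; the boost factors are invented for illustration.
import random

BASE_AUDIT_RATE = {"A": 0.40, "B": 0.15, "C": 0.07}   # midpoints of the recommended ranges

def audit_probability(tier: str, confidence: float, views: int, flagged: bool) -> float:
    p = BASE_AUDIT_RATE[tier]
    if confidence < 0.5:
        p *= 1.5               # low-confidence verdicts are audited more often
    if views > 10_000:
        p *= 1.3               # popular content gets extra scrutiny
    if flagged:
        p = max(p, 0.9)        # user flags almost always trigger an audit
    return min(p, 1.0)

def select_for_audit(tier: str, confidence: float, views: int, flagged: bool) -> bool:
    return random.random() < audit_probability(tier, confidence, views, flagged)

print(audit_probability("B", confidence=0.4, views=25_000, flagged=False))  # about 0.29
{{/code}}

A real node would also log each audit decision so that the transparency statistics described above can be reported per tier.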
-
-== Automation Roadmap ==
-
-Automation capabilities increase with system maturity while maintaining quality oversight.
-
-=== POC (Current Focus) ===
-
-**Automated:**
 * Claim normalization
-* Scenario template generation
+* Duplicate & cluster detection (vector embeddings)
 * Evidence metadata extraction
-* Simple verdict drafts
-* **AI-generated publication** (Mode 2, with quality gates)
-* **Contradiction search**
-* **Risk tier assignment**
+* Basic reliability heuristics
+* Contradiction detection
+* Re-evaluation triggers
+* Batch layout generation (diagrams, summaries)
+* Federation integrity checks
 
-**Human:**
-* High-risk content validation (Tier A)
-* Sampling audits across all tiers
-* Quality standard refinement
-* Governance decisions
+== Automation Roadmap ==
 
-=== Beta 0 (Enhanced Automation) ===
+Automation increases with maturity.
 
-**Automated:**
-* Detailed scenario generation
-* Advanced evidence reliability scoring
-* Cross-scenario comparisons
-* Multi-source contradiction detection
-* Internal Truth Landscape generation
-* **Increased AI-draft coverage** (more Tier B content)
+=== POC (Low Automation) ===
+* **Automated**: Claim normalization, Light scenario templates, Metadata extraction, Internal drafts.
+* **Human**: All scenario definitions, Evidence interpretation, Verdict creation, Governance.
 
-**Human:**
-* Tier A final approval
-* Audit sampling (continued)
-* Expert validation of complex domains
-* Quality improvement oversight
+=== Beta 0 (Medium Automation) ===
+* **Automated**: Detailed scenario drafts, Evidence reliability scoring, Cross-scenario comparisons, Contradiction detection.
+* **Human**: Scenario approval, Final verdict validation.
 
 === Release 1.0 (High Automation) ===
+* **Automated**: Full scenario generation, Evidence relevance ranking, Bayesian verdict scoring, Anomaly detection, Federation sync.
+* **Human**: Final approval, Ethical decisions, Oversight.
 
-**Automated:**
-* Full scenario generation (comprehensive)
-* Bayesian verdict scoring across scenarios
-* Multi-scenario summary generation
-* Anomaly detection across federated nodes
-* AKEL-assisted cross-node synchronization
-* **Most Tier B and all Tier C** auto-published
+== Automation Levels ==
 
-**Human:**
-* Tier A oversight (still required)
-* Strategic audits (lower sampling rates, higher value)
-* Ethical decisions and policy
-* Conflict resolution
+* **Level 0 — Human-Centric (POC)**: AI is purely advisory, nothing auto-published.
+* **Level 1 — Assisted (Beta 0)**: AI drafts structures; humans approve each part.
+* **Level 2 — Structured (Release 1.0)**: AI produces near-complete drafts; humans refine.
+* **Level 3 — Distributed Intelligence (Future)**: Nodes exchange embeddings and alerts; humans still approve.
 
-----
+== Automation Matrix ==
 
-== Automation Levels Diagram ==
+* **Always Human**: Final verdict, Scenario validity, Ethics, Disputes.
+* **Mostly AI**: Normalization, Clustering, Metadata, Heuristics, Alerts.
+* **Mixed**: Definitions, Boundaries, Assumptions, Reasoning.
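The new Automation Levels and Automation Matrix can be read together as one policy: tasks marked "Always Human" never run unattended, while the structural "Mostly AI" tasks may do so once the deployment reaches at least Level 1. A sketch of that rule, with the enum and task sets assumed for illustration:

{{code language="python"}}
# Illustrative sketch only -- the enum and task sets are assumptions for
# illustration, not FactHarbor configuration.
from enum import IntEnum

class AutomationLevel(IntEnum):
    HUMAN_CENTRIC = 0        # POC: AI purely advisory, nothing auto-published
    ASSISTED = 1             # Beta 0: AI drafts structures, humans approve each part
    STRUCTURED = 2           # Release 1.0: near-complete drafts, humans refine
    DISTRIBUTED = 3          # Future: nodes exchange embeddings and alerts

ALWAYS_HUMAN = {"final_verdict", "scenario_validity", "ethics", "disputes"}
MOSTLY_AI = {"normalization", "clustering", "metadata", "heuristics", "alerts"}

def may_auto_complete(task: str, level: AutomationLevel) -> bool:
    """Matrix items marked 'Always Human' never run unattended; structural
    'Mostly AI' tasks may, from Level 1 (Assisted) upwards."""
    if task in ALWAYS_HUMAN:
        return False
    return task in MOSTLY_AI and level >= AutomationLevel.ASSISTED

print(may_auto_complete("clustering", AutomationLevel.ASSISTED))        # True
print(may_auto_complete("final_verdict", AutomationLevel.DISTRIBUTED))  # False
{{/code}}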
 
-{{include reference="Test.FactHarborV09.Specification.Diagrams.Automation Level.WebHome"/}}
+== Diagram References ==
 
-----
+{{include reference="FactHarbor.Specification.Diagrams.Automation Roadmap.WebHome"/}}
 
-== Automation Roadmap Diagram ==
+{{include reference="FactHarbor.Specification.Diagrams.Automation Level.WebHome"/}}
 
-{{include reference="Test.FactHarborV09.Specification.Diagrams.Automation Roadmap.WebHome"/}}
-
-----
-
-== Manual vs Automated Matrix ==
-
-{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
-
-----
-
-== Related Pages ==
-
-* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
-* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
-* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
-* [[Governance>>FactHarbor.Organisation.Governance]]
-
+{{include reference="FactHarbor.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}