Changes for page Automation
Last modified by Robert Schaub on 2025/12/24 20:34
Summary
-
Page properties (1 modified, 0 added, 0 removed)
Details
- Page properties
-
- Content
-
@@ -1,296 +1,167 @@
 = Automation =

-Automation in FactHarbor amplifies human capability while implementing risk-based oversight.
+Automation in FactHarbor amplifies human capability but never replaces human oversight.
+All automated outputs require human review before publication.

 This chapter defines:

-* Risk-based publication model
-* Quality gates for AI-generated content
 * What must remain human-only
-* What AI (AKEL) can draft and publish
+* What AI (AKEL) can draft
 * What can be fully automated
 * How automation evolves through POC → Beta 0 → Release 1.0

-== POC v1 (AI-Generated Publication Demonstration) ==

-The goal of POC v1 is to validate the automated reasoning capabilities and demonstrate AI-generated content publication.

-=== Workflow ===

-1. **Input**: User pastes a block of raw text.
-1. **Deep Analysis (Background)**: The system autonomously performs the full pipeline **before** displaying the text:

-* Extraction & Normalisation
-* Scenario & Sub-query generation
-* Evidence retrieval with **contradiction search**
-* Quality gate validation
-* Verdict computation

-1. **Visualisation (Extraction & Marking)**: The system displays the text with claims extracted and marked.

-* **Verdict-Based Coloring**: The extraction highlights (e.g. Orange/Green) are chosen **according to the computed verdict** for each claim.
-* **AI-Generated Label**: Clear indication that content is AI-produced

-1. **Inspection**: User clicks a highlighted claim to see the **Reasoning Trail**, showing exactly which evidence and sub-queries led to that verdict.

-=== Technical Scope ===

-* **AI-Generated Publication**: Content published as Mode 2 (AI-Generated, no prior human review)
-* **Quality Gates Active**: All automated quality checks enforced
-* **Contradiction Search Demonstrated**: Shows counter-evidence and reservation detection
-* **Risk Tier Classification**: POC shows tier assignment (demo purposes)
-* **No Human Approval Gate**: Demonstrates scalable AI publication
-* **Structured Sub-Queries**: Logic generated by decomposing claims into the FactHarbor data model

 ----

-== Publication Model ==
+= Manual vs Automated Responsibilities =

-FactHarbor implements a risk-based publication model with three modes:

-=== Mode 1: Draft-Only ===

-* Failed quality gates
-* High-risk content pending expert review
-* Internal review queue only

-=== Mode 2: AI-Generated (Public) ===

-* Passed all quality gates
-* Risk tier B or C
-* Clear AI-generated labeling
-* Users can request human review

-=== Mode 3: Human-Reviewed ===

-* Validated by human reviewers/experts
-* "Human-Reviewed" status badge
-* Required for Tier A content publication

-See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed publication mode descriptions.
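The removed Publication Model section ties the publication mode to quality-gate results and human validation. A minimal sketch of that decision, assuming Modes 1–3 and Tiers A/B/C as defined above; the function and type names are illustrative assumptions, not a specified interface.

```python
from enum import Enum

class Tier(Enum):
    A = "high_risk"
    B = "medium_risk"
    C = "low_risk"

class Mode(Enum):
    DRAFT_ONLY = 1      # Mode 1: internal review queue only
    AI_GENERATED = 2    # Mode 2: public, clearly labeled as AI-generated
    HUMAN_REVIEWED = 3  # Mode 3: "Human-Reviewed" status badge

def publication_mode(tier: Tier, gates_passed: bool, human_validated: bool) -> Mode:
    """Map quality-gate results and review state to a publication mode (illustrative)."""
    if not gates_passed:
        return Mode.DRAFT_ONLY       # failed quality gates stay in the review queue
    if human_validated:
        return Mode.HUMAN_REVIEWED   # required for Tier A "Human-Reviewed" status
    # Tier A content may still appear as AI-generated, but only with prominent
    # disclaimers; how that flag is carried is left open here.
    return Mode.AI_GENERATED
```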
-----

-== Risk Tiers and Automation Levels ==

-=== Tier A (High Risk) ===

-* **Domains**: Medical, legal, elections, safety, security
-* **Automation**: AI can draft, human review required for "Human-Reviewed" status
-* **AI publication**: Allowed with prominent disclaimers and warnings
-* **Audit rate**: Recommendation: 30-50%

-=== Tier B (Medium Risk) ===

-* **Domains**: Complex policy, science, causality claims
-* **Automation**: AI can draft and publish (Mode 2)
-* **Human review**: Optional, audit-based
-* **Audit rate**: Recommendation: 10-20%

-=== Tier C (Low Risk) ===

-* **Domains**: Definitions, established facts, historical data
-* **Automation**: AI publication default
-* **Human review**: On request or via sampling
-* **Audit rate**: Recommendation: 5-10%

-----

 == Human-Only Tasks ==

-These require human judgment and cannot be automated:
+These require human judgment, ethics, or contextual interpretation:

-* **Ethical boundary decisions** (especially medical, political, psychological harm assessment)
-* **Dispute resolution** between conflicting expert opinions
-* **Governance policy** setting and enforcement
-* **Final authority** on Tier A "Human-Reviewed" status
-* **Audit system oversight** and quality standard definition
-* **Risk tier policy** adjustments based on societal context
+* Definition of key terms in claims
+* Approval or rejection of scenarios
+* Interpretation of evidence in context
+* Final verdict approval
+* Governance decisions and dispute resolution
+* High-risk domain oversight
+* Ethical boundary decisions (especially medical, political, psychological)

-----
+== Semi-Automated (AI Draft → Human Review) ==

-== AI-Draft with Audit (Semi-Automated) ==
+AKEL can draft these, but humans must refine/approve:

-AKEL drafts these; humans validate via sampling audits:
+* Scenario structures (definitions, assumptions, context)
+* Evaluation methods
+* Evidence relevance suggestions
+* Reliability hints
+* Verdict reasoning chains
+* Uncertainty and limitations
+* Scenario comparison explanations
+* Suggestions for merging or splitting scenarios
+* Draft public summaries

-* **Scenario structures** (definitions, assumptions, context)
-* **Evaluation methods** and reasoning chains
-* **Evidence relevance** assessment and ranking
-* **Reliability scoring** and source evaluation
-* **Verdict reasoning** with uncertainty quantification
-* **Contradiction and reservation** identification
-* **Scenario comparison** explanations
-* **Public summaries** and accessibility text

-Most Tier B and C content remains in AI-draft status unless:

-* Users request human review
-* Audits identify errors
-* High engagement triggers review
-* Community flags issues

-----

 == Fully Automated Structural Tasks ==

 These require no human interpretation:

-* **Claim normalization** (canonical form generation)
-* **Duplicate detection** (vector embeddings, clustering)
-* **Evidence metadata extraction** (dates, authors, publication info)
-* **Basic reliability heuristics** (source reputation scoring)
-* **Contradiction detection** (conflicting statements across sources)
-* **Re-evaluation triggers** (new evidence, source updates)
-* **Layout generation** (diagrams, summaries, UI presentation)
-* **Federation integrity checks** (cross-node data validation)
+* Claim normalization
+* Duplicate & cluster detection (vector embeddings)
+* Evidence metadata extraction
+* Basic reliability heuristics
+* Contradiction detection
+* Re-evaluation triggers
+* Batch layout generation (diagrams, summaries)
+* Federation integrity checks

 ----
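Both versions list duplicate and cluster detection via vector embeddings as fully automated. A minimal sketch under the assumption that normalized claims already carry embedding vectors; the greedy strategy, threshold value, and function names are illustrative only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_claims(embeddings: dict[str, np.ndarray], threshold: float = 0.9) -> list[set[str]]:
    """Greedy single-pass clustering: a claim joins the first cluster whose
    representative vector is at least `threshold` similar, otherwise it starts
    a new cluster. Clusters with more than one member indicate duplicates."""
    clusters: list[tuple[np.ndarray, set[str]]] = []
    for claim_id, vec in embeddings.items():
        for representative, members in clusters:
            if cosine_similarity(representative, vec) >= threshold:
                members.add(claim_id)
                break
        else:
            clusters.append((vec, {claim_id}))
    return [members for _, members in clusters]
```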
-== Quality Gates (Automated) ==
+= Automation Roadmap =

-Before AI-draft publication (Mode 2), content must pass:
+Automation increases with maturity.

-1. **Source Quality Gate**
+== POC (Low Automation) ==

-* Primary sources verified
-* Citations complete and accessible
-* Source reliability scored
+=== Automated ===

-2. **Contradiction Search Gate** (MANDATORY)
+* Claim normalization
+* Light scenario templates
+* Evidence metadata extraction
+* Simple verdict drafts (internal only)

-* Counter-evidence actively sought
-* Reservations and limitations identified
-* Bubble detection (echo chambers, conspiracy theories)
-* Diverse perspective verification
+=== Human ===

-3. **Uncertainty Quantification Gate**
+* All scenario definitions
+* Evidence interpretation
+* Verdict creation
+* Governance

-* Confidence scores calculated
-* Limitations stated
-* Data gaps disclosed
+== Beta 0 (Medium Automation) ==

-4. **Structural Integrity Gate**
+=== Automated ===

-* No hallucinations detected
-* Logic chain valid
-* References verifiable
+* Detailed scenario drafts
+* Evidence reliability scoring
+* Cross-scenario comparisons
+* Contradiction detection (local + remote nodes)
+* Internal Truth Landscape drafts

-See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed quality gate specifications.
+=== Human ===

-----
+* Scenario approval
+* Final verdict validation

-== Audit System ==
+== Release 1.0 (High Automation) ==

-Instead of reviewing all AI output, systematic sampling audits ensure quality:
+=== Automated ===

-=== Stratified Sampling ===
+* Full scenario generation (definitions, assumptions, boundaries)
+* Evidence relevance scoring and ranking
+* Bayesian verdict scoring across scenario sets
+* Multi-scenario summary generation
+* Anomaly detection across nodes
+* AKEL-assisted federated synchronization

-* Risk tier (A > B > C sampling rates)
-* Confidence scores (low confidence → more audits)
-* Traffic/engagement (popular content audited more)
-* Novelty (new topics/claim types prioritized)
-* User flags and disagreement signals
+=== Human ===

-=== Continuous Improvement Loop ===
+* Final approval of all scenarios and verdicts
+* Ethical decisions
+* Oversight and conflict resolution

-Audit findings improve:

-* Query templates
-* Source reliability weights
-* Contradiction detection algorithms
-* Risk tier assignment rules
-* Bubble detection heuristics

-=== Transparency ===

-* Audit statistics published
-* Accuracy rates by tier reported
-* System improvements documented

 ----
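The removed Audit System section describes stratified sampling by risk tier, boosted for low confidence, high engagement, novelty, and user flags, with recommended audit rates of 30-50% (A), 10-20% (B), and 5-10% (C). A minimal sketch under those assumptions; the base rates are midpoints of the recommended ranges and every weight is an illustrative placeholder.

```python
BASE_AUDIT_RATE = {"A": 0.40, "B": 0.15, "C": 0.07}  # midpoints of the recommended ranges

def audit_probability(tier: str, confidence: float, views: int,
                      is_novel: bool, user_flags: int) -> float:
    """Stratified sampling: start from the tier base rate and boost for risk signals."""
    p = BASE_AUDIT_RATE[tier]
    if confidence < 0.6:               # low-confidence verdicts audited more often
        p += 0.10
    if views > 10_000:                 # popular content audited more often
        p += 0.10
    if is_novel:                       # new topics / claim types prioritized
        p += 0.05
    p += min(0.25, 0.05 * user_flags)  # user flags and disagreement signals
    return min(p, 1.0)
```

An item would then be queued for human audit with probability `p`, so Tier A content is always sampled far more heavily than Tier C content.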
-== Automation Roadmap ==
+= Automation Levels =

-Automation capabilities increase with system maturity while maintaining quality oversight.
+== Level 0 — Human-Centric (POC) ==

-=== POC (Current Focus) ===
+AI is purely advisory, nothing auto-published.

-**Automated:**
+== Level 1 — Assisted (Beta 0) ==

-* Claim normalization
-* Scenario template generation
-* Evidence metadata extraction
-* Simple verdict drafts
-* **AI-generated publication** (Mode 2, with quality gates)
-* **Contradiction search**
-* **Risk tier assignment**
+AI drafts structures; humans approve each part.

-**Human:**
+== Level 2 — Structured (Release 1.0) ==

-* High-risk content validation (Tier A)
-* Sampling audits across all tiers
-* Quality standard refinement
-* Governance decisions
+AI produces near-complete drafts; humans refine.

-=== Beta 0 (Enhanced Automation) ===
+== Level 3 — Distributed Intelligence (Future) ==

-**Automated:**
+Nodes exchange embeddings, contradiction alerts, and scenario templates.
+Humans still approve everything.

-* Detailed scenario generation
-* Advanced evidence reliability scoring
-* Cross-scenario comparisons
-* Multi-source contradiction detection
-* Internal Truth Landscape generation
-* **Increased AI-draft coverage** (more Tier B content)
+----

-**Human:**
+= Automation Matrix =

-* Tier A final approval
-* Audit sampling (continued)
-* Expert validation of complex domains
-* Quality improvement oversight
+== Always Human ==

-=== Release 1.0 (High Automation) ===
+* Final verdict approval
+* Scenario validity
+* Ethical decisions
+* Dispute resolution

-**Automated:**
+== Mostly AI (Human Validation Needed) ==

-* Full scenario generation (comprehensive)
-* Bayesian verdict scoring across scenarios
-* Multi-scenario summary generation
-* Anomaly detection across federated nodes
-* AKEL-assisted cross-node synchronization
-* **Most Tier B and all Tier C** auto-published
+* Claim normalization
+* Clustering
+* Evidence metadata
+* Reliability heuristics
+* Scenario drafts
+* Contradiction detection

-**Human:**
+== Mixed ==

-* Tier A oversight (still required)
-* Strategic audits (lower sampling rates, higher value)
-* Ethical decisions and policy
-* Conflict resolution
+* Definitions of ambiguous terms
+* Boundary choices
+* Assumption evaluation
+* Evidence selection
+* Verdict reasoning

 ----
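Both versions of the roadmap name Bayesian verdict scoring at Release 1.0 without giving a formula. One common reading, shown here purely as a sketch, is a log-odds update over independent pieces of evidence; the prior, the likelihood ratios, and the independence assumption are placeholders rather than the specified method.

```python
import math

def bayesian_verdict_score(prior: float, likelihood_ratios: list[float]) -> float:
    """Posterior probability that a scenario's claim holds, assuming each piece of
    evidence contributes an independent likelihood ratio P(evidence|true)/P(evidence|false)."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: neutral prior, two supporting pieces of evidence and one contradicting piece.
score = bayesian_verdict_score(0.5, [3.0, 2.0, 0.5])  # ≈ 0.75
```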
-== Automation Levels Diagram ==
+= Diagram References =

-{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Automation Level.WebHome"/}}
+{{include reference="FactHarbor.Archive.Diagrams v0\.8q.Automation Roadmap.WebHome"/}}

-----
+{{include reference="FactHarbor.Archive.Diagrams v0\.8q.Automation Level.WebHome"/}}

-== Automation Roadmap Diagram ==

-{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Automation Roadmap.WebHome"/}}

-----

-== Manual vs Automated Matrix ==

-{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}

-----

-== Related Pages ==

-* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
-* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
-* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
-* [[Governance>>FactHarbor.Organisation.Governance]]
+{{include reference="FactHarbor.Archive.Diagrams v0\.8q.Manual vs Automated matrix.WebHome"/}}