= Automation =

Automation in FactHarbor amplifies human capability while keeping oversight proportionate to risk.

This chapter defines:
* Risk-based publication model
* Quality gates for AI-generated content
* What must remain human-only
* What AI (AKEL) can draft and publish
* What can be fully automated
* How automation evolves through POC → Beta 0 → Release 1.0

== 1. POC v1 (AI-Generated Publication Demonstration) ==

The goal of POC v1 is to validate automated reasoning capabilities and to demonstrate AI-generated content publication.

=== 1.1 Workflow ===

1. **Input**: The user pastes a block of raw text.
1. **Deep Analysis (Background)**: The system autonomously performs the full pipeline **before** displaying the text:
* Extraction & Normalisation
* Scenario & Sub-query generation
* Evidence retrieval with **contradiction search**
* Quality gate validation
* Verdict computation
1. **Visualisation (Extraction & Marking)**: The system displays the text with claims extracted and marked.
* **Verdict-Based Coloring**: The extraction highlights (e.g. Orange/Green) are chosen **according to the computed verdict** for each claim.
* **AI-Generated Label**: Clear indication that the content is AI-produced.
1. **Inspection**: The user clicks a highlighted claim to see the **Reasoning Trail**, showing exactly which evidence and sub-queries led to that verdict.
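
The sketch below illustrates how this background pipeline could be wired together. All names and function bodies are placeholders (assumptions for readability); the actual extraction, retrieval, and verdict logic is specified by AKEL.

{{code language="python"}}
# Illustrative sketch of the POC v1 background pipeline. Function bodies are
# placeholders; the real extraction, retrieval and verdict logic belongs to AKEL.
from dataclasses import dataclass, field

@dataclass
class ClaimResult:
    claim: str
    verdict: str                                  # e.g. "supported" / "contested"
    confidence: float                             # 0.0 - 1.0
    reasoning_trail: list[str] = field(default_factory=list)

def extract_claims(raw_text: str) -> list[str]:
    # Placeholder for extraction & normalisation.
    return [s.strip() for s in raw_text.split(".") if s.strip()]

def analyse_claim(claim: str) -> ClaimResult:
    # Placeholder for sub-query generation, evidence retrieval with
    # contradiction search, quality-gate validation and verdict computation.
    return ClaimResult(claim, verdict="contested", confidence=0.5,
                       reasoning_trail=["sub-query: ...", "evidence: ..."])

def verdict_colour(result: ClaimResult) -> str:
    # Verdict-based colouring of the extraction highlights.
    return {"supported": "green", "contested": "orange"}.get(result.verdict, "grey")

def run_pipeline(raw_text: str) -> list[tuple[str, ClaimResult]]:
    # The full analysis runs before the marked-up text is displayed.
    return [(verdict_colour(r), r) for r in map(analyse_claim, extract_claims(raw_text))]
{{/code}}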

=== 1.2 Technical Scope ===

* **AI-Generated Publication**: Content published as Mode 2 (AI-Generated, no prior human review)
* **Quality Gates Active**: All automated quality checks enforced
* **Contradiction Search Demonstrated**: Shows counter-evidence and reservation detection
* **Risk Tier Classification**: Tier assignment is shown for demonstration purposes
* **No Human Approval Gate**: Demonstrates scalable AI publication
* **Structured Sub-Queries**: Reasoning logic is generated by decomposing claims into the FactHarbor data model


== 2. Publication Model ==

FactHarbor implements a risk-based publication model with three modes:

=== 2.1 Mode 1: Draft-Only ===
* Failed quality gates
* High-risk content pending expert review
* Internal review queue only

=== 2.2 Mode 2: AI-Generated (Public) ===
* Passed all quality gates
* Risk tier B or C
* Clear AI-generated labeling
* Users can request human review

=== 2.3 Mode 3: Human-Reviewed ===
* Validated by human reviewers/experts
* "Human-Reviewed" status badge
* Required for Tier A content publication

See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed publication mode descriptions.
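
Read together, the three modes reduce to a simple decision rule over quality-gate results, risk tier, and review status. The sketch below is illustrative only; the names and the simplified handling of Tier A AI publication (with disclaimers) are assumptions, not part of the specification.

{{code language="python"}}
# Illustrative decision rule for the three publication modes.
# Names are assumptions; Tier A AI publication with prominent disclaimers
# is a policy refinement not modelled here.
from enum import Enum

class PublicationMode(Enum):
    DRAFT_ONLY = 1        # Mode 1: internal review queue only
    AI_GENERATED = 2      # Mode 2: public, clearly labelled as AI-generated
    HUMAN_REVIEWED = 3    # Mode 3: validated by human reviewers/experts

def publication_mode(gates_passed: bool, risk_tier: str, human_reviewed: bool) -> PublicationMode:
    if human_reviewed:
        return PublicationMode.HUMAN_REVIEWED     # required for Tier A publication
    if not gates_passed:
        return PublicationMode.DRAFT_ONLY         # failed quality gates
    if risk_tier in ("B", "C"):
        return PublicationMode.AI_GENERATED       # users can still request human review
    return PublicationMode.DRAFT_ONLY             # Tier A pending expert review
{{/code}}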


== 3. Risk Tiers and Automation Levels ==

=== 3.1 Tier A (High Risk) ===
* **Domains**: Medical, legal, elections, safety, security
* **Automation**: AI can draft; human review is required for "Human-Reviewed" status
* **AI publication**: Allowed with prominent disclaimers and warnings
* **Audit rate**: 30-50% (recommended)

=== 3.2 Tier B (Medium Risk) ===
* **Domains**: Complex policy, science, causality claims
* **Automation**: AI can draft and publish (Mode 2)
* **Human review**: Optional, audit-based
* **Audit rate**: 10-20% (recommended)

=== 3.3 Tier C (Low Risk) ===
* **Domains**: Definitions, established facts, historical data
* **Automation**: AI publication by default
* **Human review**: On request or via sampling
* **Audit rate**: 5-10% (recommended)
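
The tier policies above can also be captured as configuration data. The structure below is an illustrative assumption; the audit rates simply restate the recommendations from this section as fractions.

{{code language="python"}}
# Risk-tier policy expressed as data (illustrative structure only).
RISK_TIER_POLICY = {
    "A": {"domains": ["medical", "legal", "elections", "safety", "security"],
          "ai_publication": "allowed with prominent disclaimers and warnings",
          "human_review": "required for Human-Reviewed status",
          "audit_rate": (0.30, 0.50)},
    "B": {"domains": ["complex policy", "science", "causality claims"],
          "ai_publication": "draft and publish (Mode 2)",
          "human_review": "optional, audit-based",
          "audit_rate": (0.10, 0.20)},
    "C": {"domains": ["definitions", "established facts", "historical data"],
          "ai_publication": "default",
          "human_review": "on request or via sampling",
          "audit_rate": (0.05, 0.10)},
}
{{/code}}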


== 4. Human-Only Tasks ==

These require human judgment and cannot be automated:

* **Ethical boundary decisions** (especially medical, political, psychological harm assessment)
* **Dispute resolution** between conflicting expert opinions
* **Governance policy** setting and enforcement
* **Final authority** on Tier A "Human-Reviewed" status
* **Audit system oversight** and quality standard definition
* **Risk tier policy** adjustments based on societal context


== 5. AI-Draft with Audit (Semi-Automated) ==

AKEL drafts these; humans validate via sampling audits:

* **Scenario structures** (definitions, assumptions, context)
* **Evaluation methods** and reasoning chains
* **Evidence relevance** assessment and ranking
* **Reliability scoring** and source evaluation
* **Verdict reasoning** with uncertainty quantification
* **Contradiction and reservation** identification
* **Scenario comparison** explanations
* **Public summaries** and accessibility text

Most Tier B and C content remains in AI-draft status unless:
* Users request human review
* Audits identify errors
* High engagement triggers review
* Community flags issues
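
These escalation conditions can be expressed as a single predicate over a few per-item signals. The field names and thresholds below are assumptions for illustration only.

{{code language="python"}}
# Illustrative escalation check for Tier B/C content in AI-draft status.
# Field names and thresholds are assumptions, not part of the specification.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    user_review_requested: bool = False
    audit_found_errors: bool = False
    engagement: float = 0.0          # normalised 0-1 traffic/engagement signal
    community_flags: int = 0

def needs_human_review(s: ContentSignals,
                       engagement_threshold: float = 0.9,
                       flag_threshold: int = 3) -> bool:
    return (s.user_review_requested
            or s.audit_found_errors
            or s.engagement >= engagement_threshold
            or s.community_flags >= flag_threshold)
{{/code}}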


== 6. Fully Automated Structural Tasks ==

These require no human interpretation:

* **Claim normalization** (canonical form generation)
* **Duplicate detection** (vector embeddings, clustering; see the sketch below)
* **Evidence metadata extraction** (dates, authors, publication info)
* **Basic reliability heuristics** (source reputation scoring)
* **Contradiction detection** (conflicting statements across sources)
* **Re-evaluation triggers** (new evidence, source updates)
* **Layout generation** (diagrams, summaries, UI presentation)
* **Federation integrity checks** (cross-node data validation)
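
For example, duplicate detection over claim embeddings reduces to a similarity threshold. This is a minimal sketch: the embedding model is assumed to exist elsewhere in the pipeline, and the 0.92 threshold is an illustrative assumption.

{{code language="python"}}
# Minimal duplicate detection over claim embeddings using cosine similarity.
# The embedding model and the threshold are assumptions for illustration.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_duplicates(embeddings: dict[str, list[float]], threshold: float = 0.92) -> list[tuple[str, str]]:
    # Return pairs of claim IDs whose embeddings are near-identical.
    ids = list(embeddings)
    return [(ids[i], ids[j])
            for i in range(len(ids))
            for j in range(i + 1, len(ids))
            if cosine(embeddings[ids[i]], embeddings[ids[j]]) >= threshold]
{{/code}}

Clustering over the same embeddings extends this pairwise check to groups of near-duplicate claims.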


== 7. Quality Gates (Automated) ==

Before AI-generated publication (Mode 2), content must pass:

1. **Source Quality Gate**
* Primary sources verified
* Citations complete and accessible
* Source reliability scored

2. **Contradiction Search Gate** (MANDATORY)
* Counter-evidence actively sought
* Reservations and limitations identified
* Bubble detection (echo chambers, conspiracy theories)
* Diverse perspective verification

3. **Uncertainty Quantification Gate**
* Confidence scores calculated
* Limitations stated
* Data gaps disclosed

4. **Structural Integrity Gate**
* No hallucinations detected
* Logic chain valid
* References verifiable
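
Taken together, the four gates form a short-circuiting chain: content is eligible for Mode 2 only if every gate passes. A minimal sketch, with each gate left as a trivial placeholder:

{{code language="python"}}
# Illustrative quality-gate chain. Each check is a trivial placeholder; the
# detailed gate criteria are defined in the AKEL specification.
def source_quality_gate(item) -> bool:
    return True   # placeholder: sources verified, citations complete, reliability scored

def contradiction_search_gate(item) -> bool:
    return True   # placeholder: counter-evidence sought, bubbles detected (MANDATORY)

def uncertainty_gate(item) -> bool:
    return True   # placeholder: confidence scores, limitations, data gaps

def structural_integrity_gate(item) -> bool:
    return True   # placeholder: no hallucinations, valid logic chain, verifiable references

QUALITY_GATES = [source_quality_gate, contradiction_search_gate,
                 uncertainty_gate, structural_integrity_gate]

def passes_quality_gates(item) -> bool:
    # All gates must pass before Mode 2 (AI-generated) publication.
    return all(gate(item) for gate in QUALITY_GATES)
{{/code}}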

See [[AKEL page>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] for detailed quality gate specifications.


== 8. Audit System ==

Instead of reviewing all AI output, FactHarbor uses systematic sampling audits to ensure quality.

=== 8.1 Stratified Sampling ===
* Risk tier (A > B > C sampling rates)
* Confidence scores (low confidence → more audits)
* Traffic/engagement (popular content audited more)
* Novelty (new topics/claim types prioritized)
* User flags and disagreement signals
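
One way to realise this stratification is to derive a per-item audit probability from these signals, starting from the tier's base rate. The weights below are purely illustrative assumptions; only the base rates echo the recommended tier ranges.

{{code language="python"}}
# Illustrative per-item audit probability for stratified sampling.
# Base rates are the midpoints of the recommended tier ranges; all other
# weights are assumptions for illustration.
BASE_AUDIT_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}

def audit_probability(tier: str, confidence: float, engagement: float,
                      is_novel: bool, user_flags: int) -> float:
    p = BASE_AUDIT_RATE[tier]
    p += (1.0 - confidence) * 0.20      # low confidence -> more audits
    p += engagement * 0.10              # popular content audited more
    p += 0.10 if is_novel else 0.0      # new topics / claim types prioritised
    p += min(user_flags, 5) * 0.05      # flags and disagreement signals
    return min(p, 1.0)
{{/code}}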

=== 8.2 Continuous Improvement Loop ===
Audit findings improve:
* Query templates
* Source reliability weights
* Contradiction detection algorithms
* Risk tier assignment rules
* Bubble detection heuristics

=== 8.3 Transparency ===
* Audit statistics published
* Accuracy rates by tier reported
* System improvements documented


== 9. Automation Roadmap ==

Automation capabilities increase as the system matures, while quality oversight is maintained throughout.

=== 9.1 POC (Current Focus) ===

**Automated:**
* Claim normalization
* Scenario template generation
* Evidence metadata extraction
* Simple verdict drafts
* **AI-generated publication** (Mode 2, with quality gates)
* **Contradiction search**
* **Risk tier assignment**

**Human:**
* High-risk content validation (Tier A)
* Sampling audits across all tiers
* Quality standard refinement
* Governance decisions

=== 9.2 Beta 0 (Enhanced Automation) ===

**Automated:**
* Detailed scenario generation
* Advanced evidence reliability scoring
* Cross-scenario comparisons
* Multi-source contradiction detection
* Internal Truth Landscape generation
* **Increased AI-draft coverage** (more Tier B content)

**Human:**
* Tier A final approval
* Audit sampling (continued)
* Expert validation of complex domains
* Quality improvement oversight

=== 9.3 Release 1.0 (High Automation) ===

**Automated:**
* Full scenario generation (comprehensive)
* Bayesian verdict scoring across scenarios
* Multi-scenario summary generation
* Anomaly detection across federated nodes
* AKEL-assisted cross-node synchronization
* **Most Tier B and all Tier C** auto-published

**Human:**
* Tier A oversight (still required)
* Strategic audits (lower sampling rates, higher value)
* Ethical decisions and policy
* Conflict resolution


== 10. Automation Levels Diagram ==

{{include reference="FactHarbor.Specification.Diagrams.Automation Level.WebHome"/}}


== 11. Automation Roadmap Diagram ==

{{include reference="FactHarbor.Specification.Diagrams.Automation Roadmap.WebHome"/}}


== 12. Manual vs Automated Matrix ==

{{include reference="FactHarbor.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}


== 13. Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
* [[Governance>>FactHarbor.Organisation.Governance]]