Changes for page Automation

Last modified by Robert Schaub on 2025/12/22 13:50

From version 1.3
edited by Robert Schaub
on 2025/12/22 13:49
Change comment: Renamed back-links.
To version 1.2
edited by Robert Schaub
on 2025/12/22 13:49
Change comment: Update document after refactoring.

Summary

Details

Page properties
Content
... ... @@ -1,31 +1,21 @@
1 1  = Automation =
2 -
3 3  **How FactHarbor scales through automated claim evaluation.**
4 -
5 5  == 1. Automation Philosophy ==
6 -
7 7  FactHarbor is **automation-first**: AKEL (AI Knowledge Extraction Layer) makes all content decisions. Humans monitor system performance and improve algorithms.
8 8  **Why automation:**
9 -
10 10  * **Scale**: Can process millions of claims
11 11  * **Consistency**: Same evaluation criteria applied uniformly
12 12  * **Transparency**: Algorithms are auditable
13 13  * **Speed**: Results in <20 seconds typically
14 14  See [[Automation Philosophy>>Test.FactHarbor.Organisation.Automation-Philosophy]] for detailed principles.
15 -
16 16  == 2. Claim Processing Flow ==
17 -
18 18  === 2.1 User Submits Claim ===
19 -
20 20  * User provides claim text + source URLs
21 21  * System validates format
22 22  * Assigns processing ID
23 23  * Queues for AKEL processing
24 -
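The submission step above can be read as a small ingest routine: validate the format, assign a processing ID, and enqueue the claim for AKEL. Below is a minimal sketch of that idea; the names (##ClaimSubmission##, ##validate_format##, ##submit_claim##) and the in-memory queue are illustrative assumptions, not FactHarbor's actual interfaces.

{{code language="python"}}
import uuid
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class ClaimSubmission:
    """Illustrative record for a user-submitted claim (field names are hypothetical)."""
    text: str
    source_urls: list[str]
    processing_id: str = field(default_factory=lambda: str(uuid.uuid4()))


def validate_format(submission: ClaimSubmission) -> bool:
    """Basic format check: non-empty claim text and at least one source URL."""
    return bool(submission.text.strip()) and len(submission.source_urls) > 0


akel_queue: Queue = Queue()


def submit_claim(text: str, source_urls: list[str]) -> str:
    """Validate the submission, assign a processing ID, and queue it for AKEL."""
    submission = ClaimSubmission(text=text, source_urls=source_urls)
    if not validate_format(submission):
        raise ValueError("invalid submission: empty claim text or no source URLs")
    akel_queue.put(submission)
    return submission.processing_id
{{/code}}

In this sketch, ##submit_claim## returns the processing ID so the submitter can later look up the result.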
25 25  === 2.2 AKEL Processing ===
26 -
27 27  **AKEL automatically:**
28 -
29 29  1. Parses claim into testable components
30 30  2. Extracts evidence from sources
31 31  3. Scores source credibility
... ... @@ -35,12 +35,9 @@
35 35  7. Publishes result
36 36  **Processing time**: Typically <20 seconds
37 37  **No human approval required** - publication is automatic
38 -
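A rough sketch of the worker side of this flow, reusing the in-memory queue from the earlier sketch and treating the AKEL steps as an opaque ##evaluate## callable; ##publish## is likewise a hypothetical callable. The only point shown is that publication happens automatically after evaluation, with the roughly 20-second figure used purely for monitoring.

{{code language="python"}}
import time
from queue import Empty, Queue


def akel_worker(queue: Queue, evaluate, publish) -> None:
    """Illustrative worker loop: each dequeued claim is evaluated and then
    published automatically, with no human approval step in between."""
    while True:
        try:
            submission = queue.get(timeout=1.0)
        except Empty:
            continue
        started = time.monotonic()
        result = evaluate(submission)   # AKEL evaluation (typically < 20 s)
        publish(result)                 # publication is automatic
        elapsed = time.monotonic() - started
        if elapsed > 20.0:
            # Slower-than-typical claims are only logged; nothing blocks publication.
            print(f"claim {submission.processing_id} took {elapsed:.1f}s")
{{/code}}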
39 39  === 2.3 Publication States ===
40 -
41 41  **Processing**: AKEL working on claim (not visible to public)
42 42  **Published**: AKEL completed evaluation (public)
43 -
44 44  * Verdict displayed with confidence score
45 45  * Evidence and sources shown
46 46  * Risk tier indicated
... ... @@ -58,19 +58,16 @@
58 58  === POC: Two-Phase Approach ===
59 59  
60 60  **Phase 1: Claim Extraction**
61 -
62 62  * Single LLM call to extract all claims from submitted content
63 63  * Light structure, focused on identifying distinct verifiable claims
64 64  * Output: List of claims with context
65 65  
66 66  **Phase 2: Claim Analysis (Parallel)**
67 -
68 68  * Single LLM call per claim (parallelizable)
69 69  * Full structured output: Evidence, Scenarios, Sources, Verdict, Risk
70 70  * Each claim analyzed independently
71 71  
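A minimal sketch of this two-phase POC flow: one extraction call, then one analysis call per claim run in parallel, for 1 + N LLM calls in total. The ##call_llm## function, the prompts, and the JSON shapes are placeholders and assumptions, not a specification of the real AKEL prompts.

{{code language="python"}}
import json
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    """Placeholder for a single LLM API call; the real client is not specified here."""
    raise NotImplementedError


def extract_claims(content: str) -> list[dict]:
    """Phase 1: one LLM call that returns a list of distinct verifiable claims."""
    response = call_llm(f"Extract all distinct verifiable claims as JSON:\n{content}")
    return json.loads(response)


def analyze_claim(claim: dict) -> dict:
    """Phase 2: one structured-output call per claim (evidence, scenarios, sources, verdict, risk)."""
    response = call_llm(
        "Analyze this claim and return evidence, scenarios, sources, verdict and risk as JSON:\n"
        + json.dumps(claim)
    )
    return json.loads(response)


def run_poc_pipeline(content: str, max_workers: int = 8) -> list[dict]:
    """1 + N LLM calls in total: one extraction call, then N parallel analysis calls."""
    claims = extract_claims(content)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(analyze_claim, claims))
{{/code}}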
72 72  **Advantages:**
73 -
74 74  * Fast to implement (2-4 weeks to working POC)
75 75  * Only 2-3 API calls total (1 + N claims)
76 76  * Simple to debug (claim-level isolation)
... ... @@ -79,30 +79,26 @@
79 79  === Production: Three-Phase Approach ===
80 80  
81 81  **Phase 1: Claim Extraction + Validation**
82 -
83 83  * Extract distinct verifiable claims
84 84  * Validate claim clarity and uniqueness
85 85  * Remove duplicates and vague claims
86 86  
87 87  **Phase 2: Evidence Gathering (Parallel)**
88 -
89 89  * For each claim independently:
90 -* Find supporting and contradicting evidence
91 -* Identify authoritative sources
92 -* Generate test scenarios
72 + * Find supporting and contradicting evidence
73 + * Identify authoritative sources
74 + * Generate test scenarios
93 93  * Validation: Check evidence quality and source validity
94 94  * Error containment: Issues in one claim don't affect others
95 95  
96 96  **Phase 3: Verdict Generation (Parallel)**
97 -
98 98  * For each claim:
99 -* Generate verdict based on validated evidence
100 -* Assess confidence and risk level
101 -* Flag low-confidence results for human review
80 + * Generate verdict based on validated evidence
81 + * Assess confidence and risk level
82 + * Flag low-confidence results for human review
102 102  * Validation: Check verdict consistency with evidence
103 103  
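A sketch of how the three phases could be wired together with validation gates and per-claim error containment. All callables passed in (##gather_evidence##, ##generate_verdict##, ##evidence_is_valid##, ##verdict_is_consistent##) are hypothetical interfaces; only the control flow (a gate after each phase, independent processing per claim) reflects the description above.

{{code language="python"}}
from concurrent.futures import ThreadPoolExecutor


def process_claim(claim: dict, gather_evidence, generate_verdict,
                  evidence_is_valid, verdict_is_consistent) -> dict:
    """Phases 2 and 3 for a single claim, with a validation gate after each phase.
    All callables are supplied by the caller (hypothetical interfaces)."""
    evidence = gather_evidence(claim)                 # Phase 2: evidence gathering
    if not evidence_is_valid(evidence):               # gate: evidence quality / source validity
        return {"claim": claim, "status": "needs_review", "reason": "evidence quality"}
    verdict = generate_verdict(claim, evidence)       # Phase 3: verdict generation
    if not verdict_is_consistent(verdict, evidence):  # gate: verdict consistency
        return {"claim": claim, "status": "needs_review", "reason": "verdict consistency"}
    return {"claim": claim, "status": "ok", "verdict": verdict}


def process_all(claims: list[dict], **callables) -> list[dict]:
    """Claims are processed independently, so a failure in one claim is contained
    and does not affect the others. `callables` must supply the four functions
    expected by process_claim."""
    results = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(process_claim, c, **callables) for c in claims]
        for claim, future in zip(claims, futures):
            try:
                results.append(future.result())
            except Exception as exc:  # contain per-claim errors
                results.append({"claim": claim, "status": "error", "reason": str(exc)})
    return results
{{/code}}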
104 104  **Advantages:**
105 -
106 106  * Error containment between phases
107 107  * Clear quality gates and validation
108 108  * Observable metrics per phase
... ... @@ -112,7 +112,6 @@
112 112  === LLM Task Delegation ===
113 113  
114 114  All complex cognitive tasks are delegated to LLMs:
115 -
116 116  * **Claim Extraction**: Understanding context, identifying distinct claims
117 117  * **Evidence Finding**: Analyzing sources, assessing relevance
118 118  * **Scenario Generation**: Creating testable hypotheses
... ... @@ -123,7 +123,6 @@
123 123  === Error Mitigation ===
124 124  
125 125  Research shows that sequential LLM calls face compounding error risks. FactHarbor mitigates this through:
126 -
127 127  * **Validation gates** between phases
128 128  * **Confidence thresholds** for quality control
129 129  * **Parallel processing** to avoid error propagation across claims
... ... @@ -130,48 +130,36 @@
130 130  * **Human review queue** for low-confidence verdicts
131 131  * **Independent claim processing** - errors in one claim don't cascade to others
132 132  
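One of the mitigations above, the human review queue for low-confidence verdicts, can be sketched as a simple routing rule. The 0.7 threshold and the ##Verdict## fields are illustrative assumptions; the text only states that low-confidence results are flagged for review while still being published.

{{code language="python"}}
from dataclasses import dataclass

# Illustrative assumption; FactHarbor does not document a specific threshold here.
CONFIDENCE_THRESHOLD = 0.7


@dataclass
class Verdict:
    claim_id: str
    label: str
    confidence: float


def route_verdict(verdict: Verdict, review_queue: list) -> Verdict:
    """Low-confidence verdicts are still published but also queued for human review,
    so weak results do not silently propagate."""
    if verdict.confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(verdict)
    return verdict
{{/code}}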
133 -== 3. Risk Tiers ==
134 134  
112 +== 3. Risk Tiers ==
135 135  Risk tiers classify claims by potential impact and guide audit sampling rates.
136 -
137 137  === 3.1 Tier A (High Risk) ===
138 -
139 139  **Domains**: Medical, legal, elections, safety, security
140 140  **Characteristics**:
141 -
142 142  * High potential for harm if incorrect
143 143  * Complex specialized knowledge required
144 144  * Often subject to regulation
145 145  **Publication**: AKEL publishes automatically with prominent risk warning
146 146  **Audit rate**: Higher sampling recommended
147 -
148 148  === 3.2 Tier B (Medium Risk) ===
149 -
150 150  **Domains**: Complex policy, science, causality claims
151 151  **Characteristics**:
152 -
153 153  * Moderate potential impact
154 154  * Requires careful evidence evaluation
155 155  * Multiple valid interpretations possible
156 156  **Publication**: AKEL publishes automatically with standard risk label
157 157  **Audit rate**: Moderate sampling recommended
158 -
159 159  === 3.3 Tier C (Low Risk) ===
160 -
161 161  **Domains**: Definitions, established facts, historical data
162 162  **Characteristics**:
163 -
164 164  * Low potential for harm
165 165  * Well-documented information
166 166  * Typically clear right/wrong answers
167 167  **Publication**: AKEL publishes by default
168 168  **Audit rate**: Lower sampling recommended
169 -
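The three tiers above can be summarised as a domain-to-tier mapping plus tier-dependent audit sampling. The mapping follows the domains listed above; the numeric sampling rates and the fallback tier for unlisted domains are illustrative assumptions, since the text only says higher, moderate, and lower sampling.

{{code language="python"}}
import random

# Domain-to-tier mapping taken from the tier descriptions above.
TIER_BY_DOMAIN = {
    "medical": "A", "legal": "A", "elections": "A", "safety": "A", "security": "A",
    "policy": "B", "science": "B", "causality": "B",
    "definitions": "C", "established_facts": "C", "historical_data": "C",
}

# Relative audit sampling rates; the exact numbers are illustrative assumptions.
AUDIT_SAMPLE_RATE = {"A": 0.20, "B": 0.10, "C": 0.02}


def risk_tier(domain: str) -> str:
    """Fall back to the medium tier for unlisted domains (an assumption, not stated policy)."""
    return TIER_BY_DOMAIN.get(domain, "B")


def should_audit(domain: str) -> bool:
    """Sample claims for audit at the rate associated with their risk tier."""
    return random.random() < AUDIT_SAMPLE_RATE[risk_tier(domain)]
{{/code}}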
170 170  == 4. Quality Gates ==
171 -
172 172  AKEL applies quality gates before publication. If any gate fails, the claim is **flagged** (not blocked; it is still published).
173 173  **Quality gates**:
174 -
175 175  * Sufficient evidence extracted (≥2 sources)
176 176  * Sources meet minimum credibility threshold
177 177  * Confidence score calculable
... ... @@ -178,10 +178,8 @@
178 178  * No detected manipulation patterns
179 179  * Claim parseable into testable form
180 180  **Failed gates**: Claim published with flag for moderator review
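A sketch of the quality gates above as boolean checks that add flags rather than blocking publication. The two-source minimum comes from the list above; the numeric credibility threshold and the field names are assumptions.

{{code language="python"}}
from dataclasses import dataclass, field
from typing import Optional

MIN_SOURCES = 2        # from the gate above: at least two sources of evidence
MIN_CREDIBILITY = 0.5  # illustrative assumption; the actual threshold is not stated


@dataclass
class ClaimEvaluation:
    """Illustrative evaluation record; field names are assumptions."""
    sources: list[dict]
    confidence: Optional[float]
    manipulation_detected: bool
    parseable: bool
    flags: list[str] = field(default_factory=list)


def apply_quality_gates(evaluation: ClaimEvaluation) -> ClaimEvaluation:
    """Failing gates add flags for moderator review; publication is never blocked."""
    if len(evaluation.sources) < MIN_SOURCES:
        evaluation.flags.append("insufficient_evidence")
    if any(s.get("credibility", 0.0) < MIN_CREDIBILITY for s in evaluation.sources):
        evaluation.flags.append("source_below_credibility_threshold")
    if evaluation.confidence is None:
        evaluation.flags.append("confidence_not_calculable")
    if evaluation.manipulation_detected:
        evaluation.flags.append("manipulation_pattern_detected")
    if not evaluation.parseable:
        evaluation.flags.append("not_parseable_into_testable_form")
    return evaluation
{{/code}}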
181 -
182 182  == 5. Automation Levels ==
183 -
184 -{{include reference="Test.FactHarbor pre10 V0\.9\.70.Specification.Diagrams.Automation Level.WebHome"/}}
148 +{{include reference="Test.FactHarbor.Specification.Diagrams.Automation Level.WebHome"/}}
185 185  FactHarbor progresses through automation maturity levels:
186 186  **Release 0.5** (Proof-of-Concept): Tier C only, human review required
187 187  **Release 1.0** (Initial): Tier B/C auto-published, Tier A flagged for review
... ... @@ -193,7 +193,6 @@
193 193  {{include reference="Test.FactHarbor.Specification.Diagrams.Automation Roadmap.WebHome"/}}
194 194  
195 195  == 6. Human Role ==
196 -
197 197  Humans do NOT review content for approval. Instead:
198 198  **Monitoring**: Watch aggregate performance metrics
199 199  **Improvement**: Fix algorithms when patterns show issues
... ... @@ -206,7 +206,6 @@
206 206  {{include reference="Test.FactHarbor.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
207 207  
208 208  == 7. Moderation ==
209 -
210 210  Moderators handle items AKEL flags:
211 211  **Abuse detection**: Spam, manipulation, harassment
212 212  **Safety issues**: Content that could cause immediate harm
... ... @@ -214,9 +214,7 @@
214 214  **Action**: May temporarily hide content, ban users, or propose algorithm improvements
215 215  **Does NOT**: Routinely review claims or override verdicts
216 216  See [[Organisational Model>>Test.FactHarbor.Organisation.Organisational-Model]] for moderator role details.
217 -
218 218  == 8. Continuous Improvement ==
219 -
220 220  **Performance monitoring**: Track AKEL accuracy, speed, coverage
221 221  **Issue identification**: Find systematic errors from metrics
222 222  **Algorithm updates**: Deploy improvements to fix patterns
... ... @@ -223,21 +223,15 @@
223 223  **A/B testing**: Validate changes before full rollout
224 224  **Retrospectives**: Learn from failures systematically
225 225  See [[Continuous Improvement>>Test.FactHarbor.Organisation.How-We-Work-Together.Continuous-Improvement]] for improvement cycle.
226 -
227 227  == 9. Scalability ==
228 -
229 229  Automation enables FactHarbor to scale:
230 -
231 231  * **Millions of claims** processable
232 232  * **Consistent quality** at any volume
233 233  * **Cost efficiency** through automation
234 234  * **Rapid iteration** on algorithms
235 235  Without automation, human review does not scale: it creates bottlenecks and introduces inconsistency.
236 -
237 237  == 10. Transparency ==
238 -
239 239  All automation is transparent:
240 -
241 241  * **Algorithm parameters** documented
242 242  * **Evaluation criteria** public
243 243  * **Source scoring rules** explicit