Last modified by Robert Schaub on 2026/02/08 08:29

= Automation Philosophy =

**Core Principle**: AKEL is primary. Humans monitor, improve, and handle exceptions.

== 1. The Principle ==

**FactHarbor is AI-first, not AI-assisted.**

This is not:

* ❌ "AI helps humans make better decisions"
* ❌ "Humans review AI recommendations"
* ❌ "AI drafts, humans approve"

This is:

* ✅ "AI makes decisions, humans improve the AI"
* ✅ "Humans monitor metrics, not individual outputs"
* ✅ "Fix the system, not the data"
== 2. Why This Matters ==

=== 2.1 Scalability ===

**Human review doesn't scale**:

* 1 person can carefully review about 100 claims/day
* FactHarbor aims for millions of claims
* That would need 10,000+ reviewers
* Consistency would be impossible to maintain

**Algorithmic processing scales**:

* AKEL processes 1 claim or 1 million claims with the same consistency
* Cost per claim approaches zero at scale
* Quality improves with more data
* 24/7 availability
=== 2.2 Consistency ===

**Human judgment varies**:

* Different reviewers apply criteria differently
* The same reviewer makes different decisions on different days
* Influenced by fatigue, mood, and recent examples
* Unconscious biases affect decisions

**Algorithmic processing is consistent**:

* Same input → same output, always
* Rules applied uniformly
* No mood, fatigue, or bias
* Predictable behavior
=== 2.3 Transparency ===

**Human judgment is opaque**:

* "I just know" - hard to explain
* Expertise lives in a human head
* Can't audit the thought process
* Difficult to improve systematically

**Algorithmic processing is transparent**:

* Code can be audited
* Parameters are documented
* Decision logic is explicit
* Changes are tracked
* Can test "what if" scenarios
=== 2.4 Improvement ===

**Improving human judgment**:

* Train each person individually
* Hope training transfers consistently
* Subjective quality assessment
* Slow iteration

**Improving algorithms**:

* Change code once, affecting all decisions
* Test on historical data before deploying
* Measure improvement objectively
* Rapid iteration (deploy multiple times per week)
== 3. The Human Role ==

Humans in FactHarbor are **system architects**, not **content judges**.

=== 3.1 What Humans Do ===

**Monitor** system performance:

* Watch dashboards showing aggregate metrics
* Identify when metrics fall outside acceptable ranges
* Spot patterns in errors or edge cases
* Track user feedback trends

**Improve** algorithms and policies:

* Analyze systematic errors
* Propose algorithm improvements
* Update policies based on learning
* Test changes before deployment
* Document learnings

**Handle** exceptions:

* Items AKEL explicitly flags for review
* System gaming attempts
* Abuse and harassment
* Legal/safety emergencies

**Govern** the system:

* Set risk tier policies
* Define acceptable performance ranges
* Allocate resources
* Make strategic decisions

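The "monitor, then intervene on range violations" loop above can be sketched in a few lines. All metric names and acceptable ranges below are invented examples; the document does not specify FactHarbor's actual metrics.

```python
# Illustrative sketch: humans define acceptable ranges per aggregate metric;
# anything outside its range is a trigger for investigation, not for
# overriding individual outputs. Names and values are assumptions.

ACCEPTABLE_RANGES = {
    "median_processing_seconds": (0.0, 30.0),
    "error_rate": (0.0, 0.02),
    "mean_confidence": (0.6, 1.0),
}

def out_of_range(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics whose current value falls outside
    the human-defined acceptable range."""
    alerts = []
    for name, value in metrics.items():
        low, high = ACCEPTABLE_RANGES[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts
```

The point of the sketch is the division of labour: humans set the ranges (a policy decision), while detection runs automatically and uniformly.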
=== 3.2 What Humans Do NOT Do ===

**Review** individual claims for correctness:

* ❌ "Let me check if this verdict is right"
* ❌ "I'll approve these before publication"
* ❌ "This needs human judgment"

**Override** AKEL decisions routinely:

* ❌ "AKEL got this wrong, I'll fix it"
* ❌ "I disagree with this verdict"
* ❌ "This source should be rated higher"

**Act as** approval gates:

* ❌ "All claims must be human-approved"
* ❌ "High-risk claims need review"
* ❌ "Quality assurance before publication"

**Why not?** Because this defeats the purpose and doesn't scale.
== 4. When Humans Intervene ==

=== 4.1 Legitimate Interventions ===

**Humans should intervene when**:

==== AKEL explicitly flags for review ====

* AKEL's confidence is too low
* Potential manipulation was detected
* An unusual pattern requires human judgment
* A clear policy applies: "Flag if confidence <X"

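The flag conditions above can be expressed as a single pure predicate. This is a minimal sketch: the threshold value, the field names, and the `Assessment` type are assumptions for illustration, not AKEL's actual interface.

```python
from dataclasses import dataclass

# Hypothetical value for the "Flag if confidence <X" policy.
CONFIDENCE_THRESHOLD = 0.75

@dataclass
class Assessment:
    claim_id: str
    confidence: float            # AKEL's confidence in its verdict, 0.0-1.0
    manipulation_suspected: bool
    novel_pattern: bool          # situation unlike anything seen before

def needs_human_review(a: Assessment) -> bool:
    """Route to a human exactly when one of the listed flag
    conditions holds; otherwise the decision stays automated."""
    return (
        a.confidence < CONFIDENCE_THRESHOLD
        or a.manipulation_suspected
        or a.novel_pattern
    )
```

Because the rule is explicit code rather than ad-hoc judgment, the flagging policy itself can be audited, tested, and tuned.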
==== System metrics show problems ====

* Processing time suddenly increases
* Error rate jumps
* Confidence distribution shifts
* User feedback becomes negative

==== Systematic bias detected ====

* Metrics show a pattern of unfairness
* Particular domains are consistently scored oddly
* Source types are systematically mis-rated

==== Legal/safety emergency ====

* Legal takedown required
* Imminent harm to individuals
* Security breach
* Compliance violation
=== 4.2 Illegitimate Interventions ===

**Humans should NOT intervene for**:

==== "I disagree with this verdict" ====

* Problem: Your opinion vs AKEL's analysis
* Solution: If AKEL is systematically wrong, fix the algorithm
* Action: Gather data, propose an algorithm improvement

==== "This source should rank higher" ====

* Problem: Subjective preference
* Solution: Fix the scoring rules systematically
* Action: Analyze why AKEL scored it lower; adjust the scoring algorithm if justified

==== "Manual quality gate" ====

* Problem: Creates a bottleneck and defeats automation
* Solution: Improve AKEL's quality so no human gate is needed
* Action: Set quality thresholds in the algorithm, not in human review

==== "I know better than the algorithm" ====

* Problem: Doesn't scale and introduces bias
* Solution: Teach the algorithm what you know
* Action: Update training data, adjust parameters, document expertise in policy
== 5. Fix the System, Not the Data ==

**Fundamental principle**: When AKEL makes mistakes, improve AKEL; don't fix individual outputs.

=== 5.1 Why? ===

**Fixing individual outputs**:

* Doesn't prevent future similar errors
* Doesn't scale (too many outputs)
* Creates inconsistency
* Hides systematic problems

**Fixing the system**:

* Prevents future similar errors
* Scales automatically
* Maintains consistency
* Surfaces and resolves root causes
=== 5.2 Process ===

**When you see a "wrong" AKEL decision**:

==== Document it ====

* What was the claim?
* What did AKEL decide?
* What should it have decided?
* Why do you think it's wrong?

==== Investigate ====

* Is this a one-off, or a pattern?
* Check similar claims - same issue?
* What caused AKEL to decide this way?
* What rule/parameter needs changing?

==== Propose systematic fix ====

* Algorithm change?
* Policy clarification?
* Training data update?
* Parameter adjustment?

==== Test the fix ====

* Run on historical data
* Does it fix this case?
* Does it break other cases?
* What's the overall impact?

==== Deploy and monitor ====

* Gradual rollout
* Watch metrics closely
* Gather feedback
* Iterate if needed

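The "test the fix" step above amounts to a backtest: rerun both the current and the candidate logic over historical cases, confirm the reported mistake is fixed, and check for regressions. Everything below — the case format, the verdict labels, and both `decide` functions — is invented for illustration; AKEL's real interfaces are not described in this document.

```python
# Hypothetical backtest sketch for a proposed systematic fix.

historical_cases = [
    # (claim, verdict AKEL gave, verdict it should have given)
    ("claim A", "refuted", "refuted"),
    ("claim B", "supported", "refuted"),   # the reported mistake
    ("claim C", "supported", "supported"),
]

def current_decide(claim: str) -> str:
    return {"claim A": "refuted", "claim B": "supported", "claim C": "supported"}[claim]

def candidate_decide(claim: str) -> str:
    # Proposed fix: changes the decision for claim B only.
    return {"claim A": "refuted", "claim B": "refuted", "claim C": "supported"}[claim]

def evaluate(decide) -> tuple[int, int]:
    """Return (correct, total) over the historical cases."""
    correct = sum(decide(claim) == expected for claim, _, expected in historical_cases)
    return correct, len(historical_cases)

before = evaluate(current_decide)
after = evaluate(candidate_decide)
# Cases that were right before the fix and wrong after it: deploy only if empty.
regressions = [
    claim for claim, _, expected in historical_cases
    if current_decide(claim) == expected and candidate_decide(claim) != expected
]
```

The same three numbers — accuracy before, accuracy after, and the regression list — answer "does it fix this case, does it break other cases, what's the overall impact" objectively.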
== 6. Balancing Automation and Human Values ==

=== 6.1 Algorithms Embody Values ===

**Important**: Automation doesn't mean "value-free".

**Algorithms encode human values**:

* Which evidence types matter most?
* How much weight to give peer review?
* What constitutes "high risk"?
* When to flag for human review?

**These are human choices**, implemented in code.

=== 6.2 Human Governance of Automation ===

**Humans set**:

* ✅ Risk tier policies (what's high-risk?)
* ✅ Evidence weighting (what types of evidence matter?)
* ✅ Source scoring criteria (what makes a source credible?)
* ✅ Moderation policies (what's abuse?)
* ✅ Bias mitigation strategies

**AKEL applies**:

* ✅ These policies consistently
* ✅ At scale
* ✅ Transparently
* ✅ Without subjective variation

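One common way to realize this split is to keep the human-set policy as versioned data, separate from the engine that applies it. The weights, topic names, and field names below are invented examples only; FactHarbor's actual policy values are not stated in this document.

```python
# Sketch of "humans set policy, AKEL applies it": policy is plain,
# documented data; the scoring function applies it uniformly.

POLICY = {
    "evidence_weights": {            # set and reviewed by humans
        "peer_reviewed_study": 1.0,
        "official_statistics": 0.9,
        "news_report": 0.5,
        "social_media_post": 0.1,
    },
    "high_risk_topics": {"health", "elections"},
    "review_confidence_floor": 0.75,
}

def score_evidence(items: list[tuple[str, float]], policy: dict = POLICY) -> float:
    """Weighted average of per-item quality (0.0-1.0) by evidence type.
    Same inputs always produce the same score - no subjective variation."""
    weights = policy["evidence_weights"]
    total_weight = sum(weights[etype] for etype, _ in items)
    if total_weight == 0:
        return 0.0
    return sum(weights[etype] * quality for etype, quality in items) / total_weight
```

Because the value judgments live in `POLICY` rather than in the engine, changing "how much weight to give peer review" is a reviewable policy edit, not a code rewrite.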
=== 6.3 Continuous Value Alignment ===

**Ongoing process**:

* Monitor: Are outcomes aligned with values?
* Analyze: Where do values and outcomes diverge?
* Adjust: Update policies or algorithms
* Test: Validate that alignment improved
* Repeat: Value alignment is never "done"
== 7. Cultural Implications ==

=== 7.1 Mindset Shift Required ===

**From**: "I'm a content expert who reviews claims"
**To**: "I'm a system architect who improves algorithms"

**From**: "Good work means catching errors"
**To**: "Good work means preventing errors systematically"

**From**: "I trust my judgment"
**To**: "I make my judgment codifiable and testable"

=== 7.2 New Skills Needed ===

**Less emphasis on**:

* Individual content judgment
* Manual review skills
* Subjective expertise application

**More emphasis on**:

* Data analysis and metrics interpretation
* Algorithm design and optimization
* Policy formulation
* Testing and validation
* Documentation and knowledge transfer
=== 7.3 Job Satisfaction Sources ===

**Satisfaction comes from**:

* ✅ Seeing metrics improve after your changes
* ✅ Building systems that help millions
* ✅ Solving systematic problems elegantly
* ✅ Continuous learning and improvement
* ✅ Transparent, auditable impact

**Not from**:

* ❌ Being the expert who makes the final call
* ❌ Manual review and approval
* ❌ Gatekeeping
* ❌ Individual heroics
== 8. Trust and Automation ==

=== 8.1 Building Trust in AKEL ===

**Users trust AKEL when it is**:

* Transparent: how decisions are made is documented
* Consistent: same inputs → same outputs
* Measurable: performance metrics are public
* Improvable: there is a clear process for getting better
* Governed: human oversight applies to policies, not outputs

=== 8.2 What Trust Does NOT Mean ===

**Trust in automation ≠**:

* ❌ "Never makes mistakes" (impossible)
* ❌ "Better than any human could ever be" (unnecessary)
* ❌ "Beyond human understanding" (it must be understandable)
* ❌ "Set it and forget it" (it requires continuous improvement)

**Trust in automation =**:

* ✅ Mistakes are systematic, not random
* ✅ Mistakes can be detected and fixed systematically
* ✅ Performance continuously improves
* ✅ The decision process is transparent and auditable
== 9. Edge Cases and Exceptions ==

=== 9.1 Some Things Still Need Humans ===

**AKEL flags for human review when**:

* Confidence is below the threshold
* A manipulation attempt is detected
* A novel situation arises that it has not seen before
* An explicit policy requires human judgment

**Humans handle**:

* Items AKEL flags
* Not routine review

=== 9.2 Learning from Exceptions ===

**When humans handle an exception**:

1. Resolve the immediate case
2. Document: What made this exceptional?
3. Analyze: Could AKEL have handled this?
4. Improve: Update AKEL to handle similar cases
5. Monitor: Did the exception rate decrease?

**Goal**: Fewer exceptions over time as AKEL learns.
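Step 5 of the loop above reduces to comparing the exception rate before and after an improvement. The weekly counts here are made-up numbers used only to show the computation.

```python
# Sketch of "Monitor: did the exception rate decrease?" using
# hypothetical weekly counts of flagged vs processed items.

weekly = [
    {"flagged": 120, "processed": 10_000},  # before the improvement
    {"flagged": 80,  "processed": 11_000},  # after the improvement
]

def exception_rate(week: dict) -> float:
    """Fraction of processed items AKEL routed to a human."""
    return week["flagged"] / week["processed"]

improved = exception_rate(weekly[-1]) < exception_rate(weekly[0])
```

A falling rate is the signal that AKEL absorbed the lesson; a flat or rising rate sends the team back to the "Analyze" step.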
----

**Remember**: AKEL is primary. You improve the SYSTEM. The system improves the CONTENT.

== 10. Related Pages ==

* [[Governance>>Archive.FactHarbor 2026\.02\.08.Organisation.Governance.WebHome]] - How AKEL is governed
* [[Contributor Processes>>FactHarbor.Organisation.Contributor-Processes]] - How to improve the system
* [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]] - Team structure and roles
* [[System Performance Metrics>>FactHarbor.Specification.System-Performance-Metrics]] - What we monitor