= Automation =
**How FactHarbor scales through automated claim evaluation.**
== 1. Automation Philosophy ==
FactHarbor is **automation-first**: AKEL (AI Knowledge Extraction Layer) makes all content decisions. Humans monitor system performance and improve the algorithms.
**Why automation:**
* **Scale**: Can process millions of claims
* **Consistency**: Same evaluation criteria applied uniformly
* **Transparency**: Algorithms are auditable
* **Speed**: Results typically delivered in <20 seconds
See [[Automation Philosophy>>FactHarbor.Organisation.Automation-Philosophy]] for detailed principles.
== 2. Claim Processing Flow ==
=== 2.1 User Submits Claim ===
* User provides claim text + source URLs
* System validates the format
* Assigns a processing ID
* Queues the claim for AKEL processing
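The sketch below illustrates this submission step in Python. It is a minimal, illustrative example: the names (SubmittedClaim, submit_claim) and the in-memory queue are assumptions, not FactHarbor's actual API.
{{code language="python"}}
import uuid
from dataclasses import dataclass, field

@dataclass
class SubmittedClaim:
    # Illustrative container for a user submission; not the real FactHarbor data model.
    text: str
    source_urls: list[str]
    processing_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def submit_claim(text: str, source_urls: list[str], queue: list) -> SubmittedClaim:
    # Validate the submission, assign a processing ID, and queue it for AKEL.
    if not text.strip():
        raise ValueError("Claim text must not be empty")
    if not source_urls:
        raise ValueError("At least one source URL is required")
    claim = SubmittedClaim(text=text, source_urls=source_urls)
    queue.append(claim)  # stand-in for the real AKEL processing queue
    return claim

# Example:
# akel_queue = []
# claim = submit_claim("Example claim.", ["https://example.org/source"], akel_queue)
{{/code}}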
=== 2.2 AKEL Processing ===
**AKEL automatically:**
1. Parses the claim into testable components
2. Extracts evidence from the sources
3. Scores source credibility
4. Evaluates the claim against the evidence
5. Generates a verdict with a confidence score
6. Assigns a risk tier (A/B/C)
7. Publishes the result
**Processing time**: Typically <20 seconds
**No human approval required**: publication is automatic.
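The seven steps can be read as one linear pipeline. The following Python sketch shows that shape only: the helper functions are trivial placeholders, and none of the names correspond to AKEL's real interfaces.
{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str          # e.g. "supported", "refuted", "unclear"
    confidence: float   # 0.0 to 1.0
    risk_tier: str      # "A", "B", or "C"

# The helpers below are trivial placeholders standing in for AKEL's real models.
def parse_components(text: str) -> list[str]:
    return [part.strip() for part in text.split(".") if part.strip()]

def extract_evidence(urls: list[str]) -> dict:
    return {url: [] for url in urls}   # the real system fetches and extracts passages

def score_credibility(url: str) -> float:
    return 0.5                         # the real system applies documented scoring rules

def evaluate(components, evidence, credibility) -> tuple:
    return "unclear", 0.5              # the real system weighs evidence per component

def assign_risk_tier(components: list[str]) -> str:
    return "C"                         # see section 3 for the tier criteria

def process_claim(text: str, source_urls: list[str]) -> Verdict:
    # Steps 1-7 from the list above, run as a single linear pipeline.
    components = parse_components(text)                              # 1. parse
    evidence = extract_evidence(source_urls)                         # 2. extract evidence
    credibility = {u: score_credibility(u) for u in source_urls}     # 3. score sources
    label, confidence = evaluate(components, evidence, credibility)  # 4-5. evaluate, verdict
    tier = assign_risk_tier(components)                              # 6. risk tier
    verdict = Verdict(label, confidence, tier)
    print("published:", verdict)                                     # 7. publish (automatic)
    return verdict
{{/code}}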
=== 2.3 Publication States ===
**Processing**: AKEL is working on the claim (not visible to the public)
**Published**: AKEL has completed the evaluation (public)
* Verdict displayed with confidence score
* Evidence and sources shown
* Risk tier indicated
* Users can report issues
**Flagged**: AKEL identified an issue requiring moderator attention (still public)
* Confidence score below threshold
* Detected manipulation attempt
* Unusual pattern
* A moderator reviews the claim and may take action
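A minimal way to model these states in code (illustrative only; the enum names and flag reasons are taken from the descriptions above, not from FactHarbor's implementation):
{{code language="python"}}
from enum import Enum

class PublicationState(Enum):
    # The three publication states described above; names are illustrative.
    PROCESSING = "processing"   # AKEL is working on the claim; not publicly visible
    PUBLISHED = "published"     # evaluation complete; verdict, evidence, and risk tier shown
    FLAGGED = "flagged"         # still public, but queued for moderator attention

class FlagReason(Enum):
    LOW_CONFIDENCE = "confidence below threshold"
    MANIPULATION_ATTEMPT = "detected manipulation attempt"
    UNUSUAL_PATTERN = "unusual pattern"
{{/code}}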
== 3. Risk Tiers ==
Risk tiers classify claims by potential impact and guide audit sampling rates.
=== 3.1 Tier A (High Risk) ===
**Domains**: Medical, legal, elections, safety, security
**Characteristics**:
* High potential for harm if incorrect
* Complex specialized knowledge required
* Often subject to regulation
**Publication**: AKEL publishes automatically with a prominent risk warning
**Audit rate**: Higher sampling recommended
=== 3.2 Tier B (Medium Risk) ===
**Domains**: Complex policy, science, causality claims
**Characteristics**:
* Moderate potential impact
* Requires careful evidence evaluation
* Multiple valid interpretations possible
**Publication**: AKEL publishes automatically with a standard risk label
**Audit rate**: Moderate sampling recommended
=== 3.3 Tier C (Low Risk) ===
**Domains**: Definitions, established facts, historical data
**Characteristics**:
* Low potential for harm
* Well-documented information
* Typically has clear right/wrong answers
**Publication**: AKEL publishes by default
**Audit rate**: Lower sampling recommended
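Tier assignment can be pictured as a simple domain lookup. The sketch below is an assumption about how such a lookup might work; the domain keywords mirror the lists above and are not an official or exhaustive FactHarbor taxonomy.
{{code language="python"}}
# Illustrative domain-to-tier lookup; the domain labels are assumptions.
TIER_A_DOMAINS = {"medical", "legal", "elections", "safety", "security"}
TIER_B_DOMAINS = {"policy", "science", "causality"}

def risk_tier_for_domain(domain: str) -> str:
    # Returns "A", "B", or "C"; anything not matched falls back to the low-risk tier.
    domain = domain.lower()
    if domain in TIER_A_DOMAINS:
        return "A"
    if domain in TIER_B_DOMAINS:
        return "B"
    return "C"

# risk_tier_for_domain("medical") -> "A"; risk_tier_for_domain("history") -> "C"
{{/code}}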
== 4. Quality Gates ==
AKEL applies quality gates before publication. If any gate fails, the claim is **flagged** (not blocked: it is still published).
**Quality gates**:
* Sufficient evidence extracted (≥2 sources)
* Sources meet the minimum credibility threshold
* Confidence score calculable
* No detected manipulation patterns
* Claim parseable into testable form
**Failed gates**: The claim is published with a flag for moderator review
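A hedged sketch of how these gates might be checked in code; apart from the documented ≥2-source requirement, the threshold values are invented placeholders.
{{code language="python"}}
from typing import List, Optional

def failed_quality_gates(evidence_count: int,
                         source_scores: List[float],
                         confidence: Optional[float],
                         manipulation_detected: bool,
                         parseable: bool,
                         min_sources: int = 2,
                         min_credibility: float = 0.4) -> List[str]:
    # Returns the names of any failed gates; a non-empty result means the claim is
    # published with a flag, never blocked. The 0.4 credibility threshold is a
    # made-up placeholder, not a documented FactHarbor value.
    failures = []
    if evidence_count < min_sources:
        failures.append("insufficient evidence (<2 sources)")
    if source_scores and min(source_scores) < min_credibility:
        failures.append("source below minimum credibility threshold")
    if confidence is None:
        failures.append("confidence score not calculable")
    if manipulation_detected:
        failures.append("manipulation pattern detected")
    if not parseable:
        failures.append("claim not parseable into testable form")
    return failures
{{/code}}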
== 5. Automation Levels ==
{{include reference="FactHarbor.Specification.Diagrams.Automation Level.WebHome"/}}
FactHarbor progresses through automation maturity levels:
**Release 0.5** (Proof-of-Concept): Tier C only, human review required
**Release 1.0** (Initial): Tier B/C auto-published, Tier A flagged for review
**Release 2.0** (Mature): All tiers auto-published with risk labels, sampling audits
See [[Automation Roadmap>>FactHarbor.Specification.Diagrams.Automation Roadmap.WebHome]] for detailed progression.
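The maturity levels can be summarized as a release-to-policy table. The mapping below simply restates the three releases above in code form; the policy strings are shorthand, not configuration values used by the system.
{{code language="python"}}
# Illustrative summary of the automation maturity levels (release -> tier -> policy).
AUTOMATION_LEVELS = {
    "0.5": {"A": "not processed", "B": "not processed", "C": "human review required"},
    "1.0": {"A": "flagged for review", "B": "auto-published", "C": "auto-published"},
    "2.0": {"A": "auto-published + risk label",   # plus sampling audits across all tiers
            "B": "auto-published + risk label",
            "C": "auto-published + risk label"},
}
{{/code}}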
== 6. Human Role ==
Humans do NOT review content for approval. Instead:
**Monitoring**: Watch aggregate performance metrics
**Improvement**: Fix algorithms when performance patterns reveal issues
**Exception handling**: Review AKEL-flagged items
**Governance**: Set the policies AKEL applies
See [[Contributor Processes>>FactHarbor.Organisation.Contributor-Processes]] for how to improve the system.
== 7. Moderation ==
Moderators handle items AKEL flags:
**Abuse detection**: Spam, manipulation, harassment
**Safety issues**: Content that could cause immediate harm
**System gaming**: Attempts to manipulate scoring
**Action**: May temporarily hide content, ban users, or propose algorithm improvements
**Does NOT**: Routinely review claims or override verdicts
See [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]] for moderator role details.
== 8. Continuous Improvement ==
**Performance monitoring**: Track AKEL accuracy, speed, and coverage
**Issue identification**: Find systematic errors from metrics
**Algorithm updates**: Deploy improvements to fix error patterns
**A/B testing**: Validate changes before full rollout
**Retrospectives**: Learn from failures systematically
See [[Continuous Improvement>>FactHarbor.Organisation.How-We-Work-Together.Continuous-Improvement]] for the improvement cycle.
== 9. Scalability ==
Automation enables FactHarbor to scale:
* **Millions of claims** processable
* **Consistent quality** at any volume
* **Cost efficiency** through automation
* **Rapid iteration** on algorithms
Without automation, human review does not scale: it creates bottlenecks and introduces inconsistency.
== 10. Transparency ==
All automation is transparent:
* **Algorithm parameters** documented
* **Evaluation criteria** public
* **Source scoring rules** explicit
* **Confidence calculations** explained
* **Performance metrics** visible
See [[System Performance Metrics>>FactHarbor.Specification.System-Performance-Metrics]] for what we measure.
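As an illustration of what this transparency could look like in practice, the record below sketches a hypothetical set of published parameters; every value is a placeholder except the ≥2-source gate taken from section 4.
{{code language="python"}}
# Hypothetical shape of a published transparency record; all keys and values are
# illustrative, not an actual FactHarbor parameter set.
TRANSPARENCY_RECORD = {
    "algorithm_version": "example-1.2.3",
    "evaluation_criteria": "documented publicly",
    "source_scoring_rules": "explicit, versioned rules",
    "confidence_calculation": "explained alongside each verdict",
    "quality_gates": {"min_sources": 2},   # the >=2 sources gate from section 4
    "performance_metrics": ["accuracy", "speed", "coverage"],  # tracked per section 8
}
{{/code}}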