Wiki source code of Workflows

Version 1.9 by Robert Schaub on 2026/02/08 08:16

= Workflows =

FactHarbor workflows are **simple, automated, and focused on continuous improvement**.

== 1. Core Principles ==

* **Automated by default**: AI processes everything
* **Publish immediately**: No centralized approval (removed in V0.9.50)
* **Quality through monitoring**: Not gatekeeping
* **Fix systems, not data**: Errors trigger improvements
* **Human-in-the-loop**: Only for edge cases and abuse

== 2. Claim Submission Workflow ==

=== 2.1 Claim Extraction ===

When users submit content (text, articles, web pages), FactHarbor first extracts the individual verifiable claims:

**Input Types:**

* Single claim: "The Earth is flat"
* Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
* URLs: Web pages analyzed for factual claims

**Extraction Process:**

* An LLM analyzes the submitted content
* It identifies distinct, verifiable factual claims
* It separates claims from opinions, questions, and commentary
* Each claim becomes an independent unit for processing

**Output:**

* A list of claims with context
* Each claim is assigned a unique ID
* The original context is preserved for reference

This extraction ensures that:

* Each claim receives focused analysis
* Multiple claims in one submission are all processed
* Claims are properly isolated for independent verification
* Context is preserved for accurate interpretation
```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```

**Timeline**: Seconds
**No approval needed**
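
A minimal sketch of the submission pipeline above, assuming hash-based duplicate detection over normalized text (the normalization and hashing scheme is an assumption):

```python
import hashlib
from collections import deque

seen_hashes: set[str] = set()        # digests of previously submitted claims
processing_queue: deque = deque()    # claims awaiting automated analysis

def submit(claim_text: str) -> dict:
    """Duplicate detection → categorization → processing queue → ID back to user."""
    normalized = " ".join(claim_text.lower().split())   # case/whitespace-insensitive
    digest = hashlib.sha256(normalized.encode()).hexdigest()
    if digest in seen_hashes:
        return {"status": "duplicate", "id": digest[:12]}
    seen_hashes.add(digest)
    # Categorization (topic, language, ...) would run here before queueing.
    processing_queue.append(digest)
    return {"status": "queued", "id": digest[:12]}

first = submit("The Earth is flat")
second = submit("  the earth IS flat ")   # detected as a duplicate
```

Because the digest is deterministic, resubmissions map to the same ID without any human review.
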

== 2.5 Claim and Scenario Workflow ==

{{include reference="Archive.FactHarbor.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}

== 3. Automated Analysis Workflow ==

```
Claim from queue
↓
Evidence gathering (AKEL)
↓
Source evaluation (track record check)
↓
Scenario generation
↓
Verdict synthesis
↓
Risk assessment
↓
Quality gates (confidence > 40%? risk < 80%?)
↓
Publish OR Flag for improvement
```

**Timeline**: 10-30 seconds
**90%+ published automatically**
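
The quality gates can be sketched as a pure routing function. The thresholds come from the text; the handling of exact-boundary values is an assumption:

```python
def quality_gate(confidence: float, risk: float) -> str:
    """Route a finished analysis: publish, moderator review, or improvement queue."""
    if risk > 0.80:              # high risk → human-in-the-loop
        return "moderator_review"
    if confidence < 0.40:        # low quality → fix the system, then re-process
        return "improvement_queue"
    return "publish"             # standard path: publish immediately

routes = [quality_gate(0.9, 0.1), quality_gate(0.9, 0.95), quality_gate(0.3, 0.1)]
```

Keeping the gate side-effect-free makes it trivial to audit and to re-run during re-processing.
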

== 3.5 Evidence and Verdict Workflow ==

{{include reference="Archive.FactHarbor.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}

== 4. Publication Workflow ==

**Standard (90%+)**: Pass quality gates → Publish immediately with confidence scores
**High Risk (<10%)**: Risk > 80% → Moderator review
**Low Quality**: Confidence < 40% → Improvement queue → Re-process

== 5. User Contribution Workflow ==

```
Contributor edits → System validates → Applied immediately → Logged → Reputation earned
```

**No approval required** (Wikipedia model)
**New contributors** (<50 reputation): Limited to minor edits
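
The reputation gate can be sketched as follows; the 50-point threshold is from the text, while the specific categories counted as "minor edits" are assumptions:

```python
MINOR_EDIT_TYPES = {"typo_fix", "formatting"}   # assumed categories of "minor edits"

def can_apply_edit(reputation: int, edit_type: str) -> bool:
    """Edits apply immediately; contributors below 50 reputation are
    limited to minor edits."""
    if reputation >= 50:
        return True
    return edit_type in MINOR_EDIT_TYPES
```

Note that the gate only limits *what* a new contributor can change, not *when* the change lands: allowed edits still apply immediately.
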

== 5.5 Quality and Audit Workflow ==

{{include reference="Archive.FactHarbor.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

== 6. Flagging Workflow ==

```
User flags issue → Categorize (abuse/quality) → Automated or manual resolution
```

**Quality issues**: Add to improvement queue → System fix → Auto re-process
**Abuse**: Moderator review → Action taken
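
The two resolution paths reduce to a small routing function (a sketch; the category labels are assumptions):

```python
def route_flag(category: str) -> str:
    """Send abuse flags to a moderator and quality flags to the improvement loop."""
    if category == "abuse":
        return "moderator_review"      # manual resolution, action taken
    return "improvement_queue"         # system fix, then automatic re-processing
```
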

== 7. Moderation Workflow ==

**Automated pre-moderation**: 95% published automatically
**Moderator queue**: Only high-risk or flagged content
**Appeal process**: Different moderator → Governing Team if needed

== 8. System Improvement Workflow ==

**Weekly cycle**:

```
Monday: Review error patterns
Tuesday-Wednesday: Develop fixes
Thursday: Test improvements
Friday: Deploy & re-process
Weekend: Monitor metrics
```

**Error capture**:

```
Error detected → Categorize → Root cause → Improvement queue → Pattern analysis
```

**A/B Testing**:

```
New algorithm → Split traffic (90% control, 10% test) → Run 1 week → Compare metrics → Deploy if better
```
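
The 90/10 traffic split can be sketched with deterministic hash bucketing, so the same claim always lands in the same variant; the bucketing mechanism is an assumption, the split ratio is from the text:

```python
import hashlib

def assign_variant(claim_id: str, test_fraction: float = 0.10) -> str:
    """Deterministically bucket a claim into control (90%) or test (10%)."""
    digest = hashlib.sha256(claim_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF    # uniform value in [0, 1]
    return "test" if bucket < test_fraction else "control"
```

Determinism matters here: metrics for a claim stay attributable to one variant for the whole week.
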

== 9. Quality Monitoring Workflow ==

**Continuous**: Every hour, calculate metrics and detect anomalies
**Daily**: Update source track records, aggregate error patterns
**Weekly**: System improvement cycle, performance review

== 10. Source Track Record Workflow ==

**Initial score**: A new source starts at 50 (neutral)
**Daily updates**: Calculate accuracy and correction frequency, then update the score
**Continuous**: All claims citing a source are recalculated when its score changes

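A minimal sketch of the daily update, assuming an exponential moving average toward the observed accuracy; only the 0-100 scale and the neutral start at 50 are from the text, while the update rule and the 0.1 weight are illustrative:

```python
def update_track_record(current_score: float, daily_accuracy: float,
                        weight: float = 0.1) -> float:
    """Nudge a source's 0-100 score toward its observed daily accuracy."""
    return current_score + weight * (daily_accuracy * 100 - current_score)

score = 50.0                      # new source: neutral starting score
for accuracy in [0.9, 0.9, 0.9]:  # three accurate days raise the score
    score = update_track_record(score, accuracy)
```

A small weight keeps single good or bad days from swinging the score, while sustained accuracy (or inaccuracy) steadily moves it.
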
== 11. Re-Processing Workflow ==

**Triggers**: System improvement deployed, source score updated, new evidence, error fixed
**Process**: Identify affected claims → Re-run AKEL → Compare → Update if better → Log change
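
The "Compare → Update if better → Log change" step can be sketched as below; comparing on confidence alone is an assumed simplification:

```python
def reprocess(old_verdict: dict, new_verdict: dict) -> dict:
    """Keep the re-processed verdict only if it improves on the old one."""
    if new_verdict["confidence"] > old_verdict["confidence"]:
        return {"verdict": new_verdict, "log": "updated after re-processing"}
    return {"verdict": old_verdict, "log": "re-processed, no improvement"}

result = reprocess({"label": "unverified", "confidence": 0.35},
                   {"label": "supported", "confidence": 0.72})
```

Either way, a log entry is produced, so every re-processing run leaves an audit trail.
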

== 12. Related Pages ==

* [[Requirements>>Archive.FactHarbor.Specification.Requirements.WebHome]]
* [[Architecture>>Archive.FactHarbor 2026\.01\.20.Specification.Architecture.WebHome]]
* [[Data Model>>Archive.FactHarbor.Specification.Data Model.WebHome]]