
Last modified by Robert Schaub on 2026/02/08 08:26

= Workflows =

FactHarbor workflows are **simple, automated, and focused on continuous improvement**.

== 1. Core Principles ==

* **Automated by default**: AI processes everything
* **Publish immediately**: No centralized approval (removed in V0.9.50)
* **Quality through monitoring**: Not gatekeeping
* **Fix systems, not data**: Errors trigger improvements
* **Human-in-the-loop**: Only for edge cases and abuse

== 2. Claim Submission Workflow ==

=== 2.1 Claim Extraction ===

When users submit content (text, articles, web pages), FactHarbor first extracts individual verifiable claims:

**Input Types:**

* Single claim: "The Earth is flat"
* Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
* URLs: Web pages analyzed for factual claims

**Extraction Process:**

* LLM analyzes submitted content
* Identifies distinct, verifiable factual claims
* Separates claims from opinions, questions, and commentary
* Each claim becomes an independent unit for processing

**Output:**

* List of claims with context
* Each claim assigned a unique ID
* Original context preserved for reference

This extraction ensures:

* Each claim receives focused analysis
* Multiple claims in one submission are all processed
* Claims are properly isolated for independent verification
* Context is preserved for accurate interpretation
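The extraction step above can be sketched in Python. A naive sentence split stands in for the LLM here, and the `Claim` dataclass and `extract_claims` name are illustrative assumptions, not part of the spec:

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str            # the extracted factual claim
    source_context: str  # original submission, preserved for reference
    claim_id: str = field(default_factory=lambda: uuid.uuid4().hex)


def extract_claims(submission: str) -> list[Claim]:
    """Split a submission into independent claims.

    In production this step is performed by an LLM that also filters out
    opinions and questions; a naive sentence split stands in for it here.
    """
    sentences = [s.strip() for s in submission.split(".") if s.strip()]
    return [Claim(text=s, source_context=submission) for s in sentences]


claims = extract_claims(
    "Climate change is accelerating. Sea levels rose 3mm in 2023."
)
# Two independent claims, each with its own unique ID and the full
# original context attached.
```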

```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```
**Timeline**: Seconds
**No approval needed**
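The submission pipeline can be sketched with a hash-based duplicate check. The `submit` function, the `claim-N` ID format, and the in-memory stores are illustrative assumptions:

```python
import hashlib
from collections import deque

queue: deque[str] = deque()  # processing queue (claim texts)
seen: dict[str, str] = {}    # hash of normalized text -> submission ID


def submit(claim_text: str) -> str:
    """Run duplicate detection, enqueue new claims, and return an ID."""
    digest = hashlib.sha256(claim_text.strip().lower().encode()).hexdigest()
    if digest in seen:
        return seen[digest]  # duplicate: return the existing ID
    submission_id = f"claim-{len(seen) + 1}"
    seen[digest] = submission_id
    queue.append(claim_text)  # published later; no approval gate
    return submission_id
```

Re-submitting the same claim (modulo case and whitespace) returns the original ID without growing the queue.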

== 2.5 Claim and Scenario Workflow ==

{{include reference="Archive.FactHarbor 2026\.01\.20.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}

== 3. Automated Analysis Workflow ==

```
Claim from queue
↓
Evidence gathering (AKEL)
↓
Source evaluation (track record check)
↓
Scenario generation
↓
Verdict synthesis
↓
Risk assessment
↓
Quality gates (confidence > 40%? risk < 80%?)
↓
Publish OR Flag for improvement
```
**Timeline**: 10-30 seconds
**90%+ published automatically**
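The analysis pipeline can be modeled as a chain of stage functions, each enriching the claim record. The stage bodies below are placeholders (in FactHarbor each stage is an AI-driven step), so the attached values are illustrative only:

```python
# Placeholder stages: each takes a claim dict and returns an enriched copy.
def gather_evidence(claim):    return {**claim, "evidence": ["source-a"]}
def evaluate_sources(claim):   return {**claim, "source_score": 72}
def generate_scenarios(claim): return {**claim, "scenarios": 3}
def synthesize_verdict(claim): return {**claim, "confidence": 0.65}
def assess_risk(claim):        return {**claim, "risk": 0.2}

PIPELINE = [gather_evidence, evaluate_sources, generate_scenarios,
            synthesize_verdict, assess_risk]


def analyze(claim: dict) -> dict:
    """Run a claim through every stage in order."""
    for stage in PIPELINE:
        claim = stage(claim)
    return claim
```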

== 3.5 Evidence and Verdict Workflow ==

{{include reference="Archive.FactHarbor 2026\.01\.20.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}

== 4. Publication Workflow ==

**Standard (90%+)**: Pass quality gates → Publish immediately with confidence scores
**High Risk (<10%)**: Risk > 80% → Moderator review
**Low Quality**: Confidence < 40% → Improvement queue → Re-process
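The three publication paths reduce to two threshold checks. The `route` function and its return labels are illustrative names, not part of the spec:

```python
def route(confidence: float, risk: float) -> str:
    """Route an analyzed claim per the publication rules above."""
    if risk > 0.80:
        return "moderator_review"   # high risk: human check first
    if confidence < 0.40:
        return "improvement_queue"  # low quality: re-process later
    return "publish"                # standard path, ~90% of claims
```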

== 5. User Contribution Workflow ==

```
Contributor edits → System validates → Applied immediately → Logged → Reputation earned
```
**No approval required** (Wikipedia model)
**New contributors** (<50 reputation): Limited to minor edits
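The reputation gate can be sketched as a single check before an edit is applied. `apply_edit` and its parameters are illustrative, and the validation details are elided:

```python
REPUTATION_THRESHOLD = 50  # below this, contributors are limited to minor edits


def apply_edit(reputation: int, edit_kind: str) -> bool:
    """Validate and apply an edit immediately (Wikipedia model).

    Returns True if the edit is applied. `edit_kind` is "minor" or
    "major"; validation rules beyond the reputation gate are elided.
    """
    if edit_kind == "major" and reputation < REPUTATION_THRESHOLD:
        return False  # new contributors: minor edits only
    # ... validate, apply immediately, log, award reputation ...
    return True
```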

== 5.5 Quality and Audit Workflow ==

{{include reference="Archive.FactHarbor 2026\.01\.20.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

== 6. Flagging Workflow ==

```
User flags issue → Categorize (abuse/quality) → Automated or manual resolution
```
**Quality issues**: Add to improvement queue → System fix → Auto re-process
**Abuse**: Moderator review → Action taken
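Flag triage is a small dispatch on the category; the function and label names below are assumptions:

```python
def triage_flag(category: str) -> str:
    """Dispatch a user flag to the right resolution path."""
    if category == "abuse":
        return "moderator_review"   # humans handle abuse
    if category == "quality":
        return "improvement_queue"  # system fix, then auto re-process
    raise ValueError(f"unknown flag category: {category}")
```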

== 7. Moderation Workflow ==

**Automated pre-moderation**: 95% published automatically
**Moderator queue**: Only high-risk or flagged content
**Appeal process**: Different moderator → Governing Team if needed

== 8. System Improvement Workflow ==

**Weekly cycle**:
```
Monday: Review error patterns
Tuesday-Wednesday: Develop fixes
Thursday: Test improvements
Friday: Deploy & re-process
Weekend: Monitor metrics
```
**Error capture**:
```
Error detected → Categorize → Root cause → Improvement queue → Pattern analysis
```
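Error capture can be sketched as an append to the improvement queue plus a running pattern count that feeds the weekly review. The names and record shape are illustrative:

```python
from collections import Counter

improvement_queue: list[dict] = []
pattern_counts: Counter[str] = Counter()


def capture_error(claim_id: str, category: str, root_cause: str) -> None:
    """Record an error, queue a fix, and feed pattern analysis."""
    improvement_queue.append(
        {"claim_id": claim_id, "category": category, "root_cause": root_cause}
    )
    pattern_counts[root_cause] += 1  # recurring causes surface in weekly review


capture_error("claim-1", "quality", "stale source score")
capture_error("claim-2", "quality", "stale source score")
# "stale source score" now appears twice in pattern_counts,
# marking it as a candidate for a systemic fix.
```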
**A/B Testing**:
```
New algorithm → Split traffic (90% control, 10% test) → Run 1 week → Compare metrics → Deploy if better
```
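A deterministic hash split keeps each claim in the same arm across retries. `assign_arm` and the bucketing scheme are an assumed implementation, not the documented one:

```python
import hashlib


def assign_arm(claim_id: str, test_fraction: float = 0.10) -> str:
    """Split traffic ~90% control / ~10% test, deterministically.

    Hashing the claim ID keeps the assignment stable across retries
    and re-processing runs.
    """
    bucket = int(hashlib.sha256(claim_id.encode()).hexdigest(), 16) % 100
    return "test" if bucket < test_fraction * 100 else "control"
```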

== 9. Quality Monitoring Workflow ==

**Continuous**: Every hour calculate metrics, detect anomalies
**Daily**: Update source track records, aggregate error patterns
**Weekly**: System improvement cycle, performance review

== 10. Source Track Record Workflow ==

**Initial score**: New source starts at 50 (neutral)
**Daily updates**: Calculate accuracy and correction frequency, update the score
**Continuous**: All claims using a source are recalculated when its score changes
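The daily update rule is not specified here, so the sketch below assumes a simple blend of observed accuracy and correction rate, clamped to an assumed 0-100 scale:

```python
INITIAL_SCORE = 50.0  # new sources start neutral on a 0-100 scale


def update_score(current: float, accuracy: float, correction_rate: float) -> float:
    """Nudge a source's track-record score toward today's observed quality.

    `accuracy` and `correction_rate` are fractions in [0, 1]; the
    weighting (90% old score, 10% daily signal) is an assumption chosen
    to smooth daily updates, not the documented formula.
    """
    observed = 100 * accuracy * (1 - correction_rate)  # assumed daily signal
    blended = 0.9 * current + 0.1 * observed
    return max(0.0, min(100.0, blended))
```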

== 11. Re-Processing Workflow ==

**Triggers**: System improvement deployed, source score updated, new evidence, error fixed
**Process**: Identify affected claims → Re-run AKEL → Compare → Update if better → Log change
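The compare-and-update rule can be sketched as follows. `rerun_akel` is a stand-in for the real AKEL re-run, and confidence is assumed to be the comparison metric:

```python
def rerun_akel(claim: dict) -> dict:
    """Placeholder for the AKEL re-run; pretends it improves confidence."""
    return {**claim, "confidence": claim["confidence"] + 0.1}


def reprocess(claim: dict, log: list[str]) -> dict:
    """Re-run analysis, keep the new verdict only if it is better, log it."""
    candidate = rerun_akel(claim)
    if candidate["confidence"] > claim["confidence"]:  # compare
        log.append(f"updated {claim['id']}")           # log the change
        return candidate                               # update if better
    return claim
```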

== 12. Related Pages ==

* [[Requirements>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.WebHome]]
* [[Architecture>>Archive.FactHarbor 2026\.01\.20.Specification.Architecture.WebHome]]
* [[Data Model>>Archive.FactHarbor 2026\.01\.20.Specification.Data Model.WebHome]]