
Last modified by Robert Schaub on 2025/12/24 18:27

= Workflows =

FactHarbor workflows are **simple, automated, and focused on continuous improvement**.

== 1. Core Principles ==

* **Automated by default**: AI processes everything
* **Publish immediately**: no centralized approval (removed in V0.9.50)
* **Quality through monitoring**: not gatekeeping
* **Fix systems, not data**: errors trigger improvements
* **Human-in-the-loop**: only for edge cases and abuse
== 2. Claim Submission Workflow ==

=== 2.1 Claim Extraction ===

When users submit content (text, articles, web pages), FactHarbor first extracts the individual verifiable claims:

**Input Types:**
* Single claim: "The Earth is flat"
* Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
* URLs: web pages analyzed for factual claims

**Extraction Process:**
* An LLM analyzes the submitted content
* Identifies distinct, verifiable factual claims
* Separates claims from opinions, questions, and commentary
* Each claim becomes an independent unit for processing

**Output:**
* List of claims with context
* Each claim is assigned a unique ID
* Original context preserved for reference

This extraction ensures:
* Each claim receives focused analysis
* Multiple claims in one submission are all processed
* Claims are properly isolated for independent verification
* Context is preserved for accurate interpretation
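
The extraction output described above can be sketched as a small data structure. This is illustrative only: the `Claim` record and `package_claims` helper are assumed names, and the LLM call itself is stubbed, since the page does not specify the implementation.

```python
import uuid
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str   # unique ID assigned at extraction time
    text: str       # the isolated, verifiable claim
    context: str    # original submission, preserved for interpretation

def package_claims(extracted: list[str], original: str) -> list[Claim]:
    """Wrap raw LLM-extracted claim strings into independent records."""
    return [Claim(claim_id=str(uuid.uuid4()), text=t, context=original)
            for t in extracted]

# The LLM step is stubbed here; in practice a prompt would ask the model
# to return only distinct, verifiable factual statements.
submission = "Climate change is accelerating. Sea levels rose 3mm in 2023."
llm_output = ["Climate change is accelerating.", "Sea levels rose 3mm in 2023."]
claims = package_claims(llm_output, submission)
```

Each record carries the full submission as context, so downstream analysis can interpret a claim without re-fetching the original content.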

**Submission Pipeline:**

```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```

**Timeline**: seconds
**No approval needed**
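
The duplicate-detection step in the pipeline above can be approximated with a normalized hash lookup. The normalization rule and the in-memory set are simplifying assumptions; a production system would use a persistent index.

```python
import hashlib

seen: set[str] = set()  # stand-in for a database index of known claims

def normalize(claim: str) -> str:
    # Case-fold and collapse whitespace so trivial variants collide.
    return " ".join(claim.lower().split())

def is_duplicate(claim: str) -> bool:
    """Return True if an equivalent claim was already submitted."""
    digest = hashlib.sha256(normalize(claim).encode()).hexdigest()
    if digest in seen:
        return True
    seen.add(digest)
    return False

first = is_duplicate("The Earth is flat")
second = is_duplicate("the earth   is FLAT")
```

Exact-match hashing only catches trivial rephrasings; semantically equivalent claims would need embedding-based similarity, which this sketch omits.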
43
== 2.5 Claim and Scenario Workflow ==

{{include reference="FactHarbor.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}
47
== 3. Automated Analysis Workflow ==

```
Claim from queue
  ↓
Evidence gathering (AKEL)
  ↓
Source evaluation (track record check)
  ↓
Scenario generation
  ↓
Verdict synthesis
  ↓
Risk assessment
  ↓
Quality gates (confidence > 40%? risk < 80%?)
  ↓
Publish OR Flag for improvement
```

**Timeline**: 10-30 seconds
**90%+ published automatically**
== 3.5 Evidence and Verdict Workflow ==

{{include reference="FactHarbor.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
== 4. Publication Workflow ==

**Standard (90%+)**: pass quality gates → publish immediately with confidence scores
**High Risk (<10%)**: risk > 80% → moderator review
**Low Quality**: confidence < 40% → improvement queue → re-process
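
A minimal sketch of the routing implied by the three branches above, assuming scores are expressed as fractions and that risk is checked before confidence (the page does not fix an order):

```python
def route(confidence: float, risk: float) -> str:
    """Route an analyzed claim per the publication workflow thresholds."""
    if risk > 0.80:
        return "moderator_review"    # high risk: human gate
    if confidence < 0.40:
        return "improvement_queue"   # low quality: fix the system, re-process
    return "publish"                 # standard path: publish immediately
```

Only the two thresholds (40% confidence, 80% risk) come from the document; the branch names are illustrative.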
== 5. User Contribution Workflow ==

```
Contributor edits → System validates → Applied immediately → Logged → Reputation earned
```

**No approval required** (Wikipedia model)
**New contributors** (<50 reputation): limited to minor edits
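
The reputation gate for new contributors might look like the following. The edit-type categories and what counts as "minor" are assumptions; the document only fixes the 50-reputation threshold.

```python
MINOR_EDIT_TYPES = {"typo_fix", "formatting"}  # illustrative categories
REPUTATION_THRESHOLD = 50                      # from the workflow above

def may_apply(edit_type: str, reputation: int) -> bool:
    """New contributors (<50 reputation) are limited to minor edits."""
    if reputation >= REPUTATION_THRESHOLD:
        return True
    return edit_type in MINOR_EDIT_TYPES
```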
== 5.5 Quality and Audit Workflow ==

{{include reference="FactHarbor.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

== 6. Flagging Workflow ==

```
User flags issue → Categorize (abuse/quality) → Automated or manual resolution
```

**Quality issues**: add to improvement queue → system fix → auto re-process
**Abuse**: moderator review → action taken
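
The categorization fork could be dispatched as follows; the category strings and destination names are illustrative, not part of the specification.

```python
def handle_flag(category: str) -> str:
    """Dispatch a user flag to automated or manual resolution."""
    if category == "abuse":
        return "moderator_review"   # manual resolution, action taken
    if category == "quality":
        return "improvement_queue"  # system fix, then auto re-process
    raise ValueError(f"unknown flag category: {category}")
```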
== 7. Moderation Workflow ==

**Automated pre-moderation**: 95% published automatically
**Moderator queue**: only high-risk or flagged content
**Appeal process**: different moderator → Governing Team if needed
== 8. System Improvement Workflow ==

**Weekly cycle**:

```
Monday: Review error patterns
Tuesday-Wednesday: Develop fixes
Thursday: Test improvements
Friday: Deploy & re-process
Weekend: Monitor metrics
```

**Error capture**:

```
Error detected → Categorize → Root cause → Improvement queue → Pattern analysis
```

**A/B Testing**:

```
New algorithm → Split traffic (90% control, 10% test) → Run 1 week → Compare metrics → Deploy if better
```
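
The 90/10 traffic split can be made deterministic by hashing the claim ID, so a claim keeps its bucket across retries. The hashing scheme is an assumption; only the split ratio comes from the workflow above.

```python
import hashlib

def assign_bucket(claim_id: str, test_fraction: float = 0.10) -> str:
    """Deterministically assign a claim to the control or test arm."""
    digest = hashlib.sha256(claim_id.encode()).digest()
    # Map the first 4 bytes of the hash to a value in [0, 1).
    value = int.from_bytes(digest[:4], "big") / 2**32
    return "test" if value < test_fraction else "control"

buckets = [assign_bucket(f"claim-{i}") for i in range(1000)]
test_share = buckets.count("test") / len(buckets)
```

Stable assignment matters for the week-long run: a claim re-processed mid-experiment must not hop between algorithms, or the metric comparison is contaminated.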
== 9. Quality Monitoring Workflow ==

**Continuous**: metrics calculated and anomalies detected every hour
**Daily**: source track records updated, error patterns aggregated
**Weekly**: system improvement cycle, performance review
== 10. Source Track Record Workflow ==

**Initial score**: a new source starts at 50 (neutral)
**Daily updates**: accuracy and correction frequency calculated, score updated
**Continuous**: all claims using a source are recalculated when its score changes
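
One plausible form of the daily update is an exponential moving average toward the day's observed accuracy, penalized by corrections. The formula and weights below are assumptions; the page only fixes the neutral starting score of 50 and the daily cadence.

```python
def update_score(current: float, accuracy_today: float,
                 corrections_today: int, weight: float = 0.1) -> float:
    """Nudge a source's 0-100 track-record score toward today's evidence.

    accuracy_today is a fraction in [0, 1]; each correction applies an
    assumed 5-point penalty to the day's target before averaging.
    """
    target = 100 * accuracy_today - 5 * corrections_today
    target = max(0.0, min(100.0, target))
    new = (1 - weight) * current + weight * target
    return max(0.0, min(100.0, new))

score = 50.0  # new source starts neutral
score = update_score(score, accuracy_today=0.9, corrections_today=0)
```

A small `weight` keeps the score stable day to day, so a single bad day does not immediately trigger mass recalculation of dependent claims.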
== 11. Re-Processing Workflow ==

**Triggers**: system improvement deployed, source score updated, new evidence, error fixed
**Process**: Identify affected claims → Re-run AKEL → Compare → Update if better → Log change
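
The compare-and-update step might be modeled as follows, with "better" taken to mean higher confidence, which is an assumption; the document only says "Compare → Update if better → Log change".

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim_id: str
    label: str
    confidence: float

def reprocess(old: Verdict, rerun: Verdict, log: list[str]) -> Verdict:
    """Keep the re-run verdict only if it improves on the stored one."""
    if rerun.confidence > old.confidence:
        log.append(f"{old.claim_id}: {old.label}@{old.confidence:.2f} "
                   f"-> {rerun.label}@{rerun.confidence:.2f}")
        return rerun
    return old
```

Keeping the old verdict when the re-run is no better makes re-processing safe to trigger broadly: a deployment can touch every affected claim without risk of regressions.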
== 12. Related Pages ==

* [[Requirements>>FactHarbor.Specification.Requirements.WebHome]]
* [[Architecture>>FactHarbor.Specification.Architecture.WebHome]]
* [[Data Model>>FactHarbor.Specification.Data Model.WebHome]]