= Workflows =
FactHarbor workflows are **simple, automated, and focused on continuous improvement**.
== 1. Core Principles ==
* **Automated by default**: AI processes everything
* **Publish immediately**: No centralized approval (removed in V0.9.50)
* **Quality through monitoring**: Not gatekeeping
* **Fix systems, not data**: Errors trigger improvements
* **Human-in-the-loop**: Only for edge cases and abuse
== 2. Claim Submission Workflow ==

=== 2.1 Claim Extraction ===

When users submit content (text, articles, web pages), FactHarbor first extracts individual verifiable claims:

**Input Types:**
* Single claim: "The Earth is flat"
* Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
* URLs: Web pages analyzed for factual claims

**Extraction Process:**
* LLM analyzes submitted content
* Identifies distinct, verifiable factual claims
* Separates claims from opinions, questions, and commentary
* Each claim becomes an independent unit for processing

**Output:**
* List of claims with context
* Each claim assigned a unique ID
* Original context preserved for reference

This extraction ensures:
* Each claim receives focused analysis
* Multiple claims in one submission are all processed
* Claims are properly isolated for independent verification
* Context is preserved for accurate interpretation

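As an illustration only, the extraction output described above could be represented as below. This is a Python sketch; the names (ExtractedClaim, split_into_claims) and the sentence-splitting stand-in for the LLM step are assumptions, not the FactHarbor data model.

```
# Illustrative sketch only: field and function names are assumptions,
# not the FactHarbor data model.
import uuid
from dataclasses import dataclass, field

@dataclass
class ExtractedClaim:
    text: str     # the distinct, verifiable claim
    context: str  # original surrounding text, preserved for reference
    claim_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def split_into_claims(submission: str, context: str) -> list[ExtractedClaim]:
    """Stand-in for the LLM extraction step: each verifiable claim becomes
    an independent unit with its own ID and a copy of the original context."""
    # In the real workflow an LLM separates claims from opinions and questions;
    # here each sentence is naively treated as one claim for illustration.
    sentences = [s.strip() for s in submission.split(".") if s.strip()]
    return [ExtractedClaim(text=s, context=context) for s in sentences]

claims = split_into_claims(
    "Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually.",
    context="User-submitted paragraph on climate change",
)
for c in claims:
    print(c.claim_id, c.text)
```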
```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```
**Timeline**: Seconds
**No approval needed**

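The duplicate-detection step is not specified in detail here. A minimal Python sketch of one plausible approach, normalizing the claim text and hashing it, is shown below; the seen_hashes store and the normalization rules are assumptions.

```
# Sketch of the "Duplicate detection" step; the normalization rules and the
# seen_hashes store are assumptions, not the specified implementation.
import hashlib

seen_hashes: set[str] = set()  # would be a database index in practice

def claim_fingerprint(text: str) -> str:
    normalized = " ".join(text.lower().split())  # collapse case and whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()

def is_duplicate(text: str) -> bool:
    fp = claim_fingerprint(text)
    if fp in seen_hashes:
        return True
    seen_hashes.add(fp)
    return False

print(is_duplicate("The Earth is flat"))   # False: first submission
print(is_duplicate("the earth is  flat"))  # True: same claim, different formatting
```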
== 2.5 Claim and Scenario Workflow ==

{{include reference="FactHarbor.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}

== 3. Automated Analysis Workflow ==
```
Claim from queue
→ Evidence gathering (AKEL)
→ Source evaluation (track record check)
→ Scenario generation
→ Verdict synthesis
→ Risk assessment
→ Quality gates (confidence > 40%? risk < 80%?)
→ Publish OR Flag for improvement
```
**Timeline**: 10-30 seconds
**90%+ published automatically**
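Read as code, the pipeline above is a linear sequence of steps feeding one analysis record. The Python sketch below is only an orchestration outline: the step functions and the Analysis fields are assumptions standing in for AKEL components, with placeholder bodies so the sketch runs.

```
# Orchestration outline only: the step functions and Analysis fields are
# assumptions; the real steps are AKEL components, not these stubs.
from dataclasses import dataclass

@dataclass
class Analysis:
    claim_id: str
    verdict: str
    confidence: float  # 0.0 - 1.0
    risk: float        # 0.0 - 1.0

# Placeholder implementations so the sketch runs end to end.
def gather_evidence(claim: str) -> list[str]:
    return ["evidence snippet"]

def evaluate_sources(evidence: list[str]) -> float:
    return 0.7  # stand-in track-record score

def generate_scenarios(claim: str, evidence: list[str]) -> list[str]:
    return ["scenario A"]

def synthesize_verdict(scenarios: list[str], source_score: float) -> tuple[str, float]:
    return ("likely false", 0.65)

def assess_risk(claim: str, verdict: str) -> float:
    return 0.2

def analyze(claim_id: str, claim_text: str) -> Analysis:
    evidence = gather_evidence(claim_text)                # Evidence gathering (AKEL)
    source_score = evaluate_sources(evidence)             # Source evaluation (track record check)
    scenarios = generate_scenarios(claim_text, evidence)  # Scenario generation
    verdict, confidence = synthesize_verdict(scenarios, source_score)  # Verdict synthesis
    risk = assess_risk(claim_text, verdict)               # Risk assessment
    return Analysis(claim_id, verdict, confidence, risk)

print(analyze("c-123", "The Earth is flat"))
```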
== 3.5 Evidence and Verdict Workflow ==
{{include reference="FactHarbor.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
== 4. Publication Workflow ==
**Standard (90%+)**: Pass quality gates → Publish immediately with confidence scores
**High Risk (<10%)**: Risk > 80% → Moderator review
**Low Quality**: Confidence < 40% → Improvement queue → Re-process
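Expressed as a decision function, the three routes and their thresholds (risk above 80%, confidence below 40%) could look like the sketch below; the function and queue names are illustrative, and checking risk before confidence is an assumption the spec does not fix.

```
# Illustrative routing only; the thresholds come from the workflow above,
# the function and queue names are assumptions.
def route_for_publication(confidence: float, risk: float) -> str:
    if risk > 0.80:
        return "moderator_review"   # High Risk: human check before publication
    if confidence < 0.40:
        return "improvement_queue"  # Low Quality: re-process after system fix
    return "publish"                # Standard: publish immediately with scores

print(route_for_publication(confidence=0.72, risk=0.15))  # publish
print(route_for_publication(confidence=0.30, risk=0.10))  # improvement_queue
print(route_for_publication(confidence=0.90, risk=0.85))  # moderator_review
```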
== 5. User Contribution Workflow ==
```
Contributor edits → System validates → Applied immediately → Logged → Reputation earned
```
**No approval required** (Wikipedia model)
**New contributors** (<50 reputation): Limited to minor edits
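A minimal sketch of the reputation gate for new contributors follows; the 50-reputation threshold is from the workflow above, while the "minor edit" heuristic (a character-count limit) and all names are assumptions.

```
# Sketch of the contribution gate; the 50-reputation threshold is from the
# workflow above, the character-delta heuristic for "minor edit" is an assumption.
def may_apply_edit(reputation: int, chars_changed: int, minor_limit: int = 200) -> bool:
    if reputation >= 50:
        return True                      # established contributors: any edit, applied immediately
    return chars_changed <= minor_limit  # new contributors: minor edits only

print(may_apply_edit(reputation=12, chars_changed=80))     # True  (minor edit)
print(may_apply_edit(reputation=12, chars_changed=1500))   # False (too large for a new contributor)
print(may_apply_edit(reputation=300, chars_changed=1500))  # True
```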
== 5.5 Quality and Audit Workflow ==

{{include reference="FactHarbor.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

== 6. Flagging Workflow ==
```
User flags issue → Categorize (abuse/quality) → Automated or manual resolution
```
**Quality issues**: Add to improvement queue → System fix → Auto re-process
**Abuse**: Moderator review → Action taken
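The split between the two flag categories reads as a small dispatch; the category strings mirror the workflow above, everything else in this sketch is an assumption.

```
# Dispatch sketch for flags; the abuse/quality split mirrors the workflow above,
# the returned destination names are assumptions.
def handle_flag(category: str) -> str:
    if category == "quality":
        return "improvement_queue"  # system fix, then automatic re-processing
    if category == "abuse":
        return "moderator_review"   # manual review and action
    raise ValueError(f"unknown flag category: {category}")

print(handle_flag("quality"))  # improvement_queue
print(handle_flag("abuse"))    # moderator_review
```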
== 7. Moderation Workflow ==
**Automated pre-moderation**: 95% published automatically
**Moderator queue**: Only high-risk or flagged content
**Appeal process**: Different moderator → Governing Team if needed
== 8. System Improvement Workflow ==
**Weekly cycle**:
```
Monday: Review error patterns
Tuesday-Wednesday: Develop fixes
Thursday: Test improvements
Friday: Deploy & re-process
Weekend: Monitor metrics
```
**Error capture**:
```
Error detected → Categorize → Root cause → Improvement queue → Pattern analysis
```
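For illustration, an error-capture record that supports the later pattern-analysis step might carry the category and root cause alongside the claim; every field name and example value below is an assumption.

```
# Illustrative error record; every field name is an assumption made for the sketch.
from dataclasses import dataclass
from collections import Counter

@dataclass
class CapturedError:
    claim_id: str
    category: str    # e.g. "evidence_gap", "source_misweighted"
    root_cause: str  # short diagnosis feeding the improvement queue

errors = [
    CapturedError("c-1", "evidence_gap", "no primary source found"),
    CapturedError("c-2", "evidence_gap", "paywalled source skipped"),
    CapturedError("c-3", "source_misweighted", "track record stale"),
]

# Pattern analysis step: count categories to spot recurring failure modes.
print(Counter(e.category for e in errors))  # Counter({'evidence_gap': 2, ...})
```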
**A/B Testing**:
```
New algorithm → Split traffic (90% control, 10% test) → Run 1 week → Compare metrics → Deploy if better
```
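One common way to realize a 90/10 split is deterministic hashing of the claim ID, so the same claim always lands in the same arm for the duration of the test; the sketch below assumes that approach, which the spec does not mandate.

```
# Deterministic 90/10 traffic split; hashing by claim ID is an assumption,
# chosen so a claim stays in the same arm for the one-week test.
import hashlib

def assign_arm(claim_id: str, test_share: float = 0.10) -> str:
    bucket = int(hashlib.sha256(claim_id.encode()).hexdigest(), 16) % 100
    return "test" if bucket < test_share * 100 else "control"

print(assign_arm("claim-0001"))  # "control" or "test", stable across runs
```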
== 9. Quality Monitoring Workflow ==
**Continuous**: Every hour calculate metrics, detect anomalies
**Daily**: Update source track records, aggregate error patterns
**Weekly**: System improvement cycle, performance review
== 10. Source Track Record Workflow ==
**Initial score**: New source starts at 50 (neutral)
**Daily updates**: Calculate accuracy, correction frequency, update score
**Continuous**: All claims using the source are recalculated when its score changes
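The spec does not give the scoring formula; as a sketch only, a daily update could blend observed accuracy and correction frequency into the 0-100 score that starts at 50. The weights and the smoothing factor below are pure assumptions.

```
# Sketch of a daily track-record update; the neutral start at 50 is from the
# workflow above, the weighting and smoothing are assumptions.
def updated_score(current: float, accuracy: float, correction_rate: float) -> float:
    """accuracy and correction_rate are daily observations in [0, 1]."""
    daily_signal = 100 * accuracy - 30 * correction_rate  # assumed weighting
    new_score = 0.9 * current + 0.1 * daily_signal        # slow-moving average
    return max(0.0, min(100.0, new_score))

score = 50.0  # new source starts neutral
score = updated_score(score, accuracy=0.95, correction_rate=0.02)
print(round(score, 1))  # drifts upward for an accurate source
```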
== 11. Re-Processing Workflow ==
**Triggers**: System improvement deployed, source score updated, new evidence, error fixed
**Process**: Identify affected claims → Re-run AKEL → Compare → Update if better → Log change
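The "Update if better" step implies comparing the re-run against the published analysis. The sketch below uses confidence as the sole yardstick, which is an assumption; the Verdict fields and function name are illustrative.

```
# Sketch of the re-processing decision; comparing on confidence alone is an
# assumption, the real comparison may weigh more signals.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str
    confidence: float

def maybe_update(published: Verdict, reprocessed: Verdict) -> Verdict:
    """Keep the published verdict unless the re-run is measurably better."""
    if reprocessed.confidence > published.confidence:
        # In the real workflow the change would also be logged for the audit trail.
        return reprocessed
    return published

print(maybe_update(Verdict("unsupported", 0.55), Verdict("false", 0.78)))
```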
== 12. Related Pages ==
* [[Requirements>>FactHarbor.Specification.Requirements.WebHome]]
* [[Architecture>>FactHarbor.Specification.Architecture.WebHome]]
* [[Data Model>>FactHarbor.Specification.Data Model.WebHome]]