
Last modified by Robert Schaub on 2026/02/08 08:26

= Workflows =

**Version:** 0.9.70
**Last Updated:** December 21, 2025
**Status:** Corrected (automation philosophy consistent)

This page describes FactHarbor's core workflows under the automation-first philosophy.

== 1. Core Workflow Principles ==

* **Automation First:** 90%+ of content published automatically
* **No Approval Bottlenecks:** No centralized review queues
* **Quality Gates:** Automated validation before publication
* **Sampling Audits:** Pattern analysis for system improvement
* **Transparent Confidence:** All outputs labeled with confidence scores

== 2. Claim Submission Workflow ==

=== 2.1 Claim Extraction ===

When users submit content (text, articles, web pages), FactHarbor first extracts individual verifiable claims.

**Input Types:**
* Single claim: "The Earth is flat"
* Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
* URLs: Web pages analyzed for factual claims

**Extraction Process:**

* LLM analyzes submitted content
* Identifies distinct, verifiable factual claims
* Separates claims from opinions, questions, or commentary
* Each claim becomes independent for processing

**Output:**
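For illustration only, the output of this step might look like the following sketch. The field names and the naive sentence splitting are assumptions, not FactHarbor's actual schema or extraction logic (which is LLM-based):

```python
import uuid

def extract_claims(text: str) -> list[dict]:
    """Illustrative stand-in for the LLM extraction step: here we just
    split on sentence boundaries; the real system identifies distinct,
    verifiable factual claims and discards opinions and questions."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [
        {
            "id": str(uuid.uuid4()),  # each claim gets a unique ID
            "text": sentence,
            "context": text,          # original context preserved
        }
        for sentence in sentences
    ]

claims = extract_claims(
    "Climate change is accelerating. Sea levels rose 3mm in 2023. "
    "Arctic ice decreased 13% annually."
)
# Each of the three claims is now independent but still carries its context.
```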
* List of claims with context
* Each claim assigned a unique ID
* Original context preserved for reference

This extraction ensures:

* Each claim receives focused analysis
* Multiple claims in one submission are all processed
* Claims are properly isolated for independent verification
* Context is preserved for accurate interpretation

**Flow:**
```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```

**Timeline:** Seconds
**No approval needed:** Instant processing

== 3. Automated Analysis Workflow ==

**Complete Pipeline:**

```
Claim from queue
↓
Evidence gathering (AKEL)
↓
Source evaluation (track record check)
↓
Scenario generation
↓
Verdict synthesis
↓
Risk assessment
↓
Quality gates validation
↓
Decision: PUBLISH or BLOCK
```

**Timeline:** 10-30 seconds
**Automation Rate:** 90%+ published automatically

=== 3.1 Quality Gates Decision ===

**Gate Validation:**
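The gate decision can be sketched in code. The gate names follow the four gates listed on this page, while the field names and the 0.7 source-quality threshold are illustrative assumptions, not FactHarbor's actual values:

```python
# Sketch of the automated gate decision; field names and the 0.7
# threshold are invented for illustration.
def evaluate_gates(analysis: dict) -> dict:
    gates = {
        "source_quality": analysis["source_score"] >= 0.7,
        "contradiction_search": analysis["contradictions_checked"],
        "uncertainty_quantification": analysis["confidence"] is not None,
        "structural_integrity": analysis["structure_valid"],
    }
    failed = [name for name, passed in gates.items() if not passed]
    # ALL gates pass -> publish immediately; ANY failure -> block (Draft-Only)
    decision = "PUBLISH" if not failed else "BLOCK"
    return {"decision": decision, "failed_gates": failed}

result = evaluate_gates({
    "source_score": 0.85,
    "contradictions_checked": True,
    "confidence": 0.91,
    "structure_valid": True,
})
```

A blocked result carries the names of the failed gates so failure patterns can be logged and fed into system improvement rather than routed to a human reviewer.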
1. Gate 1: Source Quality ✓
2. Gate 2: Contradiction Search ✓
3. Gate 3: Uncertainty Quantification ✓
4. Gate 4: Structural Integrity ✓

**If ALL gates PASS:**

→ **Publish immediately** (Mode 2: AI-Generated)
→ Apply appropriate risk tier label
→ Display confidence score
→ Make available for sampling audit

**If ANY gate FAILS:**

→ **Block publication** (Mode 1: Draft-Only)
→ Log failure reason
→ Analyze failure pattern
→ Queue system improvement task
→ May re-process after improvements

**CRITICAL:** No human approval step - gates are automated.

== 4. Publication Workflow ==

**V0.9.70 CLARIFIED:** Risk tiers affect LABELS and AUDIT FREQUENCY, NOT approval requirements.

=== Standard Flow (90%+) ===

```
Pass quality gates
↓
Determine risk tier (A/B/C)
↓
Apply appropriate labels
↓
PUBLISH IMMEDIATELY
↓
Add to audit sampling pool
```

**No delays, no approval queues**

=== High-Risk Content (Tier A - <10%) ===

**V0.9.70 CORRECTION:**

```
Pass quality gates
↓
Identified as Tier A (medical/legal/safety)
↓
PUBLISH IMMEDIATELY with prominent warnings
↓
Higher sampling audit frequency (50%)
```

**What changed from V0.9.69:**

- ❌ REMOVED: "Risk > 80% → Moderator review"
- ✅ ADDED: "Risk > 80% → Publish with WARNING labels"

**Philosophy:** Publish with strong warnings, monitor closely through sampling.

**Warning Labels for Tier A:**
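A minimal sketch of how the Tier A banner might be assembled. The helper function is hypothetical; the banner layout follows the template on this page:

```python
def tier_a_label(topic: str, confidence: float, updated: str) -> str:
    """Assemble the Tier A warning banner; the layout mirrors the
    template on this page, while this helper itself is an assumption."""
    return (
        "⚠️ HIGH-IMPACT TOPIC\n"
        "AI-Generated Analysis\n"
        f"This claim involves {topic} topics.\n"
        f"- Confidence: {confidence:.0%}\n"
        f"- Last Updated: {updated}\n"
        "- This is NOT professional advice\n"
        "- Consult qualified professionals for decisions"
    )

label = tier_a_label("medical", 0.87, "2025-12-21T10:00:00Z")
```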
```
⚠️ HIGH-IMPACT TOPIC
AI-Generated Analysis

This claim involves [medical/legal/financial/safety] topics.
- Confidence: [X]%
- Last Updated: [timestamp]
- This is NOT professional advice
- Consult qualified professionals for decisions

[View Evidence] [See Methodology] [Report Issue]
```

=== Low-Quality Content (<10%) ===

```
FAIL quality gates
↓
Confidence < threshold OR structural issues
↓
BLOCK (Mode 1: Draft-Only)
↓
Log failure patterns
↓
Queue for system improvement
```

**NOT:** Send for human review
**IS:** Improve prompts/algorithms based on failure patterns

== 5. User Contribution Workflow ==

**Philosophy:** Wikipedia-style immediate application + audit trail

```
Contributor edits published content
↓
System validates (basic checks)
↓
Applied IMMEDIATELY
↓
Logged in version history
↓
Reputation earned
↓
May be selected for sampling audit
```

**No approval required:** Changes apply instantly
**Quality control:** Through sampling audits and the reputation system
**New contributors** (<50 reputation): Limited to minor edits

== 6. Sampling Audit Workflow ==

**Purpose:** Improve system quality through pattern analysis

=== 6.1 Selection Process ===

```
Published content
↓
Stratified sampling (by risk tier, confidence, traffic)
↓
Selected for audit (Tier A: 50%, B: 20%, C: 5%)
↓
Added to audit queue
```

=== 6.2 Audit Execution ===

```
Auditor receives sample
↓
Reviews against quality standards
↓
Identifies issues/patterns
↓
Logs findings
↓
System improvement tasks created
```

**What auditors DO:**
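The pattern-analysis side of auditing can be sketched as aggregation over logged findings. The issue categories and the threshold of three occurrences are invented for illustration:

```python
from collections import Counter

# Hypothetical audit findings; the issue category names are illustrative.
findings = [
    {"claim_id": "c1", "issue": "weak_source"},
    {"claim_id": "c2", "issue": "missed_contradiction"},
    {"claim_id": "c3", "issue": "weak_source"},
    {"claim_id": "c4", "issue": "weak_source"},
]

# Aggregate issues across many outputs; recurring issues (three or more
# occurrences in this toy example) become system improvement tasks,
# rather than triggering manual fixes to individual outputs.
patterns = Counter(f["issue"] for f in findings)
improvement_tasks = [issue for issue, count in patterns.items() if count >= 3]
```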
* ✅ Analyze patterns across multiple outputs
* ✅ Identify systematic issues
* ✅ Recommend algorithm/prompt improvements
* ✅ Track accuracy trends

**What auditors DON'T DO:**

* ❌ Approve individual outputs before publication
* ❌ Manually fix individual outputs
* ❌ Act as gatekeepers
* ❌ Override quality gates

=== 6.3 Improvement Loop ===

```
Audit findings aggregated
↓
Patterns identified
↓
System improvements proposed
↓
Implemented and tested
↓
Deployed
↓
Metrics monitored
```

**Examples of Improvements:**
* Refine evidence search queries
* Adjust source reliability weights
* Enhance contradiction detection
* Improve claim extraction prompts
* Recalibrate risk tier thresholds

== 7. Flagging Workflow ==

**Two types of flags:**

=== 7.1 Quality Issues ===

```
User flags quality issue
↓
Categorized automatically
↓
Added to sampling audit pool (priority)
↓
Pattern analysis
↓
System improvement if pattern found
```

**NOT:** Manual correction of the individual claim
**IS:** Improve the system to prevent similar issues

=== 7.2 Abuse/Spam ===

```
User flags abuse/spam
↓
Automated pre-moderation check
↓
Moderator review (if needed)
↓
Action taken (hide/ban)
```

**Moderator role:** Handle abuse/spam, NOT content quality

== 8. Moderation Workflow ==

**V0.9.70 CLARIFIED:** Moderators handle ABUSE, not content quality

=== 8.1 Content Moderation (Abuse/Spam) ===

**Moderator Queue Contains:**
* Flagged abusive content
* Spam detection alerts
* Harassment reports
* Privacy violations
* Terms of service violations

**Moderator Actions:**

* Hide abusive content
* Ban repeat offenders
* Handle appeals
* Escalate to governing team

**Moderators DO NOT:**

* ❌ Approve content for publication
* ❌ Review content quality before publication
* ❌ Act as editorial gatekeepers
* ❌ Manually fix AI outputs

=== 8.2 Appeal Process ===

```
User disagrees with moderation
↓
Appeals to a different moderator
↓
If the user still disagrees, escalates to Governing Team
↓
Governing Team decision (final)
```

== 9. Time Evolution Workflow ==

**Automatic Re-evaluation:**

```
Published claim
↓
Monitoring for triggers:
- New evidence published
- Source retractions
- Significant events
- Scheduled review
↓
Trigger detected
↓
AKEL re-processes claim
↓
Quality gates validate
↓
If verdict changes: Correction workflow
↓
If passes: Update published analysis
```

**Correction Workflow (New in V0.9.70):**

```
Verdict changed significantly
↓
Generate correction notice
↓
Publish correction banner (30 days)
↓
Update corrections log
↓
Notify users (email, RSS, API)
↓
Update ClaimReview schema
```

== 10. Contributor Journey ==

1. **Visitor** – Explores platform, reads documentation
2. **New Contributor** – Submits first improvements (typo fixes, clarifications)
3. **Contributor** – Contributes regularly, follows conventions
4. **Trusted Contributor** – Track record of quality work
5. **Reviewer** – Participates in sampling audits (pattern analysis)
6. **Moderator** – Handles abuse/spam (not content quality)
7. **Expert** (optional) – Provides domain expertise for contested claims

**All contributions apply immediately** - no approval workflow

== 11. Related Pages ==

* [[AKEL>>Archive.FactHarbor 2026\.01\.20.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] - AI processing system
* [[Architecture>>Archive.FactHarbor 2026\.01\.20.Specification.Architecture.WebHome]] - System architecture
* [[Requirements>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.WebHome]] - Requirements and roles
* [[Decision Processes>>FactHarbor.Organisation.Decision-Processes.WebHome]] - Governance

**V0.9.70 CHANGES:**

**REMOVED:**
- ❌ "High Risk → Moderator review" (was an approval workflow)
- ❌ "Review queue" language for publication
- ❌ Any implication that moderators approve content quality

**ADDED/CLARIFIED:**

- ✅ Risk tiers affect warnings and audit frequency, NOT approval
- ✅ High-risk content publishes immediately with prominent warnings
- ✅ Quality gate failures → Block + improve system (not human review)
- ✅ Clear distinction: Sampling audits (improvement) vs. Content moderation (abuse)
- ✅ Moderator role clarified: Abuse only, NOT content quality
- ✅ User contributions apply immediately (Wikipedia model)
- ✅ Correction workflow for significant verdict changes
- ✅ Time evolution and re-evaluation workflow
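The "Update ClaimReview schema" step in the correction workflow (Section 9) refers to schema.org ClaimReview markup, the format search engines read for fact-check results. A minimal sketch with placeholder values, not real FactHarbor output:

```python
import json

# Hypothetical ClaimReview payload (schema.org/ClaimReview); all values
# here are placeholders for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Sea levels rose 3mm in 2023",
    "datePublished": "2025-12-21",
    "author": {"@type": "Organization", "name": "FactHarbor"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 4,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly accurate",
    },
}
payload = json.dumps(claim_review, indent=2)
```

When a verdict changes, re-emitting this payload alongside the correction banner keeps downstream consumers (search engines, API clients) in sync with the updated analysis.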