Workflows
FactHarbor workflows are simple, automated, and focused on continuous improvement.
1. Core Principles
- Automated by default: AI processes everything
- Publish immediately: No centralized approval (removed in V0.9.50)
- Quality through monitoring: Not gatekeeping
- Fix systems, not data: Errors trigger improvements
- Human-in-the-loop: Only for edge cases and abuse
2. Claim Submission Workflow
```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```
Timeline: Seconds
No approval needed
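A minimal sketch of this flow in Python, assuming hypothetical in-memory structures for the processing queue and duplicate index (FactHarbor's actual storage and categorization logic are not specified here):
```python
import hashlib
import uuid

def submit_claim(text: str, queue: list, seen: dict) -> str:
    """Hypothetical handler: detect duplicates, categorize, enqueue, return a claim ID."""
    # Duplicate detection via a normalized-text hash (illustrative only)
    key = hashlib.sha256(text.strip().lower().encode()).hexdigest()
    if key in seen:
        return seen[key]  # existing claim ID; nothing new is queued

    claim_id = str(uuid.uuid4())
    # Placeholder categorization; the real categorizer is not documented here
    category = "statistical" if any(ch.isdigit() for ch in text) else "general"
    queue.append({"id": claim_id, "text": text, "category": category})
    seen[key] = claim_id
    return claim_id  # returned to the user within seconds, with no approval step
```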
3. Automated Analysis Workflow
```
Claim from queue
↓
Evidence gathering (AKEL)
↓
Source evaluation (track record check)
↓
Scenario generation
↓
Verdict synthesis
↓
Risk assessment
↓
Quality gates (confidence > 40%? risk < 80%?)
↓
Publish OR Flag for improvement
```
Timeline: 10-30 seconds
90%+ published automatically
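The pipeline and its quality gates can be sketched as a chain of step functions. Everything below is illustrative: the stubs stand in for the real AKEL components, and only the 40% confidence and 80% risk thresholds come from this document:
```python
from typing import Callable

# Stub stages standing in for the real AKEL components (illustrative values only).
def gather_evidence(state: dict) -> dict:
    state["evidence"] = ["placeholder evidence item"]
    return state

def evaluate_sources(state: dict) -> dict:
    state["source_quality"] = 0.7      # from the source track-record check
    return state

def generate_scenarios(state: dict) -> dict:
    state["scenarios"] = ["most plausible interpretation"]
    return state

def synthesize_verdict(state: dict) -> dict:
    state.update(verdict="likely accurate", confidence=0.62, risk=0.15)
    return state

PIPELINE: list[Callable[[dict], dict]] = [
    gather_evidence, evaluate_sources, generate_scenarios, synthesize_verdict,
]

def analyze(claim_text: str) -> dict:
    """Run a claim through each stage, then apply the documented quality gates."""
    state = {"claim": claim_text}
    for step in PIPELINE:
        state = step(state)
    # Quality gates: publish when confidence > 40% and risk < 80%
    state["publish"] = state["confidence"] > 0.40 and state["risk"] < 0.80
    return state
```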
3.5 Evidence and Verdict Workflow
```mermaid
graph TD
    CLAIM[Claim] --> EVIDENCE[Evidence]
    EVIDENCE --> SOURCE[Source]
    SOURCE --> TRACK[Track Record Check]
    EVIDENCE --> SCENARIO[Scenario]
    SCENARIO --> VERDICT[Verdict]
    VERDICT --> CONFIDENCE[Confidence Score]
    TRACK --> QUALITY[Quality Score]
    QUALITY --> SCENARIO
    USER[User/Contributor] --> CLAIM
    USER --> EVIDENCE
    USER --> SCENARIO
    VERDICT --> DISPLAY[Display to Users]
    style CLAIM fill:#e1f5ff
    style VERDICT fill:#99ff99
    style CONFIDENCE fill:#ffff99
```
The diagram shows how Claim, Evidence, and Verdict relate:
- Claim: Starting point, the assertion to evaluate
- Evidence: Gathered from sources to support or refute the claim
- Source: Checked for track record quality
- Scenario: Possible interpretations based on evidence
- Verdict: Synthesized conclusion with confidence score
- Users: Can contribute at any stage
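A possible data model for these relationships, sketched with dataclasses; the field names and types are assumptions, not FactHarbor's actual schema:
```python
from dataclasses import dataclass, field

# Hypothetical data model mirroring the diagram above.
@dataclass
class Source:
    url: str
    track_record: float = 50.0   # new sources start at the neutral score of 50

@dataclass
class Evidence:
    text: str
    source: Source

@dataclass
class Scenario:
    interpretation: str
    supporting: list[Evidence] = field(default_factory=list)

@dataclass
class Claim:
    text: str
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Verdict:
    claim: Claim
    conclusion: str
    confidence: float            # displayed to users alongside the verdict
    scenarios: list[Scenario] = field(default_factory=list)
```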
4. Publication Workflow
Standard (90%+): Pass quality gates → Publish immediately with confidence scores
High Risk (<10%): Risk > 80% → Moderator review
Low Quality: Confidence < 40% → Improvement queue → Re-process
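The three publication paths reduce to a small routing decision. The thresholds are the ones documented above; the return labels are illustrative:
```python
def route_for_publication(confidence: float, risk: float) -> str:
    """Route a processed claim: publish, escalate, or queue for improvement."""
    if risk > 0.80:
        return "moderator_review"     # high-risk minority (<10% of claims)
    if confidence < 0.40:
        return "improvement_queue"    # re-processed after a system fix
    return "publish"                  # standard path (90%+), published with its confidence score
```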
5. User Contribution Workflow
```
Contributor edits → System validates → Applied immediately → Logged → Reputation earned
```
No approval required (Wikipedia model)
New contributors (<50 reputation): Limited to minor edits
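A sketch of the contribution path, assuming a reputation integer and a hypothetical set of minor-edit categories; only the 50-reputation threshold and the apply-then-log behavior come from the text above:
```python
MINOR_EDIT_KINDS = {"typo_fix", "link_update", "formatting"}  # illustrative categories

def apply_edit(edit: dict, reputation: int, audit_log: list) -> bool:
    """Validate and apply a contributor edit immediately; there is no approval step."""
    # New contributors (< 50 reputation) are limited to minor edits
    if reputation < 50 and edit["kind"] not in MINOR_EDIT_KINDS:
        return False
    audit_log.append({"edit": edit, "reputation": reputation})  # every change is logged
    return True  # reputation is earned once the edit is applied
```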
6. Flagging Workflow
```
User flags issue → Categorize (abuse/quality) → Automated or manual resolution
```
Quality issues: Add to improvement queue → System fix → Auto re-process
Abuse: Moderator review → Action taken
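The triage step is a simple two-way split; the queue structures here are assumptions:
```python
def triage_flag(flag: dict, improvement_queue: list, moderator_queue: list) -> str:
    """Route a user flag: abuse goes to a moderator, quality issues feed system improvement."""
    if flag["category"] == "abuse":
        moderator_queue.append(flag)   # human review and action
        return "moderator"
    improvement_queue.append(flag)     # system fix, then automatic re-processing
    return "improvement"
```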
7. Moderation Workflow
Automated pre-moderation: 95% published automatically
Moderator queue: Only high-risk or flagged content
Appeal process: Different moderator → Governing Team if needed
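A small sketch of the appeal escalation, assuming a flat list of moderator names (the real assignment logic is not specified here):
```python
def escalate_appeal(original_moderator: str, moderators: list[str]) -> str:
    """Assign an appeal to a different moderator; fall back to the Governing Team."""
    for moderator in moderators:
        if moderator != original_moderator:
            return moderator
    return "governing_team"  # no other moderator available, escalate further
```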
8. System Improvement Workflow
Weekly cycle:
```
Monday: Review error patterns
Tuesday-Wednesday: Develop fixes
Thursday: Test improvements
Friday: Deploy & re-process
Weekend: Monitor metrics
```
Error capture:
```
Error detected → Categorize → Root cause → Improvement queue → Pattern analysis
```
A/B Testing:
```
New algorithm → Split traffic (90% control, 10% test) → Run 1 week → Compare metrics → Deploy if better
```
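The A/B split can be made deterministic so a claim always lands in the same arm. The 90/10 split and one-week run come from the text above; the hashing scheme and the comparison metric are assumptions:
```python
import hashlib

def assign_bucket(claim_id: str, test_fraction: float = 0.10) -> str:
    """Deterministically assign a claim to the control (90%) or test (10%) arm."""
    digest = int(hashlib.sha256(claim_id.encode()).hexdigest(), 16)
    return "test" if digest % 100 < test_fraction * 100 else "control"

def should_deploy(test_metrics: dict, control_metrics: dict) -> bool:
    """After the one-week run, deploy only if the test arm beats the control arm."""
    return test_metrics["accuracy"] > control_metrics["accuracy"]
```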
9. Quality Monitoring Workflow
Continuous: Calculate metrics and detect anomalies every hour
Daily: Update source track records, aggregate error patterns
Weekly: System improvement cycle, performance review
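One common way to implement the hourly anomaly check is a z-score against recent history; the method and threshold here are assumptions, not FactHarbor's documented approach:
```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag an hourly metric that deviates strongly from its recent history."""
    if len(history) < 2:
        return False                      # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(current - mu) / sigma > z_threshold
```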
10. Source Track Record Workflow
Initial score: New source starts at 50 (neutral)
Daily updates: Calculate accuracy, correction frequency, update score
Continuous: All claims using a source are recalculated when its score changes
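A sketch of the daily update. The neutral starting score of 50 is documented; the blending weights and the accuracy/correction inputs are illustrative assumptions:
```python
def update_track_record(current: float, accuracy: float, correction_rate: float) -> float:
    """Blend yesterday's score with today's signal; scores stay on a 0-100 scale."""
    daily_signal = 100 * accuracy - 20 * correction_rate   # weights are illustrative
    new_score = 0.9 * current + 0.1 * daily_signal         # smooth toward the new signal
    return max(0.0, min(100.0, new_score))

# Example: a new source (neutral 50) with 80% accuracy and few corrections drifts upward.
print(update_track_record(50.0, accuracy=0.80, correction_rate=0.05))  # ≈ 52.9
```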
11. Re-Processing Workflow
Triggers: System improvement deployed, source score updated, new evidence, error fixed
Process: Identify affected claims → Re-run AKEL → Compare → Update if better → Log change
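A sketch of the re-processing step, assuming the `analyze` callable from the analysis sketch above and using confidence as the "better" comparison (the actual comparison criteria are not specified here):
```python
def reprocess(claim: dict, analyze, change_log: list) -> dict:
    """Re-run analysis and keep the new verdict only if it scores better."""
    result = analyze(claim["text"])
    if result["confidence"] > claim.get("confidence", 0.0):
        change_log.append({"claim": claim["text"],
                           "old_verdict": claim.get("verdict"),
                           "new_verdict": result["verdict"]})
        claim.update(result)   # update the published record
    return claim
```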