Workflows
FactHarbor workflows are simple, automated, and focused on continuous improvement.
1. Core Principles
- Automated by default: AI processes everything
- Publish immediately: No centralized approval (removed in V0.9.50)
- Quality through monitoring: Not gatekeeping
- Fix systems, not data: Errors trigger improvements
- Human-in-the-loop: Only for edge cases and abuse
2. Claim Submission Workflow
2.1 Claim Extraction
When users submit content (text, articles, web pages), FactHarbor first extracts individual verifiable claims:
Input Types:
- Single claim: "The Earth is flat"
- Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
- URLs: Web pages analyzed for factual claims
Extraction Process:
- LLM analyzes submitted content
- Identifies distinct, verifiable factual claims
- Separates claims from opinions, questions, or commentary
- Each claim becomes independent for processing
Output:
- List of claims with context
- Each claim assigned unique ID
- Original context preserved for reference
This extraction ensures:
- Each claim receives focused analysis
- Multiple claims in one submission are all processed
- Claims are properly isolated for independent verification
- Context is preserved for accurate interpretation
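A minimal sketch of this extraction step, assuming a `call_llm` helper that returns one candidate statement per list entry; the function, prompt, and data shape are illustrative, not FactHarbor's actual interface:
```python
from dataclasses import dataclass
from uuid import uuid4

@dataclass
class ExtractedClaim:
    claim_id: str   # unique ID assigned to each claim
    text: str       # the verifiable factual claim itself
    context: str    # original surrounding text, preserved for reference

def extract_claims(submission: str, call_llm) -> list[ExtractedClaim]:
    prompt = (
        "List each distinct, verifiable factual claim in the text below. "
        "Ignore opinions, questions, and commentary.\n\n" + submission
    )
    candidates = call_llm(prompt)  # assumed to return a list of strings
    return [
        ExtractedClaim(claim_id=str(uuid4()), text=c.strip(), context=submission)
        for c in candidates
        if c.strip()
    ]
```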
```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```
Timeline: Seconds
No approval needed
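The submission pipeline could look roughly like the sketch below; the in-memory duplicate index, queue, and `categorize` helper are stand-ins, not FactHarbor's real components:
```python
import hashlib
from collections import deque

seen_hashes: dict[str, str] = {}        # claim text hash -> existing claim ID
processing_queue: deque[str] = deque()  # claim IDs awaiting automated analysis

def submit_claim(claim_id: str, text: str, categorize) -> str:
    digest = hashlib.sha256(text.lower().strip().encode()).hexdigest()
    if digest in seen_hashes:           # duplicate detection: reuse existing ID
        return seen_hashes[digest]
    seen_hashes[digest] = claim_id
    category = categorize(text)         # e.g. "science", "politics", ...
    processing_queue.append(claim_id)   # queued with no approval step
    return claim_id                     # the ID returned to the user
```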
2.5 Claim and Scenario Workflow
This diagram shows how Claims are submitted and Scenarios are created and reviewed.
```mermaid
graph TB
    Start[User Submission<br/>Text/URL/Single Claim]
    Extract{Claim Extraction<br/>LLM Analysis}
    ValidateClaims{Validate Claims<br/>Clear & Distinct?}
    Single[Single Claim]
    Multi[Multiple Claims]
    Queue[Parallel Processing]
    Process[Process Claim<br/>AKEL Analysis]
    Evidence[Gather Evidence<br/>LLM + Sources]
    Scenarios[Generate Scenarios<br/>LLM Analysis]
    CrossRef[Cross-Reference<br/>Evidence & Scenarios]
    Verdict[Generate Verdict<br/>Confidence + Risk]
    Review{Confidence<br/>Check}
    Publish[Publish Verdict]
    HumanReview[Human Review Queue]

    Start --> Extract
    Extract --> ValidateClaims
    ValidateClaims -->|Valid| Single
    ValidateClaims -->|Valid| Multi
    ValidateClaims -->|Invalid| Start
    Single --> Process
    Multi --> Queue
    Queue -->|Each Claim| Process
    Process --> Evidence
    Process --> Scenarios
    Evidence --> CrossRef
    Scenarios --> CrossRef
    CrossRef --> Verdict
    Verdict --> Review
    Review -->|High Confidence| Publish
    Review -->|Low Confidence| HumanReview
    HumanReview --> Publish

    style Extract fill:#e1f5ff
    style Queue fill:#fff4e1
    style Process fill:#f0f0f0
    style HumanReview fill:#ffe1e1
```
3. Automated Analysis Workflow
```
Claim from queue
↓
Evidence gathering (AKEL)
↓
Source evaluation (track record check)
↓
Scenario generation
↓
Verdict synthesis
↓
Risk assessment
↓
Quality gates (confidence > 40%? risk < 80%?)
↓
Publish OR Flag for improvement
```
Timeline: 10-30 seconds
90%+ published automatically
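As a rough illustration of how these stages might chain together, the sketch below assumes an `akel` object exposing one method per stage; the method names are invented for this example and are not FactHarbor's real interface:
```python
def analyze_claim(claim, akel) -> dict:
    evidence = akel.gather_evidence(claim)                 # evidence gathering
    sources = akel.evaluate_sources(evidence)              # source track record check
    scenarios = akel.generate_scenarios(claim, evidence)   # scenario generation
    verdict = akel.synthesize_verdict(claim, scenarios, sources)  # verdict synthesis
    verdict["risk"] = akel.assess_risk(claim, verdict)     # risk assessment
    return verdict                                         # then checked by quality gates
```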
3.5 Evidence and Verdict Workflow
```mermaid
graph TD
    CLAIM[Claim] --> EVIDENCE[Evidence]
    EVIDENCE --> SOURCE[Source]
    SOURCE --> TRACK[Track Record Check]
    EVIDENCE --> SCENARIO[Scenario]
    SCENARIO --> VERDICT[Verdict]
    VERDICT --> CONFIDENCE[Confidence Score]
    TRACK --> QUALITY[Quality Score]
    QUALITY --> SCENARIO
    USER[User/Contributor] --> CLAIM
    USER --> EVIDENCE
    USER --> SCENARIO
    VERDICT --> DISPLAY[Display to Users]

    style CLAIM fill:#e1f5ff
    style VERDICT fill:#99ff99
    style CONFIDENCE fill:#ffff99
```
This diagram shows how Claim, Evidence, and Verdict relate:
- Claim: Starting point, the assertion to evaluate
- Evidence: Gathered from sources to support/refute claim
- Source: Checked for track record quality
- Scenario: Possible interpretations based on evidence
- Verdict: Synthesized conclusion with confidence score
- Users: Can contribute at any stage
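One way to picture how these entities relate is the following hypothetical set of data shapes; the field names and types are illustrative, not the production schema:
```python
from dataclasses import dataclass, field

@dataclass
class Source:
    url: str
    track_record: float = 50.0      # 0-100, new sources start neutral

@dataclass
class Evidence:
    text: str
    source: Source                  # where the evidence came from
    supports_claim: bool            # supports or refutes the claim

@dataclass
class Scenario:
    interpretation: str             # one possible reading of the evidence
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Verdict:
    conclusion: str
    confidence: float               # 0.0-1.0 confidence score
    scenarios: list[Scenario] = field(default_factory=list)
```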
4. Publication Workflow
Standard (90%+ of claims): Pass quality gates → Publish immediately with confidence scores
High risk (<10% of claims): Risk > 80% → Moderator review
Low quality: Confidence < 40% → Improvement queue → Re-process
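These routing rules can be expressed as a small decision function; the thresholds come from the text above, while the function and queue names are assumptions:
```python
def route_verdict(confidence: float, risk: float) -> str:
    if risk > 0.80:
        return "moderator_review"     # high risk: held for a moderator
    if confidence < 0.40:
        return "improvement_queue"    # low quality: re-processed later
    return "publish"                  # standard case, 90%+ of claims
```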
5. User Contribution Workflow
```
Contributor edits → System validates → Applied immediately → Logged → Reputation earned
```
No approval required (Wikipedia model)
New contributors (<50 reputation): Limited to minor edits
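A minimal sketch of these rules, assuming injected `validate`, `apply`, and `log` callbacks; the 50-reputation threshold comes from the text, everything else is illustrative:
```python
def apply_edit(contributor_reputation: int, edit_kind: str,
               validate, apply, log) -> bool:
    if contributor_reputation < 50 and edit_kind != "minor":
        return False                  # new contributors: minor edits only
    if not validate():                # automated validation, no human approval
        return False
    apply()                           # applied immediately (Wikipedia model)
    log()                             # every edit is logged
    return True
```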
5.5 Quality and Audit Workflow
This diagram shows quality gates and audit processes.
```mermaid
erDiagram
TECHNICAL_USER {
string SystemID PK
}
AUDITOR {
string ModeratorID PK
}
MAINTAINER {
string ModeratorID PK
}
CLAIM_VERSION {
string VersionID PK
}
VERDICT_VERSION {
string VersionID PK
}
QUALITY_GATE_LOG {
string LogID PK
string EntityVersionID FK
enum GateType "SourceQuality,ContradictionSearch,UncertaintyQuant,StructuralIntegrity"
boolean Passed
json Details
datetime ExecutedAt
}
AUDIT_RECORD {
string AuditID PK
string ModeratorID FK
string EntityVersionID FK
enum EntityType "Claim,Verdict"
enum Outcome "Pass,Fail"
json Feedback
datetime AuditedAt
}
AUDIT_POLICY {
string PolicyID PK
string ModeratorID FK
enum RiskTier "A,B,C"
float SamplingRate
json Rules
}
TECHNICAL_USER ||--o{ QUALITY_GATE_LOG : "executes"
QUALITY_GATE_LOG }o--|| CLAIM_VERSION : "validates"
QUALITY_GATE_LOG }o--|| VERDICT_VERSION : "validates"
AUDITOR ||--o{ AUDIT_RECORD : "creates"
AUDIT_RECORD }o--|| CLAIM_VERSION : "audits"
AUDIT_RECORD }o--|| VERDICT_VERSION : "audits"
MAINTAINER ||--o{ AUDIT_POLICY : "configures"
```
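As an illustration of how an AUDIT_POLICY row might drive auditing, the sketch below maps a risk tier to a sampling rate and decides whether a published version enters the human audit queue; the rates shown are made-up example values, not FactHarbor defaults:
```python
import random

AUDIT_POLICY = {"A": 0.50, "B": 0.10, "C": 0.01}   # RiskTier -> SamplingRate

def should_audit(risk_tier: str, rng: random.Random = random.Random()) -> bool:
    # Sample a fraction of published versions for human audit, by tier.
    return rng.random() < AUDIT_POLICY.get(risk_tier, 0.0)
```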
6. Flagging Workflow
```
User flags issue → Categorize (abuse/quality) → Automated or manual resolution
```
Quality issues: Add to improvement queue → System fix → Auto re-process
Abuse: Moderator review → Action taken
7. Moderation Workflow
Automated pre-moderation: 95% published automatically
Moderator queue: Only high-risk or flagged content
Appeal process: Different moderator → Governing Team if needed
8. System Improvement Workflow
Weekly cycle:
```
Monday: Review error patterns
Tuesday-Wednesday: Develop fixes
Thursday: Test improvements
Friday: Deploy & re-process
Weekend: Monitor metrics
```
Error capture:
```
Error detected → Categorize → Root cause → Improvement queue → Pattern analysis
```
A/B Testing:
```
New algorithm → Split traffic (90% control, 10% test) → Run 1 week → Compare metrics → Deploy if better
```
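A compact sketch of the split-and-compare step; the 90/10 split comes from the text, while the choice of accuracy as the comparison metric and the function names are assumptions:
```python
import random

def assign_arm(rng: random.Random = random.Random()) -> str:
    # 10% of traffic runs the candidate algorithm, 90% the current one.
    return "test" if rng.random() < 0.10 else "control"

def decide_deployment(control_accuracy: float, test_accuracy: float) -> bool:
    # After the one-week window, deploy only if the test arm performed better.
    return test_accuracy > control_accuracy
```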
9. Quality Monitoring Workflow
Continuous: Every hour calculate metrics, detect anomalies
Daily: Update source track records, aggregate error patterns
Weekly: System improvement cycle, performance review
10. Source Track Record Workflow
Initial score: New source starts at 50 (neutral)
Daily updates: Calculate accuracy, correction frequency, update score
Continuous: All claims using source recalculated when score changes
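One plausible way to express the daily score update; the neutral starting value of 50 comes from the text, while the smoothing weight and the penalty for corrections are assumptions made for the example:
```python
def update_track_record(current: float, accuracy: float, correction_rate: float) -> float:
    """accuracy and correction_rate are daily fractions in [0, 1]."""
    target = 100 * accuracy * (1 - correction_rate)   # penalize frequent corrections
    updated = 0.9 * current + 0.1 * target            # smooth daily adjustment
    return max(0.0, min(100.0, updated))              # keep score in 0-100

# Example: a new source (score 50) with 92% accuracy and 5% corrections today.
new_score = update_track_record(current=50.0, accuracy=0.92, correction_rate=0.05)
```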
11. Re-Processing Workflow
Triggers: System improvement deployed, source score updated, new evidence, error fixed
Process: Identify affected claims → Re-run AKEL → Compare → Update if better → Log change
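A minimal sketch of this loop, with placeholder callables standing in for the AKEL analysis, comparison, storage, and logging steps:
```python
def reprocess(affected_claims, analyze, is_better, store, log) -> None:
    for claim in affected_claims:
        new_verdict = analyze(claim)               # re-run AKEL analysis
        if is_better(new_verdict, claim.verdict):  # compare with current verdict
            store(claim, new_verdict)              # update only if better
            log(claim, new_verdict)                # change is logged
```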