Workflows
Last modified by Robert Schaub on 2025/12/22 14:33
Version: 0.9.70
Last Updated: December 21, 2025
Status: CORRECTED - Automation Philosophy Consistent

This page describes FactHarbor's core workflows under the automation-first philosophy.

== 1. Core Workflow Principles ==

- Automation First: 90%+ of content is published automatically
- No Approval Bottlenecks: No centralized review queues
- Quality Gates: Automated validation before publication
- Sampling Audits: Pattern analysis for system improvement
- Transparent Confidence: All outputs labeled with confidence scores

== 2. Claim Submission Workflow ==

=== 2.1 Claim Extraction ===

When users submit content (text, articles, web pages), FactHarbor first extracts individual verifiable claims.

Input Types:
- Single claim: "The Earth is flat"
- Text with multiple claims: "Climate change is accelerating. Sea levels rose 3mm in 2023. Arctic ice decreased 13% annually."
- URLs: Web pages analyzed for factual claims

Extraction Process:
- LLM analyzes submitted content
- Identifies distinct, verifiable factual claims
- Separates claims from opinions, questions, or commentary
- Each claim becomes an independent unit for processing

Output:
- List of claims with context
- Each claim assigned unique ID
- Original context preserved for reference

This extraction ensures:
- Each claim receives focused analysis
- Multiple claims in one submission are all processed
- Claims are properly isolated for independent verification
- Context is preserved for accurate interpretation

Flow:
```
User submits → Duplicate detection → Categorization → Processing queue → User receives ID
```

Timeline: Seconds
No approval needed: Instant processing

== 3. Automated Analysis Workflow ==

Complete Pipeline:

```
Claim from queue
↓
Evidence gathering (AKEL)
↓
Source evaluation (track record check)
↓
Scenario generation
↓
Verdict synthesis
↓
Risk assessment
↓
Quality gates validation
↓
Decision: PUBLISH or BLOCK
```

Timeline: 10-30 seconds
Automation Rate: 90%+ published automatically

=== 3.1 Quality Gates Decision ===

Gate Validation:
1. Gate 1: Source Quality ✓
2. Gate 2: Contradiction Search ✓
3. Gate 3: Uncertainty Quantification ✓
4. Gate 4: Structural Integrity ✓

If ALL gates PASS:
→ Publish immediately (Mode 2: AI-Generated)
→ Apply appropriate risk tier label
→ Display confidence score
→ Make available for sampling audit

If ANY gate FAILS:
→ Block publication (Mode 1: Draft-Only)
→ Log failure reason
→ Analyze failure pattern
→ Queue system improvement task
→ May re-process after improvements

CRITICAL: No human approval step - gates are automated.

== 4. Publication Workflow ==

V0.9.70 CLARIFIED: Risk tiers affect LABELS and AUDIT FREQUENCY, NOT approval requirements.

=== Standard Flow (90%+) ===

```
Pass quality gates
↓
Determine risk tier (A/B/C)
↓
Apply appropriate labels
↓
PUBLISH IMMEDIATELY
↓
Add to audit sampling pool
```

No delays, no approval queues

=== High-Risk Content (Tier A - <10%) ===

V0.9.70 CORRECTION:

```
Pass quality gates
↓
Identified as Tier A (medical/legal/safety)
↓
PUBLISH IMMEDIATELY with prominent warnings
↓
Higher sampling audit frequency (50%)
```

What changed from V0.9.69:

- ❌ REMOVED: "Risk > 80% → Moderator review"
- ✅ ADDED: "Risk > 80% → Publish with WARNING labels"

Philosophy: Publish with strong warnings, monitor closely through sampling.

Warning Labels for Tier A:
```
⚠️ HIGH-IMPACT TOPIC
AI-Generated Analysis

This claim involves [medical/legal/financial/safety] topics.
- Confidence: [X]%
- Last Updated: [timestamp]
- This is NOT professional advice
- Consult qualified professionals for decisions

[View Evidence] [See Methodology] [Report Issue]
```

=== Low Quality Content (<10%) ===

```
FAIL quality gates
↓
Confidence < threshold OR structural issues
↓
BLOCK (Mode 1: Draft-Only)
↓
Log failure patterns
↓
Queue for system improvement
```

NOT: Send for human review
IS: Improve prompts/algorithms based on failure patterns

== 5. User Contribution Workflow ==

Philosophy: Wikipedia-style immediate application + audit trail

```
Contributor edits published content
↓
System validates (basic checks)
↓
Applied IMMEDIATELY
↓
Logged in version history
↓
Reputation earned
↓
May be selected for sampling audit
```

No approval required: Changes apply instantly
Quality control: Through sampling audits and reputation system
New contributors (<50 reputation): Limited to minor edits

== 6. Sampling Audit Workflow ==

Purpose: Improve system quality through pattern analysis

=== 6.1 Selection Process ===

```
Published content
↓
Stratified sampling (by risk tier, confidence, traffic)
↓
Selected for audit (Tier A: 50%, B: 20%, C: 5%)
↓
Added to audit queue
```

=== 6.2 Audit Execution ===

```
Auditor receives sample
↓
Reviews against quality standards
↓
Identifies issues/patterns
↓
Logs findings
↓
System improvement tasks created
```

What auditors DO:
- ✅ Analyze patterns across multiple outputs
- ✅ Identify systematic issues
- ✅ Recommend algorithm/prompt improvements
- ✅ Track accuracy trends

What auditors DON'T DO:
- ❌ Approve individual outputs before publication
- ❌ Manually fix individual outputs
- ❌ Act as gatekeepers
- ❌ Override quality gates

=== 6.3 Improvement Loop ===

```
Audit findings aggregated
↓
Patterns identified
↓
System improvements proposed
↓
Implemented and tested
↓
Deployed
↓
Metrics monitored
```

Examples of Improvements:

- Refine evidence search queries
- Adjust source reliability weights
- Enhance contradiction detection
- Improve claim extraction prompts
- Recalibrate risk tier thresholds

== 7. Flagging Workflow ==

Two types of flags:

=== 7.1 Quality Issues ===

```
User flags quality issue
↓
Categorized automatically
↓
Added to sampling audit pool (priority)
↓
Pattern analysis
↓
System improvement if pattern found
```

NOT: Manual correction of individual claims
IS: Improve system to prevent similar issues

=== 7.2 Abuse/Spam ===

```
User flags abuse/spam
↓
Automated pre-moderation check
↓
Moderator review (if needed)
↓
Action taken (hide/ban)
```

Moderator role: Handle abuse/spam, NOT content quality

== 8. Moderation Workflow ==

V0.9.70 CLARIFIED: Moderators handle ABUSE, not content quality

=== 8.1 Content Moderation (Abuse/Spam) ===

Moderator Queue Contains:

- Flagged abusive content
- Spam detection alerts
- Harassment reports
- Privacy violations
- Terms of service violations

Moderator Actions:
- Hide abusive content
- Ban repeat offenders
- Handle appeals
- Escalate to the Governing Team

Moderators DO NOT:
- ❌ Approve content for publication
- ❌ Review content quality before publication
- ❌ Act as editorial gatekeepers
- ❌ Manually fix AI outputs

=== 8.2 Appeal Process ===

```
User disagrees with moderation
↓
Appeals to different moderator
↓
If still disagrees, escalates to Governing Team
↓
Governing Team decision (final)
```

== 9. Time Evolution Workflow ==

Automatic Re-evaluation:

```
Published claim
↓
Monitoring for triggers:
- New evidence published
- Source retractions
- Significant events
- Scheduled review
↓
Trigger detected
↓
AKEL re-processes claim
↓
Quality gates validate
↓
If verdict changes: Correction workflow
↓
If passes: Update published analysis
```

Correction Workflow (New in V0.9.70):

```
Verdict changed significantly
↓
Generate correction notice
↓
Publish correction banner (30 days)
↓
Update corrections log
↓
Notify users (email, RSS, API)
↓
Update ClaimReview schema
```

== 10. Contributor Journey ==

1. Visitor – Explores platform, reads documentation
2. New Contributor – Submits first improvements (typo fixes, clarifications)
3. Contributor – Contributes regularly, follows conventions
4. Trusted Contributor – Track record of quality work
5. Reviewer – Participates in sampling audits (pattern analysis)
6. Moderator – Handles abuse/spam (not content quality)
7. Expert (optional) – Provides domain expertise for contested claims

All contributions apply immediately - no approval workflow

== 11. Related Pages ==

- AKEL - AI processing system
- Architecture - System architecture
- Requirements - Requirements and roles
- Decision Processes - Governance

V0.9.70 CHANGES:

REMOVED:
- ❌ "High Risk → Moderator review" (was approval workflow)
- ❌ "Review queue" language for publication
- ❌ Any implication that moderators approve content quality

ADDED/CLARIFIED:
- ✅ Risk tiers affect warnings and audit frequency, NOT approval
- ✅ High-risk content publishes immediately with prominent warnings
- ✅ Quality gate failures → Block + improve system (not human review)
- ✅ Clear distinction: Sampling audits (improvement) vs. Content moderation (abuse)
- ✅ Moderator role clarified: Abuse only, NOT content quality
- ✅ User contributions apply immediately (Wikipedia model)
- ✅ Correction workflow for significant verdict changes
- ✅ Time evolution and re-evaluation workflow
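
The quality-gate decision (Section 3.1) combined with the tier-based labeling and audit sampling (Sections 4 and 6.1) can be sketched as a small piece of Python. This is a minimal illustration of the logic described above, not FactHarbor's actual implementation: the names `GateResult`, `publication_decision`, and `AUDIT_RATES` are hypothetical.

```python
import random
from dataclasses import dataclass

# Sampling-audit rates by risk tier (Section 6.1: A = 50%, B = 20%, C = 5%).
AUDIT_RATES = {"A": 0.50, "B": 0.20, "C": 0.05}

@dataclass
class GateResult:
    name: str
    passed: bool

def publication_decision(gates, risk_tier, confidence):
    """Return the publication outcome for one analyzed claim.

    All four gates must pass for immediate publication (Mode 2);
    any failure blocks the output (Mode 1) and logs the reason for
    pattern analysis. There is no human approval step either way.
    """
    failed = [g.name for g in gates if not g.passed]
    if failed:
        # Blocked output feeds the system-improvement loop, not a review queue.
        return {"action": "BLOCK", "mode": "Mode 1: Draft-Only", "log": failed}
    return {
        "action": "PUBLISH",
        "mode": "Mode 2: AI-Generated",
        "confidence": confidence,
        "warning": risk_tier == "A",  # Tier A publishes with prominent warnings
        "audit": random.random() < AUDIT_RATES[risk_tier],  # sampling pool
    }
```

Note that the risk tier only changes the label and the audit probability; the publish/block decision itself depends solely on the gates, which is the V0.9.70 correction this page documents.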