Workflows
This page describes the core workflows for content creation, review, and publication in FactHarbor.
1. Overview
FactHarbor workflows support three publication modes with risk-based review:
- Mode 1 (Draft): Internal only; content failed quality gates or is pending review
- Mode 2 (AI-Generated): Public with an AI-generated label; passed all quality gates
- Mode 3 (Human-Reviewed): Public with human-reviewed status; highest trust level
Workflows vary by Risk Tier (A/B/C) and Content Type (Claim, Scenario, Evidence, Verdict).
2. Claim Submission & Publication Workflow
2.1 Step 1: Claim Submission
Actor: Contributor or AKEL
Actions:
- Submit claim text
- Provide initial sources (optional for human contributors, mandatory for AKEL)
- System assigns initial AuthorType (Human or AI)
Output: Claim draft created
2.2 Step 2: AKEL Processing
Automated Steps:
1. Claim extraction and normalization
2. Classification (domain, type, evaluability)
3. Risk tier assignment (A/B/C suggested)
4. Initial scenario generation
5. Evidence search
6. Contradiction search (mandatory)
7. Quality gate validation
Output: Processed claim with risk tier and quality gate results
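The automated steps above run in a fixed order, which can be sketched as a simple pipeline. The `Claim` fields and the three placeholder step functions below are illustrative assumptions standing in for the real AKEL interfaces, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    # Hypothetical claim record; field names are illustrative.
    text: str
    risk_tier: str = ""
    gate_results: dict = field(default_factory=dict)

# Placeholder steps standing in for the real extraction, classification,
# scenario-generation, evidence-search, and gate-validation logic:
def normalize(c: Claim) -> Claim:
    c.text = " ".join(c.text.split())  # collapse whitespace as a stand-in
    return c

def assign_risk_tier(c: Claim) -> Claim:
    c.risk_tier = c.risk_tier or "C"   # default to low risk in this sketch
    return c

def search_contradictions(c: Claim) -> Claim:
    c.gate_results["contradiction_search"] = True  # mandatory step
    return c

PIPELINE = [normalize, assign_risk_tier, search_contradictions]

def run_akel_pipeline(claim: Claim) -> Claim:
    """Run the automated steps in their fixed order (sketch)."""
    for step in PIPELINE:
        claim = step(claim)
    return claim
```

In the real workflow each step would be a substantial component; the point of the sketch is only that the output carries both a risk tier and quality-gate results into the checkpoint that follows.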
2.3 Step 3: Quality Gate Checkpoint
Gates Evaluated:
- Source quality
- Contradiction search completion
- Uncertainty quantification
- Structural integrity
Outcomes:
- All gates pass → Proceed to Mode 2 publication (if Tier B or C)
- Any gate fails → Mode 1 (Draft), flag for human review
- Tier A → Mode 2 with warnings + auto-escalate to expert queue
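The checkpoint outcomes above amount to a small routing function. The gate names and return values in this sketch are illustrative, not the system's actual API.

```python
def checkpoint_outcome(gates: dict[str, bool], risk_tier: str) -> tuple[str, list[str]]:
    """Route a claim after the quality-gate checkpoint (sketch).

    gates maps gate name -> pass/fail; risk_tier is "A", "B", or "C".
    Returns the publication mode plus any handling notes.
    """
    if not all(gates.values()):
        failed = [name for name, ok in gates.items() if not ok]
        return "Mode 1", [f"failed gates: {', '.join(failed)}", "flag for human review"]
    if risk_tier == "A":
        return "Mode 2", ["display warnings", "auto-escalate to expert queue"]
    return "Mode 2", []  # Tier B/C: publish as AI-generated
```

Note that a failed gate wins over the risk tier: Tier A content with a failed gate goes to Mode 1 like everything else, and the warnings/escalation path applies only to Tier A content that passed all gates.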
2.4 Step 4: Publication (Risk-Tier Dependent)
Tier C (Low Risk):
- Direct to Mode 2: AI-generated, public, clearly labeled
- User can request human review
- Sampling audit applies
Tier B (Medium Risk):
- Direct to Mode 2: AI-generated, public, clearly labeled
- Higher audit sampling rate
- High-engagement content may auto-escalate
Tier A (High Risk):
- Mode 2 with warnings: AI-generated, public, prominent disclaimers
- Auto-escalated to expert review queue
- User warnings displayed
- Highest audit sampling rate
2.5 Step 5: Human Review (Optional for B/C, Escalated for A)
Triggers:
- User requests review
- Audit flags issues
- High engagement (Tier B)
- Automatic (Tier A)
Process:
1. Reviewer/Expert examines claim
2. Validates quality gates
3. Checks contradiction search results
4. Assesses risk tier appropriateness
5. Decision: Approve, Request Changes, or Reject
Outcomes:
- Approved → Mode 3 (Human-Reviewed)
- Changes Requested → Back to contributor or AKEL for revision
- Rejected → Rejected status with reasoning
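The three review outcomes map naturally to a lookup. The decision strings and resulting statuses below are illustrative names, not the platform's real identifiers; the one rule the sketch enforces is that every decision records its reasoning.

```python
# Reviewer decision -> resulting status (names are illustrative)
REVIEW_OUTCOMES = {
    "approve": "Mode 3",          # Human-Reviewed
    "request_changes": "Mode 1",  # back to contributor/AKEL for revision
    "reject": "Rejected",         # rejected status with reasoning
}

def apply_review_decision(decision: str, reasoning: str) -> dict:
    """Resolve a reviewer decision, always recording the reasoning (sketch)."""
    if decision not in REVIEW_OUTCOMES:
        raise ValueError(f"unknown decision: {decision!r}")
    return {"status": REVIEW_OUTCOMES[decision], "reasoning": reasoning}
```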
3. Scenario Creation Workflow
3.1 Step 1: Scenario Generation
Automated (AKEL):
- Generate scenarios for claim
- Define boundaries, assumptions, context
- Identify evaluation methods
Manual (Expert/Reviewer):
- Create custom scenarios
- Refine AKEL-generated scenarios
- Add domain-specific nuances
3.2 Step 2: Scenario Validation
Quality Checks:
- Completeness (definitions, boundaries, assumptions clear)
- Relevance to claim
- Evaluability
- No circular logic
Risk Tier Assignment:
- Inherits from parent claim
- Can be overridden by expert if scenario increases/decreases risk
3.3 Step 3: Scenario Publication
Mode 2 (AI-Generated):
- Tier B/C scenarios can publish immediately
- Subject to sampling audits
Mode 1 (Draft):
- Tier A scenarios default to draft
- Require expert validation for Mode 2 or Mode 3
4. Evidence Evaluation Workflow
4.1 Step 1: Evidence Search & Retrieval
AKEL Actions:
- Search academic databases, reputable media
- Mandatory contradiction search (counter-evidence, reservations)
- Extract metadata (author, date, publication, methodology)
- Assess source reliability
Quality Requirements:
- Primary sources preferred
- Diverse perspectives included
- Echo chambers flagged
- Conflicting evidence acknowledged
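The quality requirements above can be checked mechanically over an evidence set. The `EvidenceItem` fields and the thresholds here (for instance, "fewer than two distinct sources" as the echo-chamber heuristic) are illustrative assumptions, not the actual FactHarbor schema or rules.

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    # Illustrative fields; not the actual FactHarbor schema.
    source: str
    is_primary: bool
    stance: str  # "supports", "refutes", or "mixed"

def evidence_quality_flags(items: list[EvidenceItem]) -> list[str]:
    """Flag evidence sets that miss the quality requirements above (sketch)."""
    flags = []
    if not any(i.is_primary for i in items):
        flags.append("no primary sources")
    if len({i.source for i in items}) < 2:
        flags.append("possible echo chamber: fewer than two distinct sources")
    if not any(i.stance in ("refutes", "mixed") for i in items):
        flags.append("no conflicting evidence acknowledged")
    return flags
```

An empty flag list means the set satisfies all four requirements; any non-empty result would feed the quality-gate checks in the next step.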
4.2 Step 2: Evidence Summarization
AKEL Generates:
- Summary of evidence
- Relevance assessment
- Reliability score
- Limitations and caveats
- Conflicting evidence summary
Quality Gate: Structural integrity, source quality
4.3 Step 3: Evidence Review
Reviewer/Expert Validates:
- Accuracy of summaries
- Appropriateness of sources
- Completeness of contradiction search
- Reliability assessments
Outcomes:
- Mode 2: Evidence summaries published as AI-generated
- Mode 3: After human validation
- Mode 1: Failed quality checks or pending expert review
5. Verdict Generation Workflow
5.1 Step 1: Verdict Computation
AKEL Computes:
- Verdict across scenarios
- Confidence scores
- Uncertainty quantification
- Key assumptions
- Limitations
Inputs:
- Claim text
- Scenario definitions
- Evidence assessments
- Contradiction search results
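One way to picture the computation: aggregate per-scenario confidences into an overall confidence, reporting the spread across scenarios as an explicit uncertainty measure. The aggregation formula below (mean confidence, population standard deviation as uncertainty) is an illustrative assumption, not the documented AKEL method.

```python
from statistics import mean, pstdev

def compute_verdict(scenario_confidences: dict[str, float]) -> dict:
    """Aggregate per-scenario confidence into a verdict summary (sketch).

    scenario_confidences maps scenario id -> confidence in [0, 1].
    Uncertainty is reported as the spread across scenarios, so strongly
    disagreeing scenarios yield a high uncertainty value.
    """
    values = list(scenario_confidences.values())
    return {
        "confidence": round(mean(values), 3),
        "uncertainty": round(pstdev(values), 3) if len(values) > 1 else 0.0,
        "scenarios": sorted(scenario_confidences),
    }
```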
5.2 Step 2: Verdict Validation
Quality Gates:
- All four gates apply (source, contradiction, uncertainty, structure)
- Reasoning chain must be traceable
- Assumptions must be explicit
Risk Tier Check:
- Tier A: Always requires expert validation for Mode 3
- Tier B: Mode 2 allowed, audit sampling
- Tier C: Mode 2 default
5.3 Step 3: Verdict Publication
Mode 2 (AI-Generated Verdict):
- Clear labeling with confidence scores
- Uncertainty disclosure
- Links to reasoning trail
- User can request expert review
Mode 3 (Expert-Validated Verdict):
- Human reviewer/expert stamp
- Additional commentary (optional)
- Highest trust level
6. Audit Workflow
6.1 Step 1: Audit Sampling Selection
Stratified Sampling:
- Risk tier priority (A > B > C)
- Low confidence scores
- High traffic content
- Novel topics
- User flags
Sampling Rates (Recommendations):
- Tier A: 30-50%
- Tier B: 10-20%
- Tier C: 5-10%
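The stratified selection above can be sketched as per-tier random sampling with user-flagged items always included. The rates here take the midpoints of the recommended ranges; the item shape and the rule that flags override sampling are illustrative assumptions.

```python
import random

# Midpoints of the recommended ranges above (assumed, tunable per policy)
SAMPLING_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items: list[dict], rng: random.Random) -> list[dict]:
    """Stratified audit sampling by risk tier (sketch).

    Each item is a dict with at least a "tier" key. User-flagged items
    are always selected; everything else is sampled at its tier's rate.
    """
    selected = []
    for item in items:
        if item.get("user_flagged"):
            selected.append(item)
        elif rng.random() < SAMPLING_RATES[item["tier"]]:
            selected.append(item)
    return selected
```

Passing an explicit `random.Random` instance keeps the selection reproducible for audit logs, which matters if sampling decisions themselves need to be auditable.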
6.2 Step 2: Audit Execution
Auditor Actions:
1. Review sampled AI-generated content
2. Validate quality gates were properly applied
3. Check contradiction search completeness
4. Assess reasoning quality
5. Identify errors or hallucinations
Audit Outcome:
- Pass: Content remains in Mode 2, logged as validated
- Fail: Content flagged for review, system improvement triggered
6.3 Step 3: Feedback Loop
System Improvements:
- Failed audits analyzed for patterns
- AKEL parameters adjusted
- Quality gates refined
- Risk tier assignments recalibrated
Transparency:
- Audit statistics published periodically
- Patterns shared with community
- System improvements documented
7. Mode Transition Workflow
7.1 Mode 1 → Mode 2
Requirements:
- All quality gates pass
- Risk tier B or C (or A with warnings)
- Contradiction search completed
Trigger: Automatic upon quality gate validation
7.2 Mode 2 → Mode 3
Requirements:
- Human reviewer/expert validation
- Quality standards confirmed
- For Tier A: Expert approval required
- For Tier B/C: Reviewer approval sufficient
Trigger: Human review completion
7.3 Mode 3 → Mode 1 (Demotion)
Rare; occurs only if:
- New evidence contradicts verdict
- Error discovered in reasoning
- Source retraction
Process:
1. Content flagged for re-evaluation
2. Moved to draft (Mode 1)
3. Re-processed through workflow
4. Reason for demotion documented
8. User Actions Across Modes
8.1 On Mode 1 (Draft) Content
Contributors:
- Edit their own drafts
- Submit for review
Reviewers/Experts:
- View and comment
- Request changes
- Approve for Mode 2 or Mode 3
8.2 On Mode 2 (AI-Generated) Content
All Users:
- Read and use content
- Request human review
- Flag for expert attention
- Provide feedback
Reviewers/Experts:
- Validate for Mode 3 transition
- Edit and refine
- Adjust risk tier if needed
8.3 On Mode 3 (Human-Reviewed) Content
All Users:
- Read with highest confidence
- Still can flag if new evidence emerges
Reviewers/Experts:
- Update if needed
- Trigger re-evaluation if new evidence
9. Diagram References
9.1 Claim and Scenario Lifecycle (Overview)
```mermaid
flowchart TD
    classDef human fill:#fff,stroke:#333,stroke-width:1px;
    classDef ai fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,stroke-dasharray: 5 5;
    classDef phase fill:#f5f5f5,stroke:#999,stroke-width:1px;

    %% 1. Claim Submission
    subgraph Submission ["1. Claim Submission"]
        direction TB
        Input[User/Source Input] --> Normalise[AI/Human Normalisation]
        Normalise:::ai --> Cluster[Identify Claim Cluster]
        Cluster:::ai --> DraftScen[Draft Initial Scenarios]
    end

    %% 2. Scenario Building
    subgraph Scenarios ["2. Scenario Building"]
        direction TB
        DraftScen:::ai --> Defs[Define Assumptions & Boundaries]
        Defs:::ai --> Generation[AKEL Generates Scenarios]
    end

    %% 3. Evidence Handling
    subgraph Evidence ["3. Evidence Handling"]
        direction TB
        Retrieval[AI Retrieval & Summary] --> Assess[Human Quality Assessment]
        Retrieval:::ai --> Assess:::human
        Assess --> Link[Link Evidence to Scenarios]
        Link:::human
    end

    %% 4. Verdict Creation
    subgraph Verdicts ["4. Verdict Creation"]
        direction TB
        DraftVer[AI Draft Verdict] --> Refine[Human Refinement]
        DraftVer:::ai --> Refine:::human
        Refine --> Reason[Explain Reasoning]
        Reason:::human --> ApproveVer[Verdict Approval]
    end

    %% 5. Public Presentation
    subgraph Public ["5. Public Presentation"]
        direction TB
        Summary[Concise Summary]
        Landscape[Truth Landscape Comparison]
        DeepDive[Deep Dive Evidence Access]
    end

    %% Flow connections between phases
    Submission --> Scenarios
    Scenarios --> Evidence
    Evidence --> Verdicts
    Verdicts --> Public

    %% 6. Time Evolution (Feedback Loop)
    subgraph Evolution ["6. Time Evolution"]
        NewEv[New Evidence / Correction]
    end
    Public -.-> NewEv
    NewEv -.-> Evidence
```
9.2 Claim and Scenario Workflow
This diagram shows how Claims are submitted and Scenarios are created and reviewed.
```mermaid
graph TB
    Start[User Submission<br/>Text/URL/Single Claim]
    Extract{Claim Extraction<br/>LLM Analysis}
    ValidateClaims{Validate Claims<br/>Clear & Distinct?}
    Single[Single Claim]
    Multi[Multiple Claims]
    Queue[Parallel Processing]
    Process[Process Claim<br/>AKEL Analysis]
    Evidence[Gather Evidence<br/>LLM + Sources]
    Scenarios[Generate Scenarios<br/>LLM Analysis]
    CrossRef[Cross-Reference<br/>Evidence & Scenarios]
    Verdict[Generate Verdict<br/>Confidence + Risk]
    Review{Confidence<br/>Check}
    Publish[Publish Verdict]
    HumanReview[Human Review Queue]

    Start --> Extract
    Extract --> ValidateClaims
    ValidateClaims -->|Valid| Single
    ValidateClaims -->|Valid| Multi
    ValidateClaims -->|Invalid| Start
    Single --> Process
    Multi --> Queue
    Queue -->|Each Claim| Process
    Process --> Evidence
    Process --> Scenarios
    Evidence --> CrossRef
    Scenarios --> CrossRef
    CrossRef --> Verdict
    Verdict --> Review
    Review -->|High Confidence| Publish
    Review -->|Low Confidence| HumanReview
    HumanReview --> Publish

    style Extract fill:#e1f5ff
    style Queue fill:#fff4e1
    style Process fill:#f0f0f0
    style HumanReview fill:#ffe1e1
```
9.3 Evidence and Verdict Workflow
This diagram shows how Claim, Evidence, and Verdict relate:

```mermaid
graph TD
    CLAIM[Claim] --> EVIDENCE[Evidence]
    EVIDENCE --> SOURCE[Source]
    SOURCE --> TRACK[Track Record Check]
    EVIDENCE --> SCENARIO[Scenario]
    SCENARIO --> VERDICT[Verdict]
    VERDICT --> CONFIDENCE[Confidence Score]
    TRACK --> QUALITY[Quality Score]
    QUALITY --> SCENARIO
    USER[User/Contributor] --> CLAIM
    USER --> EVIDENCE
    USER --> SCENARIO
    VERDICT --> DISPLAY[Display to Users]

    style CLAIM fill:#e1f5ff
    style VERDICT fill:#99ff99
    style CONFIDENCE fill:#ffff99
```
- Claim: Starting point, the assertion to evaluate
- Evidence: Gathered from sources to support/refute claim
- Source: Checked for track record quality
- Scenario: Possible interpretations based on evidence
- Verdict: Synthesized conclusion with confidence score
- Users: Can contribute at any stage
9.4 Quality and Audit Workflow
This diagram shows quality gates and audit processes.
```mermaid
erDiagram
    TECHNICAL_USER {
        string SystemID PK
    }
    AUDITOR {
        string ModeratorID PK
    }
    MAINTAINER {
        string ModeratorID PK
    }
    CLAIM_VERSION {
        string VersionID PK
    }
    VERDICT_VERSION {
        string VersionID PK
    }
    QUALITY_GATE_LOG {
        string LogID PK
        string EntityVersionID FK
        enum GateType "SourceQuality,ContradictionSearch,UncertaintyQuant,StructuralIntegrity"
        boolean Passed
        json Details
        datetime ExecutedAt
    }
    AUDIT_RECORD {
        string AuditID PK
        string ModeratorID FK
        string EntityVersionID FK
        enum EntityType "Claim,Verdict"
        enum Outcome "Pass,Fail"
        json Feedback
        datetime AuditedAt
    }
    AUDIT_POLICY {
        string PolicyID PK
        string ModeratorID FK
        enum RiskTier "A,B,C"
        float SamplingRate
        json Rules
    }
    TECHNICAL_USER ||--o{ QUALITY_GATE_LOG : "executes"
    QUALITY_GATE_LOG }o--|| CLAIM_VERSION : "validates"
    QUALITY_GATE_LOG }o--|| VERDICT_VERSION : "validates"
    AUDITOR ||--o{ AUDIT_RECORD : "creates"
    AUDIT_RECORD }o--|| CLAIM_VERSION : "audits"
    AUDIT_RECORD }o--|| VERDICT_VERSION : "audits"
    MAINTAINER ||--o{ AUDIT_POLICY : "configures"
```
9.5 Manual vs Automated Matrix
This diagram shows which responsibilities are automated by AKEL and which remain with humans.
```mermaid
graph TD
    subgraph "Automated by AKEL"
        A1["Claim Evaluation<br/>- Evidence extraction<br/>- Source scoring<br/>- Verdict generation<br/>- Risk classification<br/>- Publication"]
        A2["Quality Assessment<br/>- Contradiction detection<br/>- Confidence scoring<br/>- Pattern recognition<br/>- Anomaly flagging"]
        A3["Content Management<br/>- Scenario generation<br/>- Evidence linking<br/>- Source tracking<br/>- Version control"]
    end
    subgraph "Human Responsibilities"
        H1["Algorithm Improvement<br/>- Monitor performance metrics<br/>- Identify systematic issues<br/>- Propose fixes<br/>- Test improvements<br/>- Deploy updates"]
        H2["Policy Governance<br/>- Set evaluation criteria<br/>- Define risk tiers<br/>- Establish thresholds<br/>- Update guidelines"]
        H3["Exception Handling<br/>- Review AKEL-flagged items<br/>- Handle abuse/manipulation<br/>- Address safety concerns<br/>- Manage legal issues"]
        H4["Strategic Decisions<br/>- Budget and resources<br/>- Hiring and roles<br/>- Major policy changes<br/>- Partnership agreements"]
    end

    style A1 fill:#c7e5ff
    style A2 fill:#c7e5ff
    style A3 fill:#c7e5ff
    style H1 fill:#ffe5cc
    style H2 fill:#ffe5cc
    style H3 fill:#ffe5cc
    style H4 fill:#ffe5cc
```
Key Principle: AKEL handles all content decisions. Humans improve the system, not the data.
Never Manual:
- Individual claim approval
- Routine content review
- Verdict overrides (fix algorithm instead)
- Publication gates