Workflows

Last modified by Robert Schaub on 2025/12/24 20:34

This page describes the core workflows for content creation, review, and publication in FactHarbor.

Overview

FactHarbor workflows support three publication modes with risk-based review:

  • Mode 1 (Draft): Internal only, failed quality gates or pending review
  • Mode 2 (AI-Generated): Public with AI-generated label, passed quality gates
  • Mode 3 (Human-Reviewed): Public with human-reviewed status, highest trust

Workflows vary by Risk Tier (A/B/C) and Content Type (Claim, Scenario, Evidence, Verdict).
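
The three publication modes and risk tiers above can be sketched as simple enumerations (a minimal illustration; the identifiers here are assumptions, not FactHarbor's actual type names):

```python
from enum import Enum

class PublicationMode(Enum):
    MODE_1_DRAFT = 1           # internal only: failed gates or pending review
    MODE_2_AI_GENERATED = 2    # public, labeled AI-generated, gates passed
    MODE_3_HUMAN_REVIEWED = 3  # public, human-reviewed, highest trust

class RiskTier(Enum):
    A = "A"  # high risk
    B = "B"  # medium risk
    C = "C"  # low risk

def is_public(mode: PublicationMode) -> bool:
    """Only Mode 2 and Mode 3 content is publicly visible."""
    return mode is not PublicationMode.MODE_1_DRAFT
```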


Claim Submission & Publication Workflow

Step 1: Claim Submission

Actor: Contributor or AKEL

Actions:

  • Submit claim text
  • Provide initial sources (optional for human contributors, mandatory for AKEL)
  • System assigns initial AuthorType (Human or AI)

Output: Claim draft created
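
The submission rule above (sources optional for humans, mandatory for AKEL) can be sketched as follows; all names are illustrative placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimDraft:
    text: str
    author_type: str               # "Human" or "AI", assigned by the system
    sources: list = field(default_factory=list)

def submit_claim(text, author_type, sources=None):
    """Create a claim draft, enforcing mandatory sources for AKEL (AI) submissions."""
    sources = list(sources or [])
    if author_type == "AI" and not sources:
        raise ValueError("AKEL submissions must include initial sources")
    return ClaimDraft(text=text, author_type=author_type, sources=sources)
```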

Step 2: AKEL Processing

Automated Steps:

  1. Claim extraction and normalization
  2. Classification (domain, type, evaluability)
  3. Risk tier assignment (A/B/C suggested)
  4. Initial scenario generation
  5. Evidence search
  6. Contradiction search (mandatory)
  7. Quality gate validation

Output: Processed claim with risk tier and quality gate results
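
The seven automated steps above form an ordered pipeline; the step names below are illustrative placeholders, not real FactHarbor identifiers:

```python
AKEL_PIPELINE = [
    "extract_and_normalize",   # 1. claim extraction and normalization
    "classify",                # 2. domain, type, evaluability
    "assign_risk_tier",        # 3. suggests A/B/C
    "generate_scenarios",      # 4. initial scenario generation
    "search_evidence",         # 5. evidence search
    "search_contradictions",   # 6. mandatory contradiction search
    "validate_quality_gates",  # 7. quality gate validation
]

def plan_akel_run(skip=()):
    """Return the steps that will run; the contradiction search cannot be skipped."""
    if "search_contradictions" in skip:
        raise ValueError("contradiction search is mandatory")
    return [step for step in AKEL_PIPELINE if step not in skip]
```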

Step 3: Quality Gate Checkpoint

Gates Evaluated:

  • Source quality
  • Contradiction search completion
  • Uncertainty quantification
  • Structural integrity

Outcomes:

  • All gates pass → Proceed to Mode 2 publication (if Tier B or C)
  • Any gate fails → Mode 1 (Draft), flag for human review
  • Tier A → Mode 2 with warnings + auto-escalate to expert queue
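
The checkpoint above can be sketched as a small routing function (a minimal illustration; gate and field names are assumptions for this sketch):

```python
GATES = ("source_quality", "contradiction_search",
         "uncertainty_quantification", "structural_integrity")

def checkpoint(gate_results, risk_tier):
    """gate_results: dict mapping gate name -> bool; returns the publication decision."""
    if not all(gate_results.get(g, False) for g in GATES):
        return {"mode": "Mode1", "flag_for_human_review": True}
    if risk_tier == "A":
        return {"mode": "Mode2", "warnings": True, "escalate_to_expert_queue": True}
    return {"mode": "Mode2", "warnings": False}
```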

Step 4: Publication (Risk-Tier Dependent)

Tier C (Low Risk):

  • Direct to Mode 2: AI-generated, public, clearly labeled
  • User can request human review
  • Sampling audit applies

Tier B (Medium Risk):

  • Direct to Mode 2: AI-generated, public, clearly labeled
  • Higher audit sampling rate
  • High-engagement content may auto-escalate

Tier A (High Risk):

  • Mode 2 with warnings: AI-generated, public, prominent disclaimers
  • Auto-escalated to expert review queue
  • User warnings displayed
  • Highest audit sampling rate

Step 5: Human Review (Optional for B/C, Escalated for A)

Triggers:

  • User requests review
  • Audit flags issues
  • High engagement (Tier B)
  • Automatic (Tier A)

Process:

  1. Reviewer/Expert examines claim
  2. Validates quality gates
  3. Checks contradiction search results
  4. Assesses risk tier appropriateness
  5. Decision: Approve, Request Changes, or Reject

Outcomes:

  • Approved → Mode 3 (Human-Reviewed)
  • Changes Requested → Back to contributor or AKEL for revision
  • Rejected → Rejected status with reasoning
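
The three review outcomes above map to resulting states roughly as follows (illustrative names; note that rejection must carry documented reasoning):

```python
def apply_review_decision(decision, reasoning=None):
    """Map a reviewer decision to the resulting content state."""
    if decision == "Approve":
        return {"mode": "Mode3", "status": "Human-Reviewed"}
    if decision == "RequestChanges":
        return {"mode": "Mode1", "status": "RevisionRequested"}
    if decision == "Reject":
        if not reasoning:
            raise ValueError("rejection requires documented reasoning")
        return {"status": "Rejected", "reasoning": reasoning}
    raise ValueError(f"unknown decision: {decision}")
```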

Scenario Creation Workflow

Step 1: Scenario Generation

Automated (AKEL):

  • Generate scenarios for claim
  • Define boundaries, assumptions, context
  • Identify evaluation methods

Manual (Expert/Reviewer):

  • Create custom scenarios
  • Refine AKEL-generated scenarios
  • Add domain-specific nuances

Step 2: Scenario Validation

Quality Checks:

  • Completeness (definitions, boundaries, assumptions clear)
  • Relevance to claim
  • Evaluability
  • No circular logic

Risk Tier Assignment:

  • Inherits from parent claim
  • Can be overridden by expert if scenario increases/decreases risk
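
The inherit-with-override rule above, as a sketch (function and parameter names are assumptions):

```python
VALID_TIERS = {"A", "B", "C"}

def scenario_risk_tier(parent_claim_tier, expert_override=None):
    """Scenarios inherit the parent claim's tier unless an expert overrides it."""
    if expert_override is not None:
        if expert_override not in VALID_TIERS:
            raise ValueError("tier must be A, B, or C")
        return expert_override
    return parent_claim_tier
```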

Step 3: Scenario Publication

Mode 2 (AI-Generated):

  • Tier B/C scenarios can publish immediately
  • Subject to sampling audits

Mode 1 (Draft):

  • Tier A scenarios default to draft
  • Require expert validation for Mode 2 or Mode 3

Evidence Evaluation Workflow

Step 1: Evidence Search & Retrieval

AKEL Actions:

  • Search academic databases, reputable media
  • Mandatory contradiction search (counter-evidence, reservations)
  • Extract metadata (author, date, publication, methodology)
  • Assess source reliability

Quality Requirements:

  • Primary sources preferred
  • Diverse perspectives included
  • Echo chambers flagged
  • Conflicting evidence acknowledged

Step 2: Evidence Summarization

AKEL Generates:

  • Summary of evidence
  • Relevance assessment
  • Reliability score
  • Limitations and caveats
  • Conflicting evidence summary

Quality Gate: Structural integrity, source quality

Step 3: Evidence Review

Reviewer/Expert Validates:

  • Accuracy of summaries
  • Appropriateness of sources
  • Completeness of contradiction search
  • Reliability assessments

Outcomes:

  • Mode 2: Evidence summaries published as AI-generated
  • Mode 3: After human validation
  • Mode 1: Failed quality checks or pending expert review

Verdict Generation Workflow

Step 1: Verdict Computation

AKEL Computes:

  • Verdict across scenarios
  • Confidence scores
  • Uncertainty quantification
  • Key assumptions
  • Limitations

Inputs:

  • Claim text
  • Scenario definitions
  • Evidence assessments
  • Contradiction search results

Step 2: Verdict Validation

Quality Gates:

  • All four gates apply (source, contradiction, uncertainty, structure)
  • Reasoning chain must be traceable
  • Assumptions must be explicit

Risk Tier Check:

  • Tier A: Always requires expert validation for Mode 3
  • Tier B: Mode 2 allowed, audit sampling
  • Tier C: Mode 2 default
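
Verdict validation from Step 2 can be sketched as a checklist that reports every violation (field names are illustrative assumptions):

```python
def validate_verdict(verdict):
    """Return a list of problems; an empty list means the verdict passes Step 2."""
    problems = []
    for gate in ("source_quality", "contradiction_search",
                 "uncertainty_quantification", "structural_integrity"):
        if not verdict.get("gates", {}).get(gate, False):
            problems.append(f"quality gate failed: {gate}")
    if not verdict.get("reasoning_chain"):
        problems.append("reasoning chain is not traceable")
    if not verdict.get("assumptions"):
        problems.append("assumptions are not explicit")
    return problems
```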

Step 3: Verdict Publication

Mode 2 (AI-Generated Verdict):

  • Clear labeling with confidence scores
  • Uncertainty disclosure
  • Links to reasoning trail
  • User can request expert review

Mode 3 (Expert-Validated Verdict):

  • Human reviewer/expert stamp
  • Additional commentary (optional)
  • Highest trust level

Audit Workflow

Step 1: Audit Sampling Selection

Stratified Sampling:

  • Risk tier priority (A > B > C)
  • Low confidence scores
  • High traffic content
  • Novel topics
  • User flags

Sampling Rates (Recommendations):

  • Tier A: 30-50%
  • Tier B: 10-20%
  • Tier C: 5-10%
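
The stratified sampling step can be sketched as below, using the midpoint of each recommended range (illustrative values, not normative policy):

```python
import random

SAMPLING_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items, seed=0):
    """items: iterable of (item_id, risk_tier) pairs.

    Returns the ids selected for audit; deterministic for a given seed.
    """
    rng = random.Random(seed)
    return [item_id for item_id, tier in items
            if rng.random() < SAMPLING_RATE[tier]]
```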

Step 2: Audit Execution

Auditor Actions:

  1. Review sampled AI-generated content
  2. Validate quality gates were properly applied
  3. Check contradiction search completeness
  4. Assess reasoning quality
  5. Identify errors or hallucinations

Audit Outcome:

  • Pass: Content remains in Mode 2, logged as validated
  • Fail: Content flagged for review, system improvement triggered

Step 3: Feedback Loop

System Improvements:

  • Failed audits analyzed for patterns
  • AKEL parameters adjusted
  • Quality gates refined
  • Risk tier assignments recalibrated

Transparency:

  • Audit statistics published periodically
  • Patterns shared with community
  • System improvements documented

Mode Transition Workflow

Mode 1 → Mode 2

Requirements:

  • All quality gates pass
  • Risk tier B or C (or A with warnings)
  • Contradiction search completed

Trigger: Automatic upon quality gate validation

Mode 2 → Mode 3

Requirements:

  • Human reviewer/expert validation
  • Quality standards confirmed
  • For Tier A: Expert approval required
  • For Tier B/C: Reviewer approval sufficient

Trigger: Human review completion
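
The approval rule above (Expert required for Tier A, Reviewer sufficient for B/C) can be sketched as:

```python
def can_promote_to_mode3(risk_tier, approver_role):
    """Tier A needs an Expert; Tier B/C accept a Reviewer or an Expert."""
    if risk_tier == "A":
        return approver_role == "Expert"
    return approver_role in ("Reviewer", "Expert")
```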

Mode 3 → Mode 1 (Demotion)

Rare - Only if:

  • New evidence contradicts verdict
  • Error discovered in reasoning
  • Source retraction

Process:

  1. Content flagged for re-evaluation
  2. Moved to draft (Mode 1)
  3. Re-processed through workflow
  4. Reason for demotion documented
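
The transitions in this section can be sketched as a small state machine; demotion without a documented reason is rejected (names are illustrative):

```python
ALLOWED_TRANSITIONS = {("Mode1", "Mode2"), ("Mode2", "Mode3"), ("Mode3", "Mode1")}

def transition(current, target, demotion_reason=None):
    """Apply a mode transition, enforcing the rules in this section."""
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"illegal transition: {current} -> {target}")
    if (current, target) == ("Mode3", "Mode1") and not demotion_reason:
        raise ValueError("demotion requires a documented reason")
    return target
```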

User Actions Across Modes

On Mode 1 (Draft) Content

Contributors:

  • Edit their own drafts
  • Submit for review

Reviewers/Experts:

  • View and comment
  • Request changes
  • Approve for Mode 2 or Mode 3

On Mode 2 (AI-Generated) Content

All Users:

  • Read and use content
  • Request human review
  • Flag for expert attention
  • Provide feedback

Reviewers/Experts:

  • Validate for Mode 3 transition
  • Edit and refine
  • Adjust risk tier if needed

On Mode 3 (Human-Reviewed) Content

All Users:

  • Read with highest confidence
  • Still can flag if new evidence emerges

Reviewers/Experts:

  • Update if needed
  • Trigger re-evaluation if new evidence

Diagram References

Claim and Scenario Lifecycle (Overview)

flowchart TD
    classDef human fill:#fff,stroke:#333,stroke-width:1px;
    classDef ai fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,stroke-dasharray: 5 5;
    classDef phase fill:#f5f5f5,stroke:#999,stroke-width:1px;

    %% 1. Claim Submission
    subgraph Submission ["1. Claim Submission"]
        direction TB
        Input[User/Source Input] --> Normalise[AI/Human Normalisation]
        Normalise:::ai --> Cluster[Identify Claim Cluster]
        Cluster:::ai --> DraftScen[Draft Initial Scenarios]
    end

    %% 2. Scenario Building
    subgraph Scenarios ["2. Scenario Building"]
        direction TB
        DraftScen:::ai --> Defs[Define Assumptions & Boundaries]
        Defs:::human --> Approval[Human Approval of Scenarios]
    end

    %% 3. Evidence Handling
    subgraph Evidence ["3. Evidence Handling"]
        direction TB
        Retrieval[AI Retrieval & Summary]:::ai --> Assess[Human Quality Assessment]:::human
        Assess --> Link[Link Evidence to Scenarios]:::human
    end

    %% 4. Verdict Creation
    subgraph Verdicts ["4. Verdict Creation"]
        direction TB
        DraftVer[AI Draft Verdict]:::ai --> Refine[Human Refinement]:::human
        Refine --> Reason[Explain Reasoning]:::human
        Reason --> ApproveVer[Verdict Approval]
    end

    %% 5. Public Presentation
    subgraph Public ["5. Public Presentation"]
        direction TB
        Summary[Concise Summary]
        Landscape[Truth Landscape Comparison]
        DeepDive[Deep Dive Evidence Access]
    end

    %% Flow connections between phases
    Submission --> Scenarios
    Scenarios --> Evidence
    Evidence --> Verdicts
    Verdicts --> Public

    %% 6. Time Evolution (Feedback Loop)
    subgraph Evolution ["6. Time Evolution"]
        NewEv[New Evidence / Correction]
    end

    Public -.-> NewEv
    NewEv -.-> Evidence

Claim & Scenario Workflow

This diagram shows how Claims and Scenarios are created and reviewed.

erDiagram
    CONTRIBUTOR {
        string UserID PK
    }
    
    TECHNICAL_USER {
        string SystemID PK
    }
    
    REVIEWER {
        string ReviewerID PK
    }
    
    CLAIM_VERSION {
        string VersionID PK
        string ClaimID FK
        string ParentVersionID FK
        string Text
        enum RiskTier "A,B,C"
        enum PublicationMode "Mode1,Mode2,Mode3"
        enum ReviewStatus
        string CreatedBy FK
        datetime CreatedAt
    }
    
    SCENARIO_VERSION {
        string VersionID PK
        string ScenarioID FK
        string ParentVersionID FK
        string ClaimID FK
        json Definitions
        json Assumptions
        enum PublicationMode
        enum ReviewStatus
        string CreatedBy FK
        datetime CreatedAt
    }
    
    CONTRIBUTOR ||--o{ CLAIM_VERSION : "submits"
    CONTRIBUTOR ||--o{ SCENARIO_VERSION : "proposes"
    TECHNICAL_USER ||--o{ CLAIM_VERSION : "generates"
    TECHNICAL_USER ||--o{ SCENARIO_VERSION : "drafts"
    REVIEWER ||--o{ CLAIM_VERSION : "reviews"
    REVIEWER ||--o{ SCENARIO_VERSION : "validates"
    CLAIM_VERSION ||--o{ SCENARIO_VERSION : "has-scenarios"

Evidence & Verdict Workflow

This diagram shows how Evidence supports Verdicts for Scenarios.

erDiagram
    CONTRIBUTOR {
        string UserID PK
    }
    
    TECHNICAL_USER {
        string SystemID PK
    }
    
    REVIEWER {
        string ReviewerID PK
    }
    
    EXPERT {
        string ExpertID PK
    }
    
    SCENARIO_VERSION {
        string VersionID PK
        string ScenarioID FK
    }
    
    EVIDENCE_VERSION {
        string VersionID PK
        string EvidenceID FK
        enum Reliability "low,medium,high"
        string Provenance
        enum PublicationMode
        enum ReviewStatus
        datetime CreatedAt
    }
    
    VERDICT_VERSION {
        string VersionID PK
        string VerdictID FK
        string ScenarioVersionID FK
        json EvidenceVersionSet
        float LikelihoodRange
        enum PublicationMode
        enum ReviewStatus
        datetime CreatedAt
    }
    
    SCENARIO_EVIDENCE_LINK {
        string ScenarioVersionID FK
        string EvidenceVersionID FK
        float RelevanceScore
    }
    
    CONTRIBUTOR ||--o{ EVIDENCE_VERSION : "attaches"
    TECHNICAL_USER ||--o{ EVIDENCE_VERSION : "retrieves"
    TECHNICAL_USER ||--o{ VERDICT_VERSION : "proposes"
    REVIEWER ||--o{ VERDICT_VERSION : "approves-TierBC"
    EXPERT ||--o{ VERDICT_VERSION : "approves-TierA"
    SCENARIO_VERSION ||--o{ VERDICT_VERSION : "produces"
    SCENARIO_VERSION ||--o{ SCENARIO_EVIDENCE_LINK : "uses"
    EVIDENCE_VERSION ||--o{ SCENARIO_EVIDENCE_LINK : "supports"

Quality & Audit Workflow

This diagram shows quality gates and audit processes.

erDiagram
    TECHNICAL_USER {
        string SystemID PK
    }
    
    AUDITOR {
        string AuditorID PK
    }
    
    MAINTAINER {
        string MaintainerID PK
    }
    
    CLAIM_VERSION {
        string VersionID PK
    }
    
    VERDICT_VERSION {
        string VersionID PK
    }
    
    QUALITY_GATE_LOG {
        string LogID PK
        string EntityVersionID FK
        enum GateType "SourceQuality,ContradictionSearch,UncertaintyQuant,StructuralIntegrity"
        boolean Passed
        json Details
        datetime ExecutedAt
    }
    
    AUDIT_RECORD {
        string AuditID PK
        string AuditorID FK
        string EntityVersionID FK
        enum EntityType "Claim,Verdict"
        enum Outcome "Pass,Fail"
        json Feedback
        datetime AuditedAt
    }
    
    AUDIT_POLICY {
        string PolicyID PK
        string MaintainerID FK
        enum RiskTier "A,B,C"
        float SamplingRate
        json Rules
    }
    
    TECHNICAL_USER ||--o{ QUALITY_GATE_LOG : "executes"
    QUALITY_GATE_LOG }o--|| CLAIM_VERSION : "validates"
    QUALITY_GATE_LOG }o--|| VERDICT_VERSION : "validates"
    AUDITOR ||--o{ AUDIT_RECORD : "creates"
    AUDIT_RECORD }o--|| CLAIM_VERSION : "audits"
    AUDIT_RECORD }o--|| VERDICT_VERSION : "audits"
    MAINTAINER ||--o{ AUDIT_POLICY : "configures"

Manual vs Automated Matrix

graph TD
    Human["Always Human<br/>- Final Verdict Approval<br/>- Ethics & Governance<br/>- Dispute Resolution<br/>- Scenario Validity"]

    Mixed["Mixed / AI-Assisted<br/>- Ambiguous Definitions<br/>- Boundary Choices<br/>- Verdict Reasoning Text"]

    AI["Mostly AI + Human Check<br/>- Claim Normalization<br/>- Clustering<br/>- Metadata Extraction<br/>- Contradiction Alerts"]

    Human --- Mixed
    Mixed --- AI

Related Pages