Workflows

Version 1.1 by Robert Schaub on 2025/12/16 21:42

This page describes the core workflows for content creation, review, and publication in FactHarbor.

1. Overview

FactHarbor workflows support three publication modes with risk-based review:

  • Mode 1 (Draft): Internal only, failed quality gates or pending review
  • Mode 2 (AI-Generated): Public with AI-generated label, passed quality gates
  • Mode 3 (Human-Reviewed): Public with human-reviewed status, highest trust

Workflows vary by Risk Tier (A/B/C) and Content Type (Claim, Scenario, Evidence, Verdict).

2. Claim Submission & Publication Workflow

2.1 Step 1: Claim Submission

Actor: Contributor or AKEL

Actions:

  • Submit claim text
  • Provide initial sources (optional for human contributors, mandatory for AKEL)
  • System assigns initial AuthorType (Human or AI)

Output: Claim draft created

2.2 Step 2: AKEL Processing

Automated Steps:

  1. Claim extraction and normalization
  2. Classification (domain, type, evaluability)
  3. Risk tier assignment (A/B/C suggested)
  4. Initial scenario generation
  5. Evidence search
  6. Contradiction search (mandatory)
  7. Quality gate validation

Output: Processed claim with risk tier and quality gate results

2.3 Step 3: Quality Gate Checkpoint

Gates Evaluated:

  • Source quality
  • Contradiction search completion
  • Uncertainty quantification
  • Structural integrity

Outcomes:

  • All gates pass → Proceed to Mode 2 publication (if Tier B or C)
  • Any gate fails → Mode 1 (Draft), flag for human review
  • Tier A → Mode 2 with warnings + auto-escalate to expert queue
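The checkpoint outcomes above amount to a pure routing decision. A minimal sketch in TypeScript (names such as `routeAfterGates` and `RoutingResult` are illustrative, not from the FactHarbor codebase):

```typescript
type RiskTier = "A" | "B" | "C";
type PublicationMode =
  | "MODE_1_DRAFT"
  | "MODE_2_AI"
  | "MODE_2_AI_WITH_WARNINGS"
  | "MODE_3_HUMAN";

interface RoutingResult {
  mode: PublicationMode;
  flagForHumanReview: boolean;    // Mode 1 drafts are queued for review
  escalateToExpertQueue: boolean; // Tier A auto-escalates even when published
}

// Route a claim after quality-gate evaluation (Step 3).
function routeAfterGates(allGatesPass: boolean, tier: RiskTier): RoutingResult {
  if (!allGatesPass) {
    // Any gate failure keeps the claim internal and flags it for review.
    return { mode: "MODE_1_DRAFT", flagForHumanReview: true, escalateToExpertQueue: false };
  }
  if (tier === "A") {
    // High-risk content publishes with prominent warnings and expert escalation.
    return { mode: "MODE_2_AI_WITH_WARNINGS", flagForHumanReview: false, escalateToExpertQueue: true };
  }
  // Tier B/C publish directly as clearly labeled AI-generated content.
  return { mode: "MODE_2_AI", flagForHumanReview: false, escalateToExpertQueue: false };
}
```

Note that Mode 3 never appears as an output here: it is only reachable through the human-review step that follows.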

2.4 Step 4: Publication (Risk-Tier Dependent)

Tier C (Low Risk):

  • Direct to Mode 2: AI-generated, public, clearly labeled
  • User can request human review
  • Sampling audit applies

Tier B (Medium Risk):

  • Direct to Mode 2: AI-generated, public, clearly labeled
  • Higher audit sampling rate
  • High-engagement content may auto-escalate

Tier A (High Risk):

  • Mode 2 with warnings: AI-generated, public, prominent disclaimers
  • Auto-escalated to expert review queue
  • User warnings displayed
  • Highest audit sampling rate

2.5 Step 5: Human Review (Optional for B/C, Escalated for A)

Triggers:

  • User requests review
  • Audit flags issues
  • High engagement (Tier B)
  • Automatic (Tier A)

Process:

  1. Reviewer/Expert examines claim
  2. Validates quality gates
  3. Checks contradiction search results
  4. Assesses risk tier appropriateness
  5. Decision: Approve, Request Changes, or Reject

Outcomes:

  • Approved → Mode 3 (Human-Reviewed)
  • Changes Requested → Back to contributor or AKEL for revision
  • Rejected → Rejected status with reasoning

3. Scenario Creation Workflow

3.1 Step 1: Scenario Generation

Automated (AKEL):

  • Generate scenarios for claim
  • Define boundaries, assumptions, context
  • Identify evaluation methods

Manual (Expert/Reviewer):

  • Create custom scenarios
  • Refine AKEL-generated scenarios
  • Add domain-specific nuances

3.2 Step 2: Scenario Validation

Quality Checks:

  • Completeness (definitions, boundaries, assumptions clear)
  • Relevance to claim
  • Evaluability
  • No circular logic

Risk Tier Assignment:

  • Inherits from parent claim
  • Can be overridden by expert if scenario increases/decreases risk

3.3 Step 3: Scenario Publication

Mode 2 (AI-Generated):

  • Tier B/C scenarios can publish immediately
  • Subject to sampling audits

Mode 1 (Draft):

  • Tier A scenarios default to draft
  • Require expert validation for Mode 2 or Mode 3

4. Evidence Evaluation Workflow

4.1 Step 1: Evidence Search & Retrieval

AKEL Actions:

  • Search academic databases, reputable media
  • Mandatory contradiction search (counter-evidence, reservations)
  • Extract metadata (author, date, publication, methodology)
  • Assess source reliability

Quality Requirements:

  • Primary sources preferred
  • Diverse perspectives included
  • Echo chambers flagged
  • Conflicting evidence acknowledged

4.2 Step 2: Evidence Summarization

AKEL Generates:

  • Summary of evidence
  • Relevance assessment
  • Reliability score
  • Limitations and caveats
  • Conflicting evidence summary

Quality Gate: Structural integrity, source quality

4.3 Step 3: Evidence Review

Reviewer/Expert Validates:

  • Accuracy of summaries
  • Appropriateness of sources
  • Completeness of contradiction search
  • Reliability assessments

Outcomes:

  • Mode 2: Evidence summaries published as AI-generated
  • Mode 3: After human validation
  • Mode 1: Failed quality checks or pending expert review

5. Verdict Generation Workflow

5.1 Step 1: Verdict Computation

AKEL Computes:

  • Verdict across scenarios
  • Confidence scores
  • Uncertainty quantification
  • Key assumptions
  • Limitations

Inputs:

  • Claim text
  • Scenario definitions
  • Evidence assessments
  • Contradiction search results

5.2 Step 2: Verdict Validation

Quality Gates:

  • All four gates apply (source, contradiction, uncertainty, structure)
  • Reasoning chain must be traceable
  • Assumptions must be explicit

Risk Tier Check:

  • Tier A: Always requires expert validation for Mode 3
  • Tier B: Mode 2 allowed, audit sampling
  • Tier C: Mode 2 default

5.3 Step 3: Verdict Publication

Mode 2 (AI-Generated Verdict):

  • Clear labeling with confidence scores
  • Uncertainty disclosure
  • Links to reasoning trail
  • User can request expert review

Mode 3 (Expert-Validated Verdict):

  • Human reviewer/expert stamp
  • Additional commentary (optional)
  • Highest trust level

6. Audit Workflow

6.1 Step 1: Audit Sampling Selection

Stratified Sampling:

  • Risk tier priority (A > B > C)
  • Low confidence scores
  • High traffic content
  • Novel topics
  • User flags

Sampling Rates (Recommendations):

  • Tier A: 30-50%
  • Tier B: 10-20%
  • Tier C: 5-10%
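The stratification signals and recommended rate ranges above can be combined into a single effective sampling rate per item. A hedged sketch, where `boostPerSignal` and the 0.5 confidence cutoff are illustrative knobs, not documented FactHarbor parameters:

```typescript
type RiskTier = "A" | "B" | "C";

// Recommended sampling-rate ranges from the table above.
const SAMPLING_RATES: Record<RiskTier, [number, number]> = {
  A: [0.30, 0.50],
  B: [0.10, 0.20],
  C: [0.05, 0.10],
};

interface AuditCandidate {
  tier: RiskTier;
  confidence: number;   // 0..1 verdict confidence
  highTraffic: boolean;
  userFlagged: boolean;
}

// Start at the tier's lower bound and move toward the upper bound as the
// stratification signals accumulate (low confidence, high traffic, user flags).
function effectiveSamplingRate(c: AuditCandidate, boostPerSignal = 0.34): number {
  const [lo, hi] = SAMPLING_RATES[c.tier];
  let signals = 0;
  if (c.confidence < 0.5) signals++;
  if (c.highTraffic) signals++;
  if (c.userFlagged) signals++;
  return Math.min(hi, lo + (hi - lo) * signals * boostPerSignal);
}
```

The rate never leaves the tier's recommended range, so tier priority (A > B > C) is preserved regardless of the other signals.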

6.2 Step 2: Audit Execution

Auditor Actions:

  1. Review sampled AI-generated content
  2. Validate quality gates were properly applied
  3. Check contradiction search completeness
  4. Assess reasoning quality
  5. Identify errors or hallucinations

Audit Outcome:

  • Pass: Content remains in Mode 2, logged as validated
  • Fail: Content flagged for review, system improvement triggered

6.3 Step 3: Feedback Loop

System Improvements:

  • Failed audits analyzed for patterns
  • AKEL parameters adjusted
  • Quality gates refined
  • Risk tier assignments recalibrated

Transparency:

  • Audit statistics published periodically
  • Patterns shared with community
  • System improvements documented

7. Mode Transition Workflow

7.1 Mode 1 → Mode 2

Requirements:

  • All quality gates pass
  • Risk tier B or C (or A with warnings)
  • Contradiction search completed

Trigger: Automatic upon quality gate validation

7.2 Mode 2 → Mode 3

Requirements:

  • Human reviewer/expert validation
  • Quality standards confirmed
  • For Tier A: Expert approval required
  • For Tier B/C: Reviewer approval sufficient

Trigger: Human review completion

7.3 Mode 3 → Mode 1 (Demotion)

Rare; occurs only if:

  • New evidence contradicts verdict
  • Error discovered in reasoning
  • Source retraction

Process:

  1. Content flagged for re-evaluation
  2. Moved to draft (Mode 1)
  3. Re-processed through workflow
  4. Reason for demotion documented
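The transition rules in Section 7 form a small state machine. A sketch of the guard logic, under the assumption that approver roles and demotion reasons are available at transition time (function and field names are illustrative):

```typescript
type Mode = 1 | 2 | 3;
type RiskTier = "A" | "B" | "C";
type Role = "reviewer" | "expert";

interface TransitionContext {
  gatesPass: boolean;
  tier: RiskTier;
  approver?: Role;       // who approved, for Mode 2 -> Mode 3
  demotionReason?: string; // required documentation for Mode 3 -> Mode 1
}

// Allowed transitions per Section 7; anything else is rejected.
function canTransition(from: Mode, to: Mode, ctx: TransitionContext): boolean {
  if (from === 1 && to === 2) {
    // Mode 1 -> Mode 2: automatic once all gates pass (Tier A publishes with warnings).
    return ctx.gatesPass;
  }
  if (from === 2 && to === 3) {
    // Mode 2 -> Mode 3: Tier A needs an expert; Tier B/C accept a reviewer.
    if (ctx.tier === "A") return ctx.approver === "expert";
    return ctx.approver === "reviewer" || ctx.approver === "expert";
  }
  if (from === 3 && to === 1) {
    // Mode 3 -> Mode 1: rare demotion; the reason must be documented.
    return ctx.demotionReason !== undefined && ctx.demotionReason.length > 0;
  }
  return false;
}
```

Encoding the rules this way makes the absence of shortcuts explicit: there is no direct Mode 1 to Mode 3 path, and demotion without a documented reason is rejected.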

8. User Actions Across Modes

8.1 On Mode 1 (Draft) Content

Contributors:

  • Edit their own drafts
  • Submit for review

Reviewers/Experts:

  • View and comment
  • Request changes
  • Approve for Mode 2 or Mode 3

8.2 On Mode 2 (AI-Generated) Content

All Users:

  • Read and use content
  • Request human review
  • Flag for expert attention
  • Provide feedback

Reviewers/Experts:

  • Validate for Mode 3 transition
  • Edit and refine
  • Adjust risk tier if needed

8.3 On Mode 3 (Human-Reviewed) Content

All Users:

  • Read with highest confidence
  • Still can flag if new evidence emerges

Reviewers/Experts:

  • Update if needed
  • Trigger re-evaluation if new evidence

9. Diagram References

9.1 Claim and Scenario Lifecycle (Overview)

flowchart TD
    classDef ai fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,stroke-dasharray: 5 5;
    classDef phase fill:#f5f5f5,stroke:#999,stroke-width:1px;
    classDef ucm fill:#fff3e0,stroke:#e65100,stroke-width:2px;
    %% 1. Submission
    subgraph Submission ["1. Submission"]
        direction TB
        Input[Registered User Submits URL/Text] --> Parse[AKEL Parses Input]
        Parse:::ai --> Claims[Extract Claims]
        Claims:::ai --> Contexts[Detect AnalysisContexts]
    end
    %% 2. Evidence Retrieval
    subgraph Evidence ["2. Evidence Retrieval"]
        direction TB
        Search[AI Web Search]:::ai --> Fetch[Source Fetching]
        Fetch:::ai --> Extract[Evidence Extraction]
        Extract:::ai --> Quality[Quality Filtering]
        Quality:::ai
    end
    %% 3. Verdict Generation
    subgraph Verdicts ["3. Verdict Generation"]
        direction TB
        PerContext[Per-Context Verdicts]:::ai --> Aggregate[Cross-Context Aggregation]
        Aggregate:::ai --> GateCheck[Quality Gate Check]
        GateCheck:::ai
    end
    %% 4. Presentation
    subgraph Public ["4. Public Presentation"]
        direction TB
        Summary[Analysis Summary]
        TruthScale[7-Point Truth Scale]
        EvidenceView[Evidence & Sources]
    end
    %% 5. UCM Feedback Loop
    subgraph UCMLoop ["5. System Improvement"]
        direction TB
        Metrics[Monitor Quality Metrics]
        UCMConfig[UCM Config Update]:::ucm
    end
    %% Flow
    Submission --> Evidence
    Evidence --> Verdicts
    Verdicts --> Public
    Public -.-> Metrics
    UCMConfig -.->|improved config| Submission

Fully automated pipeline. No human editing of analysis data. System improvements flow through UCM configuration changes (dashed orange).

9.2 Claim and Scenario Workflow

Information

Current Implementation (v2.10.2) — The pipeline uses AnalysisContexts (bounded analytical frames) and KeyFactors (decomposition questions with contestation tracking), discovered during the understanding phase.

Claim Analysis Workflow


graph TB
    Start[User Submission]

    subgraph Step1[Step 1 Understand]
        Extract{understandClaim LLM Analysis}
        Gate1{Gate 1 Claim Validation}
        DetectType[Detect Input Type]
        DetectContexts[Detect Contexts]
        KeyFactors[Discover KeyFactors]
    end

    subgraph Step2[Step 2 Research]
        Decide[decideNextResearch]
        Search[Web Search]
        Fetch[Fetch Sources]
        Facts[extractEvidence]
    end

    subgraph Step3[Step 3 Verdict]
        Verdict[generateVerdicts]
        Gate4{Gate 4 Confidence Check}
    end

    subgraph Output[Output]
        Publish[Publish Result]
        LowConf[Low Confidence Flag]
    end

    Start --> Extract
    Extract --> Gate1
    Gate1 -->|Pass Factual| DetectType
    Gate1 -->|Fail Opinion| Exclude[Exclude from analysis]
    DetectType --> DetectContexts
    DetectContexts --> KeyFactors
    KeyFactors --> Decide
    Decide --> Search
    Search --> Fetch
    Fetch --> Facts
    Facts -->|More research needed| Decide
    Facts -->|Sufficient evidence| Verdict
    Verdict --> Gate4
    Gate4 -->|High or Medium confidence| Publish
    Gate4 -->|Low or Insufficient| LowConf

Quality Gates (Implemented)

 Gate   | Name               | Purpose                    | Pass Criteria
 Gate 1 | Claim Validation   | Filter non-factual claims  | Factual; opinion score 0.3 or less; specificity 0.3 or more
 Gate 4 | Verdict Confidence | Ensure sufficient evidence | 2 or more sources; avg quality 0.6 or more; agreement 60% or more

Gates 2 (Contradiction Search) and 3 (Uncertainty Quantification) are not yet implemented.

KeyFactors (Replacement for Scenarios)

KeyFactors are optional decomposition questions discovered during the understanding phase:

  • Not stored as separate entities
  • Help break down complex claims into checkable sub-questions
  • See KeyFactors Design for design rationale

7-Point Verdict Scale

  • TRUE (86-100%) - Claim is well-supported by evidence
  • MOSTLY-TRUE (72-85%) - Largely accurate with minor caveats
  • LEANING-TRUE (58-71%) - More evidence supports than contradicts
  • MIXED (43-57%, high confidence) - Roughly equal evidence both ways
  • UNVERIFIED (43-57%, low confidence) - Insufficient evidence to determine
  • LEANING-FALSE (29-42%) - More evidence contradicts than supports
  • MOSTLY-FALSE (15-28%) - Largely inaccurate
  • FALSE (0-14%) - Claim is refuted by evidence
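The scale above is a straightforward mapping from truth percentage to label, with the 43-57% band split by confidence. A minimal sketch (`verdictLabel` is an illustrative name, not necessarily the function in the codebase):

```typescript
// Map a truth percentage (0-100) plus a confidence flag to the 7-point scale.
// MIXED and UNVERIFIED share the 43-57% band and are distinguished only by
// whether the system is confident in the "evenly split" reading.
function verdictLabel(truthPct: number, highConfidence: boolean): string {
  if (truthPct >= 86) return "TRUE";
  if (truthPct >= 72) return "MOSTLY-TRUE";
  if (truthPct >= 58) return "LEANING-TRUE";
  if (truthPct >= 43) return highConfidence ? "MIXED" : "UNVERIFIED";
  if (truthPct >= 29) return "LEANING-FALSE";
  if (truthPct >= 15) return "MOSTLY-FALSE";
  return "FALSE";
}
```

For example, a claim scored at 50% maps to MIXED when confidence is high and UNVERIFIED when it is low, which is why the scale has eight labels but only seven percentage bands.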

9.3 Evidence and Verdict Workflow

Information

Current Implementation (v2.10.2) - Simplified model without versioning. Uses 7-point symmetric verdict scale.

Evidence and Verdict Data Model


erDiagram
    CLAIM ||--|| CLAIM_VERDICT : has
    CLAIM_VERDICT }o--o{ EVIDENCE_ITEM : supported_by
    EVIDENCE_ITEM }o--|| SOURCE : from

    CLAIM {
        string id_PK
        string text
        string type
        string claimRole
        boolean isCentral
        string_array dependsOn
    }

    CLAIM_VERDICT {
        string id_PK
        string claimId_FK
        string verdict
        int truthPercentage
        int confidence
        string explanation
        string_array supportingEvidenceIds
        string_array opposingEvidenceIds
        string contestationStatus
        float harmPotential
    }

    EVIDENCE_ITEM {
        string id_PK
        string sourceId_FK
        string statement
        string sourceExcerpt
        string category
        string claimDirection
        string contextId
    }

    SOURCE {
        string id_PK
        string title
        string domain
        string url
        float trackRecordScore
        string bias
        string factualReporting
    }
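The entities in the diagram could be typed roughly as follows. This is a sketch derived from the ER diagram only; field names follow the diagram, and the union narrowing of `contestationStatus` is an assumption (the diagram declares it as a plain string):

```typescript
interface Claim {
  id: string;
  text: string;
  type: string;
  claimRole: string;
  isCentral: boolean;
  dependsOn: string[]; // ids of claims this one depends on
}

interface ClaimVerdict {
  id: string;
  claimId: string; // FK -> Claim.id
  verdict: string; // a 7-point label, e.g. "TRUE" or "MOSTLY-FALSE"
  truthPercentage: number;
  confidence: number;
  explanation: string;
  supportingEvidenceIds: string[];
  opposingEvidenceIds: string[];
  contestationStatus: "doubted" | "contested" | "none"; // assumed narrowing
  harmPotential: number;
}

interface EvidenceItem {
  id: string;
  sourceId: string; // FK -> Source.id
  statement: string;
  sourceExcerpt: string;
  category: string;
  claimDirection: string; // supporting vs opposing the claim
  contextId: string;      // the AnalysisContext this evidence belongs to
}

interface Source {
  id: string;
  title: string;
  domain: string;
  url: string;
  trackRecordScore: number;
  bias: string;
  factualReporting: string;
}
```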

Verdict Generation Flow


flowchart TB
    subgraph Research[Research Phase]
        EVIDENCE[Collected Evidence]
        SOURCES[Source Metadata]
    end

    subgraph Analysis[Analysis]
        WEIGHT[Weight Evidence by source reliability]
        CONTEST[Check Contestation doubted vs contested]
        HARM[Assess Harm Potential]
    end

    subgraph Verdict[Verdict Generation]
        CALC[Calculate Truth Percentage]
        MAP[Map to 7-point Scale]
        CONF[Assign Confidence]
    end

    subgraph Output[Result]
        CLAIM_V[Claim Verdict]
        ARTICLE_V[Article Verdict]
    end

    EVIDENCE --> WEIGHT
    SOURCES --> WEIGHT
    WEIGHT --> CONTEST
    CONTEST --> HARM
    HARM --> CALC
    CALC --> MAP
    MAP --> CONF
    CONF --> CLAIM_V
    CLAIM_V --> ARTICLE_V

7-Point Verdict Scale

 Verdict       | Truth % Range      | Description
 TRUE          | 86-100%            | Claim is well-supported by evidence
 MOSTLY-TRUE   | 72-85%             | Largely accurate with minor caveats
 LEANING-TRUE  | 58-71%             | More evidence supports than contradicts
 MIXED         | 43-57% (high conf) | Roughly equal evidence both ways
 UNVERIFIED    | 43-57% (low conf)  | Insufficient evidence to determine
 LEANING-FALSE | 29-42%             | More evidence contradicts than supports
 MOSTLY-FALSE  | 15-28%             | Largely inaccurate
 FALSE         | 0-14%              | Claim is refuted by evidence

Contestation Status

  • Doubted: Evidence is weak, uncertain, or ambiguous
  • Contested: Strong evidence exists on both sides

Source Reliability

Source reliability scores use LLM + Cache architecture (v2.2):

  • LLM-based assessment with in-memory caching
  • Batch prefetch → in-memory map → sync lookup
  • Configurable via UCM SR config (source-reliability.ts)
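The batch prefetch → in-memory map → sync lookup pattern can be sketched as below. The class and the neutral 0.5 fallback are illustrative assumptions; the real scorer lives in source-reliability.ts and is configured through UCM:

```typescript
// Illustrative stand-in for the LLM-based batch scorer.
type ScoreFn = (domains: string[]) => Promise<Map<string, number>>;

class SourceReliabilityCache {
  private scores = new Map<string, number>();

  constructor(private fetchScores: ScoreFn) {}

  // Batch prefetch: one upstream call for all not-yet-cached domains,
  // issued before analysis starts so later lookups never await.
  async prefetch(domains: string[]): Promise<void> {
    const missing = domains.filter((d) => !this.scores.has(d));
    if (missing.length === 0) return;
    const fetched = await this.fetchScores(missing);
    for (const [domain, score] of fetched) this.scores.set(domain, score);
  }

  // Sync lookup during verdict computation; unknown domains fall back
  // to a neutral 0.5 (an assumed default, not a documented one).
  get(domain: string): number {
    return this.scores.get(domain) ?? 0.5;
  }
}
```

Splitting the async batch call from the sync lookup keeps the verdict-generation hot path free of per-source network latency.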

9.4 Quality and Audit Workflow

Information

Current Implementation (v2.6.33) - Only Gate 1 (Claim Validation) and Gate 4 (Verdict Confidence) are implemented. Gates 2 and 3 are planned for a future release.

Quality Gates Flow


flowchart TB
    subgraph Input[Input]
        CLAIM[Extracted Claim]
    end

    subgraph Gate1[Gate 1 Claim Validation]
        G1_CHECK{Is claim factual}
        G1_OPINION[Opinion Detection]
        G1_SPECIFIC[Specificity Check]
        G1_FUTURE[Future Prediction]
    end

    subgraph Research[Research]
        EVIDENCE[Gather Evidence]
    end

    subgraph Gate4[Gate 4 Verdict Confidence]
        G4_COUNT{Evidence Count}
        G4_QUALITY{Source Quality}
        G4_AGREE{Evidence Agreement}
        G4_TIER[Assign Confidence Tier]
    end

    subgraph Output[Output]
        PUBLISH[Publish Verdict]
        EXCLUDE[Exclude]
        LOWCONF[Flag for Review]
    end

    CLAIM --> G1_CHECK
    G1_CHECK --> G1_OPINION
    G1_OPINION --> G1_SPECIFIC
    G1_SPECIFIC --> G1_FUTURE
    G1_FUTURE -->|Pass| EVIDENCE
    G1_FUTURE -->|Fail| EXCLUDE
    EVIDENCE --> G4_COUNT
    G4_COUNT -->|2 or more| G4_QUALITY
    G4_COUNT -->|less than 2| LOWCONF
    G4_QUALITY -->|0.6 or more| G4_AGREE
    G4_QUALITY -->|less than 0.6| LOWCONF
    G4_AGREE -->|60 percent or more| G4_TIER
    G4_AGREE -->|less than 60 percent| LOWCONF
    G4_TIER -->|HIGH or MEDIUM| PUBLISH
    G4_TIER -->|LOW| LOWCONF

Gate Details

Gate 1: Claim Validation

Purpose: Ensure extracted claims are factual assertions that can be verified.

 Check             | Purpose                              | Pass Criteria
 Factuality Test   | Can this claim be proven true/false? | Must be verifiable
 Opinion Detection | Contains subjective language?        | Opinion score 0.3 or less
 Specificity Check | Contains concrete details?           | Specificity score 0.3 or more
 Future Prediction | About future events?                 | Must be about past/present
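Gate 1 is a conjunction of the four checks above. A minimal sketch, assuming the understanding phase already produced per-claim scores (the `ClaimSignals` shape is illustrative):

```typescript
interface ClaimSignals {
  isVerifiable: boolean;       // can the claim be proven true/false?
  opinionScore: number;        // 0..1, subjective-language score
  specificityScore: number;    // 0..1, concreteness of the claim
  isFuturePrediction: boolean; // about future events?
}

// Gate 1: factual, opinion score 0.3 or less, specificity 0.3 or more,
// and not a prediction about the future.
function passesGate1(s: ClaimSignals): boolean {
  return (
    s.isVerifiable &&
    s.opinionScore <= 0.3 &&
    s.specificityScore >= 0.3 &&
    !s.isFuturePrediction
  );
}
```

A claim failing any single check is excluded from analysis rather than researched, which is what keeps opinion pieces out of the verdict pipeline.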

Gate 4: Verdict Confidence Assessment

Purpose: Only display verdicts with sufficient evidence and confidence.

 Tier         | Evidence            | Avg Quality | Agreement   | Publishable?
 HIGH         | 3+ sources          | 0.7 or more | 80% or more | Yes
 MEDIUM       | 2+ sources          | 0.6 or more | 60% or more | Yes
 LOW          | 2+ sources          | 0.5 or more | 40% or more | Needs review
 INSUFFICIENT | Less than 2 sources | Any         | Any         | More research needed
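The tier thresholds above translate into a cascading check from strictest to loosest. A sketch; note the table does not say what happens when 2+ sources miss even the LOW thresholds, so treating that case as INSUFFICIENT is an assumption of this example:

```typescript
type ConfidenceTier = "HIGH" | "MEDIUM" | "LOW" | "INSUFFICIENT";

// Assign a confidence tier from evidence count, average source quality
// (0..1), and evidence agreement (0..1), per the thresholds above.
function confidenceTier(sources: number, avgQuality: number, agreement: number): ConfidenceTier {
  if (sources < 2) return "INSUFFICIENT";
  if (sources >= 3 && avgQuality >= 0.7 && agreement >= 0.8) return "HIGH";
  if (avgQuality >= 0.6 && agreement >= 0.6) return "MEDIUM";
  if (avgQuality >= 0.5 && agreement >= 0.4) return "LOW";
  return "INSUFFICIENT"; // assumed: below LOW thresholds means more research
}
```

HIGH and MEDIUM verdicts publish; LOW is flagged for review, matching the Gate 4 branches in the flowchart above.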

Not Yet Implemented

Gate 2: Contradiction Search (planned) - Counter-evidence actively searched

Gate 3: Uncertainty Quantification (planned) - Data gaps identified and disclosed

Information

Design Philosophy - This matrix shows the intended division of responsibilities between AKEL and humans. v2.6.33 implements the automated claim evaluation; human responsibilities require the user system (not yet implemented).

Manual vs Automated Matrix


graph TD
    subgraph Automated[Automated by AKEL]
        A1[Claim Evaluation]
        A2[Quality Assessment]
        A3[Content Management]
    end
    subgraph Human[Human Responsibilities]
        H1[Algorithm Improvement]
        H2[Policy Governance]
        H3[Exception Handling]
        H4[Strategic Decisions]
    end

Automated by AKEL

 Function           | Details                                                                                    | Status
 Claim Evaluation   | Evidence extraction, source scoring, verdict generation, risk classification, publication | Implemented
 Quality Assessment | Contradiction detection, confidence scoring, pattern recognition, anomaly flagging        | Partial (Gates 1 and 4)
 Content Management | KeyFactor generation, evidence linking, source tracking                                    | Implemented

Human Responsibilities

 Function              | Details                                                                    | Status
 Algorithm Improvement | Monitor metrics, identify issues, propose fixes, test, deploy              | Via code changes
 Policy Governance     | Set criteria, define risk tiers, establish thresholds, update guidelines   | Not implemented (env vars only)
 Exception Handling    | Review flagged items, handle abuse, address safety, manage legal           | Not implemented
 Strategic Decisions   | Budget, hiring, major policy, partnerships                                 | N/A

Key Principles

Never Manual:

  • Individual claim approval
  • Routine content review
  • Verdict overrides (fix algorithm instead)
  • Publication gates

AKEL handles all content decisions; humans improve the system, not the data.

10. Related Pages