Automation

Version 2.3 by Robert Schaub on 2025/12/24 20:30

Automation in FactHarbor amplifies human capability while implementing risk-based oversight.

This chapter defines:

  • Risk-based publication model
  • Quality gates for AI-generated content
  • What must remain human-only
  • What AI (AKEL) can draft and publish
  • What can be fully automated
  • How automation evolves through POC → Beta 0 → Release 1.0

1. POC v1 (AI-Generated Publication Demonstration)

The goal of POC v1 is to validate the automated reasoning capabilities and demonstrate AI-generated content publication.

1.1 Workflow

  1. Input: User pastes a block of raw text.
  2. Deep Analysis (Background): The system autonomously performs the full pipeline before displaying the text:
  • Extraction & normalization
  • Scenario & sub-query generation
  • Evidence retrieval with contradiction search
  • Quality gate validation
  • Verdict computation
  3. Visualization (Extraction & Marking): The system displays the text with claims extracted and marked.
  • Verdict-Based Coloring: The extraction highlights (e.g. orange/green) are chosen according to the computed verdict for each claim.
  • AI-Generated Label: Clear indication that the content is AI-produced.
  4. Inspection: User clicks a highlighted claim to see the Reasoning Trail, showing exactly which evidence and sub-queries led to that verdict.
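
The background pipeline in step 2 can be sketched as a short sequence of stages. The function names and toy logic below are illustrative only, not the actual FactHarbor/AKEL implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    sub_queries: list = field(default_factory=list)
    evidence: list = field(default_factory=list)
    verdict: str = "unrated"

def extract_claims(raw_text):
    # Toy extraction: one claim per sentence; real extraction also normalizes.
    return [Claim(s.strip()) for s in raw_text.split(".") if s.strip()]

def retrieve_evidence(claim):
    # Placeholder: a real system queries sources and actively searches
    # for contradicting evidence, not just support.
    return [{"source": "example.org", "supports": True}]

def passes_quality_gates(claim):
    # Placeholder for gates 1-4 (sources, contradiction search,
    # uncertainty, structure).
    return bool(claim.evidence)

def run_pipeline(raw_text):
    claims = extract_claims(raw_text)                    # extraction & normalization
    for c in claims:
        c.sub_queries = [f"Is it true that {c.text}?"]   # sub-query generation (toy)
        c.evidence = retrieve_evidence(c)                # retrieval + contradiction search
        if passes_quality_gates(c):                      # quality gate validation
            c.verdict = "supported"                      # verdict computation (toy)
    return claims
```

In the real system each stage would call into AKEL services; the sketch only shows the ordering and data flow (claims carry their sub-queries, evidence, and verdict through the pipeline).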

1.2 Technical Scope

  • AI-Generated Publication: Content published as Mode 2 (AI-Generated, no prior human review)
  • Quality Gates Active: All automated quality checks enforced
  • Contradiction Search Demonstrated: Shows counter-evidence and reservation detection
  • Risk Tier Classification: POC shows tier assignment (for demonstration purposes)
  • No Human Approval Gate: Demonstrates scalable AI publication
  • Structured Sub-Queries: Logic generated by decomposing claims into the FactHarbor data model

2. Publication Model

FactHarbor implements a risk-based publication model with three modes:

2.1 Mode 1: Draft-Only

Mode 1 (Draft-Only): Failed quality gates or high-risk content pending expert review. Internal review queue only.

See AKEL Publication Modes for detailed mode specifications.

2.2 Mode 2: AI-Generated (Public)

Mode 2 (AI-Generated, Published): Passed all quality gates, risk tier B or C, clearly labeled as AI-generated. Users can request human review.

See AKEL Publication Modes for detailed requirements.

2.3 Mode 3: Human-Reviewed

Mode 3 (Human-Reviewed, Published): Validated by human reviewers or experts, highest trust level. Required for Tier A content publication.

See AKEL Publication Modes for detailed requirements.

3. Risk Tiers and Automation Levels

Risk tiers determine review requirements and automation levels. See Governance for tier policy governance.

3.1 Tier A (High Risk)

  • Domains: Medical, legal, elections, safety, security
  • Automation: AI can draft, human review required for "Human-Reviewed" status
  • AI publication: Allowed with prominent disclaimers and warnings
  • Audit rate: Recommendation: 30-50%

3.2 Tier B (Medium Risk)

  • Domains: Complex policy, science, causality claims
  • Automation: AI can draft and publish (Mode 2)
  • Human review: Optional, audit-based
  • Audit rate: Recommendation: 10-20%

3.3 Tier C (Low Risk)

  • Domains: Definitions, established facts, historical data
  • Automation: AI publication default
  • Human review: On request or via sampling
  • Audit rate: Recommendation: 5-10%
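
The three publication modes and three risk tiers combine into a simple decision rule. The sketch below follows the Mode 2 requirement (quality gates passed, tier B or C); note that section 3.1 also allows Tier A AI publication with prominent disclaimers, so the real logic may differ:

```python
def publication_mode(risk_tier: str, gates_passed: bool,
                     human_reviewed: bool = False) -> int:
    """Map risk tier and quality-gate results to a publication mode (1-3).

    Illustrative sketch of the risk-based model; not the actual AKEL code.
    """
    if human_reviewed:
        return 3   # Mode 3: Human-Reviewed (required for Tier A publication)
    if gates_passed and risk_tier in ("B", "C"):
        return 2   # Mode 2: AI-Generated, published with a clear AI label
    return 1       # Mode 1: Draft-Only (failed gates or pending review)
```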

4. Human-Only Tasks

These require human judgment and cannot be automated:

  • Ethical boundary decisions (especially medical, political, psychological harm assessment)
  • Dispute resolution between conflicting expert opinions
  • Governance policy setting and enforcement
  • Final authority on Tier A "Human-Reviewed" status
  • Audit system oversight and quality standard definition
  • Risk tier policy adjustments based on societal context

5. AI-Draft with Audit (Semi-Automated)

AKEL drafts these; humans validate via sampling audits:

  • Scenario structures (definitions, assumptions, context)
  • Evaluation methods and reasoning chains
  • Evidence relevance assessment and ranking
  • Reliability scoring and source evaluation
  • Verdict reasoning with uncertainty quantification
  • Contradiction and reservation identification
  • Scenario comparison explanations
  • Public summaries and accessibility text

Most Tier B and C content remains in AI-draft status unless:

  • Users request human review
  • Audits identify errors
  • High engagement triggers review
  • Community flags issues
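
The four escalation triggers above can be expressed as one predicate. Field names and thresholds below are hypothetical, not part of the FactHarbor data model:

```python
def needs_human_review(item: dict) -> bool:
    """Return True if an AI-draft (Tier B/C) item should be routed
    to human review. Field names and thresholds are illustrative."""
    return (
        item.get("user_requested_review", False)     # users request human review
        or item.get("audit_found_errors", False)     # audits identify errors
        or item.get("engagement", 0) > 10_000        # high engagement (hypothetical cutoff)
        or item.get("community_flags", 0) >= 3       # community flags issues
    )
```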

6. Fully Automated Structural Tasks

These require no human interpretation:

  • Claim normalization (canonical form generation)
  • Duplicate detection (vector embeddings, clustering)
  • Evidence metadata extraction (dates, authors, publication info)
  • Basic reliability heuristics (source reputation scoring)
  • Contradiction detection (conflicting statements across sources)
  • Re-evaluation triggers (new evidence, source updates)
  • Layout generation (diagrams, summaries, UI presentation)
  • Federation integrity checks (cross-node data validation)
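
As an example of a fully automated structural task, duplicate detection over vector embeddings reduces to a similarity threshold. A minimal sketch (a production system would use a real embedding model and approximate nearest-neighbor search rather than a pairwise loop):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_duplicates(embeddings, threshold=0.95):
    """Flag index pairs whose embedding similarity exceeds the threshold.

    The 0.95 threshold is an illustrative assumption, not a FactHarbor value.
    """
    dupes = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine_similarity(embeddings[i], embeddings[j]) >= threshold:
                dupes.append((i, j))
    return dupes
```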

7. Quality Gates (Automated)

Before AI-generated publication (Mode 2), content must pass four automated quality gates:

  1. Source Quality - Primary sources verified, citations complete
  2. Contradiction Search (MANDATORY) - Counter-evidence actively sought
  3. Uncertainty Quantification - Confidence scores calculated
  4. Structural Validation - Required fields present, format valid
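
A minimal sketch of running the four gates in sequence; the per-gate checks are simplified stand-ins for the real gate logic specified in AKEL Quality Gates:

```python
def run_quality_gates(content: dict) -> tuple[bool, list[str]]:
    """Run the four automated gates; return (passed, failed_gate_names).

    Field names and checks are illustrative placeholders.
    """
    gates = {
        # Gate 1: primary sources verified, citations complete
        "source_quality": lambda c: bool(c.get("sources"))
            and all(s.get("verified") for s in c["sources"]),
        # Gate 2 (MANDATORY): counter-evidence actively sought
        "contradiction_search": lambda c: c.get("contradiction_search_done", False),
        # Gate 3: confidence scores calculated
        "uncertainty": lambda c: "confidence" in c,
        # Gate 4: required fields present, format valid
        "structural": lambda c: all(k in c for k in ("claim", "verdict", "sources")),
    }
    failed = [name for name, check in gates.items() if not check(content)]
    return (not failed, failed)
```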

See AKEL Quality Gates for complete gate specifications.

8. Audit System

Rather than reviewing all AI output, the system relies on systematic sampling audits to ensure quality:

8.1 Stratified Sampling

Audit samples are stratified by:
  • Risk tier (A > B > C sampling rates)
  • Confidence scores (low confidence → more audits)
  • Traffic/engagement (popular content audited more)
  • Novelty (new topics/claim types prioritized)
  • User flags and disagreement signals
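
A sketch of how these stratification signals could combine into a per-item audit probability. The base rates follow the tier recommendations in section 3; the confidence multiplier and flag weights are illustrative assumptions:

```python
def audit_rate(tier: str, confidence: float, flags: int = 0) -> float:
    """Compute a sampling probability for audits (illustrative weights).

    Base rates follow the tier recommendations (A: 30-50%, B: 10-20%,
    C: 5-10%); low confidence and user flags raise the rate.
    """
    base = {"A": 0.40, "B": 0.15, "C": 0.075}[tier]
    if confidence < 0.6:
        base *= 1.5                   # low-confidence items audited more often
    base += min(flags, 5) * 0.05      # each user flag adds 5 points, capped at 5 flags
    return min(base, 1.0)             # never exceed 100% sampling
```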

8.2 Continuous Improvement Loop

Audit findings improve:

  • Query templates
  • Source reliability weights
  • Contradiction detection algorithms
  • Risk tier assignment rules
  • Bubble detection heuristics

8.3 Transparency

  • Audit statistics published
  • Accuracy rates by tier reported
  • System improvements documented

9. Automation Roadmap

Automation capabilities increase with system maturity while maintaining quality oversight.

9.1 POC (Current Focus)

Automated:

  • Claim normalization
  • Scenario template generation
  • Evidence metadata extraction
  • Simple verdict drafts
  • AI-generated publication (Mode 2, with quality gates)
  • Contradiction search
  • Risk tier assignment

Human:

  • High-risk content validation (Tier A)
  • Sampling audits across all tiers
  • Quality standard refinement
  • Governance decisions

9.2 Beta 0 (Enhanced Automation)

Automated:

  • Detailed scenario generation
  • Advanced evidence reliability scoring
  • Cross-scenario comparisons
  • Multi-source contradiction detection
  • Internal Truth Landscape generation
  • Increased AI-draft coverage (more Tier B content)

Human:

  • Tier A final approval
  • Audit sampling (continued)
  • Expert validation of complex domains
  • Quality improvement oversight

9.3 Release 1.0 (High Automation)

Automated:

  • Full scenario generation (comprehensive)
  • Bayesian verdict scoring across scenarios
  • Multi-scenario summary generation
  • Anomaly detection across federated nodes
  • AKEL-assisted cross-node synchronization
  • Most Tier B and all Tier C auto-published

Human:

  • Tier A oversight (still required)
  • Strategic audits (lower sampling rates, higher value)
  • Ethical decisions and policy
  • Conflict resolution

10. Automation Levels Diagram

Information

Current Status: Level 0 (POC/Demo) - v2.6.33. FactHarbor is currently at POC level with full AKEL automation but limited production features.

Automation Maturity Progression


graph TD
    POC[Level 0 POC Demo CURRENT]
    R05[Level 0.5 Limited Production]
    R10[Level 1.0 Full Production]
    R20[Level 2.0+ Distributed Intelligence]

    POC --> R05
    R05 --> R10
    R10 --> R20

Level Descriptions

| Level | Name | Key Features |
|-------|------|--------------|
| Level 0 | POC/Demo (CURRENT) | All content auto-analyzed, AKEL generates verdicts, no risk tier filtering, single-user demo mode |
| Level 0.5 | Limited Production | Multi-user support, risk tier classification, basic sampling audit, algorithm improvement focus |
| Level 1.0 | Full Production | All tiers auto-published, clear risk labels, reduced sampling, mature algorithms |
| Level 2.0+ | Distributed | Federated multi-node, cross-node audits, advanced patterns, strategic sampling only |

Current Implementation (v2.6.33)

| Feature | POC Target | Actual Status |
|---------|------------|---------------|
| AKEL auto-analysis | Yes | Implemented |
| Verdict generation | Yes | Implemented (7-point scale) |
| Quality Gates | Basic | Gates 1 and 4 implemented |
| Risk tiers | Yes | Not implemented |
| Sampling audits | High sampling | Not implemented |
| User system | Demo only | Anonymous only |

Key Principles

Across All Levels:

  • AKEL makes all publication decisions
  • No human approval gates
  • Humans monitor metrics and improve algorithms
  • Risk tiers guide audit priorities, not publication
  • Sampling audits inform improvements

11. Automation Roadmap Diagram

Information

Current Status: POC (v2.6.33) - FactHarbor is at Proof of Concept stage. No risk tiers, no sampling audits yet.

Automation Roadmap


graph LR
    subgraph QA[Quality Assurance Evolution]
        QA1[Initial High Sampling]
        QA2[Intermediate Strategic]
        QA3[Mature Anomaly-Triggered]

        QA1 --> QA2
        QA2 --> QA3
    end

    subgraph POC[POC CURRENT]
        POC_F[POC Features]
    end

    subgraph R05[Release 0.5]
        R05_F[Limited Production]
    end

    subgraph R10[Release 1.0]
        R10_F[Full Production]
    end

    subgraph Future[Future]
        Future_F[Distributed Intelligence]
    end

    POC_F --> R05_F
    R05_F --> R10_F
    R10_F --> Future_F

Phase Details

POC (Current v2.6.33)

  • All content analyzed
  • Basic AKEL Processing
  • No risk tiers yet
  • No sampling audits

Release 0.5 (Planned)

  • Tier A/B/C content published
  • Full auto-publication
  • Risk Labels Active
  • Contradiction Detection
  • Sampling-Based QA

Release 1.0 (Planned)

  • Comprehensive AI Publication
  • Strategic Audits Only
  • Federated Nodes Beta
  • Cross-Node Data Sharing
  • Mature Algorithm Performance

Future (V2.0+)

  • Advanced Pattern Detection
  • Global Contradiction Network
  • Minimal Human QA
  • Full Federation

Philosophy

Automation Philosophy: At all stages, AKEL publishes automatically. Humans improve algorithms, not review content.

Sampling Rates: Start higher for learning, reduce as confidence grows.

12. Manual vs Automated Matrix

Information

Design Philosophy - This matrix shows the intended division of responsibilities between AKEL and humans. v2.6.33 implements the automated claim evaluation; human responsibilities require the user system (not yet implemented).

Manual vs Automated Matrix


graph TD
    subgraph Automated[Automated by AKEL]
        A1[Claim Evaluation]
        A2[Quality Assessment]
        A3[Content Management]
    end
    subgraph Human[Human Responsibilities]
        H1[Algorithm Improvement]
        H2[Policy Governance]
        H3[Exception Handling]
        H4[Strategic Decisions]
    end

Automated by AKEL

| Function | Details | Status |
|----------|---------|--------|
| Claim Evaluation | Evidence extraction, source scoring, verdict generation, risk classification, publication | Implemented |
| Quality Assessment | Contradiction detection, confidence scoring, pattern recognition, anomaly flagging | Partial (Gates 1 and 4) |
| Content Management | KeyFactor generation, evidence linking, source tracking | Implemented |

Human Responsibilities

| Function | Details | Status |
|----------|---------|--------|
| Algorithm Improvement | Monitor metrics, identify issues, propose fixes, test, deploy | Via code changes |
| Policy Governance | Set criteria, define risk tiers, establish thresholds, update guidelines | Not implemented (env vars only) |
| Exception Handling | Review flagged items, handle abuse, address safety, manage legal | Not implemented |
| Strategic Decisions | Budget, hiring, major policy, partnerships | N/A |

Key Principles

Never Manual:

  • Individual claim approval
  • Routine content review
  • Verdict overrides (fix algorithm instead)
  • Publication gates

Key Principle: AKEL handles all content decisions. Humans improve the system, not the data.

13. Related Pages