Automation
Automation in FactHarbor amplifies human capability while implementing risk-based oversight.
This chapter defines:
- Risk-based publication model
- Quality gates for AI-generated content
- What must remain human-only
- What AI (AKEL) can draft and publish
- What can be fully automated
- How automation evolves through POC → Beta 0 → Release 1.0
1. POC v1 (AI-Generated Publication Demonstration)
The goal of POC v1 is to validate the automated reasoning capabilities and demonstrate AI-generated content publication.
1.1 Workflow
- Input: User pastes a block of raw text.
- Deep Analysis (Background): The system autonomously performs the full pipeline before displaying the text (a sketch of this pipeline follows the list):
  - Extraction & Normalisation
  - Scenario & Sub-query generation
  - Evidence retrieval with contradiction search
  - Quality gate validation
  - Verdict computation
- Visualisation (Extraction & Marking): The system displays the text with claims extracted and marked.
- Verdict-Based Coloring: The extraction highlights (e.g. Orange/Green) are chosen according to the computed verdict for each claim.
- AI-Generated Label: Clear indication that the content is AI-produced.
- Inspection: User clicks a highlighted claim to see the Reasoning Trail, showing exactly which evidence and sub-queries led to that verdict.
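A minimal sketch of this pipeline, assuming hypothetical stage functions and a simplified result type (none of these identifiers come from the FactHarbor codebase; sub-query generation and quality gates are elided for brevity):

```python
from dataclasses import dataclass, field

@dataclass
class ClaimResult:
    claim: str
    verdict: str                              # e.g. "supported", "contested", "unverified"
    confidence: float                         # 0.0-1.0
    reasoning_trail: list[str] = field(default_factory=list)

# Placeholder stages; the real system would call the corresponding AKEL components.
def extract_claims(text: str) -> list[str]:
    """Extraction & normalisation (stub: naive sentence split)."""
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_evidence(claim: str) -> list[str]:
    """Evidence retrieval with contradiction search (stub)."""
    return []

def compute_verdict(claim: str, evidence: list[str]) -> ClaimResult:
    """Verdict computation (stub: trivially 'unverified' without evidence)."""
    verdict = "supported" if evidence else "unverified"
    return ClaimResult(claim, verdict, confidence=0.5,
                       reasoning_trail=[f"{len(evidence)} evidence items considered"])

def analyze_text(raw_text: str) -> list[ClaimResult]:
    """End-to-end POC v1 sketch: extract claims, gather evidence, compute verdicts."""
    return [compute_verdict(c, retrieve_evidence(c)) for c in extract_claims(raw_text)]
```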
1.2 Technical Scope
- AI-Generated Publication: Content published as Mode 2 (AI-Generated, no prior human review)
- Quality Gates Active: All automated quality checks enforced
- Contradiction Search Demonstrated: Shows counter-evidence and reservation detection
- Risk Tier Classification: Tier assignment is shown for demonstration purposes
- No Human Approval Gate: Demonstrates scalable AI publication
- Structured Sub-Queries: Logic generated by decomposing claims into the FactHarbor data model
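As an illustration of what decomposing a claim into the FactHarbor data model could look like, here is a sketch; the class names, field names, and the example claim are assumptions for illustration, not the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SubQuery:
    question: str                      # a single checkable question derived from the claim
    seeks_contradiction: bool = False  # True when the query hunts for counter-evidence

@dataclass
class ClaimRecord:
    original_text: str
    normalized_text: str
    risk_tier: str                     # "A", "B", or "C"
    sub_queries: list[SubQuery] = field(default_factory=list)

# Example decomposition of one claim (illustrative content only)
record = ClaimRecord(
    original_text="Coffee consumption lowers the risk of heart disease.",
    normalized_text="coffee consumption reduces heart disease risk",
    risk_tier="A",                     # medical content falls under Tier A
    sub_queries=[
        SubQuery("What do systematic reviews say about coffee and cardiovascular risk?"),
        SubQuery("Is there evidence that coffee increases cardiovascular risk?",
                 seeks_contradiction=True),
    ],
)
```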
2. Publication Model
FactHarbor implements a risk-based publication model with three modes:
2.1 Mode 1: Draft-Only
Mode 1 (Draft-Only): Failed quality gates or high-risk content pending expert review. Internal review queue only.
See AKEL Publication Modes for detailed mode specifications.
2.2 Mode 2: AI-Generated (Public)
Mode 2 (AI-Generated, Published): Passed all quality gates, risk tier B or C, clearly labeled as AI-generated. Users can request human review.
See AKEL Publication Modes for detailed requirements.
2.3 Mode 3: Human-Reviewed
Mode 3 (Human-Reviewed, Published): Validated by human reviewers or experts, highest trust level. Required for Tier A content publication.
See AKEL Publication Modes for detailed requirements.
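One way to read the three modes as routing logic; a minimal sketch with assumed function and parameter names (the authoritative rules are in AKEL Publication Modes and the risk-tier policy in Section 3):

```python
from enum import Enum

class PublicationMode(Enum):
    DRAFT_ONLY = 1      # Mode 1: failed gates or high-risk content pending review
    AI_GENERATED = 2    # Mode 2: passed gates, Tier B or C, labeled AI-generated
    HUMAN_REVIEWED = 3  # Mode 3: validated by humans, highest trust level

def select_mode(risk_tier: str, gates_passed: bool, human_validated: bool) -> PublicationMode:
    """Routing sketch based on the mode definitions above (assumed logic, not the spec)."""
    if human_validated:
        return PublicationMode.HUMAN_REVIEWED
    if gates_passed and risk_tier in ("B", "C"):
        return PublicationMode.AI_GENERATED
    return PublicationMode.DRAFT_ONLY
```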
3. Risk Tiers and Automation Levels
Risk tiers determine review requirements and automation levels. See Governance for how tier policies are set and maintained.
3.1 Tier A (High Risk)
- Domains: Medical, legal, elections, safety, security
- Automation: AI can draft, human review required for "Human-Reviewed" status
- AI publication: Allowed with prominent disclaimers and warnings
- Audit rate: 30-50% (recommended)
3.2 Tier B (Medium Risk)
- Domains: Complex policy, science, causality claims
- Automation: AI can draft and publish (Mode 2)
- Human review: Optional, audit-based
- Audit rate: 10-20% (recommended)
3.3 Tier C (Low Risk)
- Domains: Definitions, established facts, historical data
- Automation: AI publication default
- Human review: On request or via sampling
- Audit rate: 5-10% (recommended)
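The tier policy above can also be read as configuration, which keeps the audit rates as recommendations rather than hard-coded rules. The dictionary below is an illustrative sketch whose keys are assumptions; the rates mirror the recommendations above:

```python
# Assumed configuration schema; rates are the recommended ranges, not commitments.
RISK_TIER_POLICY = {
    "A": {  # Medical, legal, elections, safety, security
        "ai_can_publish": True,            # with prominent disclaimers and warnings
        "human_review_for_mode_3": True,   # required for "Human-Reviewed" status
        "audit_rate": (0.30, 0.50),
    },
    "B": {  # Complex policy, science, causality claims
        "ai_can_publish": True,            # Mode 2 by default
        "human_review_for_mode_3": False,  # optional, audit-based
        "audit_rate": (0.10, 0.20),
    },
    "C": {  # Definitions, established facts, historical data
        "ai_can_publish": True,            # AI publication is the default
        "human_review_for_mode_3": False,  # on request or via sampling
        "audit_rate": (0.05, 0.10),
    },
}
```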
4. Human-Only Tasks
These require human judgment and cannot be automated:
- Ethical boundary decisions (especially medical, political, psychological harm assessment)
- Dispute resolution between conflicting expert opinions
- Governance policy setting and enforcement
- Final authority on Tier A "Human-Reviewed" status
- Audit system oversight and quality standard definition
- Risk tier policy adjustments based on societal context
5. AI-Draft with Audit (Semi-Automated)
AKEL drafts these; humans validate via sampling audits:
- Scenario structures (definitions, assumptions, context)
- Evaluation methods and reasoning chains
- Evidence relevance assessment and ranking
- Reliability scoring and source evaluation
- Verdict reasoning with uncertainty quantification
- Contradiction and reservation identification
- Scenario comparison explanations
- Public summaries and accessibility text
Most Tier B and C content remains in AI-draft status unless one of the following triggers applies (see the sketch after this list):
- Users request human review
- Audits identify errors
- High engagement triggers review
- Community flags issues
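A minimal sketch of how these escalation triggers could be checked; the field names and the engagement threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    user_requested_review: bool = False
    audit_found_errors: bool = False
    engagement_score: float = 0.0   # e.g. normalized views/shares, 0.0-1.0
    community_flags: int = 0

def needs_human_review(signals: ContentSignals, engagement_threshold: float = 0.9) -> bool:
    """True when any escalation trigger from the list above fires (threshold is assumed)."""
    return (signals.user_requested_review
            or signals.audit_found_errors
            or signals.engagement_score >= engagement_threshold
            or signals.community_flags > 0)
```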
6. Fully Automated Structural Tasks
These require no human interpretation:
- Claim normalization (canonical form generation)
- Duplicate detection (vector embeddings, clustering; see the sketch after this list)
- Evidence metadata extraction (dates, authors, publication info)
- Basic reliability heuristics (source reputation scoring)
- Contradiction detection (conflicting statements across sources)
- Re-evaluation triggers (new evidence, source updates)
- Layout generation (diagrams, summaries, UI presentation)
- Federation integrity checks (cross-node data validation)
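As one example of a fully automated structural task, the sketch below shows duplicate detection over precomputed claim embeddings. The similarity threshold is an assumption, and clustering is reduced to pairwise comparison for brevity; the embedding model itself is out of scope here:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_duplicates(embeddings: dict[str, list[float]],
                    threshold: float = 0.95) -> list[tuple[str, str]]:
    """Pairs of claim IDs whose embeddings are nearly identical (assumed threshold)."""
    ids = list(embeddings)
    return [(ids[i], ids[j])
            for i in range(len(ids))
            for j in range(i + 1, len(ids))
            if cosine_similarity(embeddings[ids[i]], embeddings[ids[j]]) >= threshold]
```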
7. Quality Gates (Automated)
Before AI-generated publication (Mode 2), content must pass four automated quality gates:
1. Source Quality - Primary sources verified, citations complete
2. Contradiction Search (MANDATORY) - Counter-evidence actively sought
3. Uncertainty Quantification - Confidence scores calculated
4. Structural Validation - Required fields present, format valid
See AKEL Quality Gates for complete gate specifications.
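A minimal sketch of how the four gates could be enforced before Mode 2 publication; the record fields and predicate implementations are assumptions, not the AKEL Quality Gates specification:

```python
from collections.abc import Callable

# Each gate is a predicate over a claim record; all four must pass for Mode 2.
QualityGate = Callable[[dict], bool]

def source_quality(record: dict) -> bool:
    return bool(record.get("primary_sources")) and bool(record.get("citations_complete"))

def contradiction_search(record: dict) -> bool:
    return bool(record.get("counter_evidence_searched"))   # MANDATORY gate

def uncertainty_quantification(record: dict) -> bool:
    return record.get("confidence") is not None

def structural_validation(record: dict) -> bool:
    return all(key in record for key in ("claim", "verdict", "evidence"))

GATES: list[QualityGate] = [source_quality, contradiction_search,
                            uncertainty_quantification, structural_validation]

def passes_quality_gates(record: dict) -> bool:
    """All gates must pass; otherwise the content stays Draft-Only (Mode 1)."""
    return all(gate(record) for gate in GATES)
```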
8. Audit System
Instead of reviewing all AI output, systematic sampling audits ensure quality:
8.1 Stratified Sampling
Audit samples are stratified by:
- Risk tier (A > B > C sampling rates)
- Confidence scores (low confidence → more audits)
- Traffic/engagement (popular content audited more)
- Novelty (new topics/claim types prioritized)
- User flags and disagreement signals
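One way the stratification signals above could combine into an audit-selection probability; the base rates follow the recommended ranges in Section 3, and the weights are illustrative assumptions, not tuned values:

```python
import random

# Midpoints of the recommended audit-rate ranges from Section 3 (illustrative defaults).
BASE_RATE_BY_TIER = {"A": 0.40, "B": 0.15, "C": 0.075}

def audit_probability(tier: str, confidence: float, engagement: float,
                      is_novel: bool, user_flags: int) -> float:
    """Combine the stratification signals into one sampling probability (capped at 1.0)."""
    p = BASE_RATE_BY_TIER[tier]
    p += (1.0 - confidence) * 0.20      # low confidence -> more audits
    p += engagement * 0.10              # popular content audited more
    p += 0.10 if is_novel else 0.0      # new topics / claim types prioritized
    p += min(user_flags, 5) * 0.05      # user flags and disagreement signals
    return min(p, 1.0)

def select_for_audit(tier: str, confidence: float, engagement: float,
                     is_novel: bool, user_flags: int) -> bool:
    return random.random() < audit_probability(tier, confidence, engagement,
                                               is_novel, user_flags)
```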
8.2 Continuous Improvement Loop
Audit findings improve:
- Query templates
- Source reliability weights
- Contradiction detection algorithms
- Risk tier assignment rules
- Bubble detection heuristics
8.3 Transparency
- Audit statistics published
- Accuracy rates by tier reported
- System improvements documented
9. Automation Roadmap
Automation capabilities increase with system maturity while maintaining quality oversight.
9.1 POC (Current Focus)
Automated:
- Claim normalization
- Scenario template generation
- Evidence metadata extraction
- Simple verdict drafts
- AI-generated publication (Mode 2, with quality gates)
- Contradiction search
- Risk tier assignment
Human:
- High-risk content validation (Tier A)
- Sampling audits across all tiers
- Quality standard refinement
- Governance decisions
9.2 Beta 0 (Enhanced Automation)
Automated:
- Detailed scenario generation
- Advanced evidence reliability scoring
- Cross-scenario comparisons
- Multi-source contradiction detection
- Internal Truth Landscape generation
- Increased AI-draft coverage (more Tier B content)
Human:
- Tier A final approval
- Audit sampling (continued)
- Expert validation of complex domains
- Quality improvement oversight
9.3 Release 1.0 (High Automation)
Automated:
- Full scenario generation (comprehensive)
- Bayesian verdict scoring across scenarios
- Multi-scenario summary generation
- Anomaly detection across federated nodes
- AKEL-assisted cross-node synchronization
- Most Tier B and all Tier C auto-published
Human:
- Tier A oversight (still required)
- Strategic audits (lower sampling rates, higher value)
- Ethical decisions and policy
- Conflict resolution
10. Automation Levels Diagram
This diagram shows the progression of automation levels from POC through Release 1.0 and beyond.
```mermaid
graph TD
    subgraph "Automation Maturity Progression"
        POC["Level 0: POC/Demo
        - Tier C only
        - AKEL generates, publishes with disclaimers
        - High sampling audit
        - Proof of concept"]
        R05["Release 0.5: Limited Production
        - Tier B/C auto-published
        - Tier A flagged for moderator review
        - Higher sampling initially
        - Algorithm improvement focus"]
        R10["Release 1.0: Full Production
        - All tiers auto-published
        - Clear risk labels on all content
        - Reduced sampling as confidence grows
        - Mature algorithm performance"]
        R20["Release 2.0+: Distributed Intelligence
        - Federated multi-node operation
        - Cross-node audit sharing
        - Advanced pattern detection
        - Strategic sampling only"]
        POC --> R05
        R05 --> R10
        R10 --> R20
    end
    style POC fill:#e1f5ff
    style R05 fill:#d4edff
    style R10 fill:#c7e5ff
    style R20 fill:#baddff
```
Key Principles Across All Levels:
- AKEL makes all publication decisions
- No human approval gates at any level
- Humans monitor metrics and improve algorithms
- Sampling audits inform improvements, don't block publication
- Risk tiers guide audit priorities, not publication permissions
11. Automation Roadmap Diagram
This diagram shows the automation roadmap from POC through Release 1.0.
```mermaid
graph LR
    subgraph "Quality Assurance Evolution"
        QA1["Initial: High Sampling
        Higher rates for Tier A
        Moderate rates for Tier B
        Lower rates for Tier C"]
        QA2["Intermediate: Strategic Sampling
        Focus on high-value learning
        Sample new domains more
        Reduce routine sampling"]
        QA3["Mature: Anomaly-Triggered
        Sample based on metrics
        Investigate unusual patterns
        Strategic domain sampling"]
        QA1 --> QA2
        QA2 --> QA3
    end
    subgraph "POC: Proof of Concept"
        POC["POC Features
        - Tier C Only
        - Basic AKEL Processing
        - Simple Risk Classification
        - High Audit Sampling"]
    end
    subgraph "Release 0.5: Limited Production"
        R05["R0.5 Features
        - Tier A/B/C Published
        - All auto-publication
        - Risk Labels Active
        - Contradiction Detection
        - Sampling-Based QA"]
    end
    subgraph "Release 1.0: Full Production"
        R10["R1.0 Features
        - Comprehensive AI Publication
        - Strategic Audits Only
        - Federated Nodes (Beta)
        - Cross-Node Data Sharing
        - Mature Algorithm Performance"]
    end
    subgraph "Future: Distributed Intelligence"
        Future["Future Features
        - Advanced Pattern Detection
        - Global Contradiction Network
        - Minimal Human QA (Anomalies Only)
        - Full Federation"]
    end
    POC --> R05
    R05 --> R10
    R10 --> Future
    style POC fill:#e1f5ff
    style R05 fill:#d4edff
    style R10 fill:#c7e5ff
    style Future fill:#baddff
```
Automation Philosophy: At all stages, AKEL publishes automatically. Humans improve algorithms, not review content.
Sampling Rates: Start higher for learning, reduce as confidence grows. Rates are recommendations, not commitments.
12. Manual vs Automated Matrix
```mermaid
graph TD
    subgraph "Automated by AKEL"
        A1["Claim Evaluation
        - Evidence extraction
        - Source scoring
        - Verdict generation
        - Risk classification
        - Publication"]
        A2["Quality Assessment
        - Contradiction detection
        - Confidence scoring
        - Pattern recognition
        - Anomaly flagging"]
        A3["Content Management
        - Scenario generation
        - Evidence linking
        - Source tracking
        - Version control"]
    end
    subgraph "Human Responsibilities"
        H1["Algorithm Improvement
        - Monitor performance metrics
        - Identify systematic issues
        - Propose fixes
        - Test improvements
        - Deploy updates"]
        H2["Policy Governance
        - Set evaluation criteria
        - Define risk tiers
        - Establish thresholds
        - Update guidelines"]
        H3["Exception Handling
        - Review AKEL-flagged items
        - Handle abuse/manipulation
        - Address safety concerns
        - Manage legal issues"]
        H4["Strategic Decisions
        - Budget and resources
        - Hiring and roles
        - Major policy changes
        - Partnership agreements"]
    end
    style A1 fill:#c7e5ff
    style A2 fill:#c7e5ff
    style A3 fill:#c7e5ff
    style H1 fill:#ffe5cc
    style H2 fill:#ffe5cc
    style H3 fill:#ffe5cc
    style H4 fill:#ffe5cc
```
Key Principle: AKEL handles all content decisions. Humans improve the system, not the data.
Never Manual:
- Individual claim approval
- Routine content review
- Verdict overrides (fix algorithm instead)
- Publication gates