Automation
Automation in FactHarbor amplifies human capability while implementing risk-based oversight.
This chapter defines:
- Risk-based publication model
- Quality gates for AI-generated content
- What must remain human-only
- What AI (AKEL) can draft and publish
- What can be fully automated
- How automation evolves through POC → Beta 0 → Release 1.0
1. POC v1 (AI-Generated Publication Demonstration)
The goal of POC v1 is to validate the automated reasoning capabilities and demonstrate AI-generated content publication.
1.1 Workflow
- Input: User pastes a block of raw text.
- Deep Analysis (Background): The system autonomously performs the full pipeline before displaying the text:
  - Extraction & Normalisation
  - Scenario & Sub-query generation
  - Evidence retrieval with contradiction search
  - Quality gate validation
  - Verdict computation
- Visualisation (Extraction & Marking): The system displays the text with claims extracted and marked.
- Verdict-Based Coloring: The extraction highlights (e.g. Orange/Green) are chosen according to the computed verdict for each claim.
- AI-Generated Label: Clear indication that content is AI-produced
- Inspection: User clicks a highlighted claim to see the Reasoning Trail, showing exactly which evidence and sub-queries led to that verdict.
1.2 Technical Scope
- AI-Generated Publication: Content published as Mode 2 (AI-Generated, no prior human review)
- Quality Gates Active: All automated quality checks enforced
- Contradiction Search Demonstrated: Shows counter-evidence and reservation detection
- Risk Tier Classification: POC shows tier assignment (demo purposes)
- No Human Approval Gate: Demonstrates scalable AI publication
- Structured Sub-Queries: Logic generated by decomposing claims into the FactHarbor data model
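The POC pipeline described in 1.1 can be sketched end to end. Every class, field, and function name here is a hypothetical illustration of the flow, not the actual FactHarbor/AKEL API; the extraction and verdict steps are deliberately stubbed.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    verdict: str = "unverified"                # drives the highlight colour in the UI
    trail: list = field(default_factory=list)  # reasoning-trail entries for inspection

def analyze(raw_text: str) -> list[Claim]:
    """Hypothetical sketch of the POC background pipeline (section 1.1)."""
    # Extraction & normalisation (naive sentence split, for illustration only)
    claims = [Claim(s.strip()) for s in raw_text.split(".") if s.strip()]
    for c in claims:
        c.trail.append("sub-queries generated")        # scenario & sub-query generation
        c.trail.append("evidence + counter-evidence")  # retrieval with contradiction search
        c.trail.append("quality gates passed")         # gate validation (stubbed)
        c.verdict = "supported"                        # verdict computation (stubbed)
    return claims

claims = analyze("Water boils at 100 C at sea level. Glass is a liquid.")
print(len(claims), claims[0].verdict)
```

Clicking a highlighted claim in the UI would then render `trail`, which is exactly the "Reasoning Trail" the Inspection step exposes.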
2. Publication Model
FactHarbor implements a risk-based publication model with three modes:
2.1 Mode 1: Draft-Only
- Failed quality gates
- High-risk content pending expert review
- Internal review queue only
2.2 Mode 2: AI-Generated (Public)
- Passed all quality gates
- Risk tier B or C
- Clear AI-generated labeling
- Users can request human review
2.3 Mode 3: Human-Reviewed
- Validated by human reviewers/experts
- "Human-Reviewed" status badge
- Required for Tier A content publication
See AKEL page for detailed publication mode descriptions.
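The mode-selection rule implied by sections 2.1–2.3 can be written as a small decision function. This follows Mode 2's stated Tier B/C restriction; the function name and signature are assumptions, not the real implementation.

```python
def publication_mode(risk_tier: str, gates_passed: bool, human_reviewed: bool) -> int:
    """Map risk tier and quality-gate results to a publication mode (1, 2, or 3)."""
    if human_reviewed:
        return 3   # Mode 3: Human-Reviewed
    if not gates_passed or risk_tier == "A":
        return 1   # Mode 1: Draft-Only (failed gates, or Tier A pending expert review)
    return 2       # Mode 2: AI-Generated (Public) -- Tier B/C, all gates passed

print(publication_mode("B", True, False))  # Tier B draft that passed all gates
```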
3. Risk Tiers and Automation Levels
3.1 Tier A (High Risk)
- Domains: Medical, legal, elections, safety, security
- Automation: AI can draft, human review required for "Human-Reviewed" status
- AI publication: Allowed with prominent disclaimers and warnings
- Audit rate: 30-50% (recommended)
3.2 Tier B (Medium Risk)
- Domains: Complex policy, science, causality claims
- Automation: AI can draft and publish (Mode 2)
- Human review: Optional, audit-based
- Audit rate: 10-20% (recommended)
3.3 Tier C (Low Risk)
- Domains: Definitions, established facts, historical data
- Automation: AI publication default
- Human review: On request or via sampling
- Audit rate: 5-10% (recommended)
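The tier policy above fits naturally in a small configuration table. The ranges come from sections 3.1–3.3; picking the midpoint is an illustrative choice, not a specified rule.

```python
# Recommended audit-rate ranges from sections 3.1-3.3 (Tier A > B > C).
AUDIT_RANGES = {"A": (0.30, 0.50), "B": (0.10, 0.20), "C": (0.05, 0.10)}

def audit_rate(tier: str) -> float:
    """Pick the midpoint of a tier's recommended audit range (midpoint is an assumption)."""
    lo, hi = AUDIT_RANGES[tier]
    return (lo + hi) / 2

print(audit_rate("A"))
```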
4. Human-Only Tasks
These require human judgment and cannot be automated:
- Ethical boundary decisions (especially medical, political, psychological harm assessment)
- Dispute resolution between conflicting expert opinions
- Governance policy setting and enforcement
- Final authority on Tier A "Human-Reviewed" status
- Audit system oversight and quality standard definition
- Risk tier policy adjustments based on societal context
5. AI-Draft with Audit (Semi-Automated)
AKEL drafts these; humans validate via sampling audits:
- Scenario structures (definitions, assumptions, context)
- Evaluation methods and reasoning chains
- Evidence relevance assessment and ranking
- Reliability scoring and source evaluation
- Verdict reasoning with uncertainty quantification
- Contradiction and reservation identification
- Scenario comparison explanations
- Public summaries and accessibility text
Most Tier B and C content remains in AI-draft status unless:
- Users request human review
- Audits identify errors
- High engagement triggers review
- Community flags issues
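The four escalation triggers above can be expressed as a single predicate. The engagement threshold is an invented placeholder; the real trigger values would be governance-defined.

```python
def needs_human_review(user_requested: bool, audit_found_error: bool,
                       engagement: int, community_flags: int,
                       engagement_threshold: int = 10_000) -> bool:
    """True when any section-5 escalation trigger fires (threshold is an assumption)."""
    return (user_requested
            or audit_found_error
            or engagement >= engagement_threshold   # high engagement triggers review
            or community_flags > 0)                 # community flags issues

print(needs_human_review(False, False, 250, 0))  # no trigger fires -> False
```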
6. Fully Automated Structural Tasks
These require no human interpretation:
- Claim normalization (canonical form generation)
- Duplicate detection (vector embeddings, clustering)
- Evidence metadata extraction (dates, authors, publication info)
- Basic reliability heuristics (source reputation scoring)
- Contradiction detection (conflicting statements across sources)
- Re-evaluation triggers (new evidence, source updates)
- Layout generation (diagrams, summaries, UI presentation)
- Federation integrity checks (cross-node data validation)
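As an example of one fully automated task, duplicate detection over claim embeddings reduces to a similarity threshold. The 0.95 cutoff is an assumption for illustration; production systems would tune it and use approximate nearest-neighbour search at scale.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def is_duplicate(emb_a: list[float], emb_b: list[float], threshold: float = 0.95) -> bool:
    """Treat two claim embeddings as duplicates above a similarity threshold (assumed)."""
    return cosine(emb_a, emb_b) >= threshold

print(is_duplicate([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> True
```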
7. Quality Gates (Automated)
Before AI-draft publication (Mode 2), content must pass:
1. Source Quality Gate
   - Primary sources verified
   - Citations complete and accessible
   - Source reliability scored
2. Contradiction Search Gate (MANDATORY)
   - Counter-evidence actively sought
   - Reservations and limitations identified
   - Bubble detection (echo chambers, conspiracy theories)
   - Diverse perspective verification
3. Uncertainty Quantification Gate
   - Confidence scores calculated
   - Limitations stated
   - Data gaps disclosed
4. Structural Integrity Gate
   - No hallucinations detected
   - Logic chain valid
   - References verifiable
See AKEL page for detailed quality gate specifications.
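The four gates can run as a simple all-or-nothing pipeline before Mode 2 publication. The content fields and gate checks below are hypothetical stand-ins for AKEL's real checks.

```python
# Each gate maps a name to a predicate over the draft content (fields assumed).
GATES = {
    "source_quality":       lambda c: c["sources_verified"] and c["citations_complete"],
    "contradiction_search": lambda c: c["counter_evidence_sought"],   # MANDATORY
    "uncertainty":          lambda c: c["confidence"] is not None,
    "structural_integrity": lambda c: not c["hallucination_flag"],
}

def passes_gates(content: dict) -> tuple[bool, list[str]]:
    """Run every gate; any failure blocks Mode 2 and forces Mode 1 (Draft-Only)."""
    failed = [name for name, check in GATES.items() if not check(content)]
    return (not failed, failed)

draft = {"sources_verified": True, "citations_complete": True,
         "counter_evidence_sought": False, "confidence": 0.8,
         "hallucination_flag": False}
print(passes_gates(draft))  # contradiction search not done -> blocked
```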
8. Audit System
Instead of reviewing all AI output, systematic sampling audits ensure quality:
8.1 Stratified Sampling
- Risk tier (A > B > C sampling rates)
- Confidence scores (low confidence → more audits)
- Traffic/engagement (popular content audited more)
- Novelty (new topics/claim types prioritized)
- User flags and disagreement signals
8.2 Continuous Improvement Loop
Audit findings improve:
- Query templates
- Source reliability weights
- Contradiction detection algorithms
- Risk tier assignment rules
- Bubble detection heuristics
8.3 Transparency
- Audit statistics published
- Accuracy rates by tier reported
- System improvements documented
9. Automation Roadmap
Automation capabilities increase with system maturity while maintaining quality oversight.
9.1 POC (Current Focus)
Automated:
- Claim normalization
- Scenario template generation
- Evidence metadata extraction
- Simple verdict drafts
- AI-generated publication (Mode 2, with quality gates)
- Contradiction search
- Risk tier assignment
Human:
- High-risk content validation (Tier A)
- Sampling audits across all tiers
- Quality standard refinement
- Governance decisions
9.2 Beta 0 (Enhanced Automation)
Automated:
- Detailed scenario generation
- Advanced evidence reliability scoring
- Cross-scenario comparisons
- Multi-source contradiction detection
- Internal Truth Landscape generation
- Increased AI-draft coverage (more Tier B content)
Human:
- Tier A final approval
- Audit sampling (continued)
- Expert validation of complex domains
- Quality improvement oversight
9.3 Release 1.0 (High Automation)
Automated:
- Full scenario generation (comprehensive)
- Bayesian verdict scoring across scenarios
- Multi-scenario summary generation
- Anomaly detection across federated nodes
- AKEL-assisted cross-node synchronization
- Most Tier B and all Tier C auto-published
Human:
- Tier A oversight (still required)
- Strategic audits (lower sampling rates, higher value)
- Ethical decisions and policy
- Conflict resolution
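Release 1.0's "Bayesian verdict scoring" suggests sequential posterior updates as evidence accumulates. The sketch below is the textbook form of Bayes' rule; the likelihood numbers are invented for illustration.

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(claim true | evidence) by Bayes' rule."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1.0 - prior))

p = 0.5                                   # uninformative prior
for lt, lf in [(0.9, 0.2), (0.7, 0.4)]:   # two pieces of supporting evidence
    p = bayes_update(p, lt, lf)
print(round(p, 3))
```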
10. Automation Levels Diagram
Automation Maturity Progression
```mermaid
graph TD
    POC[Level 0 POC Demo CURRENT]
    R05[Level 0.5 Limited Production]
    R10[Level 1.0 Full Production]
    R20[Level 2.0+ Distributed Intelligence]
    POC --> R05
    R05 --> R10
    R10 --> R20
```
Level Descriptions
| Level | Name | Key Features |
|---|---|---|
| Level 0 | POC/Demo (CURRENT) | All content auto-analyzed, AKEL generates verdicts, no risk tier filtering, single-user demo mode |
| Level 0.5 | Limited Production | Multi-user support, risk tier classification, basic sampling audit, algorithm improvement focus |
| Level 1.0 | Full Production | All tiers auto-published, clear risk labels, reduced sampling, mature algorithms |
| Level 2.0+ | Distributed | Federated multi-node, cross-node audits, advanced patterns, strategic sampling only |
Current Implementation (v2.6.33)
| Feature | POC Target | Actual Status |
|---|---|---|
| AKEL auto-analysis | Yes | Implemented |
| Verdict generation | Yes | Implemented (7-point scale) |
| Quality Gates | Basic | Gates 1 and 4 implemented |
| Risk tiers | Yes | Not implemented |
| Sampling audits | High sampling | Not implemented |
| User system | Demo only | Anonymous only |
Key Principles
Across All Levels:
- AKEL makes all publication decisions
- No human approval gates
- Humans monitor metrics and improve algorithms
- Risk tiers guide audit priorities, not publication
- Sampling audits inform improvements
11. Automation Roadmap Diagram
Automation Roadmap
```mermaid
graph LR
    subgraph QA[Quality Assurance Evolution]
        QA1[Initial High Sampling]
        QA2[Intermediate Strategic]
        QA3[Mature Anomaly-Triggered]
        QA1 --> QA2
        QA2 --> QA3
    end
    subgraph POC[POC CURRENT]
        POC_F[POC Features]
    end
    subgraph R05[Release 0.5]
        R05_F[Limited Production]
    end
    subgraph R10[Release 1.0]
        R10_F[Full Production]
    end
    subgraph Future[Future]
        Future_F[Distributed Intelligence]
    end
    POC_F --> R05_F
    R05_F --> R10_F
    R10_F --> Future_F
```
Phase Details
POC (Current v2.6.33)
- All content analyzed
- Basic AKEL Processing
- No risk tiers yet
- No sampling audits
Release 0.5 (Planned)
- All tiers (A/B/C) auto-published
- Risk Labels Active
- Contradiction Detection
- Sampling-Based QA
Release 1.0 (Planned)
- Comprehensive AI Publication
- Strategic Audits Only
- Federated Nodes Beta
- Cross-Node Data Sharing
- Mature Algorithm Performance
Future (V2.0+)
- Advanced Pattern Detection
- Global Contradiction Network
- Minimal Human QA
- Full Federation
Philosophy
Automation Philosophy: At all stages, AKEL publishes automatically. Humans improve algorithms, not review content.
Sampling Rates: Start higher for learning, reduce as confidence grows.
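The "start higher for learning, reduce as confidence grows" rule can be modelled as a geometric decay toward a floor. The initial rate, floor, and decay factor are invented for illustration.

```python
def sampling_rate(audit_round: int, initial: float = 0.40,
                  floor: float = 0.05, decay: float = 0.8) -> float:
    """Geometric decay of the audit sampling rate, never dropping below the floor."""
    return max(floor, initial * decay ** audit_round)

print(sampling_rate(0), sampling_rate(10))
```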
12. Manual vs Automated Matrix
```mermaid
graph TD
    subgraph Automated[Automated by AKEL]
        A1[Claim Evaluation]
        A2[Quality Assessment]
        A3[Content Management]
    end
    subgraph Human[Human Responsibilities]
        H1[Algorithm Improvement]
        H2[Policy Governance]
        H3[Exception Handling]
        H4[Strategic Decisions]
    end
```
Automated by AKEL
| Function | Details | Status |
|---|---|---|
| Claim Evaluation | Evidence extraction, source scoring, verdict generation, risk classification, publication | Implemented |
| Quality Assessment | Contradiction detection, confidence scoring, pattern recognition, anomaly flagging | Partial (Gates 1 and 4) |
| Content Management | KeyFactor generation, evidence linking, source tracking | Implemented |
Human Responsibilities
| Function | Details | Status |
|---|---|---|
| Algorithm Improvement | Monitor metrics, identify issues, propose fixes, test, deploy | Via code changes |
| Policy Governance | Set criteria, define risk tiers, establish thresholds, update guidelines | Not implemented (env vars only) |
| Exception Handling | Review flagged items, handle abuse, address safety, manage legal | Not implemented |
| Strategic Decisions | Budget, hiring, major policy, partnerships | N/A |
Key Principles
Never Manual:
- Individual claim approval
- Routine content review
- Verdict overrides (fix algorithm instead)
- Publication gates
Key Principle: AKEL handles all content decisions. Humans improve the system, not the data.