Manual vs Automated matrix
Manual vs Automated matrix (Mermaid diagram):
```mermaid
graph TD
    subgraph "Automated by AKEL"
        A1["Claim Evaluation<br/>- Evidence extraction<br/>- Source scoring<br/>- Verdict generation<br/>- Risk classification<br/>- Publication"]
        A2["Quality Assessment<br/>- Contradiction detection<br/>- Confidence scoring<br/>- Pattern recognition<br/>- Anomaly flagging"]
        A3["Content Management<br/>- Scenario generation<br/>- Evidence linking<br/>- Source tracking<br/>- Version control"]
    end
    subgraph "Human Responsibilities"
        H1["Algorithm Improvement<br/>- Monitor performance metrics<br/>- Identify systematic issues<br/>- Propose fixes<br/>- Test improvements<br/>- Deploy updates"]
        H2["Policy Governance<br/>- Set evaluation criteria<br/>- Define risk tiers<br/>- Establish thresholds<br/>- Update guidelines"]
        H3["Exception Handling<br/>- Review AKEL-flagged items<br/>- Handle abuse/manipulation<br/>- Address safety concerns<br/>- Manage legal issues"]
        H4["Strategic Decisions<br/>- Budget and resources<br/>- Hiring and roles<br/>- Major policy changes<br/>- Partnership agreements"]
    end
    style A1 fill:#c7e5ff
    style A2 fill:#c7e5ff
    style A3 fill:#c7e5ff
    style H1 fill:#ffe5cc
    style H2 fill:#ffe5cc
    style H3 fill:#ffe5cc
    style H4 fill:#ffe5cc
```
Key Principle: AKEL handles all content decisions. Humans improve the system, not the data.
Never Manual:
- Individual claim approval
- Routine content review
- Verdict overrides (fix the algorithm instead; sketched below)
- Publication gates
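The "fix the algorithm instead" rule can be sketched the same way. This is a hedged illustration under the same hypothetical names as above, not the actual AKEL interface: the codebase simply has no per-claim override function, so correcting a bad verdict means shipping an improved policy or evaluation function and letting AKEL re-evaluate every claim.

```python
# Minimal sketch of the "fix the algorithm instead" rule, under the same
# assumptions as the sketch above (hypothetical names, not real AKEL interfaces).
# Note what is missing: there is no override(claim_id, new_label) entry point.
# A wrong verdict is treated as a systemic issue, so the remedy is an improved
# policy or evaluation function, followed by automated re-evaluation.
from typing import Callable


def redeploy(evaluate: Callable, policy, claims: dict) -> dict:
    """Human work ends at shipping the improved `evaluate` / `policy`;
    AKEL re-runs the whole corpus with no per-item human gate."""
    return {cid: evaluate(cid, scores, policy) for cid, scores in claims.items()}


# Usage with the earlier sketch's names (also hypothetical):
#   fixed = redeploy(evaluate_claim, Policy(publish_threshold=0.90),
#                    {"c-1": [0.9, 0.8, 0.95]})
```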