# Manual vs Automated Matrix
Last modified by Robert Schaub on 2026/02/08 08:31
```mermaid
graph TD
    subgraph Automated[Automated by AKEL]
        A1[Claim Evaluation]
        A2[Quality Assessment]
        A3[Content Management]
    end
    subgraph Human[Human Responsibilities]
        H1[Algorithm Improvement]
        H2[Policy Governance]
        H3[Exception Handling]
        H4[Strategic Decisions]
    end
```
## Automated by AKEL
| Function | Details | Status |
|---|---|---|
| Claim Evaluation | Evidence extraction, source scoring, verdict generation, risk classification, publication | Implemented |
| Quality Assessment | Contradiction detection, confidence scoring, pattern recognition, anomaly flagging | Partial (Gates 1 and 4) |
| Content Management | KeyFactor generation, evidence linking, source tracking | Implemented |
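The automated flow in the table above can be sketched as a single pipeline from evidence to publication with no manual approval step. This is an illustrative sketch only: the names (`ClaimVerdict`, `evaluate_claim`) and the averaging/threshold logic are hypothetical stand-ins, not AKEL's actual API or algorithm.

```python
from dataclasses import dataclass

@dataclass
class ClaimVerdict:
    # Hypothetical result type; AKEL's real data model may differ.
    claim: str
    confidence: float   # 0.0-1.0 confidence from quality assessment
    risk_tier: str      # risk classification, e.g. "low" / "medium" / "high"
    publish: bool       # publication decision, made algorithmically

def evaluate_claim(claim: str, source_scores: list[float]) -> ClaimVerdict:
    """End-to-end automated evaluation: no human approval step anywhere."""
    # Source scoring collapsed to a simple mean for illustration.
    confidence = sum(source_scores) / len(source_scores) if source_scores else 0.0
    # Risk classification from confidence; thresholds are made-up examples.
    risk_tier = "high" if confidence < 0.5 else "medium" if confidence < 0.8 else "low"
    # Publication is gated algorithmically, never by manual review.
    return ClaimVerdict(claim, confidence, risk_tier, publish=confidence >= 0.5)

verdict = evaluate_claim("Example claim", [0.9, 0.8, 0.85])
print(verdict.risk_tier, verdict.publish)  # low True
```

The key property the sketch demonstrates is the "Never Manual" principle below: the verdict and publication decision are pure functions of the evidence, with no human in the loop.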
## Human Responsibilities
| Function | Details | Status |
|---|---|---|
| Algorithm Improvement | Monitor metrics, identify issues, propose fixes, test, deploy | Via code changes |
| Policy Governance | Set criteria, define risk tiers, establish thresholds, update guidelines | Not implemented (env vars only) |
| Exception Handling | Review flagged items, handle abuse, address safety, manage legal | Not implemented |
| Strategic Decisions | Budget, hiring, major policy, partnerships | N/A |
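Since Policy Governance is currently "env vars only", human-set policy reduces to reading thresholds from the environment at startup. A minimal sketch, assuming hypothetical variable names (`AKEL_PUBLISH_THRESHOLD`, etc.) that are not AKEL's actual configuration keys:

```python
import os

def load_policy() -> dict:
    """Humans govern policy by setting environment variables,
    not by touching individual content items."""
    return {
        # All keys and defaults below are illustrative assumptions.
        "publish_threshold": float(os.environ.get("AKEL_PUBLISH_THRESHOLD", "0.5")),
        "high_risk_below": float(os.environ.get("AKEL_HIGH_RISK_BELOW", "0.5")),
        "low_risk_above": float(os.environ.get("AKEL_LOW_RISK_ABOVE", "0.8")),
    }

policy = load_policy()
print(policy)
```

This also shows the limitation the table flags: changing policy requires a redeploy with new environment values, since there is no runtime governance interface yet.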
## Key Principles
**Never Manual:**
- Individual claim approval
- Routine content review
- Verdict overrides (fix algorithm instead)
- Publication gates
**Key Principle:** AKEL handles all content decisions. Humans improve the system, not the data.