Automation Philosophy

Core Principle: AKEL is primary. Humans monitor, improve, and handle exceptions.

1. The Principle

FactHarbor is AI-first, not AI-assisted.
This is not:

  • ❌ "AI helps humans make better decisions"
  • ❌ "Humans review AI recommendations"
  • ❌ "AI drafts, humans approve"

This is:

  • ✅ "AI makes decisions, humans improve the AI"
  • ✅ "Humans monitor metrics, not individual outputs"
  • ✅ "Fix the system, not the data"

2. Why This Matters

2.1 Scalability

Human review doesn't scale:

  • One person can carefully review 100 claims per day
  • FactHarbor aims to process millions of claims
  • That would require 10,000+ reviewers
  • Consistency across that many reviewers is impossible to maintain

Algorithmic processing scales:

  • AKEL processes 1 claim or 1 million claims with the same consistency
  • Cost per claim approaches zero at scale
  • Quality improves with more data
  • 24/7 availability

2.2 Consistency

Human judgment varies:

  • Different reviewers apply criteria differently
  • The same reviewer makes different decisions on different days
  • Judgment is influenced by fatigue, mood, and recent examples
  • Unconscious biases affect decisions

Algorithmic processing is consistent:

  • Same input → same output, always
  • Rules applied uniformly
  • No mood, fatigue, or bias
  • Predictable behavior

2.3 Transparency

Human judgment is opaque:

  • "I just know" - hard to explain
  • Expertise lives in individual heads
  • Thought processes can't be audited
  • Difficult to improve systematically

Algorithmic processing is transparent:

  • Code can be audited
  • Parameters are documented
  • Decision logic is explicit
  • Changes are tracked
  • Can test "what if" scenarios

2.4 Improvement

Improving human judgment:

  • Train each person individually
  • Hope training transfers consistently
  • Subjective quality assessment
  • Slow iteration

Improving algorithms:

  • Change code once, affects all decisions
  • Test on historical data before deploying
  • Measure improvement objectively
  • Rapid iteration (deploy multiple times per week)

3. The Human Role

Humans in FactHarbor are system architects, not content judges.

3.1 What Humans Do

Monitor system performance:

  • Watch dashboards showing aggregate metrics
  • Identify when metrics fall outside acceptable ranges
  • Spot patterns in errors or edge cases
  • Track user feedback trends
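
A minimal Python sketch of what "metrics outside acceptable ranges" can look like in practice is shown below. The metric names, ranges, and values are illustrative assumptions, not FactHarbor's actual configuration.

    # Hypothetical sketch: compare aggregate daily metrics against
    # human-defined acceptable ranges and report anything out of bounds.
    # Metric names and ranges are illustrative assumptions.

    ACCEPTABLE_RANGES = {
        "median_processing_seconds": (0.0, 30.0),
        "error_rate":                (0.0, 0.02),
        "mean_confidence":           (0.55, 0.95),
        "negative_feedback_rate":    (0.0, 0.10),
    }

    def out_of_range_metrics(daily_metrics):
        """Return human-readable alerts for metrics outside their range."""
        alerts = []
        for name, (low, high) in ACCEPTABLE_RANGES.items():
            value = daily_metrics.get(name)
            if value is None:
                alerts.append(f"{name}: missing from today's metrics")
            elif not (low <= value <= high):
                alerts.append(f"{name}: {value:.3f} outside [{low}, {high}]")
        return alerts

    today = {
        "median_processing_seconds": 42.0,   # too slow -> alert
        "error_rate": 0.01,
        "mean_confidence": 0.71,
        "negative_feedback_rate": 0.04,
    }
    for alert in out_of_range_metrics(today):
        print("ALERT:", alert)

The point is that the acceptable ranges are an explicit, reviewable artifact: humans change the ranges, not individual outputs.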

Improve algorithms and policies:

  • Analyze systematic errors
  • Propose algorithm improvements
  • Update policies based on learning
  • Test changes before deployment
  • Document learnings

Handle exceptions:

  • Items AKEL explicitly flags for review
  • System gaming attempts
  • Abuse and harassment
  • Legal/safety emergencies

Govern the system:

  • Set risk tier policies
  • Define acceptable performance ranges
  • Allocate resources
  • Make strategic decisions

3.2 What Humans Do NOT Do

Review individual claims for correctness:

  • ❌ "Let me check if this verdict is right"
  • ❌ "I'll approve these before publication"
  • ❌ "This needs human judgment"

Override AKEL decisions routinely:

  • ❌ "AKEL got this wrong, I'll fix it"
  • ❌ "I disagree with this verdict"
  • ❌ "This source should be rated higher"

Act as approval gates:

  • ❌ "All claims must be human-approved"
  • ❌ "High-risk claims need review"
  • ❌ "Quality assurance before publication"

Why not? Because acting as reviewer or approval gate defeats the purpose of an AI-first system and doesn't scale.

4. When Humans Intervene

4.1 Legitimate Interventions

Humans should intervene when:

AKEL explicitly flags for review:

  • AKEL's confidence is too low
  • Detected potential manipulation
  • Unusual pattern requiring human judgment
  • Clear policy: "Flag if confidence <X"
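
A "flag if confidence <X" rule can be captured in a few lines of Python. The threshold value below is an illustrative assumption; the real value is a human-governed policy choice, not this sketch.

    # Hypothetical sketch of the "flag if confidence < X" policy.
    # The threshold is an illustrative assumption set by human governance.

    CONFIDENCE_THRESHOLD = 0.70  # "X": set once as policy, applied uniformly

    def flag_for_review(confidence):
        """True means AKEL routes the item to the human exception queue."""
        return confidence < CONFIDENCE_THRESHOLD

    print(flag_for_review(0.62))  # True  -> human handles the exception
    print(flag_for_review(0.91))  # False -> published without human involvement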

System metrics show problems:

  • Processing time suddenly increases
  • Error rate jumps
  • Confidence distribution shifts
  • User feedback becomes negative
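
One way such problems surface automatically is by comparing today's value of a metric against its recent baseline. A minimal Python sketch, with made-up numbers and an assumed three-sigma tolerance:

    # Hypothetical sketch: detect a sudden shift in a daily metric by
    # comparing today's value against a trailing baseline window.
    # History values and tolerance are illustrative assumptions.

    from statistics import mean, pstdev

    def shifted(history, today, tolerance=3.0):
        """True if today's value is more than `tolerance` standard
        deviations away from the trailing baseline."""
        baseline = mean(history)
        spread = pstdev(history) or 1e-9   # avoid division by zero on flat history
        return abs(today - baseline) / spread > tolerance

    error_rates = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013, 0.011]
    print(shifted(error_rates, 0.011))  # False: within normal variation
    print(shifted(error_rates, 0.045))  # True:  error rate jumped, investigate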

Systematic bias detected:

  • Metrics show pattern of unfairness
  • Particular domains consistently scored oddly
  • Source types systematically mis-rated

Legal/safety emergency:

  • Legal takedown required
  • Imminent harm to individuals
  • Security breach
  • Compliance violation

4.2 Illegitimate Interventions

Humans should NOT intervene for:

"I disagree with this verdict":

  • Problem: Your opinion vs AKEL's analysis
  • Solution: If AKEL is systematically wrong, fix the algorithm
  • Action: Gather data, propose algorithm improvement

"This source should rank higher":

  • Problem: Subjective preference
  • Solution: Fix scoring rules systematically
  • Action: Analyze why AKEL scored it lower, adjust scoring algorithm if justified

"Manual quality gate":

  • Problem: Creates bottleneck, defeats automation
  • Solution: Improve AKEL's quality to not need human gate
  • Action: Set quality thresholds in algorithm, not human review

"I know better than the algorithm":

  • Problem: Doesn't scale, introduces bias
  • Solution: Teach the algorithm what you know
  • Action: Update training data, adjust parameters, document expertise in policy

5. Fix the System, Not the Data

Fundamental principle: When AKEL makes mistakes, improve AKEL, don't fix individual outputs.

5.1 Why?

Fixing individual outputs:

  • Doesn't prevent future similar errors
  • Doesn't scale (too many outputs)
  • Creates inconsistency
  • Hides systematic problems

Fixing the system:

  • Prevents future similar errors
  • Scales automatically
  • Maintains consistency
  • Surfaces and resolves root causes

5.2 Process

When you see a "wrong" AKEL decision:

Document it:

  • What was the claim?
  • What did AKEL decide?
  • What should it have decided?
  • Why do you think it's wrong?

Investigate:

  • Is this a one-off, or a pattern?
  • Check similar claims - same issue?
  • What caused AKEL to decide this way?
  • What rule/parameter needs changing?

Propose systematic fix:

  • Algorithm change?
  • Policy clarification?
  • Training data update?
  • Parameter adjustment?

Test the fix:

  • Run on historical data
  • Does it fix this case?
  • Does it break other cases?
  • What's the overall impact?
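
A backtest like the Python sketch below (the case format and verdict labels are hypothetical, not FactHarbor's actual data model) turns "does it fix this case, does it break others" into an objective measurement rather than an opinion.

    # Hypothetical sketch: compare old vs. new behaviour on labelled
    # historical cases before deploying a fix.

    def backtest(cases, old_verdict_fn, new_verdict_fn):
        """Count cases the fix repairs, breaks, or leaves unchanged."""
        fixed = broken = unchanged = 0
        for claim, expected in cases:
            old_ok = old_verdict_fn(claim) == expected
            new_ok = new_verdict_fn(claim) == expected
            if new_ok and not old_ok:
                fixed += 1        # the fix repairs this case
            elif old_ok and not new_ok:
                broken += 1       # regression: the fix breaks this case
            else:
                unchanged += 1
        return {"fixed": fixed, "broken": broken, "unchanged": unchanged}

    # Toy verdict functions, purely illustrative:
    cases = [("claim A", "supported"), ("claim B", "refuted"), ("claim C", "unclear")]
    old = lambda claim: "supported"
    new = lambda claim: "refuted" if claim == "claim B" else "supported"
    print(backtest(cases, old, new))  # {'fixed': 1, 'broken': 0, 'unchanged': 2}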

Deploy and monitor:

  • Gradual rollout
  • Watch metrics closely
  • Gather feedback
  • Iterate if needed
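
Gradual rollout can be as simple as routing a fixed percentage of items through the new version, deterministically, so the same claim always takes the same path and old-vs-new metrics stay comparable. A hypothetical Python sketch; the claim IDs and percentage are assumptions:

    # Hypothetical sketch of a gradual rollout using stable hashing.

    import hashlib

    def use_new_version(claim_id, rollout_percent):
        """Deterministically assign a claim to the new version based on its id."""
        digest = hashlib.sha256(claim_id.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % 100           # stable bucket in 0..99
        return bucket < rollout_percent

    # Start small (e.g. 5%), widen only if metrics hold steady:
    for cid in ("claim-001", "claim-002", "claim-003"):
        print(cid, "->", "new" if use_new_version(cid, 5) else "old")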

6. Balancing Automation and Human Values

6.1 Algorithms Embody Values

Important: Automation doesn't mean "value-free".
Algorithms encode human values:

  • Which evidence types matter most?
  • How much weight to peer review?
  • What constitutes "high risk"?
  • When to flag for human review?

These are human choices, implemented in code.

6.2 Human Governance of Automation

Humans set:

  • ✅ Risk tier policies (what's high-risk?)
  • ✅ Evidence weighting (what types of evidence matter?)
  • ✅ Source scoring criteria (what makes a source credible?)
  • ✅ Moderation policies (what's abuse?)
  • ✅ Bias mitigation strategies

AKEL applies:

  • ✅ These policies consistently
  • ✅ At scale
  • ✅ Transparently
  • ✅ Without subjective variation
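
Concretely, "humans set, AKEL applies" means policy lives in explicit, versioned configuration rather than in anyone's head. The Python sketch below is hypothetical; every tier, weight, and threshold shown is an illustrative assumption, not FactHarbor's real policy.

    # Hypothetical sketch: human-set policy as explicit, versioned data
    # that AKEL applies uniformly.

    POLICY = {
        "version": "2025-12-01",
        "risk_tiers": {
            "high":   {"domains": ["health", "elections"], "flag_below_confidence": 0.85},
            "normal": {"domains": ["*"],                   "flag_below_confidence": 0.70},
        },
        "evidence_weights": {
            "peer_reviewed_study": 1.0,
            "official_statistics": 0.9,
            "news_report":         0.5,
            "anonymous_post":      0.1,
        },
    }

    def flag_threshold(domain):
        """Look up the confidence threshold the policy assigns to a domain."""
        high = POLICY["risk_tiers"]["high"]
        if domain in high["domains"]:
            return high["flag_below_confidence"]
        return POLICY["risk_tiers"]["normal"]["flag_below_confidence"]

    print(flag_threshold("health"))   # 0.85 -> stricter flagging for high-risk domains
    print(flag_threshold("sports"))   # 0.7  -> default tier

Changing a value here is a governance decision that can be reviewed, tested on historical data, and rolled back; it affects every future decision consistently.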

6.3 Continuous Value Alignment

Ongoing process:

  • Monitor: Are outcomes aligned with values?
  • Analyze: Where do values and outcomes diverge?
  • Adjust: Update policies or algorithms
  • Test: Validate alignment improved
  • Repeat: Values alignment is never "done"

7. Cultural Implications

7.1 Mindset Shift Required

  • From: "I'm a content expert who reviews claims" → To: "I'm a system architect who improves algorithms"
  • From: "Good work means catching errors" → To: "Good work means preventing errors systematically"
  • From: "I trust my judgment" → To: "I make my judgment codifiable and testable"

7.2 New Skills Needed

Less emphasis on:

  • Individual content judgment
  • Manual review skills
  • Subjective expertise application

More emphasis on:

  • Data analysis and metrics interpretation
  • Algorithm design and optimization
  • Policy formulation
  • Testing and validation
  • Documentation and knowledge transfer

7.3 Job Satisfaction Sources

Satisfaction comes from:

  • ✅ Seeing metrics improve after your changes
  • ✅ Building systems that help millions
  • ✅ Solving systematic problems elegantly
  • ✅ Continuous learning and improvement
  • ✅ Transparent, auditable impact

Not from:

  • ❌ Being the expert who makes final call
  • ❌ Manual review and approval
  • ❌ Gatekeeping
  • ❌ Individual heroics

8. Trust and Automation

8.1 Building Trust in AKEL

Users trust AKEL when:

  • Transparent: How decisions are made is documented
  • Consistent: Same inputs → same outputs
  • Measurable: Performance metrics are public
  • Improvable: Clear process for getting better
  • Governed: Human oversight of policies, not outputs

8.2 What Trust Does NOT Mean

Trust in automation ≠:

  • ❌ "Never makes mistakes" (impossible)
  • ❌ "Better than any human could ever be" (unnecessary)
  • ❌ "Beyond human understanding" (must be understandable)
  • ❌ "Set it and forget it" (requires continuous improvement)

Trust in automation =:

  • ✅ Mistakes are systematic, not random
  • ✅ Mistakes can be detected and fixed systematically
  • ✅ Performance continuously improves
  • ✅ Decision process is transparent and auditable

9. Edge Cases and Exceptions

9.1 Some Things Still Need Humans

AKEL flags for human review when:

  • Confidence below threshold
  • Detected manipulation attempt
  • Novel situation not seen before
  • Explicit policy requires human judgment

Humans handle:

  • Items AKEL flags
  • Not routine review
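
When AKEL does flag an item, recording a structured reason keeps the exception useful for the learning loop described in 9.2. A hypothetical Python sketch; the reason names and the in-memory queue are assumptions:

    # Hypothetical sketch: route flagged items to a human exception queue
    # and keep the reason so exceptions can later be analysed in aggregate.

    from collections import Counter

    exception_queue = []   # stands in for a real queue or store

    def flag_exception(claim_id, reason, details=""):
        """Hand an item to humans and record why, for later analysis."""
        exception_queue.append({"claim_id": claim_id, "reason": reason, "details": details})

    flag_exception("claim-101", "low_confidence", "0.41 < 0.70")
    flag_exception("claim-102", "manipulation_suspected", "coordinated submissions")
    flag_exception("claim-103", "low_confidence", "0.55 < 0.70")

    # Which exception type should be engineered away next?
    print(Counter(item["reason"] for item in exception_queue))
    # Counter({'low_confidence': 2, 'manipulation_suspected': 1})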

9.2 Learning from Exceptions

When humans handle an exception:

  1. Resolve the immediate case
  2. Document: What made this exceptional?
  3. Analyze: Could AKEL have handled this?
  4. Improve: Update AKEL to handle similar cases
  5. Monitor: Did exception rate decrease?

Goal: Fewer exceptions over time as AKEL learns.

Remember: AKEL is primary. You improve the SYSTEM. The system improves the CONTENT.

10. Related Pages