Contributor Processes

1. Purpose

This page explains how contributors improve the system that evaluates claims, not the claims themselves.
Key Principle: AKEL makes content decisions. Contributors improve the algorithms, policies, and infrastructure that enable AKEL to make better decisions.

2. What Contributors Do

Contributors work on system improvements, not content review:

  • Algorithm improvements: Better evidence detection, improved source scoring, enhanced contradiction detection
  • Policy proposals: Risk tier definitions, domain-specific rules, moderation criteria
  • Infrastructure: Performance optimization, scaling improvements, monitoring tools
  • Documentation: User guides, API docs, architecture documentation
  • Testing: A/B tests, regression tests, performance benchmarks

3. What Contributors Do NOT Do

  • Review individual claims for correctness - That's AKEL's job
  • Override AKEL verdicts - Fix the algorithm, not the output
  • Manually adjust source scores - Improve scoring rules systematically
  • Act as approval gates - Defeats the purpose of automation
  • Make ad-hoc content decisions - All content decisions must be algorithmic

If you think AKEL made a mistake: Don't fix that one case. Fix the algorithm so it handles all similar cases correctly.

4. Contributor Journey

4.1 Visitor

  • Reads documentation
  • Explores repositories
  • May open issues reporting bugs or suggesting improvements

4.2 New Contributor

  • First contributions: Documentation fixes, clarifications, minor improvements
  • Learns: System architecture, RFC process, testing procedures
  • Builds: Understanding of FactHarbor principles

4.3 Regular Contributor

  • Contributes regularly to system improvements
  • Follows project rules and RFC process
  • Track record of quality contributions

4.4 Trusted Contributor

  • Extensive track record of high-quality work
  • Deep understanding of system architecture
  • Can review others' contributions
  • Participates in technical decisions

4.5 Maintainer

  • Approves system changes within domain
  • Is the Technical Coordinator, or is designated by them
  • Authority over specific system components
  • Accountable for system performance in domain

4.6 Moderator (Separate Track)

  • Handles AKEL-flagged escalations
  • Focuses on abuse, manipulation, system gaming
  • Proposes detection improvements
  • Does NOT review content for correctness

4.7 Contributor Roles and Trust Levels

The following describes how people can participate in FactHarbor and how responsibilities grow with trust and experience:

  1. Visitor – explores the platform, reads documentation, may raise questions.
  2. New Contributor – submits first improvements (typo fixes, small clarifications, new issues).
  3. Contributor – contributes regularly and follows project conventions.
  4. Trusted Contributor – has a track record of high-quality work and reliable judgement.
  5. Reviewer – reviews changes for correctness, neutrality, and process compliance.
  6. Moderator – focuses on behaviour, tone, and conflict moderation.
  7. Domain Expert (optional) – offers domain expertise without changing governance authority.

4.8 Principles

  • Low barrier to entry for new contributors.
  • Transparent criteria for gaining and losing responsibilities.
  • Clear separation between content quality review and behavioural moderation.
  • Documented processes for escalation and appeal.

4.9 Processes

Typical contributor processes include:

  • proposal and review of documentation or code changes
  • reporting and triaging issues or suspected errors
  • moderation of discussions and conflict resolution
  • onboarding support for new contributors.
    Details of the process steps are aligned with the Open Source Model and Licensing and Decision Processes pages.

5. System Improvement Workflow

5.1 Identify Issue

Sources:

  • Performance metrics dashboard shows anomaly
  • User feedback reveals pattern
  • AKEL processing logs show systematic error
  • Code review identifies technical debt
    Key: Focus on PATTERNS, not individual cases.

5.2 Diagnose Root Cause

Analysis methods:

  • Run experiments in test environment
  • Analyze AKEL decision patterns
  • Review algorithm parameters
  • Check training data quality
  • Profile performance bottlenecks
    Output: Clear understanding of systematic issue.
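
As an illustration of pattern-level diagnosis, the sketch below groups AKEL decision-log records by source domain and ranks domains by verdict error rate. The JSONL format and the field names (source_domain, verdict, ground_truth) are assumptions made for the example, not the real log schema.

```
# Hedged sketch: look for systematic error patterns in AKEL decision logs.
# The JSONL format and field names (source_domain, verdict, ground_truth)
# are illustrative assumptions, not the real schema.
import json
from collections import defaultdict

def error_rate_by_domain(log_path: str) -> dict:
    """Return the fraction of mismatched verdicts per source domain."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            domain = record["source_domain"]
            totals[domain] += 1
            if record["verdict"] != record["ground_truth"]:
                errors[domain] += 1
    return {domain: errors[domain] / totals[domain] for domain in totals}

if __name__ == "__main__":
    rates = error_rate_by_domain("akel_decisions.jsonl")
    # A domain with a clearly elevated error rate points to a systematic
    # issue worth an RFC, not to individual verdicts worth overriding.
    for domain, rate in sorted(rates.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{domain}: {rate:.1%}")
```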

5.3 Propose Solution (RFC)

Create Request for Comments (RFC):
RFC Template:
```
Problem Statement
What systematic issue exists? What metrics show it?

Proposed Solution
What specific changes to algorithm/policy/infrastructure?

Alternatives Considered
What other approaches were evaluated? Why not chosen?

Trade-offs
What are the downsides? What metrics might worsen?

Success Metrics
How will we know this works? What metrics will improve?

Testing Plan
How will this be validated before full deployment?

Rollback Plan
If this doesn't work, how do we revert?
```

5.4 Community Discussion

RFC review period: 7 days or longer, depending on the change's impact
Participants:

  • Other contributors comment
  • Maintainers review for feasibility
  • Technical Coordinator reviews architectural impact
  • Governing Team reviews policy implications
    Goal: Surface concerns, improve proposal, build consensus

5.5 Test & Validate

Required before approval:

  • ✅ Deploy to test environment
  • ✅ Run on historical data (regression test)
  • ✅ Measure impact on key metrics
  • ✅ A/B testing if feasible
  • ✅ Document results

Pass criteria:
  • Solves stated problem
  • Doesn't break existing functionality
  • Metrics improve or remain stable
  • No unacceptable trade-offs
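
A minimal sketch of what a regression test on historical data could look like is shown below (pytest style). The helper names (load_historical_cases, evaluate_claim) and the accuracy threshold are hypothetical placeholders; the real harness and pass criteria are defined in the repository and the RFC.

```
# Hedged sketch of a regression test on historical data (pytest style).
# load_historical_cases and evaluate_claim are hypothetical placeholders;
# the accuracy threshold is an assumption for illustration only.

def load_historical_cases() -> list:
    # Placeholder: would normally read archived claims with verdicts
    # that were already validated before this change.
    return [
        {"claim": "Example claim A", "expected_verdict": "supported"},
        {"claim": "Example claim B", "expected_verdict": "supported"},
    ]

def evaluate_claim(claim: str) -> str:
    # Placeholder for the candidate algorithm under test.
    return "supported"

def test_no_regression_on_historical_verdicts():
    cases = load_historical_cases()
    matches = sum(
        1 for case in cases
        if evaluate_claim(case["claim"]) == case["expected_verdict"]
    )
    accuracy = matches / len(cases)
    assert accuracy >= 0.95, f"historical accuracy dropped to {accuracy:.1%}"
```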

5.6 Review & Approval

Review by:

  • Technical changes: Technical Coordinator (or designated Maintainer)
  • Policy changes: Governing Team (consent-based decision)
  • Infrastructure: Technical Coordinator
  • Documentation: Community Coordinator

Approval criteria:
  • Solves problem effectively
  • Test results positive
  • No principled objections (for consent-based decisions)
  • Aligns with FactHarbor principles

5.7 Deploy & Monitor

Deployment strategy:

  • Gradual rollout (canary deployment)
  • Monitor key metrics closely
  • Be ready to roll back if problems appear
  • Document the deployment

Monitoring period: intensive at first, then ongoing

Success indicators:
  • Target metrics improve
  • No unexpected side effects
  • User feedback positive
  • System stability maintained
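
As a sketch of the "ready to roll back" idea, the snippet below compares canary metrics against the baseline and flags a rollback when any metric degrades beyond a tolerance. The metric names, the tolerance value, and the "higher is better" assumption are illustrative only; real values come from the monitoring stack and the RFC's success metrics.

```
# Hedged sketch of a canary health check during gradual rollout.
# Metric names, the tolerance, and "higher is better" are assumptions.

def should_rollback(baseline: dict, canary: dict, tolerance: float = 0.05) -> bool:
    """Flag rollback if any canary metric is relatively worse than baseline
    by more than the tolerance (all metrics assumed higher-is-better)."""
    for name, base_value in baseline.items():
        if base_value == 0:
            continue
        canary_value = canary.get(name, 0.0)
        if (base_value - canary_value) / base_value > tolerance:
            return True
    return False

if __name__ == "__main__":
    baseline = {"verdict_accuracy": 0.93, "claims_per_minute": 120.0}
    canary = {"verdict_accuracy": 0.94, "claims_per_minute": 118.0}
    print("rollback" if should_rollback(baseline, canary) else "continue rollout")
```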

5.8 Evaluate & Iterate

Post-deployment review:

  • Did metrics improve as expected?
  • Any unexpected effects?
  • What did we learn?
  • What should we do differently next time?
    Document learnings: Update RFC with actual outcomes.

6. Contribution Types in Detail

6.1 Algorithm Improvements

Examples:

  • Better evidence extraction from web pages
  • Improved source reliability scoring
  • Enhanced contradiction detection
  • Faster claim parsing
  • More accurate risk classification
    Process: RFC → Test → Review → Deploy → Monitor
    Skills needed: Python, ML/AI, data analysis, testing
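
To make "improve scoring rules systematically" concrete, here is a hedged sketch of a source reliability score computed from declared, reviewable inputs and weights. The inputs and weights are invented for illustration and are not FactHarbor's actual scoring model; the point is that a contributor changes the rule via an RFC rather than editing any single source's score.

```
# Hedged sketch of a rule-based source reliability score. The inputs and
# weights are invented for illustration; the real model lives in the
# FactHarbor codebase and changes to it go through an RFC.

def source_reliability(source: dict, citation_weight: float = 0.3,
                       correction_weight: float = 0.2) -> float:
    """Score a source between 0 and 1 from declared, reviewable inputs."""
    citation_score = min(source.get("independent_citations", 0) / 50, 1.0)
    correction_score = 1.0 if source.get("issues_corrections") else 0.0
    base = source.get("editorial_process_score", 0.5)
    score = ((1 - citation_weight - correction_weight) * base
             + citation_weight * citation_score
             + correction_weight * correction_score)
    return max(0.0, min(1.0, score))

# A contributor who believes a source is mis-scored proposes a change to
# the weights or inputs; nobody edits one source's stored score by hand.
example = {"independent_citations": 12, "issues_corrections": True,
           "editorial_process_score": 0.7}
print(round(source_reliability(example), 3))  # 0.622 with the defaults above
```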

6.2 Policy Proposals

Examples:

  • Risk tier definition refinements
  • New domain-specific guidelines
  • Moderation criteria updates
  • Community behavior standards
    Process: RFC → Community discussion → Governing Team consent → Deploy → Monitor
    Skills needed: Domain knowledge, policy writing, ethics
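
Policy proposals typically end up as data the algorithm consumes. The sketch below shows risk tier definitions expressed as a reviewable table plus a tiny classifier that applies them; the tier names, topics, and thresholds are invented for illustration and are not actual FactHarbor policy.

```
# Hedged sketch: risk tiers as a reviewable policy table consumed by code.
# Tier names, topics, and minimum-source counts are invented for
# illustration only.

RISK_TIERS = {
    "high": {"topics": {"health", "elections"}, "min_sources": 3},
    "medium": {"topics": {"finance", "science"}, "min_sources": 2},
    "low": {"topics": set(), "min_sources": 1},  # default tier
}

def classify_risk(topic: str) -> str:
    """Map a claim topic to a risk tier using the declared policy table."""
    for tier, rule in RISK_TIERS.items():
        if topic in rule["topics"]:
            return tier
    return "low"

print(classify_risk("health"))  # -> high
print(classify_risk("sports"))  # -> low
```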

6.3 Infrastructure Improvements

Examples:

  • Database query optimization
  • Caching strategy improvements
  • Monitoring tool enhancements
  • Deployment automation
  • Scaling improvements
    Process: RFC → Test → Technical Coordinator review → Deploy → Monitor
    Skills needed: DevOps, databases, system architecture, performance tuning
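
As one small example of a caching improvement, the sketch below memoises a hypothetical expensive lookup with functools.lru_cache. The function name and cache size are assumptions; a real RFC would also cover invalidation rules and measure the hit rate.

```
# Hedged sketch of a small caching improvement: memoising a hypothetical
# expensive, read-mostly lookup.
from functools import lru_cache

@lru_cache(maxsize=10_000)
def fetch_source_metadata(source_id: str) -> tuple:
    # Placeholder for a slow database or network lookup.
    return (source_id, "metadata")

fetch_source_metadata("example-source")
fetch_source_metadata("example-source")  # second call is served from cache
print(fetch_source_metadata.cache_info())  # hits=1, misses=1, ...
```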

6.4 Documentation

Examples:

  • User guides
  • API documentation
  • Architecture documentation
  • Onboarding materials
  • Tutorial videos
    Process: Draft → Community feedback → Community Coordinator review → Publish
    Skills needed: Technical writing, understanding of FactHarbor

7. Quality Standards

7.1 Code Quality

Required:

  • ✅ Follows project coding standards
  • ✅ Includes tests
  • ✅ Documented (code comments + docs update)
  • ✅ Passes CI/CD checks
  • ✅ Reviewed by maintainer

7.2 Testing Requirements

Algorithm changes:

  • Unit tests
  • Integration tests
  • Regression tests on historical data
  • Performance benchmarks

Policy changes:
  • Validation on test cases
  • Impact analysis on existing claims
  • Edge case coverage
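
For the "impact analysis on existing claims" requirement, a minimal sketch is shown below: it counts how many archived claims would change risk tier under a proposed rule. Both classifier functions and the claim records are placeholders for illustration.

```
# Hedged sketch of an impact analysis for a policy change: count how many
# archived claims would change risk tier under the proposed rule. The
# claim records and both classifiers are placeholders for illustration.

def current_tier(claim: dict) -> str:
    return "high" if claim["topic"] == "health" else "low"

def proposed_tier(claim: dict) -> str:
    return "high" if claim["topic"] in {"health", "elections"} else "low"

def impact_report(claims: list) -> dict:
    changed = sum(1 for c in claims if current_tier(c) != proposed_tier(c))
    return {"total": len(claims), "tier_changes": changed}

claims = [{"topic": "health"}, {"topic": "elections"}, {"topic": "sports"}]
print(impact_report(claims))  # {'total': 3, 'tier_changes': 1}
```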

7.3 Documentation Requirements

All changes must include:

  • Updated architecture docs (if applicable)
  • Updated API docs (if applicable)
  • Migration guide (if breaking change)
  • Changelog entry

8. Handling Disagreements

8.1 Technical Disagreements

Process:

  1. Discuss in RFC comments
  2. Present data/evidence
  3. Consider trade-offs openly
  4. Technical Coordinator makes final decision (or escalates)
  5. Document reasoning
    Principle: Data and principles over opinions

8.2 Policy Disagreements

Process:

  1. Discuss in RFC
  2. Clarify principles at stake
  3. Consider stakeholder impact
  4. Governing Team uses consent-based decision
  5. Document reasoning
    Principle: Consent-based (not consensus) - can you support this even if not perfect?

8.3 Escalation Path

For unresolved issues:

  • Technical → Technical Coordinator → Governing Team
  • Policy → Governing Team → General Assembly (if fundamental)
  • Behavior → Moderator → Governance Steward → Governing Team

9. Behavior Standards

9.1 Expected Behavior

Contributors are expected to:

  • ✅ Assume good faith
  • ✅ Focus on system improvements, not personal opinions
  • ✅ Support decisions once made (even if you disagreed)
  • ✅ Be constructive in criticism
  • ✅ Document your reasoning
  • ✅ Test thoroughly before proposing
  • ✅ Learn from mistakes

9.2 Unacceptable Behavior

Will not be tolerated:

  • ❌ Personal attacks
  • ❌ Harassment or discrimination
  • ❌ Attempting to game the system
  • ❌ Circumventing the RFC process for significant changes
  • ❌ Deploying untested changes to production
  • ❌ Ignoring feedback without explanation

9.3 Enforcement

Process:

  • First offense: Warning + coaching
  • Second offense: Temporary suspension (duration based on severity)
  • Third offense: Permanent ban
    Severe violations (harassment, malicious code): Immediate ban
    Appeal: To Governance Steward, then Governing Team

10. Recognition

Contributors are recognized through:

  • Public acknowledgment in release notes
  • Contribution statistics on profile
  • Special badges for significant contributions
  • Invitation to contributor events
  • Potential hiring opportunities

Not recognized through:
  • Payment (unless contracted separately)
  • Automatic role promotions
  • Special privileges in content decisions (there are none)

11. Getting Started

New contributors should:

  1. Read this page + Organisational Model
  2. Join the community forum
  3. Review open issues labeled "good first issue"
  4. Start with documentation improvements
  5. Learn the RFC process by observing
  6. Make a first contribution
  7. Participate in discussions
  8. Build a track record

Resources:
  • Developer guide: [Coming soon]
  • RFC template: [In repository]
  • Community forum: [Link]
  • Slack/Discord: [Link]

Remember: You improve the SYSTEM. AKEL improves the CONTENT.

12. Related Pages