Governance
FactHarbor is governed collaboratively, with a clear separation between organizational policy decisions and technical implementation.
1. Governance Structure
```mermaid
graph TD
    A[General Assembly] -->|Elects| B[Governing Team 3-7]
    B -->|Oversees| C[Team Members full-time]
    C -->|Technical Coordinator| D[AKEL & Infrastructure]
    C -->|Community Coordinator| E[Moderators part-time]
    E -->|Moderate| F[Content]
    G[Contributors] -->|Create/Edit| F
    H[Readers] -->|View/Flag| F
    G -->|Earn| I[Reputation Points]
    I -->|Unlocks| J[Permissions]
    style A fill:#e1f5ff
    style B fill:#ffe1f5
    style D fill:#fff5e1
```
Simplified structure: General Assembly → Governing Team → Team Members. Users progress through reputation; a sketch of reputation-gated permissions follows the role list.
- Governing Team – Sets high-level policy, organizational direction, funding priorities
- Lead – Coordinates execution, represents organization publicly
- Core Maintainers – Technical and specification decisions, code/spec review
- Domain Experts – Subject-matter authority in specialized areas
- Community Contributors – Feedback, proposals, and participation in decision-making
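The reputation path in the diagram (contributors earn reputation points that unlock permissions) can be read as a threshold table. A minimal sketch in Python, assuming hypothetical thresholds and permission names; the actual levels are a policy decision:

```python
# A sketch of reputation-gated permissions implied by the diagram above.
# Threshold values and permission names are illustrative assumptions.
PERMISSION_THRESHOLDS = [
    (0, "read"),
    (10, "flag"),
    (100, "edit"),
    (500, "review"),
]

def permissions_for(reputation: int) -> list[str]:
    """Return every permission unlocked at the given reputation score."""
    return [perm for threshold, perm in PERMISSION_THRESHOLDS
            if reputation >= threshold]

print(permissions_for(150))  # ['read', 'flag', 'edit']
```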
2. Decision-Making Levels
2.1 Technical Decisions (Maintainers)
Scope: Architecture, data model, AKEL configuration, quality gates, system performance
Process:
- Proposals discussed in technical forums
- Review by core maintainers
- Consensus-based approval
- Breaking changes require broader community input
- Quality gate adjustments require rationale and audit validation
Examples:
- Adding new quality gate
- Adjusting AKEL parameters
- Modifying audit sampling algorithms
- Database schema changes
2.2 Policy Decisions (Governing Team + Community)
Scope: Risk tier policies, publication rules, content guidelines, ethical boundaries
Process:
- Proposal published for community feedback
- Discussion period (recommendation: minimum 14 days for major changes)
- Governing Team decision with community input
- Transparency in reasoning
- Risk tier policy changes require Expert consultation
Examples:
- Defining Tier A domains
- Setting audit sampling rates
- Content moderation policies
- Community guidelines
2.3 Domain-Specific Decisions (Experts)
Scope: Domain quality standards, source reliability in specialized fields, Tier A content validation
Process:
- Expert consensus in domain
- Documented reasoning
- Review by other experts
- Escalation to Governing Team if unresolved
- Experts set domain-specific audit criteria
Examples:
- Medical claim evaluation standards
- Legal citation requirements
- Scientific methodology thresholds
- Tier A approval criteria by domain
3. AI and Human Roles in Governance
3.1 Human-Only Governance Decisions
The following can never be automated:
- Ethical boundary setting – What content is acceptable, what harm thresholds exist
- Risk tier policy – Which domains are Tier A/B/C (though AKEL can suggest)
- Audit system oversight – Quality standards, sampling strategies, auditor selection
- Dispute resolution – Conflicts between experts, controversial decisions
- Community guidelines enforcement – Bans, suspensions, conflict mediation
- Organizational direction – Mission, vision, funding priorities
3.2 AKEL Advisory Role
AKEL can assist but not decide; the pattern is sketched after this list:
- Suggest risk tier assignments (humans validate)
- Flag content for expert review (humans decide)
- Identify patterns in audit failures (humans adjust policy)
- Propose quality gate refinements (maintainers approve)
- Detect emerging topics needing new policies (Governing Team decides)
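One way to enforce the advisory boundary in code is to make AKEL output a suggestion type that the system refuses to act on without a recorded human decision. A minimal sketch; all type and function names here are hypothetical:

```python
# A sketch of the advisory boundary: AKEL output is wrapped in a Suggestion
# that carries no authority until a named human approves it.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Suggestion:
    kind: str        # e.g. "risk_tier", "expert_review_flag"
    target_id: str   # the content or policy the suggestion applies to
    proposal: str    # what AKEL recommends
    rationale: str   # machine-generated reasoning, retained for audit

@dataclass
class HumanDecision:
    suggestion: Suggestion
    approved: bool = False
    approved_by: Optional[str] = None  # a human identity, never AKEL itself
    reasoning: str = ""                # transparency requirement from 3.3

def apply_suggestion(decision: HumanDecision) -> None:
    # The system refuses to act on a suggestion without human sign-off.
    if not (decision.approved and decision.approved_by):
        raise PermissionError("AKEL suggestions require human approval")
    # ...apply the change and log it for community review
```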
3.3 Transparency Requirement
All governance decisions must be:
- Documented with reasoning
- Published for community visibility
- Reviewable by community members
- Reversible if evidence of error or harm
4. Audit System Governance
4.1 Audit Oversight Committee
Composition: Maintainers, Domain Experts, and Governing Team member(s)
Responsibilities:
- Set quality standards for audit evaluation
- Review audit statistics and trends
- Adjust sampling rates based on performance (see the sketch at the end of this subsection)
- Approve changes to audit algorithms
- Oversee auditor selection and rotation
- Publish transparency reports
Meeting Frequency: regular meetings as needed (recommendation)
Reporting: periodic transparency reports to the community (recommendation)
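As one illustration of adjusting sampling rates based on performance, the committee's rule of thumb could look like the following sketch; the thresholds, multipliers, and bounds are assumptions, not adopted policy:

```python
# A sketch of how the committee might tune per-tier sampling rates from
# recent audit pass rates. Thresholds and bounds are illustrative
# assumptions, not FactHarbor policy.
def adjust_sampling_rate(current_rate: float, pass_rate: float,
                         floor: float = 0.05, ceiling: float = 1.0) -> float:
    """Raise sampling when audits fail often; relax it slowly when they pass."""
    if pass_rate < 0.90:      # assumed trigger for increased scrutiny
        proposed = current_rate * 1.5
    elif pass_rate > 0.98:    # assumed threshold for cautious relaxation
        proposed = current_rate * 0.9
    else:
        proposed = current_rate
    return min(ceiling, max(floor, proposed))

# Example: a tier sampled at 20% with an 85% pass rate rises to 30%.
print(round(adjust_sampling_rate(0.20, 0.85), 2))  # 0.3
```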
4.2 Audit Performance Metrics
Tracked and published (record structure sketched after the list):
- Audit pass/fail rates by tier
- Common failure patterns
- System improvements implemented
- Time to resolution for audit failures
- Auditor performance (anonymized)
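A possible shape for the published record, assuming hypothetical field names and that auditor identifiers are anonymized before export:

```python
# A sketch of the published per-tier metrics record implied by the list
# above. Field names are assumptions; auditor identifiers are anonymized
# before anything is published.
from dataclasses import dataclass, field

@dataclass
class AuditMetricsReport:
    tier: str            # "A", "B", or "C"
    audits_total: int
    audits_passed: int
    common_failure_patterns: list[str] = field(default_factory=list)
    improvements_implemented: list[str] = field(default_factory=list)
    median_hours_to_resolution: float = 0.0
    auditor_pass_rates: dict[str, float] = field(default_factory=dict)  # anonymized id -> rate

    @property
    def pass_rate(self) -> float:
        return self.audits_passed / self.audits_total if self.audits_total else 0.0
```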
4.3 Feedback Loop Governance
Process:
1. Audits identify patterns in AI errors
2. Audit Committee reviews patterns
3. Maintainers propose technical fixes
4. Changes tested in sandbox
5. Community informed of improvements
6. Deployed with monitoring
Escalation (sketched after this list):
- Persistent high failure rates → Pause AI publication in affected tier/domain
- Critical errors → Immediate system review
- Pattern of harm → Policy revision
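The escalation ladder can be expressed as a severity-ordered check. A sketch, with an assumed failure-rate threshold standing in for "persistent high":

```python
# The escalation ladder above as code, checked in severity order. The
# failure-rate threshold is an illustrative assumption.
def escalation_action(tier: str, failure_rate: float,
                      critical_error: bool, harm_pattern: bool) -> str:
    if harm_pattern:                 # pattern of harm -> policy revision
        return "policy_revision"
    if critical_error:               # critical errors -> immediate review
        return "immediate_system_review"
    if failure_rate > 0.25:          # assumed "persistent high" threshold
        return f"pause_ai_publication:tier_{tier}"
    return "continue_monitoring"

print(escalation_action("B", 0.31, False, False))  # pause_ai_publication:tier_B
```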
5. Risk Tier Policy Governance
5.1 Risk Tier Assignment Authority
- AKEL: Suggests initial tier based on domain, keywords, content analysis
- Moderators: Can override AKEL for individual content
- Experts: Set tier policy for their domains
- Governing Team: Approves tier policy changes and resolves tier disputes (precedence sketched below)
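The precedence among these authorities for an individual item resolves from most specific to least: moderator override, then expert domain policy, then AKEL's suggestion. A sketch with hypothetical names:

```python
# A sketch of the authority precedence for one item's tier: a moderator
# override beats the expert domain policy, which beats AKEL's suggestion.
from typing import Optional

def resolve_tier(akel_suggestion: str,
                 domain_policy_tier: Optional[str] = None,
                 moderator_override: Optional[str] = None) -> str:
    if moderator_override:     # moderators may override for individual content
        return moderator_override
    if domain_policy_tier:     # experts set the policy for their domains
        return domain_policy_tier
    return akel_suggestion     # otherwise AKEL's suggestion stands, pending audit

print(resolve_tier("C", domain_policy_tier="B"))  # "B": domain policy wins
```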
5.2 Risk Tier Review Process
Triggers for Review:
- Significant audit failures in a tier
- New emerging topics or domains
- Community flags systematic misclassification
- Expert domain recommendations
- Periodic policy review
Process:
1. Expert domain review (determine whether Tier A, B, or C is appropriate)
2. Community input period (recommendation: sufficient time for feedback)
3. Audit Committee assessment (error patterns in current tier)
4. Governing Team decision
5. Implementation with monitoring period
6. Transparency report on rationale
5.3 Current Tier Assignments (Baseline)
Tier A: Medical, legal, elections, safety/security, major financial decisions
Tier B: Complex science causality, contested policy, historical interpretation with political implications, significant economic impact
Tier C: Established historical facts, simple definitions, well-documented scientific consensus, basic reference info
Note: These are guidelines; edge cases require expert judgment. A configuration sketch of the baseline follows.
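For illustration, the baseline could be held as a simple configuration mapping; the domain keys are assumptions, and anything unmapped routes to expert judgment rather than defaulting to a tier:

```python
# The baseline assignments above as a configuration mapping. A sketch only:
# real classification is by content analysis plus expert judgment, not a
# keyword lookup, and the domain keys here are assumptions.
from typing import Optional

BASELINE_TIERS: dict[str, list[str]] = {
    "A": ["medical", "legal", "elections", "safety_security",
          "major_financial_decisions"],
    "B": ["complex_science_causality", "contested_policy",
          "politicized_historical_interpretation",
          "significant_economic_impact"],
    "C": ["established_historical_facts", "simple_definitions",
          "documented_scientific_consensus", "basic_reference_info"],
}

def baseline_tier(domain: str) -> Optional[str]:
    """Return the baseline tier for a domain, or None to route to experts."""
    for tier, domains in BASELINE_TIERS.items():
        if domain in domains:
            return tier
    return None  # unmapped or edge-case domains go to expert judgment
```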
6. Quality Gate Governance
6.1 Quality Gate Modification Process
Who Can Propose: Maintainers, Experts, Audit Committee
Requirements:
- Rationale based on audit failures or system improvements
- Testing in sandbox environment
- Impact assessment (false positive/negative rates; sketched at the end of this section)
- Community notification before deployment
Approval:
- Technical changes: Maintainer consensus
- Policy changes (e.g., new gate criteria): Governing Team approval
Examples of Governed Changes:
- Adjusting contradiction search scope
- Modifying source reliability thresholds
- Adding new bubble detection patterns
- Changing uncertainty quantification formulas
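The impact assessment required above amounts to measuring a candidate gate against a labeled sandbox corpus. A sketch, assuming a gate is a predicate that flags content and the corpus carries ground-truth labels:

```python
# A sketch of the required impact assessment: run a candidate gate over a
# labeled sandbox corpus and report false positive/negative rates. The
# gate signature and the corpus format are assumptions.
from typing import Callable, Sequence

def gate_impact(gate: Callable[[str], bool],
                cases: Sequence[tuple[str, bool]]) -> dict[str, float]:
    """cases: (content, should_flag) pairs; gate(content) is True when it flags."""
    fp = sum(1 for text, should_flag in cases if gate(text) and not should_flag)
    fn = sum(1 for text, should_flag in cases if not gate(text) and should_flag)
    clean = sum(1 for _, should_flag in cases if not should_flag)
    flaggable = len(cases) - clean
    return {
        "false_positive_rate": fp / clean if clean else 0.0,
        "false_negative_rate": fn / flaggable if flaggable else 0.0,
    }
```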
7. Community Participation
7.1 Open Discussion Forums
- Technical proposals (maintainer-led)
- Policy proposals (Governing Team-led)
- Domain-specific discussions (Expert-led)
- Audit findings and improvements (Audit Committee-led)
7.2 Proposal Mechanism
Anyone can propose; the workflow is sketched after the list:
1. Submit proposal with rationale
2. Community discussion (recommendation: minimum timeframe for feedback)
3. Relevant authority reviews (Maintainers/Governing Team/Experts)
4. Decision with documented reasoning
5. Implementation (if approved)
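The workflow is linear enough to state as a small state machine; the state names and transitions below are a sketch of the five steps, not an implemented API:

```python
# The proposal workflow above as a small state machine. State names and
# allowed transitions are illustrative assumptions.
ALLOWED_TRANSITIONS = {
    "submitted":     {"in_discussion"},
    "in_discussion": {"under_review"},
    "under_review":  {"approved", "rejected"},  # decision with documented reasoning
    "approved":      {"implemented"},
}

def advance(state: str, next_state: str) -> str:
    if next_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"invalid transition: {state} -> {next_state}")
    return next_state

state = advance("submitted", "in_discussion")  # discussion period opens
```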
7.3 Transparency
- All decisions documented in public wiki
- Audit statistics published periodically
- Governing Team meeting minutes published
- Expert recommendations documented
- Community feedback acknowledged
8. Dispute Resolution
8.1 Conflict Between Experts
1. Experts attempt consensus
2. If unresolved, escalate to Governing Team
3. Governing Team appoints neutral expert panel
4. Panel recommendation
5. Governing Team decision (final)
8.2 Conflict Between Maintainers
1. Discussion in maintainer forum
2. Attempt consensus
3. If unresolved, Lead makes decision
4. Community informed of reasoning
8.3 User Appeals
Users can appeal:
- Content rejection decisions
- Risk tier assignments
- Audit outcomes
- Moderation actions
Process:
1. Submit appeal with evidence
2. Reviewed by independent moderator/expert
3. Decision with reasoning
4. Final appeal to Governing Team (if warranted)