Last modified by Robert Schaub on 2025/12/24 20:24
= Contributor Processes =
== 1. Purpose ==
This page explains how contributors improve **the system that evaluates claims**, not the claims themselves.
**Key Principle**: AKEL makes content decisions. Contributors improve the algorithms, policies, and infrastructure that enable AKEL to make better decisions.
== 2. What Contributors Do ==
Contributors work on **system improvements**, not content review:
✅ **Algorithm improvements**: Better evidence detection, improved source scoring, enhanced contradiction detection
✅ **Policy proposals**: Risk tier definitions, domain-specific rules, moderation criteria
✅ **Infrastructure**: Performance optimization, scaling improvements, monitoring tools
✅ **Documentation**: User guides, API docs, architecture documentation
✅ **Testing**: A/B tests, regression tests, performance benchmarks
== 3. What Contributors Do NOT Do ==
❌ **Review individual claims for correctness** - That's AKEL's job
❌ **Override AKEL verdicts** - Fix the algorithm, not the output
❌ **Manually adjust source scores** - Improve scoring rules systematically
❌ **Act as approval gates** - This defeats the purpose of automation
❌ **Make ad-hoc content decisions** - All content decisions must be algorithmic
**If you think AKEL made a mistake**: Don't fix that one case. Fix the algorithm so it handles all similar cases correctly.
== 4. Contributor Journey ==
=== 4.1 Visitor ===
* Reads documentation
* Explores repositories
* May open issues reporting bugs or suggesting improvements
=== 4.2 New Contributor ===
* First contributions: Documentation fixes, clarifications, minor improvements
* Learns: System architecture, RFC process, testing procedures
* Builds: Understanding of FactHarbor principles
=== 4.3 Regular Contributor ===
* Contributes regularly to system improvements
* Follows project rules and the RFC process
* Builds a track record of quality contributions
=== 4.4 Trusted Contributor ===
* Extensive track record of high-quality work
* Deep understanding of system architecture
* Can review others' contributions
* Participates in technical decisions
=== 4.5 Maintainer ===
* Approves system changes within their domain
* Is the Technical Coordinator or designated by them
* Has authority over specific system components
* Is accountable for system performance in their domain
=== 4.6 Moderator (Separate Track) ===
* Handles AKEL-flagged escalations
* Focuses on abuse, manipulation, and system gaming
* Proposes detection improvements
* Does NOT review content for correctness
=== 4.7 Principles and Supporting Processes ===
Participation is designed so that responsibilities grow with trust and experience:
* Low barrier to entry for new contributors.
* Transparent criteria for gaining and losing responsibilities.
* Clear separation between content quality review and behavioural moderation.
* Documented processes for escalation and appeal.
Typical contributor processes include:
* proposal and review of documentation or code changes
* reporting and triaging issues or suspected errors
* moderation of discussions and conflict resolution
* onboarding support for new contributors.
Details of the process steps are aligned with the [[Open Source Model and Licensing>>FactHarbor.Organisation.Open Source Model and Licensing]] and [[Decision Processes>>FactHarbor.Organisation.Decision-Processes]] pages.
== 5. System Improvement Workflow ==
=== 5.1 Identify Issue ===
**Sources**:
* Performance metrics dashboard shows an anomaly
* User feedback reveals a pattern
* AKEL processing logs show a systematic error
* Code review identifies technical debt
**Key**: Focus on PATTERNS, not individual cases.
=== 5.2 Diagnose Root Cause ===
**Analysis methods**:
* Run experiments in a test environment
* Analyze AKEL decision patterns
* Review algorithm parameters
* Check training data quality
* Profile performance bottlenecks
**Output**: A clear understanding of the systematic issue.
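The pattern-over-cases principle can be sketched in a few lines. Everything below is illustrative: `decision_log`, the error categories, and the thresholds are hypothetical, not part of AKEL's actual interface.

```python
from collections import Counter

def find_systematic_errors(decision_log, min_cases=20, max_error_rate=0.05):
    """Flag claim categories whose error rate suggests an algorithmic
    problem, instead of reacting to any single bad verdict.

    decision_log: iterable of (category, was_error) pairs.
    Returns {category: error_rate} for categories worth investigating.
    """
    totals, errors = Counter(), Counter()
    for category, was_error in decision_log:
        totals[category] += 1
        if was_error:
            errors[category] += 1
    flagged = {}
    for category, n in totals.items():
        rate = errors[category] / n
        # Only flag patterns backed by enough cases to be systematic.
        if n >= min_cases and rate > max_error_rate:
            flagged[category] = rate
    return flagged
```

A single wrong verdict in a rarely seen category never trips the gate; a sustained elevated error rate in a well-populated category does, which is exactly the kind of finding an RFC should start from.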
=== 5.3 Propose Solution (RFC) ===
**Create a Request for Comments (RFC)**:
**RFC Template**:
```
## Problem Statement
What systematic issue exists? What metrics show it?
## Proposed Solution
What specific changes to algorithm/policy/infrastructure?
## Alternatives Considered
What other approaches were evaluated? Why were they not chosen?
## Trade-offs
What are the downsides? What metrics might worsen?
## Success Metrics
How will we know this works? What metrics will improve?
## Testing Plan
How will this be validated before full deployment?
## Rollback Plan
If this doesn't work, how do we revert?
```
=== 5.4 Community Discussion ===
**RFC review period**: at least 7 days, extended for high-impact proposals
**Participants**:
* Other contributors comment
* Maintainers review for feasibility
* The Technical Coordinator assesses architectural impact
* The Governing Team assesses policy implications
**Goal**: Surface concerns, improve the proposal, build consensus
=== 5.5 Test & Validate ===
**Required before approval**:
* ✅ Deploy to a test environment
* ✅ Run on historical data (regression test)
* ✅ Measure impact on key metrics
* ✅ A/B testing if feasible
* ✅ Document results
**Pass criteria**:
* Solves the stated problem
* Doesn't break existing functionality
* Metrics improve or remain stable
* No unacceptable trade-offs
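The regression-test step above can be sketched as a comparison of a candidate algorithm against the current one on historical data. The function names, the verdict-flip budget, and treating algorithms as plain callables are all assumptions for illustration, not FactHarbor's actual test harness.

```python
def regression_check(old_algo, new_algo, historical_claims,
                     max_changed_fraction=0.02):
    """Run both algorithm versions over historical claims and report
    how many verdicts the candidate would change.

    Returns (changed_fraction, passed): passed is False when the
    candidate flips more verdicts than the allowed budget, signalling
    that a closer review is needed before approval.
    """
    changed = sum(
        1 for claim in historical_claims
        if old_algo(claim) != new_algo(claim)
    )
    fraction = changed / len(historical_claims)
    return fraction, fraction <= max_changed_fraction
```

A low changed fraction does not by itself prove the change is good; it bounds the blast radius so reviewers can inspect the flipped cases individually.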
=== 5.6 Review & Approval ===
**Review by**:
* **Technical changes**: Technical Coordinator (or a designated Maintainer)
* **Policy changes**: Governing Team (consent-based decision)
* **Infrastructure**: Technical Coordinator
* **Documentation**: Community Coordinator
**Approval criteria**:
* Solves the problem effectively
* Test results are positive
* No principled objections (for consent-based decisions)
* Aligns with FactHarbor principles
=== 5.7 Deploy & Monitor ===
**Deployment strategy**:
* Gradual rollout (canary deployment)
* Monitor key metrics closely
* Be ready to roll back if problems arise
* Document the deployment
**Monitoring period**: intensive immediately after rollout, then ongoing
**Success indicators**:
* Target metrics improve
* No unexpected side effects
* User feedback is positive
* System stability is maintained
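The canary-deployment strategy above can be condensed into a simple gate. The rollout stages, the metric comparison, and the regression threshold are illustrative choices, not FactHarbor deployment policy.

```python
def canary_gate(baseline_metric, canary_metric, rollout_pct,
                max_regression=0.01):
    """Decide the next step of a gradual rollout.

    Advances to the next traffic stage while the canary cohort's
    metric stays within max_regression of the baseline; otherwise
    rolls back to zero traffic.
    """
    stages = [1, 5, 25, 50, 100]  # percent of traffic, illustrative
    if canary_metric < baseline_metric - max_regression:
        return "rollback", 0
    next_stages = [s for s in stages if s > rollout_pct]
    if not next_stages:
        return "complete", 100
    return "advance", next_stages[0]
```

The design point is that rollback is a cheap, pre-decided action: the "Rollback Plan" section of the RFC defines what reverting means, and the gate merely triggers it.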
=== 5.8 Evaluate & Iterate ===
**Post-deployment review**:
* Did metrics improve as expected?
* Any unexpected effects?
* What did we learn?
* What should we do differently next time?
**Document learnings**: Update the RFC with actual outcomes.
== 6. Contribution Types in Detail ==
=== 6.1 Algorithm Improvements ===
**Examples**:
* Better evidence extraction from web pages
* Improved source reliability scoring
* Enhanced contradiction detection
* Faster claim parsing
* More accurate risk classification
**Process**: RFC → Test → Review → Deploy → Monitor
**Skills needed**: Python, ML/AI, data analysis, testing
=== 6.2 Policy Proposals ===
**Examples**:
* Risk tier definition refinements
* New domain-specific guidelines
* Moderation criteria updates
* Community behavior standards
**Process**: RFC → Community discussion → Governing Team consent → Deploy → Monitor
**Skills needed**: Domain knowledge, policy writing, ethics
=== 6.3 Infrastructure Improvements ===
**Examples**:
* Database query optimization
* Caching strategy improvements
* Monitoring tool enhancements
* Deployment automation
* Scaling improvements
**Process**: RFC → Test → Technical Coordinator review → Deploy → Monitor
**Skills needed**: DevOps, databases, system architecture, performance tuning
=== 6.4 Documentation ===
**Examples**:
* User guides
* API documentation
* Architecture documentation
* Onboarding materials
* Tutorial videos
**Process**: Draft → Community feedback → Community Coordinator review → Publish
**Skills needed**: Technical writing, understanding of FactHarbor
== 7. Quality Standards ==
=== 7.1 Code Quality ===
**Required**:
* ✅ Follows project coding standards
* ✅ Includes tests
* ✅ Documented (code comments + docs update)
* ✅ Passes CI/CD checks
* ✅ Reviewed by a maintainer
=== 7.2 Testing Requirements ===
**Algorithm changes**:
* Unit tests
* Integration tests
* Regression tests on historical data
* Performance benchmarks
**Policy changes**:
* Validation on test cases
* Impact analysis on existing claims
* Edge case coverage
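A minimal example of the unit-test and edge-case expectations above, using Python's standard `unittest` module. The `parse_claim` function is a deliberately toy stand-in; the real claim parser and its interface are not shown here.

```python
import unittest

def parse_claim(text):
    """Toy stand-in for a claim parser: splits a claim into a subject
    and an assertion at the first " is ". Only illustrates the testing
    pattern, not the real parsing logic."""
    subject, _, assertion = text.partition(" is ")
    if not assertion:
        raise ValueError("no assertion found")
    return {"subject": subject, "assertion": assertion}

class TestParseClaim(unittest.TestCase):
    def test_simple_claim(self):
        parsed = parse_claim("The Earth is round")
        self.assertEqual(parsed["subject"], "The Earth")
        self.assertEqual(parsed["assertion"], "round")

    def test_edge_case_no_assertion(self):
        # Edge case coverage: malformed input must fail loudly,
        # never produce a half-parsed claim.
        with self.assertRaises(ValueError):
            parse_claim("Just a fragment")
```

Tests like these run in CI/CD on every change, which is what makes the "Passes CI/CD checks" requirement above enforceable rather than aspirational.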
=== 7.3 Documentation Requirements ===
**All changes must include**:
* Updated architecture docs (if applicable)
* Updated API docs (if applicable)
* Migration guide (if breaking change)
* Changelog entry
== 8. Handling Disagreements ==
=== 8.1 Technical Disagreements ===
**Process**:
1. Discuss in RFC comments
2. Present data/evidence
3. Consider trade-offs openly
4. Technical Coordinator makes the final decision (or escalates)
5. Document reasoning
**Principle**: Data and principles over opinions
=== 8.2 Policy Disagreements ===
**Process**:
1. Discuss in the RFC
2. Clarify the principles at stake
3. Consider stakeholder impact
4. Governing Team uses a consent-based decision
5. Document reasoning
**Principle**: Consent-based (not consensus) - can you support this even if it is not perfect?
=== 8.3 Escalation Path ===
**For unresolved issues**:
* Technical → Technical Coordinator → Governing Team
* Policy → Governing Team → General Assembly (if fundamental)
* Behavior → Moderator → Governance Steward → Governing Team
== 9. Behavior Standards ==
=== 9.1 Expected Behavior ===
**Contributors are expected to**:
* ✅ Assume good faith
* ✅ Focus on system improvements, not personal opinions
* ✅ Support decisions once made (even if you disagreed)
* ✅ Be constructive in criticism
* ✅ Document your reasoning
* ✅ Test thoroughly before proposing
* ✅ Learn from mistakes
=== 9.2 Unacceptable Behavior ===
**The following will not be tolerated**:
* ❌ Personal attacks
* ❌ Harassment or discrimination
* ❌ Attempting to game the system
* ❌ Circumventing the RFC process for significant changes
* ❌ Deploying untested changes to production
* ❌ Ignoring feedback without explanation
=== 9.3 Enforcement ===
**Process**:
* First offense: Warning + coaching
* Second offense: Temporary suspension (duration based on severity)
* Third offense: Permanent ban
**Severe violations** (harassment, malicious code): Immediate ban
**Appeal**: To the Governance Steward, then the Governing Team
== 10. Recognition ==
**Contributors are recognized through**:
* Public acknowledgment in release notes
* Contribution statistics on their profile
* Special badges for significant contributions
* Invitations to contributor events
* Potential hiring opportunities
**Not recognized through**:
* Payment (unless contracted separately)
* Automatic role promotions
* Special privileges in content decisions (there are none)
== 11. Getting Started ==
**New contributors should**:
1. Read this page + [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]]
2. Join the community forum
3. Review open issues labeled "good first issue"
4. Start with documentation improvements
5. Learn the RFC process by observing
6. Make a first contribution
7. Participate in discussions
8. Build a track record
**Resources**:
* Developer guide: [Coming soon]
* RFC template: [In repository]
* Community forum: [Link]
* Slack/Discord: [Link]
---
**Remember**: You improve the SYSTEM. AKEL improves the CONTENT.
== 12. Related Pages ==
* [[Governance>>FactHarbor.Organisation.Governance.WebHome]] - Decision-making structure
* [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]] - Team structure
* [[Decision Processes>>FactHarbor.Organisation.Decision-Processes]] - How decisions are made