Wiki source code of Contributor Processes
Last modified by Robert Schaub on 2026/02/08 08:29
= Contributor Processes =

== 1. Purpose ==

This page explains how contributors improve **the system that evaluates claims**, not the claims themselves.

**Key Principle**: AKEL makes content decisions. Contributors improve the algorithms, policies, and infrastructure that enable AKEL to make better decisions.

== 2. What Contributors Do ==

Contributors work on **system improvements**, not content review:

* ✅ **Algorithm improvements**: Better evidence detection, improved source scoring, enhanced contradiction detection
* ✅ **Policy proposals**: Risk tier definitions, domain-specific rules, moderation criteria
* ✅ **Infrastructure**: Performance optimization, scaling improvements, monitoring tools
* ✅ **Documentation**: User guides, API docs, architecture documentation
* ✅ **Testing**: A/B tests, regression tests, performance benchmarks

== 3. What Contributors Do NOT Do ==

* ❌ **Review individual claims for correctness** - That's AKEL's job
* ❌ **Override AKEL verdicts** - Fix the algorithm, not the output
* ❌ **Manually adjust source scores** - Improve scoring rules systematically
* ❌ **Act as approval gates** - This defeats the purpose of automation
* ❌ **Make ad-hoc content decisions** - All content decisions must be algorithmic

**If you think AKEL made a mistake**: Don't fix that one case. Fix the algorithm so it handles all similar cases correctly.

== 4. Contributor Journey ==

=== 4.1 Visitor ===

* Reads documentation
* Explores repositories
* May open issues reporting bugs or suggesting improvements

=== 4.2 New Contributor ===

* First contributions: Documentation fixes, clarifications, minor improvements
* Learns: System architecture, RFC process, testing procedures
* Builds: Understanding of FactHarbor principles

=== 4.3 Regular Contributor ===

* Contributes regularly to system improvements
* Follows project rules and the RFC process
* Builds a track record of quality contributions

=== 4.4 Trusted Contributor ===

* Extensive track record of high-quality work
* Deep understanding of the system architecture
* Can review others' contributions
* Participates in technical decisions

=== 4.5 Maintainer ===

* Approves system changes within their domain
* Is the Technical Coordinator or designated by them
* Has authority over specific system components
* Is accountable for system performance in their domain

=== 4.6 Moderator (Separate Track) ===

* Handles AKEL-flagged escalations
* Focuses on abuse, manipulation, and system gaming
* Proposes detection improvements
* Does NOT review content for correctness

=== 4.7 Contributor Roles and Trust Levels ===

This section summarizes how people can participate in FactHarbor and how responsibilities grow with trust and experience. The roles and their permissions are:

1. **Visitor** – explores the platform, reads documentation, may raise questions.
2. **New Contributor** – submits first improvements (typo fixes, small clarifications, new issues).
3. **Contributor** – contributes regularly and follows project conventions.
4. **Trusted Contributor** – has a track record of high-quality work and reliable judgement.
5. **Reviewer** – reviews changes for correctness, neutrality, and process compliance.
6. **Moderator** – focuses on behaviour, tone, and conflict moderation.
7. **Domain Expert (optional)** – offers domain expertise without changing governance authority.

==== Principles ====

* Low barrier to entry for new contributors.
* Transparent criteria for gaining and losing responsibilities.
* Clear separation between content quality review and behavioural moderation.
* Documented processes for escalation and appeal.

==== Processes ====

Typical contributor processes include:

* proposal and review of documentation or code changes
* reporting and triaging issues or suspected errors
* moderation of discussions and conflict resolution
* onboarding support for new contributors.

Details of the process steps are aligned with the [[Open Source Model and Licensing>>FactHarbor.Organisation.Open Source Model and Licensing]] and [[Decision Processes>>FactHarbor.Organisation.Decision-Processes]] pages.

== 5. System Improvement Workflow ==

=== 5.1 Identify Issue ===

**Sources**:

* Performance metrics dashboard shows anomaly
* User feedback reveals pattern
* AKEL processing logs show systematic error
* Code review identifies technical debt

**Key**: Focus on PATTERNS, not individual cases.
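
To make "focus on patterns" operational, a dashboard anomaly check can flag when a metric drifts away from its recent baseline. This is an illustrative sketch only; the metric series, window size, and threshold are assumptions, not FactHarbor tooling:

```python
from statistics import mean, stdev

def find_anomalies(values, window=7, threshold=3.0):
    """Return indices of points more than `threshold` standard
    deviations away from the mean of the trailing `window` points."""
    flagged = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(values[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Hypothetical daily precision of evidence detection; the final dip
# is a pattern worth diagnosing, while the small wobbles are noise.
precision = [0.91, 0.92, 0.90, 0.91, 0.93, 0.92, 0.91, 0.90, 0.72]
print(find_anomalies(precision))  # [8]
```

A single flagged day is a starting point for diagnosis, not a verdict; the workflow below turns it into a root-cause analysis.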

=== 5.2 Diagnose Root Cause ===

**Analysis methods**:

* Run experiments in test environment
* Analyze AKEL decision patterns
* Review algorithm parameters
* Check training data quality
* Profile performance bottlenecks

**Output**: Clear understanding of systematic issue.
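
Analyzing AKEL decision patterns usually means aggregating, not reading individual cases. The sketch below (record fields and labels are hypothetical) computes a disagreement rate per group, so a systematic issue concentrated in one domain stands out from random noise:

```python
from collections import defaultdict

def error_rates_by_group(decisions, key):
    """Group decision records by `key` and compute the rate at which
    the verdict disagrees with the expected label in each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for d in decisions:
        group = d[key]
        totals[group] += 1
        if d["verdict"] != d["expected"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical regression-set records: AKEL verdict vs. expected label.
log = [
    {"domain": "health",  "verdict": "supported", "expected": "supported"},
    {"domain": "health",  "verdict": "refuted",   "expected": "supported"},
    {"domain": "finance", "verdict": "supported", "expected": "supported"},
    {"domain": "finance", "verdict": "supported", "expected": "supported"},
]
print(error_rates_by_group(log, "domain"))
# {'health': 0.5, 'finance': 0.0} -> the problem is domain-specific
```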

=== 5.3 Propose Solution (RFC) ===

**Create a Request for Comments (RFC)** using the **RFC template**:

```
## Problem Statement
What systematic issue exists? What metrics show it?

## Proposed Solution
What specific changes to algorithm/policy/infrastructure?

## Alternatives Considered
What other approaches were evaluated? Why not chosen?

## Trade-offs
What are downsides? What metrics might worsen?

## Success Metrics
How will we know this works? What metrics will improve?

## Testing Plan
How will this be validated before full deployment?

## Rollback Plan
If this doesn't work, how do we revert?
```

=== 5.4 Community Discussion ===

**RFC review period**: at least 7 days, longer for high-impact changes

**Participants**:

* Other contributors comment
* Maintainers review for feasibility
* The Technical Coordinator reviews architectural impact
* The Governing Team reviews policy implications

**Goal**: Surface concerns, improve the proposal, build consensus

=== 5.5 Test & Validate ===

**Required before approval**:

* ✅ Deploy to test environment
* ✅ Run on historical data (regression test)
* ✅ Measure impact on key metrics
* ✅ A/B testing if feasible
* ✅ Document results

**Pass criteria**:

* Solves stated problem
* Doesn't break existing functionality
* Metrics improve or remain stable
* No unacceptable trade-offs
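
The regression test on historical data can be sketched as replaying each case through the old and new versions and comparing outcomes: a change passes only if it fixes cases without breaking ones the baseline already got right. The names (`baseline`, `candidate`) and toy cases are illustrative assumptions:

```python
def regression_check(cases, baseline, candidate, max_regressions=0):
    """Replay historical cases through both algorithm versions and
    report which cases the candidate fixes and which it breaks."""
    fixed, broken = [], []
    for case in cases:
        old_ok = baseline(case["input"]) == case["expected"]
        new_ok = candidate(case["input"]) == case["expected"]
        if new_ok and not old_ok:
            fixed.append(case["id"])
        elif old_ok and not new_ok:
            broken.append(case["id"])
    return {"fixed": fixed, "broken": broken,
            "pass": len(broken) <= max_regressions}

# Toy example: a "scoring" change that rounds instead of truncating.
cases = [
    {"id": 1, "input": 0.96, "expected": 1},
    {"id": 2, "input": 0.20, "expected": 0},
]
print(regression_check(cases, baseline=int, candidate=round))
# {'fixed': [1], 'broken': [], 'pass': True}
```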

=== 5.6 Review & Approval ===

**Review by**:

* **Technical changes**: Technical Coordinator (or designated Maintainer)
* **Policy changes**: Governing Team (consent-based decision)
* **Infrastructure**: Technical Coordinator
* **Documentation**: Community Coordinator

**Approval criteria**:

* Solves problem effectively
* Test results positive
* No principled objections (for consent-based decisions)
* Aligns with FactHarbor principles

=== 5.7 Deploy & Monitor ===

**Deployment strategy**:

* Gradual rollout (canary deployment)
* Monitor key metrics closely
* Be ready to roll back if problems appear
* Document the deployment

**Monitoring period**: intensive at first, then ongoing

**Success indicators**:

* Target metrics improve
* No unexpected side effects
* User feedback positive
* System stability maintained
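
A gradual rollout is often implemented with deterministic bucketing: the same claim always sees the same variant, and the canary share can be raised or rolled back by changing one number. A minimal sketch, in which the ID scheme and percentages are assumptions:

```python
import hashlib

def canary_route(claim_id, canary_percent):
    """Deterministically bucket an ID into 0-99. Raising canary_percent
    only ever moves claims from 'stable' to 'canary', never the reverse,
    so a rollout (or rollback) is a single configuration change."""
    bucket = int(hashlib.sha256(claim_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

assert canary_route("claim-42", 0) == "stable"    # nothing deployed
assert canary_route("claim-42", 100) == "canary"  # full rollout
share = sum(canary_route(f"claim-{i}", 10) == "canary" for i in range(1000))
print(f"{share / 10:.1f}% of claims on the canary")  # close to 10%
```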

=== 5.8 Evaluate & Iterate ===

**Post-deployment review**:

* Did metrics improve as expected?
* Any unexpected effects?
* What did we learn?
* What should we do differently next time?

**Document learnings**: Update the RFC with actual outcomes.

== 6. Contribution Types in Detail ==

=== 6.1 Algorithm Improvements ===

**Examples**:

* Better evidence extraction from web pages
* Improved source reliability scoring
* Enhanced contradiction detection
* Faster claim parsing
* More accurate risk classification

**Process**: RFC → Test → Review → Deploy → Monitor

**Skills needed**: Python, ML/AI, data analysis, testing
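
As one concrete flavor of "improved source reliability scoring", a smoothed estimate keeps a source with a single data point from scoring 0 or 1 outright. This is an illustrative sketch under assumed parameters, not FactHarbor's actual scoring rule:

```python
def source_score(correct, total, prior_hits=1, prior_misses=1):
    """Laplace-smoothed reliability estimate: sources with little
    history start near 0.5 instead of jumping to an extreme, and the
    observed record dominates the prior as history accumulates."""
    return (correct + prior_hits) / (total + prior_hits + prior_misses)

print(source_score(0, 0))     # unseen source -> 0.5
print(source_score(1, 1))     # one correct record -> 2/3, not 1.0
print(source_score(95, 100))  # long history dominates the prior
```

A change like this would still go through the full RFC → Test → Review cycle, with the regression test showing how existing source rankings shift.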

=== 6.2 Policy Proposals ===

**Examples**:

* Risk tier definition refinements
* New domain-specific guidelines
* Moderation criteria updates
* Community behavior standards

**Process**: RFC → Community discussion → Governing Team consent → Deploy → Monitor

**Skills needed**: Domain knowledge, policy writing, ethics

=== 6.3 Infrastructure Improvements ===

**Examples**:

* Database query optimization
* Caching strategy improvements
* Monitoring tool enhancements
* Deployment automation
* Scaling improvements

**Process**: RFC → Test → Technical Coordinator review → Deploy → Monitor

**Skills needed**: DevOps, databases, system architecture, performance tuning
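
A caching strategy improvement of the kind listed above can be as small as a time-bounded cache in front of an expensive lookup. A minimal sketch, where the TTL value and the cached computation are assumptions:

```python
import time

class TTLCache:
    """Minimal time-based cache: repeated lookups within `ttl` seconds
    skip the expensive computation (e.g. re-deriving a source score)."""

    def __init__(self, ttl, compute, clock=time.monotonic):
        self.ttl, self.compute, self.clock = ttl, compute, clock
        self.store = {}  # key -> (timestamp, value)

    def get(self, key):
        hit = self.store.get(key)
        if hit is not None and self.clock() - hit[0] < self.ttl:
            return hit[1]
        value = self.compute(key)
        self.store[key] = (self.clock(), value)
        return value

# The compute function records each call so we can see cache hits.
calls = []
cache = TTLCache(ttl=60, compute=lambda k: calls.append(k) or len(calls))
assert cache.get("example.org") == 1  # computed on first lookup
assert cache.get("example.org") == 1  # served from cache
assert len(calls) == 1                # the computation ran only once
```

In production this would sit behind metrics (hit rate, staleness) so the RFC's success criteria can be measured, and the TTL itself would be a tuned parameter.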

=== 6.4 Documentation ===

**Examples**:

* User guides
* API documentation
* Architecture documentation
* Onboarding materials
* Tutorial videos

**Process**: Draft → Community feedback → Community Coordinator review → Publish

**Skills needed**: Technical writing, understanding of FactHarbor

== 7. Quality Standards ==

=== 7.1 Code Quality ===

**Required**:

* ✅ Follows project coding standards
* ✅ Includes tests
* ✅ Documented (code comments + docs update)
* ✅ Passes CI/CD checks
* ✅ Reviewed by a maintainer

=== 7.2 Testing Requirements ===

**Algorithm changes**:

* Unit tests
* Integration tests
* Regression tests on historical data
* Performance benchmarks

**Policy changes**:

* Validation on test cases
* Impact analysis on existing claims
* Edge case coverage
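
Edge case coverage is easiest to show with a boundary test. The `classify_risk` rule below is hypothetical, invented only to illustrate the expected test structure:

```python
def classify_risk(domain, reach):
    """Hypothetical risk-tier rule used only to illustrate testing:
    health claims, and any claim with large reach, are high risk."""
    if domain == "health" or reach >= 10_000:
        return "high"
    return "standard"

def test_health_is_always_high():
    assert classify_risk("health", 1) == "high"

def test_reach_threshold_boundary():
    # Edge case coverage: test exactly at and just below the threshold,
    # since off-by-one errors hide at boundaries.
    assert classify_risk("sports", 10_000) == "high"
    assert classify_risk("sports", 9_999) == "standard"

test_health_is_always_high()
test_reach_threshold_boundary()
print("all edge-case tests passed")
```

The same boundary-first habit applies to policy changes: enumerate the thresholds a rule depends on and pin each one with a test case before proposing the change.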

=== 7.3 Documentation Requirements ===

**All changes must include**:

* Updated architecture docs (if applicable)
* Updated API docs (if applicable)
* Migration guide (if breaking change)
* Changelog entry

== 8. Handling Disagreements ==

=== 8.1 Technical Disagreements ===

**Process**:

1. Discuss in RFC comments
2. Present data/evidence
3. Consider trade-offs openly
4. Technical Coordinator makes the final decision (or escalates)
5. Document the reasoning

**Principle**: Data and principles over opinions

=== 8.2 Policy Disagreements ===

**Process**:

1. Discuss in the RFC
2. Clarify the principles at stake
3. Consider stakeholder impact
4. Governing Team uses a consent-based decision
5. Document the reasoning

**Principle**: Consent-based (not consensus) - can you support this even if it is not perfect?

=== 8.3 Escalation Path ===

**For unresolved issues**:

* Technical → Technical Coordinator → Governing Team
* Policy → Governing Team → General Assembly (if fundamental)
* Behavior → Moderator → Governance Steward → Governing Team

== 9. Behavior Standards ==

=== 9.1 Expected Behavior ===

**Contributors are expected to**:

* ✅ Assume good faith
* ✅ Focus on system improvements, not personal opinions
* ✅ Support decisions once made (even if you disagreed)
* ✅ Be constructive in criticism
* ✅ Document your reasoning
* ✅ Test thoroughly before proposing
* ✅ Learn from mistakes

=== 9.2 Unacceptable Behavior ===

**Will not be tolerated**:

* ❌ Personal attacks
* ❌ Harassment or discrimination
* ❌ Attempting to game the system
* ❌ Circumventing the RFC process for significant changes
* ❌ Deploying untested changes to production
* ❌ Ignoring feedback without explanation

=== 9.3 Enforcement ===

**Process**:

* First offense: Warning + coaching
* Second offense: Temporary suspension (duration based on severity)
* Third offense: Permanent ban

**Severe violations** (harassment, malicious code): Immediate ban

**Appeal**: To the Governance Steward, then the Governing Team

== 10. Recognition ==

**Contributors are recognized through**:

* Public acknowledgment in release notes
* Contribution statistics on their profile
* Special badges for significant contributions
* Invitations to contributor events
* Potential hiring opportunities

**Not recognized through**:

* Payment (unless contracted separately)
* Automatic role promotions
* Special privileges in content decisions (there are none)

== 11. Getting Started ==

**New contributors should**:

1. Read this page + [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]]
2. Join the community forum
3. Review open issues labeled "good first issue"
4. Start with documentation improvements
5. Learn the RFC process by observing
6. Make a first contribution
7. Participate in discussions
8. Build a track record

**Resources**:

* Developer guide: [Coming soon]
* RFC template: [In repository]
* Community forum: [Link]
* Slack/Discord: [Link]

---

**Remember**: You improve the SYSTEM. AKEL improves the CONTENT.

== 12. Related Pages ==

* [[Contributor Processes>>FactHarbor.Organisation.Contributor-Processes]] - Roles and trust levels
* [[Governance>>Archive.FactHarbor 2026\.02\.08.Organisation.Governance.WebHome]] - Decision-making structure
* [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]] - Team structure
* [[Decision Processes>>FactHarbor.Organisation.Decision-Processes]] - How decisions are made