Wiki source code of Governance
Version 2.1 by Robert Schaub on 2025/12/14 18:59
= Governance =

FactHarbor is governed collaboratively with clear separation between **organizational policy and decisions** and **technical implementation**.

== Governance Structure ==

* **Governing Team** – Sets high-level policy, organizational direction, funding priorities
* **Lead** – Coordinates execution, represents organization publicly
* **Core Maintainers** – Technical and specification decisions, code/spec review
* **Domain Experts** – Subject-matter authority in specialized areas
* **Community Contributors** – Feedback, proposals, and participation in decision-making

----

== Decision-Making Levels ==

=== Technical Decisions (Maintainers) ===

**Scope**: Architecture, data model, AKEL configuration, quality gates, system performance

**Process**:
* Proposals discussed in technical forums
* Review by core maintainers
* Consensus-based approval
* Breaking changes require broader community input
* Quality gate adjustments require rationale and audit validation

**Examples**:
* Adding new quality gate
* Adjusting AKEL parameters
* Modifying audit sampling algorithms
* Database schema changes

=== Policy Decisions (Governing Team + Community) ===

**Scope**: Risk tier policies, publication rules, content guidelines, ethical boundaries

**Process**:
* Proposal published for community feedback
* Discussion period (recommendation: minimum 14 days for major changes)
* Governing Team decision with community input
* Transparency in reasoning
* Risk tier policy changes require Expert consultation

**Examples**:
* Defining Tier A domains
* Setting audit sampling rates
* Content moderation policies
* Community guidelines

=== Domain-Specific Decisions (Experts) ===

**Scope**: Domain quality standards, source reliability in specialized fields, Tier A content validation

**Process**:
* Expert consensus in domain
* Documented reasoning
* Review by other experts
* Escalation to Governing Team if unresolved
* Experts set domain-specific audit criteria

**Examples**:
* Medical claim evaluation standards
* Legal citation requirements
* Scientific methodology thresholds
* Tier A approval criteria by domain

----

== AI and Human Roles in Governance ==

=== Human-Only Governance Decisions ===

The following can **never** be automated:

* **Ethical boundary setting** – What content is acceptable, what harm thresholds exist
* **Risk tier policy** – Which domains are Tier A/B/C (though AKEL can suggest)
* **Audit system oversight** – Quality standards, sampling strategies, auditor selection
* **Dispute resolution** – Conflicts between experts, controversial decisions
* **Community guidelines enforcement** – Bans, suspensions, conflict mediation
* **Organizational direction** – Mission, vision, funding priorities

=== AKEL Advisory Role ===

AKEL can **assist but not decide**:

* Suggest risk tier assignments (humans validate)
* Flag content for expert review (humans decide)
* Identify patterns in audit failures (humans adjust policy)
* Propose quality gate refinements (maintainers approve)
* Detect emerging topics needing new policies (Governing Team decides)
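
A minimal sketch of how this advisory boundary could be kept explicit in data. The record type, field names, and reviewer identifiers below are illustrative assumptions, not part of the AKEL specification:

{{code language="python"}}
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for an AKEL advisory output: AKEL fills in the suggestion,
# but nothing takes effect until a named human records the decision.
@dataclass
class TierSuggestion:
    content_id: str
    suggested_tier: str                   # "A", "B", or "C", proposed by AKEL
    rationale: str                        # AKEL reasoning, kept for audit
    decided_tier: Optional[str] = None    # set only by a human reviewer
    decided_by: Optional[str] = None      # human account, never an AKEL identity

    def is_effective(self) -> bool:
        # Advisory only until a human validates or overrides it.
        return self.decided_tier is not None and self.decided_by is not None

# Usage: AKEL proposes, a moderator or expert confirms or overrides.
suggestion = TierSuggestion(content_id="claim-123", suggested_tier="A",
                            rationale="medical dosage claim")
assert not suggestion.is_effective()
suggestion.decided_tier, suggestion.decided_by = "A", "moderator:jane"
assert suggestion.is_effective()
{{/code}}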

=== Transparency Requirement ===

All governance decisions must be:
* **Documented** with reasoning
* **Published** for community visibility
* **Reviewable** by community members
* **Reversible** if there is evidence of error or harm
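
One way to make these four properties concrete is a decision record that carries them as fields. The following is an illustrative sketch only; the GovernanceDecision name and its fields are assumptions, not an existing FactHarbor schema:

{{code language="python"}}
from dataclasses import dataclass

# Illustrative record covering the four transparency requirements:
# documented (reasoning), published (public page), reviewable (open thread),
# and reversible (an explicit reversal pointer instead of silent edits).
@dataclass
class GovernanceDecision:
    decision_id: str
    summary: str
    reasoning: str            # documented with reasoning
    published_url: str        # published for community visibility
    review_thread_url: str    # reviewable by community members
    reversed_by: str = ""     # reversible: id of the later decision that reversed this one

    def is_reversed(self) -> bool:
        return bool(self.reversed_by)
{{/code}}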

----

== Audit System Governance ==

=== Audit Oversight Committee ===

**Composition**: Maintainers, Domain Experts, and Governing Team member(s)

**Responsibilities**:
* Set quality standards for audit evaluation
* Review audit statistics and trends
* Adjust sampling rates based on performance
* Approve changes to audit algorithms
* Oversee auditor selection and rotation
* Publish transparency reports

**Meeting Frequency**: Recommendation: regular meetings, convened as needed

**Reporting**: Recommendation: periodic transparency reports to the community
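
As an illustration of one of the committee's levers, audit sampling could be expressed as per-tier rates that only the committee may change. The structure and the numbers below are placeholders, not agreed values:

{{code language="python"}}
import random

# Placeholder per-tier audit sampling rates. The actual values are a policy
# decision of the Audit Oversight Committee, not fixed here.
AUDIT_SAMPLING_RATES = {
    "A": 1.00,   # illustrative: every Tier A item audited
    "B": 0.25,   # illustrative: one in four Tier B items
    "C": 0.05,   # illustrative: spot checks for Tier C
}

def select_for_audit(tier: str) -> bool:
    """Return True if an item in this tier should be routed to a human auditor."""
    return random.random() < AUDIT_SAMPLING_RATES.get(tier, 1.0)  # unknown tier: audit everything
{{/code}}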

=== Audit Performance Metrics ===

Tracked and published:
* Audit pass/fail rates by tier
* Common failure patterns
* System improvements implemented
* Time to resolution for audit failures
* Auditor performance (anonymized)
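
A hypothetical shape for the published statistics, to make the list above concrete; all field names are assumptions for illustration:

{{code language="python"}}
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Illustrative structure for a periodic audit transparency report.
@dataclass
class AuditTransparencyReport:
    period: str                                     # e.g. "2025-Q4"
    pass_fail_by_tier: Dict[str, Tuple[int, int]]   # tier -> (passed, failed)
    common_failure_patterns: List[str]              # e.g. "missing primary source"
    improvements_implemented: List[str]
    median_days_to_resolution: float                # time to resolution for audit failures
    auditor_agreement_rates: Dict[str, float]       # anonymized auditor id -> agreement rate
{{/code}}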

=== Feedback Loop Governance ===

**Process**:
1. Audits identify patterns in AI errors
2. Audit Committee reviews patterns
3. Maintainers propose technical fixes
4. Changes tested in sandbox
5. Community informed of improvements
6. Deployed with monitoring

**Escalation**:
* Persistent high failure rates → Pause AI publication in affected tier/domain
* Critical errors → Immediate system review
* Pattern of harm → Policy revision
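
These triggers could be backed by explicit thresholds so that "persistent" and "high" are not left to interpretation. The numbers below are placeholders the Audit Committee would set, not agreed policy:

{{code language="python"}}
# Placeholder escalation thresholds; illustrative values, to be set and revised
# by the Audit Oversight Committee through the process above.
ESCALATION_RULES = {
    "pause_failure_rate": 0.10,      # audited failure rate that pauses AI publication
    "pause_window_days": 14,         # window over which the rate is measured
    "critical_review_hours": 24,     # time to convene an immediate system review
}

def should_pause_publication(failures: int, audits: int) -> bool:
    """Pause AI publication in a tier/domain if the audited failure rate is too high."""
    if audits == 0:
        return False
    return failures / audits >= ESCALATION_RULES["pause_failure_rate"]
{{/code}}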

----

== Risk Tier Policy Governance ==

=== Risk Tier Assignment Authority ===

* **AKEL**: Suggests initial tier based on domain, keywords, content analysis
* **Moderators**: Can override AKEL for individual content
* **Experts**: Set tier policy for their domains
* **Governing Team**: Approve tier policy changes, resolve tier disputes

=== Risk Tier Review Process ===

**Triggers for Review**:
* Significant audit failures in a tier
* New emerging topics or domains
* Community flags systematic misclassification
* Expert domain recommendations
* Periodic policy review

**Process**:
1. Expert domain review (identify whether Tier A/B/C is appropriate)
2. Community input period (recommendation: allow sufficient time for feedback)
3. Audit Committee assessment (error patterns in current tier)
4. Governing Team decision
5. Implementation with monitoring period
6. Transparency report on rationale

=== Current Tier Assignments (Baseline) ===

**Tier A**: Medical, legal, elections, safety/security, major financial decisions

**Tier B**: Complex science causality, contested policy, historical interpretation with political implications, significant economic impact

**Tier C**: Established historical facts, simple definitions, well-documented scientific consensus, basic reference info

**Note**: These are guidelines; edge cases require expert judgment
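
The baseline could be captured in a configuration that AKEL consults when suggesting tiers. The mapping below simply restates the baseline above in config form; the domain keys and the conservative default for unlisted domains are assumptions, not stated policy:

{{code language="python"}}
# Baseline risk-tier mapping, restating the guideline above. Edge cases and
# unlisted domains still require expert judgment; defaulting unknown domains
# to Tier A here is an assumption, not stated policy.
BASELINE_TIERS = {
    "A": ["medical", "legal", "elections", "safety_security", "major_financial_decisions"],
    "B": ["complex_science_causality", "contested_policy",
          "political_historical_interpretation", "significant_economic_impact"],
    "C": ["established_historical_facts", "simple_definitions",
          "scientific_consensus", "basic_reference"],
}

def baseline_tier(domain: str) -> str:
    """Return the baseline tier for a domain; unknown domains default to Tier A."""
    for tier, domains in BASELINE_TIERS.items():
        if domain in domains:
            return tier
    return "A"  # conservative default (assumption)
{{/code}}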

----

== Quality Gate Governance ==

=== Quality Gate Modification Process ===

**Who Can Propose**: Maintainers, Experts, Audit Committee

**Requirements**:
* Rationale based on audit failures or system improvements
* Testing in sandbox environment
* Impact assessment (false positive/negative rates)
* Community notification before deployment

**Approval**:
* Technical changes: Maintainer consensus
* Policy changes (e.g., new gate criteria): Governing Team approval

**Examples of Governed Changes**:
* Adjusting contradiction search scope
* Modifying source reliability thresholds
* Adding new bubble detection patterns
* Changing uncertainty quantification formulas
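
A change of this kind could travel as a structured proposal so the rationale, sandbox results, impact assessment, and approvals stay attached to it. Everything below is an illustrative sketch, not an existing FactHarbor interface:

{{code language="python"}}
from dataclasses import dataclass, field
from typing import List

# Illustrative proposal record for a quality-gate change, mirroring the
# requirements above: rationale, sandbox testing, impact assessment, notification.
@dataclass
class QualityGateChangeProposal:
    gate: str                      # e.g. "source_reliability_threshold"
    current_value: float
    proposed_value: float
    rationale: str                 # tied to audit failures or system improvements
    sandbox_results: str           # summary of sandbox testing
    false_positive_delta: float    # impact assessment
    false_negative_delta: float
    community_notified: bool = False
    approvals: List[str] = field(default_factory=list)  # maintainer / Governing Team sign-offs

    def deployable(self) -> bool:
        # Community notification and at least one recorded approval before deployment.
        return self.community_notified and len(self.approvals) > 0
{{/code}}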

----

== Community Participation ==

=== Open Discussion Forums ===

* Technical proposals (maintainer-led)
* Policy proposals (Governing Team-led)
* Domain-specific discussions (Expert-led)
* Audit findings and improvements (Audit Committee-led)

=== Proposal Mechanism ===

Anyone can propose:
1. Submit proposal with rationale
2. Community discussion (recommendation: allow a minimum feedback period)
3. Relevant authority reviews (Maintainers/Governing Team/Experts)
4. Decision with documented reasoning
5. Implementation (if approved)

=== Transparency ===

* All decisions documented in public wiki
* Audit statistics published periodically
* Governing Team meeting minutes published
* Expert recommendations documented
* Community feedback acknowledged

----

== Dispute Resolution ==

=== Conflict Between Experts ===

1. Experts attempt consensus
2. If unresolved, escalate to Governing Team
3. Governing Team appoints neutral expert panel
4. Panel recommendation
5. Governing Team decision (final)

=== Conflict Between Maintainers ===

1. Discussion in maintainer forum
2. Attempt consensus
3. If unresolved, Lead makes decision
4. Community informed of reasoning

=== User Appeals ===

Users can appeal:
* Content rejection decisions
* Risk tier assignments
* Audit outcomes
* Moderation actions

**Process**:
1. Submit appeal with evidence
2. Reviewed by independent moderator/expert
3. Decision with reasoning
4. Final appeal to Governing Team (if warranted)

----

== Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]]