= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

== 1. How do claims get evaluated in FactHarbor? ==

=== 1.1 User Submission ===

**Who**: Anyone can submit claims
**Process**: User submits claim text + source URLs
**Speed**: Typically under 20 seconds to a verdict

=== 1.2 AKEL Processing (Automated) ===

**What**: The AI Knowledge Extraction Layer (AKEL) analyzes the claim
**Steps**:

* Parse claim into testable components
* Extract evidence from provided sources
* Score source credibility
* Generate verdict with confidence level
* Assign risk tier
* Publish automatically

**Authority**: AKEL makes all content decisions
**Scale**: Can process millions of claims
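
A minimal, self-contained Python sketch of such a pipeline is shown below. All logic, names, and thresholds are invented for illustration; this is not AKEL's actual implementation.

{{code language="python"}}
from dataclasses import dataclass, field

# Illustrative sketch of the six AKEL steps above. Every heuristic here
# (splitting on "and", https-based credibility, keyword tiers) is a
# placeholder, not FactHarbor's real logic.

@dataclass
class Verdict:
    claim: str
    likelihood: float   # 0.0-1.0: how likely the claim holds as stated
    confidence: float   # 0.0-1.0: how sure the system is of that estimate
    risk_tier: str      # "A" (high), "B" (medium), "C" (low)
    sources: list = field(default_factory=list)

def evaluate_claim(claim_text: str, source_urls: list) -> Verdict:
    # 1. Parse the claim into testable components (stub: split on "and").
    components = [c.strip() for c in claim_text.split(" and ")]
    # 2.-3. Extract evidence and score source credibility (stub heuristic).
    credibility = {u: (0.8 if u.startswith("https://") else 0.3)
                   for u in source_urls}
    # 4. Generate a verdict with a confidence level (stub: averages).
    likelihood = (sum(credibility.values()) / len(credibility)
                  if credibility else 0.0)
    confidence = min(1.0, 0.2 * len(source_urls))
    # 5. Assign a risk tier from domain keywords (stub list).
    high_risk = ("vaccine", "election", "drug", "lawsuit")
    tier = "A" if any(k in claim_text.lower() for k in high_risk) else "C"
    # 6. Publish automatically: no per-claim human approval step.
    return Verdict(claim_text, likelihood, confidence, tier, source_urls)

print(evaluate_claim("Coffee reduces heart disease risk",
                     ["https://example.org/study-001"]))
{{/code}}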

=== 1.3 Continuous Improvement (Human Role) ===

**What**: Humans improve the system, not individual verdicts
**Activities**:

* Monitor aggregate performance metrics
* Identify systematic errors
* Propose algorithm improvements
* Update policies and rules
* Test changes before deployment

**NOT**: Reviewing individual claims for approval
**Focus**: Fix the system, not the data

=== 1.4 Exception Handling ===

**When AKEL flags for review** (see the sketch below):

* Low confidence verdict
* Detected manipulation attempt
* Unusual pattern requiring attention

**Moderator role**:

* Reviews flagged items
* Takes action on abuse/manipulation
* Proposes detection improvements
* Does NOT override verdicts
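
A minimal sketch of such a flagging rule, assuming an invented 0.5 confidence threshold and 0.8 manipulation score; flagged items go to moderators, and the verdict itself is never overridden:

{{code language="python"}}
# Hypothetical exception-handling rule. Thresholds and signal names are
# invented for illustration.

def needs_moderator_review(verdict_confidence: float,
                           manipulation_score: float,
                           unusual_pattern: bool) -> bool:
    """Route an item to human review without changing its verdict."""
    if verdict_confidence < 0.5:   # low-confidence verdict
        return True
    if manipulation_score > 0.8:   # detected manipulation attempt
        return True
    return unusual_pattern         # unusual pattern requiring attention

print(needs_moderator_review(0.4, 0.1, False))  # True: low confidence
{{/code}}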

=== 1.5 Why This Model Works ===

**Scale**: Automation handles volume humans cannot
**Consistency**: Same rules applied uniformly
**Transparency**: Algorithms can be audited
**Improvement**: Systematic fixes benefit all claims

== 2. What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters (see the sketch below)
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection
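
As a toy version of the source-cluster check referenced above, the sketch below flags evidence dominated by a single web domain. The 60% threshold and hostname-level clustering are simplifications; a real check would cluster sources by ownership and ideological alignment, not just domain.

{{code language="python"}}
from collections import Counter
from urllib.parse import urlparse

def is_echo_chamber(source_urls: list, max_share: float = 0.6) -> bool:
    """Flag evidence where one domain supplies most of the sources."""
    domains = Counter(urlparse(u).netloc for u in source_urls)
    dominant_share = max(domains.values()) / len(source_urls)
    return dominant_share > max_share

sources = ["https://siteA.example/a", "https://siteA.example/b",
           "https://siteA.example/c", "https://siteB.example/x"]
print(is_echo_chamber(sources))  # True: 75% of sources from one domain
{{/code}}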

== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)

**Truth Landscape**:

* Shows all scenarios and their verdicts side by side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden
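
A sketch of how one claim can map to several scenario verdicts, using the coffee example; the field names and verdict labels are illustrative, not FactHarbor's actual schema:

{{code language="python"}}
# One claim, several explicit interpretations, each with its own verdict.
claim = "Coffee is healthy"

scenarios = [
    {"definition": "reduces cardiovascular disease risk",
     "population": "healthy adults", "consumption": "1-3 cups/day",
     "horizon": "long-term", "verdict": "likely true"},
    {"definition": "reduces cardiovascular disease risk",
     "population": "pregnant women", "consumption": "5 cups/day",
     "horizon": "long-term", "verdict": "likely false"},
]

# The truth landscape: all scenario verdicts side by side, so changing
# the assumptions visibly changes the conclusion.
for s in scenarios:
    print(f'{claim!r} ({s["population"]}, {s["consumption"]}): {s["verdict"]}')
{{/code}}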

== 4. What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by the fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns (see the sketch below)
* Trusted Contributor auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence
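
The sketch below shows stratified audit sampling in miniature: each published item is sampled at its risk tier's rate, so high-risk content is audited far more often. The rates fall within the recommendations in question 8; the selection mechanism itself is illustrative.

{{code language="python"}}
import random

AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.07}  # within Q8's ranges

def select_for_audit(published_items: list, rng: random.Random) -> list:
    """Sample each item independently at its tier's audit rate."""
    return [item for item in published_items
            if rng.random() < AUDIT_RATES[item["tier"]]]

rng = random.Random(42)  # seeded so the example is reproducible
items = [{"id": i, "tier": t} for i, t in enumerate("AABBCCCCCC")]
print([it["id"] for it in select_for_audit(items, rng)])
{{/code}}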

== 6. What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Contributors/experts validate
5. New verdict version published
6. Old versions remain accessible

**Version History**:

* Every verdict has a complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions
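
A minimal sketch of append-only verdict versioning, using an invented VerdictVersion structure; it shows how "as of date X, what did we know?" can be answered directly from the history:

{{code language="python"}}
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class VerdictVersion:
    version: int
    published: date
    verdict: str
    reason: str            # why this re-evaluation happened
    evidence_added: tuple  # new evidence linked to this version

history = (
    VerdictVersion(1, date(2024, 3, 1), "likely true",
                   "initial evaluation", ("study-001",)),
    VerdictVersion(2, date(2025, 1, 15), "uncertain",
                   "new contradicting meta-analysis", ("study-017",)),
)

def verdict_as_of(history, when):
    """Return the latest version published on or before `when`."""
    return max((v for v in history if v.published <= when),
               key=lambda v: v.published)

print(verdict_as_of(history, date(2024, 6, 1)).verdict)  # "likely true"
{{/code}}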

== 7. Who can submit claims to FactHarbor? ==

**Anyone**, even without a login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if it passes
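
Steps 3 and 4 can be illustrated with a small normalize-then-hash sketch. Exact-match hashing is a deliberate simplification; real deduplication would also need semantic matching of paraphrased claims.

{{code language="python"}}
import hashlib
import re

def normalize(claim: str) -> str:
    """Lowercase, collapse whitespace, drop trailing punctuation."""
    claim = re.sub(r"\s+", " ", claim.strip().lower())
    return claim.rstrip(".!?")

def claim_key(claim: str) -> str:
    return hashlib.sha256(normalize(claim).encode("utf-8")).hexdigest()

existing_keys = {claim_key("Coffee is healthy.")}
incoming = "  Coffee   is healthy  "
if claim_key(incoming) in existing_keys:
    print("duplicate: attach the submission to the existing claim")
else:
    print("new claim: continue to risk tiering and scenario generation")
{{/code}}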

== 8. What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and the publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "AKEL-Generated" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Trusted Contributors can override
* Risk tiers reviewed based on audit outcomes
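
A sketch of tier suggestion with a human override hook, using invented keyword lists; real tiering would rely on domain classifiers and impact estimates rather than literal keyword matching:

{{code language="python"}}
TIER_KEYWORDS = {
    "A": ("vaccine", "dosage", "election", "lawsuit", "recall"),
    "B": ("tax policy", "causes", "climate", "immigration"),
}

def suggest_tier(claim: str) -> str:
    """AKEL's suggestion: first matching tier, else low-risk default."""
    text = claim.lower()
    for tier in ("A", "B"):
        if any(keyword in text for keyword in TIER_KEYWORDS[tier]):
            return tier
    return "C"  # definitions, established facts, historical data

def effective_tier(claim: str, override: str = "") -> str:
    """Moderators and Trusted Contributors can override the suggestion."""
    return override or suggest_tier(claim)

print(suggest_tier("The vaccine dosage guidance changed in 2024"))  # "A"
print(effective_tier("Water boils at 100 C at sea level"))          # "C"
{{/code}}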

== 9. How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds signed bundle (see the signing sketch below)
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
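
A sketch of the signing and validation steps (2 and 4 above) using Ed25519 from the third-party cryptography package. The bundle fields and single-key setup are invented for illustration; the actual exchange format belongs to the Federation & Decentralization specification linked under Related Pages.

{{code language="python"}}
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()  # local node's key

bundle = {
    "node": "health.factharbor.example",  # hypothetical node name
    "versions": [{"claim_id": "c-123", "verdict": "likely true",
                  "parent_version": 3}],  # lineage for validation
}
payload = json.dumps(bundle, sort_keys=True).encode("utf-8")
signature = signing_key.sign(payload)

# Receiving node: verify the signature against the sender's public key
# before deciding whether to accept, review, or branch the versions.
try:
    signing_key.public_key().verify(signature, payload)
    print("signature valid: continue with lineage and trust checks")
except InvalidSignature:
    print("reject bundle: invalid signature")
{{/code}}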

== 10. Can experts disagree in FactHarbor? ==

**Yes, and that's a feature, not a bug**:

**Multiple Scenarios**:

* Trusted Contributors can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Trusted Contributor reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

== 11. What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable (see the sketch below)

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A marked as highest risk
* Audit sampling catches errors
* Community can flag issues
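
One of these checks, reference accessibility, can be sketched in a few lines of standard-library Python; a full structural-integrity gate would additionally verify that cited passages actually appear in the fetched sources.

{{code language="python"}}
import urllib.request

def reference_accessible(url: str, timeout: float = 5.0) -> bool:
    """True if the cited URL resolves without a client/server error."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except OSError:  # DNS failure, timeout, HTTP error, etc.
        return False

citations = ["https://example.org/study-001"]
if all(reference_accessible(u) for u in citations):
    print("gate passed: all references resolve")
else:
    print("gate failed: inaccessible reference, do not publish")
{{/code}}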

== 12. How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: the **public benefit** mission takes priority over profit.

== 13. Related Pages ==

* [[Requirements (Roles)>>Test.FactHarbor pre11 V0\.9\.70.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Test.FactHarbor pre11 V0\.9\.70.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Test.FactHarbor pre11 V0\.9\.70.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>Test.FactHarbor pre11 V0\.9\.70.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Test.FactHarbor V0\.9\.88 ex 2 new Org Pages.Organisation.Core Problems FactHarbor Solves.WebHome]]

== 14. Glossary / Key Terms ==

=== Phase 0 vs POC v1 ===

These terms refer to the same stage of FactHarbor's development:

* **Phase 0** (organisational perspective): the pre-alpha stage with founder-led governance
* **POC v1** (technical perspective): the Proof of Concept demonstrating AI-generated publication

Both describe the current development stage, in which the platform is being built and initially validated.

=== Beta 0 ===

The next development stage after the POC, featuring:

* External testers
* Basic federation experiments
* Enhanced automation

=== Release 1.0 ===

The first public release, featuring:

* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure