= Frequently Asked Questions (FAQ) =
Common questions about FactHarbor's design, functionality, and approach.
== 1. How do claims get evaluated in FactHarbor? ==
=== 1.1 User Submission ===
**Who**: Anyone can submit claims
**Process**: User submits claim text + source URLs
**Speed**: Typically <20 seconds to verdict
=== 1.2 AKEL Processing (Automated) ===
**What**: The AI Knowledge Extraction Layer (AKEL) analyzes the claim
**Steps** (sketched below):
* Parse claim into testable components
* Extract evidence from provided sources
* Score source credibility
* Generate verdict with confidence level
* Assign risk tier
* Publish automatically
**Authority**: AKEL makes all content decisions
**Scale**: Can process millions of claims
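The order of these steps can be pictured as a small pipeline. The following is a minimal, illustrative sketch only: every function name, field, and score in it is a placeholder assumption, not the actual AKEL API.
{{code language="python"}}
# Minimal sketch of the pipeline order described above. Every function name,
# field, and score here is a placeholder assumption, not the actual AKEL API.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    likelihood: float                    # a graded value, not a binary label
    confidence: float                    # how certain the system is
    risk_tier: str                       # "A", "B", or "C"
    sources: list[str] = field(default_factory=list)

def process_claim(text: str, source_urls: list[str]) -> Verdict:
    components = [text]                            # 1. parse into testable parts (stubbed)
    evidence = {url: 0.8 for url in source_urls}   # 2.+3. extract and score sources (stubbed)
    likelihood = sum(evidence.values()) / max(len(evidence), 1)  # 4. verdict (stubbed)
    verdict = Verdict(components[0], likelihood, confidence=0.7,
                      risk_tier="C", sources=source_urls)        # 5. assign tier (stubbed)
    print(f"published automatically: {verdict}")   # 6. no human approval step
    return verdict

process_claim("Coffee is healthy", ["https://example.org/study"])
{{/code}}
The shape is the point: the pipeline runs end to end and publishes without a human in the loop.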
=== 1.3 Continuous Improvement (Human Role) ===
**What**: Humans improve the system, not individual verdicts
**Activities**:
* Monitor aggregate performance metrics
* Identify systematic errors
* Propose algorithm improvements
* Update policies and rules
* Test changes before deployment
**NOT**: Reviewing individual claims for approval
**Focus**: Fix the system, not the data
=== 1.4 Exception Handling ===
**When AKEL flags for review** (see the sketch after this list):
* Low-confidence verdict
* Detected manipulation attempt
* Unusual pattern requiring attention
**Moderator role**:
* Reviews flagged items
* Takes action on abuse/manipulation
* Proposes detection improvements
* Does NOT override verdicts
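A hypothetical version of the flagging rule makes the division of labor concrete. The threshold values and score names below are assumptions, not documented FactHarbor parameters.
{{code language="python"}}
# Hypothetical routing rule for the exception path. The threshold values and
# score names are assumptions, not documented FactHarbor parameters.
LOW_CONFIDENCE = 0.5

def needs_review(confidence: float, manipulation_score: float,
                 anomaly_score: float) -> bool:
    return (confidence < LOW_CONFIDENCE      # low-confidence verdict
            or manipulation_score > 0.9      # detected manipulation attempt
            or anomaly_score > 0.9)          # unusual pattern

# The verdict is published either way; review happens alongside, not before,
# and moderators act on abuse rather than rewriting the verdict itself.
if needs_review(confidence=0.3, manipulation_score=0.1, anomaly_score=0.2):
    print("flagged for the moderator queue")
{{/code}}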
=== 1.5 Why This Model Works ===
**Scale**: Automation handles volume humans cannot
**Consistency**: Same rules applied uniformly
**Transparency**: Algorithms can be audited
**Improvement**: Systematic fixes benefit all claims
== 2. What prevents FactHarbor from becoming another echo chamber? ==
FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
**Mandatory Contradiction Search** (one possible check is sketched below):
* AI must actively search for counter-evidence, not just confirmations
* System checks for echo-chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum
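One plausible way such a bubble check could be implemented (an assumption for illustration, not FactHarbor's documented algorithm) is to score how evenly a verdict's sources spread across perspective clusters and flag verdicts whose sources all come from one cluster:
{{code language="python"}}
# One way a bubble check could work (an assumption, not the documented
# algorithm): normalized Shannon entropy over the perspective clusters
# that a verdict's sources belong to.
from collections import Counter
from math import log

def diversity(source_clusters: list[str]) -> float:
    """0.0 = all sources in one cluster, 1.0 = evenly spread."""
    counts = Counter(source_clusters)
    total = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(len(counts))

# All sources from one cluster -> flagged as a potential echo chamber.
print(diversity(["left", "left", "left"]))          # 0.0
print(diversity(["left", "right", "scientific"]))   # 1.0
{{/code}}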
**Multiple Scenarios**:
* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive
**Transparent Reasoning**:
* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden
**Audit System**:
* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives
**Federation**:
* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection
== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==
This is exactly what FactHarbor is designed for:
**Scenarios capture contexts** (see the data sketch at the end of this section):
* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
**Truth Landscape**:
* Shows all scenarios and their verdicts side-by-side
* Users see //why// interpretations differ
* No forced consensus when legitimate disagreement exists
**Explicit Assumptions**:
* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden
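As a rough sketch of how one claim fans out into scenario-specific verdicts: the schema below is simplified, and its field names and likelihood numbers are illustrative, not FactHarbor's actual data.
{{code language="python"}}
# Sketch of how one claim fans out into scenario-specific verdicts. The schema
# and the likelihood numbers are illustrative, not FactHarbor's actual data.
from dataclasses import dataclass

@dataclass
class Scenario:
    definition: str      # what "healthy" means in this scenario
    population: str
    consumption: str
    horizon: str
    likelihood: float    # the verdict for this scenario only

claim = "Coffee is healthy"
landscape = [
    Scenario("reduces type 2 diabetes risk", "healthy adults",
             "3 cups/day", "long-term", likelihood=0.7),
    Scenario("is safe in high doses", "pregnant women",
             "5 cups/day", "short-term", likelihood=0.2),
]
# The truth landscape: one claim, several verdicts, assumptions explicit.
for s in landscape:
    print(f"{claim} ({s.definition}; {s.population}): {s.likelihood:.0%}")
{{/code}}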
== 4. What makes FactHarbor different from traditional fact-checking sites? ==
**Traditional Fact-Checking**:
* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by the fact-checker
* Often hides legitimate contextual differences
* Limited ability to show //why// people disagree
**FactHarbor**:
* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth
== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==
**Quality Gates**:
* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns
**Audit System**:
* Stratified sampling catches manipulation patterns
* Trusted Contributor auditors validate AI research quality
* Failed audits trigger immediate review
**Transparency**:
* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved
**Moderation**:
* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden
**Federation**:
* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence
== 6. What happens when new evidence contradicts an existing verdict? ==
FactHarbor is designed for evolving knowledge:
**Automatic Re-evaluation** (sketched below):
1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Contributors/experts validate
5. New verdict version published
6. Old versions remain accessible
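A minimal sketch of the append-only version model these steps imply; the field and function names are illustrative, not the actual schema.
{{code language="python"}}
# Minimal sketch of the append-only version model implied by steps 1-6.
# Field and function names are illustrative, not the actual schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class VerdictVersion:
    published: date
    likelihood: float
    reason: str                 # why this re-evaluation happened
    new_evidence: list[str]     # what triggered it

history: list[VerdictVersion] = []   # old versions are never deleted

def reevaluate(likelihood: float, reason: str, evidence: list[str]) -> None:
    history.append(VerdictVersion(date.today(), likelihood, reason, evidence))

def as_of(when: date) -> VerdictVersion | None:
    """Answer 'as of date X, what did we know?'"""
    known = [v for v in history if v.published <= when]
    return known[-1] if known else None

reevaluate(0.7, "initial verdict", ["study-1"])
reevaluate(0.4, "new meta-analysis contradicts study-1", ["study-2"])
print(as_of(date.today()).likelihood)   # 0.4; the 0.7 version stays readable
{{/code}}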
**Version History**:
* Every verdict has a complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved
**Transparent Updates**:
* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden
**User Notifications**:
* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions
== 7. Who can submit claims to FactHarbor? ==
**Anyone**, even without login:
**Readers** (no login required):
* Browse and search all published content
* Submit text for analysis
* New claims are added automatically unless duplicates exist
* System deduplicates and normalizes
**Contributors** (logged in):
* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions
**Workflow** (deduplication and normalization sketched below):
1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if all gates pass
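Steps 3-4 can be illustrated with a deliberately naive sketch; real deduplication would use semantic matching rather than this whitespace/case normalization.
{{code language="python"}}
# Hypothetical sketch of steps 3-4 (dedup + normalization). Real matching
# would use semantic similarity, not this whitespace/case normalization.
existing: dict[str, str] = {}   # normalized text -> claim id

def normalize(text: str) -> str:
    return " ".join(text.lower().split()).rstrip(".!")

def submit(text: str) -> str:
    key = normalize(text)
    if key in existing:                  # duplicate: reuse the existing claim
        return existing[key]
    claim_id = f"claim-{len(existing) + 1}"
    existing[key] = claim_id             # new claim continues the pipeline
    return claim_id

assert submit("Coffee is healthy.") == submit("  coffee IS healthy ")
{{/code}}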
== 8. What are "risk tiers" and why do they matter? ==
Risk tiers determine review requirements and the publication workflow (see the sketch at the end of this section):
**Tier A (High Risk)**:
* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "AKEL-Generated" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong
**Tier B (Medium Risk)**:
* **Domains**: Complex policy, scientific causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced, but lower immediate harm risk
**Tier C (Low Risk)**:
* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy
**Assignment**:
* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Trusted Contributors can override
* Risk tiers are reviewed based on audit outcomes
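The tier table can be read as configuration plus a first-guess classifier. In the sketch below, the keyword lists and the suggestion rule are invented for illustration; AKEL's actual classifier is not specified here, and humans can override the result either way.
{{code language="python"}}
# The tier table above as data, plus a keyword-based first guess. The keyword
# lists and the suggestion rule are assumptions, not AKEL's real classifier.
TIERS = {
    "A": {"audit_rate": (0.30, 0.50), "expert_review_for_akel_status": True},
    "B": {"audit_rate": (0.10, 0.20), "expert_review_for_akel_status": False},
    "C": {"audit_rate": (0.05, 0.10), "expert_review_for_akel_status": False},
}
KEYWORDS = {
    "A": ["vaccine", "election", "drug", "legal"],
    "B": ["policy", "causes", "economy"],
}

def suggest_tier(claim: str) -> str:
    text = claim.lower()
    for tier in ("A", "B"):    # check the highest-risk tier first
        if any(k in text for k in KEYWORDS[tier]):
            return tier
    return "C"                 # low-risk default

print(suggest_tier("The vaccine causes side effects"))  # "A" wins over "B"
{{/code}}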
== 9. How does federation work and why is it important? ==
**Federation Model**:
* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority
**Why Federation Matters**:
* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source
**How Nodes Exchange Data** (sketched below):
1. Local node creates versions
2. Builds a signed bundle
3. Pushes it to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed
**Trust Model**:
* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
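A toy version of the exchange flow, under stated assumptions: real nodes would sign bundles with public-key cryptography rather than the shared-secret HMAC stand-in below, and `receive` simply maps the trust levels above onto the import policy.
{{code language="python"}}
# Toy version of the exchange flow. Real nodes would use public-key
# signatures; the shared-secret HMAC here is only a stand-in.
import hashlib, hmac, json

SECRET = b"demo-signing-key"   # placeholder for the sending node's key

def build_bundle(versions: list[dict]) -> dict:
    payload = json.dumps(versions, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def receive(bundle: dict, sender_trust: str) -> str:
    expected = hmac.new(SECRET, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["signature"]):
        return "reject: bad signature"
    return {"trusted": "auto-import",        # the trust model listed above
            "neutral": "import with review",
            "untrusted": "manual only"}[sender_trust]

bundle = build_bundle([{"claim": "c1", "version": 3}])
print(receive(bundle, "neutral"))   # import with review
{{/code}}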
== 10. Can experts disagree in FactHarbor? ==
**Yes, and that's a feature, not a bug**:
**Multiple Scenarios**:
* Trusted Contributors can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see //why// experts disagree (different definitions, boundaries, evidence weighting)
**Parallel Verdicts**:
* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus
**Transparency**:
* Trusted Contributor reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions
**Federation**:
* Different nodes can reach different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes
== 11. What prevents AI from hallucinating or making up facts? ==
**Multiple Safeguards**:
**Quality Gate 4: Structural Integrity** (a reference check is sketched below):
* Fact-checking against sources
* Unsupported statements (hallucinations) are rejected
* Logic chain must be valid and traceable
* References must be accessible and verifiable
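As one concrete example, the "references must be accessible" check could look like the sketch below. The URL and status handling are illustrative; verifying statements against source text is a much harder problem this sketch does not attempt.
{{code language="python"}}
# Minimal sketch of one structural-integrity check: every cited reference
# must resolve. Illustrative only; the real gate also checks claims against
# source text, which this sketch does not attempt.
from urllib.request import Request, urlopen
from urllib.error import URLError

def reference_accessible(url: str, timeout: float = 5.0) -> bool:
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (URLError, ValueError):
        return False

refs = ["https://example.org/study"]
if not all(reference_accessible(u) for u in refs):
    print("gate failed: inaccessible reference")  # block publication
{{/code}}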
**Evidence Requirements**:
* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored
**Audit System**:
* Human auditors check AI-generated content
* Hallucinations are caught and fed back into training
* Patterns of errors trigger system improvements
**Transparency**:
* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled
**Human Oversight**:
* Tier A marked as highest risk
* Audit sampling catches errors
* Community can flag issues
== 12. How does FactHarbor make money / is it sustainable? ==
[ToDo: Business model and sustainability to be defined]
Potential models under consideration:
* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services
Core principle: the **public benefit** mission takes priority over profit.
== 13. Related Pages ==
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]
== 14. Glossary / Key Terms ==
=== Phase 0 vs. POC v1 ===
These terms refer to the same stage of FactHarbor's development:
* **Phase 0** - Organisational perspective: pre-alpha stage with founder-led governance
* **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication
Both describe the current development stage, in which the platform is being built and initially validated.
=== Beta 0 ===
The next development stage after the POC, featuring:
* External testers
* Basic federation experiments
* Enhanced automation
=== Release 1.0 ===
The first public release, featuring:
* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure