= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

----

== How do facts get input into the system? ==

FactHarbor uses a hybrid model:

**1. AI-Generated (scalable)**: The system dynamically researches claims: it extracts claims, generates structured sub-queries, performs a mandatory contradiction search (actively seeking counter-evidence, not just confirmations), and runs quality gates. Results are published with a clear "AI-Generated" label.

**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content, especially in high-risk domains (medical, legal). This content receives "Human-Reviewed" status and higher trust.

**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% of high-risk content, 5-10% of low-risk content) in which expert reviews systematically improve AI research quality.

**Why all three matter**:
* AI research handles scale: emerging claims, immediate responses with transparent reasoning
* Expert authoring provides authoritative grounding for critical domains
* Audit feedback ensures AI quality improves based on expert validation patterns

Experts can author high-priority content directly, validate and edit AI outputs, or audit samples to improve system-wide performance, focusing their time where expertise matters most.

POC v1 demonstrates the AI research pipeline (fully automated, with transparent reasoning); the full system supports all three pathways.
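
As a minimal sketch, the three pathways might surface as provenance labels on published content; the class and field names here are illustrative assumptions, not FactHarbor's actual schema:

{{code language="python"}}
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    AI_GENERATED = "AI-Generated"      # pathway 1: automated research
    HUMAN_REVIEWED = "Human-Reviewed"  # pathway 2: expert-authored/validated

@dataclass
class PublishedItem:
    claim_id: str
    provenance: Provenance
    audited: bool = False  # set once a sampling audit (pathway 3) covers it

    def label(self) -> str:
        # The label readers see next to a verdict
        return self.provenance.value + (" (audited)" if self.audited else "")
{{/code}}
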
| 25 | |||
| 26 | ---- | ||
| 27 | |||
| 28 | == What prevents FactHarbor from becoming another echo chamber? == | ||
| 29 | |||
| 30 | FactHarbor includes multiple safeguards against echo chambers and filter bubbles: | ||
| 31 | |||
| 32 | **Mandatory Contradiction Search**: | ||
| 33 | * AI must actively search for counter-evidence, not just confirmations | ||
| 34 | * System checks for echo chamber patterns in source clusters | ||
| 35 | * Flags tribal or ideological source clustering | ||
| 36 | * Requires diverse perspectives across political/ideological spectrum | ||
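
The source-cluster check could look roughly like this; the field names (such as "leaning") and the thresholds are assumptions for illustration, not the production logic:

{{code language="python"}}
from collections import Counter

def flag_source_clustering(sources, max_share=0.6, min_perspectives=2):
    """Flag evidence sets whose sources cluster around one outlet or
    ideological leaning. Each source is a dict like
    {"outlet": "...", "leaning": "..."}; thresholds are illustrative."""
    if not sources:
        return ["no sources"]
    flags = []
    leanings = Counter(s["leaning"] for s in sources)
    if len(leanings) < min_perspectives:
        flags.append("too few distinct perspectives")
    top_share = max(leanings.values()) / len(sources)
    if top_share > max_share:
        flags.append(f"{top_share:.0%} of sources share one leaning")
    return flags
{{/code}}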

**Multiple Scenarios**:
* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:
* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:
* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:
* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

----

== How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:
* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example (see the sketch after this list): "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
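
A minimal sketch of how one claim can carry several scenario verdicts, assuming a hypothetical schema; the likelihood numbers are placeholders, not real findings:

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Scenario:
    claim: str
    definition: str                  # what "healthy" means in this scenario
    population: str
    consumption: str
    horizon: str
    likelihood: tuple[float, float]  # verdict as a likelihood range

# Two scenarios for the same claim; shown side-by-side they form the
# "truth landscape" described below. All values are placeholders.
truth_landscape = [
    Scenario("Coffee is healthy", "reduces disease risk",
             "healthy adults", "1-3 cups/day", "long-term", (0.6, 0.8)),
    Scenario("Coffee is healthy", "safe to consume",
             "pregnant women", "5 cups/day", "short-term", (0.1, 0.3)),
]
{{/code}}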

**Truth Landscape**:
* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:
* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

----

== What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:
* Fixed categorical verdicts: True / Mostly True / False
* Single interpretation chosen by the fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:
* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not fixed labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

----

== How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:
* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:
* Stratified sampling catches manipulation patterns (see the sketch after this list)
* Expert auditors validate AI research quality
* Failed audits trigger immediate review
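
A minimal sketch of stratified audit sampling, using illustrative mid-range rates drawn from the recommendations in the risk-tier section below:

{{code language="python"}}
import random

# Illustrative mid-range rates: Tier A 30-50%, Tier B 10-20%, Tier C 5-10%
AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.07}

def sample_for_audit(items_by_tier, seed=None):
    """Draw an audit sample from each risk tier at that tier's rate.
    `items_by_tier` maps a tier letter to a list of published items."""
    rng = random.Random(seed)
    sample = []
    for tier, items in items_by_tier.items():
        k = max(1, round(AUDIT_RATES[tier] * len(items))) if items else 0
        sample.extend(rng.sample(items, k))
    return sample
{{/code}}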

**Transparency**:
* All reasoning chains are visible
* Evidence sources are traceable
* AKEL (AI Knowledge Extraction Layer) involvement clearly labeled
* Version history preserved

**Moderation**:
* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden

**Federation**:
* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

----

== What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:
1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Reviewers/experts validate
5. New verdict version published
6. Old versions remain accessible

**Version History**:
* Every verdict has a complete history
* Users can ask "as of date X, what did we know?" (see the sketch after this list)
* Timeline shows how understanding evolved
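
A minimal sketch of the "as of date X" lookup over an append-only version history; the field names are illustrative assumptions:

{{code language="python"}}
from dataclasses import dataclass
from datetime import date

@dataclass
class VerdictVersion:
    version: int
    published: date
    likelihood: tuple[float, float]  # verdict as a likelihood range
    reason: str                      # why this re-evaluation happened

def verdict_as_of(history: list[VerdictVersion], when: date):
    """Return the latest version published on or before `when`;
    older versions stay in `history` and remain accessible."""
    known = [v for v in history if v.published <= when]
    return max(known, key=lambda v: v.version) if known else None
{{/code}}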

**Transparent Updates**:
* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:
* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions

----

== Who can submit claims to FactHarbor? ==

**Anyone**, even without login:

**Readers** (no login required):
* Browse and search all published content
* Submit text for analysis
* New claims are added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):
* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:
1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text (see the sketch after this list)
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if the gates pass
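
Deduplication (steps 3-4) might hinge on normalizing the claim text to a stable key, as in this sketch; the normalization rules shown are assumptions, and the real pipeline is more involved:

{{code language="python"}}
import hashlib
import re

def normalize(claim: str) -> str:
    """Lowercase, collapse whitespace, strip trailing punctuation."""
    text = re.sub(r"\s+", " ", claim.strip().lower())
    return text.rstrip(".!?")

def dedup_key(claim: str) -> str:
    """Stable key for detecting duplicate claims."""
    return hashlib.sha256(normalize(claim).encode("utf-8")).hexdigest()

# Trivially different phrasings collapse to the same key:
assert dedup_key("Coffee is healthy.") == dedup_key("coffee  is healthy")
{{/code}}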

----

== What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and the publication workflow:

**Tier A (High Risk)**:
* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review is required for "Human-Reviewed" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:
* **Domains**: Complex policy, scientific causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced, but with lower immediate harm risk

**Tier C (Low Risk)**:
* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:
* AKEL suggests a tier based on domain, keywords, and impact (see the sketch after this list)
* Moderators and Experts can override
* Risk tiers are reviewed based on audit outcomes
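
A toy sketch of how AKEL's tier suggestion could work; the keyword lists are invented for illustration, and the real classifier also weighs domain and impact:

{{code language="python"}}
# Illustrative keyword lists only; Moderators and Experts can override.
TIER_A_TERMS = {"vaccine", "dosage", "election", "lawsuit", "recall"}
TIER_B_TERMS = {"policy", "causes", "economy", "climate"}

def suggest_risk_tier(claim: str) -> str:
    words = set(claim.lower().split())
    if words & TIER_A_TERMS:
        return "A"  # high risk: expert review for "Human-Reviewed" status
    if words & TIER_B_TERMS:
        return "B"  # medium risk: publish immediately, clearly labeled
    return "C"      # low risk: AI publication by default
{{/code}}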

----

== How does federation work and why is it important? ==

**Federation Model**:
* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL instance, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority

**Why Federation Matters**:
* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:
1. Local node creates versions
2. Builds a signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed

**Trust Model** (see the sketch after this list):
* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
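
A minimal sketch of steps 2-4 plus the trust routing, using Ed25519 signatures from the Python `cryptography` package; the bundle format and trust policy shown are assumptions, and lineage validation is omitted:

{{code language="python"}}
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def build_signed_bundle(versions: list[dict], key: Ed25519PrivateKey) -> dict:
    payload = json.dumps(versions, sort_keys=True).encode("utf-8")
    return {"payload": payload, "signature": key.sign(payload)}

def receive_bundle(bundle: dict, sender_key: Ed25519PublicKey,
                   trust: str) -> str:
    """Validate the signature, then route by the sender's trust level."""
    try:
        sender_key.verify(bundle["signature"], bundle["payload"])
    except InvalidSignature:
        return "reject"  # tampered or wrongly attributed bundle
    return {"trusted": "auto-import",
            "neutral": "import-with-review"}.get(trust, "manual-only")
{{/code}}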

----

== Can experts disagree in FactHarbor? ==

**Yes, and that's a feature, not a bug**:

**Multiple Scenarios**:
* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:
* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:
* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:
* Different nodes can reach different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

----

== What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:
* Generated statements are fact-checked against their sources
* Unsupported ("hallucinated") content is rejected
* Logic chain must be valid and traceable
* References must be accessible and verifiable

**Evidence Requirements**:
* Primary sources required
* Citations must be complete
* Sources must be accessible (see the sketch after this list)
* Reliability scored
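
One piece of such a check might simply verify that a cited URL still resolves, as in this sketch; this is a hypothetical helper, not the actual gate implementation:

{{code language="python"}}
import urllib.request
from urllib.error import URLError

def reference_accessible(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited source answers an HTTP HEAD request."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except URLError:
        return False
{{/code}}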

**Audit System**:
* Human auditors check AI-generated content
* Hallucinations are caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:
* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:
* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

----

== How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:
* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: the **public benefit** mission takes priority over profit.

----

== Related Pages ==

* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]