= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

----

== How do facts get input into the system? ==

FactHarbor uses a hybrid model combining three complementary approaches:

=== 1. AI-Generated Content (Scalable) ===

**What**: The system dynamically researches claims using AKEL (AI Knowledge Extraction Layer)

**Process**:

* Extracts claims from submitted text
* Generates structured sub-queries
* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
* Runs automated quality gates
* Publishes with clear "AI-Generated" labels

**Publication**: Mode 2 (public, AI-labeled) when quality gates pass

**Purpose**: Handles scale, giving emerging claims immediate responses with transparent reasoning
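
For a concrete picture of this flow, here is a minimal, self-contained sketch; every function name and the naive stub logic are hypothetical illustrations, not the actual AKEL implementation:

{{code language="python"}}
# Minimal sketch of the AKEL flow described above.
# All names and the stub logic are hypothetical illustrations.

def extract_claims(text):
    # Stand-in: treat each sentence as one candidate claim.
    return [s.strip() for s in text.split(".") if s.strip()]

def gather_evidence(claim):
    # Stand-in for structured sub-queries plus the mandatory
    # contradiction search (supporting AND opposing sources).
    return [{"stance": "supports", "url": "https://example.org/a"},
            {"stance": "contradicts", "url": "https://example.org/b"}]

def passes_quality_gates(evidence):
    # Gate sketch: require sources on both sides of the claim.
    return {"supports", "contradicts"} <= {e["stance"] for e in evidence}

def akel_process(text):
    published = []
    for claim in extract_claims(text):
        evidence = gather_evidence(claim)
        if passes_quality_gates(evidence):
            published.append({"claim": claim,
                              "evidence": evidence,
                              "label": "AI-Generated"})  # Mode 2
    return published

print(akel_process("Coffee is healthy. Moderate intake may reduce risk."))
{{/code}}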

=== 2. Expert-Authored Content (Authoritative) ===

**What**: Domain experts directly author, edit, and validate content

**Focus**: High-risk domains (medical, legal, safety-critical)

**Publication**: Mode 3 ("Human-Reviewed" status) with expert attribution

**Authority**: Tier A content requires expert approval

**Purpose**: Provides authoritative grounding for critical domains where errors have serious consequences

=== 3. Audit-Improved Quality (Continuous) ===

**What**: Sampling audits where experts review AI-generated content

**Rates**:

* High-risk (Tier A): 30-50% sampling
* Medium-risk (Tier B): 10-20% sampling
* Low-risk (Tier C): 5-10% sampling
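
For illustration only, tier-based sampling could be as simple as the sketch below, which uses the midpoints of the recommended ranges; the constants and names are assumptions, not production values:

{{code language="python"}}
import random

# Hypothetical audit-sampling sketch using midpoints of the
# recommended ranges above (A: 30-50%, B: 10-20%, C: 5-10%).
AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items):
    """Return the subset of published items drawn for expert audit."""
    return [item for item in items
            if random.random() < AUDIT_RATES[item["tier"]]]

batch = [{"id": i, "tier": random.choice("ABC")} for i in range(1000)]
print(len(select_for_audit(batch)), "items sampled for audit")
{{/code}}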

**Impact**: Expert feedback systematically improves AI research quality

**Purpose**: Ensures AI quality evolves based on expert validation patterns

=== Why All Three Matter ===

**Complementary Strengths**:

* **AI research**: Scale and speed for emerging claims
* **Expert authoring**: Authority and precision for critical domains
* **Audit feedback**: Continuous quality improvement

**Expert Time Optimization**:

Experts can choose where to focus their time:

* Author high-priority content directly
* Validate and edit AI-generated outputs
* Audit samples to improve system-wide AI performance

This focuses expert time where domain expertise matters most while leveraging AI for scale.

=== Current Status ===

**POC v1**: Demonstrates the AI research pipeline (fully automated, with transparent reasoning and quality gates)

**Full System**: Will support all three pathways in an integrated workflow

----

== What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum
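
As a toy illustration of the clustering check, the sketch below flags evidence dominated by a single source cluster; the cluster labels and the 70% threshold are invented for the example, not FactHarbor's actual parameters:

{{code language="python"}}
from collections import Counter

# Toy echo-chamber check: flag a verdict whose evidence is dominated
# by one source cluster. Labels and threshold are illustrative only.

def clustering_flag(sources, max_share=0.7):
    clusters = Counter(s["cluster"] for s in sources)
    top_cluster, count = clusters.most_common(1)[0]
    share = count / len(sources)
    return (top_cluster, share) if share > max_share else None

evidence = [{"url": "a", "cluster": "left"},
            {"url": "b", "cluster": "left"},
            {"url": "c", "cluster": "left"},
            {"url": "d", "cluster": "right"}]
print(clustering_flag(evidence))  # ('left', 0.75) -> flagged for review
{{/code}}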

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

----

== How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
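
One way to picture how such context dimensions could be modeled is the sketch below; the fields and likelihood numbers are illustrative assumptions, not actual FactHarbor verdicts:

{{code language="python"}}
from dataclasses import dataclass

# Illustrative scenario model: one claim, several contexts, each with
# its own likelihood range. All values are made up for the example.

@dataclass
class Scenario:
    claim: str
    definition: str
    population: str
    consumption: str
    horizon: str
    likelihood: tuple  # (low, high) range, not a binary verdict

scenarios = [
    Scenario("Coffee is healthy", "reduces disease risk",
             "healthy adults", "1 cup/day", "long-term", (0.6, 0.8)),
    Scenario("Coffee is healthy", "safe during pregnancy",
             "pregnant women", "5 cups/day", "long-term", (0.1, 0.3)),
]
for s in scenarios:
    print(f"{s.population}, {s.consumption}: likelihood {s.likelihood}")
{{/code}}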

**Truth Landscape**:

* Shows all scenarios and their verdicts side by side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

----

== What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by the fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

----

== How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content is published
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

----

== What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Reviewers/experts validate
5. New verdict version published
6. Old versions remain accessible
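
A minimal sketch of this append-only versioning (names are hypothetical; the real workflow includes the reviewer validation step above):

{{code language="python"}}
# Append-only verdict versioning: re-evaluation publishes a new
# version and never deletes old ones. Names are hypothetical.

history = []

def publish_version(likelihood, reason, evidence_ids):
    history.append({
        "version": len(history) + 1,
        "likelihood": likelihood,   # (low, high) range
        "reason": reason,           # why this re-evaluation happened
        "evidence": evidence_ids,   # links to the triggering evidence
    })

publish_version((0.6, 0.8), "initial AI research", ["src-1", "src-2"])
publish_version((0.3, 0.5), "new contradicting study",
                ["src-1", "src-2", "src-3"])

# "As of version 1, what did we know?" Old versions stay accessible.
print(history[0])
{{/code}}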

**Version History**:

* Every verdict has a complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions

----

== Who can submit claims to FactHarbor? ==

**Anyone**, even without logging in:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims are added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if quality gates pass
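
Steps 3 and 4 (duplicate detection and normalization) can be pictured with a toy sketch; the normalization rules and names below are invented for illustration:

{{code language="python"}}
import hashlib

# Toy claim deduplication: normalize the text, hash it, and reuse the
# existing claim on a match. Rules here are illustrative assumptions.

existing = {}  # hash of normalized claim -> claim id

def normalize(claim):
    return " ".join(claim.lower().strip(" .!").split())

def add_claim(claim):
    key = hashlib.sha256(normalize(claim).encode()).hexdigest()
    if key not in existing:
        existing[key] = f"claim-{len(existing) + 1}"
    return existing[key]

print(add_claim("Coffee is healthy."))  # claim-1
print(add_claim("coffee is healthy"))   # claim-1 (duplicate reused)
{{/code}}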

----

== What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "Human-Reviewed" status
* **Audit rate**: 30-50% recommended
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, scientific causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: 10-20% recommended
* **Why**: Nuanced, but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: 5-10% recommended
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Experts can override
* Risk tiers are reviewed based on audit outcomes
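
A toy sketch of how the tier suggestion could work is shown below; the keyword lists are illustrative assumptions, and the real heuristics also weigh domain and impact:

{{code language="python"}}
# Hypothetical tier suggestion by keyword match. Moderators and
# Experts can override the result, as described above.

TIER_KEYWORDS = {
    "A": ["vaccine", "drug", "election", "legal", "safety"],
    "B": ["policy", "causes", "economy", "climate"],
}

def suggest_tier(claim):
    text = claim.lower()
    for tier, words in TIER_KEYWORDS.items():
        if any(w in text for w in words):
            return tier
    return "C"  # default: low risk

print(suggest_tier("This drug cures colds"))  # A
print(suggest_tier("Paris is in France"))     # C
{{/code}}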

----

== How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds a signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed
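
Steps 2 and 4 can be sketched as follows. To stay self-contained the example signs with a shared-secret HMAC; an actual federation of independent nodes would use asymmetric signatures, and all names here are hypothetical:

{{code language="python"}}
import hashlib, hmac, json

# Toy bundle signing/validation. HMAC with a shared secret is used
# only to keep the sketch runnable; real nodes would sign with
# asymmetric keys so receivers never hold the sender's secret.

SECRET = b"illustrative-shared-secret"

def build_bundle(versions):
    payload = json.dumps(versions, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def validate_bundle(bundle):
    expected = hmac.new(SECRET, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

bundle = build_bundle([{"claim": "claim-1", "version": 3}])
print(validate_bundle(bundle))  # True -> accept or branch the versions
{{/code}}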

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
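
This policy is essentially configuration, as in the minimal sketch below (hypothetical names):

{{code language="python"}}
# Trust level -> import behavior, mirroring the list above.
IMPORT_POLICY = {
    "trusted": "auto_import",
    "neutral": "import_with_review",
    "untrusted": "manual_only",
}

def import_action(trust_level):
    # Unknown nodes are treated like untrusted ones.
    return IMPORT_POLICY.get(trust_level, "manual_only")

print(import_action("neutral"))  # import_with_review
{{/code}}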

----

== Can experts disagree in FactHarbor? ==

**Yes, and that's a feature, not a bug**:

**Multiple Scenarios**:

* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can reach different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

----

== What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable
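
The reference part of this gate can be illustrated with a toy check; the data shapes are assumptions made for the example:

{{code language="python"}}
# Toy reference check: every statement must cite at least one source,
# and every citation must resolve to a known source. Shapes are
# illustrative assumptions, not the actual gate implementation.

def unreferenced(statements, sources):
    known = {s["id"] for s in sources}
    return [st for st in statements
            if not st["cites"] or not set(st["cites"]) <= known]

sources = [{"id": "src-1", "url": "https://example.org/study"}]
statements = [{"text": "Study X found Y.", "cites": ["src-1"]},
              {"text": "Everyone agrees.", "cites": []}]
print(unreferenced(statements, sources))  # the uncited statement fails
{{/code}}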

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations are caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

----

== How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: the **public benefit** mission takes priority over profit.

----

== Related Pages ==

* [[Requirements (Roles)>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Archive.FactHarbor V0\.9\.18 copy.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Archive.FactHarbor V0\.9\.18 copy.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Archive.FactHarbor V0\.9\.18 copy.Organisation V0\.9\.18.Core Problems FactHarbor Solves.WebHome]]