= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

----

== How do facts get input into the system? ==

FactHarbor uses a hybrid model:

**~1. AI-Generated (scalable)**: System dynamically researches claims: extracting them, generating structured sub-queries, performing a mandatory contradiction search (actively seeking counter-evidence, not just confirmations), and running quality gates. Published with clear "AI-Generated" labels.

**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content—especially for high-risk domains (medical, legal). These get "Human-Reviewed" status and higher trust.

**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% high-risk, 5-10% low-risk) where expert reviews systematically improve AI research quality.

**Why all three matter**:

* AI research handles scale—emerging claims, immediate responses with transparent reasoning
* Expert authoring provides authoritative grounding for critical domains
* Audit feedback ensures AI quality improves based on expert validation patterns

Experts can author high-priority content directly, validate/edit AI outputs, or audit samples to improve system-wide performance—focusing their time where expertise matters most.

POC v1 demonstrates the AI research pipeline (fully automated with transparent reasoning); the full system supports all three pathways.
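
To make the label handling concrete, here is a minimal sketch of how a published item could carry its pathway and reader-facing label. All names are hypothetical, not FactHarbor's actual schema:

{{code language="python"}}
# Minimal sketch (hypothetical names) of pathway labeling on published content.
from dataclasses import dataclass
from enum import Enum

class Pathway(Enum):
    AI_GENERATED = "AI-Generated"       # scalable automated research
    EXPERT_AUTHORED = "Human-Reviewed"  # authoritative expert authoring
    AUDIT_IMPROVED = "Audit-Improved"   # assumed label for audit-refined content

@dataclass
class PublishedItem:
    claim_id: str
    pathway: Pathway

    def label(self) -> str:
        # Reader-facing label, e.g. "AI-Generated" or "Human-Reviewed".
        return self.pathway.value

print(PublishedItem("claim-001", Pathway.AI_GENERATED).label())
{{/code}}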

----

== What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum (see the sketch below)

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

----

== How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" (sketched as data below) depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)

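A sketch of that fan-out as data, using hypothetical field names and purely illustrative likelihood numbers (the real system derives these from evidence):

{{code language="python"}}
# Sketch: one claim, several scenarios, each with explicit assumptions
# and its own likelihood range instead of a binary verdict.
from dataclasses import dataclass

@dataclass
class Scenario:
    definition: str                  # what "healthy" means here
    population: str                  # who the claim is about
    consumption: str                 # dose / exposure level
    horizon: str                     # short-term vs. long-term
    likelihood: tuple[float, float]  # illustrative placeholder values

coffee = [
    Scenario("reduces disease risk", "adults", "1 cup/day", "long-term", (0.6, 0.8)),
    Scenario("improves mood", "adults", "1 cup/day", "short-term", (0.7, 0.9)),
    Scenario("reduces disease risk", "pregnant women", "5 cups/day", "long-term", (0.1, 0.3)),
]
for s in coffee:
    print(f"{s.definition} | {s.population} | {s.consumption}: {s.likelihood}")
{{/code}}
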
**Truth Landscape**:

* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

----

== What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Categorical verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

----

== How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns (see the sketch below)
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

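A sketch of stratified sampling under assumed rates (midpoints of the tier recommendations given later on this page; names are illustrative):

{{code language="python"}}
# Sketch: each published item is independently sampled at its tier's rate.
import random

AUDIT_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}  # illustrative midpoints

def select_for_audit(items: list[dict], seed: int = 0) -> list[dict]:
    rng = random.Random(seed)  # seeded here only to make the demo repeatable
    return [it for it in items if rng.random() < AUDIT_RATE[it["tier"]]]

published = [{"id": i, "tier": "ABC"[i % 3]} for i in range(12)]
print(select_for_audit(published))  # Tier A items are audited most often
{{/code}}
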
**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

----

== What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Reviewers/experts validate
5. New verdict version published
6. Old versions remain accessible (see the sketch below)

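A sketch (assumed structures) of steps 5-6: each re-evaluation appends a new verdict version, old versions stay readable, and "as of date X" lookups stay possible:

{{code language="python"}}
# Sketch: append-only verdict versions with point-in-time lookup.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VerdictVersion:
    published: date
    likelihood: tuple[float, float]  # illustrative range values
    reason: str                      # why this re-evaluation happened

@dataclass
class Verdict:
    versions: list[VerdictVersion] = field(default_factory=list)

    def update(self, v: VerdictVersion) -> None:
        self.versions.append(v)      # old versions are never deleted

    def as_of(self, when: date) -> VerdictVersion | None:
        # Versions are appended chronologically, so the last match wins.
        past = [v for v in self.versions if v.published <= when]
        return past[-1] if past else None

v = Verdict()
v.update(VerdictVersion(date(2024, 1, 1), (0.7, 0.9), "initial AI research"))
v.update(VerdictVersion(date(2025, 3, 1), (0.4, 0.6), "new contradicting study"))
print(v.as_of(date(2024, 6, 1)).reason)  # -> initial AI research
{{/code}}
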
**Version History**:

* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions

----

== Who can submit claims to FactHarbor? ==

**Anyone** - even without login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if all gates pass (see the sketch below)

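An end-to-end sketch of this workflow; every function here is a hypothetical, stubbed stand-in for an AKEL component, not a real API:

{{code language="python"}}
# Sketch: submission pipeline with trivial stub implementations.

def extract_claims(text: str) -> list[str]:          # step 2 (stub)
    return [s.strip() for s in text.split(".") if s.strip()]

def normalize(claim: str) -> str:                    # step 4 (stub)
    return claim.lower().rstrip("!?")

def passes_quality_gates(claim: str) -> bool:        # step 7 (stub)
    return len(claim) > 10                           # placeholder check

def process_submission(text: str, known: set[str]) -> list[dict]:
    published = []
    for raw in extract_claims(text):
        claim = normalize(raw)                       # dedupe on normalized form
        if claim in known:                           # step 3
            continue
        known.add(claim)
        tier = "A" if "vaccine" in claim else "C"    # step 5: crude tiering
        if passes_quality_gates(claim):              # steps 6-7 condensed
            published.append({"claim": claim, "tier": tier,
                              "label": "AI-Generated"})  # step 8
    return published

print(process_submission("Coffee is healthy. Short.", set()))
{{/code}}
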
----

== What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "Human-Reviewed" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, scientific causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced, but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publishes by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests a tier based on domain, keywords, and impact (see the sketch below)
* Moderators and Experts can override
* Risk tiers are reviewed based on audit outcomes

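A sketch of tier suggestion; the keyword lists are invented placeholders for AKEL's real weighting of domain, keywords, and impact:

{{code language="python"}}
# Sketch: keyword-based tier suggestion with the recommended audit ranges.

TIER_KEYWORDS = {
    "A": ["medical", "vaccine", "election", "legal", "safety"],
    "B": ["policy", "causes", "economy"],
}
RECOMMENDED_AUDIT = {"A": (0.30, 0.50), "B": (0.10, 0.20), "C": (0.05, 0.10)}

def suggest_tier(claim: str) -> str:
    text = claim.lower()
    for tier, words in TIER_KEYWORDS.items():
        if any(w in text for w in words):
            return tier
    return "C"  # default low risk; moderators/experts may override

tier = suggest_tier("The new vaccine causes side effects")
print(tier, RECOMMENDED_AUDIT[tier])  # -> A (0.3, 0.5)
{{/code}}
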
----

== How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage (see the sketch below)
5. Accept or branch versions
6. Local re-evaluation if needed

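A sketch of steps 2 and 4; real federation would use public-key signatures, but HMAC with a shared demo key keeps the example self-contained:

{{code language="python"}}
# Sketch: build a signed bundle, then validate it on the receiving node.
import hashlib, hmac, json

NODE_KEY = b"shared-demo-key"  # placeholder; not a real trust anchor

def build_bundle(versions: list[dict]) -> dict:
    payload = json.dumps(versions, sort_keys=True)
    sig = hmac.new(NODE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def validate_bundle(bundle: dict) -> bool:
    expected = hmac.new(NODE_KEY, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

bundle = build_bundle([{"claim": "c1", "version": 3, "parent": 2}])  # lineage
print(validate_bundle(bundle))  # True; any tampering with payload fails
{{/code}}
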
**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only (see the sketch below)

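The same policy as a small dispatch sketch (names illustrative):

{{code language="python"}}
# Sketch: import action chosen by the sending node's trust level.
from enum import Enum

class Trust(Enum):
    TRUSTED = "trusted"
    NEUTRAL = "neutral"
    UNTRUSTED = "untrusted"

def import_action(sender: Trust) -> str:
    return {
        Trust.TRUSTED: "auto-import",
        Trust.NEUTRAL: "import with review",
        Trust.UNTRUSTED: "manual import only",
    }[sender]

print(import_action(Trust.NEUTRAL))  # -> import with review
{{/code}}
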
----

== Can experts disagree in FactHarbor? ==

**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:

* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

----

== What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No unsupported statements (hallucinations) allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable (see the sketch below)

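A sketch of the reference-accessibility check only (stdlib urllib; a real gate would also verify that cited passages actually appear in the source):

{{code language="python"}}
# Sketch: a reference passes only if its URL resolves without an error status.
import urllib.request

def reference_accessible(url: str, timeout: float = 5.0) -> bool:
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:  # network errors and HTTP errors both fail the gate
        return False

refs = ["https://example.org/study"]  # placeholder citation list
print(all(reference_accessible(u) for u in refs))
{{/code}}
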
**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

----

== How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: the **public benefit** mission takes priority over profit.

----

== Related Pages ==

* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]