Changes for page FAQ
Last modified by Robert Schaub on 2026/02/08 21:20
From version 2.3
edited by Robert Schaub
on 2025/12/24 20:30
Change comment:
Update document after refactoring.
Summary: Page properties (1 modified, 0 added, 0 removed)
Details: changes to the page's Content property
**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)

**Process**:

* Extracts claims from submitted text
* Generates structured sub-queries
* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)

...

**What**: Sampling audits where experts review AI-generated content

**Rates**:

* High-risk (Tier A): 30-50% sampling
* Medium-risk (Tier B): 10-20% sampling
* Low-risk (Tier C): 5-10% sampling

...

=== 1.4 Why All Three Matter ===

**Complementary Strengths**:

* **AI research**: Scale and speed for emerging claims
* **Expert authoring**: Authority and precision for critical domains
* **Audit feedback**: Continuous quality improvement

**Expert Time Optimization**:

Experts can choose where to focus their time:

* Author high-priority content directly
* Validate and edit AI-generated outputs
* Audit samples to improve system-wide AI performance

...

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across political/ideological spectrum

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
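
The coffee example above can be sketched as data. This is an illustrative sketch only, with invented names, not FactHarbor's actual schema: each scenario fixes its own assumptions, and the verdict (a likelihood range rather than a binary label) attaches to the scenario, not to the bare claim.

```python
# Hypothetical sketch of context-dependent verdicts; all names and numbers
# are invented for illustration, not FactHarbor's real data model.
from dataclasses import dataclass

@dataclass
class Scenario:
    claim: str
    assumptions: dict   # explicit definition, population, dose, time horizon
    likelihood: tuple   # (low, high) range instead of True/False

truth_landscape = [
    Scenario("Coffee is healthy",
             {"healthy": "reduces disease risk", "population": "adults",
              "consumption": "1 cup/day", "horizon": "long-term"},
             likelihood=(0.6, 0.8)),   # invented numbers
    Scenario("Coffee is healthy",
             {"healthy": "reduces disease risk", "population": "pregnant women",
              "consumption": "5 cups/day", "horizon": "long-term"},
             likelihood=(0.1, 0.3)),   # invented numbers
]

# Same claim, different verdicts: the disagreement lives in the assumptions.
verdicts = {s.assumptions["population"]: s.likelihood for s in truth_landscape}
```

Laid out side by side like this, changing an assumption visibly changes the verdict, which is the point of the truth landscape described next.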

**Truth Landscape**:

* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

== 4. What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit

...

* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

== 6. What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts

...

6. Old versions remain accessible

**Version History**:

* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions

== 7. Who can submit claims to FactHarbor? ==

**Anyone** - even without login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates

...

7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if passes

== 8. What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings, expert review required for "Human-Reviewed" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests tier based on domain, keywords, impact
* Moderators and Experts can override
* Risk tiers reviewed based on audit outcomes

== 9. How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has own database, AKEL, governance
* Nodes exchange claims, scenarios, evidence, verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users

...

* **Trust diversity**: Multiple perspectives, not single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes

...

6. Local re-evaluation if needed

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only

== 10. Can experts disagree in FactHarbor? ==

**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:

* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

== 11. What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

== 12. How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations

...

== 13. Related Pages ==

* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Archive.FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Archive.FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>Archive.FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Archive.FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]

== 20. Glossary / Key Terms ==

=== Phase 0 vs POC v1 ===

...

=== Beta 0 ===

The next development stage after POC, featuring:

* External testers
* Basic federation experiments
* Enhanced automation

=== Release 1.0 ===

The first public release featuring:

* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure
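
To make the tiered audit rates from question 8 concrete, here is a minimal sampling sketch. Only the percentage ranges come from the FAQ above; the function, table, and variable names are hypothetical, not part of FactHarbor.

```python
# Hypothetical sketch of stratified audit sampling per risk tier (question 8).
# The rate ranges mirror the FAQ's recommendations; everything else is invented.
import random

AUDIT_RATES = {
    "A": (0.30, 0.50),  # high risk: medical, legal, elections, ...
    "B": (0.10, 0.20),  # medium risk: complex policy, contested issues
    "C": (0.05, 0.10),  # low risk: definitions, established facts
}

def select_for_audit(items, tier, rng=None):
    """Sample items for expert audit at the tier's lower recommended rate."""
    low, _high = AUDIT_RATES[tier]
    k = max(1, round(len(items) * low))  # always audit at least one item
    rng = rng or random.Random(0)        # fixed seed for a reproducible sketch
    return rng.sample(list(items), k)

audited = select_for_audit(range(100), "A")  # 30 of 100 items at Tier A's lower bound
```

Failed audits would then feed back into the quality gates, as described in questions 1 and 5.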