Changes for page FAQ
Last modified by Robert Schaub on 2026/02/08 21:20
Summary

Page properties (2 modified, 0 added, 0 removed)

Details

Page properties

Parent: changed from Archive.FactHarbor0\.9\.40.Specification.WebHome to FactHarbor.Specification.WebHome

Content: modified (see below)
...

**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)

**Process**:
* Extracts claims from submitted text
* Generates structured sub-queries
* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)

...

**What**: Sampling audits where experts review AI-generated content

**Rates**:
* High-risk (Tier A): 30-50% sampling
* Medium-risk (Tier B): 10-20% sampling
* Low-risk (Tier C): 5-10% sampling

...

=== 1.4 Why All Three Matter ===

**Complementary Strengths**:
* **AI research**: Scale and speed for emerging claims
* **Expert authoring**: Authority and precision for critical domains
* **Audit feedback**: Continuous quality improvement

...

**Expert Time Optimization**:

Experts can choose where to focus their time:
* Author high-priority content directly
* Validate and edit AI-generated outputs
* Audit samples to improve system-wide AI performance

...

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:
* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across political/ideological spectrum
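The FAQ leaves the source-cluster check abstract. As a rough illustration only, clustering could be flagged by measuring how much of an evidence set shares a single ideological leaning — the field names, the `leaning` labels, and the 0.8 threshold below are all hypothetical, not FactHarbor's actual implementation:

```python
from collections import Counter

def bubble_score(sources):
    """Fraction of evidence concentrated in the most common outlet
    leaning; 1.0 means every source shares one leaning."""
    if not sources:
        return 0.0
    leanings = Counter(s["leaning"] for s in sources)
    return max(leanings.values()) / len(sources)

def check_diversity(sources, threshold=0.8):
    """Flag an evidence set whose sources cluster in one leaning
    (threshold chosen arbitrarily for illustration)."""
    score = bubble_score(sources)
    return {"score": score, "flagged": score >= threshold}
```

In practice, a richer clustering signal (outlet, funder, citation graph) and an auditor-tuned threshold would replace the single `leaning` field.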
**Multiple Scenarios**:
* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:
* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:
* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:
* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:
* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
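One way to picture a scenario is as a record of explicit assumptions. The sketch below encodes the coffee example with hypothetical field names and illustrative verdict strings — it is not FactHarbor's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One explicit interpretation of a claim; each scenario
    gets its own verdict instead of a single forced answer."""
    claim: str
    definition: str    # what "healthy" means in this reading
    population: str    # who the claim is about
    consumption: str   # dose boundary, e.g. cups per day
    time_horizon: str  # short-term vs long-term

s1 = Scenario("Coffee is healthy", "reduces disease risk",
              "healthy adults", "1-2 cups/day", "long-term")
s2 = Scenario("Coffee is healthy", "affects specific conditions",
              "people with heart conditions", "5 cups/day", "short-term")

# Same claim text, two scenarios, two independent (illustrative) verdicts
verdicts = {s1: "plausible", s2: "doubtful"}
```

Because the assumptions live in the scenario itself, changing one field yields a distinct scenario with its own verdict rather than silently overwriting the old one.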
**Truth Landscape**:
* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:
* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

== 4. What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:
* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:
* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit

...

* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:
* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns
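The gate list above suggests a simple publish pipeline: run every automated check and block publication on any failure. A minimal sketch covering just two of the gates, with invented field names and thresholds:

```python
def gate_source_quality(draft):
    # Hypothetical rule: every cited source needs a reliability score >= 0.5
    return all(s.get("reliability", 0) >= 0.5 for s in draft["sources"])

def gate_contradiction_search(draft):
    # Mandatory contradiction search: counter-evidence queries must be recorded
    return len(draft.get("counter_evidence_queries", [])) > 0

GATES = [gate_source_quality, gate_contradiction_search]

def run_quality_gates(draft):
    """Run all gates; any failure blocks publication and is reported."""
    failed = [g.__name__ for g in GATES if not g(draft)]
    return {"publish": not failed, "failed_gates": failed}
```

Reporting *which* gate failed matters for the audit feedback loop: recurring failures of one gate point at a systematic weakness rather than a one-off error.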
**Audit System**:
* Stratified sampling catches manipulation patterns
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:
* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:
* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden

**Federation**:
* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

== 6. What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:
1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts

...

6. Old versions remain accessible

**Version History**:
* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:
* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:
* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions

== 7. Who can submit claims to FactHarbor? ==

**Anyone** - even without login:

**Readers** (no login required):
* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):
* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:
1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates

...

7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if passes

== 8. What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:
* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "Human-Reviewed" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:
* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:
* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy
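The recommended audit rates can be read as sampling probabilities. The sketch below draws a stratified audit sample using the midpoints of the ranges above; the rates come from the tier table, but the function and record shape are hypothetical:

```python
import random

# Midpoints of the recommended audit-rate ranges per risk tier
AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def sample_for_audit(items, seed=0):
    """Draw a stratified audit sample: each published item is selected
    with the probability recommended for its risk tier."""
    rng = random.Random(seed)
    return [it for it in items if rng.random() < AUDIT_RATES[it["tier"]]]
```

A real deployment would likely also guarantee minimum per-tier counts and oversample items with prior audit failures, which plain Bernoulli sampling does not do.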
**Assignment**:
* AKEL suggests tier based on domain, keywords, impact
* Moderators and Experts can override
* Risk tiers reviewed based on audit outcomes

== 9. How does federation work and why is it important? ==

**Federation Model**:
* Multiple independent FactHarbor nodes
* Each node has own database, AKEL, governance
* Nodes exchange claims, scenarios, evidence, verdicts
* No central authority

**Why Federation Matters**:
* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users

...

* **Trust diversity**: Multiple perspectives, not single truth source

**How Nodes Exchange Data**:
1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes

...

6. Local re-evaluation if needed

**Trust Model**:
* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only

== 10. Can experts disagree in FactHarbor? ==

**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:
* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:
* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:
* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:
* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

== 11. What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:
* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable

**Evidence Requirements**:
* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:
* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:
* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled
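The requirement that references be accessible and verifiable implies a mechanical completeness check that can run before any human audit. A sketch with illustrative field names (a real gate would also fetch each URL to confirm it resolves):

```python
REQUIRED_FIELDS = ("title", "url", "publisher", "date")

def citation_complete(citation):
    """True only if every required field is present and non-empty
    (the field list is illustrative, not the spec's)."""
    return all(citation.get(f) for f in REQUIRED_FIELDS)

def verify_references(citations):
    """Split a draft's references into complete and incomplete ones,
    mirroring the structural-integrity gate's reference check."""
    complete = [c for c in citations if citation_complete(c)]
    incomplete = [c for c in citations if not citation_complete(c)]
    return complete, incomplete
```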
**Human Oversight**:
* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

== 12. How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:
* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations

...

== 13. Related Pages ==

* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]

== 20. Glossary / Key Terms ==

=== Phase 0 vs POC v1 ===

These terms refer to the same stage of FactHarbor's development:

* **Phase 0** - Organisational perspective: pre-alpha stage with founder-led governance
* **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication

Both describe the current development stage, in which the platform is being built and initially validated.

=== Beta 0 ===

The next development stage after the POC, featuring:

* External testers
* Basic federation experiments
* Enhanced automation

=== Release 1.0 ===

The first public release, featuring:

* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure