Changes for page FAQ
Last modified by Robert Schaub on 2025/12/24 20:33
Summary
Page properties (2 modified, 0 added, 0 removed)
Details
Page properties

Parent

@@ -1,1 +1,1 @@
-FactHarbor.Specification V0\.9\.18.WebHome
+FactHarbor.Specification.WebHome

Content
@@ -6,74 +6,23 @@

== How do facts get input into the system? ==

-FactHarbor uses a hybrid model combining three complementary approaches:
+FactHarbor uses a hybrid model:

-=== 1. AI-Generated Content (Scalable) ===
+**1. AI-Generated (scalable)**: System dynamically researches claims—extracting, generating structured sub-queries, performing mandatory contradiction search (actively seeking counter-evidence, not just confirmations), running quality gates. Published with clear "AI-Generated" labels.

-**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)
+**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content—especially for high-risk domains (medical, legal). These get "Human-Reviewed" status and higher trust.

-**Process**:
+**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% high-risk, 5-10% low-risk) where expert reviews systematically improve AI research quality.

-* Extracts claims from submitted text
-* Generates structured sub-queries
-* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
-* Runs automated quality gates
-* Publishes with clear "AI-Generated" labels
+**Why both matter**:
+* AI research handles scale—emerging claims, immediate responses with transparent reasoning
+* Expert authoring provides authoritative grounding for critical domains
+* Audit feedback ensures AI quality improves based on expert validation patterns

-**Publication**: Mode 2 (public, AI-labeled) when quality gates pass
+Experts can author high-priority content directly, validate/edit AI outputs, or audit samples to improve system-wide performance—focusing their time where expertise matters most.

-**Purpose**: Handles scale—emerging claims get immediate responses with transparent reasoning
+POC v1 demonstrates the AI research pipeline (fully automated with transparent reasoning); full system supports all three pathways.

-=== 2. Expert-Authored Content (Authoritative) ===
-
-**What**: Domain experts directly author, edit, and validate content
-
-**Focus**: High-risk domains (medical, legal, safety-critical)
-
-**Publication**: Mode 3 ("Human-Reviewed" status) with expert attribution
-
-**Authority**: Tier A content requires expert approval
-
-**Purpose**: Provides authoritative grounding for critical domains where errors have serious consequences
-
-=== 3. Audit-Improved Quality (Continuous) ===
-
-**What**: Sampling audits where experts review AI-generated content
-
-**Rates**:
-
-* High-risk (Tier A): 30-50% sampling
-* Medium-risk (Tier B): 10-20% sampling
-* Low-risk (Tier C): 5-10% sampling
-
-**Impact**: Expert feedback systematically improves AI research quality
-
-**Purpose**: Ensures AI quality evolves based on expert validation patterns
-
-=== Why All Three Matter ===
-
-**Complementary Strengths**:
-
-* **AI research**: Scale and speed for emerging claims
-* **Expert authoring**: Authority and precision for critical domains
-* **Audit feedback**: Continuous quality improvement
-
-**Expert Time Optimization**:
-
-Experts can choose where to focus their time:
-
-* Author high-priority content directly
-* Validate and edit AI-generated outputs
-* Audit samples to improve system-wide AI performance
-
-This focuses expert time where domain expertise matters most while leveraging AI for scale.
-
-=== Current Status ===
-
-**POC v1**: Demonstrates the AI research pipeline (fully automated with transparent reasoning and quality gates)
-
-**Full System**: Will support all three pathways with integrated workflow
-

----

== What prevents FactHarbor from becoming another echo chamber? ==
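To make the AI research pathway above concrete, here is a minimal sketch of the described pipeline: extract claims, build sub-queries, run a mandatory contradiction search, apply quality gates, publish with an "AI-Generated" label. It is an illustration only; all names, the sentence-splitting extraction, and the single quality gate are hypothetical placeholders, not AKEL's actual implementation.

{{code language="python"}}
# Illustrative sketch only: every name and the placeholder logic are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ClaimResearch:
    claim: str
    supporting: list[str] = field(default_factory=list)
    contradicting: list[str] = field(default_factory=list)  # counter-evidence is mandatory
    label: str = "AI-Generated"                              # always published with a clear label


def extract_claims(text: str) -> list[str]:
    # Placeholder: real extraction is done by AKEL, not by naive sentence splitting.
    return [s.strip() for s in text.split(".") if s.strip()]


def passes_quality_gates(research: ClaimResearch) -> bool:
    # Placeholder gate: at least one supporting source must exist; real gates also
    # check source quality, logic chains, and bubble patterns.
    return len(research.supporting) > 0


def research_text(text: str, search) -> list[ClaimResearch]:
    """Extract -> structured sub-queries -> evidence plus mandatory contradiction
    search -> automated quality gates -> publish with an "AI-Generated" label."""
    published = []
    for claim in extract_claims(text):
        sub_queries = [f"evidence for: {claim}", f"evidence against: {claim}"]
        result = ClaimResearch(
            claim=claim,
            supporting=search(sub_queries[0]),
            contradicting=search(sub_queries[1]),  # actively seek counter-evidence
        )
        if passes_quality_gates(result):
            published.append(result)
    return published
{{/code}}

Here `search` stands in for whatever retrieval backend a node uses; the point is simply that the counter-evidence query is issued unconditionally and publication depends on the gates, not on the verdict.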
@@ -81,7 +81,6 @@
FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:
-
* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering

@@ -88,25 +88,21 @@
* Requires diverse perspectives across political/ideological spectrum

**Multiple Scenarios**:
-
* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:
-
* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:
-
* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:
-
* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection
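The "echo chamber patterns in source clusters" check above could look roughly like the sketch below. The field names, the `leaning` category, and the 0.7 threshold are invented for illustration; the specification does not define them.

{{code language="python"}}
# Illustrative sketch only: field names, categories, and the threshold are invented.
from collections import Counter


def bubble_flags(sources: list[dict], max_share: float = 0.7) -> list[str]:
    """Flag source sets that cluster around a single outlet or perspective.

    Each source is a dict such as {"domain": "example.org", "leaning": "left"}.
    A non-empty return value means: widen the contradiction search."""
    if not sources:
        return ["no sources at all"]

    flags = []
    for key in ("domain", "leaning"):
        counts = Counter(s.get(key, "unknown") for s in sources)
        top_value, top_count = counts.most_common(1)[0]
        if top_count / len(sources) > max_share:
            flags.append(f"{key} cluster: {top_value!r} supplies {top_count}/{len(sources)} sources")
    return flags


# Three of four sources share one ideological leaning, so the set is flagged.
print(bubble_flags([
    {"domain": "a.org", "leaning": "left"},
    {"domain": "b.org", "leaning": "left"},
    {"domain": "c.org", "leaning": "left"},
    {"domain": "d.org", "leaning": "right"},
]))
{{/code}}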
@@ -118,23 +118,20 @@
This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:
-
* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
-** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
-** Population (adults? pregnant women? people with heart conditions?)
-** Consumption level (1 cup/day? 5 cups/day?)
-** Time horizon (short-term? long-term?)
+ ** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
+ ** Population (adults? pregnant women? people with heart conditions?)
+ ** Consumption level (1 cup/day? 5 cups/day?)
+ ** Time horizon (short-term? long-term?)

**Truth Landscape**:
-
* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:
-
* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

@@ -144,7 +144,6 @@
== What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:
-
* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences

@@ -151,7 +151,6 @@
* Limited ability to show *why* people disagree

**FactHarbor**:
-
* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit

@@ -164,7 +164,6 @@
== How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:
-
* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search

@@ -171,13 +171,11 @@
* Bubble detection for coordinated campaigns

**Audit System**:
-
* Stratified sampling catches manipulation patterns
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:
-
* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled

@@ -184,13 +184,11 @@
* Version history preserved

**Moderation**:
-
* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden

**Federation**:
-
* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection

@@ -203,7 +203,6 @@
FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:
-
1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts

@@ -212,19 +212,16 @@
6. Old versions remain accessible

**Version History**:
-
* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:
-
* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:
-
* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions

@@ -236,7 +236,6 @@
**Anyone** - even without login:

**Readers** (no login required):
-
* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
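The re-evaluation and version-history behaviour described in the hunks above (new evidence appends a new verdict version, old versions stay accessible, users can ask "as of date X, what did we know?") maps naturally onto an append-only structure. The sketch below is illustrative only; the data model and names are not taken from the specification.

{{code language="python"}}
# Illustrative sketch only: the data model and names are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date


@dataclass
class VerdictVersion:
    as_of: date
    likelihood: tuple[float, float]  # a range with uncertainty, not a binary label
    reason: str                      # why this version exists (documented, not hidden)
    evidence: list[str]


@dataclass
class ScenarioVerdict:
    scenario: str
    versions: list[VerdictVersion] = field(default_factory=list)

    def reevaluate(self, new_evidence: str, likelihood: tuple[float, float], when: date) -> None:
        """New evidence arrives: append a new version; older versions remain accessible."""
        previous_evidence = self.versions[-1].evidence if self.versions else []
        self.versions.append(VerdictVersion(
            as_of=when,
            likelihood=likelihood,
            reason=f"re-evaluated after new evidence: {new_evidence}",
            evidence=previous_evidence + [new_evidence],
        ))

    def as_of(self, when: date) -> VerdictVersion | None:
        """Answer 'as of date X, what did we know?' from the version timeline."""
        known = [v for v in self.versions if v.as_of <= when]
        return known[-1] if known else None
{{/code}}

Nothing is ever overwritten; users following a claim would be notified whenever `reevaluate` runs.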
@@ -243,7 +243,6 @@
* System deduplicates and normalizes

**Contributors** (logged in):
-
* Everything Readers can do
* Submit evidence sources
* Suggest scenarios

@@ -250,7 +250,6 @@
* Participate in discussions

**Workflow**:
-
1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates

@@ -267,7 +267,6 @@
Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:
-
* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings, expert review required for "Human-Reviewed" status
* **Audit rate**: Recommendation 30-50%

@@ -274,7 +274,6 @@
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:
-
* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommendation 10-20%

@@ -281,7 +281,6 @@
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:
-
* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication default
* **Audit rate**: Recommendation 5-10%

@@ -288,7 +288,6 @@
* **Why**: Well-established, low controversy

**Assignment**:
-
* AKEL suggests tier based on domain, keywords, impact
* Moderators and Experts can override
* Risk tiers reviewed based on audit outcomes

@@ -298,7 +298,6 @@
== How does federation work and why is it important? ==

**Federation Model**:
-
* Multiple independent FactHarbor nodes
* Each node has own database, AKEL, governance
* Nodes exchange claims, scenarios, evidence, verdicts

@@ -305,7 +305,6 @@
* No central authority

**Why Federation Matters**:
-
* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users

@@ -313,7 +313,6 @@
* **Trust diversity**: Multiple perspectives, not single truth source

**How Nodes Exchange Data**:
-
1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes

@@ -322,7 +322,6 @@
6. Local re-evaluation if needed

**Trust Model**:
-
* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only

@@ -334,19 +334,16 @@
**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:
-
* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:
-
* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:
-
* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
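The node-to-node exchange and trust model described above (signed bundles, verification, then auto-import, review, or manual handling) is sketched below. The HMAC signing, the JSON wire format, and the node names are stand-ins: the specification describes the steps, not this particular mechanism.

{{code language="python"}}
# Illustrative sketch only: wire format, HMAC signing, and node names are stand-ins.
import hashlib
import hmac
import json

TRUST_LEVELS = {
    "trusted-node.example": "trusted",
    "neutral-node.example": "neutral",
}  # any other origin is treated as untrusted


def verify_bundle(bundle: dict, shared_key: bytes) -> bool:
    """Check the bundle signature before anything is imported."""
    payload = json.dumps(bundle["items"], sort_keys=True).encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])


def handle_incoming(bundle: dict, shared_key: bytes) -> str:
    """Receive -> verify signature -> auto-import, review queue, or manual only."""
    if not verify_bundle(bundle, shared_key):
        return "quarantine"  # bad signature: never imported, flagged for operators
    level = TRUST_LEVELS.get(bundle["origin"], "untrusted")
    if level == "trusted":
        return "auto-import"   # trusted nodes: auto-import
    if level == "neutral":
        return "review-queue"  # neutral nodes: import with review
    return "manual-only"       # untrusted nodes: manual only
{{/code}}

Imported material would then feed the local re-evaluation step (step 6 in the exchange workflow) rather than overwrite local conclusions.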
@@ -353,7 +353,6 @@
* Users can evaluate competing expert opinions

**Federation**:
-
* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

@@ -365,7 +365,6 @@
**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:
-
* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable

@@ -372,7 +372,6 @@
* References must be accessible and verifiable

**Evidence Requirements**:
-
* Primary sources required
* Citations must be complete
* Sources must be accessible

@@ -379,13 +379,11 @@
* Reliability scored

**Audit System**:
-
* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:
-
* All reasoning chains visible
* Sources linked
* Users can verify claims against sources

@@ -392,7 +392,6 @@
* AKEL outputs clearly labeled

**Human Oversight**:
-
* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

@@ -404,7 +404,6 @@
[ToDo: Business model and sustainability to be defined]

Potential models under consideration:
-
* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations

@@ -417,8 +417,9 @@

== Related Pages ==

-* [[Requirements (Roles)>>FactHarbor.Specification V0\.9\.18.Requirements.WebHome]]
-* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
-* [[Automation>>FactHarbor.Specification V0\.9\.18.Automation.WebHome]]
-* [[Federation & Decentralization>>FactHarbor.Specification V0\.9\.18.Federation & Decentralization.WebHome]]
-* [[Mission & Purpose>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Organisation V0\.9\.18.Core Problems FactHarbor Solves.WebHome]]
+* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
+* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
+* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
+* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
+* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]
+
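Finally, a rough sketch of the structural-integrity gate and evidence requirements discussed above (every statement needs a citation, references must be resolvable). The checks shown are deliberately simplified and the names are hypothetical; a real gate would also fetch each reference and verify that it supports the statement.

{{code language="python"}}
# Illustrative sketch only: simplified stand-in for the structural-integrity gate.
from urllib.parse import urlparse


def structural_integrity_issues(statements: list[dict]) -> list[str]:
    """Each statement is expected to look like {"text": "...", "sources": ["https://..."]}.

    Returns a list of problems; an empty list means this gate passes."""
    issues = []
    for index, statement in enumerate(statements):
        sources = statement.get("sources", [])
        if not sources:
            issues.append(f"statement {index} cites no source (possible hallucination)")
        for url in sources:
            parsed = urlparse(url)
            if parsed.scheme not in ("http", "https") or not parsed.netloc:
                issues.append(f"statement {index}: reference {url!r} is not a resolvable URL")
            # A real gate would also retrieve the reference and check that it
            # actually supports the statement text; that step is omitted here.
    return issues
{{/code}}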