Changes for page FAQ
Last modified by Robert Schaub on 2025/12/24 20:33
Summary

* Page properties (1 modified, 0 added, 0 removed)

Details

* Page properties
** Content
@@ -6,23 +6,74 @@
 
 == How do facts get input into the system? ==
 
-FactHarbor uses a hybrid model:
+FactHarbor uses a hybrid model combining three complementary approaches:
 
-**1. AI-Generated (scalable)**: System dynamically researches claims—extracting, generating structured sub-queries, performing mandatory contradiction search (actively seeking counter-evidence, not just confirmations), running quality gates. Published with clear "AI-Generated" labels.
+=== 1. AI-Generated Content (Scalable) ===
 
-**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content—especially for high-risk domains (medical, legal). These get "Human-Reviewed" status and higher trust.
+**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)
 
-**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% high-risk, 5-10% low-risk) where expert reviews systematically improve AI research quality.
+**Process**:
 
-**Why both matter**:
-* AI research handles scale—emerging claims, immediate responses with transparent reasoning
-* Expert authoring provides authoritative grounding for critical domains
-* Audit feedback ensures AI quality improves based on expert validation patterns
+* Extracts claims from submitted text
+* Generates structured sub-queries
+* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
+* Runs automated quality gates
+* Publishes with clear "AI-Generated" labels
 
-Experts can author high-priority content directly, validate/edit AI outputs, or audit samples to improve system-wide performance—focusing their time where expertise matters most.
+**Publication**: Mode 2 (public, AI-labeled) when quality gates pass
 
-POC v1 demonstrates the AI research pipeline (fully automated with transparent reasoning); full system supports all three pathways.
+**Purpose**: Handles scale — emerging claims get immediate responses with transparent reasoning
+
+=== 2. Expert-Authored Content (Authoritative) ===
+
+**What**: Domain experts directly author, edit, and validate content
+
+**Focus**: High-risk domains (medical, legal, safety-critical)
+
+**Publication**: Mode 3 ("Human-Reviewed" status) with expert attribution
+
+**Authority**: Tier A content requires expert approval
+
+**Purpose**: Provides authoritative grounding for critical domains where errors have serious consequences
+
+=== 3. Audit-Improved Quality (Continuous) ===
+
+**What**: Sampling audits where experts review AI-generated content
+
+**Rates**:
+
+* High-risk (Tier A): 30-50% sampling
+* Medium-risk (Tier B): 10-20% sampling
+* Low-risk (Tier C): 5-10% sampling
+
+**Impact**: Expert feedback systematically improves AI research quality
+
+**Purpose**: Ensures AI quality evolves based on expert validation patterns
+
+=== Why All Three Matter ===
+
+**Complementary Strengths**:
+
+* **AI research**: Scale and speed for emerging claims
+* **Expert authoring**: Authority and precision for critical domains
+* **Audit feedback**: Continuous quality improvement
+
+**Expert Time Optimization**:
+
+Experts can choose where to focus their time:
+
+* Author high-priority content directly
+* Validate and edit AI-generated outputs
+* Audit samples to improve system-wide AI performance
+
+This focuses expert time where domain expertise matters most while leveraging AI for scale.
+
+=== Current Status ===
+
+**POC v1**: Demonstrates the AI research pipeline (fully automated with transparent reasoning and quality gates)
+
+**Full System**: Will support all three pathways with integrated workflow
 
 ----
 
 == What prevents FactHarbor from becoming another echo chamber? ==
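The tiered audit rates listed in the hunk above lend themselves to a short illustration of stratified sampling. This is a minimal sketch, not FactHarbor code: the `select_for_audit` helper, the item schema, and the exact rates (midpoints of the recommended 30-50% / 10-20% / 5-10% ranges) are all assumptions.

```python
import random

# Illustrative rates: midpoints of the recommended ranges
# (Tier A 30-50%, Tier B 10-20%, Tier C 5-10%). Assumed values.
AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items, rates=AUDIT_RATES, seed=42):
    """Stratified sampling: draw each item independently with the
    audit probability configured for its risk tier, so high-risk
    content is reviewed far more often than low-risk content."""
    rng = random.Random(seed)
    return [item for item in items if rng.random() < rates[item["tier"]]]

# Example: 100 high-risk and 100 low-risk claims; roughly 40 of the
# Tier A items but only a handful of Tier C items land in the queue.
claims = ([{"id": i, "tier": "A"} for i in range(100)]
          + [{"id": 100 + i, "tier": "C"} for i in range(100)])
audit_queue = select_for_audit(claims)
```

Sampling per item (rather than picking a fixed-size batch) keeps the expected audit load proportional to how much content each tier produces.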
@@ -30,6 +30,7 @@
 FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
 
 **Mandatory Contradiction Search**:
+
 * AI must actively search for counter-evidence, not just confirmations
 * System checks for echo chamber patterns in source clusters
 * Flags tribal or ideological source clustering

@@ -36,21 +36,25 @@
 * Requires diverse perspectives across political/ideological spectrum
 
 **Multiple Scenarios**:
+
 * Claims are evaluated under different interpretations
 * Reveals how assumptions change conclusions
 * Makes disagreements understandable, not divisive
 
 **Transparent Reasoning**:
+
 * All assumptions, definitions, and boundaries are explicit
 * Evidence chains are traceable
 * Uncertainty is quantified, not hidden
 
 **Audit System**:
+
 * Human auditors check for bubble patterns
 * Feedback loop improves AI search diversity
 * Community can flag missing perspectives
 
 **Federation**:
+
 * Multiple independent nodes with different perspectives
 * No single entity controls "the truth"
 * Cross-node contradiction detection

@@ -62,20 +62,23 @@
 This is exactly what FactHarbor is designed for:
 
 **Scenarios capture contexts**:
+
 * Each scenario defines specific boundaries, definitions, and assumptions
 * The same claim can have different verdicts in different scenarios
 * Example: "Coffee is healthy" depends on:
- ** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
- ** Population (adults? pregnant women? people with heart conditions?)
- ** Consumption level (1 cup/day? 5 cups/day?)
- ** Time horizon (short-term? long-term?)
+** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
+** Population (adults? pregnant women? people with heart conditions?)
+** Consumption level (1 cup/day? 5 cups/day?)
+** Time horizon (short-term? long-term?)
 
 **Truth Landscape**:
+
 * Shows all scenarios and their verdicts side-by-side
 * Users see *why* interpretations differ
 * No forced consensus when legitimate disagreement exists
 
 **Explicit Assumptions**:
+
 * Every scenario states its assumptions clearly
 * Users can compare how changing assumptions changes conclusions
 * Makes context-dependence visible, not hidden

@@ -85,6 +85,7 @@
 
 == What makes FactHarbor different from traditional fact-checking sites? ==
 **Traditional Fact-Checking**:
+
 * Binary verdicts: True / Mostly True / False
 * Single interpretation chosen by fact-checker
 * Often hides legitimate contextual differences

@@ -91,6 +91,7 @@
 * Limited ability to show *why* people disagree
 
 **FactHarbor**:
+
 * **Multi-scenario**: Shows multiple valid interpretations
 * **Likelihood-based**: Ranges with uncertainty, not binary labels
 * **Transparent assumptions**: Makes boundaries and definitions explicit

@@ -103,6 +103,7 @@
 
 == How do you prevent manipulation or coordinated misinformation campaigns? ==
 **Quality Gates**:
+
 * Automated checks before AI-generated content publishes
 * Source quality verification
 * Mandatory contradiction search

@@ -109,11 +109,13 @@
 * Bubble detection for coordinated campaigns
 
 **Audit System**:
+
 * Stratified sampling catches manipulation patterns
 * Expert auditors validate AI research quality
 * Failed audits trigger immediate review
 
 **Transparency**:
+
 * All reasoning chains are visible
 * Evidence sources are traceable
 * AKEL involvement clearly labeled
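The "bubble detection" and source-clustering checks mentioned in the hunks above can be sketched as a simple concentration test over the outlets backing a verdict. A minimal sketch under stated assumptions: `bubble_flag`, its thresholds, and the flat list of source identifiers are illustrative, not FactHarbor's actual API.

```python
from collections import Counter

def bubble_flag(source_domains, max_share=0.5, min_distinct=3):
    """Flag an evidence set whose sources cluster too tightly.

    Returns True (route to human review) when one outlet supplies
    more than max_share of the citations, or when fewer than
    min_distinct outlets are represented. Thresholds are assumed.
    """
    if not source_domains:
        return True  # no evidence at all is its own red flag
    counts = Counter(source_domains)
    top_share = counts.most_common(1)[0][1] / len(source_domains)
    return top_share > max_share or len(counts) < min_distinct

# One outlet dominating the citations trips the flag; a spread of
# independent outlets passes.
flagged = bubble_flag(["siteA", "siteA", "siteA", "siteB"])   # True
ok = bubble_flag(["siteA", "siteB", "siteC", "siteD"])        # False
```

A real implementation would presumably cluster by ownership or ideological lean rather than by domain name alone, but the shape of the check is the same.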
@@ -120,11 +120,13 @@
 * Version history preserved
 
 **Moderation**:
+
 * Moderators handle abuse, spam, coordinated manipulation
 * Content can be flagged by community
 * Audit trail maintained even if content hidden
 
 **Federation**:
+
 * Multiple nodes with independent governance
 * No single point of control
 * Cross-node contradiction detection

@@ -137,6 +137,7 @@
 FactHarbor is designed for evolving knowledge:
 
 **Automatic Re-evaluation**:
+
 1. New evidence arrives
 2. System detects affected scenarios and verdicts
 3. AKEL proposes updated verdicts

@@ -145,16 +145,19 @@
 6. Old versions remain accessible
 
 **Version History**:
+
 * Every verdict has complete history
 * Users can see "as of date X, what did we know?"
 * Timeline shows how understanding evolved
 
 **Transparent Updates**:
+
 * Reason for re-evaluation documented
 * New evidence clearly linked
 * Changes explained, not hidden
 
 **User Notifications**:
+
 * Users following claims are notified of updates
 * Can compare old vs new verdicts
 * Can see which evidence changed conclusions

@@ -166,6 +166,7 @@
 **Anyone** - even without login:
 
 **Readers** (no login required):
+
 * Browse and search all published content
 * Submit text for analysis
 * New claims added automatically unless duplicates exist

@@ -172,6 +172,7 @@
 * System deduplicates and normalizes
 
 **Contributors** (logged in):
+
 * Everything Readers can do
 * Submit evidence sources
 * Suggest scenarios

@@ -178,6 +178,7 @@
 * Participate in discussions
 
 **Workflow**:
+
 1. User submits text (as Reader or Contributor)
 2. AKEL extracts claims
 3. Checks for existing duplicates

@@ -194,6 +194,7 @@
 Risk tiers determine review requirements and publication workflow:
 
 **Tier A (High Risk)**:
+
 * **Domains**: Medical, legal, elections, safety, security, major financial
 * **Publication**: AI can publish with warnings, expert review required for "Human-Reviewed" status
 * **Audit rate**: Recommendation 30-50%

@@ -200,6 +200,7 @@
 * **Why**: Potential for significant harm if wrong
 
 **Tier B (Medium Risk)**:
+
 * **Domains**: Complex policy, science causality, contested issues
 * **Publication**: AI can publish immediately with clear labeling
 * **Audit rate**: Recommendation 10-20%

@@ -206,6 +206,7 @@
 * **Why**: Nuanced but lower immediate harm risk
 
 **Tier C (Low Risk)**:
+
 * **Domains**: Definitions, established facts, historical data
 * **Publication**: AI publication default
 * **Audit rate**: Recommendation 5-10%

@@ -212,6 +212,7 @@
 * **Why**: Well-established, low controversy
 
 **Assignment**:
+
 * AKEL suggests tier based on domain, keywords, impact
 * Moderators and Experts can override
 * Risk tiers reviewed based on audit outcomes

@@ -221,6 +221,7 @@
 == How does federation work and why is it important? ==
 
 **Federation Model**:
+
 * Multiple independent FactHarbor nodes
 * Each node has own database, AKEL, governance
 * Nodes exchange claims, scenarios, evidence, verdicts
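The **Assignment** step in the risk-tier hunks above (AKEL suggests a tier; Moderators and Experts can override) could look roughly like the following. The keyword lists and function names are hypothetical stand-ins; AKEL's real classifier presumably weighs domain models and impact estimates, not bare keywords.

```python
# Hypothetical keyword lists keyed by tier, loosely following the
# domains named for Tiers A and B. Assumed data, not FactHarbor's.
TIER_KEYWORDS = {
    "A": ("medical", "legal", "election", "safety", "security", "financial"),
    "B": ("policy", "causality", "contested"),
}

def suggest_tier(claim_text):
    """Suggest a risk tier from keywords, checking the highest-risk
    tier first; anything unmatched defaults to low-risk Tier C."""
    text = claim_text.lower()
    for tier in ("A", "B"):
        if any(word in text for word in TIER_KEYWORDS[tier]):
            return tier
    return "C"

def assign_tier(claim_text, override=None):
    """Apply a Moderator/Expert override when one is given."""
    return override if override in ("A", "B", "C") else suggest_tier(claim_text)
```

For example, `suggest_tier("a new medical treatment claim")` returns `"A"`, while an unmatched historical fact falls through to `"C"` unless a reviewer overrides it.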
@@ -227,6 +227,7 @@
 * No central authority
 
 **Why Federation Matters**:
+
 * **Resilience**: No single point of failure or censorship
 * **Autonomy**: Communities govern themselves
 * **Scalability**: Add nodes to handle more users

@@ -234,6 +234,7 @@
 * **Trust diversity**: Multiple perspectives, not single truth source
 
 **How Nodes Exchange Data**:
+
 1. Local node creates versions
 2. Builds signed bundle
 3. Pushes to trusted neighbor nodes

@@ -242,6 +242,7 @@
 6. Local re-evaluation if needed
 
 **Trust Model**:
+
 * Trusted nodes → auto-import
 * Neutral nodes → import with review
 * Untrusted nodes → manual only

@@ -253,16 +253,19 @@
 **Yes - and that's a feature, not a bug**:
 
 **Multiple Scenarios**:
+
 * Experts can create different scenarios with different assumptions
 * Each scenario gets its own verdict
 * Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
 
 **Parallel Verdicts**:
+
 * Same scenario, different expert interpretations
 * Both verdicts visible with expert attribution
 * No forced consensus
 
 **Transparency**:
+
 * Expert reasoning documented
 * Assumptions stated explicitly
 * Evidence chains traceable

@@ -269,6 +269,7 @@
 * Users can evaluate competing expert opinions
 
 **Federation**:
+
 * Different nodes can have different expert conclusions
 * Cross-node branching allowed
 * Users can see how conclusions vary across nodes
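The exchange steps and trust model in the federation hunks above can be sketched end to end. A minimal sketch only: an HMAC over a shared demo key stands in for the real per-node signature scheme, and `build_bundle`, `receive_bundle`, and the trust labels are assumed names, not FactHarbor's actual protocol.

```python
import hashlib
import hmac
import json

def build_bundle(versions, key):
    """Serialize a batch of versions deterministically and sign it."""
    body = json.dumps(versions, sort_keys=True)
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def receive_bundle(bundle, key, trust):
    """Verify the signature, then route by the sender's trust level:
    trusted -> auto-import, neutral -> import with review,
    untrusted -> manual only; bad signatures are rejected outright."""
    expected = hmac.new(key, bundle["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["sig"]):
        return "reject"
    return {"trusted": "auto-import",
            "neutral": "import-with-review",
            "untrusted": "manual-only"}[trust]

key = b"shared-demo-key"  # real nodes would use public-key signatures
bundle = build_bundle([{"claim": "c1", "verdict": "likely true"}], key)
decision = receive_bundle(bundle, key, "trusted")  # "auto-import"
```

Deterministic serialization (`sort_keys=True`) matters here: the receiver must reproduce the exact signed bytes, and any tampering with the body makes verification fail.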
@@ -280,6 +280,7 @@
 **Multiple Safeguards**:
 
 **Quality Gate 4: Structural Integrity**:
+
 * Fact-checking against sources
 * No hallucinations allowed
 * Logic chain must be valid and traceable

@@ -286,6 +286,7 @@
 * References must be accessible and verifiable
 
 **Evidence Requirements**:
+
 * Primary sources required
 * Citations must be complete
 * Sources must be accessible

@@ -292,11 +292,13 @@
 * Reliability scored
 
 **Audit System**:
+
 * Human auditors check AI-generated content
 * Hallucinations caught and fed back into training
 * Patterns of errors trigger system improvements
 
 **Transparency**:
+
 * All reasoning chains visible
 * Sources linked
 * Users can verify claims against sources

@@ -303,6 +303,7 @@
 * AKEL outputs clearly labeled
 
 **Human Oversight**:
+
 * Tier A requires expert review for "Human-Reviewed" status
 * Audit sampling catches errors
 * Community can flag issues

@@ -314,6 +314,7 @@
 [ToDo: Business model and sustainability to be defined]
 
 Potential models under consideration:
+
 * Non-profit foundation with grants and donations
 * Institutional subscriptions (universities, research organizations, media)
 * API access for third-party integrations
@@ -330,5 +330,4 @@
 * [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
 * [[Automation>>FactHarbor.Specification.Automation.WebHome]]
 * [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
-* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]
-
+* [[Mission & Purpose>>FactHarbor.Organisation V0\.9\.18.Core Problems FactHarbor Solves.WebHome]]