Changes for page FAQ
Last modified by Robert Schaub on 2025/12/24 20:33
To version 2.1, edited by Robert Schaub on 2025/12/14 23:02
Change comment: There is no comment for this version
Summary: Page properties (1 modified, 0 added, 0 removed)

Content:
FactHarbor uses a hybrid model:

**1. AI-Generated (scalable)**: The system dynamically researches claims: it extracts them, generates structured sub-queries, performs a mandatory contradiction search (actively seeking counter-evidence, not just confirmations), and runs quality gates. Results are published with clear "AI-Generated" labels.

**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content, especially for high-risk domains (medical, legal). This content gets "Human-Reviewed" status and higher trust.

**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% of high-risk content, 5-10% of low-risk) in which expert reviews systematically improve AI research quality.

**Why both matter**:

* AI research handles scale: emerging claims, immediate responses with transparent reasoning
* Expert authoring provides authoritative grounding for critical domains
* Audit feedback ensures AI quality improves based on expert validation patterns

...

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

...

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts** (see the sketch below):

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
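A minimal sketch of this idea in code. It is illustrative only: the class names, fields, and example verdicts are assumptions, not FactHarbor's actual schema.

{{code language="python"}}
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One interpretation of a claim, with its assumptions made explicit."""
    definition: str    # what "healthy" means in this scenario
    population: str    # who the claim is about
    assumptions: dict  # other boundaries, e.g. consumption level, time horizon
    verdict: str       # the verdict under THIS interpretation
    likelihood: tuple  # an uncertainty range, not a binary label

@dataclass
class Claim:
    text: str
    scenarios: list = field(default_factory=list)

claim = Claim(text="Coffee is healthy")
claim.scenarios.append(Scenario(
    definition="reduces long-term disease risk",
    population="healthy adults",
    assumptions={"consumption": "1-3 cups/day", "horizon": "long-term"},
    verdict="Likely true",
    likelihood=(0.6, 0.8),
))
claim.scenarios.append(Scenario(
    definition="safe at high intake during pregnancy",
    population="pregnant women",
    assumptions={"consumption": "5 cups/day", "horizon": "short-term"},
    verdict="Likely false",
    likelihood=(0.1, 0.3),
))

# A "Truth Landscape" in miniature: all scenario verdicts side by side.
for s in claim.scenarios:
    print(f"{s.population} | {s.definition} -> {s.verdict} {s.likelihood}")
{{/code}}

The same claim object carries both verdicts; nothing forces them to agree.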
**Truth Landscape**:

* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

...

== What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by the fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit

...

== How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content is published
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection

...

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation** (see the sketch at the end of this answer):

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
...
6. Old versions remain accessible

**Version History**:

* Every verdict has a complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions
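The re-evaluation and version-history behavior above, as a minimal sketch. The types and function names (VerdictVersion, verdict_as_of) are illustrative assumptions, not FactHarbor's actual API.

{{code language="python"}}
from dataclasses import dataclass
from datetime import date

@dataclass
class VerdictVersion:
    as_of: date     # when this verdict became current
    verdict: str
    reason: str     # why it was (re-)evaluated
    evidence: list  # the evidence linked to this version

# Append-only history for one scenario's verdict; old versions stay accessible.
history = [
    VerdictVersion(date(2024, 3, 1), "Uncertain", "initial AI research", ["study-A"]),
]

def re_evaluate(history, new_verdict, reason, evidence, today):
    """New evidence arrived: append a new version instead of overwriting."""
    history.append(VerdictVersion(today, new_verdict, reason, evidence))

def verdict_as_of(history, when):
    """Answer 'as of date X, what did we know?' (history is kept in date order)."""
    known = [v for v in history if v.as_of <= when]
    return known[-1] if known else None

re_evaluate(history, "Likely true", "new meta-analysis", ["study-A", "meta-B"],
            date(2025, 1, 10))

print(verdict_as_of(history, date(2024, 6, 1)).verdict)  # -> Uncertain
print(verdict_as_of(history, date(2025, 2, 1)).verdict)  # -> Likely true
{{/code}}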
...

**Anyone** - even without login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims are added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
...

Risk tiers determine review requirements and publication workflow (a configuration sketch appears after the federation answer below):

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "Human-Reviewed" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, scientific causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced, but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Experts can override
* Risk tiers are reviewed based on audit outcomes

...

== How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data** (see the sketch after this answer):

1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes
...
6. Local re-evaluation if needed

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
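A minimal sketch of the exchange and trust rules above, assuming hypothetical names (build_signed_bundle, TRUST_POLICY). FactHarbor's real protocol is not specified here, and real nodes would use public-key signatures rather than a shared secret.

{{code language="python"}}
import hashlib
import hmac
import json

# Hypothetical shared signing key, for this sketch only.
NODE_KEY = b"local-node-secret"

# The trust model above: the sender's trust level decides the import path.
TRUST_POLICY = {
    "trusted": "auto-import",
    "neutral": "import-with-review",
    "untrusted": "manual-only",
}

def build_signed_bundle(versions):
    """Step 2: bundle local versions and sign them so receivers can verify origin."""
    payload = json.dumps(versions, sort_keys=True)
    signature = hmac.new(NODE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def receive_bundle(bundle, sender_key, sender_trust):
    """Receiving side (sketch): verify the signature, then apply the trust policy."""
    expected = hmac.new(sender_key, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["signature"]):
        return "rejected: bad signature"
    action = TRUST_POLICY[sender_trust]
    if action == "auto-import":
        return f"imported: {json.loads(bundle['payload'])}"
    return f"queued: {action}"

bundle = build_signed_bundle([{"claim": "c-42", "verdict": "Likely true"}])
print(receive_bundle(bundle, NODE_KEY, "trusted"))  # imported automatically
print(receive_bundle(bundle, NODE_KEY, "neutral"))  # held for review
{{/code}}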
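And the configuration sketch referenced in the risk-tier answer above. The domains, publication rules, and audit rates come from that answer; the data structure and the suggest_tier helper are illustrative assumptions.

{{code language="python"}}
# Risk tiers as a configuration table; audit rates are the recommended
# sampling ranges, expressed as fractions of published content.
RISK_TIERS = {
    "A": {"domains": {"medical", "legal", "elections", "safety",
                      "security", "major financial"},
          "publication": "AI with warnings; expert review for Human-Reviewed",
          "audit_rate": (0.30, 0.50)},
    "B": {"domains": {"complex policy", "scientific causality", "contested issues"},
          "publication": "AI immediately, clearly labeled",
          "audit_rate": (0.10, 0.20)},
    "C": {"domains": {"definitions", "established facts", "historical data"},
          "publication": "AI by default",
          "audit_rate": (0.05, 0.10)},
}

def suggest_tier(domain, default="B"):
    """AKEL-style suggestion: match the claim's domain to a tier.
    Moderators and Experts can still override the result."""
    for tier, cfg in RISK_TIERS.items():
        if domain in cfg["domains"]:
            return tier
    return default

print(suggest_tier("medical"))      # -> A
print(suggest_tier("definitions"))  # -> C
{{/code}}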
...

**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:

* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

...

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations are caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

...

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations

...

* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]