Changes for page FAQ

Last modified by Robert Schaub on 2025/12/24 20:33

From version 3.1
edited by Robert Schaub
on 2025/12/15 16:56
Change comment: Imported from XAR
To version 3.2
edited by Robert Schaub
on 2025/12/16 21:39
Change comment: Renamed back-links.

Summary

Details

Page properties
Content
... ... @@ -13,6 +13,7 @@
13 13  **What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)
14 14  
15 15  **Process**:
16 +
16 16  * Extracts claims from submitted text
17 17  * Generates structured sub-queries
18 18  * Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
... ... @@ -40,6 +40,7 @@
40 40  **What**: Sampling audits where experts review AI-generated content
41 41  
42 42  **Rates**:
44 +
43 43  * High-risk (Tier A): 30-50% sampling
44 44  * Medium-risk (Tier B): 10-20% sampling
45 45  * Low-risk (Tier C): 5-10% sampling
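The tiered sampling rates above can be sketched as a simple probabilistic check. The tier labels and rate ranges come from the text; the midpoint rates and function names are assumptions for illustration.

```python
import random

# Midpoints of the sampling ranges quoted above; the exact rate chosen
# within each range is an assumption of this sketch.
AUDIT_RATES = {
    "A": 0.40,   # high-risk: 30-50%
    "B": 0.15,   # medium-risk: 10-20%
    "C": 0.075,  # low-risk: 5-10%
}

def select_for_audit(tier: str, rng: random.Random) -> bool:
    """Return True if a published item in this tier is sampled for expert review."""
    return rng.random() < AUDIT_RATES[tier]
```

Over many published items, roughly the configured fraction of each tier would be routed to human auditors.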
... ... @@ -51,6 +51,7 @@
51 51  === Why All Three Matter ===
52 52  
53 53  **Complementary Strengths**:
56 +
54 54  * **AI research**: Scale and speed for emerging claims
55 55  * **Expert authoring**: Authority and precision for critical domains
56 56  * **Audit feedback**: Continuous quality improvement
... ... @@ -58,6 +58,7 @@
58 58  **Expert Time Optimization**:
59 59  
60 60  Experts can choose where to focus their time:
64 +
61 61  * Author high-priority content directly
62 62  * Validate and edit AI-generated outputs
63 63  * Audit samples to improve system-wide AI performance
... ... @@ -77,6 +77,7 @@
77 77  FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
78 78  
79 79  **Mandatory Contradiction Search**:
84 +
80 80  * AI must actively search for counter-evidence, not just confirmations
81 81  * System checks for echo chamber patterns in source clusters
82 82  * Flags tribal or ideological source clustering
... ... @@ -83,21 +83,25 @@
83 83  * Requires diverse perspectives across political/ideological spectrum
84 84  
85 85  **Multiple Scenarios**:
91 +
86 86  * Claims are evaluated under different interpretations
87 87  * Reveals how assumptions change conclusions
88 88  * Makes disagreements understandable, not divisive
89 89  
90 90  **Transparent Reasoning**:
97 +
91 91  * All assumptions, definitions, and boundaries are explicit
92 92  * Evidence chains are traceable
93 93  * Uncertainty is quantified, not hidden
94 94  
95 95  **Audit System**:
103 +
96 96  * Human auditors check for bubble patterns
97 97  * Feedback loop improves AI search diversity
98 98  * Community can flag missing perspectives
99 99  
100 100  **Federation**:
109 +
101 101  * Multiple independent nodes with different perspectives
102 102  * No single entity controls "the truth"
103 103  * Cross-node contradiction detection
... ... @@ -109,20 +109,23 @@
109 109  This is exactly what FactHarbor is designed for:
110 110  
111 111  **Scenarios capture contexts**:
121 +
112 112  * Each scenario defines specific boundaries, definitions, and assumptions
113 113  * The same claim can have different verdicts in different scenarios
114 114  * Example: "Coffee is healthy" depends on:
115 - ** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
116 - ** Population (adults? pregnant women? people with heart conditions?)
117 - ** Consumption level (1 cup/day? 5 cups/day?)
118 - ** Time horizon (short-term? long-term?)
125 +** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
126 +** Population (adults? pregnant women? people with heart conditions?)
127 +** Consumption level (1 cup/day? 5 cups/day?)
128 +** Time horizon (short-term? long-term?)
119 119  
120 120  **Truth Landscape**:
131 +
121 121  * Shows all scenarios and their verdicts side-by-side
122 122  * Users see *why* interpretations differ
123 123  * No forced consensus when legitimate disagreement exists
124 124  
125 125  **Explicit Assumptions**:
137 +
126 126  * Every scenario states its assumptions clearly
127 127  * Users can compare how changing assumptions changes conclusions
128 128  * Makes context-dependence visible, not hidden
... ... @@ -132,6 +132,7 @@
132 132  == What makes FactHarbor different from traditional fact-checking sites? ==
133 133  
134 134  **Traditional Fact-Checking**:
147 +
135 135  * Binary verdicts: True / Mostly True / False
136 136  * Single interpretation chosen by fact-checker
137 137  * Often hides legitimate contextual differences
... ... @@ -138,6 +138,7 @@
138 138  * Limited ability to show *why* people disagree
139 139  
140 140  **FactHarbor**:
154 +
141 141  * **Multi-scenario**: Shows multiple valid interpretations
142 142  * **Likelihood-based**: Ranges with uncertainty, not binary labels
143 143  * **Transparent assumptions**: Makes boundaries and definitions explicit
... ... @@ -150,6 +150,7 @@
150 150  == How do you prevent manipulation or coordinated misinformation campaigns? ==
151 151  
152 152  **Quality Gates**:
167 +
153 153  * Automated checks before AI-generated content publishes
154 154  * Source quality verification
155 155  * Mandatory contradiction search
... ... @@ -156,11 +156,13 @@
156 156  * Bubble detection for coordinated campaigns
157 157  
158 158  **Audit System**:
174 +
159 159  * Stratified sampling catches manipulation patterns
160 160  * Expert auditors validate AI research quality
161 161  * Failed audits trigger immediate review
162 162  
163 163  **Transparency**:
180 +
164 164  * All reasoning chains are visible
165 165  * Evidence sources are traceable
166 166  * AKEL involvement clearly labeled
... ... @@ -167,11 +167,13 @@
167 167  * Version history preserved
168 168  
169 169  **Moderation**:
187 +
170 170  * Moderators handle abuse, spam, coordinated manipulation
171 171  * Content can be flagged by community
172 172  * Audit trail maintained even if content hidden
173 173  
174 174  **Federation**:
193 +
175 175  * Multiple nodes with independent governance
176 176  * No single point of control
177 177  * Cross-node contradiction detection
... ... @@ -184,6 +184,7 @@
184 184  FactHarbor is designed for evolving knowledge:
185 185  
186 186  **Automatic Re-evaluation**:
206 +
187 187  1. New evidence arrives
188 188  2. System detects affected scenarios and verdicts
189 189  3. AKEL proposes updated verdicts
... ... @@ -192,16 +192,19 @@
192 192  6. Old versions remain accessible
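The re-evaluation workflow above can be sketched minimally; all names here are hypothetical, and the review steps between proposal and publication are not shown.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim_id: str
    text: str
    version: int = 1
    history: list = field(default_factory=list)  # old versions remain accessible

    def revise(self, new_text: str, reason: str) -> None:
        # Preserve the prior version before applying the update.
        self.history.append((self.version, self.text, reason))
        self.version += 1
        self.text = new_text

def on_new_evidence(verdicts, affected_claim_ids, proposed_texts):
    """Detect affected verdicts and apply proposed updates (a sketch of
    the detection and proposal steps; review is omitted here)."""
    for v in verdicts:
        if v.claim_id in affected_claim_ids:
            v.revise(proposed_texts[v.claim_id], reason="new evidence")
```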
193 193  
194 194  **Version History**:
215 +
195 195  * Every verdict has complete history
196 196  * Users can see "as of date X, what did we know?"
197 197  * Timeline shows how understanding evolved
198 198  
199 199  **Transparent Updates**:
221 +
200 200  * Reason for re-evaluation documented
201 201  * New evidence clearly linked
202 202  * Changes explained, not hidden
203 203  
204 204  **User Notifications**:
227 +
205 205  * Users following claims are notified of updates
206 206  * Can compare old vs new verdicts
207 207  * Can see which evidence changed conclusions
... ... @@ -213,6 +213,7 @@
213 213  **Anyone** - even without login:
214 214  
215 215  **Readers** (no login required):
239 +
216 216  * Browse and search all published content
217 217  * Submit text for analysis
218 218  * New claims added automatically unless duplicates exist
... ... @@ -219,6 +219,7 @@
219 219  * System deduplicates and normalizes
220 220  
221 221  **Contributors** (logged in):
246 +
222 222  * Everything Readers can do
223 223  * Submit evidence sources
224 224  * Suggest scenarios
... ... @@ -225,6 +225,7 @@
225 225  * Participate in discussions
226 226  
227 227  **Workflow**:
253 +
228 228  1. User submits text (as Reader or Contributor)
229 229  2. AKEL extracts claims
230 230  3. Checks for existing duplicates
... ... @@ -241,6 +241,7 @@
241 241  Risk tiers determine review requirements and publication workflow:
242 242  
243 243  **Tier A (High Risk)**:
270 +
244 244  * **Domains**: Medical, legal, elections, safety, security, major financial
245 245  * **Publication**: AI can publish with warnings, expert review required for "Human-Reviewed" status
246 246  * **Audit rate**: Recommended 30-50%
... ... @@ -247,6 +247,7 @@
247 247  * **Why**: Potential for significant harm if wrong
248 248  
249 249  **Tier B (Medium Risk)**:
277 +
250 250  * **Domains**: Complex policy, science causality, contested issues
251 251  * **Publication**: AI can publish immediately with clear labeling
252 252  * **Audit rate**: Recommended 10-20%
... ... @@ -253,6 +253,7 @@
253 253  * **Why**: Nuanced but lower immediate harm risk
254 254  
255 255  **Tier C (Low Risk)**:
284 +
256 256  * **Domains**: Definitions, established facts, historical data
257 257  * **Publication**: AI publication default
258 258  * **Audit rate**: Recommended 5-10%
... ... @@ -259,6 +259,7 @@
259 259  * **Why**: Well-established, low controversy
260 260  
261 261  **Assignment**:
291 +
262 262  * AKEL suggests tier based on domain, keywords, impact
263 263  * Moderators and Experts can override
264 264  * Risk tiers reviewed based on audit outcomes
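The assignment rule above (AKEL suggests, humans can override) can be sketched as a keyword heuristic. The keyword sets and function are hypothetical; the text does not specify AKEL's actual heuristics.

```python
# Illustrative keyword sets drawn from the tier domain lists above.
TIER_KEYWORDS = {
    "A": {"medical", "legal", "election", "safety", "security"},
    "B": {"policy", "causality", "contested"},
}

def suggest_tier(text, override=None):
    """Return a risk tier; a moderator/expert override always wins."""
    if override:
        return override
    words = set(text.lower().split())
    for tier in ("A", "B"):
        if words & TIER_KEYWORDS[tier]:
            return tier
    return "C"  # default: low risk
```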
... ... @@ -268,6 +268,7 @@
268 268  == How does federation work and why is it important? ==
269 269  
270 270  **Federation Model**:
301 +
271 271  * Multiple independent FactHarbor nodes
272 272  * Each node has own database, AKEL, governance
273 273  * Nodes exchange claims, scenarios, evidence, verdicts
... ... @@ -274,6 +274,7 @@
274 274  * No central authority
275 275  
276 276  **Why Federation Matters**:
308 +
277 277  * **Resilience**: No single point of failure or censorship
278 278  * **Autonomy**: Communities govern themselves
279 279  * **Scalability**: Add nodes to handle more users
... ... @@ -281,6 +281,7 @@
281 281  * **Trust diversity**: Multiple perspectives, not single truth source
282 282  
283 283  **How Nodes Exchange Data**:
316 +
284 284  1. Local node creates versions
285 285  2. Builds signed bundle
286 286  3. Pushes to trusted neighbor nodes
... ... @@ -289,6 +289,7 @@
289 289  6. Local re-evaluation if needed
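The signed-bundle exchange above can be sketched as follows. The text does not name a signature scheme, so a shared-secret HMAC stands in for it here; all function names are illustrative.

```python
import hmac
import hashlib
import json

def build_bundle(versions: list, node_id: str, key: bytes) -> dict:
    """Serialize local versions and attach an integrity signature."""
    payload = json.dumps({"node": node_id, "versions": versions}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_bundle(bundle: dict, key: bytes) -> bool:
    """Receiving node validates the bundle before importing versions."""
    expected = hmac.new(key, bundle["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])
```

A real federation would likely use asymmetric signatures so receivers need only a node's public key.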
290 290  
291 291  **Trust Model**:
325 +
292 292  * Trusted nodes → auto-import
293 293  * Neutral nodes → import with review
294 294  * Untrusted nodes → manual only
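The trust mapping above translates directly into an import-policy lookup; the enum and return strings are hypothetical names, but the three-way mapping is taken from the text.

```python
from enum import Enum

class Trust(Enum):
    TRUSTED = "trusted"
    NEUTRAL = "neutral"
    UNTRUSTED = "untrusted"

def import_policy(sender: Trust) -> str:
    """Decide how to handle incoming versions based on the sender's trust level."""
    if sender is Trust.TRUSTED:
        return "auto-import"
    if sender is Trust.NEUTRAL:
        return "import-with-review"
    return "manual-only"
```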
... ... @@ -300,16 +300,19 @@
300 300  **Yes - and that's a feature, not a bug**:
301 301  
302 302  **Multiple Scenarios**:
337 +
303 303  * Experts can create different scenarios with different assumptions
304 304  * Each scenario gets its own verdict
305 305  * Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
306 306  
307 307  **Parallel Verdicts**:
343 +
308 308  * Same scenario, different expert interpretations
309 309  * Both verdicts visible with expert attribution
310 310  * No forced consensus
311 311  
312 312  **Transparency**:
349 +
313 313  * Expert reasoning documented
314 314  * Assumptions stated explicitly
315 315  * Evidence chains traceable
... ... @@ -316,6 +316,7 @@
316 316  * Users can evaluate competing expert opinions
317 317  
318 318  **Federation**:
356 +
319 319  * Different nodes can have different expert conclusions
320 320  * Cross-node branching allowed
321 321  * Users can see how conclusions vary across nodes
... ... @@ -327,6 +327,7 @@
327 327  **Multiple Safeguards**:
328 328  
329 329  **Quality Gate 4: Structural Integrity**:
368 +
330 330  * Fact-checking against sources
331 331  * No hallucinations allowed
332 332  * Logic chain must be valid and traceable
... ... @@ -333,6 +333,7 @@
333 333  * References must be accessible and verifiable
334 334  
335 335  **Evidence Requirements**:
375 +
336 336  * Primary sources required
337 337  * Citations must be complete
338 338  * Sources must be accessible
... ... @@ -339,11 +339,13 @@
339 339  * Reliability scored
340 340  
341 341  **Audit System**:
382 +
342 342  * Human auditors check AI-generated content
343 343  * Hallucinations caught and fed back into training
344 344  * Patterns of errors trigger system improvements
345 345  
346 346  **Transparency**:
388 +
347 347  * All reasoning chains visible
348 348  * Sources linked
349 349  * Users can verify claims against sources
... ... @@ -350,6 +350,7 @@
350 350  * AKEL outputs clearly labeled
351 351  
352 352  **Human Oversight**:
395 +
353 353  * Tier A requires expert review for "Human-Reviewed" status
354 354  * Audit sampling catches errors
355 355  * Community can flag issues
... ... @@ -361,6 +361,7 @@
361 361  [ToDo: Business model and sustainability to be defined]
362 362  
363 363  Potential models under consideration:
407 +
364 364  * Non-profit foundation with grants and donations
365 365  * Institutional subscriptions (universities, research organizations, media)
366 366  * API access for third-party integrations
... ... @@ -377,5 +377,4 @@
377 377  * [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
378 378  * [[Automation>>FactHarbor.Specification.Automation.WebHome]]
379 379  * [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
380 -* [[Mission & Purpose>>FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]
381 -
424 +* [[Mission & Purpose>>FactHarbor.Organisation V0\.9\.18.Core Problems FactHarbor Solves.WebHome]]