Changes for page FAQ

Last modified by Robert Schaub on 2025/12/24 20:33

From version 3.1
edited by Robert Schaub
on 2025/12/15 16:56
Change comment: Imported from XAR
To version 2.1
edited by Robert Schaub
on 2025/12/14 23:02
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -6,70 +6,24 @@
6 6  
7 7  == How do facts get input into the system? ==
8 8  
9 -FactHarbor uses a hybrid model combining three complementary approaches:
9 +FactHarbor uses a hybrid model:
10 10  
11 -=== 1. AI-Generated Content (Scalable) ===
11 +**1. AI-Generated (scalable)**: System dynamically researches claims: extracting claims, generating structured sub-queries, performing a mandatory contradiction search (actively seeking counter-evidence, not just confirmations), and running quality gates. Published with clear "AI-Generated" labels.
12 12  
13 -**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)
13 +**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content, especially for high-risk domains (medical, legal). This content gets "Human-Reviewed" status and higher trust.
14 14  
15 -**Process**:
16 -* Extracts claims from submitted text
17 -* Generates structured sub-queries
18 -* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
19 -* Runs automated quality gates
20 -* Publishes with clear "AI-Generated" labels
15 +**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% high-risk, 10-20% medium-risk, 5-10% low-risk) in which expert review systematically improves AI research quality.
21 21  
22 -**Publication**: Mode 2 (public, AI-labeled) when quality gates pass
17 +**Why all three matter**:
23 23  
24 -**Purpose**: Handles scale — emerging claims get immediate responses with transparent reasoning
19 +* AI research handles scale: emerging claims get immediate responses with transparent reasoning
20 +* Expert authoring provides authoritative grounding for critical domains
21 +* Audit feedback ensures AI quality improves based on expert validation patterns
25 25  
26 -=== 2. Expert-Authored Content (Authoritative) ===
23 +Experts can author high-priority content directly, validate or edit AI outputs, or audit samples to improve system-wide performance, focusing their time where expertise matters most.
27 27  
28 -**What**: Domain experts directly author, edit, and validate content
25 +POC v1 demonstrates the AI research pipeline (fully automated with transparent reasoning); the full system will support all three pathways.
29 29  
30 -**Focus**: High-risk domains (medical, legal, safety-critical)
31 -
32 -**Publication**: Mode 3 ("Human-Reviewed" status) with expert attribution
33 -
34 -**Authority**: Tier A content requires expert approval
35 -
36 -**Purpose**: Provides authoritative grounding for critical domains where errors have serious consequences
37 -
38 -=== 3. Audit-Improved Quality (Continuous) ===
39 -
40 -**What**: Sampling audits where experts review AI-generated content
41 -
42 -**Rates**:
43 -* High-risk (Tier A): 30-50% sampling
44 -* Medium-risk (Tier B): 10-20% sampling
45 -* Low-risk (Tier C): 5-10% sampling
46 -
47 -**Impact**: Expert feedback systematically improves AI research quality
48 -
49 -**Purpose**: Ensures AI quality evolves based on expert validation patterns
50 -
51 -=== Why All Three Matter ===
52 -
53 -**Complementary Strengths**:
54 -* **AI research**: Scale and speed for emerging claims
55 -* **Expert authoring**: Authority and precision for critical domains
56 -* **Audit feedback**: Continuous quality improvement
57 -
58 -**Expert Time Optimization**:
59 -
60 -Experts can choose where to focus their time:
61 -* Author high-priority content directly
62 -* Validate and edit AI-generated outputs
63 -* Audit samples to improve system-wide AI performance
64 -
65 -This focuses expert time where domain expertise matters most while leveraging AI for scale.
66 -
67 -=== Current Status ===
68 -
69 -**POC v1**: Demonstrates the AI research pipeline (fully automated with transparent reasoning and quality gates)
70 -
71 -**Full System**: Will support all three pathways with integrated workflow
72 -
73 73  ----
74 74  
75 75  == What prevents FactHarbor from becoming another echo chamber? ==
... ... @@ -77,6 +77,7 @@
77 77  FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
78 78  
79 79  **Mandatory Contradiction Search**:
34 +
80 80  * AI must actively search for counter-evidence, not just confirmations
81 81  * System checks for echo chamber patterns in source clusters
82 82  * Flags tribal or ideological source clustering
... ... @@ -83,21 +83,25 @@
83 83  * Requires diverse perspectives across political/ideological spectrum
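The cluster checks described above can be sketched in a few lines. This is a minimal illustration, not FactHarbor's actual implementation: the `leaning` labels, thresholds, and the `diversity_flags` name are all assumptions, and a real system would classify sources upstream.

```python
from collections import Counter

def diversity_flags(sources, min_distinct=3, max_share=0.6):
    """Flag echo-chamber patterns in an evidence source set.

    Each source carries a 'leaning' label (assumed to come from an
    upstream source-classification step); thresholds are illustrative.
    """
    leanings = Counter(s["leaning"] for s in sources)
    total = sum(leanings.values())
    flags = []
    # Too few distinct perspectives represented at all
    if len(leanings) < min_distinct:
        flags.append("too-few-perspectives")
    # One ideological cluster dominates the evidence
    dominant, count = leanings.most_common(1)[0]
    if count / total > max_share:
        flags.append(f"cluster-dominated:{dominant}")
    return flags
```

For example, three left-leaning sources plus one centrist source would trigger both flags, prompting a wider counter-evidence search.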
84 84  
85 85  **Multiple Scenarios**:
41 +
86 86  * Claims are evaluated under different interpretations
87 87  * Reveals how assumptions change conclusions
88 88  * Makes disagreements understandable, not divisive
89 89  
90 90  **Transparent Reasoning**:
47 +
91 91  * All assumptions, definitions, and boundaries are explicit
92 92  * Evidence chains are traceable
93 93  * Uncertainty is quantified, not hidden
94 94  
95 95  **Audit System**:
53 +
96 96  * Human auditors check for bubble patterns
97 97  * Feedback loop improves AI search diversity
98 98  * Community can flag missing perspectives
99 99  
100 100  **Federation**:
59 +
101 101  * Multiple independent nodes with different perspectives
102 102  * No single entity controls "the truth"
103 103  * Cross-node contradiction detection
... ... @@ -109,20 +109,23 @@
109 109  This is exactly what FactHarbor is designed for:
110 110  
111 111  **Scenarios capture contexts**:
71 +
112 112  * Each scenario defines specific boundaries, definitions, and assumptions
113 113  * The same claim can have different verdicts in different scenarios
114 114  * Example: "Coffee is healthy" depends on:
115 - ** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
116 - ** Population (adults? pregnant women? people with heart conditions?)
117 - ** Consumption level (1 cup/day? 5 cups/day?)
118 - ** Time horizon (short-term? long-term?)
75 +** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
76 +** Population (adults? pregnant women? people with heart conditions?)
77 +** Consumption level (1 cup/day? 5 cups/day?)
78 +** Time horizon (short-term? long-term?)
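The coffee example above can be made concrete with a small data-structure sketch: one record per scenario, boundaries explicit, each with its own verdict. The field names and likelihood ranges are made-up placeholders for illustration, not FactHarbor's schema or real health findings.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One interpretation of a claim, with its boundaries made explicit."""
    claim: str
    definition: str
    population: str
    consumption: str
    horizon: str
    likelihood: tuple  # (low, high) estimated probability the claim holds

scenarios = [
    Scenario("Coffee is healthy", "reduces disease risk",
             "healthy adults", "1-3 cups/day", "long-term", (0.6, 0.8)),
    Scenario("Coffee is healthy", "safe at this intake",
             "pregnant women", "5 cups/day", "long-term", (0.1, 0.3)),
]

# Side-by-side view: same claim, different boundaries, different verdicts.
for s in scenarios:
    print(f"{s.population}, {s.consumption} ({s.definition}): {s.likelihood}")
```

The same claim string appears in both records; only the explicit boundaries differ, which is exactly what makes the differing verdicts comparable.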
119 119  
120 120  **Truth Landscape**:
81 +
121 121  * Shows all scenarios and their verdicts side-by-side
122 122  * Users see *why* interpretations differ
123 123  * No forced consensus when legitimate disagreement exists
124 124  
125 125  **Explicit Assumptions**:
87 +
126 126  * Every scenario states its assumptions clearly
127 127  * Users can compare how changing assumptions changes conclusions
128 128  * Makes context-dependence visible, not hidden
... ... @@ -132,6 +132,7 @@
132 132  == What makes FactHarbor different from traditional fact-checking sites? ==
133 133  
134 134  **Traditional Fact-Checking**:
97 +
135 135  * Binary verdicts: True / Mostly True / False
136 136  * Single interpretation chosen by fact-checker
137 137  * Often hides legitimate contextual differences
... ... @@ -138,6 +138,7 @@
138 138  * Limited ability to show *why* people disagree
139 139  
140 140  **FactHarbor**:
104 +
141 141  * **Multi-scenario**: Shows multiple valid interpretations
142 142  * **Likelihood-based**: Ranges with uncertainty, not binary labels
143 143  * **Transparent assumptions**: Makes boundaries and definitions explicit
... ... @@ -150,6 +150,7 @@
150 150  == How do you prevent manipulation or coordinated misinformation campaigns? ==
151 151  
152 152  **Quality Gates**:
117 +
153 153  * Automated checks before AI-generated content publishes
154 154  * Source quality verification
155 155  * Mandatory contradiction search
... ... @@ -156,11 +156,13 @@
156 156  * Bubble detection for coordinated campaigns
157 157  
158 158  **Audit System**:
124 +
159 159  * Stratified sampling catches manipulation patterns
160 160  * Expert auditors validate AI research quality
161 161  * Failed audits trigger immediate review
162 162  
163 163  **Transparency**:
130 +
164 164  * All reasoning chains are visible
165 165  * Evidence sources are traceable
166 166  * AKEL involvement clearly labeled
... ... @@ -167,11 +167,13 @@
167 167  * Version history preserved
168 168  
169 169  **Moderation**:
137 +
170 170  * Moderators handle abuse, spam, coordinated manipulation
171 171  * Content can be flagged by community
172 172  * Audit trail maintained even if content hidden
173 173  
174 174  **Federation**:
143 +
175 175  * Multiple nodes with independent governance
176 176  * No single point of control
177 177  * Cross-node contradiction detection
... ... @@ -184,6 +184,7 @@
184 184  FactHarbor is designed for evolving knowledge:
185 185  
186 186  **Automatic Re-evaluation**:
156 +
187 187  1. New evidence arrives
188 188  2. System detects affected scenarios and verdicts
189 189  3. AKEL proposes updated verdicts
... ... @@ -192,16 +192,19 @@
192 192  6. Old versions remain accessible
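The re-evaluation steps above amount to an append-only version history. A minimal sketch, assuming a hypothetical `history` mapping and passing the AKEL verdict step in as a plain function; none of these names come from the actual system:

```python
def reevaluate(history, scenario_id, new_evidence, propose_verdict):
    """Append a new verdict version when new evidence arrives.

    `history` maps scenario_id -> list of versions (oldest first).
    Old versions are never overwritten, so "as of date X" queries
    stay answerable. `propose_verdict` stands in for the AKEL step.
    """
    versions = history.setdefault(scenario_id, [])
    new_version = {
        "verdict": propose_verdict(new_evidence),
        "reason": "re-evaluation: new evidence",
        "evidence": new_evidence,
        "supersedes": len(versions) - 1 if versions else None,
    }
    versions.append(new_version)  # full history preserved
    return new_version
```

Because each version records its reason, linked evidence, and predecessor, the change is explained rather than hidden.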
193 193  
194 194  **Version History**:
165 +
195 195  * Every verdict has complete history
196 196  * Users can see "as of date X, what did we know?"
197 197  * Timeline shows how understanding evolved
198 198  
199 199  **Transparent Updates**:
171 +
200 200  * Reason for re-evaluation documented
201 201  * New evidence clearly linked
202 202  * Changes explained, not hidden
203 203  
204 204  **User Notifications**:
177 +
205 205  * Users following claims are notified of updates
206 206  * Can compare old vs new verdicts
207 207  * Can see which evidence changed conclusions
... ... @@ -213,6 +213,7 @@
213 213  **Anyone** - even without login:
214 214  
215 215  **Readers** (no login required):
189 +
216 216  * Browse and search all published content
217 217  * Submit text for analysis
218 218  * New claims added automatically unless duplicates exist
... ... @@ -219,6 +219,7 @@
219 219  * System deduplicates and normalizes
220 220  
221 221  **Contributors** (logged in):
196 +
222 222  * Everything Readers can do
223 223  * Submit evidence sources
224 224  * Suggest scenarios
... ... @@ -225,6 +225,7 @@
225 225  * Participate in discussions
226 226  
227 227  **Workflow**:
203 +
228 228  1. User submits text (as Reader or Contributor)
229 229  2. AKEL extracts claims
230 230  3. Checks for existing duplicates
... ... @@ -241,6 +241,7 @@
241 241  Risk tiers determine review requirements and publication workflow:
242 242  
243 243  **Tier A (High Risk)**:
220 +
244 244  * **Domains**: Medical, legal, elections, safety, security, major financial
245 245  * **Publication**: AI can publish with warnings; expert review is required for "Human-Reviewed" status
246 246  * **Audit rate**: Recommended 30-50%
... ... @@ -247,6 +247,7 @@
247 247  * **Why**: Potential for significant harm if wrong
248 248  
249 249  **Tier B (Medium Risk)**:
227 +
250 250  * **Domains**: Complex policy, science causality, contested issues
251 251  * **Publication**: AI can publish immediately with clear labeling
252 252  * **Audit rate**: Recommended 10-20%
... ... @@ -253,6 +253,7 @@
253 253  * **Why**: Nuanced but lower immediate harm risk
254 254  
255 255  **Tier C (Low Risk)**:
234 +
256 256  * **Domains**: Definitions, established facts, historical data
257 257  * **Publication**: AI publication default
258 258  * **Audit rate**: Recommended 5-10%
... ... @@ -259,6 +259,7 @@
259 259  * **Why**: Well-established, low controversy
260 260  
261 261  **Assignment**:
241 +
262 262  * AKEL suggests tier based on domain, keywords, impact
263 263  * Moderators and Experts can override
264 264  * Risk tiers reviewed based on audit outcomes
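Tier assignment and stratified audit sampling, as described above, can be sketched as follows. The keyword lists are illustrative, and the audit rates are simply the midpoints of the recommended ranges; a real assignment step would weigh domain, keywords, and impact together, with moderator override.

```python
import random

# Illustrative keyword lists; rates are midpoints of 30-50%, 10-20%, 5-10%.
TIER_KEYWORDS = {
    "A": ("medical", "legal", "election", "safety", "security"),
    "B": ("policy", "causality", "contested"),
}
AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def suggest_tier(text):
    """Suggest a risk tier from domain keywords; humans may override."""
    lowered = text.lower()
    for tier, words in TIER_KEYWORDS.items():
        if any(w in lowered for w in words):
            return tier
    return "C"  # default: low risk

def sampled_for_audit(tier, rng=random):
    """Stratified sampling: audit probability depends on the risk tier."""
    return rng.random() < AUDIT_RATES[tier]
```

So `suggest_tier("New election security measure")` yields `"A"`, and roughly 40% of such items would then be drawn for expert audit.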
... ... @@ -268,6 +268,7 @@
268 268  == How does federation work and why is it important? ==
269 269  
270 270  **Federation Model**:
251 +
271 271  * Multiple independent FactHarbor nodes
272 272  * Each node has own database, AKEL, governance
273 273  * Nodes exchange claims, scenarios, evidence, verdicts
... ... @@ -274,6 +274,7 @@
274 274  * No central authority
275 275  
276 276  **Why Federation Matters**:
258 +
277 277  * **Resilience**: No single point of failure or censorship
278 278  * **Autonomy**: Communities govern themselves
279 279  * **Scalability**: Add nodes to handle more users
... ... @@ -281,6 +281,7 @@
281 281  * **Trust diversity**: Multiple perspectives, not single truth source
282 282  
283 283  **How Nodes Exchange Data**:
266 +
284 284  1. Local node creates versions
285 285  2. Builds signed bundle
286 286  3. Pushes to trusted neighbor nodes
... ... @@ -289,6 +289,7 @@
289 289  6. Local re-evaluation if needed
290 290  
291 291  **Trust Model**:
275 +
292 292  * Trusted nodes → auto-import
293 293  * Neutral nodes → import with review
294 294  * Untrusted nodes → manual only
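The signed-bundle exchange and trust model above can be sketched like this. It is a toy: HMAC with a shared secret stands in for what would realistically be asymmetric signatures, and the node names and trust table are invented.

```python
import hashlib
import hmac
import json

# Trust levels from the model above; entries are placeholders.
TRUST = {"node-a": "trusted", "node-b": "neutral", "node-c": "untrusted"}

def sign_bundle(payload, key):
    """Serialize a bundle deterministically and sign it."""
    body = json.dumps(payload, sort_keys=True).encode()
    return body, hmac.new(key, body, hashlib.sha256).hexdigest()

def import_action(origin, body, signature, key):
    """Verify a received bundle, then decide handling by origin trust."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return "reject: bad signature"
    return {"trusted": "auto-import",
            "neutral": "import-with-review",
            "untrusted": "manual-only"}[TRUST[origin]]
```

A bundle from `node-a` verifies and auto-imports; the same bundle arriving from `node-b` would be queued for review instead, and a tampered body is rejected outright.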
... ... @@ -300,16 +300,19 @@
300 300  **Yes - and that's a feature, not a bug**:
301 301  
302 302  **Multiple Scenarios**:
287 +
303 303  * Experts can create different scenarios with different assumptions
304 304  * Each scenario gets its own verdict
305 305  * Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
306 306  
307 307  **Parallel Verdicts**:
293 +
308 308  * Same scenario, different expert interpretations
309 309  * Both verdicts visible with expert attribution
310 310  * No forced consensus
311 311  
312 312  **Transparency**:
299 +
313 313  * Expert reasoning documented
314 314  * Assumptions stated explicitly
315 315  * Evidence chains traceable
... ... @@ -316,6 +316,7 @@
316 316  * Users can evaluate competing expert opinions
317 317  
318 318  **Federation**:
306 +
319 319  * Different nodes can have different expert conclusions
320 320  * Cross-node branching allowed
321 321  * Users can see how conclusions vary across nodes
... ... @@ -327,6 +327,7 @@
327 327  **Multiple Safeguards**:
328 328  
329 329  **Quality Gate 4: Structural Integrity**:
318 +
330 330  * Fact-checking against sources
331 331  * No hallucinations allowed
332 332  * Logic chain must be valid and traceable
... ... @@ -333,6 +333,7 @@
333 333  * References must be accessible and verifiable
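One of the structural-integrity checks above (every reasoning step must cite an accessible source) can be sketched as a gate function. The `evidence`/`reasoning` draft shape and the function name are assumptions for illustration, not the actual AKEL format.

```python
def gate_structural_integrity(draft):
    """Check that every reasoning step cites a known evidence source.

    Returns a list of problems; an empty list means the gate passes.
    """
    evidence_ids = {e["id"] for e in draft["evidence"]}
    problems = []
    for i, step in enumerate(draft["reasoning"]):
        if not step.get("cites"):
            # An uncited step is a hallucination candidate.
            problems.append(f"step {i}: no citation")
        elif step["cites"] not in evidence_ids:
            problems.append(f"step {i}: cites unknown source {step['cites']!r}")
    return problems
```

A draft whose chain cites only sources present in its evidence set passes; a step citing a missing source, or citing nothing, blocks publication until fixed.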
334 334  
335 335  **Evidence Requirements**:
325 +
336 336  * Primary sources required
337 337  * Citations must be complete
338 338  * Sources must be accessible
... ... @@ -339,11 +339,13 @@
339 339  * Reliability scored
340 340  
341 341  **Audit System**:
332 +
342 342  * Human auditors check AI-generated content
343 343  * Hallucinations caught and fed back into training
344 344  * Patterns of errors trigger system improvements
345 345  
346 346  **Transparency**:
338 +
347 347  * All reasoning chains visible
348 348  * Sources linked
349 349  * Users can verify claims against sources
... ... @@ -350,6 +350,7 @@
350 350  * AKEL outputs clearly labeled
351 351  
352 352  **Human Oversight**:
345 +
353 353  * Tier A requires expert review for "Human-Reviewed" status
354 354  * Audit sampling catches errors
355 355  * Community can flag issues
... ... @@ -361,6 +361,7 @@
361 361  [ToDo: Business model and sustainability to be defined]
362 362  
363 363  Potential models under consideration:
357 +
364 364  * Non-profit foundation with grants and donations
365 365  * Institutional subscriptions (universities, research organizations, media)
366 366  * API access for third-party integrations
... ... @@ -377,5 +377,4 @@
377 377  * [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
378 378  * [[Automation>>FactHarbor.Specification.Automation.WebHome]]
379 379  * [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
380 -* [[Mission & Purpose>>FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]
381 -
374 +* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]