Changes for page FAQ

Last modified by Robert Schaub on 2025/12/24 20:33

From version 2.1
edited by Robert Schaub
on 2025/12/14 23:02
Change comment: There is no comment for this version
To version 3.1
edited by Robert Schaub
on 2025/12/15 16:56
Change comment: Imported from XAR

Summary

Details

Page properties
Content
... ... @@ -6,24 +6,70 @@
6 6  
7 7  == How do facts get input into the system? ==
8 8  
9 -FactHarbor uses a hybrid model:
9 +FactHarbor uses a hybrid model combining three complementary approaches:
10 10  
11 -**~1. **AI-Generated (scalable)**: System dynamically researches claims—extracting, generating structured sub-queries, performing mandatory contradiction search (actively seeking counter-evidence, not just confirmations), running quality gates. Published with clear "AI-Generated" labels.**
11 +=== 1. AI-Generated Content (Scalable) ===
12 12  
13 -**2. Expert-Authored (authoritative)**: Domain experts directly author, edit, and validate content—especially for high-risk domains (medical, legal). These get "Human-Reviewed" status and higher trust.
13 +**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)
14 14  
15 -**3. Audit-Improved (continuous quality)**: Sampling audits (30-50% high-risk, 5-10% low-risk) where expert reviews systematically improve AI research quality.
15 +**Process**:
16 +* Extracts claims from submitted text
17 +* Generates structured sub-queries
18 +* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
19 +* Runs automated quality gates
20 +* Publishes with clear "AI-Generated" labels
16 16  
17 -**Why both matter**:
22 +**Publication**: Mode 2 (public, AI-labeled) when quality gates pass
18 18  
19 -* AI research handles scale—emerging claims, immediate responses with transparent reasoning
20 -* Expert authoring provides authoritative grounding for critical domains
21 -* Audit feedback ensures AI quality improves based on expert validation patterns
24 +**Purpose**: Handles scale; emerging claims get immediate responses with transparent reasoning
22 22  
23 -Experts can author high-priority content directly, validate/edit AI outputs, or audit samples to improve system-wide performance—focusing their time where expertise matters most.
26 +=== 2. Expert-Authored Content (Authoritative) ===
24 24  
25 -POC v1 demonstrates the AI research pipeline (fully automated with transparent reasoning); full system supports all three pathways.
28 +**What**: Domain experts directly author, edit, and validate content
26 26  
30 +**Focus**: High-risk domains (medical, legal, safety-critical)
31 +
32 +**Publication**: Mode 3 ("Human-Reviewed" status) with expert attribution
33 +
34 +**Authority**: Tier A content requires expert approval
35 +
36 +**Purpose**: Provides authoritative grounding for critical domains where errors have serious consequences
37 +
38 +=== 3. Audit-Improved Quality (Continuous) ===
39 +
40 +**What**: Sampling audits where experts review AI-generated content
41 +
42 +**Rates**:
43 +* High-risk (Tier A): 30-50% sampling
44 +* Medium-risk (Tier B): 10-20% sampling
45 +* Low-risk (Tier C): 5-10% sampling
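The tiered sampling rates above can be sketched as a simple selection routine. This is a minimal sketch: the tier keys and rate midpoints follow the list, but the function name and item shape are illustrative assumptions, not FactHarbor's actual audit code.

```python
import random

# Illustrative audit rates: midpoints of the recommended ranges above.
# The exact rate within each range is a policy choice.
AUDIT_RATES = {
    "A": 0.40,   # high-risk: 30-50%
    "B": 0.15,   # medium-risk: 10-20%
    "C": 0.075,  # low-risk: 5-10%
}

def select_for_audit(items, rng=random):
    """Return the subset of AI-generated items sampled for expert audit.

    Each item is assumed to be a dict with a 'tier' key ('A', 'B', or 'C').
    """
    return [it for it in items if rng.random() < AUDIT_RATES[it["tier"]]]
```

Passing a seeded `random.Random` instance makes the sample reproducible for audit-trail purposes.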
46 +
47 +**Impact**: Expert feedback systematically improves AI research quality
48 +
49 +**Purpose**: Ensures AI quality evolves based on expert validation patterns
50 +
51 +=== Why All Three Matter ===
52 +
53 +**Complementary Strengths**:
54 +* **AI research**: Scale and speed for emerging claims
55 +* **Expert authoring**: Authority and precision for critical domains
56 +* **Audit feedback**: Continuous quality improvement
57 +
58 +**Expert Time Optimization**:
59 +
60 +Experts can choose where to focus their time:
61 +* Author high-priority content directly
62 +* Validate and edit AI-generated outputs
63 +* Audit samples to improve system-wide AI performance
64 +
65 +This focuses expert time where domain expertise matters most while leveraging AI for scale.
66 +
67 +=== Current Status ===
68 +
69 +**POC v1**: Demonstrates the AI research pipeline (fully automated with transparent reasoning and quality gates)
70 +
71 +**Full System**: Will support all three pathways with integrated workflow
72 +
27 27  ----
28 28  
29 29  == What prevents FactHarbor from becoming another echo chamber? ==
... ... @@ -31,7 +31,6 @@
31 31  FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
32 32  
33 33  **Mandatory Contradiction Search**:
34 -
35 35  * AI must actively search for counter-evidence, not just confirmations
36 36  * System checks for echo chamber patterns in source clusters
37 37  * Flags tribal or ideological source clustering
... ... @@ -38,25 +38,21 @@
38 38  * Requires diverse perspectives across political/ideological spectrum
39 39  
40 40  **Multiple Scenarios**:
41 -
42 42  * Claims are evaluated under different interpretations
43 43  * Reveals how assumptions change conclusions
44 44  * Makes disagreements understandable, not divisive
45 45  
46 46  **Transparent Reasoning**:
47 -
48 48  * All assumptions, definitions, and boundaries are explicit
49 49  * Evidence chains are traceable
50 50  * Uncertainty is quantified, not hidden
51 51  
52 52  **Audit System**:
53 -
54 54  * Human auditors check for bubble patterns
55 55  * Feedback loop improves AI search diversity
56 56  * Community can flag missing perspectives
57 57  
58 58  **Federation**:
59 -
60 60  * Multiple independent nodes with different perspectives
61 61  * No single entity controls "the truth"
62 62  * Cross-node contradiction detection
... ... @@ -68,23 +68,20 @@
68 68  This is exactly what FactHarbor is designed for:
69 69  
70 70  **Scenarios capture contexts**:
71 -
72 72  * Each scenario defines specific boundaries, definitions, and assumptions
73 73  * The same claim can have different verdicts in different scenarios
74 74  * Example: "Coffee is healthy" depends on:
75 -** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
76 -** Population (adults? pregnant women? people with heart conditions?)
77 -** Consumption level (1 cup/day? 5 cups/day?)
78 -** Time horizon (short-term? long-term?)
115 + ** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
116 + ** Population (adults? pregnant women? people with heart conditions?)
117 + ** Consumption level (1 cup/day? 5 cups/day?)
118 + ** Time horizon (short-term? long-term?)
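The context dimensions listed above can be modeled as explicit scenario fields, so the same claim carries one verdict per scenario. A minimal sketch, assuming field names and a likelihood-range representation that are illustrative only (the likelihood numbers are placeholders, not real verdicts, and this is not FactHarbor's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One interpretation of a claim, with its assumptions made explicit."""
    definition: str   # what "healthy" means in this scenario
    population: str   # who the claim applies to
    consumption: str  # dose / level assumed
    horizon: str      # time frame considered

@dataclass
class ScenarioVerdict:
    scenario: Scenario
    likelihood: tuple  # (low, high) probability range, not a binary label

# The same claim ("Coffee is healthy") gets a separate verdict per scenario;
# the ranges below are placeholder values for illustration.
verdicts = [
    ScenarioVerdict(
        Scenario("reduces disease risk", "healthy adults", "1 cup/day", "long-term"),
        (0.55, 0.75),
    ),
    ScenarioVerdict(
        Scenario("reduces disease risk", "pregnant women", "5 cups/day", "long-term"),
        (0.05, 0.20),
    ),
]
```

Because assumptions are fields rather than hidden context, two verdicts never silently contradict each other: they differ in a named, comparable dimension.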
79 79  
80 80  **Truth Landscape**:
81 -
82 82  * Shows all scenarios and their verdicts side-by-side
83 83  * Users see *why* interpretations differ
84 84  * No forced consensus when legitimate disagreement exists
85 85  
86 86  **Explicit Assumptions**:
87 -
88 88  * Every scenario states its assumptions clearly
89 89  * Users can compare how changing assumptions changes conclusions
90 90  * Makes context-dependence visible, not hidden
... ... @@ -94,7 +94,6 @@
94 94  == What makes FactHarbor different from traditional fact-checking sites? ==
95 95  
96 96  **Traditional Fact-Checking**:
97 -
98 98  * Binary verdicts: True / Mostly True / False
99 99  * Single interpretation chosen by fact-checker
100 100  * Often hides legitimate contextual differences
... ... @@ -101,7 +101,6 @@
101 101  * Limited ability to show *why* people disagree
102 102  
103 103  **FactHarbor**:
104 -
105 105  * **Multi-scenario**: Shows multiple valid interpretations
106 106  * **Likelihood-based**: Ranges with uncertainty, not binary labels
107 107  * **Transparent assumptions**: Makes boundaries and definitions explicit
... ... @@ -114,7 +114,6 @@
114 114  == How do you prevent manipulation or coordinated misinformation campaigns? ==
115 115  
116 116  **Quality Gates**:
117 -
118 118  * Automated checks before AI-generated content publishes
119 119  * Source quality verification
120 120  * Mandatory contradiction search
... ... @@ -121,13 +121,11 @@
121 121  * Bubble detection for coordinated campaigns
122 122  
123 123  **Audit System**:
124 -
125 125  * Stratified sampling catches manipulation patterns
126 126  * Expert auditors validate AI research quality
127 127  * Failed audits trigger immediate review
128 128  
129 129  **Transparency**:
130 -
131 131  * All reasoning chains are visible
132 132  * Evidence sources are traceable
133 133  * AKEL involvement clearly labeled
... ... @@ -134,13 +134,11 @@
134 134  * Version history preserved
135 135  
136 136  **Moderation**:
137 -
138 138  * Moderators handle abuse, spam, coordinated manipulation
139 139  * Content can be flagged by community
140 140  * Audit trail maintained even if content hidden
141 141  
142 142  **Federation**:
143 -
144 144  * Multiple nodes with independent governance
145 145  * No single point of control
146 146  * Cross-node contradiction detection
... ... @@ -153,7 +153,6 @@
153 153  FactHarbor is designed for evolving knowledge:
154 154  
155 155  **Automatic Re-evaluation**:
156 -
157 157  1. New evidence arrives
158 158  2. System detects affected scenarios and verdicts
159 159  3. AKEL proposes updated verdicts
... ... @@ -162,19 +162,16 @@
162 162  6. Old versions remain accessible
163 163  
164 164  **Version History**:
165 -
166 166  * Every verdict has complete history
167 167  * Users can see "as of date X, what did we know?"
168 168  * Timeline shows how understanding evolved
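The "as of date X, what did we know?" lookup described above amounts to a point-in-time query over a sorted version history. A dependency-free sketch, with illustrative names and timestamps:

```python
from bisect import bisect_right

def verdict_as_of(history, when):
    """Return the verdict that was current at time `when`.

    `history` is assumed to be a list of (timestamp, verdict) pairs
    sorted by timestamp; returns None if no verdict existed yet.
    """
    timestamps = [ts for ts, _ in history]
    idx = bisect_right(timestamps, when)
    return history[idx - 1][1] if idx else None
```

Keeping history append-only means old verdicts stay queryable exactly as the "old versions remain accessible" rule requires.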
169 169  
170 170  **Transparent Updates**:
171 -
172 172  * Reason for re-evaluation documented
173 173  * New evidence clearly linked
174 174  * Changes explained, not hidden
175 175  
176 176  **User Notifications**:
177 -
178 178  * Users following claims are notified of updates
179 179  * Can compare old vs new verdicts
180 180  * Can see which evidence changed conclusions
... ... @@ -186,7 +186,6 @@
186 186  **Anyone**, even without login:
187 187  
188 188  **Readers** (no login required):
189 -
190 190  * Browse and search all published content
191 191  * Submit text for analysis
192 192  * New claims added automatically unless duplicates exist
... ... @@ -193,7 +193,6 @@
193 193  * System deduplicates and normalizes
194 194  
195 195  **Contributors** (logged in):
196 -
197 197  * Everything Readers can do
198 198  * Submit evidence sources
199 199  * Suggest scenarios
... ... @@ -200,7 +200,6 @@
200 200  * Participate in discussions
201 201  
202 202  **Workflow**:
203 -
204 204  1. User submits text (as Reader or Contributor)
205 205  2. AKEL extracts claims
206 206  3. Checks for existing duplicates
... ... @@ -217,7 +217,6 @@
217 217  Risk tiers determine review requirements and publication workflow:
218 218  
219 219  **Tier A (High Risk)**:
220 -
221 221  * **Domains**: Medical, legal, elections, safety, security, major financial
222 222  * **Publication**: AI can publish with warnings, expert review required for "Human-Reviewed" status
223 223  * **Audit rate**: Recommended 30-50%
... ... @@ -224,7 +224,6 @@
224 224  * **Why**: Potential for significant harm if wrong
225 225  
226 226  **Tier B (Medium Risk)**:
227 -
228 228  * **Domains**: Complex policy, science causality, contested issues
229 229  * **Publication**: AI can publish immediately with clear labeling
229 229  * **Audit rate**: Recommended 10-20%
... ... @@ -231,7 +231,6 @@
231 231  * **Why**: Nuanced but lower immediate harm risk
232 232  
233 233  **Tier C (Low Risk)**:
234 -
235 235  * **Domains**: Definitions, established facts, historical data
236 236  * **Publication**: AI publication default
237 237  * **Audit rate**: Recommended 5-10%
... ... @@ -238,7 +238,6 @@
238 238  * **Why**: Well-established, low controversy
239 239  
240 240  **Assignment**:
241 -
242 242  * AKEL suggests tier based on domain, keywords, impact
243 243  * Moderators and Experts can override
244 244  * Risk tiers reviewed based on audit outcomes
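The tier-assignment rules above (AKEL suggests, humans override) can be sketched as a keyword heuristic with an override parameter. The keyword lists and function name are assumptions for illustration, not the actual AKEL classifier:

```python
# Illustrative domain keywords per tier; a real classifier would be richer.
TIER_KEYWORDS = {
    "A": ["medical", "legal", "election", "safety", "security", "financial"],
    "B": ["policy", "causality", "contested"],
}

def suggest_tier(text, override=None):
    """Suggest a risk tier from domain keywords.

    Moderators and Experts can pass `override` to replace the suggestion;
    human judgment always wins.
    """
    if override is not None:
        return override
    lowered = text.lower()
    for tier in ("A", "B"):  # check highest risk first
        if any(kw in lowered for kw in TIER_KEYWORDS[tier]):
            return tier
    return "C"  # default: low risk
```

Checking Tier A first means an ambiguous claim errs toward the stricter review path.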
... ... @@ -248,7 +248,6 @@
248 248  == How does federation work and why is it important? ==
249 249  
250 250  **Federation Model**:
251 -
252 252  * Multiple independent FactHarbor nodes
253 253  * Each node has own database, AKEL, governance
254 254  * Nodes exchange claims, scenarios, evidence, verdicts
... ... @@ -255,7 +255,6 @@
255 255  * No central authority
256 256  
257 257  **Why Federation Matters**:
258 -
259 259  * **Resilience**: No single point of failure or censorship
260 260  * **Autonomy**: Communities govern themselves
261 261  * **Scalability**: Add nodes to handle more users
... ... @@ -263,7 +263,6 @@
263 263  * **Trust diversity**: Multiple perspectives, not single truth source
264 264  
265 265  **How Nodes Exchange Data**:
266 -
267 267  1. Local node creates versions
268 268  2. Builds signed bundle
269 269  3. Pushes to trusted neighbor nodes
... ... @@ -272,7 +272,6 @@
272 272  6. Local re-evaluation if needed
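The build-signed-bundle and verify-on-receipt steps above can be sketched with a detached signature over a canonical serialization. HMAC stands in here for whatever signing scheme a real deployment would use, and all names are illustrative:

```python
import hashlib
import hmac
import json

def build_bundle(versions, node_key: bytes):
    """Serialize local versions and sign them (steps 1-2 above)."""
    payload = json.dumps(versions, sort_keys=True).encode()
    signature = hmac.new(node_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}

def verify_bundle(bundle, node_key: bytes) -> bool:
    """Receiving node checks the signature before importing (steps 4-5)."""
    expected = hmac.new(node_key, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])
```

A real federation would likely use asymmetric signatures (e.g. Ed25519) so receivers never hold the sender's secret; HMAC keeps this sketch dependency-free.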
273 273  
274 274  **Trust Model**:
275 -
276 276  * Trusted nodes → auto-import
277 277  * Neutral nodes → import with review
278 278  * Untrusted nodes → manual only
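The trust model above is a straightforward mapping from a peer's trust level to an import workflow; the level and action names below are illustrative:

```python
IMPORT_POLICY = {
    "trusted": "auto_import",
    "neutral": "import_with_review",
    "untrusted": "manual_only",
}

def import_action(trust_level: str) -> str:
    """Map a peer node's trust level to the import workflow above.

    Unknown levels fail closed to manual handling.
    """
    return IMPORT_POLICY.get(trust_level, "manual_only")
```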
... ... @@ -284,19 +284,16 @@
284 284  **Yes - and that's a feature, not a bug**:
285 285  
286 286  **Multiple Scenarios**:
287 -
288 288  * Experts can create different scenarios with different assumptions
289 289  * Each scenario gets its own verdict
290 290  * Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
291 291  
292 292  **Parallel Verdicts**:
293 -
294 294  * Same scenario, different expert interpretations
295 295  * Both verdicts visible with expert attribution
296 296  * No forced consensus
297 297  
298 298  **Transparency**:
299 -
300 300  * Expert reasoning documented
301 301  * Assumptions stated explicitly
302 302  * Evidence chains traceable
... ... @@ -303,7 +303,6 @@
303 303  * Users can evaluate competing expert opinions
304 304  
305 305  **Federation**:
306 -
307 307  * Different nodes can have different expert conclusions
308 308  * Cross-node branching allowed
309 309  * Users can see how conclusions vary across nodes
... ... @@ -315,7 +315,6 @@
315 315  **Multiple Safeguards**:
316 316  
317 317  **Quality Gate 4: Structural Integrity**:
318 -
319 319  * Fact-checking against sources
320 320  * No hallucinations allowed
321 321  * Logic chain must be valid and traceable
... ... @@ -322,7 +322,6 @@
322 322  * References must be accessible and verifiable
323 323  
324 324  **Evidence Requirements**:
325 -
326 326  * Primary sources required
327 327  * Citations must be complete
328 328  * Sources must be accessible
... ... @@ -329,13 +329,11 @@
329 329  * Reliability scored
330 330  
331 331  **Audit System**:
332 -
333 333  * Human auditors check AI-generated content
334 334  * Hallucinations caught and fed back into training
335 335  * Patterns of errors trigger system improvements
336 336  
337 337  **Transparency**:
338 -
339 339  * All reasoning chains visible
340 340  * Sources linked
341 341  * Users can verify claims against sources
... ... @@ -342,7 +342,6 @@
342 342  * AKEL outputs clearly labeled
343 343  
344 344  **Human Oversight**:
345 -
346 346  * Tier A requires expert review for "Human-Reviewed" status
347 347  * Audit sampling catches errors
348 348  * Community can flag issues
... ... @@ -354,7 +354,6 @@
354 354  [ToDo: Business model and sustainability to be defined]
355 355  
356 356  Potential models under consideration:
357 -
358 358  * Non-profit foundation with grants and donations
359 359  * Institutional subscriptions (universities, research organizations, media)
360 360  * API access for third-party integrations
... ... @@ -371,4 +371,5 @@
371 371  * [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
372 372  * [[Automation>>FactHarbor.Specification.Automation.WebHome]]
373 373  * [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]
374 -* [[Mission & Purpose>>FactHarbor.Organisation.Mission & Purpose.WebHome]]
380 +* [[Mission & Purpose>>FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]
381 +