= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

----

== How do facts get input into the system? ==

FactHarbor uses a hybrid model combining three complementary approaches:

=== 1. AI-Generated Content (Scalable) ===

**What**: System dynamically researches claims using AKEL (AI Knowledge Extraction Layer)

**Process**:

* Extracts claims from submitted text
* Generates structured sub-queries
* Performs **mandatory contradiction search** (actively seeks counter-evidence, not just confirmations)
* Runs automated quality gates
* Publishes with clear "AI-Generated" labels

**Publication**: Mode 2 (public, AI-labeled) when quality gates pass

**Purpose**: Handles scale — emerging claims get immediate responses with transparent reasoning
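
A minimal Python sketch of how these stages could chain together. Every name here (##ResearchResult##, ##run_pipeline##, the stub search and gate functions) is an illustrative assumption, not the actual AKEL API:

{{code language="python"}}
"""Sketch of the AKEL research pipeline. Names and stubs are illustrative."""
from dataclasses import dataclass, field

@dataclass
class ResearchResult:
    claim: str
    supporting: list[str] = field(default_factory=list)
    contradicting: list[str] = field(default_factory=list)  # mandatory contradiction search
    label: str = "AI-Generated"  # Mode 2 content is always clearly labeled

def extract_claims(text: str) -> list[str]:
    # Stand-in: real claim extraction is an AKEL language-model step.
    return [s.strip() for s in text.split(".") if s.strip()]

def search_evidence(claim: str, stance: str) -> list[str]:
    # Stand-in for the structured sub-query search; the "contradict"
    # run is mandatory, not optional.
    return [f"{stance} source for: {claim}"]

def passes_quality_gates(result: ResearchResult) -> bool:
    # Gate sketch: a result that never searched for counter-evidence fails.
    return result.supporting is not None and result.contradicting is not None

def run_pipeline(submitted_text: str) -> list[ResearchResult]:
    published = []
    for claim in extract_claims(submitted_text):            # 1. extract claims
        result = ResearchResult(
            claim=claim,
            supporting=search_evidence(claim, "support"),       # 2. sub-queries
            contradicting=search_evidence(claim, "contradict"), # 3. counter-evidence
        )
        if passes_quality_gates(result):                    # 4. quality gates
            published.append(result)                        # 5. publish as Mode 2
    return published

for r in run_pipeline("Coffee is healthy. Bananas are radioactive."):
    print(r.label, "|", r.claim)
{{/code}}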

=== 2. Expert-Authored Content (Authoritative) ===

**What**: Domain experts directly author, edit, and validate content

**Focus**: High-risk domains (medical, legal, safety-critical)

**Publication**: Mode 3 ("Human-Reviewed" status) with expert attribution

**Authority**: Tier A content requires expert approval

**Purpose**: Provides authoritative grounding for critical domains where errors have serious consequences

=== 3. Audit-Improved Quality (Continuous) ===

**What**: Sampling audits where experts review AI-generated content

**Rates**:

* High-risk (Tier A): 30-50% sampling
* Medium-risk (Tier B): 10-20% sampling
* Low-risk (Tier C): 5-10% sampling

**Impact**: Expert feedback systematically improves AI research quality

**Purpose**: Ensures AI quality evolves based on expert validation patterns
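
The sampling rates above translate directly into a routing decision. A small sketch, assuming the midpoint of each range is used (the actual rate selection policy is not specified here):

{{code language="python"}}
import random

# Ranges taken from the list above; choosing the midpoint is an assumption.
AUDIT_RATES = {"A": (0.30, 0.50), "B": (0.10, 0.20), "C": (0.05, 0.10)}

def select_for_audit(tier: str) -> bool:
    """Decide whether one AI-generated item is routed to expert audit."""
    low, high = AUDIT_RATES[tier]
    return random.random() < (low + high) / 2

# Roughly 40% of Tier A items end up in the expert audit queue.
hits = sum(select_for_audit("A") for _ in range(10_000))
print(hits / 10_000)
{{/code}}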

=== Why All Three Matter ===

**Complementary Strengths**:

* **AI research**: Scale and speed for emerging claims
* **Expert authoring**: Authority and precision for critical domains
* **Audit feedback**: Continuous quality improvement

**Expert Time Optimization**:

Experts can choose where to focus their time:

* Author high-priority content directly
* Validate and edit AI-generated outputs
* Audit samples to improve system-wide AI performance

This focuses expert time where domain expertise matters most while leveraging AI for scale.

=== Current Status ===

**POC v1**: Demonstrates the AI research pipeline (fully automated with transparent reasoning and quality gates)

**Full System**: Will support all three pathways with integrated workflow

----

== What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum
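
One way such clustering could be flagged is a simple dominance check over source leanings. The leaning labels and the 80% threshold below are illustrative assumptions; this page does not specify the actual clustering signal:

{{code language="python"}}
from collections import Counter

def flags_echo_chamber(source_leanings: list[str], max_share: float = 0.8) -> bool:
    """Flag a source set dominated by one ideological cluster (sketch)."""
    if not source_leanings:
        return True  # no evidence at all is its own failure mode
    counts = Counter(source_leanings)
    dominant = counts.most_common(1)[0][1] / len(source_leanings)
    return dominant > max_share

print(flags_echo_chamber(["left", "left", "left", "left"]))     # True: clustered
print(flags_echo_chamber(["left", "right", "center", "left"]))  # False: diverse
{{/code}}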

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

----

== How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
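
The coffee example maps naturally onto a scenario record where each context dimension is an explicit field. Field names and the verdict strings below are placeholders, not FactHarbor outputs:

{{code language="python"}}
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One interpretation of a claim; every assumption is an explicit field."""
    claim: str
    definition: str     # what "healthy" means in this scenario
    population: str
    consumption: str
    time_horizon: str
    verdict: str        # likelihood range, assigned per scenario

scenarios = [
    Scenario("Coffee is healthy", "reduces disease risk", "healthy adults",
             "1 cup/day", "long-term", "placeholder: likely"),
    Scenario("Coffee is healthy", "safe during pregnancy", "pregnant women",
             "5 cups/day", "short-term", "placeholder: unlikely"),
]

# Same claim, different verdicts - because the assumptions differ.
for s in scenarios:
    print(f"{s.population}, {s.consumption}, {s.definition}: {s.verdict}")
{{/code}}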

**Truth Landscape**:

* Shows all scenarios and their verdicts side-by-side
* Users see //why// interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden

----

== What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show //why// people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

----

== How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns
* Expert auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

----

== What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Reviewers/experts validate
5. New verdict version published
6. Old versions remain accessible
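
A compact sketch of this loop over an append-only verdict store; class and callback names are assumptions:

{{code language="python"}}
class VerdictStore:
    """Toy append-only store: publishing a new version never deletes old ones."""
    def __init__(self):
        self.history: dict[str, list[str]] = {}

    def publish(self, claim: str, verdict: str) -> None:
        self.history.setdefault(claim, []).append(verdict)

    def current(self, claim: str) -> str:
        return self.history[claim][-1]

def on_new_evidence(store, claim, evidence, akel_propose, expert_validates):
    proposal = akel_propose(store.current(claim), evidence)  # AKEL proposes an update
    if expert_validates(proposal):                           # reviewers/experts validate
        store.publish(claim, proposal)                       # new version; old ones kept

store = VerdictStore()
store.publish("example claim", "uncertain (v1)")
on_new_evidence(store, "example claim", "new study",
                akel_propose=lambda old, ev: f"likely true, given {ev} (v2)",
                expert_validates=lambda proposal: True)
print(store.current("example claim"))    # v2 is now current
print(store.history["example claim"])    # v1 remains accessible
{{/code}}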

**Version History**:

* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved
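
The "as of date X" question is just a point-in-time lookup over dated versions. A minimal sketch, assuming each version carries a publication date:

{{code language="python"}}
import datetime as dt

# (published_at, verdict) pairs in chronological order; layout is an assumption.
versions = [
    (dt.date(2023, 1, 10), "uncertain"),
    (dt.date(2024, 6, 2), "likely true"),
]

def as_of(versions, when: dt.date):
    """Return the latest verdict published on or before the given date."""
    applicable = [v for published, v in versions if published <= when]
    return applicable[-1] if applicable else None

print(as_of(versions, dt.date(2023, 12, 31)))  # 'uncertain'
print(as_of(versions, dt.date(2025, 1, 1)))    # 'likely true'
{{/code}}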

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions

----

== Who can submit claims to FactHarbor? ==

**Anyone**, even without a login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if gates pass
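
Steps 3 and 4 (deduplication and normalization) are what keep the claim database from filling with near-duplicates. A toy sketch; real AKEL normalization is of course far richer than lowercasing:

{{code language="python"}}
def normalize(claim: str) -> str:
    # Toy normalization: collapse whitespace, lowercase, drop a trailing period.
    return " ".join(claim.lower().split()).rstrip(".")

def submit(text: str, known_claims: set[str]) -> list[str]:
    new_claims = []
    for raw in text.split("."):            # stand-in for AKEL claim extraction
        claim = normalize(raw)
        if claim and claim not in known_claims:   # skip existing duplicates
            known_claims.add(claim)
            new_claims.append(claim)       # would next get a tier, scenarios, gates
    return new_claims

known = {"coffee is healthy"}
print(submit("Coffee is healthy. Coffee cures cancer.", known))
# ['coffee cures cancer'] - the duplicate was dropped, the new claim proceeds
{{/code}}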

----

== What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "Human-Reviewed" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, scientific causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Experts can override
* Risk tiers reviewed based on audit outcomes
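
A sketch of this suggest-then-override flow. The keyword lists are illustrative assumptions; the real classifier weighs domain, keywords, and impact together:

{{code language="python"}}
TIER_KEYWORDS = {
    "A": ("vaccine", "drug", "election", "legal", "safety"),   # high risk
    "B": ("policy", "causes", "economy", "climate"),           # medium risk
}

def suggest_tier(claim: str) -> str:
    text = claim.lower()
    for tier in ("A", "B"):
        if any(keyword in text for keyword in TIER_KEYWORDS[tier]):
            return tier
    return "C"  # default: definitions, established facts, historical data

def assign_tier(claim: str, override: str | None = None) -> str:
    # Moderators and Experts can override the AKEL suggestion.
    return override or suggest_tier(claim)

print(assign_tier("This vaccine causes side effects"))  # 'A'
print(assign_tier("Paris is the capital of France"))    # 'C'
print(assign_tier("Paris is the capital of France", override="B"))  # 'B'
{{/code}}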

----

== How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds a signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed
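
A minimal sketch of steps 2 and 4, using an HMAC as a stand-in for the real signature scheme (which this page does not specify); the bundle fields and node name are made up for illustration:

{{code language="python"}}
import hashlib, hmac, json

def build_bundle(node_id: str, versions: list[dict], key: bytes) -> dict:
    # Step 2: serialize deterministically and sign.
    payload = json.dumps({"node": node_id, "versions": versions}, sort_keys=True)
    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def validate_bundle(bundle: dict, key: bytes) -> bool:
    # Step 4: verify the signature before any lineage checks or import.
    expected = hmac.new(key, bundle["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

key = b"key-shared-with-trusted-neighbor"
bundle = build_bundle("node-health-eu", [{"claim": "x", "version": 3}], key)
print(validate_bundle(bundle, key))                               # True: accept or branch
print(validate_bundle({**bundle, "signature": "tampered"}, key))  # False: reject
{{/code}}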

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
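
This table is small enough to transcribe directly; the fallback for unknown nodes is an added assumption (fail closed):

{{code language="python"}}
IMPORT_POLICY = {
    "trusted": "auto-import",
    "neutral": "import-with-review",
    "untrusted": "manual-only",
}

def import_action(node_trust: str) -> str:
    # Unknown trust levels fall back to manual-only (assumption: fail closed).
    return IMPORT_POLICY.get(node_trust, "manual-only")

print(import_action("trusted"))  # auto-import
print(import_action("unknown"))  # manual-only
{{/code}}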

----

== Can experts disagree in FactHarbor? ==

**Yes, and that's a feature, not a bug**:

**Multiple Scenarios**:

* Experts can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see //why// experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Expert reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

----

== What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable
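
A sketch of the core of such a gate: every statement must trace to a reachable source, so one unsourced statement fails the whole result. The ##Statement## shape and ##reachable## callback are assumptions:

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    source_url: str | None  # None means no source: a hallucination candidate

def passes_gate_4(statements: list[Statement], reachable) -> bool:
    """Illustrative check: every statement must cite a verifiable source."""
    return all(s.source_url and reachable(s.source_url) for s in statements)

statements = [
    Statement("Coffee lowers stroke risk", "https://example.org/study"),
    Statement("Everyone agrees on this", None),  # unsourced: fails the gate
]
print(passes_gate_4(statements, reachable=lambda url: True))  # False
{{/code}}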

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A requires expert review for "Human-Reviewed" status
* Audit sampling catches errors
* Community can flag issues

----

== How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: **Public benefit** mission takes priority over profit.

----

== Related Pages ==

* [[Requirements (Roles)>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>FactHarbor.Archive.FactHarbor V0\.9\.18 copy.Organisation V0\.9\.18.Core Problems FactHarbor Solves.WebHome]]