Version 1.5 by Robert Schaub on 2025/12/22 14:38

= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

== 1. How do claims get evaluated in FactHarbor? ==

=== 1.1 User Submission ===

**Who**: Anyone can submit claims
**Process**: User submits claim text + source URLs
**Speed**: Typically <20 seconds to verdict

=== 1.2 AKEL Processing (Automated) ===

**What**: AI Knowledge Extraction Layer analyzes claim
**Steps**:

* Parse claim into testable components
* Extract evidence from provided sources
* Score source credibility
* Generate verdict with confidence level
* Assign risk tier
* Publish automatically
**Authority**: AKEL makes all content decisions
**Scale**: Can process millions of claims
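The steps above can be sketched as a linear pipeline. Everything below is illustrative: the function names, fields, and placeholder scores are assumptions for this sketch, not FactHarbor's actual AKEL implementation.

```python
from dataclasses import dataclass

# Hypothetical miniature of the AKEL steps above; all names and values
# are stand-ins, not FactHarbor's real API.

@dataclass
class Verdict:
    claim: str
    components: list       # testable sub-claims parsed from the text
    source_scores: dict    # url -> credibility score
    likelihood: float      # verdict expressed as a likelihood, not a binary label
    confidence: float      # AKEL's confidence in its own analysis
    risk_tier: str         # "A" (high) / "B" (medium) / "C" (low)

def evaluate(claim: str, source_urls: list) -> Verdict:
    components = [part.strip() for part in claim.split(" and ")]  # naive parse stand-in
    source_scores = {url: 0.5 for url in source_urls}             # placeholder scoring
    likelihood, confidence = 0.5, 0.5                             # placeholder verdict
    return Verdict(claim, components, source_scores,
                   likelihood, confidence, risk_tier="C")
```

Because every step is automated, the same function can run over millions of claims; humans tune the pipeline rather than approve each result.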

=== 1.3 Continuous Improvement (Human Role) ===

**What**: Humans improve the system, not individual verdicts
**Activities**:

* Monitor aggregate performance metrics
* Identify systematic errors
* Propose algorithm improvements
* Update policies and rules
* Test changes before deployment
**NOT**: Reviewing individual claims for approval
**Focus**: Fix the system, not the data

=== 1.4 Exception Handling ===

**When AKEL flags for review**:

* Low confidence verdict
* Detected manipulation attempt
* Unusual pattern requiring attention
**Moderator role**:
* Reviews flagged items
* Takes action on abuse/manipulation
* Proposes detection improvements
* Does NOT override verdicts

=== 1.5 Why This Model Works ===

**Scale**: Automation handles volume humans cannot
**Consistency**: Same rules applied uniformly
**Transparency**: Algorithms can be audited
**Improvement**: Systematic fixes benefit all claims

== 2. What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across political/ideological spectrum
**Multiple Scenarios**:
* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive
**Transparent Reasoning**:
* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden
**Audit System**:
* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives
**Federation**:
* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection
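The source-clustering check above can be approximated as a simple dominance test: does one cluster supply most of the evidence? The `cluster` field and the 0.7 threshold are assumptions for illustration; real bubble detection would be considerably more sophisticated.

```python
from collections import Counter

def flags_echo_chamber(sources: list, threshold: float = 0.7) -> bool:
    """Flag an evidence pool dominated by a single ideological cluster.

    Each source dict carries a 'cluster' label (a hypothetical field);
    this is only a sketch of FactHarbor's bubble-detection idea.
    """
    if not sources:
        return False
    counts = Counter(src["cluster"] for src in sources)
    dominant_share = counts.most_common(1)[0][1] / len(sources)
    return dominant_share >= threshold
```

A flagged claim would then trigger a wider contradiction search rather than publication from a one-sided source pool.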

== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:
**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
**Truth Landscape**:
* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists
**Explicit Assumptions**:
* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden
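A truth landscape can be pictured as a set of scenarios, each carrying its own assumptions and verdict. The fields below are hypothetical, and the verdicts are placeholders for the coffee example, not actual findings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    claim: str
    definition: str     # what "healthy" means in this scenario
    population: str
    consumption: str
    verdict: str        # placeholder verdict label

# Illustrative landscape for the coffee example; verdicts are invented.
landscape = [
    Scenario("Coffee is healthy", "reduces disease risk",
             "adults", "1 cup/day", "likely true"),
    Scenario("Coffee is healthy", "safe for this population",
             "pregnant women", "5 cups/day", "likely false"),
]

def legitimate_disagreement(scenarios: list) -> bool:
    """True when the same claim yields different verdicts across scenarios."""
    return len({s.verdict for s in scenarios}) > 1
```

When `legitimate_disagreement` holds, the landscape shows both scenarios side-by-side instead of forcing one verdict.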

== 4. What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree
**FactHarbor**:
* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns
**Audit System**:
* Stratified sampling catches manipulation patterns
* Trusted Contributor auditors validate AI research quality
* Failed audits trigger immediate review
**Transparency**:
* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved
**Moderation**:
* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden
**Federation**:
* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

== 6. What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:
**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Contributors/experts validate
5. New verdict version published
6. Old versions remain accessible
**Version History**:

* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved
**Transparent Updates**:
* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden
**User Notifications**:
* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions
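The update flow above amounts to append-only versioning: new verdicts are added, old ones are never overwritten. The dictionary layout and field names here are assumptions for illustration.

```python
def reevaluate(history: list, new_evidence: dict) -> list:
    """Append a new verdict version; never overwrite old ones.

    Each version records its evidence and the reason for the change, so
    "as of date X, what did we know?" stays answerable. Field names are
    hypothetical.
    """
    current = history[-1]
    updated = {
        "version": current["version"] + 1,
        "likelihood": new_evidence["proposed_likelihood"],
        "evidence": current["evidence"] + [new_evidence["id"]],
        "reason": "re-evaluation triggered by " + new_evidence["id"],
    }
    return history + [updated]   # old versions remain accessible
```

Keeping the full list is what makes the timeline and old-vs-new comparisons in the section above possible.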

== 7. Who can submit claims to FactHarbor? ==

**Anyone** - even without login:
**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes
**Contributors** (logged in):
* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions
**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if it passes
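Steps 3-4 of the workflow can be sketched as normalize-then-deduplicate. The normalization rule and the storage model (a dict keyed by normalized text) are illustrative assumptions.

```python
import re

def normalize(claim: str) -> str:
    """Naive normalization stand-in: lowercase, trim, collapse whitespace."""
    return re.sub(r"\s+", " ", claim.strip().lower())

def submit(claim: str, store: dict) -> str:
    """Return the canonical claim id, creating one only for new claims.

    'store' maps normalized text to a claim id (hypothetical storage model).
    """
    key = normalize(claim)
    if key not in store:
        store[key] = "claim-%d" % (len(store) + 1)
    return store[key]
```

Resubmitting a trivially reworded claim therefore lands on the existing record instead of creating a duplicate.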

== 8. What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:
**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "AKEL-Generated" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong
**Tier B (Medium Risk)**:
* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk
**Tier C (Low Risk)**:
* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publishes by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy
**Assignment**:
* AKEL suggests tier based on domain, keywords, impact
* Moderators and Trusted Contributors can override
* Risk tiers reviewed based on audit outcomes
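The tier rules above can be read as a lookup table. The domain keywords and audit rates mirror the examples and recommendations in this section, but exact-string matching is an illustrative simplification of AKEL's tier suggestion.

```python
# Domains per tier, taken from the examples above; anything unmatched
# falls through to Tier C, the low-risk default.
TIER_DOMAINS = {
    "A": {"medical", "legal", "elections", "safety", "security", "major financial"},
    "B": {"complex policy", "science causality", "contested issues"},
}
AUDIT_RATE = {"A": (0.30, 0.50), "B": (0.10, 0.20), "C": (0.05, 0.10)}

def suggest_tier(domain: str) -> str:
    for tier, domains in TIER_DOMAINS.items():
        if domain in domains:
            return tier
    return "C"   # definitions, established facts, historical data
```

Moderators and Trusted Contributors could then override the suggestion, as the Assignment rules state.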

== 9. How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has own database, AKEL, governance
* Nodes exchange claims, scenarios, evidence, verdicts
* No central authority
**Why Federation Matters**:
* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not single truth source
**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed
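Steps 2 and 4 of the exchange can be mimicked with a signed payload. Real federation would presumably use asymmetric signatures plus lineage metadata; the shared-key HMAC below is only a compact stand-in for the sign/validate handshake.

```python
import hashlib
import hmac
import json

def build_bundle(versions: list, key: bytes) -> dict:
    """Serialize local versions and attach a signature (step 2 above)."""
    payload = json.dumps(versions, sort_keys=True)
    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def validate_bundle(bundle: dict, key: bytes) -> bool:
    """Remote-side signature check (step 4 above)."""
    expected = hmac.new(key, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])
```

A bundle whose payload was altered in transit fails validation, so the remote node rejects it before any import or branching happens.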
**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
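The trust levels map directly onto import policies. Defaulting unknown nodes to manual handling is my assumption here, not something the list above states.

```python
# Trust level -> import policy, per the list above.
IMPORT_POLICY = {
    "trusted": "auto-import",
    "neutral": "import-with-review",
    "untrusted": "manual-only",
}

def import_action(node_trust: str) -> str:
    # Assumption: anything unrecognized gets the most restrictive policy.
    return IMPORT_POLICY.get(node_trust, "manual-only")
```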

== 10. Can experts disagree in FactHarbor? ==

**Yes - and that's a feature, not a bug**:
**Multiple Scenarios**:

* Trusted Contributors can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
**Parallel Verdicts**:
* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus
**Transparency**:
* Trusted Contributor reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions
**Federation**:
* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

== 11. What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:
**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable
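Part of Gate 4 can be expressed as a containment check: every citation in an AI-generated output must point into the verified evidence set, and a logic chain must be present. The dictionary layout is hypothetical, and both checks are simplifications of the gate described above.

```python
def passes_structural_gate(output: dict) -> bool:
    """Reject outputs that cite sources absent from the evidence set,
    or that lack a logic chain. Illustrative checks only."""
    cited = set(output["citations"])
    verified = set(output["evidence"])
    no_hallucinated_refs = cited <= verified   # every citation must be a real source
    has_logic_chain = len(output["logic_chain"]) > 0
    return no_hallucinated_refs and has_logic_chain
```

An output that fails the gate would be blocked from publication rather than merely labeled.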
**Evidence Requirements**:
* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored
**Audit System**:
* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements
**Transparency**:
* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled
**Human Oversight**:
* Tier A marked as highest risk
* Audit sampling catches errors
* Community can flag issues

== 12. How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]
Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services
Core principle: **Public benefit** mission takes priority over profit.

== 13. Related Pages ==

* [[Requirements (Roles)>>Test.FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Test.FactHarbor pre13 V0\.9\.70.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Test.FactHarbor pre13 V0\.9\.70.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>Test.FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Test.FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]

== 20. Glossary / Key Terms ==

=== Phase 0 vs POC v1 ===

These terms refer to the same stage of FactHarbor's development:

* **Phase 0** - Organisational perspective: Pre-alpha stage with founder-led governance
* **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication
Both describe the current development stage where the platform is being built and initially validated.

=== Beta 0 ===

The next development stage after POC, featuring:

* External testers
* Basic federation experiments
* Enhanced automation

=== Release 1.0 ===

The first public release featuring:

* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure
338 The first public release featuring:
339
340 * Full federation support
341 * 2000+ concurrent users
342 * Production-grade infrastructure