Wiki source code of FAQ

Last modified by Robert Schaub on 2026/02/08 21:20

= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

== 1. How do claims get evaluated in FactHarbor? ==

=== 1.1 User Submission ===

**Who**: Anyone can submit claims
**Process**: User submits claim text + source URLs
**Speed**: Typically <20 seconds to verdict

=== 1.2 AKEL Processing (Automated) ===

**What**: AI Knowledge Extraction Layer analyzes claim
**Steps**:

* Parse claim into testable components
* Extract evidence from provided sources
* Score source credibility
* Generate verdict with confidence level
* Assign risk tier
* Publish automatically

**Authority**: AKEL makes all content decisions
**Scale**: Can process millions of claims
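As a rough illustration, the automated steps above could be wired together like this. Every function, score, and threshold here is an invented stand-in for this sketch, not the actual AKEL implementation:

```python
# Hypothetical sketch of the AKEL pipeline described above.
# The helper implementations are trivial placeholders, not real logic.

def parse_claim(text):
    # Split a claim into testable components (here: naive sentence split).
    return [part.strip() for part in text.split(".") if part.strip()]

def score_source(url):
    # Placeholder credibility score in [0, 1]; a real system would
    # consult a source-reliability database.
    return 0.9 if url.endswith(".gov") else 0.5

def evaluate_claim(text, source_urls):
    components = parse_claim(text)
    credibility = {url: score_source(url) for url in source_urls}
    # Toy confidence: the mean credibility of the provided sources.
    confidence = sum(credibility.values()) / len(credibility)
    verdict = "supported" if confidence >= 0.7 else "uncertain"
    tier = "A" if "vaccine" in text.lower() else "C"  # toy risk tiering
    # In the described design this result is published automatically.
    return {"components": components, "verdict": verdict,
            "confidence": round(confidence, 2), "tier": tier}

result = evaluate_claim("Coffee reduces disease risk.", ["https://example.gov"])
```

The point of the sketch is the ordering: parsing, evidence scoring, verdict generation, and tier assignment all happen before automatic publication, with no human approval step in the loop.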

=== 1.3 Continuous Improvement (Human Role) ===

**What**: Humans improve the system, not individual verdicts
**Activities**:

* Monitor aggregate performance metrics
* Identify systematic errors
* Propose algorithm improvements
* Update policies and rules
* Test changes before deployment

**NOT**: Reviewing individual claims for approval
**Focus**: Fix the system, not the data

=== 1.4 Exception Handling ===

**When AKEL flags for review**:

* Low confidence verdict
* Detected manipulation attempt
* Unusual pattern requiring attention

**Moderator role**:

* Reviews flagged items
* Takes action on abuse/manipulation
* Proposes detection improvements
* Does NOT override verdicts
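The flagging rules above can be summarized in a few lines. The threshold value and signal names are invented for this sketch; only the three exception conditions come from the text:

```python
# Toy illustration of the exception-handling rules: a verdict reaches a
# moderator only when one of the flag conditions fires. Moderators act
# on the flag (e.g. abuse) but never override the verdict itself.

LOW_CONFIDENCE_THRESHOLD = 0.6  # hypothetical cut-off

def needs_review(verdict):
    reasons = []
    if verdict["confidence"] < LOW_CONFIDENCE_THRESHOLD:
        reasons.append("low confidence")
    if verdict.get("manipulation_detected"):
        reasons.append("manipulation attempt")
    if verdict.get("unusual_pattern"):
        reasons.append("unusual pattern")
    return reasons  # empty list: publish without moderator attention

flags = needs_review({"confidence": 0.4, "manipulation_detected": True})
```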

=== 1.5 Why This Model Works ===

**Scale**: Automation handles volume humans cannot
**Consistency**: Same rules applied uniformly
**Transparency**: Algorithms can be audited
**Improvement**: Systematic fixes benefit all claims

== 2. What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection
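A minimal sketch of the bubble check on source clusters described above. The cluster labels, stance field, and the 0.8 dominance threshold are assumptions made for illustration, not the real detector:

```python
from collections import Counter

# Hypothetical echo-chamber check: if one source cluster dominates the
# evidence, or no counter-evidence was found at all, the result is
# flagged rather than published quietly.

def bubble_flags(sources):
    clusters = Counter(s["cluster"] for s in sources)
    dominant_share = max(clusters.values()) / len(sources)
    flags = []
    if dominant_share > 0.8:  # invented threshold
        flags.append("single-cluster dominance")
    if not any(s["stance"] == "contra" for s in sources):
        flags.append("no counter-evidence found")
    return flags

sources = [
    {"cluster": "outlet-group-1", "stance": "pro"},
    {"cluster": "outlet-group-1", "stance": "pro"},
    {"cluster": "outlet-group-1", "stance": "pro"},
]
flags = bubble_flags(sources)
```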

== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)

**Truth Landscape**:

* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden
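For the coffee example above, the data shape might look like this. All field names and likelihood numbers are illustrative, not FactHarbor's actual schema:

```python
# Hypothetical structure: one claim carries several scenarios, each with
# explicit assumptions and its own verdict. Likelihoods are ranges, not
# binary labels, and the numbers below are invented for the example.

claim = {
    "text": "Coffee is healthy",
    "scenarios": [
        {"assumptions": {"definition": "reduces disease risk",
                         "population": "healthy adults",
                         "consumption": "1-3 cups/day",
                         "horizon": "long-term"},
         "verdict": {"likelihood": (0.6, 0.8)}},
        {"assumptions": {"definition": "safe at high intake",
                         "population": "pregnant women",
                         "consumption": "5 cups/day",
                         "horizon": "short-term"},
         "verdict": {"likelihood": (0.1, 0.3)}},
    ],
}

# The "truth landscape" is then the scenarios shown side by side:
landscape = [(s["assumptions"]["population"], s["verdict"]["likelihood"])
             for s in claim["scenarios"]]
```

Same claim, two defensible verdicts: the structure makes the disagreement a visible consequence of the stated assumptions.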

== 4. What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns
* Trusted Contributor auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

== 6. What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Contributors/experts validate
5. New verdict version published
6. Old versions remain accessible

**Version History**:

* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions
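The append-only versioning described above can be sketched in a few lines. Field names are illustrative assumptions; the essential property, that new evidence appends a version rather than overwriting the old one, comes from the text:

```python
# Sketch of the re-evaluation flow: a new verdict is appended with its
# reason and evidence, old versions stay accessible, and "as of date X"
# queries remain answerable. ISO date strings sort lexicographically.

def reevaluate(history, new_evidence, new_verdict, date):
    history.append({"date": date, "verdict": new_verdict,
                    "reason": "re-evaluation triggered by new evidence",
                    "evidence": new_evidence})
    return history

def verdict_as_of(history, date):
    # Latest version published on or before the given date.
    versions = [v for v in history if v["date"] <= date]
    return versions[-1]["verdict"] if versions else None

history = [{"date": "2026-01-10", "verdict": "supported",
            "reason": "initial", "evidence": ["study-1"]}]
reevaluate(history, ["study-2"], "uncertain", "2026-02-01")
```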

== 7. Who can submit claims to FactHarbor? ==

**Anyone** - even without login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if all gates pass
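Steps 3-4 of the workflow above, deduplication and normalization, could look roughly like this. The normalization rules here (case, whitespace, trailing punctuation) are a deliberate simplification; a real system would also match semantically equivalent phrasings:

```python
# Toy version of the dedup/normalize step: submitted text is normalized
# before checking whether an equivalent claim already exists, so the
# same claim phrased slightly differently is not stored twice.

def normalize(text):
    return " ".join(text.lower().split()).rstrip(".!?")

def submit(store, text):
    key = normalize(text)
    if key in store:
        return store[key], False          # duplicate: reuse existing claim
    store[key] = {"text": text, "status": "AI-Generated (Mode 2)"}
    return store[key], True               # new claim created

store = {}
_, created_first = submit(store, "Coffee is healthy.")
_, created_second = submit(store, "coffee   is healthy")
```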

== 8. What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review is required for "AKEL-Generated" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests tier based on domain, keywords, impact
* Moderators and Trusted Contributors can override
* Risk tiers reviewed based on audit outcomes
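A minimal sketch of the tier suggestion, mirroring the table above. The keyword sets are invented for illustration, and per the assignment rules the suggestion is only a starting point that Moderators and Trusted Contributors can override:

```python
# Hypothetical keyword-based tier suggestion. Highest-risk match wins;
# anything not matching Tier A or B domains defaults to low-risk Tier C.

TIER_A_DOMAINS = {"medical", "legal", "elections", "safety", "security"}
TIER_B_DOMAINS = {"policy", "causality", "contested"}

def suggest_tier(domains):
    if TIER_A_DOMAINS & set(domains):
        return "A"   # expert review needed for "AKEL-Generated" status
    if TIER_B_DOMAINS & set(domains):
        return "B"   # publish immediately with clear labeling
    return "C"       # low-risk default

tier = suggest_tier(["medical", "policy"])
```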

== 9. How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, governance
* Nodes exchange claims, scenarios, evidence, verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
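The receiving side of the exchange and trust rules above might look like this. The HMAC scheme stands in for whatever signature mechanism real nodes would use, and the node names and trust table are invented:

```python
import hashlib
import hmac

# Sketch of bundle intake: an incoming bundle is accepted only if its
# signature verifies, then routed according to the sender's trust level.

TRUST = {"node-a": "trusted", "node-b": "neutral", "node-c": "untrusted"}

def handle_bundle(sender, payload, signature, shared_key):
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: bad signature"
    level = TRUST.get(sender, "untrusted")  # unknown senders: untrusted
    return {"trusted": "auto-import",
            "neutral": "import with review",
            "untrusted": "manual only"}[level]

key = b"demo-key"
sig = hmac.new(key, b"claims-bundle", hashlib.sha256).hexdigest()
decision = handle_bundle("node-a", b"claims-bundle", sig, key)
```

Signature validation before trust routing matters: even a fully trusted neighbor's bundle is rejected if it was tampered with in transit.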

== 10. Can experts disagree in FactHarbor? ==

**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:

* Trusted Contributors can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Trusted Contributor reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

== 11. What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A marked as highest risk
* Audit sampling catches errors
* Community can flag issues
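The core of the structural-integrity gate above can be illustrated as a traceability check: every sentence in a generated verdict must cite evidence that actually exists. The data shape and names are invented for this sketch:

```python
# Minimal illustration of the "no hallucinations" gate: a sentence with
# no citation, or citing a source absent from the evidence set, fails.

def structural_integrity_gate(sentences, evidence_ids):
    failures = [s["text"] for s in sentences
                if not (set(s["cites"]) & evidence_ids)]
    return failures  # empty list means the gate passes

sentences = [
    {"text": "Moderate intake lowers risk.", "cites": ["src-1"]},
    {"text": "Everyone agrees on this.", "cites": []},
]
failures = structural_integrity_gate(sentences, {"src-1", "src-2"})
```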

== 12. How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: **Public benefit** mission takes priority over profit.

== 13. Related Pages ==

* [[Requirements (Roles)>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Archive.FactHarbor 2026\.01\.20.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Archive.FactHarbor 2026\.01\.20.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>Archive.FactHarbor 2026\.01\.20.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Archive.FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]

== 20. Glossary / Key Terms ==

=== Phase 0 vs POC v1 ===

These terms refer to the same stage of FactHarbor's development:

* **Phase 0** - Organisational perspective: Pre-alpha stage with founder-led governance
* **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication

Both describe the current development stage where the platform is being built and initially validated.

=== Beta 0 ===

The next development stage after POC, featuring:

* External testers
* Basic federation experiments
* Enhanced automation

=== Release 1.0 ===

The first public release featuring:

* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure