
Last modified by Robert Schaub on 2025/12/23 18:00

= Frequently Asked Questions (FAQ) =

Common questions about FactHarbor's design, functionality, and approach.

== 1. How do claims get evaluated in FactHarbor? ==

=== 1.1 User Submission ===

**Who**: Anyone can submit claims
**Process**: User submits claim text + source URLs
**Speed**: Typically <20 seconds to verdict

=== 1.2 AKEL Processing (Automated) ===

**What**: AI Knowledge Extraction Layer analyzes the claim
**Steps**:

* Parse claim into testable components
* Extract evidence from provided sources
* Score source credibility
* Generate verdict with confidence level
* Assign risk tier
* Publish automatically

**Authority**: AKEL makes all content decisions
**Scale**: Can process millions of claims
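
The six steps above can be sketched end-to-end as a single pipeline function. This is a minimal illustration under stated assumptions - the function names, verdict fields, and scoring heuristics below are invented for the sketch and are not the actual AKEL interface.

```python
# Illustrative sketch of the AKEL pipeline: parse -> extract -> score
# -> verdict -> tier -> publish. All names and heuristics are assumptions.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    components: list      # testable components parsed from the claim
    evidence: list        # evidence items extracted from the sources
    source_scores: dict   # credibility score per source URL
    confidence: float     # 0.0 .. 1.0
    risk_tier: str        # "A", "B", or "C"
    published: bool = False

def evaluate_claim(text: str, source_urls: list) -> Verdict:
    components = [part.strip() for part in text.split(" and ")]  # toy parser
    evidence = [{"url": url, "supports": True} for url in source_urls]
    source_scores = {url: 0.8 for url in source_urls}            # toy scoring
    confidence = min(1.0, 0.5 + 0.1 * len(evidence))
    risk_tier = "C" if confidence >= 0.7 else "B"                # toy tiering
    verdict = Verdict(text, components, evidence, source_scores,
                      confidence, risk_tier)
    verdict.published = True    # step 6: publish automatically, no human gate
    return verdict
```

A real implementation would replace the toy parser and scoring with AKEL's actual models; the point of the sketch is the fully automated flow from submission to published verdict.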

=== 1.3 Continuous Improvement (Human Role) ===

**What**: Humans improve the system, not individual verdicts
**Activities**:

* Monitor aggregate performance metrics
* Identify systematic errors
* Propose algorithm improvements
* Update policies and rules
* Test changes before deployment

**NOT**: Reviewing individual claims for approval
**Focus**: Fix the system, not the data

=== 1.4 Exception Handling ===

**When AKEL flags for review**:

* Low confidence verdict
* Detected manipulation attempt
* Unusual pattern requiring attention

**Moderator role**:

* Reviews flagged items
* Takes action on abuse/manipulation
* Proposes detection improvements
* Does NOT override verdicts
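
The flag conditions above can be expressed as a single predicate. The 0.6 confidence threshold is an illustrative assumption; the FAQ does not specify a number.

```python
# Sketch of AKEL's exception-handling rule: flag for moderator review,
# never override the verdict. The threshold value is an assumption.
LOW_CONFIDENCE_THRESHOLD = 0.6  # illustrative; not specified in the FAQ

def needs_moderator_review(confidence: float,
                           manipulation_detected: bool,
                           unusual_pattern: bool) -> bool:
    """True when any of the three flag conditions from the FAQ applies."""
    return (confidence < LOW_CONFIDENCE_THRESHOLD
            or manipulation_detected
            or unusual_pattern)
```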

=== 1.5 Why This Model Works ===

**Scale**: Automation handles volume humans cannot
**Consistency**: Same rules applied uniformly
**Transparency**: Algorithms can be audited
**Improvement**: Systematic fixes benefit all claims

== 2. What prevents FactHarbor from becoming another echo chamber? ==

FactHarbor includes multiple safeguards against echo chambers and filter bubbles:

**Mandatory Contradiction Search**:

* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across political/ideological spectrum

**Multiple Scenarios**:

* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive

**Transparent Reasoning**:

* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden

**Audit System**:

* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives

**Federation**:

* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection

== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==

This is exactly what FactHarbor is designed for:

**Scenarios capture contexts**:

* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)

**Truth Landscape**:

* Shows all scenarios and their verdicts side-by-side
* Users see //why// interpretations differ
* No forced consensus when legitimate disagreement exists

**Explicit Assumptions**:

* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden
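
Using the coffee example above, a scenario can be modeled as an explicit tuple of assumptions, with a verdict keyed per scenario. The field names and verdict labels are assumptions made for this sketch, not FactHarbor's actual data model.

```python
# Sketch: one claim, several scenario-specific verdicts ("truth landscape").
# Field names and verdict labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    definition: str   # what "healthy" means in this scenario
    population: str
    consumption: str
    horizon: str

truth_landscape = {
    Scenario("reduces disease risk", "adults", "1 cup/day", "long-term"):
        "likely true",
    Scenario("reduces disease risk", "pregnant women", "5 cups/day", "long-term"):
        "likely false",
}

def verdict_for(scenario: Scenario) -> str:
    """Each scenario gets its own verdict; no forced consensus."""
    return truth_landscape.get(scenario, "no verdict yet")
```

Changing any field of the scenario selects a different entry, which makes the context-dependence of the verdict explicit rather than hidden.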

== 4. What makes FactHarbor different from traditional fact-checking sites? ==

**Traditional Fact-Checking**:

* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by fact-checker
* Often hides legitimate contextual differences
* Limited ability to show //why// people disagree

**FactHarbor**:

* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth

== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==

**Quality Gates**:

* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns

**Audit System**:

* Stratified sampling catches manipulation patterns
* Trusted Contributor auditors validate AI research quality
* Failed audits trigger immediate review

**Transparency**:

* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved

**Moderation**:

* Moderators handle abuse, spam, coordinated manipulation
* Content can be flagged by community
* Audit trail maintained even if content hidden

**Federation**:

* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence

== 6. What happens when new evidence contradicts an existing verdict? ==

FactHarbor is designed for evolving knowledge:

**Automatic Re-evaluation**:

1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Contributors/experts validate
5. New verdict version published
6. Old versions remain accessible

**Version History**:

* Every verdict has complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved

**Transparent Updates**:

* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden

**User Notifications**:

* Users following claims are notified of updates
* Can compare old vs new verdicts
* Can see which evidence changed conclusions
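
The update flow above amounts to an append-only version history: re-evaluation adds a version, nothing is overwritten. A minimal sketch, assuming invented names (`VerdictVersion`, `reevaluate`, `as_of`) that are not part of any specified FactHarbor API:

```python
# Sketch of an append-only verdict history: new evidence produces a new
# version with a documented reason; old versions stay accessible.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VerdictVersion:
    version: int
    verdict: str
    reason: str            # why this (re-)evaluation happened
    evidence_added: tuple  # evidence that triggered the change

class VerdictHistory:
    def __init__(self, first_verdict: str):
        self.versions = [VerdictVersion(1, first_verdict,
                                        "initial evaluation", ())]

    def reevaluate(self, new_verdict: str, reason: str, evidence: tuple):
        """Append a new version; never overwrite earlier ones."""
        self.versions.append(
            VerdictVersion(len(self.versions) + 1, new_verdict,
                           reason, evidence))

    def as_of(self, version: int) -> str:
        """'As of version X, what did we know?'"""
        return self.versions[version - 1].verdict
```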

== 7. Who can submit claims to FactHarbor? ==

**Anyone** - even without login:

**Readers** (no login required):

* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes

**Contributors** (logged in):

* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions

**Workflow**:

1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if it passes
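
Steps 3 and 4 of the workflow (duplicate checking and normalization) can be sketched as below; the specific normalization rules are assumptions, since the FAQ does not define them.

```python
# Sketch of claim deduplication: normalize first, then compare against
# existing claims. The normalization rules shown are assumptions.
import re

def normalize(claim: str) -> str:
    """Lowercase, collapse whitespace, strip trailing punctuation."""
    claim = re.sub(r"\s+", " ", claim.strip().lower())
    return claim.rstrip(".!?")

def is_duplicate(claim: str, existing_claims: set) -> bool:
    """A claim is a duplicate if it normalizes to an existing claim."""
    return normalize(claim) in {normalize(c) for c in existing_claims}
```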

== 8. What are "risk tiers" and why do they matter? ==

Risk tiers determine review requirements and the publication workflow:

**Tier A (High Risk)**:

* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "AKEL-Generated" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong

**Tier B (Medium Risk)**:

* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk

**Tier C (Low Risk)**:

* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy

**Assignment**:

* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Trusted Contributors can override
* Risk tiers reviewed based on audit outcomes
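
The tier table above can be sketched as a keyword-based suggestion alongside the recommended audit rates. The keyword matching is an illustrative assumption - the real assignment also weighs domain and impact, and moderators can override it.

```python
# Sketch of AKEL's tier suggestion. Keyword sets mirror the domains
# listed above; the word-overlap matching itself is an assumption.
TIER_A_KEYWORDS = {"medical", "legal", "election", "safety",
                   "security", "financial"}
TIER_B_KEYWORDS = {"policy", "causality", "contested"}

# Recommended audit sampling rates per tier, from the table above.
AUDIT_RATE = {"A": (0.30, 0.50), "B": (0.10, 0.20), "C": (0.05, 0.10)}

def suggest_tier(claim: str) -> str:
    """Suggest a risk tier from keyword overlap; default to low risk."""
    words = set(claim.lower().split())
    if words & TIER_A_KEYWORDS:
        return "A"
    if words & TIER_B_KEYWORDS:
        return "B"
    return "C"
```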

== 9. How does federation work and why is it important? ==

**Federation Model**:

* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority

**Why Federation Matters**:

* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source

**How Nodes Exchange Data**:

1. Local node creates versions
2. Builds a signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed

**Trust Model**:

* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
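
The trust model maps directly to an import decision, gated by the signature and lineage validation from the exchange steps above. The enum values and return strings are assumptions made for this sketch.

```python
# Sketch of the trust-based import rules: trusted -> auto-import,
# neutral -> import with review, untrusted -> manual only.
# A bundle failing signature/lineage validation is rejected outright.
from enum import Enum

class Trust(Enum):
    TRUSTED = "trusted"
    NEUTRAL = "neutral"
    UNTRUSTED = "untrusted"

def import_action(node_trust: Trust, signature_valid: bool) -> str:
    """Decide how to handle a version bundle from a remote node."""
    if not signature_valid:
        return "reject"               # step 4 failed: bad signature/lineage
    if node_trust is Trust.TRUSTED:
        return "auto-import"
    if node_trust is Trust.NEUTRAL:
        return "import-with-review"
    return "manual-only"
```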

== 10. Can experts disagree in FactHarbor? ==

**Yes - and that's a feature, not a bug**:

**Multiple Scenarios**:

* Trusted Contributors can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see //why// experts disagree (different definitions, boundaries, evidence weighting)

**Parallel Verdicts**:

* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus

**Transparency**:

* Trusted Contributor reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions

**Federation**:

* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes

== 11. What prevents AI from hallucinating or making up facts? ==

**Multiple Safeguards**:

**Quality Gate 4: Structural Integrity**:

* Fact-checking against sources
* No hallucinations allowed
* Logic chain must be valid and traceable
* References must be accessible and verifiable

**Evidence Requirements**:

* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored

**Audit System**:

* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements

**Transparency**:

* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled

**Human Oversight**:

* Tier A marked as highest risk
* Audit sampling catches errors
* Community can flag issues
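
One check in the spirit of Quality Gate 4 can be sketched as a grounding test: every reasoning step must cite sources, and every cited source must exist in the verified evidence set. The data shapes below are assumptions, not the gate's actual implementation.

```python
# Sketch of a structural-integrity grounding check: fail the gate on
# unsupported steps or on references outside the verified evidence set.
def passes_structural_integrity(reasoning_steps: list,
                                evidence_urls: set) -> bool:
    """Each step must cite at least one source, and every cited
    source must be present in the verified evidence set."""
    for step in reasoning_steps:
        cited = step.get("sources", [])
        if not cited:
            return False              # unsupported step in the logic chain
        if any(url not in evidence_urls for url in cited):
            return False              # citation not backed by evidence
    return True
```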

== 12. How does FactHarbor make money / is it sustainable? ==

[ToDo: Business model and sustainability to be defined]

Potential models under consideration:

* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services

Core principle: the **public benefit** mission takes priority over profit.

== 13. Related Pages ==

* [[Requirements (Roles)>>Test.FactHarbor pre10 V0\.9\.70.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Test.FactHarbor pre10 V0\.9\.70.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Test.FactHarbor pre10 V0\.9\.70.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>Test.FactHarbor pre10 V0\.9\.70.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Test.FactHarbor V0\.9\.88 ex 2 new Org Pages.Organisation.Core Problems FactHarbor Solves.WebHome]]

== 14. Glossary / Key Terms ==

=== Phase 0 vs POC v1 ===

These terms refer to the same stage of FactHarbor's development:

* **Phase 0** - Organisational perspective: Pre-alpha stage with founder-led governance
* **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication

Both describe the current development stage where the platform is being built and initially validated.

=== Beta 0 ===

The next development stage after POC, featuring:

* External testers
* Basic federation experiments
* Enhanced automation

=== Release 1.0 ===

The first public release featuring:

* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure
342 The first public release featuring:
343
344 * Full federation support
345 * 2000+ concurrent users
346 * Production-grade infrastructure