
Version 1.1 by Robert Schaub on 2025/12/22 14:22

= Frequently Asked Questions (FAQ) =
Common questions about FactHarbor's design, functionality, and approach.
== 1. How do claims get evaluated in FactHarbor? ==
=== 1.1 User Submission ===
**Who**: Anyone can submit claims
**Process**: User submits claim text + source URLs
**Speed**: Typically <20 seconds to verdict
=== 1.2 AKEL Processing (Automated) ===
**What**: AI Knowledge Extraction Layer analyzes claim
**Steps**:
* Parse claim into testable components
* Extract evidence from provided sources
* Score source credibility
* Generate verdict with confidence level
* Assign risk tier
* Publish automatically
**Authority**: AKEL makes all content decisions
**Scale**: Can process millions of claims
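The six processing steps above can be sketched as one pipeline. This is a minimal illustration, not the real AKEL: the function name `akel_process`, the stub parsing/scoring logic, and the auto-publish threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    components: list    # testable sub-claims parsed from the text
    evidence: list      # evidence items extracted from the sources
    credibility: float  # aggregate source-credibility score, 0..1
    confidence: float   # confidence in the verdict, 0..1
    risk_tier: str      # "A", "B", or "C"
    published: bool = False

def akel_process(claim_text: str, source_urls: list) -> Verdict:
    """Run the six AKEL steps in order; extraction and scoring are stubbed."""
    components = [c.strip() for c in claim_text.split(" and ")]     # 1. parse
    evidence = [{"url": u, "supports": True} for u in source_urls]  # 2. extract
    credibility = 0.8 if source_urls else 0.0                       # 3. score sources
    confidence = min(credibility, 0.9)                              # 4. verdict confidence
    risk_tier = "C"                                                 # 5. assign tier (stub)
    verdict = Verdict(claim_text, components, evidence,
                      credibility, confidence, risk_tier)
    verdict.published = confidence > 0.5                            # 6. auto-publish
    return verdict
```

The key design point is that no step waits for a human: publication is a pure function of the automated checks.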
=== 1.3 Continuous Improvement (Human Role) ===
**What**: Humans improve the system, not individual verdicts
**Activities**:
* Monitor aggregate performance metrics
* Identify systematic errors
* Propose algorithm improvements
* Update policies and rules
* Test changes before deployment
**NOT**: Reviewing individual claims for approval
**Focus**: Fix the system, not the data
=== 1.4 Exception Handling ===
**When AKEL flags for review**:
* Low-confidence verdict
* Detected manipulation attempt
* Unusual pattern requiring attention
**Moderator role**:
* Reviews flagged items
* Takes action on abuse/manipulation
* Proposes detection improvements
* Does NOT override verdicts
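The three flagging conditions above amount to a simple predicate. A minimal sketch, assuming a hypothetical confidence threshold of 0.5; the detection signals themselves would come from upstream AKEL checks:

```python
LOW_CONFIDENCE = 0.5  # hypothetical threshold for flagging

def needs_review(confidence: float,
                 manipulation_detected: bool,
                 unusual_pattern: bool) -> bool:
    """AKEL flags an item for moderator attention. Moderators then act on
    abuse or manipulation, but the verdict itself is never overridden."""
    return (confidence < LOW_CONFIDENCE
            or manipulation_detected
            or unusual_pattern)
```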
=== 1.5 Why This Model Works ===
**Scale**: Automation handles volume humans cannot
**Consistency**: Same rules applied uniformly
**Transparency**: Algorithms can be audited
**Improvement**: Systematic fixes benefit all claims
== 2. What prevents FactHarbor from becoming another echo chamber? ==
FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
**Mandatory Contradiction Search**:
* AI must actively search for counter-evidence, not just confirmations
* System checks for echo chamber patterns in source clusters
* Flags tribal or ideological source clustering
* Requires diverse perspectives across the political/ideological spectrum
**Multiple Scenarios**:
* Claims are evaluated under different interpretations
* Reveals how assumptions change conclusions
* Makes disagreements understandable, not divisive
**Transparent Reasoning**:
* All assumptions, definitions, and boundaries are explicit
* Evidence chains are traceable
* Uncertainty is quantified, not hidden
**Audit System**:
* Human auditors check for bubble patterns
* Feedback loop improves AI search diversity
* Community can flag missing perspectives
**Federation**:
* Multiple independent nodes with different perspectives
* No single entity controls "the truth"
* Cross-node contradiction detection
== 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==
This is exactly what FactHarbor is designed for:
**Scenarios capture contexts**:
* Each scenario defines specific boundaries, definitions, and assumptions
* The same claim can have different verdicts in different scenarios
* Example: "Coffee is healthy" depends on:
** Definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?)
** Population (adults? pregnant women? people with heart conditions?)
** Consumption level (1 cup/day? 5 cups/day?)
** Time horizon (short-term? long-term?)
**Truth Landscape**:
* Shows all scenarios and their verdicts side-by-side
* Users see *why* interpretations differ
* No forced consensus when legitimate disagreement exists
**Explicit Assumptions**:
* Every scenario states its assumptions clearly
* Users can compare how changing assumptions changes conclusions
* Makes context-dependence visible, not hidden
== 4. What makes FactHarbor different from traditional fact-checking sites? ==
**Traditional Fact-Checking**:
* Binary verdicts: True / Mostly True / False
* Single interpretation chosen by the fact-checker
* Often hides legitimate contextual differences
* Limited ability to show *why* people disagree
**FactHarbor**:
* **Multi-scenario**: Shows multiple valid interpretations
* **Likelihood-based**: Ranges with uncertainty, not binary labels
* **Transparent assumptions**: Makes boundaries and definitions explicit
* **Version history**: Shows how understanding evolves
* **Contradiction search**: Actively seeks opposing evidence
* **Federated**: No single authority controls truth
== 5. How do you prevent manipulation or coordinated misinformation campaigns? ==
**Quality Gates**:
* Automated checks before AI-generated content publishes
* Source quality verification
* Mandatory contradiction search
* Bubble detection for coordinated campaigns
**Audit System**:
* Stratified sampling catches manipulation patterns
* Trusted Contributor auditors validate AI research quality
* Failed audits trigger immediate review
**Transparency**:
* All reasoning chains are visible
* Evidence sources are traceable
* AKEL involvement clearly labeled
* Version history preserved
**Moderation**:
* Moderators handle abuse, spam, and coordinated manipulation
* Content can be flagged by the community
* Audit trail maintained even if content is hidden
**Federation**:
* Multiple nodes with independent governance
* No single point of control
* Cross-node contradiction detection
* Trust model prevents malicious node influence
== 6. What happens when new evidence contradicts an existing verdict? ==
FactHarbor is designed for evolving knowledge:
**Automatic Re-evaluation**:
1. New evidence arrives
2. System detects affected scenarios and verdicts
3. AKEL proposes updated verdicts
4. Contributors/experts validate
5. New verdict version published
6. Old versions remain accessible
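The append-only versioning behind this workflow can be sketched in a few lines. The class name `ClaimRecord` and its methods are illustrative, not FactHarbor's actual data model:

```python
class ClaimRecord:
    """Keeps every verdict version; new evidence appends, never overwrites."""

    def __init__(self, claim: str):
        self.claim = claim
        self.versions = []  # list of (version_no, verdict, reason)

    def publish(self, verdict: str, reason: str) -> None:
        # Each re-evaluation adds a new version with its documented reason.
        self.versions.append((len(self.versions) + 1, verdict, reason))

    def current(self):
        return self.versions[-1]

    def as_of(self, version_no: int):
        """'As of version X, what did we know?' - old versions stay accessible."""
        return self.versions[version_no - 1]
```

Because publishing only ever appends, the full timeline of how understanding evolved is preserved by construction.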
**Version History**:
* Every verdict has a complete history
* Users can see "as of date X, what did we know?"
* Timeline shows how understanding evolved
**Transparent Updates**:
* Reason for re-evaluation documented
* New evidence clearly linked
* Changes explained, not hidden
**User Notifications**:
* Users following claims are notified of updates
* Can compare old vs. new verdicts
* Can see which evidence changed conclusions
== 7. Who can submit claims to FactHarbor? ==
**Anyone** - even without login:
**Readers** (no login required):
* Browse and search all published content
* Submit text for analysis
* New claims added automatically unless duplicates exist
* System deduplicates and normalizes
**Contributors** (logged in):
* Everything Readers can do
* Submit evidence sources
* Suggest scenarios
* Participate in discussions
**Workflow**:
1. User submits text (as Reader or Contributor)
2. AKEL extracts claims
3. Checks for existing duplicates
4. Normalizes claim text
5. Assigns risk tier
6. Generates scenarios (draft)
7. Runs quality gates
8. Publishes as AI-Generated (Mode 2) if all gates pass
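Steps 2-4 of the workflow (extract, deduplicate, normalize) can be sketched as follows. This is a toy version: claim extraction is stubbed as sentence splitting, and the normalization rule (lowercase, strip punctuation) is an assumption, not the real algorithm:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, trim - so trivial variants match."""
    return re.sub(r"[^\w\s]", "", text.lower()).strip()

def submit_claims(raw_text: str, existing: set) -> list:
    """Extract candidate claims, skip duplicates, return the normalized
    claims that were newly added."""
    accepted = []
    for sentence in re.split(r"[.!?]\s*", raw_text):
        key = normalize(sentence)
        if key and key not in existing:  # dedupe against known claims
            existing.add(key)
            accepted.append(key)
    return accepted
```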
== 8. What are "risk tiers" and why do they matter? ==
Risk tiers determine review requirements and the publication workflow:
**Tier A (High Risk)**:
* **Domains**: Medical, legal, elections, safety, security, major financial
* **Publication**: AI can publish with warnings; expert review required for "AKEL-Generated" status
* **Audit rate**: Recommended 30-50%
* **Why**: Potential for significant harm if wrong
**Tier B (Medium Risk)**:
* **Domains**: Complex policy, science causality, contested issues
* **Publication**: AI can publish immediately with clear labeling
* **Audit rate**: Recommended 10-20%
* **Why**: Nuanced but lower immediate harm risk
**Tier C (Low Risk)**:
* **Domains**: Definitions, established facts, historical data
* **Publication**: AI publication by default
* **Audit rate**: Recommended 5-10%
* **Why**: Well-established, low controversy
**Assignment**:
* AKEL suggests a tier based on domain, keywords, and impact
* Moderators and Trusted Contributors can override
* Risk tiers are reviewed based on audit outcomes
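Keyword-based tier suggestion might look like the sketch below. The keyword sets are hypothetical stand-ins; the real AKEL would use proper domain classifiers rather than word matching:

```python
# Hypothetical keyword lists, one per tier-defining domain group.
TIER_A_DOMAINS = {"medical", "legal", "election", "elections",
                  "safety", "security", "financial"}
TIER_B_DOMAINS = {"policy", "causality", "contested"}

def suggest_tier(claim_text: str) -> str:
    """Suggest a risk tier from claim wording; humans can override."""
    words = set(claim_text.lower().split())
    if words & TIER_A_DOMAINS:
        return "A"  # high risk: expert review for "AKEL-Generated" status
    if words & TIER_B_DOMAINS:
        return "B"  # medium risk: immediate publication with labeling
    return "C"      # low risk: AI publication by default
```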
== 9. How does federation work and why is it important? ==
**Federation Model**:
* Multiple independent FactHarbor nodes
* Each node has its own database, AKEL, and governance
* Nodes exchange claims, scenarios, evidence, and verdicts
* No central authority
**Why Federation Matters**:
* **Resilience**: No single point of failure or censorship
* **Autonomy**: Communities govern themselves
* **Scalability**: Add nodes to handle more users
* **Specialization**: Domain-focused nodes (health, energy, etc.)
* **Trust diversity**: Multiple perspectives, not a single truth source
**How Nodes Exchange Data**:
1. Local node creates versions
2. Builds a signed bundle
3. Pushes to trusted neighbor nodes
4. Remote nodes validate signatures and lineage
5. Accept or branch versions
6. Local re-evaluation if needed
**Trust Model**:
* Trusted nodes → auto-import
* Neutral nodes → import with review
* Untrusted nodes → manual only
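A receiving node's decision logic combines the trust model with the signature/lineage check from the exchange steps. A minimal sketch, assuming validation has already happened upstream and is summarized in a single flag:

```python
from enum import Enum

class Trust(Enum):
    TRUSTED = "trusted"
    NEUTRAL = "neutral"
    UNTRUSTED = "untrusted"

def import_action(bundle_valid: bool, trust: Trust) -> str:
    """Decide how to handle an incoming signed bundle from a peer node."""
    if not bundle_valid:
        return "reject"              # bad signature or broken lineage
    if trust is Trust.TRUSTED:
        return "auto-import"         # trusted nodes: no manual step
    if trust is Trust.NEUTRAL:
        return "import-with-review"  # neutral nodes: human review first
    return "manual-only"             # untrusted nodes: nothing automatic
```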
== 10. Can experts disagree in FactHarbor? ==
**Yes - and that's a feature, not a bug**:
**Multiple Scenarios**:
* Trusted Contributors can create different scenarios with different assumptions
* Each scenario gets its own verdict
* Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
**Parallel Verdicts**:
* Same scenario, different expert interpretations
* Both verdicts visible with expert attribution
* No forced consensus
**Transparency**:
* Trusted Contributor reasoning documented
* Assumptions stated explicitly
* Evidence chains traceable
* Users can evaluate competing expert opinions
**Federation**:
* Different nodes can have different expert conclusions
* Cross-node branching allowed
* Users can see how conclusions vary across nodes
== 11. What prevents AI from hallucinating or making up facts? ==
**Multiple Safeguards**:
**Quality Gate 4: Structural Integrity**:
* Fact-checking against sources
* No hallucinations allowed: every statement must be grounded in cited evidence
* Logic chain must be valid and traceable
* References must be accessible and verifiable
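One core check of this gate - every statement must trace to an accessible source - can be sketched as follows. The function name and the dict shapes are hypothetical, chosen only to make the rule concrete:

```python
def structural_integrity_gate(statements: list, sources: list) -> bool:
    """Block publication unless every statement cites an accessible source.
    An uncited statement is treated as a potential hallucination."""
    accessible = {s["url"] for s in sources if s.get("accessible")}
    return all(st.get("cite") in accessible for st in statements)
```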
**Evidence Requirements**:
* Primary sources required
* Citations must be complete
* Sources must be accessible
* Reliability scored
**Audit System**:
* Human auditors check AI-generated content
* Hallucinations caught and fed back into training
* Patterns of errors trigger system improvements
**Transparency**:
* All reasoning chains visible
* Sources linked
* Users can verify claims against sources
* AKEL outputs clearly labeled
**Human Oversight**:
* Tier A marked as highest risk
* Audit sampling catches errors
* Community can flag issues
== 12. How does FactHarbor make money / is it sustainable? ==
[ToDo: Business model and sustainability to be defined]
Potential models under consideration:
* Non-profit foundation with grants and donations
* Institutional subscriptions (universities, research organizations, media)
* API access for third-party integrations
* Premium features for power users
* Federated node hosting services
Core principle: the **public benefit** mission takes priority over profit.
== 13. Related Pages ==
* [[Requirements (Roles)>>Test.FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Test.FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Test.FactHarbor.Specification.Automation.WebHome]]
* [[Federation & Decentralization>>Test.FactHarbor.Specification.Federation & Decentralization.WebHome]]
* [[Mission & Purpose>>Test.FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]
== 14. Glossary / Key Terms ==
=== Phase 0 vs POC v1 ===
These terms refer to the same stage of FactHarbor's development:
* **Phase 0** - Organisational perspective: pre-alpha stage with founder-led governance
* **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication
Both describe the current development stage, where the platform is being built and initially validated.
=== Beta 0 ===
The next development stage after the POC, featuring:
* External testers
* Basic federation experiments
* Enhanced automation
=== Release 1.0 ===
The first public release, featuring:
* Full federation support
* 2000+ concurrent users
* Production-grade infrastructure