Changes for page FAQ

Last modified by Robert Schaub on 2025/12/23 18:00

From version 1.1
edited by Robert Schaub
on 2025/12/22 14:34
Change comment: Imported from XAR
To version 1.3
edited by Robert Schaub
on 2025/12/22 14:37
Change comment: Renamed back-links.

Summary

Details

Page properties
Content
... ... @@ -1,13 +1,20 @@
1 1  = Frequently Asked Questions (FAQ) =
2 +
2 2  Common questions about FactHarbor's design, functionality, and approach.
4 +
3 3  == 1. How do claims get evaluated in FactHarbor? ==
6 +
4 4  === 1.1 User Submission ===
8 +
5 5  **Who**: Anyone can submit claims
6 6  **Process**: User submits claim text + source URLs
7 7  **Speed**: Typically <20 seconds to verdict
12 +
8 8  === 1.2 AKEL Processing (Automated) ===
14 +
9 9  **What**: AI Knowledge Extraction Layer analyzes claim
10 10  **Steps**:
17 +
11 11  * Parse claim into testable components
12 12  * Extract evidence from provided sources
13 13  * Score source credibility
... ... @@ -16,9 +16,12 @@
16 16  * Publish automatically
17 17  **Authority**: AKEL makes all content decisions
18 18  **Scale**: Can process millions of claims
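The flow described in 1.2 can be pictured as a small pipeline. The following Python sketch is illustrative only: the function names, the Verdict shape, and the placeholder scoring are assumptions, not the actual AKEL interfaces, and the automated steps elided from this diff are reduced to a comment.

{{code language="python"}}
# Illustrative sketch only: names and shapes are assumptions, not the real AKEL API.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    components: list[str]            # testable components parsed from the claim
    evidence: dict[str, str]         # source URL -> extracted excerpt
    source_scores: dict[str, float]  # source URL -> credibility score
    published: bool = False

def parse_claim(text: str) -> list[str]:
    # Placeholder parser; the real step splits a claim into testable components.
    return [part.strip() for part in text.split(" and ")]

def akel_evaluate(claim_text: str, source_urls: list[str]) -> Verdict:
    components = parse_claim(claim_text)
    evidence = {url: "excerpt extracted from source" for url in source_urls}  # simplified
    scores = {url: 0.5 for url in source_urls}   # placeholder credibility score
    # ...remaining automated steps (elided in this diff) are omitted here...
    return Verdict(claim_text, components, evidence, scores, published=True)

print(akel_evaluate("Coffee is healthy", ["https://example.org/study"]).published)
{{/code}}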
26 +
19 19  === 1.3 Continuous Improvement (Human Role) ===
28 +
20 20  **What**: Humans improve the system, not individual verdicts
21 21  **Activities**:
31 +
22 22  * Monitor aggregate performance metrics
23 23  * Identify systematic errors
24 24  * Propose algorithm improvements
... ... @@ -26,8 +26,11 @@
26 26  * Test changes before deployment
27 27  **NOT**: Reviewing individual claims for approval
28 28  **Focus**: Fix the system, not the data
39 +
29 29  === 1.4 Exception Handling ===
41 +
30 30  **When AKEL flags for review**:
43 +
31 31  * Low confidence verdict
32 32  * Detected manipulation attempt
33 33  * Unusual pattern requiring attention
... ... @@ -36,14 +36,19 @@
36 36  * Takes action on abuse/manipulation
37 37  * Proposes detection improvements
38 38  * Does NOT override verdicts
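As a rough illustration of this exception path, the sketch below routes flagged cases into a review queue instead of changing the verdict. The confidence threshold and field names are assumptions, not specified values.

{{code language="python"}}
# Illustrative sketch: the 0.6 threshold and the field names are assumptions.
def needs_human_review(confidence: float, manipulation: bool, unusual: bool) -> bool:
    """AKEL flags a case for review; humans act on abuse, not on the verdict itself."""
    return confidence < 0.6 or manipulation or unusual

review_queue = []

def route(claim_id: str, confidence: float, manipulation: bool, unusual: bool) -> None:
    if needs_human_review(confidence, manipulation, unusual):
        # Humans investigate abuse and propose detection improvements;
        # they do NOT override the published verdict here.
        review_queue.append(claim_id)

route("claim-42", confidence=0.41, manipulation=False, unusual=False)
print(review_queue)   # ['claim-42']
{{/code}}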
52 +
39 39  === 1.5 Why This Model Works ===
54 +
40 40  **Scale**: Automation handles volume humans cannot
41 41  **Consistency**: Same rules applied uniformly
42 42  **Transparency**: Algorithms can be audited
43 43  **Improvement**: Systematic fixes benefit all claims
59 +
44 44  == 2. What prevents FactHarbor from becoming another echo chamber? ==
61 +
45 45  FactHarbor includes multiple safeguards against echo chambers and filter bubbles:
46 46  **Mandatory Contradiction Search**:
64 +
47 47  * AI must actively search for counter-evidence, not just confirmations
48 48  * System checks for echo chamber patterns in source clusters
49 49  * Flags tribal or ideological source clustering
... ... @@ -64,9 +64,12 @@
64 64  * Multiple independent nodes with different perspectives
65 65  * No single entity controls "the truth"
66 66  * Cross-node contradiction detection
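One of the source-clustering checks mentioned above could look roughly like the sketch below; the per-source cluster label and the 0.8 threshold are assumptions made for illustration, not the specified detection method.

{{code language="python"}}
# Illustrative sketch: cluster labels and the threshold are assumptions.
from collections import Counter

def flags_source_clustering(source_clusters: list[str], threshold: float = 0.8) -> bool:
    """Flag a claim whose cited sources nearly all come from one ideological/outlet cluster."""
    if not source_clusters:
        return False
    _, dominant_count = Counter(source_clusters).most_common(1)[0]
    return dominant_count / len(source_clusters) >= threshold

print(flags_source_clustering(["cluster-a", "cluster-a", "cluster-a", "cluster-b"]))  # False
print(flags_source_clustering(["cluster-a"] * 5))                                     # True
{{/code}}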
85 +
67 67  == 3. How does FactHarbor handle claims that are "true in one context but false in another"? ==
87 +
68 68  This is exactly what FactHarbor is designed for:
69 69  **Scenarios capture contexts**:
90 +
70 70  * Each scenario defines specific boundaries, definitions, and assumptions
71 71  * The same claim can have different verdicts in different scenarios
72 72  * Example: "Coffee is healthy" depends on the definition of "healthy" (reduces disease risk? improves mood? affects specific conditions?), the population (adults? pregnant women? people with heart conditions?), the consumption level (1 cup/day? 5 cups/day?), and the time horizon (short-term? long-term?)
... ... @@ -78,8 +78,11 @@
78 78  * Every scenario states its assumptions clearly
79 79  * Users can compare how changing assumptions changes conclusions
80 80  * Makes context-dependence visible, not hidden
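A minimal data shape for this idea follows, assuming invented field names and example verdict labels; nothing below is the actual FactHarbor schema.

{{code language="python"}}
# Illustrative data shape: field names and verdict labels are assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    definition: str          # e.g. what "healthy" means in this scenario
    population: str          # who the claim is evaluated for
    assumptions: list[str]   # stated explicitly so users can compare scenarios
    verdict: str             # each scenario carries its own verdict

claim = "Coffee is healthy"
scenarios = [
    Scenario("reduces long-term disease risk", "healthy adults, 1-2 cups/day",
             ["observational studies only"], "well supported"),
    Scenario("safe at 5+ cups/day", "pregnant women",
             ["guideline caffeine limits"], "not supported"),
]
# The same claim yields different verdicts because the assumptions differ.
for s in scenarios:
    print(claim, "|", s.definition, "->", s.verdict)
{{/code}}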
102 +
81 81  == 4. What makes FactHarbor different from traditional fact-checking sites? ==
104 +
82 82  **Traditional Fact-Checking**:
106 +
83 83  * Binary verdicts: True / Mostly True / False
84 84  * Single interpretation chosen by fact-checker
85 85  * Often hides legitimate contextual differences
... ... @@ -91,8 +91,11 @@
91 91  * **Version history**: Shows how understanding evolves
92 92  * **Contradiction search**: Actively seeks opposing evidence
93 93  * **Federated**: No single authority controls truth
118 +
94 94  == 5. How do you prevent manipulation or coordinated misinformation campaigns? ==
120 +
95 95  **Quality Gates**:
122 +
96 96  * Automated checks before AI-generated content is published
97 97  * Source quality verification
98 98  * Mandatory contradiction search
... ... @@ -115,9 +115,12 @@
115 115  * No single point of control
116 116  * Cross-node contradiction detection
117 117  * Trust model prevents malicious node influence
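As a rough illustration of the gate pattern, the sketch below runs two of the checks named above and refuses to publish if any fails; the return shape and check details are assumptions, not the specified gates.

{{code language="python"}}
# Illustrative sketch: only the pattern (all automated checks must pass
# before publication) comes from the FAQ; the checks themselves are assumptions.
def passes_quality_gates(draft: dict) -> tuple[bool, list[str]]:
    failures = []
    if not draft.get("sources"):
        failures.append("source quality verification failed: no sources")
    if not draft.get("contradiction_search_done"):
        failures.append("mandatory contradiction search missing")
    return (not failures, failures)

ok, reasons = passes_quality_gates(
    {"sources": ["https://example.org"], "contradiction_search_done": True})
print(ok, reasons)   # True []
{{/code}}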
145 +
118 118  == 6. What happens when new evidence contradicts an existing verdict? ==
147 +
119 119  FactHarbor is designed for evolving knowledge:
120 120  **Automatic Re-evaluation**:
150 +
121 121  1. New evidence arrives
122 122  2. System detects affected scenarios and verdicts
123 123  3. AKEL proposes updated verdicts
... ... @@ -125,6 +125,7 @@
125 125  5. New verdict version published
126 126  6. Old versions remain accessible
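A minimal sketch of this versioning behaviour, assuming a simple list-of-versions shape that is not the actual storage model:

{{code language="python"}}
# Illustrative sketch: the FAQ only specifies that new verdict versions are
# published and old ones stay accessible; the structure below is an assumption.
verdict_history = [
    {"version": 1, "verdict": "supported", "evidence": ["study-2021"]},
]

def reevaluate(history: list[dict], new_evidence: str, new_verdict: str) -> None:
    latest = history[-1]
    history.append({
        "version": latest["version"] + 1,
        "verdict": new_verdict,                            # AKEL-proposed update
        "evidence": latest["evidence"] + [new_evidence],
    })                                                     # old versions are never removed

reevaluate(verdict_history, "study-2024", "contested")
print([v["version"] for v in verdict_history])             # [1, 2]
{{/code}}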
127 127  **Version History**:
158 +
128 128  * Every verdict has complete history
129 129  * Users can see "as of date X, what did we know?"
130 130  * Timeline shows how understanding evolved
... ... @@ -136,9 +136,12 @@
136 136  * Users following claims are notified of updates
137 137  * Can compare old vs new verdicts
138 138  * Can see which evidence changed conclusions
170 +
139 139  == 7. Who can submit claims to FactHarbor? ==
172 +
140 140  **Anyone** - even without login:
141 141  **Readers** (no login required):
175 +
142 142  * Browse and search all published content
143 143  * Submit text for analysis
144 144  * New claims added automatically unless duplicates exist
... ... @@ -149,6 +149,7 @@
149 149  * Suggest scenarios
150 150  * Participate in discussions
151 151  **Workflow**:
186 +
152 152  1. User submits text (as Reader or Contributor)
153 153  2. AKEL extracts claims
154 154  3. Checks for existing duplicates
... ... @@ -157,9 +157,12 @@
157 157  6. Generates scenarios (draft)
158 158  7. Runs quality gates
159 159  8. Publishes as AI-Generated (Mode 2) if all gates pass
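The workflow can be sketched roughly as below; behaviour is simplified, the names are assumptions, and the steps elided from this diff are reduced to a comment.

{{code language="python"}}
# Illustrative sketch: duplicate handling and quality gates are simplified placeholders.
existing_claims = {"coffee is healthy"}

def submit(text: str) -> str:
    claim = text.strip().lower()               # 2. AKEL extracts claims (simplified)
    if claim in existing_claims:               # 3. duplicate check
        return "duplicate: linked to existing claim"
    # ...intermediate steps elided in this diff...
    scenarios = [f"default scenario for: {claim}"]   # 6. draft scenarios
    if scenarios:                              # 7. quality gates (placeholder check)
        existing_claims.add(claim)
        return "published as AI-Generated (Mode 2)"  # 8
    return "nothing to publish"

print(submit("Coffee is healthy"))   # duplicate: linked to existing claim
print(submit("Tea is healthy"))      # published as AI-Generated (Mode 2)
{{/code}}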
195 +
160 160  == 8. What are "risk tiers" and why do they matter? ==
197 +
161 161  Risk tiers determine review requirements and publication workflow:
162 162  **Tier A (High Risk)**:
200 +
163 163  * **Domains**: Medical, legal, elections, safety, security, major financial
164 164  * **Publication**: AI can publish with warnings; expert review is required for "AKEL-Generated" status
165 165  * **Audit rate**: Recommended 30-50%
... ... @@ -178,8 +178,11 @@
178 178  * AKEL suggests tier based on domain, keywords, impact
179 179  * Moderators and Trusted Contributors can override
180 180  * Risk tiers reviewed based on audit outcomes
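A rough sketch of tier suggestion follows; the keyword set and the non-Tier-A audit rate are assumptions (only Tier A's domains and its 30-50% audit recommendation are listed in this diff).

{{code language="python"}}
# Illustrative sketch: keyword matching and the lower-tier audit rate are assumptions.
TIER_A_DOMAINS = {"medical", "legal", "elections", "safety", "security", "finance"}

def suggest_tier(domains: set[str]) -> str:
    """AKEL suggests a tier; Moderators and Trusted Contributors may override it."""
    return "A" if domains & TIER_A_DOMAINS else "lower-risk tier"

def audit_rate(tier: str) -> float:
    # 30-50% recommended for Tier A per the FAQ; the other value is assumed.
    return 0.4 if tier == "A" else 0.05

tier = suggest_tier({"medical"})
print(tier, audit_rate(tier))   # A 0.4
{{/code}}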
219 +
181 181  == 9. How does federation work and why is it important? ==
221 +
182 182  **Federation Model**:
223 +
183 183  * Multiple independent FactHarbor nodes
184 184  * Each node has own database, AKEL, governance
185 185  * Nodes exchange claims, scenarios, evidence, verdicts
... ... @@ -191,6 +191,7 @@
191 191  * **Specialization**: Domain-focused nodes (health, energy, etc.)
192 192  * **Trust diversity**: Multiple perspectives, not single truth source
193 193  **How Nodes Exchange Data**:
235 +
194 194  1. Local node creates versions
195 195  2. Builds signed bundle
196 196  3. Pushes to trusted neighbor nodes
... ... @@ -198,12 +198,16 @@
198 198  5. Accept or branch versions
199 199  6. Local re-evaluation if needed
200 200  **Trust Model**:
243 +
201 201  * Trusted nodes → auto-import
202 202  * Neutral nodes → import with review
203 203  * Untrusted nodes → manual only
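A compact sketch of the exchange and trust rules above, assuming SHA-256 hashing as a stand-in for real bundle signing and invented policy names; only the trusted/neutral/untrusted mapping itself comes from the FAQ.

{{code language="python"}}
# Illustrative sketch: hashing and policy names are assumptions; the mapping
# (trusted -> auto-import, neutral -> review, untrusted -> manual) is from the FAQ.
import hashlib, json

IMPORT_POLICY = {"trusted": "auto-import",
                 "neutral": "import-with-review",
                 "untrusted": "manual-only"}

def build_bundle(versions: list[dict], node_key: str) -> dict:
    payload = json.dumps(versions, sort_keys=True)
    # Stand-in for a real cryptographic signature over the exported versions.
    signature = hashlib.sha256((node_key + payload).encode()).hexdigest()
    return {"payload": payload, "signature": signature}

def import_policy_for(node_trust: str) -> str:
    return IMPORT_POLICY.get(node_trust, "manual-only")   # default to the safest path

bundle = build_bundle([{"claim": "Coffee is healthy", "version": 2}], node_key="local-node-secret")
print(import_policy_for("neutral"), len(bundle["signature"]))   # import-with-review 64
{{/code}}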
247 +
204 204  == 10. Can experts disagree in FactHarbor? ==
249 +
205 205  **Yes - and that's a feature, not a bug**:
206 206  **Multiple Scenarios**:
252 +
207 207  * Trusted Contributors can create different scenarios with different assumptions
208 208  * Each scenario gets its own verdict
209 209  * Users see *why* experts disagree (different definitions, boundaries, evidence weighting)
... ... @@ -220,9 +220,12 @@
220 220  * Different nodes can have different expert conclusions
221 221  * Cross-node branching allowed
222 222  * Users can see how conclusions vary across nodes
269 +
223 223  == 11. What prevents AI from hallucinating or making up facts? ==
271 +
224 224  **Multiple Safeguards**:
225 225  **Quality Gate 4: Structural Integrity**:
274 +
226 226  * Fact-checking against sources
227 227  * No hallucinations allowed
228 228  * Logic chain must be valid and traceable
... ... @@ -245,9 +245,12 @@
245 245  * Tier A marked as highest risk
246 246  * Audit sampling catches errors
247 247  * Community can flag issues
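As an illustration of the grounding idea behind Quality Gate 4, the sketch below accepts a generated statement only if it can be traced back to cited source text; the substring match is an assumption standing in for whatever matching the real gate uses.

{{code language="python"}}
# Illustrative sketch: the substring test is a stand-in for the real grounding check.
def is_grounded(statement: str, source_excerpts: list[str]) -> bool:
    """A generated statement passes only if it can be traced to cited source text."""
    return any(statement.lower() in excerpt.lower() for excerpt in source_excerpts)

excerpts = ["The 2023 trial found no significant effect on blood pressure."]
print(is_grounded("no significant effect on blood pressure", excerpts))  # True
print(is_grounded("lowers blood pressure by 20%", excerpts))             # False
{{/code}}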
297 +
248 248  == 12. How does FactHarbor make money / is it sustainable? ==
299 +
249 249  [ToDo: Business model and sustainability to be defined]
250 250  Potential models under consideration:
302 +
251 251  * Non-profit foundation with grants and donations
252 252  * Institutional subscriptions (universities, research organizations, media)
253 253  * API access for third-party integrations
... ... @@ -254,25 +254,37 @@
254 254  * Premium features for power users
255 255  * Federated node hosting services
256 256  Core principle: **Public benefit** mission takes priority over profit.
309 +
257 257  == 13. Related Pages ==
311 +
258 258  * [[Requirements (Roles)>>Test.FactHarbor.Specification.Requirements.WebHome]]
259 259  * [[AKEL (AI Knowledge Extraction Layer)>>Test.FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
260 260  * [[Automation>>Test.FactHarbor.Specification.Automation.WebHome]]
261 261  * [[Federation & Decentralization>>Test.FactHarbor.Specification.Federation & Decentralization.WebHome]]
262 262  * [[Mission & Purpose>>Test.FactHarbor.Organisation.Core Problems FactHarbor Solves.WebHome]]
317 +
263 263  == 20. Glossary / Key Terms ==
319 +
264 264  === Phase 0 vs POC v1 ===
321 +
265 265  These terms refer to the same stage of FactHarbor's development:
323 +
266 266  * **Phase 0** - Organisational perspective: Pre-alpha stage with founder-led governance
267 267  * **POC v1** - Technical perspective: Proof of Concept demonstrating AI-generated publication
268 268  Both describe the current development stage where the platform is being built and initially validated.
327 +
269 269  === Beta 0 ===
329 +
270 270  The next development stage after POC, featuring:
331 +
271 271  * External testers
272 272  * Basic federation experiments
273 273  * Enhanced automation
335 +
274 274  === Release 1.0 ===
337 +
275 275  The first public release featuring:
339 +
276 276  * Full federation support
277 277  * 2000+ concurrent users
278 278  * Production-grade infrastructure