Wiki source code of Specification

Last modified by Robert Schaub on 2025/12/24 20:30

{{warning title="Version 0.9.32 (Draft)"}}
This document describes the **Specification** of FactHarbor. It is a working draft.
{{/warning}}

= Specification =

This section defines the technical architecture, data models, and functional requirements of FactHarbor.

== 1. Mission ==

**FactHarbor brings clarity and transparency to a world full of unclear, controversial, and misleading information by shedding light on the context, assumptions, and evidence behind claims — empowering people to better understand and judge wisely.**

== 2. Purpose ==

Modern society faces a deep informational crisis:
* Misinformation spreads faster than corrections.
* High-quality evidence is buried under noise.
* Meanings shift depending on context — but this is rarely made explicit.
* Users lack tools to understand *why* information conflicts.
* Claims are evaluated without clearly defined assumptions.

FactHarbor introduces structure, transparency, and comparative reasoning. It provides:
* Multiple valid scenarios for ambiguous claims.
* Transparent assumptions, definitions, and boundaries.
* Full evidence provenance.
* Likelihood-based verdicts (one per scenario).
* Versioning and temporal change tracking.
* Hybrid AI–human collaboration.

== 3. Core Concepts ==

* **Claim**: A statement that needs structured interpretation.
* **Scenario**: A set of definitions, assumptions, boundaries, and context under which a claim is interpreted.
* **Evidence**: Information supporting or contradicting a scenario.
* **Verdict**: A likelihood estimate based on weighted evidence **for a specific scenario**.
* **Summary View**: A user-facing overview of a claim and its verdicts.
* **AKEL**: The AI subsystem for drafting and assistance (human-supervised).
* **Federation**: Decentralized nodes hosting datasets.
* **Truth Landscape**: The aggregation of multiple scenario-dependent verdicts, showing where a claim is plausible.
* **Time Evolution**: Versioning of all entities, allowing historical views.

== 4. Functional Lifecycle ==

The system follows a six-step lifecycle:

1. **Claim submission**: Automatic extraction and normalisation; cluster detection.
1. **Scenario building**: Clarifying definitions and assumptions; AI proposals with human approval.
1. **Evidence handling**: AI-assisted retrieval; human assessment of reliability; explicit scenario linking.
1. **Verdict creation**: AI-generated draft verdicts; human refinement; reasoning explanations.
1. **Public presentation**: Concise summaries; Truth Landscape comparison; deep dives.
1. **Time evolution**: Versioning of all entities; re-evaluation triggers when evidence changes.
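
As a concrete illustration of step 4, a draft verdict could be derived from human-assessed evidence weights. The weighted ratio below is only a sketch under assumed inputs; the specification does not prescribe an aggregation formula, and `draft_verdict` is a hypothetical helper name.

```python
def draft_verdict(evidence):
    """Draft a scenario likelihood from weighted evidence.

    `evidence` is a list of (supports: bool, reliability: float) pairs,
    with reliability in [0, 1] as assessed by human reviewers.
    Illustrative only: real aggregation may differ.
    """
    total = sum(reliability for _, reliability in evidence)
    if total == 0:
        return 0.5  # no usable evidence: maximally uncertain
    supporting = sum(reliability for supports, reliability in evidence if supports)
    return supporting / total
```

For example, one strong supporting item (0.8) against one weak contradicting item (0.2) yields a likelihood of 0.8 for that scenario; the same claim under a different scenario gets its own, independent verdict.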

== 5. Chapters ==