Wiki source code of Core Problems FactHarbor Solves
Last modified by Robert Schaub on 2025/12/18 12:03
= Core Problems FactHarbor Solves =
(% class="box infomessage" %)
(((
**Our Mission**
FactHarbor brings clarity and transparency to a world full of unclear, contested, and misleading information by shedding light on the context, assumptions, and evidence behind claims.
)))
== 1. Core Problems ==
=== 1.1 Problem 1 — Misinformation & Manipulation ===
Falsehoods and distortions spread rapidly through:
* Political propaganda
* Social media amplification
* Coordinated influence networks
* AI-generated fake content
Users need a structured system that resists manipulation and makes reasoning transparent.
=== 1.2 Problem 2 — Missing Context Behind Claims ===
Most claims change meaning drastically depending on:
* Definitions
* Assumptions
* Boundaries
* Interpretation
FactHarbor reveals and compares these variations.
=== 1.3 Problem 3 — "Binary Fact Checks" Fail ===
Most fact-checking simplifies complex claims into:
* True
* Mostly True
* False
This hides legitimate contextual differences.
FactHarbor replaces binary judgment with scenario-based, likelihood-driven evaluation.
=== 1.4 Problem 4 — Good Evidence Is Hard to Find ===
High-quality evidence exists — but users often cannot:
* Locate it
* Assess its reliability
* Understand how it fits into a scenario
* Compare it with competing evidence
FactHarbor aggregates, assesses, and organizes evidence with full transparency.
=== 1.5 Problem 5 — Claims Evolve Over Time ===
Research and understanding change:
* New studies emerge
* Old studies are retracted
* Consensus shifts
FactHarbor provides:
* Full entity versioning
* Verdict timelines
* Automatic re-evaluation when inputs change
=== 1.6 Problem 6 — Users Cannot See Why People Disagree ===
People often assume others are ignorant or dishonest, when disagreements typically arise from:
* Different definitions
* Different implicit assumptions
* Different evidence
* Different contexts
FactHarbor exposes these underlying structures so disagreements become understandable, not divisive.
== 2. Core Concepts ==
=== 2.1 Claim ===
A user- or AI-submitted statement whose meaning is often ambiguous and requires structured interpretation.
A claim does not receive a single verdict — it branches into scenarios that clarify its meaning.
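
As an illustration only, a claim can be modeled as a plain record that carries no verdict of its own and simply references the scenarios branched from it. The field names below are assumptions for this sketch, not part of the specification.

{{code language="python"}}
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A user- or AI-submitted statement awaiting structured interpretation."""
    claim_id: str
    text: str                     # the raw, possibly ambiguous statement
    submitted_by: str             # user or AI submitter
    scenario_ids: List[str] = field(default_factory=list)  # interpretations branched from this claim
{{/code}}
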
=== 2.2 Scenario ===
A structured interpretation that clarifies what the claim means under a specific set of:
* Boundaries
* Definitions
* Assumptions
* Contextual conditions
Multiple scenarios allow claims to be understood fairly and without political or ideological bias.
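
Continuing the illustrative sketch above (field names are assumptions, not specification), a scenario pins down exactly those elements that change a claim's meaning:

{{code language="python"}}
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Scenario:
    """One structured interpretation of a claim."""
    scenario_id: str
    claim_id: str                                               # the claim being interpreted
    definitions: Dict[str, str] = field(default_factory=dict)   # key terms pinned to explicit meanings
    assumptions: List[str] = field(default_factory=list)        # implicit premises made explicit
    boundaries: List[str] = field(default_factory=list)         # scope limits such as time, place, population
    context: List[str] = field(default_factory=list)            # contextual conditions under which the claim is read
{{/code}}
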
=== 2.3 Evidence ===
Information that supports or contradicts a scenario.
Evidence includes empirical studies, experimental data, expert consensus, historical records, contextual background, and absence-of-evidence signals.
Evidence evolves through versioning and includes reliability assessment.
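
A minimal sketch of such an evidence record, again with assumed field names, showing where the reliability assessment and the version counter would live:

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Evidence:
    """One piece of information that supports or contradicts a scenario."""
    evidence_id: str
    scenario_id: str
    source: str            # study, dataset, expert statement, historical record, ...
    supports: bool         # True if it supports the scenario, False if it contradicts it
    reliability: float     # 0.0 (unreliable) to 1.0 (highly reliable) source assessment
    version: int = 1       # incremented whenever the evidence record is revised
{{/code}}
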
=== 2.4 Verdict ===
A likelihood estimate for a claim within a specific scenario based on evidence quality, quantity, methodology, uncertainty factors, and comparison with competing scenarios.
Each verdict is versioned and includes a historical timeline.
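
To make the idea concrete, here is a toy sketch that reuses the records above: a verdict stores a likelihood plus its earlier versions, and the aggregation shown is only a naive reliability-weighted average; FactHarbor's actual weighting also considers quantity, methodology, uncertainty, and competing scenarios.

{{code language="python"}}
from dataclasses import dataclass, field
from typing import List

@dataclass
class Verdict:
    """A likelihood estimate for a claim under one specific scenario."""
    scenario_id: str
    likelihood: float                    # 0.0 (very unlikely) to 1.0 (very likely)
    rationale: str
    version: int = 1
    history: List["Verdict"] = field(default_factory=list)  # earlier versions form the timeline

def naive_likelihood(evidence: List[Evidence]) -> float:
    """Toy aggregation: reliability-weighted share of supporting evidence."""
    total = sum(e.reliability for e in evidence)
    if total == 0:
        return 0.5  # no usable evidence: maximally uncertain
    return sum(e.reliability for e in evidence if e.supports) / total
{{/code}}
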
=== 2.5 AI Knowledge Extraction Layer (AKEL) ===
The AI subsystem that interprets claims, proposes scenario drafts, retrieves evidence, classifies sources, drafts verdicts, detects contradictions, and triggers re-evaluation when inputs change.
AKEL outputs follow a risk-based publication model with quality gates and audit oversight.
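
The following sketch only illustrates the order of these stages, reusing the toy records above; every function body is a stub (the real stages would call models, search backends, and review queues), and none of the names are taken from the specification.

{{code language="python"}}
from typing import List

def propose_scenarios(claim: Claim) -> List[Scenario]:
    """Interpret the claim and draft candidate scenarios (stub)."""
    return [Scenario(scenario_id=claim.claim_id + "-s1", claim_id=claim.claim_id)]

def retrieve_evidence(scenario: Scenario) -> List[Evidence]:
    """Retrieve and classify sources for one scenario (stub)."""
    return []

def has_contradiction(evidence: List[Evidence]) -> bool:
    """Flag scenarios whose evidence both supports and contradicts them."""
    return any(e.supports for e in evidence) and any(not e.supports for e in evidence)

def akel_process(claim: Claim) -> List[Verdict]:
    """Interpret, retrieve, assess, draft; publication then passes quality gates and audit."""
    drafts = []
    for scenario in propose_scenarios(claim):
        evidence = retrieve_evidence(scenario)
        rationale = "contradictory evidence flagged" if has_contradiction(evidence) else "draft"
        drafts.append(Verdict(scenario_id=scenario.scenario_id,
                              likelihood=naive_likelihood(evidence),
                              rationale=rationale))
    return drafts
{{/code}}
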
=== 2.6 Decentralized Federation Model ===
FactHarbor supports a decentralized, multi-node architecture in which each node stores its own data and synchronizes with other nodes via a federation protocol.
This increases resilience, autonomy, and scalability.
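
As a rough illustration of what node-local storage plus synchronization could look like (the actual federation protocol is not specified on this page; this naive pull ignores conflict resolution, signatures, and access control):

{{code language="python"}}
from typing import Dict

class Node:
    """One FactHarbor node: keeps its own store and pulls newer record versions from peers."""
    def __init__(self, name: str):
        self.name = name
        self.store: Dict[str, Evidence] = {}   # record id -> latest version held locally

    def sync_from(self, peer: "Node") -> None:
        """Naive pull-based sync: adopt any peer record that is newer than the local copy."""
        for record_id, record in peer.store.items():
            local = self.store.get(record_id)
            if local is None or record.version > local.version:
                self.store[record_id] = record
{{/code}}

For example, ##node_a.sync_from(node_b)## would pull node B's newer evidence versions into node A's local store, while node B's own data stays under node B's control.
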
== 3. Vision for Impact ==
FactHarbor aims to:
* **Reduce polarization** by revealing legitimate grounds for disagreement
* **Combat misinformation** by providing structured, transparent evaluation
* **Empower users** to make informed judgments based on evidence
* **Support deliberative democracy** by clarifying complex policy questions
* **Enable federated knowledge** so no single entity controls the truth
* **Resist manipulation** through transparent reasoning and quality oversight
* **Evolve with research** by maintaining versioned, updatable knowledge
== 4. Related Pages ==
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Functional Requirements>>FactHarbor.Specification.Requirements.WebHome]]
* [[Federation & Decentralization>>FactHarbor.Specification.Federation & Decentralization.WebHome]]