= Mission & Purpose =

== Mission ==

**FactHarbor brings clarity and transparency to a world full of unclear, controversial, and misleading information by shedding light on the context, assumptions, and evidence behind claims — empowering people to better understand and judge wisely.**

== Purpose ==

Modern society faces a deep informational crisis:

* Misinformation spreads faster than corrections
* High-quality evidence is buried under noise
* Interpretations change depending on context — but this is rarely made explicit
* Users lack tools to understand //why// information conflicts
* Claims are evaluated without clearly defined assumptions
* The concept of "truth" is increasingly politicized and weaponized
* AI accelerates both clarity and manipulation

FactHarbor exists to bring structure and transparency into this chaos.

It provides:

* A structured way to interpret claims
* Multiple valid scenarios when a claim is ambiguous
* Transparent assumptions, definitions, and boundaries
* Complete evidence provenance
* Likelihood-based verdicts rather than binary labels
* Explanations for why interpretations differ
* Neutral tools that reduce manipulation and bias

The platform is built to:

* Reveal nuance
* Expose misleading interpretations
* Eliminate ambiguity
* Help users understand how conclusions differ across valid contexts
* Support well-grounded, independent judgments

FactHarbor does not declare absolute truths.
It clarifies how thinking works, why disagreement arises, and what can be responsibly concluded.

----

== Core Problems FactHarbor Solves ==

=== Problem 1 — Misinformation & Manipulation ===

Falsehoods and distortions spread rapidly through:

* Political propaganda
* Social media amplification
* Coordinated influence networks
* AI-generated fake content

Users need a structured system that resists manipulation and makes reasoning transparent.

=== Problem 2 — Missing Context Behind Claims ===

Most claims change meaning drastically depending on:

* Definitions
* Assumptions
* Boundaries
* Interpretation

FactHarbor reveals and compares these variations.

=== Problem 3 — "Binary Fact Checks" Fail ===

Most fact-checking simplifies complex claims into:

* True
* Mostly True
* False

This hides legitimate contextual differences.

FactHarbor replaces binary judgment with scenario-based, likelihood-driven evaluation.

=== Problem 4 — Good Evidence Is Hard to Find ===

High-quality evidence exists — but users often cannot:

* Locate it
* Assess its reliability
* Understand how it fits into a scenario
* Compare it with competing evidence

FactHarbor aggregates, assesses, and organizes evidence with full transparency.

=== Problem 5 — Claims Evolve Over Time ===

Research and understanding change:

* New studies emerge
* Old studies are retracted
* Consensus shifts

FactHarbor provides:

* Full entity versioning
* Verdict timelines
* Automatic re-evaluation when inputs change

=== Problem 6 — Users Cannot See Why People Disagree ===

People often assume others are ignorant or dishonest, when disagreements typically arise from:

* Different definitions
* Different implicit assumptions
* Different evidence
* Different contexts

FactHarbor exposes these underlying structures so disagreements become understandable, not divisive.

----

== Core Concepts ==

=== Claim ===

A user- or AI-submitted statement whose meaning is often ambiguous and requires structured interpretation.

Key fields include:

* Text
* Type (literal, metaphorical, rhetorical, supernatural, etc.)
* Evaluability
* Safety classification
* Risk tier
* Version metadata

A claim does not receive a single verdict.
It branches into scenarios that clarify its meaning.
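
A minimal sketch of how such a claim record could be modeled, using Python dataclasses. The field names mirror the list above; the enum values, types, and defaults are illustrative assumptions, not FactHarbor's actual schema.

{{code language="python"}}
from dataclasses import dataclass, field
from enum import Enum

class ClaimType(Enum):
    LITERAL = "literal"
    METAPHORICAL = "metaphorical"
    RHETORICAL = "rhetorical"
    SUPERNATURAL = "supernatural"

@dataclass
class Claim:
    text: str              # the submitted statement
    claim_type: ClaimType  # literal, metaphorical, rhetorical, ...
    evaluable: bool        # can the claim be evaluated at all?
    safety_class: str      # safety classification label
    risk_tier: int         # drives publication and review gates
    version: int = 1       # version metadata
    scenario_ids: list[str] = field(default_factory=list)  # interpretations branch from here
{{/code}}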

=== Scenario ===

A structured interpretation that clarifies what the claim means under a specific set of:

* Boundaries
* Definitions
* Assumptions
* Contextual conditions

Multiple scenarios allow claims to be understood fairly and without political or ideological bias.
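
Continuing the same illustrative style (all names and types are assumptions, not the real data model), a scenario could be captured as:

{{code language="python"}}
from dataclasses import dataclass, field

@dataclass
class Scenario:
    claim_id: str  # the claim being interpreted
    boundaries: list[str] = field(default_factory=list)        # e.g. time span, region, population
    definitions: dict[str, str] = field(default_factory=dict)  # key terms and how they are read here
    assumptions: list[str] = field(default_factory=list)       # made explicit rather than left implicit
    context: str = ""                                           # contextual conditions for this reading
{{/code}}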

=== Evidence ===

Information that supports or contradicts a scenario.

Evidence includes:

* Empirical studies
* Experimental data
* Expert consensus
* Historical records
* Contextual background
* Absence-of-evidence signals

Evidence evolves through versioning and includes reliability assessment.
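
For illustration, an evidence item could carry its provenance and assessment like this; the field names and the 0.0 to 1.0 reliability scale are assumptions:

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Evidence:
    scenario_id: str    # the scenario this item supports or contradicts
    kind: str           # study, experiment, expert consensus, historical record, ...
    supports: bool      # True = supporting, False = contradicting
    reliability: float  # assessed reliability, 0.0 to 1.0 (illustrative scale)
    source_url: str     # provenance pointer to the original source
    version: int = 1    # evidence items are versioned as they evolve
{{/code}}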

=== Verdict ===

A likelihood estimate for a claim within a specific scenario based on:

* Evidence quality
* Evidence quantity
* Strength of assumptions
* Methodological reliability
* Uncertainty factors
* Comparison with competing scenarios

Each verdict is versioned and includes a historical timeline.
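
As a deliberately naive sketch of the idea (not FactHarbor's actual scoring model), a draft likelihood could weight supporting evidence by its assessed reliability, reusing the Evidence sketch above:

{{code language="python"}}
def draft_likelihood(evidence: list) -> float:
    """Naive illustration: reliability-weighted share of supporting evidence.
    The real evaluation also weighs assumptions, methodology, uncertainty,
    and competing scenarios."""
    if not evidence:
        return 0.5  # no evidence at all: maximal uncertainty
    supporting = sum(e.reliability for e in evidence if e.supports)
    total = sum(e.reliability for e in evidence)
    if total == 0:
        return 0.5  # only zero-reliability items: treat as no signal
    return supporting / total
{{/code}}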

=== Summary View ===

A user-facing, simplified overview that:

* Highlights the most common interpretation
* Presents alternative scenarios
* Explains why interpretations differ
* Shows aggregated likelihoods
* Communicates uncertainty clearly
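
The payload behind such a view might look roughly like this; every identifier and value below is invented purely for illustration:

{{code language="python"}}
# Hypothetical summary payload for one claim (all values are made up).
summary_view = {
    "claim_id": "c-123",
    "primary_scenario": "s-1",                  # the most common interpretation
    "alternative_scenarios": ["s-2", "s-3"],    # other valid readings
    "why_they_differ": "s-2 uses a narrower definition of the key term",
    "likelihoods": {"s-1": 0.82, "s-2": 0.35},  # aggregated per scenario
    "uncertainty": "moderate",                  # communicated explicitly, not hidden
}
{{/code}}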

=== AI Knowledge Extraction Layer (AKEL) ===

The AI subsystem that:

* Interprets claims
* Proposes scenario drafts
* Retrieves evidence
* Classifies and summarizes sources
* Drafts verdicts
* Detects contradictions
* Triggers re-evaluation when inputs change

AKEL outputs follow a risk-based publication model with quality gates and audit oversight.
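
A rough orchestration sketch of the steps listed above. Every function here is a hypothetical stand-in for the corresponding AKEL component, and the risk-tier gate is only an assumption about how the publication model could be expressed:

{{code language="python"}}
# Hypothetical stand-ins for the real AKEL components.
def propose_scenarios(claim):          return []   # draft interpretations of the claim
def retrieve_evidence(scenario):       return []   # search, classify, and summarize sources
def draft_verdict(scenario, evidence): return 0.5  # likelihood estimate for the scenario
def detect_contradictions(results):    return []   # conflicts across scenarios and evidence

def akel_process(claim, risk_tier: int):
    """Illustrative pipeline order only, not the actual implementation."""
    results = []
    for scenario in propose_scenarios(claim):
        evidence = retrieve_evidence(scenario)
        results.append((scenario, evidence, draft_verdict(scenario, evidence)))
    conflicts = detect_contradictions(results)
    # Risk-based publication: only low-risk, conflict-free drafts auto-publish;
    # everything else waits for quality gates and audit review.
    auto_publish = risk_tier == 0 and not conflicts
    return results, auto_publish
{{/code}}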

=== Decentralized Federation Model ===

FactHarbor supports a decentralized, multi-node architecture:

* Each node stores its own claims, scenarios, and verdicts
* Nodes synchronize via a federation protocol
* Evidence may be stored locally or via IPFS
* Communities, universities, or organizations can host their own nodes
* A global, emergent consensus forms across the network without central authority

This increases resilience, autonomy, and scalability.
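
To make the synchronization idea concrete, a record exchanged between nodes might look something like the following. The field names, node address, and hash formats are all assumptions, not the actual federation protocol:

{{code language="python"}}
# Hypothetical record one node might share with its peers (all values invented).
sync_record = {
    "origin_node": "https://factharbor.example-university.org",
    "entity_type": "verdict",           # claims, scenarios, evidence, verdicts, ...
    "entity_id": "v-42",
    "version": 3,                       # versioning lets peers resolve conflicts
    "payload_hash": "sha256:<digest>",  # integrity check for the synced content
    "evidence_refs": ["ipfs://<cid>"],  # evidence stored locally or on IPFS
    "signature": "ed25519:<sig>",       # authenticity without a central authority
}
{{/code}}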

----

== Vision for Impact ==

FactHarbor aims to:

* **Reduce polarization** by revealing the legitimate grounds for disagreement
* **Combat misinformation** by providing structured, transparent evaluation
* **Empower users** to make informed judgments based on evidence
* **Support deliberative democracy** by clarifying complex policy questions
* **Enable federated knowledge** so no single entity controls the truth
* **Resist manipulation** through transparent reasoning and quality oversight
* **Evolve with research** by maintaining versioned, updatable knowledge

----

== Related Pages ==

* [[Functional Requirements>>FactHarbor.Specification.Functional Requirements.WebHome]]
* [[Requirements (Roles)>>Archive.FactHarbor V0\.9\.18 copy.Specification.Requirements.WebHome]]
* [[AKEL (AI Knowledge Extraction Layer)>>Archive.FactHarbor V0\.9\.18 copy.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Governance>>FactHarbor.Organisation.Governance]]