Mission & Purpose

Mission

FactHarbor brings clarity and transparency to a world full of unclear, controversial, and misleading information by shedding light on the context, assumptions, and evidence behind claims, empowering people to understand them better and judge wisely.

Purpose

Modern society faces a deep informational crisis:

  • Misinformation spreads faster than corrections
  • High-quality evidence is buried under noise
  • Interpretations change depending on context — but this is rarely made explicit
  • Users lack tools to understand *why* information conflicts
  • Claims are evaluated without clearly defined assumptions
  • The concept of "truth" is increasingly politicized and weaponized
  • AI accelerates both clarity and manipulation

FactHarbor exists to bring structure and transparency into this chaos.

It provides:

  • A structured way to interpret claims
  • Multiple valid scenarios when a claim is ambiguous
  • Transparent assumptions, definitions, and boundaries
  • Complete evidence provenance
  • Likelihood-based verdicts rather than binary labels
  • Explanations for why interpretations differ
  • Neutral tools that reduce manipulation and bias

The platform is built to:

  • Reveal nuance
  • Expose misleading interpretations
  • Resolve ambiguity by making interpretations explicit
  • Help users understand how conclusions differ across valid contexts
  • Support well-grounded, independent judgments

FactHarbor does not declare absolute truths.  
It clarifies how thinking works, why disagreement arises, and what can be responsibly concluded.


Core Problems FactHarbor Solves

Problem 1 — Misinformation & Manipulation

Falsehoods and distortions spread rapidly through:

  • Political propaganda
  • Social media amplification
  • Coordinated influence networks
  • AI-generated fake content

Users need a structured system that resists manipulation and makes reasoning transparent.

Problem 2 — Missing Context Behind Claims

Most claims change meaning drastically depending on:

  • Definitions
  • Assumptions
  • Boundaries
  • Interpretation

FactHarbor reveals and compares these variations.

Problem 3 — "Binary Fact Checks" Fail

Most fact-checking simplifies complex claims into:

  • True
  • Mostly True
  • False

This hides legitimate contextual differences.

FactHarbor replaces binary judgment with scenario-based, likelihood-driven evaluation.

Problem 4 — Good Evidence Is Hard to Find

High-quality evidence exists — but users often cannot:

  • Locate it
  • Assess its reliability
  • Understand how it fits into a scenario
  • Compare it with competing evidence

FactHarbor aggregates, assesses, and organizes evidence with full transparency.

Problem 5 — Claims Evolve Over Time

Research and understanding change:

  • New studies emerge
  • Old studies are retracted
  • Consensus shifts

FactHarbor provides:

  • Full entity versioning
  • Verdict timelines
  • Automatic re-evaluation when inputs change

Problem 6 — Users Cannot See Why People Disagree

People often assume others are ignorant or dishonest, when disagreements typically arise from:

  • Different definitions
  • Different implicit assumptions
  • Different evidence
  • Different contexts

FactHarbor exposes these underlying structures so disagreements become understandable, not divisive.


Core Concepts

Claim

A user- or AI-submitted statement whose meaning is often ambiguous and requires structured interpretation.

Key fields include:

  • Text
  • Type (literal, metaphorical, rhetorical, supernatural, etc.)
  • Evaluability
  • Safety classification
  • Risk tier
  • Version metadata

A claim does not receive a single verdict.  
It branches into scenarios that clarify its meaning.
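
As a rough sketch, the fields above can be pictured as a record like the following (TypeScript is used here purely for illustration; the field names and enum values are assumptions, not the actual FactHarbor schema):

  // Illustrative only: field names and enum values are assumptions, not the real schema.
  type ClaimType = "literal" | "metaphorical" | "rhetorical" | "supernatural";

  interface Claim {
    id: string;
    text: string;                          // the submitted statement
    type: ClaimType;
    evaluable: boolean;                    // evaluability: can this claim be assessed at all?
    safetyClass: string;                   // safety classification label
    riskTier: "low" | "medium" | "high";   // risk tier driving publication rules
    version: number;                       // version metadata
    createdAt: string;                     // ISO 8601 timestamp
    scenarioIds: string[];                 // a claim branches into scenarios, not one verdict
  }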

Scenario

A structured interpretation that clarifies what the claim means under a specific set of:

  • Boundaries
  • Definitions
  • Assumptions
  • Contextual conditions

Multiple scenarios allow claims to be understood fairly and without political or ideological bias.
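
A scenario can likewise be pictured as a small record; the shape below is again an illustrative assumption rather than the platform's schema:

  // Illustrative only: one structured interpretation of a claim.
  interface Scenario {
    id: string;
    claimId: string;                        // the claim this interpretation belongs to
    boundaries: string[];                   // e.g. time period, population, geography
    definitions: Record<string, string>;    // key terms mapped to the definition used here
    assumptions: string[];                  // explicit assumptions the interpretation relies on
    context: string;                        // contextual conditions under which the claim is read
  }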

Evidence

Information that supports or contradicts a scenario.

Evidence includes:

  • Empirical studies
  • Experimental data
  • Expert consensus
  • Historical records
  • Contextual background
  • Absence-of-evidence signals

Evidence evolves through versioning and includes reliability assessment.
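
Continuing the same illustrative style, an evidence item might carry its kind, provenance, reliability, and version roughly like this (all names are assumptions):

  // Illustrative only: evidence attached to a scenario, with provenance and reliability.
  type EvidenceKind =
    | "empirical-study"
    | "experimental-data"
    | "expert-consensus"
    | "historical-record"
    | "contextual-background"
    | "absence-of-evidence";

  interface Evidence {
    id: string;
    scenarioId: string;
    kind: EvidenceKind;
    supports: boolean;        // true if it supports the scenario, false if it contradicts it
    source: string;           // provenance: citation, URL, or archive reference
    reliability: number;      // reliability assessment in [0, 1]
    version: number;          // evidence evolves through versioning
  }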

Verdict

A likelihood estimate for a claim within a specific scenario based on:

  • Evidence quality
  • Evidence quantity
  • Strength of assumptions
  • Methodological reliability
  • Uncertainty factors
  • Comparison with competing scenarios

Each verdict is versioned and includes a historical timeline.
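
As one simplified way to picture the aggregation, the sketch below computes a reliability-weighted share of supporting evidence. The real evaluation would also weigh assumption strength, methodology, uncertainty, and competing scenarios; the function and field names here are hypothetical.

  // Minimal slice of the Evidence shape sketched above, kept local so this snippet stands alone.
  type WeightedEvidence = { supports: boolean; reliability: number };

  interface VerdictEntry {
    scenarioId: string;
    likelihood: number;       // estimate in [0, 1] for the claim under this scenario
    computedAt: string;       // ISO 8601 timestamp; entries accumulate into a timeline
    evidenceIds: string[];    // the inputs that produced this entry
  }

  // Reliability-weighted share of supporting evidence (illustrative aggregation only).
  function estimateLikelihood(evidence: WeightedEvidence[]): number {
    const totalWeight = evidence.reduce((sum, e) => sum + e.reliability, 0);
    if (totalWeight === 0) return 0.5;                 // no usable evidence: maximal uncertainty
    const supportingWeight = evidence
      .filter((e) => e.supports)
      .reduce((sum, e) => sum + e.reliability, 0);
    return supportingWeight / totalWeight;
  }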

Summary View

A user-facing, simplified overview that:

  • Highlights the most common interpretation
  • Presents alternative scenarios
  • Explains why interpretations differ
  • Shows aggregated likelihoods
  • Communicates uncertainty clearly
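
A summary view could be represented, again hypothetically, as a compact record assembled from the scenarios and verdicts above:

  // Illustrative only: what a summary view could carry for the user interface.
  interface SummaryView {
    claimId: string;
    primaryScenarioId: string;                      // the most common interpretation
    alternativeScenarioIds: string[];               // other valid readings
    whyInterpretationsDiffer: string;               // short explanation of the divergence
    likelihoodByScenario: Record<string, number>;   // scenario id -> aggregated likelihood
    uncertainty: "low" | "moderate" | "high";       // communicated explicitly, not hidden
  }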

AI Knowledge Extraction Layer (AKEL)

The AI subsystem that:

  • Interprets claims
  • Proposes scenario drafts
  • Retrieves evidence
  • Classifies and summarizes sources
  • Drafts verdicts
  • Detects contradictions
  • Triggers re-evaluation when inputs change

AKEL outputs follow a risk-based publication model with quality gates and audit oversight.
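
Purely as an outline of these responsibilities, AKEL's stages could be described with an interface like the one below, reusing the Claim, Scenario, Evidence, and VerdictEntry shapes sketched earlier. None of the method names are real AKEL APIs.

  // Hypothetical stage interface; names and signatures are assumptions, not the real API.
  interface AkelPipeline {
    interpretClaim(text: string): Promise<Claim>;
    proposeScenarios(claim: Claim): Promise<Scenario[]>;
    retrieveEvidence(scenario: Scenario): Promise<Evidence[]>;
    draftVerdict(scenario: Scenario, evidence: Evidence[]): Promise<VerdictEntry>;
    detectContradictions(evidence: Evidence[]): string[];      // descriptions of conflicts found
    runQualityGates(draft: VerdictEntry): { passed: boolean; reasons: string[] };
    reevaluate(claimId: string): Promise<void>;                // triggered when inputs change
  }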

Decentralized Federation Model

FactHarbor supports a decentralized, multi-node architecture:

  • Each node stores its own claims, scenarios, and verdicts
  • Nodes synchronize via a federation protocol
  • Evidence may be stored locally or via IPFS
  • Communities, universities, or organizations can host their own nodes
  • A global, emergent consensus forms across the network without central authority

This increases resilience, autonomy, and scalability.
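
One way to picture the exchange is a sync message carrying versioned entities between nodes. The field names, the version rule, and the use of IPFS content identifiers (CIDs) below are illustrative assumptions, not a specified wire format.

  // Hypothetical sync message between nodes; not a specified protocol.
  interface FederatedEntity {
    kind: "claim" | "scenario" | "evidence" | "verdict";
    id: string;
    version: number;            // a receiving node keeps the newest version it accepts
    payload: unknown;           // the entity body as stored by the origin node
    evidenceLocation?: string;  // local URL or an IPFS content identifier (CID)
  }

  interface FederationSyncMessage {
    originNode: string;         // e.g. "https://factharbor.example-university.edu" (hypothetical)
    sentAt: string;             // ISO 8601 timestamp
    entities: FederatedEntity[];
  }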


Vision for Impact

FactHarbor aims to:

  • Reduce polarization by revealing the legitimate grounds for disagreement
  • Combat misinformation by providing structured, transparent evaluation
  • Empower users to make informed judgments based on evidence
  • Support deliberative democracy by clarifying complex policy questions
  • Enable federated knowledge so no single entity controls the truth
  • Resist manipulation through transparent reasoning and quality oversight
  • Evolve with research by maintaining versioned, updatable knowledge
