Core Problems FactHarbor Solves
Core Problems
Problem 1 — Misinformation & Manipulation
Falsehoods and distortions spread rapidly through:
- Political propaganda
- Social media amplification
- Coordinated influence networks
- AI-generated fake content
Users need a structured system that resists manipulation and makes reasoning transparent.
Problem 2 — Missing Context Behind Claims
Most claims change meaning drastically depending on:
- Definitions
- Assumptions
- Boundaries
- Interpretation
FactHarbor reveals and compares these variations.
Problem 3 — "Binary Fact Checks" Fail
Most fact-checking simplifies complex claims into:
- True
- Mostly True
- False
This hides legitimate contextual differences.
FactHarbor replaces binary judgment with scenario-based, likelihood-driven evaluation.
Problem 4 — Good Evidence Is Hard to Find
High-quality evidence exists — but users often cannot:
- Locate it
- Assess its reliability
- Understand how it fits into a scenario
- Compare it with competing evidence
FactHarbor aggregates, assesses, and organizes evidence with full transparency.
Problem 5 — Claims Evolve Over Time
Research and understanding change:
- New studies emerge
- Old studies are retracted
- Consensus shifts
FactHarbor provides:
- Full entity versioning
- Verdict timelines
- Automatic re-evaluation when inputs change
Problem 6 — Users Cannot See Why People Disagree
People often assume others are ignorant or dishonest, when disagreements typically arise from:
- Different definitions
- Different implicit assumptions
- Different evidence
- Different contexts
FactHarbor exposes these underlying structures so disagreements become understandable, not divisive.
Core Concepts
Claim
A user- or AI-submitted statement whose meaning is often ambiguous and requires structured interpretation.
A claim does not receive a single verdict — it branches into scenarios that clarify its meaning.
Scenario
A structured interpretation that clarifies what the claim means under a specific set of:
- Boundaries
- Definitions
- Assumptions
- Contextual conditions
Multiple scenarios allow claims to be understood fairly and without political or ideological bias.
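The claim-to-scenario relationship above can be sketched as a small data model. This is an illustrative sketch only; the class and field names are assumptions, not FactHarbor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One structured interpretation of a claim."""
    definitions: dict[str, str]   # key terms pinned to explicit meanings
    assumptions: list[str]        # implicit premises made explicit
    boundaries: str               # scope: time span, population, region
    context: str                  # conditions under which the claim is read

@dataclass
class Claim:
    """A submitted statement; verdicts attach to scenarios, never directly."""
    text: str
    scenarios: list[Scenario] = field(default_factory=list)

claim = Claim("Crime is rising")
claim.scenarios.append(Scenario(
    definitions={"crime": "violent crime reported to police"},
    assumptions=["reporting rates are stable over the period"],
    boundaries="one country, last 5 years",
    context="year-over-year comparison of official statistics",
))
print(len(claim.scenarios))  # 1
```

The key design point is that `Claim` carries no verdict field: likelihoods live one level down, on scenarios.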
Evidence
Information that supports or contradicts a scenario.
Evidence includes empirical studies, experimental data, expert consensus, historical records, contextual background, and absence-of-evidence signals.
Evidence evolves through versioning and includes reliability assessment.
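One simple way to realize "evidence evolves through versioning" is an append-only version history whose latest entry is the current view. The field names and the 0-to-1 reliability scale are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EvidenceVersion:
    source: str
    stance: str          # "supports" or "contradicts" a scenario
    reliability: float   # 0.0 .. 1.0, from source assessment (assumed scale)
    recorded_at: datetime

class Evidence:
    """Append-only history; older versions stay available for audit."""
    def __init__(self) -> None:
        self._versions: list[EvidenceVersion] = []

    def revise(self, source: str, stance: str, reliability: float) -> None:
        self._versions.append(EvidenceVersion(
            source, stance, reliability, datetime.now(timezone.utc)))

    @property
    def current(self) -> EvidenceVersion:
        return self._versions[-1]

ev = Evidence()
ev.revise("Doe et al. 2021", "supports", 0.8)
ev.revise("Doe et al. 2021 (retracted)", "supports", 0.1)  # reliability drops
print(ev.current.reliability)  # 0.1
```

Because versions are never overwritten, a retraction lowers the current reliability without erasing the record of what was believed earlier.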
Verdict
A likelihood estimate for a claim within a specific scenario, based on evidence quality, quantity, and methodology, on uncertainty factors, and on comparison with competing scenarios.
Each verdict is versioned and includes a historical timeline.
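To make "likelihood estimate" and "historical timeline" concrete, here is one toy aggregation: a reliability-weighted share of supporting evidence, recomputed whenever the evidence set changes. The weighting scheme is an assumption for illustration, not FactHarbor's actual scoring method.

```python
def scenario_likelihood(evidence: list[tuple[str, float]]) -> float:
    """evidence: (stance, reliability) pairs; returns a value in [0, 1].

    Reliability-weighted share of supporting evidence; 0.5 when no evidence.
    """
    support = sum(w for stance, w in evidence if stance == "supports")
    total = sum(w for _, w in evidence)
    return support / total if total else 0.5

# Verdict timeline: each re-evaluation appends a new (version, likelihood) pair.
timeline: list[tuple[int, float]] = []
ev = [("supports", 0.9), ("contradicts", 0.3)]
timeline.append((1, scenario_likelihood(ev)))   # ~0.75

ev.append(("contradicts", 0.6))                 # a new study arrives
timeline.append((2, scenario_likelihood(ev)))   # ~0.5
```

The timeline list is what makes verdicts auditable: the old likelihood is preserved alongside the new one rather than silently replaced.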
AI Knowledge Extraction Layer (AKEL)
The AI subsystem that interprets claims, proposes scenario drafts, retrieves evidence, classifies sources, drafts verdicts, detects contradictions, and triggers re-evaluation when inputs change.
AKEL outputs follow a risk-based publication model with quality gates and audit oversight.
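A risk-based publication model with quality gates might route AKEL drafts like this minimal sketch. The risk-score field, the thresholds, and the routing labels are all illustrative assumptions.

```python
def publish_decision(draft: dict) -> str:
    """Route an AKEL draft by an assumed risk score in [0, 1]."""
    risk = draft["risk_score"]
    if risk < 0.2:
        return "auto-publish"            # low risk: passes the gate unattended
    if risk < 0.6:
        return "publish-with-flag"       # visible, but marked for audit review
    return "hold-for-human-audit"        # high risk: never auto-published

print(publish_decision({"risk_score": 0.1}))   # auto-publish
print(publish_decision({"risk_score": 0.75}))  # hold-for-human-audit
```

The point of the gate is the third branch: above some risk threshold, AI output is held for human audit instead of reaching readers directly.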
Decentralized Federation Model
FactHarbor supports a decentralized, multi-node architecture in which each node stores its own data and synchronizes with peers via a federation protocol.
This increases resilience, autonomy, and scalability.
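One simple federation strategy consistent with the description above: each node keeps its own store and exchanges versioned records, adopting whichever version is newer. This is a hedged sketch; a real federation protocol would need richer conflict resolution and authentication.

```python
class Node:
    """A federation node with its own local store (illustrative model)."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.store: dict[str, tuple[int, str]] = {}  # id -> (version, payload)

    def put(self, entity_id: str, version: int, payload: str) -> None:
        held_version, _ = self.store.get(entity_id, (0, ""))
        if version > held_version:       # keep only the newest version seen
            self.store[entity_id] = (version, payload)

    def sync_from(self, peer: "Node") -> None:
        for entity_id, (version, payload) in peer.store.items():
            self.put(entity_id, version, payload)

a, b = Node("a"), Node("b")
a.put("claim:42", 1, "draft")
b.put("claim:42", 2, "revised")
a.sync_from(b)   # a adopts b's newer version
b.sync_from(a)
print(a.store["claim:42"])  # (2, 'revised')
```

Because every node holds a full copy of what it has seen, no single node's failure or censorship removes an entity from the federation, which is the resilience property the model is after.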
Vision for Impact
FactHarbor aims to:
- Reduce polarization by revealing legitimate grounds for disagreement
- Combat misinformation by providing structured, transparent evaluation
- Empower users to make informed judgments based on evidence
- Support deliberative democracy by clarifying complex policy questions
- Enable federated knowledge so no single entity controls the truth
- Resist manipulation through transparent reasoning and quality oversight
- Evolve with research by maintaining versioned, updatable knowledge