FactHarbor
Why FactHarbor Exists
The Problem
We live in an environment where information conflicts, misleading content spreads fast, and many people lack the time or tools to verify complex claims. Headlines, soundbites, and viral posts often win over careful reasoning, and “fact-checks” frequently reduce everything to a simple verdict without explaining the why behind it. The result is frustration, confusion, and growing distrust — not just in institutions, but in the very idea that complex questions can be assessed fairly.
Our Response
FactHarbor acts as a navigation system for complex claims. We don’t just say “true” or “false” — we make assumptions, evidence, and context visible so you can form your own judgement. Instead of asking you to trust an authority, we show you how different conclusions are reached, where the evidence is strong or weak, and where reasonable people might still disagree.
From Claim to Conclusion – Reasoning You Can Trust Because You Can Inspect It
What FactHarbor Does
FactHarbor helps people make sense of contested questions without stripping away nuance. Instead of chasing quick binary verdicts, we break topics into clear, interconnected claims. For each claim, we highlight the context and assumptions it depends on and link directly to the evidence that supports or challenges it.
Where It Helps
Whether you’re analyzing public policy, science, or everyday decisions, FactHarbor provides a transparent way to compare perspectives. Our underlying model creates reusable “claim maps” that journalists, educators, and researchers can build on, while remaining accessible to anyone seeking a clear, honest overview.
Why It’s Trustworthy
At our core is a simple principle: reasoning must be as transparent as the result. Our rules for structuring claims and weighing evidence are documented in the open — designed to be reviewed, challenged, and improved by you. Trust comes not from authority, but from a process anyone can inspect.
AI’s Role
FactHarbor assumes that AI is a powerful pattern-finder, not a built-in truth detector. That’s why we design the system around explicit frames, assumptions, and evidence, and keep humans in the loop to check which “puzzle” we’re actually solving. AI helps us map the landscape; humans decide what’s trustworthy and how it fits the real world.
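As a rough illustration of that division of labour, an AI-suggested scenario could sit behind an explicit review gate until a person accepts or rejects it. The names below are a hypothetical sketch, not the actual FactHarbor API:

```typescript
// Hypothetical sketch of the human-in-the-loop gate; none of these names come from FactHarbor itself.
type ReviewStatus = "proposed" | "accepted" | "rejected";

interface ScenarioProposal {
  id: string;
  description: string;   // the context or "frame" the AI suggests
  status: ReviewStatus;
  reviewedBy?: string;    // a human reviewer is recorded before anything is published
}

// AI output stays "proposed" until a person explicitly accepts or rejects it.
function review(p: ScenarioProposal, reviewer: string, accept: boolean): ScenarioProposal {
  return { ...p, status: accept ? "accepted" : "rejected", reviewedBy: reviewer };
}
```

The design point is simply that nothing proposed by a model becomes publishable without a recorded human decision.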
How It Works: The Core Concepts
FactHarbor structures reasoning into transparent steps, moving beyond simple headlines (a rough data-model sketch follows the list):
- Claims & Clusters – We group similar real-world statements into clusters to avoid duplicates and keep related claims together.
- Scenarios – A claim might be true in one context but false in another. We define these contexts (assumptions, definitions) explicitly as Scenarios.
- Evidence – Data and sources are linked to specific scenarios, not just to the claim in general.
- Verdicts – We assign likelihoods (e.g., “Highly likely”, “Unsubstantiated”) to each scenario, based on the available evidence.
- Truth Landscape – The result is not a single word, but a landscape showing where a claim holds up, where it fails, and where the evidence is still unclear.
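To make these relationships concrete, here is a minimal data-model sketch in TypeScript. All type and field names are illustrative assumptions; the authoritative definitions live in the Specification:

```typescript
// Illustrative data-model sketch; names and fields are assumptions, not the published schema.
type Likelihood = "highly likely" | "likely" | "unclear" | "unlikely" | "unsubstantiated";

interface Claim {
  id: string;
  text: string;           // the real-world statement as submitted
  clusterId: string;      // similar claims share a cluster
}

interface Scenario {
  id: string;
  clusterId: string;
  assumptions: string[];  // explicit definitions and context the claim depends on
}

interface Evidence {
  id: string;
  scenarioId: string;     // evidence is linked to a scenario, not to the bare claim
  sourceUrl: string;
  qualityNote: string;
}

interface Verdict {
  scenarioId: string;
  likelihood: Likelihood;
  reasoning: string;
}

// The "Truth Landscape" for a cluster is the set of its scenario-level verdicts.
interface TruthLandscape {
  clusterId: string;
  verdicts: Verdict[];
}
```

The point the sketch captures is that evidence and verdicts attach to scenarios, and the Truth Landscape is simply the collection of scenario-level verdicts for a cluster.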
The Lifecycle: From Input to Verdict
Data in FactHarbor flows through a structured, auditable process (sketched in code after the list):
- Submission: Text or URLs are submitted and normalised.
- Scenario Building: AI proposes contexts; humans refine definitions and boundaries.
- Evidence Handling: Evidence is retrieved, assessed for quality, and linked.
- Verdict Creation: Reasoning and likelihoods are drafted for each scenario.
- Public Presentation: The “Truth Landscape” is published for users to explore.
- Time Evolution: When new evidence arrives, we re-evaluate. Everything is versioned.
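Here is a minimal sketch of how that lifecycle could be tracked per claim cluster. The stage values mirror the steps above; everything else (names, fields, versioning) is an assumption, not the published schema:

```typescript
// Illustrative lifecycle tracking; stage names mirror the steps above, all other details are assumed.
type Stage =
  | "submitted"
  | "scenario-building"
  | "evidence-handling"
  | "verdict-creation"
  | "published";

interface ClusterState {
  clusterId: string;
  stage: Stage;
  version: number;        // bumped on every re-evaluation
  history: { stage: Stage; version: number; at: string }[];
}

// When new evidence arrives, the cluster re-enters evidence handling under a new version.
function reopenForNewEvidence(state: ClusterState, now: string): ClusterState {
  const version = state.version + 1;
  return {
    ...state,
    stage: "evidence-handling",
    version,
    history: [...state.history, { stage: "evidence-handling", version, at: now }],
  };
}
```

Versioning every transition is what keeps the process auditable: earlier verdicts remain inspectable even after a re-evaluation.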
Explore FactHarbor
Organisation
Structure, Governance, and Funding.
Discover how we are organised, how decisions are made, and how you can contribute.
Specification
Deep Technical Specs.
Dive into the Architecture, Data Models, API definitions, and detailed algorithms.
- Go to Specification
- (Draft content available in Holding Page)