Specification
This section defines the technical architecture, data models, and functional requirements of FactHarbor.
1. Mission
FactHarbor brings clarity and transparency to a world full of unclear, controversial, and misleading information by shedding light on the context, assumptions, and evidence behind claims — empowering people to better understand and judge wisely.
2. Purpose
Modern society faces a deep informational crisis:
- Misinformation spreads faster than corrections.
- High-quality evidence is buried under noise.
- Meanings shift depending on context — but this is rarely made explicit.
- Users lack tools to understand *why* information conflicts.
- Claims are evaluated without clearly defined assumptions.
FactHarbor introduces structure, transparency, and comparative reasoning. It provides:
- Multiple valid scenarios for ambiguous claims.
- Transparent assumptions, definitions, and boundaries.
- Full evidence provenance.
- Likelihood-based verdicts (one per scenario).
- Versioning and temporal change tracking.
- Hybrid AI–human collaboration.
3. Core Concepts
- Claim: A statement needing structured interpretation.
- Scenario: An interpretation of a claim made explicit through its definitions, assumptions, boundaries, and context.
- Evidence: Information supporting or contradicting a scenario.
- Verdict: Likelihood estimate based on weighted evidence for a specific scenario.
- Summary View: User-facing overview.
- AKEL (AI Knowledge Extraction Layer): AI subsystem for drafting and assistance (human supervised).
- Federation: Decentralized nodes hosting datasets.
- Truth Landscape: The aggregation of a claim's scenario-dependent verdicts, showing under which scenarios the claim is plausible.
- Time Evolution: Versioning of all entities allowing historical views.
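The relationships between these concepts can be illustrated with a minimal type sketch. All interface and field names below (Claim, Scenario, EvidenceLink, Verdict, truthLandscape) are illustrative assumptions chosen for readability, not the normative schema, which is defined in the Data Model chapter.

```typescript
// Non-normative sketch of the core entities and how they relate.
// All names and fields here are illustrative assumptions; the normative
// schema is defined in the Data Model chapter.

interface Claim {
  id: string;
  text: string;             // the statement needing structured interpretation
  scenarios: Scenario[];    // multiple valid interpretations of an ambiguous claim
  version: number;          // every entity is versioned (Time Evolution)
}

interface Scenario {
  id: string;
  definitions: string[];    // what key terms are taken to mean
  assumptions: string[];    // what is taken for granted
  boundaries: string;       // scope and context of this interpretation
  evidence: EvidenceLink[]; // evidence is linked explicitly to a scenario
  verdict?: Verdict;        // one likelihood-based verdict per scenario
}

interface EvidenceLink {
  sourceUrl: string;        // provenance of the evidence
  supports: boolean;        // supporting or contradicting the scenario
  reliability: number;      // human-assessed reliability weight in [0, 1]
}

interface Verdict {
  likelihood: number;       // likelihood estimate in [0, 1], based on weighted evidence
  reasoning: string;        // human-refined explanation of the estimate
}

// The Truth Landscape aggregates a claim's scenario-dependent verdicts.
function truthLandscape(claim: Claim): { scenarioId: string; likelihood: number }[] {
  return claim.scenarios
    .filter((s) => s.verdict !== undefined)
    .map((s) => ({ scenarioId: s.id, likelihood: s.verdict!.likelihood }));
}
```

In this sketch the one-verdict-per-scenario rule is expressed by attaching the optional verdict directly to the scenario rather than to the claim.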
4. Functional Lifecycle
The system follows a six-step lifecycle (sketched after the list):
1. Claim submission: Automatic extraction and normalisation; Cluster detection.
2. Scenario building: Clarifying definitions and assumptions; AI proposals with human approval.
3. Evidence handling: AI-assisted retrieval; Human assessment of reliability; Explicit scenario linking.
4. Verdict creation: AI-generated draft verdicts; Human refinement; Reasoning explanations.
5. Public presentation: Concise summaries; Truth Landscape comparison; Deep dives.
6. Time evolution: Versioning of all entities; Re-evaluation triggers when evidence changes.
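As a rough illustration of this lifecycle, the sketch below models the six stages and a re-evaluation trigger fired when evidence changes. The stage names and the trigger logic are illustrative assumptions, not prescribed behaviour.

```typescript
// Non-normative sketch of the six lifecycle stages and a re-evaluation trigger.
// Stage names and the trigger logic are illustrative assumptions only.

enum LifecycleStage {
  ClaimSubmission = "claim_submission",
  ScenarioBuilding = "scenario_building",
  EvidenceHandling = "evidence_handling",
  VerdictCreation = "verdict_creation",
  PublicPresentation = "public_presentation",
  TimeEvolution = "time_evolution",
}

interface VersionedEntity {
  id: string;
  version: number;
  stage: LifecycleStage;
}

// When linked evidence changes, the affected entity receives a new version and is
// returned to verdict creation, so earlier versions remain viewable.
function onEvidenceChanged(entity: VersionedEntity): VersionedEntity {
  return {
    ...entity,
    version: entity.version + 1,           // Time evolution: keep full version history
    stage: LifecycleStage.VerdictCreation, // re-evaluation trigger
  };
}
```

In this sketch a change in evidence bumps the version rather than overwriting the entity, matching the versioning and historical-view requirements above.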
5. Chapters
This specification is organized into the following sections:
Core Documentation
- AI Knowledge Extraction Layer (AKEL) - AI system architecture
- Architecture - System architecture and design
- Automation - Automated processes and workflows
- Data Model - Database schema and entities
- Requirements - Functional and non-functional requirements
- Workflows - Process workflows
Diagrams
- Diagrams - Visual architecture and workflow diagrams
Additional Resources
- Data Examples - Sample data structures
- FAQ - Frequently asked questions
- Federation & Decentralization - Future federation plans
- Review & Data Use - Data usage policies