Requirements
This page defines Roles, Responsibilities, and Rules for contributors and users of FactHarbor.
Roles
Reader
Who: Anyone (no login required).
Can:
- Browse and search claims
- View scenarios, evidence, verdicts, and timelines
- Compare scenarios and explore assumptions
- Flag issues, errors, contradictions, or suspicious patterns
- Use filters, search, and visualization tools
- Create personal views (saved searches, bookmarks - local browser storage)
- Submit claims by providing text to analyze; new claims are created automatically unless an identical claim already exists in the system
Cannot:
- Modify existing content
- Access draft content
- Participate in governance decisions
Note: Readers can request human review of AI-generated content by flagging it.
Contributor
Who: Registered and logged-in users (extends Reader capabilities).
Can:
- Everything a Reader can do
- Submit claims
- Submit evidence
- Provide feedback
- Suggest scenarios
- Flag content for review
- Request human review of AI-generated content
Cannot:
- Publish or mark content as "reviewed" or "approved"
- Override expert or maintainer decisions
- Directly modify AKEL or quality gate configurations
Reviewer
Who: Trusted community members, appointed by maintainers.
Can:
- Review contributions from Contributors and AKEL drafts
- Validate AI-generated content (Mode 2 → Mode 3 transition)
- Edit claims, scenarios, and evidence
- Add clarifications or warnings
- Change content status: `draft` → `in review` → `published` / `rejected`
- Approve or reject Tier B and C content for "Human-Reviewed" status
- Flag content for expert review
- Participate in audit sampling
Cannot:
- Approve Tier A content for "Human-Reviewed" status (requires Expert)
- Change governance rules
- Unilaterally change expert conclusions without process
- Bypass quality gates
Note on AI-Drafted Content:
- Reviewers can validate AI-generated content (Mode 2) to promote it to "Human-Reviewed" (Mode 3)
- For Tier B and C, Reviewers have approval authority
- For Tier A, only Experts can grant "Human-Reviewed" status
Expert (Domain-Specific)
Who: Subject-matter specialists in specific domains (medicine, law, science, etc.).
Can:
- Everything a Reviewer can do
- Final authority on Tier A content "Human-Reviewed" status
- Validate complex or controversial claims in their domain
- Define domain-specific quality standards
- Set reliability thresholds for domain sources
- Participate in risk tier assignment review
- Override AKEL suggestions in their domain (with documentation)
Cannot:
- Change platform governance policies
- Approve content outside their expertise domain
- Bypass technical quality gates (but can flag for adjustment)
Specialization:
- Experts are domain-specific (e.g., "Medical Expert", "Legal Expert", "Climate Science Expert")
- Cross-domain claims may require multiple expert reviews
Auditor
Who: Reviewers or Experts assigned to sampling audit duties.
Can:
- Review sampled AI-generated content against quality standards
- Validate quality gate enforcement
- Identify patterns in AI errors or hallucinations
- Provide feedback for system improvement
- Flag content for immediate review if errors found
- Contribute to audit statistics and transparency reports
Cannot:
- Change audit sampling algorithms (maintainer responsibility)
- Bypass normal review workflows
- Audit content they personally created
Selection:
- Auditors selected based on domain expertise and review quality
- Rotation to prevent audit fatigue
- Stratified assignment (Tier A auditors need higher expertise)
Audit Focus:
- Tier A: recommended sampling rate 30-50%, expert auditors
- Tier B: recommended sampling rate 10-20%, reviewer/expert auditors
- Tier C: recommended sampling rate 5-10%, reviewer auditors
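The stratified sampling above can be sketched as follows. This is an illustrative assumption, not platform code: the concrete rates are midpoints of the recommended ranges, and the item shape and function name are invented for the example.

```python
import random

# Hypothetical sketch: stratified audit sampling by risk tier.
# Rates are midpoints of the recommended ranges above (an assumption).
SAMPLING_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_audit_sample(items, rng=None):
    """Pick items for audit; each item is a dict with a 'tier' key."""
    rng = rng or random.Random()
    return [item for item in items
            if rng.random() < SAMPLING_RATES[item["tier"]]]

# Example: 300 published items, tiers cycling A/B/C.
items = [{"id": i, "tier": t} for i, t in enumerate("ABC" * 100)]
sample = select_audit_sample(items, rng=random.Random(42))
```

In practice a sampler would also stratify by domain and route Tier A items to expert auditors, as described above.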
Moderator
Who: Maintainers or trusted long-term contributors.
Can:
- All Reviewer and Expert capabilities (cross-domain)
- Manage user accounts and permissions
- Handle disputes and conflicts
- Enforce community guidelines
- Suspend or ban abusive users
- Finalize publication status for sensitive content
- Review and adjust risk tier assignments
- Oversee audit system performance
Cannot:
- Change core data model or architecture
- Override technical system constraints
- Make unilateral governance decisions without consensus
Maintainer
Who: Core team members responsible for the platform.
Can:
- All Moderator capabilities
- Change data model, architecture, and technical systems
- Configure quality gates and AKEL parameters
- Adjust audit sampling algorithms
- Set and modify risk tier policies
- Make platform-wide governance decisions
- Access and modify backend systems
- Deploy updates and fixes
- Grant and revoke roles
Governance:
- Maintainers operate under organizational governance rules
- Major policy changes require Governing Team approval
- Technical decisions made collaboratively
Content Publication States
Mode 1: Draft
- Not visible to public
- Visible to contributor and reviewers
- Can be edited by contributor or reviewers
- Default state for failed quality gates
Mode 2: AI-Generated (Published)
- Public and visible to all users
- Clearly labeled as "AI-Generated, Awaiting Human Review"
- Passed all automated quality gates
- Risk tier displayed (A/B/C)
- Users can:
- Read and use content
- Request human review
- Flag for expert attention
- Subject to sampling audits
- Can be promoted to Mode 3 by reviewer/expert validation
Mode 3: Human-Reviewed (Published)
- Public and visible to all users
- Labeled as "Human-Reviewed" with reviewer/expert attribution
- Passed quality gates + human validation
- Highest trust level
- For Tier A, requires Expert approval
- For Tier B/C, Reviewer approval sufficient
Rejected
- Not visible to public
- Visible to contributor with rejection reason
- Can be resubmitted after addressing issues
- Rejection logged for transparency
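The publication states above form a small state machine. The sketch below is a hedged reading of the prose: the state names are shorthand for Mode 1/2/3 and Rejected, the demotion of AI-generated content back to draft (e.g. after an audit finding) and the terminality of Human-Reviewed are assumptions, and the function is illustrative.

```python
# Hypothetical sketch of the publication state machine described above.
# "draft" = Mode 1, "ai_generated" = Mode 2, "human_reviewed" = Mode 3.
TRANSITIONS = {
    "draft": {"ai_generated", "rejected"},        # gates pass -> Mode 2, or rejection
    "ai_generated": {"human_reviewed", "draft"},  # validated -> Mode 3; demotion assumed
    "human_reviewed": set(),                      # assumed terminal (edits start a new version)
    "rejected": {"draft"},                        # resubmission after addressing issues
}

def can_transition(current, target):
    """Return True if the state machine allows moving current -> target."""
    return target in TRANSITIONS.get(current, set())
```

For Tier A content, the `ai_generated -> human_reviewed` step would additionally require Expert (not just Reviewer) approval.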
Contribution Rules
All Contributors Must:
- Provide sources for claims
- Use clear, neutral language
- Avoid personal attacks or insults
- Respect intellectual property (cite sources)
- Accept community feedback gracefully
AKEL (AI) Must:
- Mark all outputs with `AuthorType = AI`
- Pass quality gates before Mode 2 publication
- Perform mandatory contradiction search
- Disclose confidence levels and uncertainty
- Provide traceable reasoning chains
- Flag potential bubbles or echo chambers
- Submit to audit sampling
Reviewers Must:
- Be impartial and evidence-based
- Document reasoning for decisions
- Escalate to experts when appropriate
- Participate in audits when assigned
- Provide constructive feedback
Experts Must:
- Stay within domain expertise
- Disclose conflicts of interest
- Document specialized terminology
- Provide reasoning for domain-specific decisions
- Participate in Tier A audits
Quality Standards
Source Requirements
- Primary sources preferred over secondary
- Publication date and author must be identifiable
- Sources should be accessible (avoid paywalled sources where possible)
- Contradictory sources must be acknowledged
- Echo chamber sources must be flagged
Claim Requirements
- Falsifiable or evaluable
- Clear definitions of key terms
- Boundaries and scope stated
- Assumptions made explicit
- Uncertainty acknowledged
Evidence Requirements
- Relevant to the claim and scenario
- Reliability assessment provided
- Methodology described (for studies)
- Limitations noted
- Conflicting evidence acknowledged
Risk Tier Assignment
- Automated (AKEL): initial tier suggested based on domain, keywords, and impact
- Human validation: Moderators or Experts can override AKEL suggestions
- Review: risk tiers are periodically reviewed based on audit outcomes
Tier A Indicators:
- Medical diagnosis or treatment advice
- Legal interpretation or advice
- Election or voting information
- Safety or security sensitive
- Major financial decisions
- Potential for significant harm
Tier B Indicators:
- Complex scientific causality
- Contested policy domains
- Historical interpretation with political implications
- Significant economic impact claims
Tier C Indicators:
- Established historical facts
- Simple definitions
- Well-documented scientific consensus
- Basic reference information
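A first-pass keyword heuristic for AKEL's initial tier suggestion could look like the sketch below. The keyword lists and function are illustrative assumptions; the real classifier would also weigh domain and impact, and any suggestion remains overridable by Moderators or Experts.

```python
# Hypothetical sketch of an initial risk-tier suggestion from keywords.
# Keyword lists are invented examples drawn from the indicators above.
TIER_KEYWORDS = {
    "A": ["diagnosis", "treatment", "legal advice", "election", "vaccine"],
    "B": ["causality", "policy", "economic impact"],
}

def suggest_tier(text):
    """Suggest a risk tier; highest-risk indicators are checked first."""
    lowered = text.lower()
    for tier in ("A", "B"):
        if any(kw in lowered for kw in TIER_KEYWORDS[tier]):
            return tier
    return "C"  # default: basic reference / established facts
```

A production version would return a confidence score alongside the tier so that borderline cases are routed to human validation.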
User Role Hierarchy Diagram
User Class Diagram
This diagram shows the complete user role hierarchy for Test.FactHarborV09.
```mermaid
classDiagram
class Reader {
+String SessionID
+String LocalPreferences
+browse() View all published content
+search() Search claims and scenarios
+compare() Compare scenarios
+flag() Flag issues or errors
+submitClaim() Submit text for automatic claim extraction
}
class Contributor {
+String UserID
+String DisplayName
+String Email
+DateTime RegisteredAt
+submitEvidence() Attach sources
+proposeScenario() Draft scenarios
+comment() Participate in discussions
+requestReview() Request human review
}
class TechnicalUser {
+String SystemID
+String SystemName
+String Purpose
+automatedProcess() Execute automated tasks
+systemIntegration() Integrate with external systems
}
class Reviewer {
+String ReviewerID
+String[] Domains
+DateTime AppointedAt
+review() Review contributions and AI drafts
+validate() Validate AI-generated content Mode 2→3
+edit() Edit claims, scenarios, evidence
+approve() Approve Tier B/C for Human-Reviewed
+flagForExpert() Escalate to expert review
+audit() Participate in sampling audits
}
class Auditor {
+String AuditorID
+String[] AuditDomains
+Float AuditAccuracy
+reviewSample() Review sampled AI content
+validateQualityGates() Check gate enforcement
+identifyPatterns() Find AI error patterns
+provideFeedback() Improve system quality
}
class Expert {
+String ExpertID
+String ExpertiseArea
+String[] Certifications
+DateTime VerifiedAt
+authoritativeApproval() Final authority Tier A
+validateComplex() Complex domain validation
+defineStandards() Set domain quality standards
+overrideAKEL() Override AI suggestions with docs
}
class Moderator {
+String ModeratorID
+String[] Responsibilities
+handleAbuse() Manage abuse reports
+manageUsers() User permissions
+enforceGuidelines() Community guidelines
+adjustRiskTiers() Review tier assignments
+overseeAudits() Audit system oversight
}
class Maintainer {
+String MaintainerID
+String[] SystemAccess
+configureSystem() Technical configuration
+manageRoles() Grant and revoke roles
+configureAKEL() Quality gates and parameters
+deployUpdates() System deployment
+setPolicy() Risk tier policies
+manageTechnicalUsers() Create and manage system accounts
}
class AKEL {
+String InstanceID
+Enum AuthorType "AI"
+extractClaims() Claim extraction
+classifyRisk() Risk tier assignment
+generateScenarios() Draft scenarios
+searchContradictions() Mandatory counter-evidence search
+validateQualityGates() Run 4 quality gates
+proposeVerdicts() Draft verdicts
}
Reader <|-- Contributor : extends
Reader <|-- TechnicalUser : system-type
Contributor <|-- Reviewer : content-track
Contributor <|-- Maintainer : technical-track
Reviewer <|-- Auditor : specialized-QA
Reviewer <|-- Expert : specialized-domain
Reviewer <|-- Moderator : specialized-process
AKEL --|> TechnicalUser : implements
AKEL ..> Contributor : creates-drafts-for
AKEL ..> Reviewer : submits-to
AKEL ..> Auditor : audited-by
Maintainer ..> TechnicalUser : manages
```
Human User Roles
This diagram shows the two-track progression for human users:
```mermaid
erDiagram
READER {
string SessionID PK
string LocalPreferences
datetime LastVisit
}
CONTRIBUTOR {
string UserID PK
string DisplayName
string Email
datetime RegisteredAt
}
REVIEWER {
string ReviewerID PK
string UserID FK
string[] Domains
datetime AppointedAt
}
AUDITOR {
string AuditorID PK
string ReviewerID FK
string[] AuditDomains
}
EXPERT {
string ExpertID PK
string ReviewerID FK
string ExpertiseArea
string[] Certifications
}
MODERATOR {
string ModeratorID PK
string ReviewerID FK
string[] Responsibilities
}
MAINTAINER {
string MaintainerID PK
string UserID FK
string[] SystemAccess
}
READER ||--|| CONTRIBUTOR : "registers-as"
CONTRIBUTOR ||--|| REVIEWER : "content-track"
CONTRIBUTOR ||--|| MAINTAINER : "technical-track"
REVIEWER ||--|| AUDITOR : "QA-specialist"
REVIEWER ||--|| EXPERT : "domain-specialist"
REVIEWER ||--|| MODERATOR : "process-specialist"
```
Technical and System Users
This diagram shows system processes and their management by Maintainers.
```mermaid
erDiagram
READER {
string SessionID PK
}
CONTRIBUTOR {
string UserID PK
}
MAINTAINER {
string MaintainerID PK
string UserID FK
string[] SystemAccess
}
TECHNICAL_USER {
string SystemID PK
string SystemName
string Purpose
string CreatedBy FK
string Status
}
AKEL {
string InstanceID PK
string SystemID FK
string Version
}
FEDERATION_SYNC {
string SyncBotID PK
string SystemID FK
string[] TrustedNodes
}
AUDIT_SCHEDULER {
string SchedulerID PK
string SystemID FK
string[] SamplingRules
}
READER ||--|| TECHNICAL_USER : "system-type"
CONTRIBUTOR ||--|| MAINTAINER : "technical-authority"
MAINTAINER ||--o{ TECHNICAL_USER : "creates-manages"
TECHNICAL_USER ||--|| AKEL : "AI-processing"
TECHNICAL_USER ||--|| FEDERATION_SYNC : "node-sync"
TECHNICAL_USER ||--|| AUDIT_SCHEDULER : "automated-audit"
```
Key Design Principles:
- Two tracks from Contributor: Content Track (Reviewer) and Technical Track (Maintainer)
- Technical Users: System processes (AKEL, bots) managed by Maintainers
- Separation of concerns: Editorial authority independent from technical authority
Functional Requirements
This page defines what the FactHarbor system must do to fulfill its mission.
Requirements are structured as FR (Functional Requirement) items and organized by capability area.
Claim Intake & Normalization
FR1 – Claim Intake
The system must support Claim creation from:
- Free-text input (from any Reader)
- URLs (web pages, articles, posts)
- Uploaded documents and transcripts
- Structured feeds (optional, e.g. from partner platforms)
- Automated ingestion (federation input)
- AKEL extraction from multi-claim texts
Automatic submission: Any Reader can submit text, and new claims are added automatically unless identical claims already exist.
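The duplicate check in FR1 can be sketched as a fingerprint lookup over normalized claim text. This is a minimal illustration: the normalization shown (lowercase, whitespace folding) is a placeholder for the richer FR2 pipeline, and all names here are invented for the example.

```python
import hashlib

# Hypothetical sketch of FR1's duplicate check: a submitted claim is only
# added if no identical (normalized) claim already exists.
_claims = {}  # fingerprint -> original claim text

def _fingerprint(text):
    # Placeholder normalization; the real pipeline is FR2's job.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def submit_claim(text):
    """Return (claim_text, created); created is False for duplicates."""
    fp = _fingerprint(text)
    if fp in _claims:
        return _claims[fp], False
    _claims[fp] = text
    return text, True
```

Because the original phrasing is preserved (FR2), a duplicate submission returns the already-stored claim rather than silently rewriting it.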
FR2 – Claim Normalization
- Convert diverse inputs into short, structured, declarative claims
- Preserve original phrasing for reference
- Avoid hidden reinterpretation; differences between original and normalized phrasing must be visible
FR3 – Claim Classification
- Classify claims by topic, domain, and type (e.g., quantitative, causal, normative)
- Assign risk tier (A/B/C) based on domain and potential impact
- Suggest which node / experts are relevant
FR4 – Claim Clustering
- Group similar claims into Claim Clusters
- Allow manual correction of cluster membership
- Provide an explanation of why two claims are considered part of the same cluster
Scenario System
FR5 – Scenario Creation
- Contributors, Reviewers, and Experts can create scenarios
- AKEL can propose draft scenarios
- Each scenario is tied to exactly one Claim Cluster
FR6 – Required Scenario Fields
Each scenario includes:
- Definitions (key terms)
- Assumptions (explicit, testable where possible)
- Boundaries (time, geography, population, conditions)
- Scope of evidence considered
- Intended decision / context (optional)
FR7 – Scenario Versioning
- Every change to a scenario creates a new version
- Previous versions remain accessible with timestamps and rationale
- ParentVersionID links versions
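FR7's version chain can be sketched with immutable records linked by `ParentVersionID`. The field names other than the parent link, and the `history` helper, are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of FR7: every edit creates a new immutable version
# linked to its parent via ParentVersionID.
@dataclass(frozen=True)
class ScenarioVersion:
    version_id: str
    parent_version_id: Optional[str]  # None for the first version
    rationale: str                    # why the change was made
    created_at: datetime

def history(versions, version_id):
    """Walk the parent chain from a version back to the root."""
    by_id = {v.version_id: v for v in versions}
    chain = []
    current = by_id.get(version_id)
    while current:
        chain.append(current)
        current = by_id.get(current.parent_version_id)
    return chain
```

Keeping versions append-only makes the "as of date X" view in FR14 a simple filter on `created_at`.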
FR8 – Scenario Comparison
- Users can compare scenarios side by side
- Show differences in assumptions, definitions, and evidence sets
Evidence Management
FR9 – Evidence Ingestion
- Attach external sources (articles, studies, datasets, reports, transcripts) to Scenarios
- Allow multiple pieces of evidence per Scenario
- Support large file uploads (with size limits)
FR10 – Evidence Assessment
For each piece of evidence:
- Assign reliability / quality ratings
- Capture who rated it and why
- Indicate known limitations, biases, or conflicts of interest
- Track evidence version history
FR11 – Evidence Linking
- Link one piece of evidence to multiple scenarios if relevant
- Make dependencies explicit (e.g., "Scenario A uses subset of evidence used in Scenario B")
- Use ScenarioEvidenceLink table with RelevanceScore
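The ScenarioEvidenceLink table from FR11 can be modeled as a many-to-many link record carrying a RelevanceScore. The snake_case field names, score range, and query helper below are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of FR11's ScenarioEvidenceLink: one piece of
# evidence may be linked to many scenarios, each link with its own score.
@dataclass(frozen=True)
class ScenarioEvidenceLink:
    scenario_id: str
    evidence_id: str
    relevance_score: float  # assumed range 0.0 (tangential) .. 1.0 (central)

def evidence_for(links, scenario_id, min_relevance=0.0):
    """Evidence linked to a scenario, most relevant first."""
    return sorted(
        (l for l in links
         if l.scenario_id == scenario_id and l.relevance_score >= min_relevance),
        key=lambda l: l.relevance_score, reverse=True)
```

Making the link explicit (rather than duplicating evidence per scenario) is what lets dependencies like "Scenario A uses a subset of Scenario B's evidence" be computed.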
Verdicts & Truth Landscape
FR12 – Scenario Verdicts
For each Scenario:
- Provide a probability- or likelihood-based verdict
- Capture uncertainty and reasoning
- Distinguish between AKEL draft and human-approved verdict
- Support Mode 1 (draft), Mode 2 (AI-generated), Mode 3 (human-reviewed)
FR13 – Truth Landscape
- Aggregate all scenario-specific verdicts into a "truth landscape" for a claim
- Make disagreements visible rather than collapsing them into a single binary result
- Show parallel scenarios and their respective verdicts
FR14 – Time Evolution
- Show how verdicts and evidence evolve over time
- Allow users to see "as of date X, what did we know?"
- Maintain complete version history for auditing
Workflow, Moderation & Audit
FR15 – Workflow States
- Draft → In Review → Published / Rejected
- Separate states for Claims, Scenarios, Evidence, and Verdicts
- Support Mode 1/2/3 publication model
FR16 – Moderation & Abuse Handling
- Allow Moderators to hide content or lock edits for abuse or legal reasons
- Keep internal audit trail even if public view is restricted
- Support user reporting and flagging
FR17 – Audit Trail
- Every significant action (create, edit, publish, delete/hide) is logged with:
- Who did it
- When (timestamp)
- What changed (diffs)
- Why (justification text)
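An FR17 entry capturing who / when / what / why can be sketched as below. The record shape is illustrative; in particular, the diff is simplified to a string.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of an FR17 audit-trail entry.
@dataclass(frozen=True)
class AuditEntry:
    actor: str           # who did it
    action: str          # create / edit / publish / delete / hide
    timestamp: datetime  # when
    diff: str            # what changed (simplified to a string here)
    justification: str   # why

def log_action(trail, actor, action, diff, justification):
    """Append an entry to the trail and return it."""
    entry = AuditEntry(actor, action, datetime.now(timezone.utc),
                       diff, justification)
    trail.append(entry)
    return entry
```

Because entries are frozen and append-only, the trail survives even when the public view of the content is restricted (FR16).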
Quality Gates & AI Review
FR18 – Quality Gate Validation
Before AI-generated content (Mode 2) publication, enforce:
- Gate 1: Source Quality
- Gate 2: Contradiction Search (MANDATORY)
- Gate 3: Uncertainty Quantification
- Gate 4: Structural Integrity
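The four gates form an all-or-nothing chain before Mode 2 publication. The sketch below illustrates that chain; the individual gate predicates are placeholders standing in for the real checks, and the content dict shape is an assumption.

```python
# Hypothetical sketch of the FR18 gate chain: all four gates must pass
# before AI-generated content is published; otherwise it stays in draft.
def gate_source_quality(content):
    return bool(content.get("sources"))

def gate_contradiction_search(content):
    return content.get("contradiction_searched", False)  # MANDATORY

def gate_uncertainty(content):
    return "confidence" in content

def gate_structural_integrity(content):
    return bool(content.get("claim_text"))

GATES = [gate_source_quality, gate_contradiction_search,
         gate_uncertainty, gate_structural_integrity]

def passes_quality_gates(content):
    return all(gate(content) for gate in GATES)
```

A real implementation would report which gate failed and why, so the draft state (Mode 1) carries actionable feedback.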
FR19 – Audit Sampling
- Implement stratified sampling by risk tier
- Recommended rates: 30-50% for Tier A, 10-20% for Tier B, 5-10% for Tier C
- Support audit workflow and feedback loop
FR20 – Risk Tier Assignment
- AKEL suggests tier based on domain, keywords, impact
- Moderators and Experts can override
- Risk tier affects publication workflow
Federation Requirements
FR21 – Node Autonomy
- Each node can run independently (local policies, local users, local moderation)
- Nodes decide which other nodes to federate with
- Trust levels: Trusted / Neutral / Untrusted
FR22 – Data Sharing Modes
Nodes must be able to:
- Share claims and summaries only
- Share selected claims, scenarios, and verdicts
- Share full underlying evidence metadata where allowed
- Opt-out of sharing sensitive or restricted content
FR23 – Synchronization & Conflict Handling
- Changes from remote nodes must be mergeable or explicitly conflict-marked
- Conflicting verdicts are allowed and visible; not forced into consensus
- Support push/pull/subscription synchronization
FR24 – Federation Discovery
- Discover other nodes and their capabilities (public endpoints, policies)
- Allow whitelisting / blacklisting of nodes
- Global identifier format: `factharbor://node_url/type/local_id`
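The global identifier format above can be parsed with a simple split; the function name and minimal error handling are illustrative.

```python
# Hypothetical sketch of parsing the FR24 global identifier format:
#   factharbor://node_url/type/local_id
def parse_global_id(identifier):
    prefix = "factharbor://"
    if not identifier.startswith(prefix):
        raise ValueError("not a factharbor identifier")
    node_url, obj_type, local_id = identifier[len(prefix):].split("/", 2)
    return {"node": node_url, "type": obj_type, "local_id": local_id}
```

Scoping identifiers by node URL is what lets federated nodes reference each other's claims, scenarios, and verdicts without a central registry.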
FR25 – Cross-Node AI Knowledge Exchange
- Share vector embeddings for clustering
- Share canonical claim forms
- Share scenario templates
- Share contradiction alerts
- NEVER share model weights
- NEVER override local governance
Non-Functional Requirements
NFR1 – Transparency
- All assumptions, evidence, and reasoning behind verdicts must be visible
- AKEL involvement must be clearly labeled
- Users must be able to inspect the chain of reasoning and versions
NFR2 – Security
- Role-based access control
- Transport-level security (HTTPS)
- Secure storage of secrets (API keys, credentials)
- Audit trails for sensitive actions
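Role-based access control maps naturally onto the role hierarchy defined earlier on this page. The sketch below shows the inheritance walk; the permission names are illustrative assumptions, and the chain through Expert for Moderators reflects the statement that Moderators hold all Reviewer and Expert capabilities.

```python
# Hypothetical sketch of NFR2's role-based access control using the
# role hierarchy from the Roles section. Permission names are invented.
ROLE_INHERITS = {
    "contributor": "reader",
    "reviewer": "contributor",
    "expert": "reviewer",
    "moderator": "expert",     # Moderators hold Expert capabilities too
    "maintainer": "moderator",
}
PERMISSIONS = {
    "reader": {"browse", "search", "flag", "submit_claim"},
    "contributor": {"submit_evidence", "propose_scenario"},
    "reviewer": {"edit_content", "approve_tier_bc"},
    "expert": {"approve_tier_a"},
    "moderator": {"suspend_user"},
    "maintainer": {"configure_gates", "grant_roles"},
}

def has_permission(role, permission):
    """Walk up the inheritance chain until the permission is found."""
    while role:
        if permission in PERMISSIONS.get(role, set()):
            return True
        role = ROLE_INHERITS.get(role)
    return False
```

A real check would also scope Expert permissions to the expert's domains, which a flat role lattice like this cannot express on its own.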
NFR3 – Privacy & Compliance
- Configurable data retention policies
- Ability to redact or pseudonymize personal data when required
- Compliance hooks for jurisdiction-specific rules (e.g. GDPR-like deletion requests)
NFR4 – Performance
- POC: typical interactions < 2 s
- Release 1.0: < 300 ms for common read operations after caching
- Degradation strategies under load
NFR5 – Scalability
- POC: 50 internal testers on one node
- Beta 0: 100 external testers on one node
- Release 1.0: 2000+ concurrent users on a reasonably provisioned node
Technical targets for Release 1.0:
- Scalable monolith or early microservice architecture
- Sharded vector database (for semantic search)
- Optional IPFS or other decentralized storage for large artifacts
- Horizontal scalability for read capacity
NFR6 – Interoperability
- Open, documented API
- Modular AKEL that can be swapped or extended
- Federation protocols that follow open standards where possible
- Standard model for external integrations
NFR7 – Observability & Operations
- Metrics for performance, errors, and queue backlogs
- Logs for key flows (claim intake, scenario changes, verdict updates, federation sync)
- Health endpoints for monitoring
NFR8 – Maintainability
- Clear module boundaries (API, core services, AKEL, storage, federation)
- Backward-compatible schema migration strategy where feasible
- Configuration via files / environment variables, not hard-coded
NFR9 – Usability
- UI optimized for exploring complexity, not hiding it
- Support for saved views, filters, and user-level preferences
- Progressive disclosure: casual users see summaries, advanced users can dive deep
Release Levels
Proof of Concept (POC)
- Single node
- Limited user set (50 internal testers)
- Basic claim → scenario → evidence → verdict flow
- Minimal federation (optional)
- AI-generated publication (Mode 2) demonstration
- Quality gates active
Beta 0
- One or few nodes
- External testers (100)
- Expanded workflows and basic moderation
- Initial federation experiments
- Audit sampling implemented
Release 1.0
- 2000+ concurrent users
- Scalable architecture
- Sharded vector DB
- IPFS optional
- High automation (AKEL assistance)
- Multi-node federation with full sync protocol
- Mature audit system