Requirements

This page defines Roles, Responsibilities, and Rules for contributors and users of FactHarbor.

Roles

Reader

Who: Anyone (no login required).

Can:

  • Browse and search claims
  • View scenarios, evidence, verdicts, and timelines
  • Compare scenarios and explore assumptions
  • Flag issues, errors, contradictions, or suspicious patterns
  • Use filters, search, and visualization tools
  • Create personal views (saved searches and bookmarks, stored in local browser storage)
  • Submit claims by providing text to analyze; new claims are added automatically unless an equivalent claim already exists in the system (see the duplicate-check sketch after this role description)

Cannot:

  • Modify existing content
  • Access draft content
  • Participate in governance decisions

Note: Readers can request human review of AI-generated content by flagging it.
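
A minimal sketch of the duplicate check behind claim submission: the submitted text is normalized and a new claim is only created if no equivalent claim is already stored. The function name and normalization rules are illustrative assumptions, not FactHarbor's actual implementation.

```python
# Hypothetical duplicate-aware claim submission (illustrative only, not the actual FactHarbor code).

def normalize(text: str) -> str:
    """Normalize claim text for comparison: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def submit_claim(text: str, existing_claims: dict[str, str]) -> str:
    """Add a new claim unless an equivalent one already exists.

    existing_claims maps normalized claim text to a claim id.
    Returns the id of the newly created or already existing claim.
    """
    key = normalize(text)
    if key in existing_claims:
        return existing_claims[key]              # equivalent claim already in the system
    claim_id = f"claim-{len(existing_claims) + 1}"
    existing_claims[key] = claim_id              # new claim added automatically
    return claim_id
```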

Contributor

Who: Registered and logged-in users (extends Reader capabilities).

Can:

  • Everything a Reader can do
  • Submit claims
  • Submit evidence
  • Provide feedback
  • Suggest scenarios
  • Flag content for review
  • Request human review of AI-generated content

Cannot:

  • Publish or mark content as "reviewed" or "approved"
  • Override expert or maintainer decisions
  • Directly modify AKEL or quality gate configurations

Reviewer

Who: Trusted community members, appointed by maintainers.

Can:

  • Review contributions from Contributors and AKEL drafts
  • Validate AI-generated content (Mode 2 → Mode 3 transition)
  • Edit claims, scenarios, and evidence
  • Add clarifications or warnings
  • Change content status: `draft` → `in review` → `published` / `rejected`
  • Approve or reject Tier B and C content for "Human-Reviewed" status
  • Flag content for expert review
  • Participate in audit sampling

Cannot:

  • Approve Tier A content for "Human-Reviewed" status (requires Expert)
  • Change governance rules
  • Unilaterally change expert conclusions without process
  • Bypass quality gates

Note on AI-Drafted Content:

  • Reviewers can validate AI-generated content (Mode 2) to promote it to "Human-Reviewed" (Mode 3)
  • For Tier B and C, Reviewers have approval authority
  • For Tier A, only Experts can grant "Human-Reviewed" status
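
A minimal sketch of the approval rule above, assuming roles and tiers are plain strings; Moderators and Maintainers inherit Reviewer/Expert capabilities and are omitted for brevity. Illustrative only, not the actual FactHarbor code.

```python
# Illustrative encoding of the "Human-Reviewed" approval rule (not the actual FactHarbor code).

APPROVAL_ROLES = {
    "A": {"Expert"},               # Tier A: only domain Experts may grant "Human-Reviewed"
    "B": {"Reviewer", "Expert"},   # Tier B: Reviewer approval is sufficient
    "C": {"Reviewer", "Expert"},   # Tier C: Reviewer approval is sufficient
}

def can_grant_human_reviewed(role: str, tier: str) -> bool:
    """Return True if the given role may promote content of this risk tier to "Human-Reviewed"."""
    return role in APPROVAL_ROLES.get(tier, set())
```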

Expert (Domain-Specific)

Who: Subject-matter specialists in specific domains (medicine, law, science, etc.).

Can:

  • Everything a Reviewer can do
  • Final authority on Tier A content "Human-Reviewed" status
  • Validate complex or controversial claims in their domain
  • Define domain-specific quality standards
  • Set reliability thresholds for domain sources
  • Participate in risk tier assignment review
  • Override AKEL suggestions in their domain (with documentation)

Cannot:

  • Change platform governance policies
  • Approve content outside their expertise domain
  • Bypass technical quality gates (but can flag for adjustment)

Specialization:

  • Experts are domain-specific (e.g., "Medical Expert", "Legal Expert", "Climate Science Expert")
  • Cross-domain claims may require multiple expert reviews

Auditor

Who: Reviewers or Experts assigned to sampling audit duties.

Can:

  • Review sampled AI-generated content against quality standards
  • Validate quality gate enforcement
  • Identify patterns in AI errors or hallucinations
  • Provide feedback for system improvement
  • Flag content for immediate review if errors found
  • Contribute to audit statistics and transparency reports

Cannot:

  • Change audit sampling algorithms (maintainer responsibility)
  • Bypass normal review workflows
  • Audit content they personally created

Selection:

  • Auditors selected based on domain expertise and review quality
  • Rotation to prevent audit fatigue
  • Stratified assignment (Tier A auditors need higher expertise)

Audit Focus:

  • Tier A: recommended sampling rate 30-50%, expert auditors
  • Tier B: recommended sampling rate 10-20%, reviewer/expert auditors
  • Tier C: recommended sampling rate 5-10%, reviewer auditors
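
A minimal sketch of stratified audit sampling based on the recommended rates above; the exact values used here are midpoints of the ranges, chosen purely for illustration.

```python
# Illustrative stratified audit sampling (rates are midpoints of the recommended ranges on this page).
import random

AUDIT_SAMPLING_RATE = {
    "A": 0.40,   # Tier A: recommended 30-50%, expert auditors
    "B": 0.15,   # Tier B: recommended 10-20%, reviewer/expert auditors
    "C": 0.075,  # Tier C: recommended 5-10%, reviewer auditors
}

def select_for_audit(tier: str) -> bool:
    """Decide whether a published AI-generated item is pulled into the audit sample."""
    return random.random() < AUDIT_SAMPLING_RATE[tier]
```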

Moderator

Who: Maintainers or trusted long-term contributors.

Can:

  • All Reviewer and Expert capabilities (cross-domain)
  • Manage user accounts and permissions
  • Handle disputes and conflicts
  • Enforce community guidelines
  • Suspend or ban abusive users
  • Finalize publication status for sensitive content
  • Review and adjust risk tier assignments
  • Oversee audit system performance

Cannot:

  • Change core data model or architecture
  • Override technical system constraints
  • Make unilateral governance decisions without consensus

Maintainer

Who: Core team members responsible for the platform.

Can:

  • All Moderator capabilities
  • Change data model, architecture, and technical systems
  • Configure quality gates and AKEL parameters
  • Adjust audit sampling algorithms
  • Set and modify risk tier policies
  • Make platform-wide governance decisions
  • Access and modify backend systems
  • Deploy updates and fixes
  • Grant and revoke roles

Governance:

  • Maintainers operate under organizational governance rules
  • Major policy changes require Governing Team approval
  • Technical decisions made collaboratively
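
The roles above largely layer on top of one another (Contributor extends Reader, Expert extends Reviewer, Maintainer extends Moderator, and so on). A minimal sketch of that layering as a capability lookup: the capability strings are illustrative examples, the assumption that Reviewer extends Contributor is an interpretation, and Auditor is omitted because it is an assignment rather than a separate layer.

```python
# Illustrative role layering (capability strings are examples, not an exhaustive permission list).

ROLE_EXTENDS = {
    "Reader": None,
    "Contributor": "Reader",
    "Reviewer": "Contributor",   # assumption: Reviewers retain Contributor capabilities
    "Expert": "Reviewer",
    "Moderator": "Expert",
    "Maintainer": "Moderator",
}

OWN_CAPABILITIES = {
    "Reader": {"browse", "search", "flag_issue", "request_human_review"},
    "Contributor": {"submit_claim", "submit_evidence", "suggest_scenario"},
    "Reviewer": {"edit_content", "change_status", "approve_tier_b_c"},
    "Expert": {"approve_tier_a", "set_domain_standards"},
    "Moderator": {"manage_accounts", "handle_disputes", "adjust_risk_tiers"},
    "Maintainer": {"configure_quality_gates", "grant_roles", "deploy_updates"},
}

def capabilities(role: str) -> set[str]:
    """Collect a role's capabilities, including everything inherited from the roles it extends."""
    caps: set[str] = set()
    current = role
    while current is not None:
        caps |= OWN_CAPABILITIES[current]
        current = ROLE_EXTENDS[current]
    return caps
```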

Content Publication States

Mode 1: Draft

  • Not visible to public
  • Visible to contributor and reviewers
  • Can be edited by contributor or reviewers
  • Default state for failed quality gates

Mode 2: AI-Generated (Published)

  • Public and visible to all users
  • Clearly labeled as "AI-Generated, Awaiting Human Review"
  • Passed all automated quality gates
  • Risk tier displayed (A/B/C)
  • Users can:
    • Read and use content
    • Request human review
    • Flag for expert attention
  • Subject to sampling audits
  • Can be promoted to Mode 3 by reviewer/expert validation

Mode 3: Human-Reviewed (Published)

  • Public and visible to all users
  • Labeled as "Human-Reviewed" with reviewer/expert attribution
  • Passed quality gates + human validation
  • Highest trust level
  • For Tier A, requires Expert approval
  • For Tier B/C, Reviewer approval sufficient

Rejected

  • Not visible to public
  • Visible to contributor with rejection reason
  • Can be resubmitted after addressing issues
  • Rejection logged for transparency
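
A minimal sketch of the publication lifecycle as a state machine; the state names come from this page, while the transition map is one plausible reading of the workflow rather than a specification.

```python
# Illustrative publication-state machine (transitions are a reading of the workflow, not a specification).

ALLOWED_TRANSITIONS = {
    "draft":          {"ai_generated", "human_reviewed", "rejected"},  # Mode 1 leaves draft via gates/review
    "ai_generated":   {"human_reviewed"},                              # Mode 2 -> Mode 3 after validation
    "human_reviewed": set(),                                           # Mode 3 is the highest trust level
    "rejected":       {"draft"},                                       # resubmission after addressing issues
}

def transition(current: str, new: str) -> str:
    """Apply a status change, refusing transitions the workflow does not allow."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move content from {current!r} to {new!r}")
    return new
```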

Contribution Rules

All Contributors Must:

  • Provide sources for claims
  • Use clear, neutral language
  • Avoid personal attacks or insults
  • Respect intellectual property (cite sources)
  • Accept community feedback gracefully

AKEL (AI) Must:

  • Mark all outputs with `AuthorType = AI`
  • Pass quality gates before Mode 2 publication
  • Perform mandatory contradiction search
  • Disclose confidence levels and uncertainty
  • Provide traceable reasoning chains
  • Flag potential bubbles or echo chambers
  • Submit to audit sampling
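
A minimal sketch of the metadata an AKEL output record might carry to satisfy the obligations above; only the `AuthorType = AI` marker is taken directly from this page, all other field names are illustrative assumptions.

```python
# Illustrative AKEL output record (only AuthorType = "AI" is mandated verbatim; other fields are examples).
from dataclasses import dataclass, field

@dataclass
class AkelOutput:
    content: str
    author_type: str = "AI"                    # mandatory marker for all AKEL outputs
    risk_tier: str = "C"                       # A / B / C, displayed in Mode 2
    confidence: float = 0.0                    # disclosed confidence level (0..1)
    uncertainty_notes: str = ""                # disclosed uncertainty
    reasoning_chain: list[str] = field(default_factory=list)      # traceable reasoning steps
    contradiction_search_done: bool = False    # mandatory contradiction search performed
    echo_chamber_flags: list[str] = field(default_factory=list)   # potential bubbles or echo chambers flagged

    def passes_basic_gates(self) -> bool:
        """Toy check: required disclosures present before Mode 2 publication."""
        return (
            self.author_type == "AI"
            and self.contradiction_search_done
            and bool(self.reasoning_chain)
        )
```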

Reviewers Must:

  • Be impartial and evidence-based
  • Document reasoning for decisions
  • Escalate to experts when appropriate
  • Participate in audits when assigned
  • Provide constructive feedback

Experts Must:

  • Stay within domain expertise
  • Disclose conflicts of interest
  • Document specialized terminology
  • Provide reasoning for domain-specific decisions
  • Participate in Tier A audits

Quality Standards

Source Requirements

  • Primary sources preferred over secondary
  • Publication date and author must be identifiable
  • Sources must be accessible (prefer non-paywalled sources where possible)
  • Contradictory sources must be acknowledged
  • Echo chamber sources must be flagged

Claim Requirements

  • Falsifiable or evaluable
  • Clear definitions of key terms
  • Boundaries and scope stated
  • Assumptions made explicit
  • Uncertainty acknowledged

Evidence Requirements

  • Relevant to the claim and scenario
  • Reliability assessment provided
  • Methodology described (for studies)
  • Limitations noted
  • Conflicting evidence acknowledged
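
The source, claim, and evidence checklists above lend themselves to simple automated pre-checks before human review. A minimal sketch for the evidence checklist, with illustrative field names rather than a real schema:

```python
# Illustrative pre-check against the evidence checklist (field names are examples, not a real schema).
from dataclasses import dataclass

@dataclass
class Evidence:
    claim_id: str
    relevance_note: str            # how the evidence relates to the claim and scenario
    reliability_assessment: str
    methodology: str               # required for studies
    limitations: str
    conflicting_evidence: str      # empty string if none is known

def evidence_checklist_issues(ev: Evidence, is_study: bool) -> list[str]:
    """Return human-readable gaps; an empty list means the checklist is formally satisfied."""
    issues = []
    if not ev.relevance_note:
        issues.append("relevance to the claim and scenario not stated")
    if not ev.reliability_assessment:
        issues.append("no reliability assessment provided")
    if is_study and not ev.methodology:
        issues.append("study methodology not described")
    if not ev.limitations:
        issues.append("limitations not noted")
    return issues
```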

Risk Tier Assignment

  • Automated (AKEL): initial tier suggested based on domain, keywords, and impact
  • Human validation: Moderators or Experts can override AKEL suggestions
  • Review: risk tiers are periodically reviewed based on audit outcomes

Tier A Indicators:

  • Medical diagnosis or treatment advice
  • Legal interpretation or advice
  • Election or voting information
  • Safety or security sensitive
  • Major financial decisions
  • Potential for significant harm

Tier B Indicators:

  • Complex scientific causality
  • Contested policy domains
  • Historical interpretation with political implications
  • Significant economic impact claims

Tier C Indicators:

  • Established historical facts
  • Simple definitions
  • Well-documented scientific consensus
  • Basic reference information
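
A minimal sketch of how AKEL's initial tier suggestion could use the indicator lists above as keyword heuristics; the keywords and the Tier C default are illustrative, and any suggestion remains subject to Moderator or Expert override.

```python
# Illustrative initial risk-tier suggestion from keyword indicators (keywords are examples; humans can override).

TIER_A_KEYWORDS = {"diagnosis", "treatment", "legal advice", "election", "voting", "safety", "security"}
TIER_B_KEYWORDS = {"causes", "policy", "economic impact", "historical interpretation"}

def suggest_risk_tier(claim_text: str) -> str:
    """Suggest tier A, B, or C for a claim; Moderators or Experts can override the suggestion."""
    text = claim_text.lower()
    if any(keyword in text for keyword in TIER_A_KEYWORDS):
        return "A"
    if any(keyword in text for keyword in TIER_B_KEYWORDS):
        return "B"
    return "C"   # default: established facts, simple definitions, basic reference information
```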

Related Pages