Workflows
This page describes the core workflows for content creation, review, and publication in FactHarbor.
Overview
FactHarbor workflows support three publication modes with risk-based review:
- Mode 1 (Draft): Internal only; content that failed quality gates or is pending review
- Mode 2 (AI-Generated): Public with an AI-generated label; all quality gates passed
- Mode 3 (Human-Reviewed): Public with human-reviewed status; highest trust level
Workflows vary by Risk Tier (A/B/C) and Content Type (Claim, Scenario, Evidence, Verdict).
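For readers who prefer code, the sketch below models the publication modes, risk tiers, and content types as simple enums. All identifiers here (PublicationMode, RiskTier, ContentType) are hypothetical illustrations, not part of a published FactHarbor API.

```python
from enum import Enum

class PublicationMode(Enum):
    """Hypothetical encoding of the three publication modes."""
    DRAFT = 1           # internal only: failed gates or pending review
    AI_GENERATED = 2    # public, labeled as AI-generated, quality gates passed
    HUMAN_REVIEWED = 3  # public, human-reviewed, highest trust

class RiskTier(Enum):
    A = "A"  # high risk
    B = "B"  # medium risk
    C = "C"  # low risk

class ContentType(Enum):
    CLAIM = "claim"
    SCENARIO = "scenario"
    EVIDENCE = "evidence"
    VERDICT = "verdict"
```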
Claim Submission & Publication Workflow
Step 1: Claim Submission
Actor: Contributor or AKEL
Actions:
- Submit claim text
- Provide initial sources (optional for human contributors, mandatory for AKEL)
- System assigns initial AuthorType (Human or AI)
Output: Claim draft created
Step 2: AKEL Processing
Automated Steps:
1. Claim extraction and normalization
2. Classification (domain, type, evaluability)
3. Risk tier assignment (A/B/C suggested)
4. Initial scenario generation
5. Evidence search
6. Contradiction search (mandatory)
7. Quality gate validation
Output: Processed claim with risk tier and quality gate results
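A minimal sketch of this pipeline is shown below, assuming placeholder stage functions and a hypothetical ProcessedClaim container; the real AKEL components would replace the stubs.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessedClaim:
    """Hypothetical container for the output of AKEL processing."""
    text: str
    domain: str = ""
    suggested_tier: str = ""
    scenarios: list[str] = field(default_factory=list)
    evidence: list[dict] = field(default_factory=list)
    contradictions: list[dict] = field(default_factory=list)
    gate_results: dict[str, bool] = field(default_factory=dict)

# Placeholder stages; real AKEL components would replace these stubs.
def extract_and_normalize(text: str) -> str: return " ".join(text.split())
def classify(text: str) -> tuple[str, str]: return ("general", "C")
def generate_scenarios(text: str) -> list[str]: return [f"Baseline reading of: {text}"]
def search_evidence(text: str) -> list[dict]: return []
def search_contradictions(text: str) -> list[dict]: return []
def run_quality_gates(claim: ProcessedClaim) -> dict[str, bool]:
    return {"source_quality": bool(claim.evidence),
            "contradiction_search": True,
            "uncertainty_quantification": True,
            "structural_integrity": True}

def akel_process(raw_text: str) -> ProcessedClaim:
    """Run automated steps 1-7 in sequence."""
    claim = ProcessedClaim(text=extract_and_normalize(raw_text))  # 1. extraction & normalization
    claim.domain, claim.suggested_tier = classify(claim.text)     # 2-3. classification, risk tier suggestion
    claim.scenarios = generate_scenarios(claim.text)              # 4. initial scenario generation
    claim.evidence = search_evidence(claim.text)                  # 5. evidence search
    claim.contradictions = search_contradictions(claim.text)      # 6. contradiction search (mandatory)
    claim.gate_results = run_quality_gates(claim)                 # 7. quality gate validation
    return claim
```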
Step 3: Quality Gate Checkpoint
Gates Evaluated:
- Source quality
- Contradiction search completion
- Uncertainty quantification
- Structural integrity
Outcomes:
- All gates pass → Proceed to Mode 2 publication (if Tier B or C)
- Any gate fails → Mode 1 (Draft), flag for human review
- Tier A → Mode 2 with warnings + auto-escalate to expert queue
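The checkpoint outcomes can be read as a small routing rule. The function below is an illustrative sketch only; the integer mode codes and the escalation flag are assumptions.

```python
def route_after_gates(gate_results: dict[str, bool], tier: str) -> tuple[int, bool]:
    """Return (publication mode, escalate_to_expert) after the quality gate checkpoint."""
    if not all(gate_results.values()):
        return 1, True   # any gate fails -> Mode 1 (Draft), flagged for human review
    if tier == "A":
        return 2, True   # Tier A -> Mode 2 with warnings, auto-escalated to expert queue
    return 2, False      # Tier B/C, all gates pass -> Mode 2 publication
```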
Step 4: Publication (Risk-Tier Dependent)
Tier C (Low Risk):
- Direct to Mode 2: AI-generated, public, clearly labeled
- User can request human review
- Sampling audit applies
Tier B (Medium Risk):
- Direct to Mode 2: AI-generated, public, clearly labeled
- Higher audit sampling rate
- High-engagement content may auto-escalate
Tier A (High Risk):
- Mode 2 with warnings: AI-generated, public, prominent disclaimers
- Auto-escalated to expert review queue
- User warnings displayed
- Highest audit sampling rate
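The tier-dependent publication behavior could be captured as a small policy table. The dictionary below summarizes the descriptions above; the field names and values are assumptions, not a documented configuration format.

```python
# Illustrative per-tier publication policy summarizing the tier descriptions above.
TIER_PUBLICATION_POLICY = {
    "C": {"mode": 2, "warnings": False, "auto_escalate": False,
          "audit_sampling": "baseline", "note": "user can request human review"},
    "B": {"mode": 2, "warnings": False, "auto_escalate": "on high engagement",
          "audit_sampling": "higher", "note": "clearly labeled as AI-generated"},
    "A": {"mode": 2, "warnings": True, "auto_escalate": True,
          "audit_sampling": "highest", "note": "prominent disclaimers, expert review queue"},
}
```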
Step 5: Human Review (Optional for B/C, Escalated for A)
Triggers:
- User requests review
- Audit flags issues
- High engagement (Tier B)
- Automatic (Tier A)
Process:
1. Reviewer/Expert examines claim
2. Validates quality gates
3. Checks contradiction search results
4. Assesses risk tier appropriateness
5. Decision: Approve, Request Changes, or Reject
Outcomes:
- Approved → Mode 3 (Human-Reviewed)
- Changes Requested → Back to contributor or AKEL for revision
- Rejected → Rejected status with reasoning
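As a sketch, the three review outcomes map onto a simple decision handler. The state labels below are hypothetical.

```python
def apply_review_decision(decision: str) -> str:
    """Map a reviewer decision to the resulting content state (illustrative labels)."""
    outcomes = {
        "approve": "mode_3_human_reviewed",          # promoted to Mode 3
        "request_changes": "returned_for_revision",  # back to contributor or AKEL
        "reject": "rejected_with_reasoning",         # rejection must document its reasoning
    }
    if decision not in outcomes:
        raise ValueError(f"unknown review decision: {decision}")
    return outcomes[decision]
```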
Scenario Creation Workflow
Step 1: Scenario Generation
Automated (AKEL):
- Generate scenarios for claim
- Define boundaries, assumptions, context
- Identify evaluation methods
Manual (Expert/Reviewer):
- Create custom scenarios
- Refine AKEL-generated scenarios
- Add domain-specific nuances
Step 2: Scenario Validation
Quality Checks:
- Completeness (definitions, boundaries, assumptions clear)
- Relevance to claim
- Evaluability
- No circular logic
Risk Tier Assignment:
- Inherits from parent claim
- Can be overridden by expert if scenario increases/decreases risk
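A minimal sketch of the validation checks and tier inheritance follows, assuming a dictionary-shaped scenario draft; the field names are illustrative.

```python
def validate_scenario(scenario: dict) -> list[str]:
    """Return validation failures for a scenario draft (checks are illustrative)."""
    failures = []
    if not scenario.get("boundaries") or not scenario.get("assumptions"):
        failures.append("incomplete: boundaries or assumptions missing")
    if not scenario.get("relevant_to_claim", False):
        failures.append("relevance to the parent claim not established")
    if not scenario.get("evaluation_method"):
        failures.append("no evaluation method, so the scenario is not evaluable")
    if scenario.get("depends_on_own_verdict", False):
        failures.append("circular logic: scenario presupposes its own verdict")
    return failures

def scenario_risk_tier(parent_tier: str, expert_override: str | None = None) -> str:
    """Scenarios inherit the parent claim's tier unless an expert overrides it."""
    return expert_override or parent_tier
```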
Step 3: Scenario Publication
Mode 2 (AI-Generated):
- Tier B/C scenarios can publish immediately
- Subject to sampling audits
Mode 1 (Draft):
- Tier A scenarios default to draft
- Require expert validation for Mode 2 or Mode 3
Evidence Evaluation Workflow
Step 1: Evidence Search & Retrieval
AKEL Actions:
- Search academic databases, reputable media
- Mandatory contradiction search (counter-evidence, reservations)
- Extract metadata (author, date, publication, methodology)
- Assess source reliability
Quality Requirements:
- Primary sources preferred
- Diverse perspectives included
- Echo chambers flagged
- Conflicting evidence acknowledged
Step 2: Evidence Summarization
AKEL Generates:
- Summary of evidence
- Relevance assessment
- Reliability score
- Limitations and caveats
- Conflicting evidence summary
Quality Gates: structural integrity, source quality
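The summary fields listed above could be represented as a record like the one below; the dataclass and its field names are an assumed shape, not a FactHarbor schema.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceSummary:
    """Hypothetical shape of an AKEL-generated evidence summary."""
    summary: str                       # condensed statement of what the evidence shows
    relevance: str                     # how the evidence bears on the claim or scenario
    reliability_score: float           # e.g. a 0.0-1.0 source reliability estimate
    limitations: list[str] = field(default_factory=list)            # caveats, methodological limits
    conflicting_evidence: list[str] = field(default_factory=list)   # findings from the contradiction search
```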
Step 3: Evidence Review
Reviewer/Expert Validates:
- Accuracy of summaries
- Appropriateness of sources
- Completeness of contradiction search
- Reliability assessments
Outcomes:
- Mode 2: Evidence summaries published as AI-generated
- Mode 3: After human validation
- Mode 1: Failed quality checks or pending expert review
Verdict Generation Workflow
Step 1: Verdict Computation
AKEL Computes:
- Verdict across scenarios
- Confidence scores
- Uncertainty quantification
- Key assumptions
- Limitations
Inputs:
- Claim text
- Scenario definitions
- Evidence assessments
- Contradiction search results
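The sketch below shows how these inputs might be aggregated into per-scenario verdicts with confidence and uncertainty. It is purely illustrative: a real computation would weigh evidence quality rather than count supporting versus contradicting items, and every field name here is an assumption.

```python
def compute_verdict(claim_text: str,
                    scenarios: list[dict],
                    evidence: list[dict],
                    contradictions: list[dict]) -> dict:
    """Aggregate inputs into per-scenario verdicts with confidence and uncertainty."""
    per_scenario = {}
    for scenario in scenarios:
        support = sum(1 for e in evidence if scenario["id"] in e.get("supports", []))
        counter = sum(1 for c in contradictions if scenario["id"] in c.get("disputes", []))
        total = support + counter
        confidence = support / total if total else 0.0
        per_scenario[scenario["id"]] = {
            "label": "supported" if confidence > 0.5 else "contested",
            "confidence": round(confidence, 2),
            "uncertainty": "high" if total < 3 else "moderate",  # crude proxy for evidence volume
        }
    return {"claim": claim_text, "per_scenario": per_scenario}
```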
Step 2: Verdict Validation
Quality Gates:
- All four gates apply (source, contradiction, uncertainty, structure)
- Reasoning chain must be traceable
- Assumptions must be explicit
Risk Tier Check:
- Tier A: Always requires expert validation for Mode 3
- Tier B: Mode 2 allowed, audit sampling
- Tier C: Mode 2 default
Step 3: Verdict Publication
Mode 2 (AI-Generated Verdict):
- Clear labeling with confidence scores
- Uncertainty disclosure
- Links to reasoning trail
- User can request expert review
Mode 3 (Expert-Validated Verdict):
- Human reviewer/expert stamp
- Additional commentary (optional)
- Highest trust level
Audit Workflow
Step 1: Audit Sampling Selection
Stratified Sampling:
- Risk tier priority (A > B > C)
- Low confidence scores
- High traffic content
- Novel topics
- User flags
Sampling Rates (Recommendations):
- Tier A: 30-50%
- Tier B: 10-20%
- Tier C: 5-10%
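A stratified selection pass might look like the sketch below. The rates are the midpoints of the recommended ranges above, and the priority-signal fields (user_flagged, confidence) are assumptions.

```python
import random

# Midpoints of the recommended ranges; actual rates are a policy decision.
SAMPLING_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_audit_sample(items: list[dict], seed: int | None = None) -> list[dict]:
    """Stratified sampling sketch: tier-weighted random selection,
    with priority signals (user flags, low confidence) always included."""
    rng = random.Random(seed)
    selected = []
    for item in items:
        if item.get("user_flagged") or item.get("confidence", 1.0) < 0.5:
            selected.append(item)        # priority signals bypass random sampling
            continue
        rate = SAMPLING_RATE.get(item.get("tier", "C"), 0.05)
        if rng.random() < rate:
            selected.append(item)        # tier-weighted random selection
    return selected
```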
Step 2: Audit Execution
Auditor Actions:
1. Review sampled AI-generated content
2. Validate quality gates were properly applied
3. Check contradiction search completeness
4. Assess reasoning quality
5. Identify errors or hallucinations
Audit Outcome:
- Pass: Content remains in Mode 2, logged as validated
- Fail: Content flagged for review, system improvement triggered
Step 3: Feedback Loop
System Improvements:
- Failed audits analyzed for patterns
- AKEL parameters adjusted
- Quality gates refined
- Risk tier assignments recalibrated
Transparency:
- Audit statistics published periodically
- Patterns shared with community
- System improvements documented
Mode Transition Workflow
Mode 1 → Mode 2
Requirements:
- All quality gates pass
- Risk tier B or C (or A with warnings)
- Contradiction search completed
Trigger: Automatic upon quality gate validation
Mode 2 → Mode 3
Requirements:
- Human reviewer/expert validation
- Quality standards confirmed
- For Tier A: Expert approval required
- For Tier B/C: Reviewer approval sufficient
Trigger: Human review completion
Mode 3 → Mode 1 (Demotion)
Rare; occurs only if:
- New evidence contradicts verdict
- Error discovered in reasoning
- Source retraction
Process:
1. Content flagged for re-evaluation
2. Moved to draft (Mode 1)
3. Re-processed through workflow
4. Reason for demotion documented
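The three transitions above can be summarized as a small state table. The dictionary and helper below are an illustrative summary of the requirements in this section, not a product specification.

```python
# Mode transitions described in this workflow, keyed by (from_mode, to_mode).
ALLOWED_TRANSITIONS = {
    (1, 2): "all quality gates pass; Tier B/C, or Tier A with warnings",
    (2, 3): "human reviewer/expert validation (expert required for Tier A)",
    (3, 1): "demotion: new evidence, reasoning error, or source retraction",
}

def transition(current_mode: int, target_mode: int, reason: str) -> dict:
    """Validate and record a mode transition; raise if it is not part of the workflow."""
    if (current_mode, target_mode) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"transition {current_mode} -> {target_mode} is not part of the workflow")
    return {"from": current_mode, "to": target_mode,
            "required": ALLOWED_TRANSITIONS[(current_mode, target_mode)],
            "reason": reason}  # demotions in particular must document their reason
```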
User Actions Across Modes
On Mode 1 (Draft) Content
Contributors:
- Edit their own drafts
- Submit for review
Reviewers/Experts:
- View and comment
- Request changes
- Approve for Mode 2 or Mode 3
On Mode 2 (AI-Generated) Content
All Users:
- Read and use content
- Request human review
- Flag for expert attention
- Provide feedback
Reviewers/Experts:
- Validate for Mode 3 transition
- Edit and refine
- Adjust risk tier if needed
On Mode 3 (Human-Reviewed) Content
All Users:
- Read with highest confidence
- Can still flag content if new evidence emerges
Reviewers/Experts:
- Update if needed
- Trigger re-evaluation if new evidence