POC1 API & Schemas Specification

Version History

Version | Date       | Changes
0.4.1   | 2025-12-24 | Applied 9 critical fixes: file format notice, verdict taxonomy, canonicalization algorithm, Stage 1 cost policy, BullMQ fix, language in cache key, historical claims TTL, idempotency, copyright policy
0.4     | 2025-12-24 | BREAKING: 3-stage pipeline with claim-level caching, user tier system, cache-only mode for free users, Redis cache architecture
0.3.1   | 2025-12-24 | Fixed single-prompt strategy, SSE clarification, schema canonicalization, cost constraints
0.3     | 2025-12-24 | Added complete API endpoints, LLM config, risk tiers, scraping details

1. Core Objective (POC1)

The primary technical goal of POC1 is to validate Approach 1 (Single-Pass Holistic Analysis) while implementing claim-level caching to achieve cost sustainability.

The system must prove that AI can identify an article's Main Thesis and determine if supporting claims logically support that thesis without committing fallacies.

Success Criteria:

  • Test with 30 diverse articles
  • Target: ≥70% accuracy detecting misleading articles
  • Cost: <$0.25 per NEW analysis (uncached)
  • Cost: $0.00 for cached claim reuse
  • Cache hit rate: ≥50% after 1,000 articles
  • Processing time: <2 minutes (standard depth)

Economic Model:

  • Free tier: $10 credit per month (~40-140 articles depending on cache hits)
  • After limit: Cache-only mode (instant, free access to cached claims)
  • Paid tier: Unlimited new analyses

2. Architecture Overview

2.1 3-Stage Pipeline with Caching

FactHarbor POC1 uses a 3-stage architecture designed for claim-level caching and cost efficiency:

graph TD
 A[Article Input] --> B[Stage 1: Extract Claims]
 B --> C{For Each Claim}
 C --> D[Check Cache]
 D -->|Cache HIT| E[Return Cached Verdict]
 D -->|Cache MISS| F[Stage 2: Analyze Claim]
 F --> G[Store in Cache]
 G --> E
 E --> H[Stage 3: Holistic Assessment]
 H --> I[Final Report]

Stage 1: Claim Extraction (Haiku, no cache)

  • Input: Article text
  • Output: 5 canonical claims (normalized, deduplicated)
  • Model: Claude Haiku 4 (default, configurable via LLM abstraction layer)
  • Cost: $0.003 per article
  • Cache strategy: No caching (article-specific)

Stage 2: Claim Analysis (Sonnet, CACHED)

  • Input: Single canonical claim
  • Output: Scenarios + Evidence + Verdicts
  • Model: Claude Sonnet 3.5 (default, configurable via LLM abstraction layer)
  • Cost: $0.081 per NEW claim
  • Cache strategy: Redis, 90-day TTL
  • Cache key: claim:v1norm1:{language}:{sha256(canonical_claim)}

Stage 3: Holistic Assessment (Sonnet, no cache)

  • Input: Article + Claim verdicts (from cache or Stage 2)
  • Output: Article verdict + Fallacies + Logic quality
  • Model: Claude Sonnet 3.5 (default, configurable via LLM abstraction layer)
  • Cost: $0.030 per article
  • Cache strategy: No caching (article-specific)

Note: Stage 3 implements Approach 1 (Single-Pass Holistic Analysis) from the Article Verdict Problem. While claim analysis (Stage 2) is cached for efficiency, the holistic assessment maintains the integrated evaluation philosophy of Approach 1.

Total Cost Formula:

Cost = $0.003 (extraction) + (N_new_claims × $0.081) + $0.030 (holistic)

Examples:
- 0 new claims (100% cache hit): $0.033
- 1 new claim (80% cache hit): $0.114
- 3 new claims (40% cache hit): $0.276
- 5 new claims (0% cache hit): $0.438
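
Expressed as code, the formula is a one-liner. A minimal sketch (the constants are the POC1 default prices from the stage descriptions above; the helper name is illustrative):

// Per-article cost model using POC1 default prices (illustrative helper).
const STAGE1_EXTRACTION = 0.003    // Haiku, per article
const STAGE2_PER_NEW_CLAIM = 0.081 // Sonnet, per uncached claim
const STAGE3_HOLISTIC = 0.030      // Sonnet, per article

function estimateArticleCost(newClaims: number): number {
  return STAGE1_EXTRACTION + newClaims * STAGE2_PER_NEW_CLAIM + STAGE3_HOLISTIC
}

// estimateArticleCost(0) -> ~$0.033 (100% cache hit)
// estimateArticleCost(5) -> ~$0.438 (0% cache hit)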

2.2 User Tier System

Tier                | Monthly Credit | After Limit     | Cache Access       | Analytics
Free                | $10            | Cache-only mode | ✅ Full            | Basic
Pro (future)        | $50            | Continues       | ✅ Full            | Advanced
Enterprise (future) | Custom         | Continues       | ✅ Full + Priority | Full

Free Tier Economics:

  • $10 credit = 40-140 articles analyzed (depending on cache hit rate)
  • Average 70 articles/month at 70% cache hit rate
  • After limit: Cache-only mode

2.3 Cache-Only Mode (Free Tier Feature)

When free users reach their $10 monthly limit, they enter Cache-Only Mode:

What Cache-Only Mode Provides:

Claim Extraction (Platform-Funded):

  • Stage 1 extraction runs at $0.003 per article
  • Cost: Absorbed by the platform (not charged to user credit)
  • Rationale: Extraction is necessary to check the cache, and the cost is negligible
  • Rate limit: Max 50 extractions/day in cache-only mode (prevents abuse; see the sketch below)
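
A minimal sketch of how that daily cap could be enforced with a Redis counter; the key layout, limit constant, and helper name are assumptions for illustration, not part of the spec:

// Hypothetical daily rate limiter for cache-only extractions (key layout assumed).
import Redis from 'ioredis'

const redis = new Redis()
const DAILY_EXTRACTION_LIMIT = 50

async function allowExtraction(userId: string): Promise<boolean> {
  const today = new Date().toISOString().slice(0, 10) // e.g. "2025-12-24"
  const key = `ratelimit:extract:${userId}:${today}`
  const count = await redis.incr(key)
  if (count === 1) await redis.expire(key, 86400) // start the 24h window on first hit
  return count <= DAILY_EXTRACTION_LIMIT
}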

Instant Access to Cached Claims:

  • Any claim that exists in cache → Full verdict returned
  • Cost: $0 (no LLM calls)
  • Response time: <100ms

Partial Article Analysis:

  • Check each claim against cache
  • Return verdicts for ALL cached claims
  • For uncached claims: Return "status": "cache_miss"

Cache Coverage Report:

  • "3 of 5 claims available in cache (60% coverage)"
  • Links to cached analyses
  • Estimated cost to complete: $0.162 (2 new claims)

Not Available in Cache-Only Mode:

  • New claim analysis (Stage 2 LLM calls blocked)
  • Full holistic assessment (Stage 3 blocked if any claims missing)

User Experience Example:

{
  "status": "cache_only_mode",
  "message": "Monthly credit limit reached. Showing cached results only.",
  "cache_coverage": {
    "claims_total": 5,
    "claims_cached": 3,
    "claims_missing": 2,
    "coverage_percent": 60
  },
  "cached_claims": [
    {"claim_id": "C1", "verdict": "Likely", "confidence": 0.82},
    {"claim_id": "C2", "verdict": "Highly Likely", "confidence": 0.91},
    {"claim_id": "C4", "verdict": "Unclear", "confidence": 0.55}
  ],
  "missing_claims": [
    {"claim_id": "C3", "claim_text": "...", "estimated_cost": "$0.081"},
    {"claim_id": "C5", "claim_text": "...", "estimated_cost": "$0.081"}
  ],
  "upgrade_options": {
    "top_up": "$5 for 20-70 more articles",
    "pro_tier": "$50/month unlimited"
  }
}

Design Rationale:

  • Free users still get value (cached claims often answer their question)
  • Demonstrates FactHarbor's value (partial results encourage upgrade)
  • Sustainable for platform (no additional cost)
  • Fair to all users (everyone contributes to cache)

6. LLM Abstraction Layer

6.1 Design Principle

FactHarbor uses a provider-agnostic LLM abstraction layer to avoid vendor lock-in and to enable:

  • Provider switching: Change LLM providers without code changes
  • Cost optimization: Use different providers for different stages
  • Resilience: Automatic fallback if primary provider fails
  • Cross-checking: Compare outputs from multiple providers
  • A/B testing: Test new models without deployment changes

Implementation: All LLM calls go through an abstraction layer that routes to configured providers.


6.2 LLM Provider Interface

Abstract Interface:

interface LLMProvider {
  // Core methods
  complete(prompt: string, options: CompletionOptions): Promise<CompletionResponse>
  stream(prompt: string, options: CompletionOptions): AsyncIterator<StreamChunk>
  
  // Provider metadata
  getName(): string
  getMaxTokens(): number
  getCostPer1kTokens(): { input: number, output: number }
  
  // Health check
  isAvailable(): Promise<boolean>
}

interface CompletionOptions {
  model?: string
  maxTokens?: number
  temperature?: number
  stopSequences?: string[]
  systemPrompt?: string
}

6.3 Supported Providers (POC1)

Primary Provider (Default):

  • Anthropic Claude API
    • Models: Claude Haiku 4, Claude Sonnet 3.5, Claude Opus 4
    • Used by default in POC1
    • Best quality for holistic analysis

Secondary Providers (Future):

  • OpenAI API
    • Models: GPT-4o, GPT-4o-mini
    • For cost comparison
  • Google Vertex AI
    • Models: Gemini 1.5 Pro, Gemini 1.5 Flash
    • For diversity in evidence gathering
  • Local Models (Post-POC)
    • Models: Llama 3.1, Mistral
    • For privacy-sensitive deployments

6.4 Provider Configuration

Environment Variables:

# Primary provider
LLM_PRIMARY_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Fallback provider
LLM_FALLBACK_PROVIDER=openai
OPENAI_API_KEY=sk-...

# Provider selection per stage
LLM_STAGE1_PROVIDER=anthropic
LLM_STAGE1_MODEL=claude-haiku-4
LLM_STAGE2_PROVIDER=anthropic
LLM_STAGE2_MODEL=claude-sonnet-3-5
LLM_STAGE3_PROVIDER=anthropic
LLM_STAGE3_MODEL=claude-sonnet-3-5

# Cost limits
LLM_MAX_COST_PER_REQUEST=1.00
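
Resolving the per-stage settings from these variables might look like the following sketch (the helper name and fallback behavior are assumptions):

// Resolve provider/model for a pipeline stage from the environment variables above.
interface StageConfig {
  provider: string
  model: string
}

function getStageConfig(stage: 1 | 2 | 3): StageConfig {
  const provider =
    process.env[`LLM_STAGE${stage}_PROVIDER`] ??
    process.env.LLM_PRIMARY_PROVIDER ??
    'anthropic'
  const model = process.env[`LLM_STAGE${stage}_MODEL`]
  if (!model) throw new Error(`LLM_STAGE${stage}_MODEL is not configured`)
  return { provider, model }
}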

Database Configuration (Alternative):

{
  "providers": [
    {
      "name": "anthropic",
      "api_key_ref": "vault://anthropic-api-key",
      "enabled": true,
      "priority": 1
    },
    {
      "name": "openai",
      "api_key_ref": "vault://openai-api-key",
      "enabled": true,
      "priority": 2
    }
  ],
  "stage_config": {
    "stage1": {
      "provider": "anthropic",
      "model": "claude-haiku-4",
      "max_tokens": 4096,
      "temperature": 0.0
    },
    "stage2": {
      "provider": "anthropic",
      "model": "claude-sonnet-3-5",
      "max_tokens": 16384,
      "temperature": 0.3
    },
    "stage3": {
      "provider": "anthropic",
      "model": "claude-sonnet-3-5",
      "max_tokens": 8192,
      "temperature": 0.2
    }
  }
}

6.5 Stage-Specific Models (POC1 Defaults)

Stage 1: Claim Extraction

  • Default: Anthropic Claude Haiku 4
  • Alternative: OpenAI GPT-4o-mini, Google Gemini 1.5 Flash
  • Rationale: Fast, cheap, simple task
  • Cost: $0.003 per article

Stage 2: Claim Analysis (CACHEABLE)

  • Default: Anthropic Claude Sonnet 3.5
  • Alternative: OpenAI GPT-4o, Google Gemini 1.5 Pro
  • Rationale: High-quality analysis, cached 90 days
  • Cost: $0.081 per NEW claim

Stage 3: Holistic Assessment

  • Default: Anthropic Claude Sonnet 3.5
  • Alternative: OpenAI GPT-4o, Claude Opus 4 (for high-stakes)
  • Rationale: Complex reasoning, logical fallacy detection
  • Cost: $0.030 per article

Cost Comparison (Example):

Stage               | Anthropic (Default)        | OpenAI Alternative   | Google Alternative
Stage 1             | Claude Haiku 4 ($0.003)    | GPT-4o-mini ($0.002) | Gemini Flash ($0.002)
Stage 2             | Claude Sonnet 3.5 ($0.081) | GPT-4o ($0.045)      | Gemini Pro ($0.050)
Stage 3             | Claude Sonnet 3.5 ($0.030) | GPT-4o ($0.018)      | Gemini Pro ($0.020)
Total (1 new claim) | $0.114                     | $0.065               | $0.072

Note: POC1 uses Anthropic exclusively for consistency. Multi-provider support planned for POC2.


6.6 Failover Strategy

Automatic Failover:

async function completeLLM(stage: string, prompt: string, options: CompletionOptions = {}): Promise<string> {
  const primaryProvider = getProviderForStage(stage)
  const fallbackProvider = getFallbackProvider()

  try {
    return await primaryProvider.complete(prompt, options)
  } catch (error) {
    // Only fail over on transient errors; rethrow anything else (auth, bad request)
    if (error.type === 'rate_limit' || error.type === 'service_unavailable') {
      logger.warn(`Primary provider failed, using fallback`)
      return await fallbackProvider.complete(prompt, options)
    }
    throw error
  }
}

Fallback Priority:

  1. Primary: Configured provider for stage
  2. Secondary: Fallback provider (if configured)
  3. Cache: Return cached result (if available for Stage 2; see the sketch below)
  4. Error: Return 503 Service Unavailable
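
Step 3's cache fallback for Stage 2 could be layered on top of completeLLM like this (a sketch; the cache helpers are passed in as placeholders for the Redis layer described in Section 5):

// Sketch of steps 3-4: serve a cached verdict when both providers fail.
async function analyzeClaimWithFallback(
  claim: string,
  cacheGet: (key: string) => Promise<string | null>,  // placeholder for Section 5
  cacheKeyFor: (claim: string) => string,             // placeholder for Section 5
): Promise<string> {
  try {
    return await completeLLM('stage2', claim)         // steps 1-2 (primary + fallback)
  } catch (error) {
    const cached = await cacheGet(cacheKeyFor(claim))
    if (cached !== null) return cached                // step 3: cached result
    throw error                                       // step 4: surfaces as 503
  }
}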

6.7 Provider Selection API

Admin Endpoint: POST /admin/v1/llm/configure

Update provider for specific stage:

{
  "stage": "stage2",
  "provider": "openai",
  "model": "gpt-4o",
  "max_tokens": 16384,
  "temperature": 0.3
}

Response: 200 OK

{
  "message": "LLM configuration updated",
  "stage": "stage2",
  "previous": {
    "provider": "anthropic",
    "model": "claude-sonnet-3-5"
  },
  "current": {
    "provider": "openai",
    "model": "gpt-4o"
  },
  "cost_impact": {
    "previous_cost_per_claim": 0.081,
    "new_cost_per_claim": 0.045,
    "savings_percent": 44
  }
}

Get current configuration:

GET /admin/v1/llm/config

{
  "providers": ["anthropic", "openai"],
  "primary": "anthropic",
  "fallback": "openai",
  "stages": {
    "stage1": {
      "provider": "anthropic",
      "model": "claude-haiku-4",
      "cost_per_request": 0.003
    },
    "stage2": {
      "provider": "anthropic",
      "model": "claude-sonnet-3-5",
      "cost_per_new_claim": 0.081
    },
    "stage3": {
      "provider": "anthropic",
      "model": "claude-sonnet-3-5",
      "cost_per_request": 0.030
    }
  }
}

6.8 Implementation Notes

Provider Adapter Pattern:

class AnthropicProvider implements LLMProvider {
  async complete(prompt: string, options: CompletionOptions) {
    const response = await anthropic.messages.create({
      model: options.model || 'claude-sonnet-3-5',
      max_tokens: options.maxTokens || 4096,
      messages: [{ role: 'user', content: prompt }],
      system: options.systemPrompt
    })
    return response.content[0].text
  }
}

class OpenAIProvider implements LLMProvider {
  async complete(prompt: string, options: CompletionOptions) {
    const response = await openai.chat.completions.create({
      model: options.model || 'gpt-4o',
      max_tokens: options.maxTokens || 4096,
      messages: [
        { role: 'system', content: options.systemPrompt },
        { role: 'user', content: prompt }
      ]
    })
    return response.choices[0].message.content
  }
}

Provider Registry:

const providers = new Map<string, LLMProvider>()
providers.set('anthropic', new AnthropicProvider())
providers.set('openai', new OpenAIProvider())
providers.set('google', new GoogleProvider())

function getProvider(name: string): LLMProvider {
  const provider = providers.get(name) ?? providers.get(config.primaryProvider)
  if (!provider) throw new Error(`No provider registered: ${name}`)
  return provider
}

3. REST API Contract

3.1 User Credit Tracking

Endpoint: GET /v1/user/credit

Response: 200 OK

{
  "user_id": "user_abc123",
  "tier": "free",
  "credit_limit": 10.00,
  "credit_used": 7.42,
  "credit_remaining": 2.58,
  "reset_date": "2025-02-01T00:00:00Z",
  "cache_only_mode": false,
  "usage_stats": {
    "articles_analyzed": 67,
    "claims_from_cache": 189,
    "claims_newly_analyzed": 113,
    "cache_hit_rate": 0.626
  }
}

3.2 Create Analysis Job (3-Stage)

Endpoint: POST /v1/analyze

Idempotency Support:

To prevent duplicate job creation on network retries, clients SHOULD include:

POST /v1/analyze
Idempotency-Key: {client-generated-uuid}

OR use the client.request_id field:

{
  "input_url": "...",
  "client": {
    "request_id": "client-uuid-12345",
    "source_label": "optional"
  }
}

Server Behavior:

  • If Idempotency-Key or request_id seen before (within 24 hours):
    • Return existing job (200 OK, not 202 Accepted)
    • Do NOT create duplicate job or charge twice
  • Idempotency keys expire after 24 hours (matches job retention)
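
Server-side, the dedup check can be done atomically with Redis SET NX. A sketch (the key prefix is illustrative, and the small get-after-set race is ignored here):

// Sketch of idempotency handling (key prefix assumed; 24h TTL matches job retention).
import Redis from 'ioredis'

const redis = new Redis()
const IDEMPOTENCY_TTL_SECONDS = 24 * 60 * 60

async function resolveIdempotentJob(
  idempotencyKey: string,
  newJobId: string,
): Promise<{ jobId: string; idempotent: boolean }> {
  // SET ... NX EX stores the mapping only if the key has not been seen.
  const stored = await redis.set(
    `idem:${idempotencyKey}`, newJobId, 'EX', IDEMPOTENCY_TTL_SECONDS, 'NX')
  if (stored === 'OK') {
    return { jobId: newJobId, idempotent: false } // first request: create job, 202
  }
  const existingJobId = await redis.get(`idem:${idempotencyKey}`)
  return { jobId: existingJobId!, idempotent: true } // replay: return existing job, 200
}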

Example Response (Idempotent):

{
  "job_id": "01J...ULID",
  "status": "RUNNING",
  "idempotent": true,
  "original_request_at": "2025-12-24T10:31:00Z",
  "message": "Returning existing job (idempotency key matched)"
}

Request Body:

{
  "input_type": "url",
  "input_url": "https://example.com/medical-report-01",
  "input_text": null,
  "options": {
    "browsing": "on",
    "depth": "standard",
    "max_claims": 5,
    "scenarios_per_claim": 2,
    "max_evidence_per_scenario": 6,
    "context_aware_analysis": true
  },
  "client": {
    "request_id": "optional-client-tracking-id",
    "source_label": "optional"
  }
}

Options:

  • browsing: on | off (retrieve web sources or just output queries)
  • depth: standard | deep (evidence thoroughness)
  • max_claims: 1-10 (default: 5 for cost control)
  • scenarios_per_claim: 1-5 (default: 2 for cost control)
  • max_evidence_per_scenario: 3-10 (default: 6)
  • context_aware_analysis: true | false (experimental)
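
These options correspond to a small type. A sketch (only max_claims, scenarios_per_claim, and max_evidence_per_scenario have documented defaults, so only those appear in the defaults object):

// Request options as a type; defaults are only those stated above.
interface AnalyzeOptions {
  browsing: 'on' | 'off'
  depth: 'standard' | 'deep'
  max_claims: number                 // 1-10
  scenarios_per_claim: number        // 1-5
  max_evidence_per_scenario: number  // 3-10
  context_aware_analysis: boolean    // experimental
}

const DEFAULT_OPTIONS: Partial<AnalyzeOptions> = {
  max_claims: 5,
  scenarios_per_claim: 2,
  max_evidence_per_scenario: 6,
}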

Response: 202 Accepted

{
  "job_id": "01J...ULID",
  "status": "QUEUED",
  "created_at": "2025-12-24T10:31:00Z",
  "estimated_cost": 0.114,
  "cost_breakdown": {
    "stage1_extraction": 0.003,
    "stage2_new_claims": 0.081,
    "stage2_cached_claims": 0.000,
    "stage3_holistic": 0.030
  },
  "cache_info": {
    "claims_to_extract": 5,
    "estimated_cache_hits": 4,
    "estimated_new_claims": 1
  },
  "links": {
    "self": "/v1/jobs/01J...ULID",
    "result": "/v1/jobs/01J...ULID/result",
    "report": "/v1/jobs/01J...ULID/report",
    "events": "/v1/jobs/01J...ULID/events"
  }
}

Error Responses:

402 Payment Required - Free tier limit reached, cache-only mode

{
  "error": "credit_limit_reached",
  "message": "Monthly credit limit reached. Entering cache-only mode.",
  "cache_only_mode": true,
  "credit_remaining": 0.00,
  "reset_date": "2025-02-01T00:00:00Z",
  "action": "Resubmit with cache_preference=allow_partial for cached results"
}

4. Data Schemas

4.1 Stage 1 Output: ClaimExtraction

{
  "job_id": "01J...ULID",
  "stage": "stage1_extraction",
  "article_metadata": {
    "title": "Article title",
    "source_url": "https://example.com/article",
    "extracted_text_length": 5234,
    "language": "en"
  },
  "claims": [
    {
      "claim_id": "C1",
      "claim_text": "Original claim text from article",
      "canonical_claim": "Normalized, deduplicated phrasing",
      "claim_hash": "sha256:abc123...",
      "is_central_to_thesis": true,
      "claim_type": "causal",
      "evaluability": "evaluable",
      "risk_tier": "B",
      "domain": "public_health"
    }
  ],
  "article_thesis": "Main argument detected",
  "cost": 0.003
}

4.5 Verdict Label Taxonomy

FactHarbor uses three distinct verdict taxonomies depending on analysis level:

4.5.1 Scenario Verdict Labels (Stage 2)

Used for individual scenario verdicts within a claim.

Enum Values:

  • Highly Likely - Probability 0.85-1.0, high confidence
  • Likely - Probability 0.65-0.84, moderate-high confidence
  • Unclear - Probability 0.35-0.64, or low confidence
  • Unlikely - Probability 0.16-0.34, moderate-high confidence
  • Highly Unlikely - Probability 0.0-0.15, high confidence
  • Unsubstantiated - Insufficient evidence to determine probability
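
In code, the bands might map as follows. A sketch: treating a missing probability as Unsubstantiated and using 0.5 as the low-confidence cutoff are assumptions for illustration, not spec requirements:

// Map a scenario probability to a verdict label using the bands above.
type ScenarioVerdictLabel =
  | 'Highly Likely' | 'Likely' | 'Unclear'
  | 'Unlikely' | 'Highly Unlikely' | 'Unsubstantiated'

function scenarioVerdictLabel(
  probability: number | null,  // null = insufficient evidence (assumed convention)
  confidence: number,
): ScenarioVerdictLabel {
  if (probability === null) return 'Unsubstantiated'
  if (confidence < 0.5) return 'Unclear'  // "or low confidence" (cutoff assumed)
  if (probability >= 0.85) return 'Highly Likely'
  if (probability >= 0.65) return 'Likely'
  if (probability >= 0.35) return 'Unclear'
  if (probability >= 0.16) return 'Unlikely'
  return 'Highly Unlikely'
}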

4.5.2 Claim Verdict Labels (Rollup)

Used when summarizing a claim across all scenarios.

Enum Values:

  • Supported - Majority of scenarios are Likely or Highly Likely
  • Refuted - Majority of scenarios are Unlikely or Highly Unlikely
  • Inconclusive - Mixed scenarios or majority Unclear/Unsubstantiated

Mapping Logic:

  • If ≥60% scenarios are (Highly Likely | Likely) → Supported
  • If ≥60% scenarios are (Highly Unlikely | Unlikely) → Refuted
  • Otherwise → Inconclusive
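
The mapping logic as a function, a minimal sketch of the 60% thresholds above (the helper name is illustrative):

// Roll scenario verdicts up to a claim verdict (60% thresholds from above).
type ClaimVerdictLabel = 'Supported' | 'Refuted' | 'Inconclusive'

function rollupClaimVerdict(scenarioLabels: string[]): ClaimVerdictLabel {
  const total = scenarioLabels.length
  if (total === 0) return 'Inconclusive'
  const supported = scenarioLabels
    .filter(l => l === 'Highly Likely' || l === 'Likely').length
  const refuted = scenarioLabels
    .filter(l => l === 'Highly Unlikely' || l === 'Unlikely').length
  if (supported / total >= 0.6) return 'Supported'
  if (refuted / total >= 0.6) return 'Refuted'
  return 'Inconclusive'
}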

4.5.3 Article Verdict Labels (Stage 3)

Used for holistic article-level assessment.

Enum Values:

  • WELL-SUPPORTED - Article thesis logically follows from supported claims
  • MISLEADING - Claims may be true but article commits logical fallacies
  • REFUTED - Central claims are refuted, invalidating thesis
  • UNCERTAIN - Insufficient evidence or highly mixed claim verdicts

Note: Article verdict considers claim centrality (central claims override supporting claims).

4.5.4 API Field Mapping

Level    | API Field                                    | Enum Name
Scenario | scenarios[].verdict.label                    | scenario_verdict_label
Claim    | claims[].rollup_verdict (optional)           | claim_verdict_label
Article  | article_holistic_assessment.overall_verdict  | article_verdict_label

5. Cache Architecture

5.1 Redis Cache Design

Technology: Redis 7.0+ (in-memory key-value store)

Cache Key Schema:

claim:v1norm1:{language}:{sha256(canonical_claim)}

Example:

Claim (English): "COVID vaccines are 95% effective"
Canonical: "covid vaccines are 95 percent effective"
Language: "en"
SHA256: abc123...def456
Key: claim:v1norm1:en:abc123...def456

Rationale: Prevents cross-language collisions and enables per-language cache analytics.

Data Structure:

SET claim:v1norm1:en:abc123...def456 '{...ClaimAnalysis JSON...}'
EXPIRE claim:v1norm1:en:abc123...def456 7776000 # 90 days
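
A sketch of the read/write path in TypeScript (using ioredis; SET with EX combines the two commands above into one atomic call, and the helper names are illustrative):

// Stage 2 cache layer sketch: key derivation, lookup, and write with 90-day TTL.
import Redis from 'ioredis'
import { createHash } from 'crypto'

const redis = new Redis()
const CLAIM_TTL_SECONDS = 90 * 24 * 60 * 60 // 7,776,000

function claimCacheKey(canonicalClaim: string, language: string): string {
  const hash = createHash('sha256').update(canonicalClaim, 'utf8').digest('hex')
  return `claim:v1norm1:${language}:${hash}`
}

async function getCachedAnalysis(key: string): Promise<object | null> {
  const raw = await redis.get(key)
  return raw ? JSON.parse(raw) : null
}

async function putCachedAnalysis(key: string, analysis: object): Promise<void> {
  await redis.set(key, JSON.stringify(analysis), 'EX', CLAIM_TTL_SECONDS)
}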

5.1.1 Canonical Claim Normalization (v1)

The cache key depends on deterministic claim normalization. All implementations MUST follow this algorithm exactly.

Algorithm: Canonical Claim Normalization v1

def normalize_claim_v1(claim_text: str, language: str) -> str:
    """
    Normalizes claim to canonical form for cache key generation.
    Version: v1norm1 (POC1)
    """
    import re
    import unicodedata

    # Step 1: Unicode normalization (NFC)
    text = unicodedata.normalize('NFC', claim_text)

    # Step 2: Lowercase
    text = text.lower()

    # Step 3: Common abbreviations (English only in v1).
    # Runs before punctuation removal so 'u.s.' / 'u.k.' still contain dots.
    if language == 'en':
        text = text.replace('covid-19', 'covid')
        text = text.replace('u.s.', 'us')
        text = text.replace('u.k.', 'uk')

    # Step 4: Expand '%' BEFORE punctuation removal (it would otherwise be stripped)
    text = text.replace('%', ' percent')

    # Step 5: Remove punctuation (except hyphens in words)
    text = re.sub(r'[^\w\s-]', '', text)

    # Step 6: Normalize whitespace (collapse multiple spaces)
    text = re.sub(r'\s+', ' ', text).strip()

    # Step 7: Spell out single-digit numbers
    num_to_word = {'0': 'zero', '1': 'one', '2': 'two', '3': 'three',
                   '4': 'four', '5': 'five', '6': 'six', '7': 'seven',
                   '8': 'eight', '9': 'nine'}
    for num, word in num_to_word.items():
        text = re.sub(rf'\b{num}\b', word, text)

    # Step 8: NO entity normalization in v1
    # (Trump vs Donald Trump vs President Trump remain distinct)

    return text

# Version identifier (include in cache namespace)
CANONICALIZER_VERSION = "v1norm1"

Cache Key Formula (Updated):

language = "en"
canonical = normalize_claim_v1(claim_text, language)
cache_key = f"claim:{CANONICALIZER_VERSION}:{language}:{sha256(canonical)}"

Example:
 claim: "COVID-19 vaccines are 95% effective"
 canonical: "covid vaccines are 95 percent effective"
 sha256: abc123...def456
 key: "claim:v1norm1:en:abc123...def456"

Cache Metadata MUST Include:

{
  "canonical_claim": "covid vaccines are 95 percent effective",
  "canonicalizer_version": "v1norm1",
  "language": "en",
  "original_claim_samples": ["COVID-19 vaccines are 95% effective"]
}

Version Upgrade Path:

  • v1norm1 → v1norm2: Cache namespace changes, old keys remain valid until TTL
  • v1normN → v2norm1: Major version bump, invalidate all v1 caches

5.1.2 Copyright & Data Retention Policy

Evidence Excerpt Storage:

To comply with copyright law and fair use principles:

What We Store:

  • Metadata only: Title, author, publisher, URL, publication date
  • Short excerpts: Max 25 words per quote, max 3 quotes per evidence item
  • Summaries: AI-generated bullet points (not verbatim text)
  • No full articles: Never store complete article text beyond job processing

Total per Cached Claim:

  • Scenarios: 2 per claim
  • Evidence items: 6 per scenario (12 total)
  • Quotes: 3 per evidence × 25 words = 75 words per item
  • Maximum stored verbatim text: ~900 words per claim (12 × 75)
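
Enforcement of the quote limits could be a simple clamp before caching. A sketch (the helper name is illustrative, not part of the spec):

// Clamp evidence quotes to the limits above before caching.
const MAX_QUOTES_PER_EVIDENCE = 3
const MAX_WORDS_PER_QUOTE = 25

function clampQuotes(quotes: string[]): string[] {
  return quotes.slice(0, MAX_QUOTES_PER_EVIDENCE).map(quote => {
    const words = quote.split(/\s+/)
    return words.length <= MAX_WORDS_PER_QUOTE
      ? quote
      : words.slice(0, MAX_WORDS_PER_QUOTE).join(' ') + '…'
  })
}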

Retention:

  • Cache TTL: 90 days
  • Job outputs: 24 hours (then archived or deleted)
  • No persistent full-text article storage

Rationale:

  • Short excerpts for citation = fair use
  • Summaries are transformative (not copyrightable)
  • Limited retention (90 days max)
  • No commercial republication of excerpts

DMCA Compliance:

  • Cache invalidation endpoint available for rights holders
  • Contact: dmca@factharbor.org

Summary

This WYSIWYG preview shows the structure and key sections of the 1,515-line API specification.

Full specification includes:

  • Complete API endpoints (7 total)
  • All data schemas (ClaimExtraction, ClaimAnalysis, HolisticAssessment, Complete)
  • Quality gates & validation rules
  • LLM configuration for all 3 stages
  • Implementation notes with code samples
  • Testing strategy
  • Cross-references to other pages

The complete specification is available in:

  • FactHarbor_POC1_API_and_Schemas_Spec_v0_4_1_PATCHED.md (45 KB standalone)
  • Export files (TEST/PRODUCTION) for xWiki import