Version 1.1 by Robert Schaub on 2025/12/22 14:22

= POC2: Robust Quality & Reliability =

**Phase Goal:** Prove AKEL produces high-quality outputs consistently at scale

**Success Metric:** <5% hallucination rate, all 4 quality gates operational

== 1. Overview ==

POC2 extends POC1 by implementing the full quality assurance framework (all 4 gates), adding evidence deduplication, and processing significantly more test articles to validate system reliability at scale.

**Key Innovation:** Complete quality validation pipeline catches all categories of errors

**What We're Proving:**
* All 4 quality gates work together effectively
* Evidence deduplication prevents artificial inflation
* System maintains quality at larger scale
* Quality metrics dashboard provides actionable insights

== 2. New Requirements ==

=== 2.1 NFR11: Complete Quality Assurance Framework ===

**Add Gates 2 & 3** (POC1 had only Gates 1 & 4)

==== Gate 2: Evidence Relevance Validation ====

**Purpose:** Ensure AI-linked evidence actually relates to the claim

**Validation Checks:**
1. **Semantic Similarity:** Cosine similarity between claim and evidence embeddings ≥ 0.6
2. **Entity Overlap:** At least 1 shared named entity between claim and evidence
3. **Topic Relevance:** Evidence discusses the claim's subject matter (score ≥ 0.5)

**Action if Failed:**
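As a minimal illustration of checks 1 and 2, the sketch below uses bag-of-words cosine similarity as a stand-in for real embedding similarity and capitalized tokens as a stand-in for real named-entity recognition; all function names and details are assumptions for illustration, not AKEL's actual implementation:

```python
import math
import re
from collections import Counter

def _tokens(text: str) -> list[str]:
    return re.findall(r"[a-z0-9']+", text.lower())

def similarity(a: str, b: str) -> float:
    # Bag-of-words cosine similarity; a real deployment would compare
    # sentence-embedding vectors instead.
    va, vb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def entities(text: str) -> set[str]:
    # Crude stand-in for real NER: capitalized tokens (over-matches
    # sentence-initial words, but enough to illustrate the overlap check).
    return set(re.findall(r"\b[A-Z][A-Za-z]+\b", text))

def passes_gate2(claim: str, evidence: str) -> bool:
    # Check 1: similarity >= 0.6; Check 2: at least one shared entity.
    # Check 3 (topic relevance) is omitted: it would need a topic model.
    return (similarity(claim, evidence) >= 0.6
            and bool(entities(claim) & entities(evidence)))
```

A production gate would substitute sentence embeddings and a proper NER pipeline for the two stand-ins above; only the gate logic (all checks must pass or the evidence is discarded) carries over.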
* Discard irrelevant evidence (don't count it)
* If <2 relevant evidence items remain → "Insufficient Evidence" verdict
* Log discarded evidence for quality review

**Target:** 0% of evidence cited is off-topic

==== Gate 3: Scenario Coherence Check ====

**Purpose:** Validate scenarios are logical, complete, and meaningfully different

**Validation Checks:**
1. **Completeness:** All required fields populated (assumptions, scope, evidence context)
2. **Internal Consistency:** Assumptions don't contradict each other (contradiction score <0.3)
3. **Distinctiveness:** Scenarios are meaningfully different (pairwise similarity <0.8)
4. **Minimum Detail:** At least 1 specific assumption per scenario

**Action if Failed:**
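The completeness and distinctiveness checks above can be sketched with the standard library; the field names, the scope-based comparison, and the character-level similarity measure are all assumptions for illustration:

```python
import difflib

# Assumed scenario schema; the real field set is defined by AKEL.
REQUIRED_FIELDS = ("assumptions", "scope", "evidence_context")

def is_complete(scenario: dict) -> bool:
    # Check 1: all required fields populated.
    # Check 4: at least one specific assumption.
    return (all(scenario.get(f) for f in REQUIRED_FIELDS)
            and len(scenario.get("assumptions", [])) >= 1)

def merge_duplicates(scenarios: list[dict], threshold: float = 0.8) -> list[dict]:
    # Check 3: drop scenarios whose scope text is >= `threshold` similar
    # to an already-kept scenario (keyed on scope text for brevity).
    kept: list[dict] = []
    for s in scenarios:
        if all(difflib.SequenceMatcher(None, s["scope"], k["scope"]).ratio()
               < threshold for k in kept):
            kept.append(s)
    return kept
```

A real implementation would likely compare scenarios with embedding similarity rather than character-level ratios, and check 2 (contradiction scoring between assumptions) would need an NLI-style model, so it is not sketched here.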
* Merge duplicate scenarios
* Flag contradictory assumptions for review
* Reduce confidence score by 20%
* Do not publish if <2 distinct scenarios

**Target:** 0% duplicate scenarios, all scenarios internally consistent

=== 2.2 FR54: Evidence Deduplication (NEW) ===

**Priority:** HIGH

**Fulfills:** Accurate evidence counting, prevents artificial inflation

**Purpose:** Prevent counting the same evidence multiple times when cited by different sources

**Problem:**
* Wire services (AP, Reuters) redistribute same content
* Different sites cite the same original study
* Aggregators copy primary sources
* AKEL might count this as "5 sources" when it's really 1

**Solution: Content Fingerprinting**
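A minimal sketch of such fingerprinting, combining an exact SHA-256 match over normalized text with a fuzzy `difflib` pass for near-duplicates at the 85% threshold (the class and field names are assumptions, not AKEL's API):

```python
import difflib
import hashlib
import re

def normalize(text: str) -> str:
    # Lowercase, strip punctuation, collapse whitespace.
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def fingerprint(text: str) -> str:
    # SHA-256 of the normalized text catches exact redistributed copies.
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

class EvidenceStore:
    """Tracks unique evidence and which sources cited each piece."""

    def __init__(self, near_dup_threshold: float = 0.85):
        self.threshold = near_dup_threshold
        self.items: dict[str, dict] = {}  # fingerprint -> {"text", "sources"}

    def add(self, text: str, source: str) -> str:
        fp = fingerprint(text)
        if fp not in self.items:
            # Fuzzy pass: fold near-duplicates into the existing entry.
            norm = normalize(text)
            for existing_fp, item in self.items.items():
                ratio = difflib.SequenceMatcher(
                    None, norm, normalize(item["text"])).ratio()
                if ratio >= self.threshold:
                    fp = existing_fp
                    break
        entry = self.items.setdefault(fp, {"text": text, "sources": []})
        entry["sources"].append(source)  # provenance chain
        return fp

    def unique_count(self) -> int:
        return len(self.items)
```

Exact duplicates resolve in O(1) via the hash; the fuzzy pass shown here is O(n) per insert, so a system at scale would pre-filter candidates with shingling or MinHash before running pairwise comparisons.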
* Generate SHA-256 hash of normalized text
* Detect near-duplicates (≥85% similarity) using fuzzy matching
* Track which sources cited each unique piece of evidence
* Display provenance chain to user

**Target:** Duplicate detection >95% accurate, evidence counts reflect reality

=== 2.3 NFR13: Quality Metrics Dashboard (Internal) ===

**Priority:** HIGH

**Fulfills:** Real-time quality monitoring during development

**Dashboard Metrics:**
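One way per-gate pass/fail rates could be collected and exported for such a dashboard, as a stdlib sketch (class and method names are assumptions):

```python
from collections import defaultdict

class GateMetrics:
    """Per-gate pass/fail counters feeding the internal dashboard."""

    def __init__(self):
        self.counts = defaultdict(lambda: {"pass": 0, "fail": 0})

    def record(self, gate: str, passed: bool) -> None:
        self.counts[gate]["pass" if passed else "fail"] += 1

    def pass_rate(self, gate: str) -> float:
        c = self.counts[gate]
        total = c["pass"] + c["fail"]
        return c["pass"] / total if total else 0.0

    def export(self) -> dict:
        # Snapshot suitable for JSON export / dashboard display.
        return {g: {**c, "pass_rate": self.pass_rate(g)}
                for g, c in self.counts.items()}
```

The same counter pattern extends to hallucination-rate tracking and processing performance; the export format here is illustrative only.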
* Claim processing statistics
* Gate performance (pass/fail rates for each gate)
* Evidence quality metrics
* Hallucination rate tracking
* Processing performance

**Target:** Dashboard functional, all metrics tracked, exportable

== 3. Success Criteria ==

**✅ Quality:**
* Hallucination rate <5% (target: <3%)
* Average quality rating ≥8.0/10
* 0 critical failures (publishable falsities)
* Gates correctly identify >95% of low-quality outputs

**✅ All 4 Gates Operational:**
* Gate 1: Claim validation working
* Gate 2: Evidence relevance filtering working
* Gate 3: Scenario coherence checking working
* Gate 4: Verdict confidence assessment working

**✅ Evidence Deduplication:**
* Duplicate detection >95% accurate
* Evidence counts reflect reality
* Provenance tracked correctly

**✅ Metrics Dashboard:**
* All metrics implemented and tracking
* Dashboard functional and useful
* Alerts trigger appropriately

== 4. Architecture Notes ==

**POC2 Enhanced Architecture:**

{{code}}
Input → AKEL Processing → All 4 Quality Gates → Display (claims + scenarios + verdicts)
                                │
                                ├─ Gate 1: Claim validation + evidence linking
                                ├─ Gate 2: Evidence relevance
                                ├─ Gate 3: Scenario coherence
                                └─ Gate 4: Verdict confidence
{{/code}}

**Key Additions from POC1:**

* Scenario generation component
* Evidence deduplication system
* Gates 2 & 3 implementation
* Quality metrics collection

**Still Simplified vs. Full System:**
* Single AKEL orchestration (not multi-component pipeline)
* No review queue
* No federation architecture

**See:** [[Architecture>>Test.FactHarbor.Specification.Architecture.WebHome]] for details

== Related Pages ==

* [[POC1>>Test.FactHarbor.Roadmap.POC1.WebHome]] - Previous phase
* [[Beta 0>>Test.FactHarbor.Roadmap.Beta0.WebHome]] - Next phase
* [[Roadmap Overview>>Test.FactHarbor.Roadmap.WebHome]]
* [[Architecture>>Test.FactHarbor.Specification.Architecture.WebHome]]

**Document Status:** ✅ POC2 Specification Complete - Waiting for POC1 Completion

**Version:** V0.9.70