Last modified by Robert Schaub on 2025/12/22 14:38

= POC2: Robust Quality & Reliability =
**Phase Goal:** Prove that AKEL produces high-quality outputs consistently at scale

**Success Metric:** <5% hallucination rate, all 4 quality gates operational

== 1. Overview ==

POC2 extends POC1 by implementing the full quality assurance framework (all 4 gates), adding evidence deduplication, and processing significantly more test articles to validate system reliability at scale.

**Key Innovation:** A complete quality validation pipeline catches all categories of errors

**What We're Proving:**
* All 4 quality gates work together effectively
* Evidence deduplication prevents artificial inflation
* The system maintains quality at larger scale
* Quality metrics dashboard provides actionable insights

== 2. New Requirements ==

=== 2.1 NFR11: Complete Quality Assurance Framework ===

**Add Gates 2 & 3** (POC1 implemented only Gates 1 & 4)

==== Gate 2: Evidence Relevance Validation ====

**Purpose:** Ensure AI-linked evidence actually relates to the claim

**Validation Checks:**
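The relevance checks enumerated below lend themselves to a short sketch. This is illustrative only and not AKEL's actual API: it assumes embeddings and named entities are computed upstream, and the topic-relevance score (check 3) is omitted because it depends on a topic model.

```python
import math

# Hypothetical evidence/claim shape: a dict with a precomputed embedding
# vector and a set of named entities extracted upstream.
SIM_THRESHOLD = 0.6   # minimum cosine similarity (check 1)
MIN_RELEVANT = 2      # fewer relevant items => "Insufficient Evidence"

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def passes_gate2(claim, evidence):
    """Check 1 (semantic similarity) and check 2 (entity overlap)."""
    similar = cosine(claim["embedding"], evidence["embedding"]) >= SIM_THRESHOLD
    shares_entity = bool(claim["entities"] & evidence["entities"])
    return similar and shares_entity

def filter_evidence(claim, evidence_items):
    """Discard irrelevant evidence; signal when too little remains for a verdict."""
    relevant = [e for e in evidence_items if passes_gate2(claim, e)]
    return relevant, len(relevant) >= MIN_RELEVANT
```

Discarded items would additionally be logged for quality review, per the "Action if Failed" rules below.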
1. **Semantic Similarity:** Cosine similarity between claim and evidence embeddings ≥ 0.6
2. **Entity Overlap:** At least 1 shared named entity between claim and evidence
3. **Topic Relevance:** Evidence discusses the claim's subject matter (score ≥ 0.5)

**Action if Failed:**
* Discard irrelevant evidence (don't count it)
* If <2 relevant evidence items remain → "Insufficient Evidence" verdict
* Log discarded evidence for quality review

**Target:** 0% of cited evidence is off-topic

==== Gate 3: Scenario Coherence Check ====

**Purpose:** Validate that scenarios are logical, complete, and meaningfully different

**Validation Checks:**
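The coherence checks enumerated below might be sketched as follows. This is a hypothetical illustration, not AKEL's API: it covers completeness, distinctiveness, and minimum detail, uses crude string similarity as a stand-in for embedding similarity, and omits the internal-consistency contradiction score (check 2), which would require an NLI model.

```python
from difflib import SequenceMatcher

REQUIRED_FIELDS = ("assumptions", "scope", "evidence_context")
MAX_SIMILARITY = 0.8  # scenarios at or above this are treated as duplicates

def is_complete(scenario):
    """Checks 1 and 4: all required fields populated, >=1 specific assumption."""
    return all(scenario.get(f) for f in REQUIRED_FIELDS) and len(scenario["assumptions"]) >= 1

def too_similar(a, b):
    """Check 3 stand-in: crude text similarity instead of embedding similarity."""
    return SequenceMatcher(None, a["scope"], b["scope"]).ratio() >= MAX_SIMILARITY

def check_scenarios(scenarios):
    """Merge near-duplicates; publishable only if >=2 distinct scenarios remain."""
    complete = [s for s in scenarios if is_complete(s)]
    distinct = []
    for s in complete:
        if not any(too_similar(s, kept) for kept in distinct):
            distinct.append(s)  # near-duplicates are merged away
    return len(distinct) >= 2, distinct
```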
1. **Completeness:** All required fields populated (assumptions, scope, evidence context)
2. **Internal Consistency:** Assumptions don't contradict each other (contradiction score <0.3)
3. **Distinctiveness:** Scenarios are meaningfully different (pairwise similarity <0.8)
4. **Minimum Detail:** At least 1 specific assumption per scenario

**Action if Failed:**
* Merge duplicate scenarios
* Flag contradictory assumptions for review
* Reduce the confidence score by 20%
* Do not publish if <2 distinct scenarios remain

**Target:** 0% duplicate scenarios, all scenarios internally consistent

=== 2.2 FR54: Evidence Deduplication (NEW) ===

**Priority:** HIGH

**Fulfills:** Accurate evidence counting, prevents artificial inflation

**Purpose:** Prevent counting the same evidence multiple times when it is cited by different sources

**Problem:**
* Wire services (AP, Reuters) redistribute the same content
* Different sites cite the same original study
* Aggregators copy primary sources
* AKEL might count this as "5 sources" when it is really 1

**Solution: Content Fingerprinting**
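A minimal sketch of the fingerprinting approach detailed below, using only the standard library (SHA-256 for exact duplicates, `difflib` as a stand-in fuzzy matcher for the ≥85% near-duplicate threshold; class and method names are illustrative, not AKEL's actual API):

```python
import hashlib
import re
from difflib import SequenceMatcher

NEAR_DUP = 0.85  # near-duplicate similarity threshold

def normalize(text):
    """Lowercase and collapse whitespace/punctuation before hashing."""
    return re.sub(r"\W+", " ", text.lower()).strip()

def fingerprint(text):
    """SHA-256 over the normalized text: the exact-duplicate key."""
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

class EvidenceStore:
    """Tracks unique evidence and the provenance of every citing source."""
    def __init__(self):
        self.items = {}  # fingerprint -> {"text": ..., "sources": [...]}

    def add(self, text, source):
        fp = fingerprint(text)
        if fp not in self.items:
            # Fuzzy pass: fold near-duplicates into an existing item.
            norm = normalize(text)
            for existing_fp, item in self.items.items():
                if SequenceMatcher(None, norm, normalize(item["text"])).ratio() >= NEAR_DUP:
                    fp = existing_fp
                    break
            else:
                self.items[fp] = {"text": text, "sources": []}
        self.items[fp]["sources"].append(source)  # provenance chain

    def unique_count(self):
        return len(self.items)
```

Pairwise fuzzy comparison is O(n²) in the number of stored items; at scale a real implementation would more likely use MinHash/LSH or simhash than `difflib`.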
* Generate a SHA-256 hash of the normalized text
* Detect near-duplicates (≥85% similarity) using fuzzy matching
* Track which sources cited each unique piece of evidence
* Display the provenance chain to the user

**Target:** Duplicate detection >95% accurate, evidence counts reflect reality

=== 2.3 NFR13: Quality Metrics Dashboard (Internal) ===

**Priority:** HIGH

**Fulfills:** Real-time quality monitoring during development

**Dashboard Metrics:**
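A minimal collector for the metrics listed below might look like this (hypothetical names, standard library only; the exportability requirement is modeled as a JSON snapshot — the real dashboard would presumably sit on top of something similar):

```python
from collections import Counter
import json

class QualityMetrics:
    """Minimal in-memory collector behind an internal quality dashboard."""
    def __init__(self):
        self.gate_results = {g: Counter() for g in ("gate1", "gate2", "gate3", "gate4")}
        self.claims_processed = 0
        self.hallucinations = 0

    def record_gate(self, gate, passed):
        """Pass/fail tallies per gate."""
        self.gate_results[gate]["pass" if passed else "fail"] += 1

    def record_claim(self, hallucinated=False):
        self.claims_processed += 1
        self.hallucinations += int(hallucinated)

    def hallucination_rate(self):
        return self.hallucinations / self.claims_processed if self.claims_processed else 0.0

    def export(self):
        """Exportable snapshot of all tracked metrics."""
        return json.dumps({
            "claims": self.claims_processed,
            "hallucination_rate": self.hallucination_rate(),
            "gates": {g: dict(c) for g, c in self.gate_results.items()},
        })
```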
* Claim processing statistics
* Gate performance (pass/fail rates for each gate)
* Evidence quality metrics
* Hallucination rate tracking
* Processing performance

**Target:** Dashboard functional, all metrics tracked and exportable

== 3. Success Criteria ==

**✅ Quality:**
* Hallucination rate <5% (target: <3%)
* Average quality rating ≥8.0/10
* 0 critical failures (publishable falsities)
* Gates correctly identify >95% of low-quality outputs

**✅ All 4 Gates Operational:**
* Gate 1: Claim validation working
* Gate 2: Evidence relevance filtering working
* Gate 3: Scenario coherence checking working
* Gate 4: Verdict confidence assessment working

**✅ Evidence Deduplication:**
* Duplicate detection >95% accurate
* Evidence counts reflect reality
* Provenance tracked correctly

**✅ Metrics Dashboard:**
* All metrics implemented and tracking
* Dashboard functional and useful
* Alerts trigger appropriately

== 4. Architecture Notes ==

**POC2 Enhanced Architecture:**

{{code}}
Input → AKEL Processing → All 4 Quality Gates → Display (claims + scenarios + verdicts)
                              1: Claim validation + evidence linking
                              2: Evidence relevance
                              3: Scenario coherence
                              4: Verdict confidence
{{/code}}

**Key Additions from POC1:**
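The gate sequence in the diagram above could be chained with a simple driver; this is an illustrative sketch, not AKEL's actual orchestration code, and assumes each gate is a callable returning a pass flag plus a possibly modified item:

```python
def run_gates(item, gates):
    """Run the quality gates in order; an item is publishable only if all pass.

    Each gate is a callable: item -> (passed, item). On the first failure
    the item stops short of publication (it would be logged for review).
    """
    for gate in gates:
        passed, item = gate(item)
        if not passed:
            return False, item  # stop early; do not publish
    return True, item
```

Failed items would be routed to logging and quality review rather than publication, matching the "Action if Failed" rules in section 2.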
* Scenario generation component
* Evidence deduplication system
* Gates 2 & 3 implementation
* Quality metrics collection

**Still Simplified vs. Full System:**
* Single AKEL orchestration (not a multi-component pipeline)
* No review queue
* No federation architecture

**See:** [[Architecture>>Test.FactHarbor pre13 V0\.9\.70.Specification.Architecture.WebHome]] for details

== Related Pages ==

* [[POC1>>Test.FactHarbor pre13 V0\.9\.70.Roadmap.POC1.WebHome]] - Previous phase
* [[Beta 0>>Test.FactHarbor pre13 V0\.9\.70.Roadmap.Beta0.WebHome]] - Next phase
* [[Roadmap Overview>>Test.FactHarbor pre13 V0\.9\.70.Roadmap.WebHome]]
* [[Architecture>>Test.FactHarbor pre13 V0\.9\.70.Specification.Architecture.WebHome]]

**Document Status:** ✅ POC2 Specification Complete - Waiting for POC1 Completion

**Version:** V0.9.70