Changes for page Data Model
Last modified by Robert Schaub on 2026/02/08 08:27
Summary
Page properties (1 modified, 0 added, 0 removed)
Details
- Page properties
- Content
... ...
@@ -7,19 +7,19 @@
7 7 **Rationale**: Claims system is 95% reads, 5% writes. Denormalizing common data reduces joins and improves query performance by 70%.
8 8 **Additional cached fields in claims table**:
9 9 * **evidence_summary** (JSONB): Top 5 most relevant evidence snippets with scores
10 - * Avoids joining evidence table for listing/preview
11 - * Updated when evidence is added/removed
12 - * Format: `[{"text": "...", "source": "...", "relevance": 0.95}, ...]`
10 + * Avoids joining evidence table for listing/preview
11 + * Updated when evidence is added/removed
12 + * Format: `[{"text": "...", "source": "...", "relevance": 0.95}, ...]`
13 13 * **source_names** (TEXT[]): Array of source names for quick display
14 - * Avoids joining through evidence to sources
15 - * Updated when sources change
16 - * Format: `["New York Times", "Nature Journal", ...]`
14 + * Avoids joining through evidence to sources
15 + * Updated when sources change
16 + * Format: `["New York Times", "Nature Journal", ...]`
17 17 * **scenario_count** (INTEGER): Number of scenarios for this claim
18 - * Quick metric without counting rows
19 - * Updated when scenarios added/removed
18 + * Quick metric without counting rows
19 + * Updated when scenarios added/removed
20 20 * **cache_updated_at** (TIMESTAMP): When denormalized data was last refreshed
21 - * Helps invalidate stale caches
22 - * Triggers background refresh if too old
21 + * Helps invalidate stale caches
22 + * Triggers background refresh if too old
23 23 **Update Strategy**:
24 24 * **Immediate**: Update on claim edit (user-facing)
25 25 * **Deferred**: Update via background job every hour (non-critical)
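The immediate-update path can be made concrete with a minimal sketch. The field names match the schema above; the function, its parameters, and the example data are illustrative, not part of the specification:

{{code language="python"}}
from datetime import datetime, timezone

def refresh_claim_cache(claim, evidence_rows, scenario_count):
    """Recompute the denormalized claim fields from fresh query results.

    evidence_rows: the claim's evidence sorted by relevance (descending),
    as returned by the caller's own query. Sketch only, not the real API.
    """
    top5 = evidence_rows[:5]
    # evidence_summary: top 5 snippets, same shape as the JSONB format above
    claim["evidence_summary"] = [
        {"text": e["text"], "source": e["source"], "relevance": e["relevance"]}
        for e in top5
    ]
    # source_names: distinct names, avoids the join through evidence
    claim["source_names"] = sorted({e["source"] for e in evidence_rows})
    # scenario_count: maintained counter instead of COUNT(*) at read time
    claim["scenario_count"] = scenario_count
    # cache_updated_at: staleness marker checked by the background refresh
    claim["cache_updated_at"] = datetime.now(timezone.utc).isoformat()
    return claim

# Immediate path: called right after a user-facing claim edit
claim = {"id": "c1"}
evidence = [
    {"text": "Peer-reviewed trial...", "source": "Nature Journal", "relevance": 0.95},
    {"text": "News report...", "source": "New York Times", "relevance": 0.81},
]
print(refresh_claim_cache(claim, evidence, scenario_count=3))
{{/code}}

The deferred hourly job would call the same function for claims whose cache_updated_at is older than the threshold.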
... ...
@@ -63,48 +63,48 @@
63 63 Runs independently of claim analysis:
64 64 {{code language="python"}}
65 65 def update_source_scores_weekly():
66 - """
67 - Background job: Calculate source reliability
68 - Never triggered by individual claim analysis
69 - """
70 - # Analyze all claims from past week
71 - claims = get_claims_from_past_week()
72 - for source in get_all_sources():
73 - # Calculate accuracy metrics
74 - correct_verdicts = count_correct_verdicts_citing(source, claims)
75 - total_citations = count_total_citations(source, claims)
76 - accuracy = correct_verdicts / total_citations if total_citations > 0 else 0.5
77 - # Weight by claim importance
78 - weighted_score = calculate_weighted_score(source, claims)
79 - # Update source record
80 - source.track_record_score = weighted_score
81 - source.total_citations = total_citations
82 - source.last_updated = now()
83 - source.save()
84 - # Job runs: Sunday 2 AM UTC
85 - # Never during claim processing
66 +     """
67 +     Background job: Calculate source reliability
68 +     Never triggered by individual claim analysis
69 +     """
70 +     # Analyze all claims from past week
71 +     claims = get_claims_from_past_week()
72 +     for source in get_all_sources():
73 +         # Calculate accuracy metrics
74 +         correct_verdicts = count_correct_verdicts_citing(source, claims)
75 +         total_citations = count_total_citations(source, claims)
76 +         accuracy = correct_verdicts / total_citations if total_citations > 0 else 0.5
77 +         # Weight by claim importance
78 +         weighted_score = calculate_weighted_score(source, claims)
79 +         # Update source record
80 +         source.track_record_score = weighted_score
81 +         source.total_citations = total_citations
82 +         source.last_updated = now()
83 +         source.save()
84 +     # Job runs: Sunday 2 AM UTC
85 +     # Never during claim processing
86 86 {{/code}}
87 87 ==== Real-Time Claim Analysis (AKEL) ====
88 88 Uses source scores but never updates them:
89 89 {{code language="python"}}
90 90 def analyze_claim(claim_text):
91 - """
92 - Real-time: Analyze claim using current source scores
93 - READ source scores, never UPDATE them
94 - """
95 - # Gather evidence
96 - evidence_list = gather_evidence(claim_text)
97 - for evidence in evidence_list:
98 - # READ source score (snapshot from last weekly update)
99 - source = get_source(evidence.source_id)
100 - source_score = source.track_record_score
101 - # Use score to weight evidence
102 - evidence.weighted_relevance = evidence.relevance * source_score
103 - # Generate verdict using weighted evidence
104 - verdict = synthesize_verdict(evidence_list)
105 - # NEVER update source scores here
106 - # That happens in weekly background job
107 - return verdict
91 +     """
92 +     Real-time: Analyze claim using current source scores
93 +     READ source scores, never UPDATE them
94 +     """
95 +     # Gather evidence
96 +     evidence_list = gather_evidence(claim_text)
97 +     for evidence in evidence_list:
98 +         # READ source score (snapshot from last weekly update)
99 +         source = get_source(evidence.source_id)
100 +         source_score = source.track_record_score
101 +         # Use score to weight evidence
102 +         evidence.weighted_relevance = evidence.relevance * source_score
103 +     # Generate verdict using weighted evidence
104 +     verdict = synthesize_verdict(evidence_list)
105 +     # NEVER update source scores here
106 +     # That happens in weekly background job
107 +     return verdict
108 108 {{/code}}
109 109 ==== Monthly Audit (Quality Assurance) ====
110 110 Moderator review of flagged source scores:
... ...
@@ -138,14 +138,14 @@
138 138 **Example Timeline**:
139 139 ```
140 140 Sunday 2 AM: Calculate source scores for past week
141 - → NYT score: 0.87 (up from 0.85)
142 - → Blog X score: 0.52 (down from 0.61)
141 + → NYT score: 0.87 (up from 0.85)
142 + → Blog X score: 0.52 (down from 0.61)
143 143 Monday-Saturday: Claims processed using these scores
144 - → All claims this week use NYT=0.87
145 - → All claims this week use Blog X=0.52
144 + → All claims this week use NYT=0.87
145 + → All claims this week use Blog X=0.52
146 146 Next Sunday 2 AM: Recalculate scores including this week's claims
147 - → NYT score: 0.89 (trending up)
148 - → Blog X score: 0.48 (trending down)
147 + → NYT score: 0.89 (trending up)
148 + → Blog X score: 0.48 (trending down)
149 149 ```
150 150 === 1.4 Scenario ===
151 151 **Purpose**: Different interpretations or contexts for evaluating claims
... ...
@@ -157,8 +157,6 @@
157 157 * **description** (text): Human-readable description of the scenario
158 158 * **assumptions** (JSONB): Key assumptions that define this scenario context
159 159 * **extracted_from** (UUID): Reference to evidence that this scenario was extracted from
160 -* **verdict_summary** (text): Compiled verdict for this scenario
161 -* **confidence** (decimal 0-1): Confidence level for verdict in this scenario
162 162 * **created_at** (timestamp): When scenario was created
163 163 * **updated_at** (timestamp): Last modification
164 164 **How Found**: Evidence search → Extract context → Create scenario → Link to claim
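The "How Found" pipeline reads naturally as code; a minimal sketch under the V1.0 one-claim-per-scenario rule (all names and the example data are illustrative, not part of the specification):

{{code language="python"}}
import uuid
from datetime import datetime, timezone

def create_scenario(claim_id, evidence):
    """Build a scenario record from one piece of evidence.

    Mirrors the V1.0 flow: evidence search -> extract context ->
    create scenario -> link to (exactly one) claim.
    """
    now = datetime.now(timezone.utc).isoformat()
    return {
        "id": str(uuid.uuid4()),
        "claim_id": claim_id,                    # single-claim link in V1.0
        "description": evidence["context"],      # human-readable context
        "assumptions": evidence["assumptions"],  # JSONB in the real schema
        "extracted_from": evidence["id"],        # provenance reference
        "created_at": now,
        "updated_at": now,
    }

evidence = {
    "id": str(uuid.uuid4()),
    "context": "Clinical trials (healthy adults, structured programs)",
    "assumptions": {"population": "healthy adults", "setting": "RCT"},
}
print(create_scenario("claim-123", evidence))
{{/code}}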
... ...
@@ -168,7 +168,34 @@
168 168 * Scenario 2: "Real-world data (diverse population, Omicron variant)" from hospital data
169 169 * Scenario 3: "Immunocompromised patients" from specialist study
170 170 **V2.0 Evolution**: Many-to-many relationship can be added if users request cross-claim scenario sharing. For V1.0, keeping scenarios tied to single claims simplifies queries and reduces complexity without limiting functionality.
171 -=== 1.5 User ===
169 +
170 +=== 1.5 Verdict ===
171 +
172 +**Purpose**: Assessment of a claim within a specific scenario context. Each verdict provides a conclusion about whether the claim is supported, refuted, or uncertain given the scenario's assumptions and available evidence.
173 +
174 +**Core Fields**:
175 +* **id** (UUID): Primary key
176 +* **scenario_id** (UUID FK): The scenario being assessed
177 +* **likelihood_range** (text): Probabilistic assessment (e.g., "0.40-0.65 (uncertain)", "0.75-0.85 (likely true)")
178 +* **confidence** (decimal 0-1): How confident we are in this assessment
179 +* **explanation_summary** (text): Human-readable reasoning explaining the verdict
180 +* **uncertainty_factors** (text array): Specific factors limiting confidence (e.g., "Small sample sizes", "Lifestyle confounds", "Long-term effects unknown")
181 +* **created_at** (timestamp): When verdict was created
182 +* **updated_at** (timestamp): Last modification
183 +
184 +**Change Tracking**: Like all entities, verdict changes are tracked through the Edit entity (section 1.7), not through separate version tables. Each edit records before/after states.
185 +
186 +**Relationship**: Each Scenario has one Verdict. When understanding evolves, the verdict is updated and the change is logged in the Edit entity.
187 +
188 +**Example**:
189 +For claim "Exercise improves mental health" in scenario "Clinical trials (healthy adults, structured programs)":
190 +* Initial state: likelihood_range="0.40-0.65 (uncertain)", uncertainty_factors=["Small sample sizes", "Short-term studies only"]
191 +* After new evidence: likelihood_range="0.70-0.85 (likely true)", uncertainty_factors=["Lifestyle confounds remain"]
192 +* Edit entity records the complete before/after change with timestamp and reason
193 +
194 +**Key Design**: Verdicts are mutable entities tracked through the centralized Edit entity, consistent with Claims, Evidence, and Scenarios.
195 +
196 +=== 1.6 User ===
172 172 Fields: username, email, **role** (Reader/Contributor/Moderator), **reputation**, contributions_count
173 173 === User Reputation System ===
174 174 **V1.0 Approach**: Simple manual role assignment
... ...
@@ -225,7 +225,7 @@
225 225 * Reputation decay for inactive users
226 226 * Track record scoring for contributors
227 227 See [[When to Add Complexity>>FactHarbor.Specification.When-to-Add-Complexity]] for triggers.
228 -=== 1.6 Edit ===
253 +=== 1.7 Edit ===
229 229 **Fields**: entity_type, entity_id, user_id, before_state (JSON), after_state (JSON), edit_type, reason, created_at
230 230 **Purpose**: Complete audit trail for all content changes
231 231 === Edit History Details ===
... ...
@@ -258,9 +258,9 @@
258 258 * Legal compliance (audit trail)
259 259 * Rollback capability
260 260 See **Edit History Documentation** for complete details on what gets edited by whom, retention policy, and use cases
261 -=== 1.7 Flag ===
286 +=== 1.8 Flag ===
262 262 Fields: entity_id, reported_by, issue_type, status, resolution_note
263 -=== 1.8 QualityMetric ===
288 +=== 1.9 QualityMetric ===
264 264 **Fields**: metric_type, category, value, target, timestamp
265 265 **Purpose**: Time-series quality tracking
266 266 **Usage**:
... ...
@@ -270,7 +270,7 @@
270 270 * **A/B testing**: Compare control vs treatment metrics
271 271 * **Improvement validation**: Measure before/after changes
272 272 **Example**: `{type: "ErrorRate", category: "Politics", value: 0.12, target: 0.10, timestamp: "2025-12-17"}`
273 -=== 1.9 ErrorPattern ===
298 +=== 1.10 ErrorPattern ===
274 274 **Fields**: error_category, claim_id, description, root_cause, frequency, status
275 275 **Purpose**: Capture errors to trigger system improvements
276 276 **Usage**:
... ...
@@ -279,6 +279,11 @@
279 279 * **Improvement workflow**: Analyze → Fix → Test → Deploy → Re-process → Monitor
280 280 * **Metrics**: Track error rate reduction over time
281 281 **Example**: `{category: "WrongSource", description: "Unreliable tabloid cited", root_cause: "No quality check", frequency: 23, status: "Fixed"}`
307 +
308 +== 1.4 Core Data Model ERD ==
309 +
310 +{{include reference="FactHarbor.Specification.Diagrams.Core Data Model ERD.WebHome"/}}
311 +
282 282 == 1.5 User Class Diagram ==
283 283 {{include reference="FactHarbor.Specification.Diagrams.User Class Diagram.WebHome"/}}
284 284 == 2. Versioning Strategy ==
... ...
@@ -299,9 +299,9 @@
299 299 **Example**:
300 300 ```
301 301 Claim V1: "The sky is blue"
302 - → User edits →
332 + → User edits →
303 303 Claim V2: "The sky is blue during daytime"
304 - → EDIT table stores: {before: "The sky is blue", after: "The sky is blue during daytime"}
334 + → EDIT table stores: {before: "The sky is blue", after: "The sky is blue during daytime"}
305 305 ```
306 306 == 2.5. Storage vs Computation Strategy ==
307 307 **Critical architectural decision**: What to persist in databases vs compute dynamically?
... ...
@@ -389,8 +389,8 @@
389 389 * **Compute cost**: $0.005-0.01 per request (LLM API call)
390 390 * **Frequency**: Viewed in detail by ~20% of users
391 391 * **Trade-off analysis**:
392 - - IF STORED: 1M claims × 3 KB = 3 GB storage, $0.05/month, fast access
393 - - IF COMPUTED: 1M claims × 20% views × $0.01 = $2,000/month in LLM costs
422 + - IF STORED: 1M claims × 3 KB = 3 GB storage, $0.05/month, fast access
423 + - IF COMPUTED: 1M claims × 20% views × $0.01 = $2,000/month in LLM costs
394 394 * **Reproducibility**: Scenarios may improve as AI improves (good to recompute)
395 395 * **Speed**: Computed = 5-8 seconds delay, Stored = instant
396 396 * **Decision**: ✅ STORE (hybrid approach below)
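As a sanity check, the trade-off arithmetic above is easy to reproduce; a minimal sketch using only the figures quoted in the trade-off analysis (the $0.05/month is the quoted price for 3 GB, and the upper-bound LLM price is used):

{{code language="python"}}
CLAIMS = 1_000_000
SCENARIO_KB = 3        # per-claim scenario payload, as quoted
VIEW_RATE = 0.20       # share of claims viewed in detail
LLM_COST = 0.01        # $ per on-demand recompute (upper bound quoted)

stored_gb = CLAIMS * SCENARIO_KB / 1_000_000        # -> 3 GB
stored_monthly = 0.05                                # $/month, as quoted for 3 GB
computed_monthly = CLAIMS * VIEW_RATE * LLM_COST     # -> $2,000/month

print(f"Stored:   {stored_gb:.0f} GB at ~${stored_monthly:.2f}/month")
print(f"Computed: ${computed_monthly:,.0f}/month in LLM calls")
print(f"Storing wins by ~{computed_monthly / stored_monthly:,.0f}x on monthly cost")
{{/code}}

The sketch ignores shared infrastructure overhead; it only contrasts the marginal storage cost with the marginal recompute cost.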
... ...
@@ -423,8 +423,8 @@
423 423 * **Current design**: Stored in User table
424 424 * **Alternative**: Compute from Edit table
425 425 * **Trade-off**:
426 - - Stored: Fast, simple
427 - - Computed: Always accurate, no denormalization
456 + - Stored: Fast, simple
457 + - Computed: Always accurate, no denormalization
428 428 * **Frequency**: Read on every user action
429 429 * **Compute cost**: Simple COUNT query, milliseconds
430 430 * **Decision**: ✅ STORE - Performance critical, read-heavy
... ...
@@ -454,7 +454,7 @@
454 454 * **Total**: ~$75/month infrastructure
455 455 **LLM cost savings by caching**:
456 456 * Analysis summary stored: Save $0.03 per claim = $30K per 1M claims
457 - * Scenarios stored: Save $0.01 per claim × 20% views = $2K per 1M claims
487 + * Scenarios stored: Save $0.01 per claim × 20% views = $2K per 1M claims
458 458 * Verdict stored: Save $0.003 per claim = $3K per 1M claims
459 459 * **Total savings**: ~$35K per 1M claims vs recomputing every time
460 460 === Recomputation Triggers ===
... ...
@@ -472,11 +472,11 @@
472 472 **Year 1**: 10K claims
473 473 * Storage: 180 MB
474 474 * Cost: $10/month
475 -**Year 3**: 100K claims
505 +**Year 3**: 100K claims
476 476 * Storage: 1.8 GB
477 477 * Cost: $30/month
478 478 **Year 5**: 1M claims
479 -* Storage: 18 GB
509 +* Storage: 18 GB
480 480 * Cost: $75/month
481 481 **Year 10**: 10M claims
482 482 * Storage: 180 GB
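The projection scales linearly with claim count at the ~18 KB-per-claim footprint implied by the Year 1 figures; a quick sketch of the same arithmetic:

{{code language="python"}}
KB_PER_CLAIM = 18  # implied by Year 1: 10K claims -> 180 MB

# Reproduce the storage column of the growth projection above
for year, claims in [(1, 10_000), (3, 100_000), (5, 1_000_000), (10, 10_000_000)]:
    gb = claims * KB_PER_CLAIM / 1_000_000
    print(f"Year {year}: {claims:,} claims -> {gb:,.2f} GB")
{{/code}}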