Wiki source code of Requirements

Last modified by Robert Schaub on 2026/02/08 21:32

1 = Requirements =
2
3 **This page defines Roles, Content States, Rules, and System Requirements for FactHarbor.**
4
5 **Core Philosophy:** Invest in system improvement, not manual data correction. When AI makes errors, improve the algorithm and re-process automatically.
6
7 == Navigation ==
8
9 * **[[User Needs>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.User Needs.WebHome]]** - What users need from FactHarbor (drives these requirements)
10 * **This page** - How we fulfill those needs through system design
11
12 (% class="box infomessage" %)
13 (((
14 **How to read this page:**
15
16 1. **User Needs drive Requirements**: See [[User Needs>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.User Needs.WebHome]] for what users need
17 2. **Requirements define implementation**: This page shows how we fulfill those needs
18 3. **Functional Requirements (FR)**: Specific features and capabilities
19 4. **Non-Functional Requirements (NFR)**: Quality attributes (performance, security, etc.)
20
21 Each requirement references which User Needs it fulfills.
22 )))
23
24 == 1. Roles ==
25
26 **Fulfills**: UN-12 (Submit claims), UN-13 (Cite verdicts), UN-14 (API access)
27
28 FactHarbor uses three simple roles plus a reputation system; domain experts may additionally be consulted case by case (see 1.4).
29
30 === 1.1 Reader ===
31
32 **Who**: Anyone (no login required)
33
34 **Can**:
35
36 * Browse and search claims
37 * View scenarios, evidence, verdicts, and confidence scores
38 * Flag issues or errors
39 * Use filters, search, and visualization tools
40 * Submit claims (added automatically if not duplicates)
41
42 **Cannot**:
43
44 * Modify content
45 * Access edit history details
46
47 **User Needs served**: UN-1 (Trust assessment), UN-2 (Claim verification), UN-3 (Article summary with FactHarbor analysis summary), UN-4 (Social media fact-checking), UN-5 (Source tracing), UN-7 (Evidence transparency), UN-8 (Understanding disagreement), UN-12 (Submit claims), UN-17 (In-article highlighting)
48
49 === 1.2 Contributor ===
50
51 **Who**: Registered users (earn reputation through contributions)
52
53 **Can**:
54
55 * Everything a Reader can do
56 * Edit claims, evidence, and scenarios
57 * Add sources and citations
58 * Suggest improvements to AI-generated content
59 * Participate in discussions
60 * Earn reputation points for quality contributions
61
62 **Reputation System**:
63
64 * New contributors: Limited edit privileges
65 * Established contributors: Full edit access
66 * Trusted contributors (substantial reputation): Can approve certain changes
67 * Reputation earned through: Accepted edits, helpful flags, quality contributions
68 * Reputation lost through: Reverted edits, invalid flags, abuse
69
70 **Cannot**:
71
72 * Delete or hide content (only moderators)
73 * Override moderation decisions
74
75 **User Needs served**: UN-13 (Cite and contribute)
76
77 === 1.3 Moderator ===
78
79 **Who**: Trusted community members with proven track record, appointed by governance board
80
81 **Can**:
82
83 * Review flagged content
84 * Hide harmful or abusive content
85 * Resolve disputes between contributors
86 * Issue warnings or temporary bans
87 * Make final decisions on content disputes
88 * Access full audit logs
89
90 **Cannot**:
91
92 * Change governance rules
93 * Permanently ban users without board approval
94 * Override technical quality gates
95
96 **Note**: Small team (3-5 initially), supported by automated moderation tools.
97
98 === 1.4 Domain Trusted Contributors (Optional, Task-Specific) ===
99
100 **Who**: Subject matter specialists invited for specific high-stakes disputes
101
102 **Not a permanent role**: Contacted externally when needed for contested claims in their domain
103
104 **When used**:
105
106 * Medical claims with life/safety implications
107 * Legal interpretations with significant impact
108 * Scientific claims with high controversy
109 * Technical claims requiring specialized knowledge
110
111 **Process**:
112
113 * Moderator identifies the need for expert input
114 * The expert is contacted externally (they need not be registered users)
115 * The expert provides a written opinion with sources
116 * The opinion is added to the claim record
117 * The expert is acknowledged in the claim
118
119 **User Needs served**: UN-16 (Expert validation status)
120
121 == 2. Content States ==
122
123 **Fulfills**: UN-1 (Trust indicators), UN-16 (Review status transparency)
124
125 FactHarbor uses two content states. Focus is on transparency and confidence scoring, not gatekeeping.
126
127 === 2.1 Published ===
128
129 **Status**: Visible to all users
130
131 **Includes**:
132
133 * AI-generated analyses (default state)
134 * User-contributed content
135 * Edited/improved content
136
137 **Quality Indicators** (displayed with content):
138
139 * **Confidence Score**: 0-100% (AI's confidence in analysis)
140 * **Source Quality Score**: 0-100% (based on source track record)
141 * **Controversy Flag**: If high dispute/edit activity
142 * **Completeness Score**: % of expected fields filled
143 * **Last Updated**: Date of most recent change
144 * **Edit Count**: Number of revisions
145 * **Review Status**: AI-generated / Human-reviewed / Expert-validated
146
147 **Automatic Warnings**:
148
149 * Confidence < 60%: "Low confidence - use caution"
150 * Source quality < 40%: "Sources may be unreliable"
151 * High controversy: "Disputed - multiple interpretations exist"
152 * Medical/Legal/Safety domain: "Seek professional advice"
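The warning rules above can be sketched as a single function (Python; names are illustrative, not part of the spec; scores use the document's 0-100 scale):

```python
def automatic_warnings(confidence, source_quality, high_controversy, domain):
    """Return the warning banners to display, per the rules above (0-100 scores)."""
    warnings = []
    if confidence < 60:
        warnings.append("Low confidence - use caution")
    if source_quality < 40:
        warnings.append("Sources may be unreliable")
    if high_controversy:
        warnings.append("Disputed - multiple interpretations exist")
    if domain in {"medical", "legal", "safety"}:
        warnings.append("Seek professional advice")
    return warnings
```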
153
154 **User Needs served**: UN-1 (Trust score), UN-9 (Methodology transparency), UN-15 (Evolution timeline), UN-16 (Review status)
155
156 === 2.2 Hidden ===
157
158 **Status**: Not visible to regular users (only to moderators)
159
160 **Reasons**:
161
162 * Spam or advertising
163 * Personal attacks or harassment
164 * Illegal content
165 * Privacy violations
166 * Deliberate misinformation (verified)
167 * Abuse or harmful content
168
169 **Process**:
170
171 * Automated detection flags for moderator review
172 * Moderator confirms and hides
173 * Original author notified with reason
174 * Author can appeal to the board if they dispute the moderator's decision
175
176 **Note**: Content is hidden, not deleted (for audit trail)
177
178 == 3. Contribution Rules ==
179
180 === 3.1 All Contributors Must ===
181
182 * Provide sources for factual claims
183 * Use clear, neutral language in FactHarbor's own summaries
184 * Respect others and maintain civil discourse
185 * Accept community feedback constructively
186 * Focus on improving quality, not protecting ego
187
188 === 3.2 AKEL (AI System) ===
189
190 **AKEL is the primary system**. Human contributions supplement and train AKEL.
191
192 **AKEL Must**:
193
194 * Mark all outputs as AI-generated
195 * Display confidence scores prominently
196 * Provide source citations
197 * Flag uncertainty clearly
198 * Identify contradictions in evidence
199 * Learn from human corrections
200
201 **When AKEL Makes Errors**:
202
203 1. Capture the error pattern (what, why, how common)
204 2. Improve the system (better prompt, model, validation)
205 3. Re-process affected claims automatically
206 4. Measure improvement (did quality increase?)
207
208 **Human Role**: Train AKEL through corrections rather than replace it
209
210 === 3.3 Contributors Should ===
211
212 * Improve clarity and structure
213 * Add missing sources
214 * Flag errors for system improvement
215 * Suggest better ways to present information
216 * Participate in quality discussions
217
218 === 3.4 Moderators Must ===
219
220 * Be impartial
221 * Document moderation decisions
222 * Respond to appeals promptly
223 * Use automated tools to scale efforts
224 * Focus on abuse/harm, not routine quality control
225
226 == 4. Quality Standards ==
227
228 **Fulfills**: UN-5 (Source reliability), UN-6 (Publisher track records), UN-7 (Evidence transparency), UN-9 (Methodology transparency)
229
230 === 4.1 Source Requirements ===
231
232 **Track Record Over Credentials**:
233
234 * Sources evaluated by historical accuracy
235 * Correction policy matters
236 * Independence from conflicts of interest
237 * Methodology transparency
238
239 **Source Quality Database**:
240
241 * Automated tracking of source accuracy
242 * Correction frequency
243 * Reliability score (updated continuously)
244 * Users can see source track record
245
246 **No automatic trust** for government, academia, or media - all evaluated by track record.
247
248 **User Needs served**: UN-5 (Source provenance), UN-6 (Publisher reliability)
249
250 === 4.2 Claim Requirements ===
251
252 * Clear subject and assertion
253 * Verifiable with available information
254 * Sourced (or explicitly marked as needing sources)
255 * Neutral language in FactHarbor summaries
256 * Appropriate context provided
257
258 **User Needs served**: UN-2 (Claim extraction and verification)
259
260 === 4.3 Evidence Requirements ===
261
262 * Publicly accessible (or explain why not)
263 * Properly cited with attribution
264 * Relevant to claim being evaluated
265 * Original source preferred over secondary
266
267 **User Needs served**: UN-7 (Evidence transparency)
268
269 === 4.4 Confidence Scoring ===
270
271 **Automated confidence calculation based on**:
272
273 * Source quality scores
274 * Evidence consistency
275 * Contradiction detection
276 * Completeness of analysis
277 * Historical accuracy of similar claims
278
279 **Thresholds**:
280
281 * < 40%: Too low to publish (needs improvement)
282 * 40-60%: Published with "Low confidence" warning
283 * 60-80%: Published as standard
284 * 80-100%: Published as "High confidence"
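A minimal sketch of the threshold mapping above; treating each lower bound as inclusive is an assumption, since the spec does not say on which side each boundary falls:

```python
def publication_status(confidence):
    """Map a 0-100 confidence score to its publication handling."""
    if confidence < 40:
        return "not published - needs improvement"
    if confidence < 60:
        return 'published with "Low confidence" warning'
    if confidence < 80:
        return "published as standard"
    return 'published as "High confidence"'
```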
285
286 **User Needs served**: UN-1 (Trust assessment), UN-9 (Methodology transparency)
287
288 == 5. Automated Risk Scoring ==
289
290 **Fulfills**: UN-10 (Manipulation detection), UN-16 (Appropriate review level)
291
292 **Replace manual risk tiers with continuous automated scoring**.
293
294 === 5.1 Risk Score Calculation ===
295
296 **Factors** (weighted algorithm):
297
298 * **Domain sensitivity**: Medical, legal, safety auto-flagged higher
299 * **Potential impact**: Views, citations, spread
300 * **Controversy level**: Flags, disputes, edit wars
301 * **Uncertainty**: Low confidence, contradictory evidence
302 * **Source reliability**: Track record of sources used
303
304 **Score**: 0-100 (higher = more risk)
305
306 === 5.2 Automated Actions ===
307
308 * **Score > 80**: Flag for moderator review before publication
309 * **Score 60-80**: Publish with prominent warnings
310 * **Score 40-60**: Publish with standard warnings
311 * **Score < 40**: Publish normally
312
313 **Continuous monitoring**: Risk score recalculated as new information emerges
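The scoring and action rules of 5.1-5.2 might be sketched as follows. The weights are illustrative (the spec names the factors but not their weights), and each factor is assumed to be pre-normalized to 0-100:

```python
# Illustrative weights -- not specified by the document.
WEIGHTS = {
    "domain_sensitivity": 0.30,
    "potential_impact": 0.20,
    "controversy": 0.20,
    "uncertainty": 0.15,
    "source_unreliability": 0.15,
}

def risk_score(factors):
    """Weighted 0-100 risk score from pre-normalized factor values."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def risk_action(score):
    """Map a risk score to the automated action defined in 5.2."""
    if score > 80:
        return "flag for moderator review before publication"
    if score > 60:
        return "publish with prominent warnings"
    if score > 40:
        return "publish with standard warnings"
    return "publish normally"
```

Re-running `risk_score` whenever new flags, edits, or evidence arrive gives the continuous recalculation described above.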
314
315 **User Needs served**: UN-10 (Detect manipulation tactics), UN-16 (Review status)
316
317 == 6. System Improvement Process ==
318
319 **Core principle**: Fix the system, not just the data.
320
321 === 6.1 Error Capture ===
322
323 **When users flag errors or make corrections**:
324
325 1. What was wrong? (categorize)
326 2. What should it have been?
327 3. Why did the system fail? (root cause)
328 4. How common is this pattern?
329 5. Store in ErrorPattern table (improvement queue)
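The five capture steps suggest a record shape along these lines; the `ErrorPattern` field names are assumptions, since the spec names only the table:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ErrorPattern:
    """One row of the improvement queue described above."""
    category: str              # 1. what was wrong
    expected: str              # 2. what it should have been
    root_cause: str            # 3. why the system failed
    occurrence_count: int = 1  # 4. how common the pattern is
    first_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def record_occurrence(self):
        """Bump the frequency counter when the same pattern recurs."""
        self.occurrence_count += 1
```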
330
331 === 6.2 Continuous Improvement Cycle ===
332
333 1. **Review**: Analyze top error patterns
334 2. **Develop**: Create fix (prompt, model, validation)
335 3. **Test**: Validate fix on sample claims
336 4. **Deploy**: Roll out if quality improves
337 5. **Re-process**: Automatically update affected claims
338 6. **Monitor**: Track quality metrics
339
340 === 6.3 Quality Metrics Dashboard ===
341
342 **Track continuously**:
343
344 * Error rate by category
345 * Source quality distribution
346 * Confidence score trends
347 * User flag rate (issues found)
348 * Correction acceptance rate
349 * Re-work rate
350 * Claims processed per hour
351
352 **Goal**: continuous reduction of the error rate
353
354 == 7. Automated Quality Monitoring ==
355
356 **Replace manual audit sampling with automated monitoring**.
357
358 === 7.1 Continuous Metrics ===
359
360 * **Source quality**: Track record database
361 * **Consistency**: Contradiction detection
362 * **Clarity**: Readability scores
363 * **Completeness**: Field validation
364 * **Accuracy**: User corrections tracked
365
366 === 7.2 Anomaly Detection ===
367
368 **Automated alerts for**:
369
370 * Sudden quality drops
371 * Unusual patterns
372 * Contradiction clusters
373 * Source reliability changes
374 * User behavior anomalies
375
376 === 7.3 Targeted Review ===
377
378 * Review only flagged items
379 * Random sampling for calibration (not quotas)
380 * Learn from corrections to improve automation
381
382 == 8. Functional Requirements ==
383
384 This section defines specific features that fulfill user needs.
385
386 === 8.1 Claim Intake & Normalization ===
387
388 ==== FR1 — Claim Intake ====
389
390 **Fulfills**: UN-2 (Claim extraction), UN-4 (Quick fact-checking), UN-12 (Submit claims)
391
392 * Users submit claims via simple form or API
393 * Claims can be text, URL, or image
394 * Duplicate detection (semantic similarity)
395 * Auto-categorization by domain
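Duplicate detection by semantic similarity might look like this sketch, assuming claim embeddings are computed upstream; the 0.9 threshold and the function names are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def find_duplicate(new_embedding, existing, threshold=0.9):
    """Return the id of the most similar existing claim at or above threshold, else None."""
    best_id, best_sim = None, threshold
    for claim_id, embedding in existing.items():
        sim = cosine(new_embedding, embedding)
        if sim >= best_sim:
            best_id, best_sim = claim_id, sim
    return best_id
```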
396
397 ==== FR2 — Claim Normalization ====
398
399 **Fulfills**: UN-2 (Claim verification)
400
401 * Standardize to clear assertion format
402 * Extract key entities (who, what, when, where)
403 * Identify claim type (factual, predictive, evaluative)
404 * Link to existing similar claims
405
406 ==== FR3 — Claim Classification ====
407
408 **Fulfills**: UN-11 (Filtered research)
409
410 * Domain: Politics, Science, Health, etc.
411 * Type: Historical fact, current stat, prediction, etc.
412 * Risk score: Automated calculation
413 * Complexity: Simple, moderate, complex
414
415 === 8.2 Scenario System ===
416
417 ==== FR4 — Scenario Generation ====
418
419 **Fulfills**: UN-2 (Context-dependent verification), UN-3 (Article summary with FactHarbor analysis summary), UN-8 (Understanding disagreement)
420
421 **Automated scenario creation**:
422
423 * AKEL analyzes claim and generates likely scenarios (use-cases and contexts)
424 * Each scenario includes: assumptions, definitions, boundaries, evidence context
425 * Users can flag incorrect scenarios
426 * System learns from corrections
427
428 **Key Concept**: Scenarios represent different interpretations or contexts (e.g., "Clinical trials with healthy adults" vs. "Real-world data with diverse populations")
429
430 ==== FR5 — Evidence Linking ====
431
432 **Fulfills**: UN-5 (Source tracing), UN-7 (Evidence transparency)
433
434 * Automated evidence discovery from sources
435 * Relevance scoring
436 * Contradiction detection
437 * Source quality assessment
438
439 ==== FR6 — Scenario Comparison ====
440
441 **Fulfills**: UN-3 (Article summary with FactHarbor analysis summary), UN-8 (Understanding disagreement)
442
443 * Side-by-side comparison interface
444 * Highlight key differences between scenarios
445 * Show evidence supporting each scenario
446 * Display confidence scores per scenario
447
448 === 8.3 Verdicts & Analysis ===
449
450 ==== FR7 — Automated Verdicts ====
451
452 **Fulfills**: UN-1 (Trust score), UN-2 (Verification verdicts), UN-3 (Article summary with FactHarbor analysis summary), UN-13 (Cite verdicts)
453
454 * AKEL generates verdict based on evidence within each scenario
455 * **Likelihood range** displayed (e.g., "0.70-0.85 (likely true)") - NOT binary true/false
456 * **Uncertainty factors** explicitly listed (e.g., "Small sample sizes", "Long-term effects unknown")
457 * Confidence score displayed prominently
458 * Source quality indicators shown
459 * Contradictions noted
460 * Uncertainty acknowledged
461
462 **Key Innovation**: Detailed probabilistic verdicts with explicit uncertainty, not binary judgments
463
464 ==== FR8 — Time Evolution ====
465
466 **Fulfills**: UN-15 (Verdict evolution timeline)
467
468 * Claims and verdicts update as new evidence emerges
469 * Version history maintained for all verdicts
470 * Changes highlighted
471 * Confidence score trends visible
472 * Users can see "as of date X, what did we know?"
473
474 === 8.4 User Interface & Presentation ===
475
476 ==== FR12 — Two-Panel Summary View (Article Summary with FactHarbor Analysis Summary) ====
477
478 **Fulfills**: UN-3 (Article Summary with FactHarbor Analysis Summary)
479
480 **Purpose**: Provide side-by-side comparison of what a document claims vs. FactHarbor's complete analysis of its credibility
481
482 **Left Panel: Article Summary**:
483
484 * Document title, source, and claimed credibility
485 * "The Big Picture" - main thesis or position change
486 * "Key Findings" - structured summary of document's main claims
487 * "Reasoning" - document's explanation for positions
488 * "Conclusion" - document's bottom line
489
490 **Right Panel: FactHarbor Analysis Summary**:
491
492 * FactHarbor's independent source credibility assessment
493 * Claim-by-claim verdicts with confidence scores
494 * Methodology assessment (strengths, limitations)
495 * Overall verdict on document quality
496 * Analysis ID for reference
497
498 **Design Principles**:
499
500 * No scrolling required - both panels visible simultaneously
501 * Visual distinction between "what they say" and "FactHarbor's analysis"
502 * Color coding for verdicts (supported, uncertain, refuted)
503 * Confidence percentages clearly visible
504 * Mobile responsive (panels stack vertically on small screens)
505
506 **Implementation Notes**:
507
508 * Generated automatically by AKEL for every analyzed document
509 * Updates when verdict evolves (maintains version history)
510 * Exportable as standalone summary report
511 * Shareable via permanent URL
512
513 ==== FR13 — In-Article Claim Highlighting ====
514
515 **Fulfills**: UN-17 (In-article claim highlighting)
516
517 **Purpose**: Enable readers to quickly assess claim credibility while reading by visually highlighting factual claims with color-coded indicators
518
519 ==== Visual Example: Article with Highlighted Claims ====
520
521 (% class="box" %)
522 (((
523 **Article: "New Study Shows Benefits of Mediterranean Diet"**
524
525 A recent study published in the Journal of Nutrition has revealed new findings about the Mediterranean diet.
526
527 (% class="box successmessage" style="margin:10px 0;" %)
528 (((
529 🟢 **Researchers found that Mediterranean diet followers had a 25% lower risk of heart disease compared to control groups**
530
531 (% style="font-size:0.9em; color:#666;" %)
532 ↑ WELL SUPPORTED • 87% confidence
533 [[Click for evidence details →]]
534
535
536 )))
537
538 The study, which followed 10,000 participants over five years, showed significant improvements in cardiovascular health markers.
539
540 (% class="box warningmessage" style="margin:10px 0;" %)
541 (((
542 🟡 **Some experts believe this diet can completely prevent heart attacks**
543
544 (% style="font-size:0.9em; color:#666;" %)
545 ↑ UNCERTAIN • 45% confidence
546 Overstated - evidence shows risk reduction, not prevention
547 [[Click for details →]]
548
549
550 )))
551
552 Dr. Maria Rodriguez, lead researcher, recommends incorporating more olive oil, fish, and vegetables into daily meals.
553
554 (% class="box errormessage" style="margin:10px 0;" %)
555 (((
556 🔴 **The study proves that saturated fats cause heart disease**
557
558 (% style="font-size:0.9em; color:#666;" %)
559 ↑ REFUTED • 15% confidence
560 Claim not supported by study design; correlation ≠ causation
561 [[Click for counter-evidence →]]
562
563
564 )))
565
566 Participants also reported feeling more energetic and experiencing better sleep quality, though these were secondary measures.
567 )))
568
569 **Legend:**
570
571 * 🟢 = Well-supported claim (confidence ≥75%)
572 * 🟡 = Uncertain claim (confidence 40-74%)
573 * 🔴 = Refuted/unsupported claim (confidence <40%)
574 * Plain text = Non-factual content (context, opinions, recommendations)
575
576 ==== Tooltip on Hover/Click ====
577
578 (% class="box infomessage" %)
579 (((
580 **FactHarbor Analysis**
581
582 **Claim:**
583 "Researchers found that Mediterranean diet followers had a 25% lower risk of heart disease"
584
585 **Verdict:** WELL SUPPORTED
586 **Confidence:** 87%
587
588 **Evidence Summary:**
589
590 * Meta-analysis of 12 RCTs confirms 23-28% risk reduction
591 * Consistent findings across multiple populations
592 * Published in peer-reviewed journal (high credibility)
593
594 **Uncertainty Factors:**
595
596 * Exact percentage varies by study (20-30% range)
597
598 [[View Full Analysis →]]
599 )))
600
601 **Color-Coding System**:
602
603 * **Green**: Well-supported claims (confidence ≥75%, strong evidence)
604 * **Yellow/Orange**: Uncertain claims (confidence 40-74%, conflicting or limited evidence)
605 * **Red**: Refuted or unsupported claims (confidence <40%, contradicted by evidence)
606 * **Gray/Neutral**: Non-factual content (opinions, questions, procedural text)
607
608 ==== Interactive Highlighting Example (Detailed View) ====
609
610 (% style="width:100%; border-collapse:collapse;" %)
611 |=**Article Text**|=**Status**|=**Analysis**
612 |(((
613 A recent study published in the Journal of Nutrition has revealed new findings about the Mediterranean diet.
614 )))|(% style="text-align:center;" %)Plain text|(% style="font-style:italic; color:#888;" %)Context - no highlighting
615 |(((
616 //Researchers found that Mediterranean diet followers had a 25% lower risk of heart disease compared to control groups//
617 )))|(% style="background-color:#D4EDDA; text-align:center; padding:8px;" %)🟢 **WELL SUPPORTED**|(((
618 **87% confidence**
619
620 Meta-analysis of 12 RCTs confirms 23-28% risk reduction
621
622 [[View Full Analysis]]
623 )))
624 |(((
625 The study, which followed 10,000 participants over five years, showed significant improvements in cardiovascular health markers.
626 )))|(% style="text-align:center;" %)Plain text|(% style="font-style:italic; color:#888;" %)Methodology - no highlighting
627 |(((
628 //Some experts believe this diet can completely prevent heart attacks//
629 )))|(% style="background-color:#FFF3CD; text-align:center; padding:8px;" %)🟡 **UNCERTAIN**|(((
630 **45% confidence**
631
632 Overstated - evidence shows risk reduction, not prevention
633
634 [[View Details]]
635 )))
636 |(((
637 Dr. Rodriguez recommends incorporating more olive oil, fish, and vegetables into daily meals.
638 )))|(% style="text-align:center;" %)Plain text|(% style="font-style:italic; color:#888;" %)Recommendation - no highlighting
639 |(((
640 //The study proves that saturated fats cause heart disease//
641 )))|(% style="background-color:#F8D7DA; text-align:center; padding:8px;" %)🔴 **REFUTED**|(((
642 **15% confidence**
643
644 Claim not supported by study; correlation ≠ causation
645
646 [[View Counter-Evidence]]
647 )))
648
649 **Design Notes:**
650
651 * Highlighted claims use italics to distinguish from plain text
652 * Color backgrounds match XWiki message box colors (success/warning/error)
653 * Status column shows verdict prominently
654 * Analysis column provides quick summary with link to details
655
656 **User Actions**:
657
658 * **Hover** over highlighted claim → Tooltip appears
659 * **Click** highlighted claim → Detailed analysis modal/panel
660 * **Toggle** button to turn highlighting on/off
661 * **Keyboard**: Tab through highlighted claims
662
663 **Interaction Design**:
664
665 * Hover/click on highlighted claim → Show tooltip with:
666 ** Claim text
667 ** Verdict (e.g., "WELL SUPPORTED")
668 ** Confidence score (e.g., "85%")
669 ** Brief evidence summary
670 ** Link to detailed analysis
671 * Toggle highlighting on/off (user preference)
672 * Adjustable color intensity for accessibility
673
674 **Technical Requirements**:
675
676 * Real-time highlighting as page loads (non-blocking)
677 * Claim boundary detection (start/end of assertion)
678 * Handle nested or overlapping claims
679 * Preserve original article formatting
680 * Work with various content formats (HTML, plain text, PDFs)
681
682 **Performance Requirements**:
683
684 * Highlighting renders within 500ms of page load
685 * No perceptible delay in reading experience
686 * Efficient DOM manipulation (avoid reflows)
687
688 **Accessibility**:
689
690 * Color-blind friendly palette (use patterns/icons in addition to color)
691 * Screen reader compatible (ARIA labels for claim credibility)
692 * Keyboard navigation to highlighted claims
693
694 **Implementation Notes**:
695
696 * Claims extracted and analyzed by AKEL during initial processing
697 * Highlighting data stored as annotations with byte offsets
698 * Client-side rendering of highlights based on verdict data
699 * Mobile responsive (tap instead of hover)
700
701 === 8.5 Workflow & Moderation ===
702
703 ==== FR9 — Publication Workflow ====
704
705 **Fulfills**: UN-1 (Fast access to verified content), UN-16 (Clear review status)
706
707 **Simple flow**:
708
709 1. Claim submitted
710 2. AKEL processes (automated)
711 3. If confidence > threshold: Publish (labeled as AI-generated)
712 4. If confidence < threshold: Flag for improvement
713 5. If risk score > threshold: Flag for moderator
714
715 **No multi-stage approval process**
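The five-step flow can be condensed into one decision function. Checking risk before confidence is a design assumption (the spec does not state precedence), and the thresholds are illustrative:

```python
def publication_decision(confidence, risk_score, conf_threshold=60, risk_threshold=80):
    """Decide the outcome of AKEL processing, per the flow above (0-100 scores)."""
    if risk_score > risk_threshold:
        return "flag for moderator"
    if confidence <= conf_threshold:
        return "flag for improvement"
    return "publish (labeled as AI-generated)"
```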
716
717 ==== FR10 — Moderation ====
718
719 **Focus on abuse, not routine quality**:
720
721 * Automated abuse detection
722 * Moderators handle flags
723 * Quick response to harmful content
724 * Minimal involvement in routine content
725
726 ==== FR11 — Audit Trail ====
727
728 **Fulfills**: UN-14 (API access to histories), UN-15 (Evolution tracking)
729
730 * All edits logged
731 * Version history public
732 * Moderation decisions documented
733 * System improvements tracked
734
735 == 9. Non-Functional Requirements ==
736
737 === 9.1 NFR1 — Performance ===
738
739 **Fulfills**: UN-4 (Fast fact-checking), UN-11 (Responsive filtering)
740
741 * Claim processing: < 30 seconds
742 * Search response: < 2 seconds
743 * Page load: < 3 seconds
744 * 99% uptime
745
746 === 9.2 NFR2 — Scalability ===
747
748 **Fulfills**: UN-14 (API access at scale)
749
750 * Handle 10,000 claims initially
751 * Scale to 1M+ claims
752 * Support 100K+ concurrent users
753 * Automated processing scales linearly
754
755 === 9.3 NFR3 — Transparency ===
756
757 **Fulfills**: UN-7 (Evidence transparency), UN-9 (Methodology transparency), UN-13 (Citable verdicts), UN-15 (Evolution visibility)
758
759 * All algorithms open source
760 * All data exportable
761 * All decisions documented
762 * Quality metrics public
763
764 === 9.4 NFR4 — Security & Privacy ===
765
766 * Follow [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]]
767 * Secure authentication
768 * Data encryption
769 * Regular security audits
770
771 === 9.5 NFR5 — Maintainability ===
772
773 * Modular architecture
774 * Automated testing
775 * Continuous integration
776 * Comprehensive documentation
777
778 === NFR11 — AKEL Quality Assurance Framework ===
779
780 **Fulfills:** AI safety, IFCN methodology transparency
781
782 **Specification:**
783
784 Multi-layer AI quality gates to detect hallucinations, low-confidence results, and logical inconsistencies.
785
786 ==== Quality Gate 1: Claim Extraction Validation ====
787
788 **Purpose:** Ensure extracted claims are factual assertions (not opinions/predictions)
789
790 **Checks:**
791
792 1. **Factual Statement Test:** Is this verifiable? (Yes/No)
793 2. **Opinion Detection:** Contains hedging language? ("I think", "probably", "best")
794 3. **Future Prediction Test:** Makes claims about future events?
795 4. **Specificity Score:** Contains specific entities, numbers, dates?
796
797 **Thresholds:**
798
799 * Factual: Must be "Yes"
800 * Opinion markers: <2 hedging phrases
801 * Specificity: ≥3 specific elements
802
803 **Action if Failed:** Flag as "Non-verifiable", do NOT generate verdict
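A sketch of Gate 1's threshold logic. The verifiability flag and specificity count are assumed to come from upstream classifiers, and the naive substring hedge counter is illustrative only (it would also match words like "bestseller"):

```python
HEDGE_PHRASES = ("i think", "probably", "best", "i believe")

def gate1_claim_extraction(claim_text, is_verifiable, specific_elements):
    """Pass only verifiable, specific, low-hedging claims (Gate 1 thresholds)."""
    hedge_count = sum(claim_text.lower().count(p) for p in HEDGE_PHRASES)
    if is_verifiable and hedge_count < 2 and specific_elements >= 3:
        return "pass"
    return "flag as non-verifiable; no verdict generated"
```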
804
805 ==== Quality Gate 2: Evidence Relevance Validation ====
806
807 **Purpose:** Ensure AI-linked evidence actually relates to claim
808
809 **Checks:**
810
811 1. **Semantic Similarity Score:** Evidence vs. claim (embeddings)
812 2. **Entity Overlap:** Shared people/places/things?
813 3. **Topic Relevance:** Discusses claim subject?
814
815 **Thresholds:**
816
817 * Similarity: ≥0.6 (cosine similarity)
818 * Entity overlap: ≥1 shared entity
819 * Topic relevance: ≥0.5
820
821 **Action if Failed:** Discard irrelevant evidence
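Gate 2 then reduces to a filter over precomputed scores; the embedding and entity-extraction models that produce those scores are outside this sketch:

```python
def gate2_filter_evidence(evidence_items):
    """Keep only evidence meeting all three Gate 2 thresholds; discard the rest."""
    return [
        ev for ev in evidence_items
        if ev["similarity"] >= 0.6
        and ev["shared_entities"] >= 1
        and ev["topic_relevance"] >= 0.5
    ]
```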
822
823 ==== Quality Gate 3: Scenario Coherence Check ====
824
825 **Purpose:** Validate scenario assumptions are logical and complete
826
827 **Checks:**
828
829 1. **Completeness:** All required fields populated
830 2. **Internal Consistency:** Assumptions don't contradict
831 3. **Distinguishability:** Scenarios meaningfully different
832
833 **Thresholds:**
834
835 * Required fields: 100%
836 * Contradiction score: <0.3
837 * Scenario similarity: <0.8
838
839 **Action if Failed:** Merge duplicate scenarios and reduce confidence by 20 points
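A sketch of Gate 3, assuming the completeness fields follow FR4 (assumptions, definitions, boundaries, evidence context) and that contradiction and pairwise-similarity scores are computed upstream:

```python
REQUIRED_FIELDS = ("assumptions", "definitions", "boundaries", "evidence_context")

def gate3_scenario_check(scenarios, max_pairwise_similarity):
    """Return (status, confidence_adjustment) per the Gate 3 thresholds above."""
    complete = all(all(s.get(f) for f in REQUIRED_FIELDS) for s in scenarios)
    consistent = all(s.get("contradiction_score", 0.0) < 0.3 for s in scenarios)
    distinct = max_pairwise_similarity < 0.8
    if complete and consistent and distinct:
        return "pass", 0
    return "merge-or-flag", -20
```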
840
841 ==== Quality Gate 4: Verdict Confidence Assessment ====
842
843 **Purpose:** Only publish high-confidence verdicts
844
845 **Checks:**
846
847 1. **Evidence Count:** Minimum 2 sources
848 2. **Source Quality:** Average reliability ≥0.6
849 3. **Evidence Agreement:** Supporting vs. contradicting ≥0.6
850 4. **Uncertainty Factors:** Hedging in reasoning
851
852 **Confidence Tiers:**
853
854 * **HIGH (80-100%):** ≥3 sources, ≥0.7 quality, ≥80% agreement
855 * **MEDIUM (50-79%):** ≥2 sources, ≥0.6 quality, ≥60% agreement
856 * **LOW (0-49%):** <2 sources OR low quality/agreement
857 * **INSUFFICIENT:** <2 sources → DO NOT PUBLISH
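The tier table maps directly to a function; `INSUFFICIENT` is checked first because the LOW tier ("<2 sources OR ...") overlaps it:

```python
def confidence_tier(source_count, avg_source_quality, evidence_agreement):
    """Gate 4: map evidence statistics to the confidence tiers above."""
    if source_count < 2:
        return "INSUFFICIENT - do not publish"
    if source_count >= 3 and avg_source_quality >= 0.7 and evidence_agreement >= 0.8:
        return "HIGH"
    if avg_source_quality >= 0.6 and evidence_agreement >= 0.6:
        return "MEDIUM"
    return "LOW"
```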
858
859 **Implementation Phases:**
860
861 * **POC1:** Gates 1 & 4 only (basic validation)
862 * **POC2:** All 4 gates (complete framework)
863 * **V1.0:** Hardened with <5% hallucination rate
864
865 **Acceptance Criteria:**
866
867 * ✅ All gates operational
868 * ✅ Hallucination rate <5%
869 * ✅ Quality metrics public
870
871 === NFR12 — Security Controls ===
872
873 **Fulfills:** Data protection, system integrity, user privacy, production readiness
874
875 **Phase:** Beta 0 (essential), V1.0 (complete) **BLOCKER**
876
877 **Purpose:** Protect FactHarbor systems, user data, and operations from security threats, ensuring production-grade security posture.
878
879 **Specification:**
880
881 ==== API Security ====
882
883 **Rate Limiting:**
884
885 * **Analysis endpoints:** 100 requests/hour per IP
886 * **Read endpoints:** 1,000 requests/hour per IP
887 * **Search:** 500 requests/hour per IP
888 * **Authenticated users:** 5x higher limits
889 * **Burst protection:** Max 10 requests/second
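Per-IP limits with burst protection are often implemented as a token bucket; this is a sketch under that assumption, not a mandated implementation. It refills at the hourly rate and caps bursts at the bucket capacity:

```python
import time

class TokenBucket:
    """One bucket per (IP, endpoint class): hourly refill, burst-sized capacity."""

    def __init__(self, per_hour, burst=10):
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.rate = per_hour / 3600.0          # tokens added per second
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; False means the request is throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

For the "5x higher limits" rule, an authenticated user's bucket would simply be constructed with `per_hour * 5`.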
890
891 **Authentication & Authorization:**
892
893 * **API Keys:** Required for programmatic access
894 * **JWT tokens:** For user sessions (1-hour expiry)
895 * **OAuth2:** For third-party integrations
896 * **Role-Based Access Control (RBAC):**
897 ** Public: Read-only access to published claims
898 ** Contributor: Submit claims, provide evidence
899 ** Moderator: Review contributions, manage quality
900 ** Admin: System configuration, user management
901
902 **CORS Policies:**
903
904 * Whitelist approved domains only
905 * No wildcard origins in production
906 * Credentials required for sensitive endpoints
907
908 **Input Sanitization:**
909
910 * Validate all user input against schemas
911 * Sanitize HTML/JavaScript in text submissions
912 * Prevent SQL injection (use parameterized queries)
913 * Prevent command injection (no shell execution of user input)
914 * Max request size: 10MB
915 * File upload restrictions: Whitelist file types, scan for malware
916
917 ----
918
919 ==== Data Security ====
920
921 **Encryption at Rest:**
922
923 * Database encryption using AES-256
924 * Encrypted backups
925 * Key management via cloud provider KMS (AWS KMS, Google Cloud KMS)
926 * Regular key rotation (90-day cycle)
927
928 **Encryption in Transit:**
929
930 * HTTPS/TLS 1.3 only (no TLS 1.0/1.1)
931 * Strong cipher suites only
932 * HSTS (HTTP Strict Transport Security) enabled
933 * Certificate pinning for mobile apps
934
935 **Secure Credential Storage:**
936
937 * Passwords hashed with bcrypt (cost factor 12+)
938 * API keys encrypted in database
939 * Secrets stored in environment variables (never in code)
940 * Use secrets manager (AWS Secrets Manager, HashiCorp Vault)
941
942 **Data Privacy:**
943
944 * Minimal data collection (privacy by design)
945 * User data deletion on request (GDPR compliance)
946 * PII encryption in database
947 * Anonymize logs (no PII in log files)
948
949 ----
950
951 ==== Application Security ====
952
953 **OWASP Top 10 Compliance:**
954
955 1. **Broken Access Control:** RBAC implementation, path traversal prevention
956 2. **Cryptographic Failures:** Strong encryption, secure key management
957 3. **Injection:** Parameterized queries, input validation
958 4. **Insecure Design:** Security review of all features
959 5. **Security Misconfiguration:** Hardened defaults, security headers
960 6. **Vulnerable Components:** Dependency scanning (see below)
961 7. **Authentication Failures:** Strong password policy, MFA support
962 8. **Data Integrity Failures:** Signature verification, checksums
963 9. **Security Logging Failures:** Comprehensive audit logs
964 10. **Server-Side Request Forgery:** URL validation, whitelist domains
965
966 **Security Headers:**
967
968 * `Content-Security-Policy`: Strict CSP to prevent XSS
969 * `X-Frame-Options`: DENY (prevent clickjacking)
970 * `X-Content-Type-Options`: nosniff
971 * `Referrer-Policy`: strict-origin-when-cross-origin
972 * `Permissions-Policy`: Restrict browser features
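A baseline header map applied to every response is one common way to enforce this list. The values below are illustrative placeholders, not FactHarbor's actual CSP or permissions policy:

```python
# Baseline header set for the policy above (values are examples only).
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge baseline security headers into a response without
    overwriting anything the handler set explicitly."""
    return {**SECURITY_HEADERS, **response_headers}

headers = apply_security_headers({"Content-Type": "text/html"})
```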
973
974 **Dependency Vulnerability Scanning:**
975
976 * **Tools:** Snyk, Dependabot, npm audit, pip-audit
977 * **Frequency:** Daily automated scans
978 * **Action:** Patch critical vulnerabilities within 24 hours
979 * **Policy:** No known high/critical CVEs in production
980
981 **Security Audits:**
982
983 * **Internal:** Quarterly security reviews
984 * **External:** Annual penetration testing by certified firm
985 * **Bug Bounty:** Public bug bounty program (V1.1+)
986 * **Compliance:** SOC 2 Type II certification target (V1.5)
987
988 ----
989
990 ==== Operational Security ====
991
992 **DDoS Protection:**
993
994 * CloudFlare or AWS Shield
995 * Rate limiting at CDN layer
996 * Automatic IP blocking for abuse patterns
997
998 **Monitoring & Alerting:**
999
1000 * Real-time security event monitoring
1001 * Alerts for:
1002 * Failed login attempts (>5 in 10 minutes)
1003 * API abuse patterns
1004 * Unusual data access patterns
1005 * Security scan detections
1006 * Integration with SIEM (Security Information and Event Management)
1007
1008 **Incident Response:**
1009
1010 * Documented incident response plan
1011 * Security incident classification (P1-P4)
1012 * On-call rotation for security issues
1013 * Post-mortem for all security incidents
1014 * Public disclosure policy (coordinated disclosure)
1015
1016 **Backup & Recovery:**
1017
1018 * Daily encrypted backups
1019 * 30-day retention period
1020 * Tested recovery procedures (quarterly)
1021 * Disaster recovery plan (RTO: 4 hours, RPO: 1 hour)
1022
1023 ----
1024
1025 ==== Compliance & Standards ====
1026
1027 **GDPR Compliance:**
1028
1029 * User consent management
1030 * Right to access data
1031 * Right to deletion
1032 * Data portability
1033 * Privacy policy published
1034
1035 **Accessibility:**
1036
1037 * WCAG 2.1 AA compliance
1038 * Screen reader compatibility
1039 * Keyboard navigation
1040 * Alt text for images
1041
1042 **Browser Support:**
1043
1044 * Modern browsers only (Chrome/Edge/Firefox/Safari latest 2 versions)
1045 * No IE11 support
1046
1047 **Acceptance Criteria:**
1048
1049 * ✅ Passes OWASP ZAP security scan (no high/critical findings)
1050 * ✅ All dependencies with known vulnerabilities patched
1051 * ✅ Penetration test completed with no critical findings
1052 * ✅ Rate limiting blocks abuse attempts
1053 * ✅ Encryption at rest and in transit verified
1054 * ✅ Security headers scored A+ on securityheaders.com
1055 * ✅ Incident response plan documented and tested
1056 * ✅ 95% uptime over 30-day period
1057
1058 === NFR13: Quality Metrics Transparency ===
1059
1060 **Fulfills:** User trust, transparency, continuous improvement, IFCN methodology transparency
1061
1062 **Phase:** POC2 (internal), Beta 0 (public), V1.0 (real-time)
1063
1064 **Purpose:** Provide transparent, measurable quality metrics that demonstrate AKEL's performance and build user trust in automated fact-checking.
1065
1066 **Specification:**
1067
1068 ==== Component: Public Quality Dashboard ====
1069
1070 **Core Metrics to Display:**
1071
1074 **1. Verdict Quality Metrics**
1075
1076 **TIGERScore (Fact-Checking Quality):**
1077
1078 * **Definition:** Measures how well generated verdicts match expert fact-checker judgments
1079 * **Scale:** 0-100 (higher is better)
1080 * **Calculation:** Using the TIGERScore evaluation framework (see Metric Calculation Details below)
1081 * **Target:** Average ≥80 for production release
1082 * **Display:**
1083 {{code}}Verdict Quality (TIGERScore):
1084 Overall: 84.2 ▲ (+2.1 from last month)
1085
1086 Distribution:
1087 Excellent (>80): 67%
1088 Good (60-80): 28%
1089 Needs Improvement (<60): 5%
1090
1091 Trend: [Graph showing improvement over time]{{/code}}
1092
1093 **2. Hallucination & Faithfulness Metrics**
1094
1095 **AlignScore (Faithfulness to Evidence):**
1096
1097 * **Definition:** Measures how well verdicts align with actual evidence content
1098 * **Scale:** 0-1 (higher is better)
1099 * **Purpose:** Detect AI hallucinations (making claims not supported by evidence)
1100 * **Target:** Average ≥0.85, hallucination rate <5%
1101 * **Display:**
1102 {{code}}Evidence Faithfulness (AlignScore):
1103 Average: 0.87 ▼ (-0.02 from last month)
1104
1105 Hallucination Rate: 4.2%
1106 - Claims without evidence support: 3.1%
1107 - Misrepresented evidence: 1.1%
1108
1109 Action: Prompt engineering review scheduled{{/code}}
1110
1111 **3. Evidence Quality Metrics**
1112
1113 **Source Reliability:**
1114
1115 * Average source quality score (0-1 scale)
1116 * Distribution of high/medium/low quality sources
1117 * Publisher track record trends
1118
1119 **Evidence Coverage:**
1120
1121 * Average number of sources per claim
1122 * Percentage of claims with ≥2 sources (EFCSN minimum)
1123 * Geographic diversity of sources
1124
1125 **Display:**
1126 {{code}}Evidence Quality:
1127
1128 Average Sources per Claim: 4.2
1129 Claims with ≥2 sources: 94% (EFCSN compliant)
1130
1131 Source Quality Distribution:
1132 High quality (>0.8): 48%
1133 Medium quality (0.5-0.8): 43%
1134 Low quality (<0.5): 9%
1135
1136 Geographic Diversity: 23 countries represented{{/code}}
1137
1138 **4. Contributor Consensus Metrics** (when human reviewers involved)
1139
1140 **Inter-Rater Reliability (IRR):**
1141
1142 * **Calculation:** Cohen's Kappa or Fleiss' Kappa for multiple raters
1143 * **Scale:** 0-1 (higher is better)
1144 * **Interpretation:**
1145 * >0.8: Almost perfect agreement
1146 * 0.6-0.8: Substantial agreement
1147 * 0.4-0.6: Moderate agreement
1148 * <0.4: Poor agreement
1149 * **Target:** Maintain ≥0.7 (substantial agreement)
1150
1151 **Display:**
1152 {{code}}Contributor Consensus:
1153
1154 Inter-Rater Reliability (IRR): 0.73 (Substantial agreement)
1155 - Verdict agreement: 78%
1156 - Evidence quality agreement: 71%
1157 - Scenario structure agreement: 69%
1158
1159 Cases requiring moderator review: 12
1160 Moderator override rate: 8%{{/code}}
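For two raters, the IRR figure above can be computed with Cohen's kappa directly from paired labels. A minimal sketch:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)  # undefined when p_e == 1 (degenerate case)

# Two raters' verdicts on the same four claims:
kappa = cohens_kappa(["true", "true", "false", "false"],
                     ["true", "true", "false", "true"])
```

Here observed agreement is 0.75 and chance agreement 0.5, giving kappa = 0.5 (moderate, by the interpretation bands above). Fleiss' kappa generalizes this to more than two raters.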
1161
1162 ----
1163
1164 ==== Quality Dashboard Implementation ====
1165
1166 **Dashboard Location:** `/quality-metrics`
1167
1168 **Update Frequency:**
1169
1170 * **POC2:** Weekly manual updates
1171 * **Beta 0:** Daily automated updates
1172 * **V1.0:** Real-time metrics (updated hourly)
1173
1174 **Dashboard Sections:**
1175
1176 1. **Overview:** Key metrics at a glance
1177 2. **Verdict Quality:** TIGERScore trends and distributions
1178 3. **Evidence Analysis:** Source quality and coverage
1179 4. **AI Performance:** Hallucination rates, AlignScore
1180 5. **Human Oversight:** Contributor consensus, review rates
1181 6. **System Health:** Processing times, error rates, uptime
1182
1183 **Example Dashboard Layout:**
1184
1185 {{code}}
1186 ┌─────────────────────────────────────────────────────────────┐
1187 │ FactHarbor Quality Metrics Last updated: │
1188 │ Public Dashboard 2 hours ago │
1189 └─────────────────────────────────────────────────────────────┘
1190
1191 📊 KEY METRICS
1192 ─────────────────────────────────────────────────────────────
1193 TIGERScore (Verdict Quality): 84.2 ▲ (+2.1)
1194 AlignScore (Faithfulness): 0.87 ▼ (-0.02)
1195 Hallucination Rate: 4.2% ✓ (Target: <5%)
1196 Average Sources per Claim: 4.2 ▲ (+0.3)
1197
1198 📈 TRENDS (30 days)
1199 ─────────────────────────────────────────────────────────────
1200 [Graph: TIGERScore trending upward]
1201 [Graph: Hallucination rate declining]
1202 [Graph: Evidence quality stable]
1203
1204 ⚠️ IMPROVEMENT TARGETS
1205 ─────────────────────────────────────────────────────────────
1206 1. Reduce hallucination rate to <3% (Current: 4.2%)
1207 2. Increase TIGERScore average to >85 (Current: 84.2)
1208 3. Maintain IRR >0.75 (Current: 0.73)
1209
1210 📄 DETAILED REPORTS
1211 ─────────────────────────────────────────────────────────────
1212 • Monthly Quality Report (PDF)
1213 • Methodology Documentation
1214 • AKEL Performance Analysis
1215 • Contributor Agreement Analysis
1216
1217 {{/code}}
1218
1219 ----
1220
1221 ==== Continuous Improvement Feedback Loop ====
1222
1223 **How Metrics Inform AKEL Improvements:**
1224
1225 1. **Identify Weak Areas:**
1226
1227 * Low TIGERScore → Review prompt engineering
1228 * High hallucination → Strengthen evidence grounding
1229 * Low IRR → Clarify evaluation criteria
1230
1231 2. **A/B Testing Integration:**
1232
1233 * Test prompt variations
1234 * Measure impact on quality metrics
1235 * Deploy winners automatically
1236
1237 3. **Alert Thresholds:**
1238
1239 * TIGERScore drops below 75 → Alert team
1240 * Hallucination rate exceeds 7% → Pause auto-publishing
1241 * IRR below 0.6 → Moderator training needed
1242
1243 4. **Monthly Quality Reviews:**
1244
1245 * Analyze trends
1246 * Identify systematic issues
1247 * Plan prompt improvements
1248 * Update AKEL models
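The alert thresholds in step 3 reduce to a simple rule check; a sketch, using the exact thresholds stated above (action names are hypothetical):

```python
def quality_alerts(tigerscore: float, hallucination_rate: float, irr: float) -> list:
    """Map current metrics to the alert actions listed in step 3."""
    actions = []
    if tigerscore < 75:
        actions.append("alert-team")
    if hallucination_rate > 0.07:
        actions.append("pause-auto-publishing")
    if irr < 0.6:
        actions.append("moderator-training-needed")
    return actions

# Current dashboard values from this page are all within bounds:
ok = quality_alerts(84.2, 0.042, 0.73)
# A degraded system trips all three alerts:
degraded = quality_alerts(70.0, 0.08, 0.55)
```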
1249
1250 ----
1251
1252 ==== Metric Calculation Details ====
1253
1254 **TIGERScore Implementation:**
1255
1256 * Reference: https://github.com/TIGER-AI-Lab/TIGERScore
1257 * Input: Generated verdict + reference verdict (from expert)
1258 * Output: 0-100 score across 5 dimensions
1259 * Requires: Test set of expert-reviewed claims (minimum 100)
1260
1261 **AlignScore Implementation:**
1262
1263 * Reference: https://github.com/yuh-zha/AlignScore
1264 * Input: Generated verdict + source evidence text
1265 * Output: 0-1 faithfulness score
1266 * Calculation: Semantic alignment between claim and evidence
1267
1268 **Source Quality Scoring:**
1269
1270 * Use existing source reliability database (e.g., NewsGuard, MBFC)
1271 * Factor in: Publication history, corrections record, transparency
1272 * Scale: 0-1 (weighted average across sources)
1273
1274 ----
1275
1276 ==== Integration Points ====
1277
1278 * **NFR11: AKEL Quality Assurance** - Metrics validate quality gate effectiveness
1279 * **FR49: A/B Testing** - Metrics measure test success
1280 * **FR11: Audit Trail** - Source of quality data
1281 * **NFR3: Transparency** - Public metrics build trust
1282
1283 **Acceptance Criteria:**
1284
1285 * ✅ All core metrics implemented and calculating correctly
1286 * ✅ Dashboard updates daily (Beta 0) or hourly (V1.0)
1287 * ✅ Alerts trigger when metrics degrade beyond thresholds
1288 * ✅ Monthly quality report auto-generates
1289 * ✅ Dashboard is publicly accessible (no login required)
1290 * ✅ Mobile-responsive dashboard design
1291 * ✅ Metrics inform quarterly AKEL improvement planning
1292
1293
1296 == 13. Requirements Traceability ==
1297
1298 For full traceability matrix showing which requirements fulfill which user needs, see:
1299
1300 * [[User Needs>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.User Needs.WebHome]] - Section 8 includes comprehensive mapping tables
1301
1302 == 14. Related Pages ==
1303
1304 **Non-Functional Requirements (see Section 9):**
1305
1306 * [[NFR11 — AKEL Quality Assurance Framework>>#NFR11]]
1307 * [[NFR12 — Security Controls>>#NFR12]]
1308 * [[NFR13 — Quality Metrics Transparency>>#NFR13]]
1309
1310 **Other Requirements:**
1311
1312 * [[User Needs>>Archive.FactHarbor 2026\.01\.20.Specification.Requirements.User Needs.WebHome]]
1313 * [[V1.0 Requirements>>FactHarbor.Specification.Requirements.V10.]]
1314 * [[Gap Analysis>>FactHarbor.Specification.Requirements.GapAnalysis]]
1315
1317 * [[Architecture>>Archive.FactHarbor 2026\.01\.20.Specification.Architecture.WebHome]] - How requirements are implemented
1318 * [[Data Model>>Archive.FactHarbor 2026\.01\.20.Specification.Data Model.WebHome]] - Data structures supporting requirements
1319 * [[Workflows>>Archive.FactHarbor 2026\.01\.20.Specification.Workflows.WebHome]] - User interaction workflows
1320 * [[AKEL>>Archive.FactHarbor 2026\.01\.20.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] - AI system fulfilling automation requirements
1321 * [[Global Rules>>Archive.FactHarbor.Organisation.How-We-Work-Together.GlobalRules.WebHome]]
1322 * [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]]
1323
1324 = V0.9.70 Additional Requirements =
1325
1326 == Functional Requirements (Additional) ==
1327
1328 === FR44: ClaimReview Schema Implementation ===
1329
1330 **Fulfills:** UN-13 (Cite FactHarbor Verdicts), UN-14 (API Access for Integration), UN-26 (Search Engine Visibility)
1331
1332 **Phase:** V1.0
1333
1334 **Purpose:** Generate valid ClaimReview structured data for every published analysis to enable Google/Bing search visibility and fact-check discovery.
1335
1336 **Specification:**
1337
1338 ==== Component: Schema.org Markup Generator ====
1339
1340 FactHarbor must generate valid ClaimReview structured data following Schema.org specifications for every published claim analysis.
1341
1342 **Required JSON-LD Schema:**
1343
1344 {{code language="json"}}
1345 {
1346 "@context": "https://schema.org",
1347 "@type": "ClaimReview",
1348 "datePublished": "YYYY-MM-DD",
1349 "url": "https://factharbor.org/claims/{claim_id}",
1350 "claimReviewed": "The exact claim text",
1351 "author": {
1352 "@type": "Organization",
1353 "name": "FactHarbor",
1354 "url": "https://factharbor.org"
1355 },
1356 "reviewRating": {
1357 "@type": "Rating",
1358 "ratingValue": "1-5",
1359 "bestRating": "5",
1360 "worstRating": "1",
1361 "alternateName": "FactHarbor likelihood score"
1362 },
1363 "itemReviewed": {
1364 "@type": "Claim",
1365 "author": {
1366 "@type": "Person",
1367 "name": "Claim author if known"
1368 },
1369 "datePublished": "YYYY-MM-DD if known",
1370 "appearance": {
1371 "@type": "CreativeWork",
1372 "url": "Original claim URL if from article"
1373 }
1374 }
1375 }
1376 {{/code}}
1377
1378 **FactHarbor-Specific Mapping:**
1379
1380 **Likelihood Score to Rating Scale:**
1381
1382 * 80-100% likelihood → 5 (Highly Supported)
1383 * 60-79% likelihood → 4 (Supported)
1384 * 40-59% likelihood → 3 (Mixed/Uncertain)
1385 * 20-39% likelihood → 2 (Questionable)
1386 * 0-19% likelihood → 1 (Refuted)
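The mapping above can be expressed as a small lookup; this is a sketch of the band logic, not FactHarbor's actual schema generator:

```python
def likelihood_to_rating(likelihood_pct: float):
    """Map a FactHarbor likelihood score (0-100) to the ClaimReview
    1-5 ratingValue and its label, per the bands above."""
    bands = [
        (80, 5, "Highly Supported"),
        (60, 4, "Supported"),
        (40, 3, "Mixed/Uncertain"),
        (20, 2, "Questionable"),
        (0, 1, "Refuted"),
    ]
    for floor, value, label in bands:
        if likelihood_pct >= floor:
            return value, label
    raise ValueError("likelihood must be between 0 and 100")
```

The returned value fills `reviewRating.ratingValue`; the label can serve as `alternateName`.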
1387
1388 **Multiple Scenarios Handling:**
1389
1390 * If claim has multiple scenarios with different verdicts, generate **separate ClaimReview** for each scenario
1391 * Add `disambiguatingDescription` field explaining scenario context
1392 * Example: "Scenario: If interpreted as referring to 2023 data..."
1393
1394 ==== Implementation Requirements ====
1395
1396 1. **Auto-generate** on claim publication
1397 2. **Embed** in HTML `<head>` section as JSON-LD script
1398 3. **Validate** against Schema.org validator before publishing
1399 4. **Submit** to Google Search Console for indexing
1400 5. **Update** automatically when verdict changes (integrate with FR8: Time Evolution)
1401
1402 ==== Integration Points ====
1403
1404 * **FR7: Automated Verdicts** - Source of rating data and claim text
1405 * **FR8: Time Evolution** - Triggers schema updates when verdicts change
1406 * **FR11: Audit Trail** - Logs all schema generation and update events
1407
1408 ==== Resources ====
1409
1410 * ClaimReview Project: https://www.claimreviewproject.com
1411 * Schema.org ClaimReview: https://schema.org/ClaimReview
1412 * Google Fact Check Guidelines: https://developers.google.com/search/docs/appearance/fact-check
1413
1414 **Acceptance Criteria:**
1415
1416 * ✅ Passes Google Structured Data Testing Tool
1417 * ✅ Appears in Google Fact Check Explorer within 48 hours of publication
1418 * ✅ Valid JSON-LD syntax (no errors)
1419 * ✅ All required fields populated with correct data types
1420 * ✅ Handles multi-scenario claims correctly (separate ClaimReview per scenario)
1421
1422 === FR45: User Corrections Notification System ===
1423
1424 **Fulfills:** IFCN Principle 5 (Open & Honest Corrections), EFCSN compliance
1425
1426 **Phase:** Beta 0 (basic), V1.0 (complete) **BLOCKER**
1427
1428 **Purpose:** When any claim analysis is corrected, notify users who previously viewed the claim to maintain transparency and build trust.
1429
1430 **Specification:**
1431
1432 ==== Component: Corrections Visibility Framework ====
1433
1434 **Correction Types:**
1435
1436 1. **Major Correction:** Verdict changes category (e.g., "Supported" → "Refuted")
1437 2. **Significant Correction:** Likelihood score changes >20%
1438 3. **Minor Correction:** Evidence additions, source quality updates
1439 4. **Scenario Addition:** New scenario added to existing claim
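The four correction types can be derived from the before/after verdict data. A sketch; the precedence order (category change outranks score change) is one reasonable choice, not a stated rule:

```python
def classify_correction(old_verdict: str, new_verdict: str,
                        old_score: float, new_score: float,
                        scenario_added: bool = False) -> str:
    """Classify a correction into the four types above.
    Scores are likelihood percentages (0-100)."""
    if old_verdict != new_verdict:
        return "major"                 # verdict changed category
    if abs(new_score - old_score) > 20:
        return "significant"           # likelihood moved by more than 20 points
    if scenario_added:
        return "scenario-addition"
    return "minor"                     # evidence or source-quality updates
```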
1440
1441 ==== Notification Mechanisms ====
1442
1445 **1. In-Page Banner:**
1446
1447 Display prominent banner on claim page:
1448
1449 {{code}}
1450 [!] CORRECTION NOTICE
1451 This analysis was updated on [DATE]. [View what changed] [Dismiss]
1452
1453 Major changes:
1454 • Verdict changed from "Likely True (75%)" to "Uncertain (45%)"
1455 • New contradicting evidence added from [Source]
1456 • Scenario 2 updated with additional context
1457
1458 [See full correction log]
1459 {{/code}}
1460
1461 **2. Correction Log Page:**
1462
1463 * Public changelog at `/claims/{id}/corrections`
1464 * Displays for each correction:
1465 * Date/time of correction
1466 * What changed (before/after comparison)
1467 * Why changed (reason if provided)
1468 * Who made change (AKEL auto-update vs. contributor override)
1469
1470 **3. Email Notifications (opt-in):**
1471
1472 * Send to users who bookmarked or shared the claim
1473 * Subject: "FactHarbor Correction: [Claim title]"
1474 * Include summary of changes
1475 * Link to updated analysis
1476
1477 **4. RSS/API Feed:**
1478
1479 * Corrections feed at `/corrections.rss`
1480 * API endpoint: `GET /api/corrections?since={timestamp}`
1481 * Enables external monitoring by journalists and researchers
1482
1483 ==== Display Rules ====
1484
1485 * Show banner on **ALL pages** displaying the claim (search results, related claims, embeddings)
1486 * Banner persists for **30 days** after correction
1487 * **"Corrections" count badge** on claim card
1488 * **Timestamp** on every verdict: "Last updated: [datetime]"
1489
1490 ==== IFCN Compliance Requirements ====
1491
1492 * Corrections policy published at `/corrections-policy`
1493 * User can report suspected errors via `/report-error/{claim_id}`
1494 * Link to IFCN complaint process (if FactHarbor becomes signatory)
1495 * **Scrupulous transparency:** Never silently edit analyses
1496
1497 ==== Integration Points ====
1498
1499 * **FR8: Time Evolution** - Triggers corrections when verdicts change
1500 * **FR11: Audit Trail** - Source of correction data and change history
1501 * **NFR3: Transparency** - Public correction log demonstrates commitment
1502
1503 **Acceptance Criteria:**
1504
1505 * ✅ Banner appears within 60 seconds of correction
1506 * ✅ Correction log is permanent and publicly accessible
1507 * ✅ Email notifications deliver within 5 minutes
1508 * ✅ RSS feed updates in real-time
1509 * ✅ Mobile-responsive banner design
1510 * ✅ Accessible (screen reader compatible)
1511
1512 === FR46: Image Verification System ===
1513
1514 **Fulfills:** UN-27 (Visual Claim Verification)
1515
1516 **Phase:** Beta 0 (basic), V1.0 (extended)
1517
1518 **Purpose:** Verify authenticity and context of images shared with claims to detect manipulation, misattribution, and out-of-context usage.
1519
1520 **Specification:**
1521
1522 ==== Component: Multi-Method Image Verification ====
1523
1524 **Method 1: Reverse Image Search**
1525
1526 **Purpose:** Find earlier uses of the image to verify context
1527
1528 **Implementation:**
1529
1530 * Integrate APIs:
1531 * **Google Vision AI** (reverse search)
1532 * **TinEye** (oldest known uses)
1533 * **Bing Visual Search** (broad coverage)
1534
1535 **Process:**
1536
1537 1. Extract image from claim or user upload
1538 2. Query multiple reverse search services
1539 3. Analyze results for:
1540
1541 * Earliest known publication
1542 * Original context (what was it really showing?)
1543 * Publication timeline
1544 * Geographic spread
1545
1546 **Output:**
1547 {{code}}Reverse Image Search Results:
1548
1549 Earliest known use: 2019-03-15 (5 years before claim)
1550 Original context: "Photo from 2019 flooding in Mumbai"
1551 This claim uses it for: "2024 hurricane damage in Florida"
1552
1553 ⚠️ Image is OUT OF CONTEXT
1554
1555 Found in 47 other articles:
1556 • 2019-03-15: Mumbai floods (original)
1557 • 2020-07-22: Bangladesh monsoon
1558 • 2024-10-15: Current claim (misattributed)
1559
1560 [View full timeline]{{/code}}
1561
1562 ----
1563
1564 **Method 2: AI Manipulation Detection**
1565
1566 **Purpose:** Detect deepfakes, face swaps, and digital alterations
1567
1568 **Implementation:**
1569
1570 * Integrate detection services:
1571 * **Sensity AI** (deepfake detection)
1572 * **Reality Defender** (multimodal analysis)
1573 * **AWS Rekognition** (face detection inconsistencies)
1574
1575 **Detection Categories:**
1576
1577 1. **Face Manipulation:**
1578
1579 * Deepfake face swaps
1580 * Expression manipulation
1581 * Identity replacement
1582
1583 2. **Image Manipulation:**
1584
1585 * Copy-paste artifacts
1586 * Clone stamp detection
1587 * Content-aware fill detection
1588 * JPEG compression inconsistencies
1589
1590 3. **AI Generation:**
1591
1592 * Detect fully AI-generated images
1593 * Identify generation artifacts
1594 * Check for model signatures
1595
1596 **Confidence Scoring:**
1597
1598 * **HIGH (80-100%):** Strong evidence of manipulation
1599 * **MEDIUM (50-79%):** Suspicious artifacts detected
1600 * **LOW (0-49%):** Minor inconsistencies or inconclusive
1601
1602 **Output:**
1603 {{code}}Manipulation Analysis:
1604
1605 Face Manipulation: LOW RISK (12%)
1606 Image Editing: MEDIUM RISK (64%)
1607 • Clone stamp artifacts detected in sky region
1608 • JPEG compression inconsistent between objects
1609
1610 AI Generation: LOW RISK (8%)
1611
1612 ⚠️ Possible manipulation detected. Manual review recommended.{{/code}}
1613
1614 ----
1615
1616 **Method 3: Metadata Analysis (EXIF)**
1617
1618 **Purpose:** Extract technical details that may reveal manipulation or misattribution
1619
1620 **Extracted Data:**
1621
1622 * **Camera/Device:** Make, model, software
1623 * **Timestamps:** Original date, modification dates
1624 * **Location:** GPS coordinates (if present)
1625 * **Editing History:** Software used, edit count
1626 * **File Properties:** Resolution, compression, format conversions
1627
1628 **Red Flags:**
1629
1630 * Metadata completely stripped (suspicious)
1631 * Timestamp conflicts with claimed date
1632 * GPS location conflicts with claimed location
1633 * Multiple edit rounds (hiding something?)
1634 * Creation date after modification date (impossible)
1635
1636 **Output:**
1637 {{code}}Image Metadata:
1638
1639 Camera: iPhone 14 Pro
1640 Original date: 2023-08-12 14:32:15
1641 Location: 40.7128°N, 74.0060°W (New York City)
1642 Modified: 2024-10-15 08:45:22
1643 Software: Adobe Photoshop 2024
1644
1645 ⚠️ Location conflicts with claim
1646 Claim says: "Taken in Los Angeles"
1647 EXIF says: New York City
1648
1649 ⚠️ Edited 14 months after capture{{/code}}
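The red-flag checks above reduce to simple comparisons once EXIF has been parsed. A sketch over a hypothetical normalized metadata dict (the field names are assumptions, not a real EXIF schema):

```python
from datetime import datetime

def exif_red_flags(meta: dict) -> list:
    """Apply the red-flag checks above to parsed EXIF metadata."""
    if not meta:
        return ["metadata-stripped"]            # fully stripped EXIF is itself suspicious
    flags = []
    created, modified = meta.get("created"), meta.get("modified")
    if created and modified and created > modified:
        flags.append("created-after-modified")  # impossible timeline
    claimed = meta.get("claimed_date")
    if created and claimed and abs((claimed - created).days) > 1:
        flags.append("timestamp-conflict")      # EXIF date vs. claimed date
    if meta.get("edit_count", 0) > 1:
        flags.append("multiple-edit-rounds")
    return flags

# Mirrors the example output above: shot in 2023, claimed as a 2024 event.
flags = exif_red_flags({
    "created": datetime(2023, 8, 12, 14, 32, 15),
    "modified": datetime(2024, 10, 15, 8, 45, 22),
    "claimed_date": datetime(2024, 10, 14),
})
```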
1650
1651 ----
1652
1653 ==== Verification Workflow ====
1654
1655 **Automatic Triggers:**
1656
1657 1. User submits claim with image
1658 2. Article being analyzed contains images
1659 3. Social media post includes photos
1660
1661 **Process:**
1662
1663 1. Extract images from content
1664 2. Run all 3 verification methods in parallel
1665 3. Aggregate results into confidence score
1666 4. Generate human-readable summary
1667 5. Display prominently in analysis
1668
1669 **Display Integration:**
1670
1671 Show image verification panel in claim analysis:
1672
1673 {{code}}
1674 📷 IMAGE VERIFICATION
1675
1676 [Image thumbnail]
1677
1678 ✅ Reverse Search: Original context verified
1679 ⚠️ Manipulation: Possible editing detected (64% confidence)
1680 ✅ Metadata: Consistent with claim details
1681
1682 Overall Assessment: CAUTION ADVISED
1683 This image may have been edited. Original context appears accurate.
1684
1685 [View detailed analysis]
1686 {{/code}}
1687
1688 ==== Integration Points ====
1689
1690 * **FR7: Automated Verdicts** - Image verification affects claim credibility
1691 * **FR4: Analysis Summary** - Image findings included in summary
1692 * **UN-27: Visual Claim Verification** - Direct fulfillment
1693
1694 ==== Cost Considerations ====
1695
1696 **API Costs (estimated per image):**
1697
1698 * Google Vision AI: $0.001-0.003
1699 * TinEye: $0.02 (commercial API)
1700 * Sensity AI: $0.05-0.10
1701 * AWS Rekognition: $0.001-0.002
1702
1703 **Total per image:** $0.07-0.15
1704
1705 **Mitigation Strategies:**
1706
1707 * Cache results for duplicate images
1708 * Use free tier quotas where available
1709 * Prioritize higher-value claims for deep analysis
1710 * Offer premium verification as paid tier
1711
1712 **Acceptance Criteria:**
1713
1714 * ✅ Reverse image search finds original sources
1715 * ✅ Manipulation detection accuracy >80% on test dataset
1716 * ✅ EXIF extraction works for major image formats (JPEG, PNG, HEIC)
1717 * ✅ Results display within 10 seconds
1718 * ✅ Mobile-friendly image comparison interface
1719 * ✅ False positive rate <15%
1720
1721 === FR47: Archive.org Integration ===
1722
1723 **Priority:** CRITICAL
1724 **Fulfills:** Evidence persistence, FR5 (Evidence linking)
1725 **Phase:** V1.0
1726
1727 **Purpose:** Ensure evidence remains accessible even if original sources are deleted.
1728
1729 **Specification:**
1730
1731 **Automatic Archiving:**
1732
1733 When AKEL links evidence:
1734
1735 1. Check if URL already archived (Wayback Machine API)
1736 2. If not, submit for archiving (Save Page Now API)
1737 3. Store both original URL and archive URL
1738 4. Display both to users
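Step 1 maps onto the Availability API's documented request/response shape. A sketch of the request builder and response parser; the sample response below follows the API's published format but is illustrative data:

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_request(url: str) -> str:
    """Build the Wayback Machine Availability API request for step 1."""
    return WAYBACK_API + "?" + urlencode({"url": url})

def parse_availability(body: str):
    """Return (archive_url, timestamp) from an Availability API response,
    or None when no snapshot exists and a Save Page Now call is needed."""
    closest = json.loads(body).get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"], closest["timestamp"]
    return None

# Illustrative response in the Availability API's documented shape:
sample = json.dumps({
    "url": "https://example.com/article",
    "archived_snapshots": {
        "closest": {
            "available": True,
            "status": "200",
            "url": "http://web.archive.org/web/20240101000000/https://example.com/article",
            "timestamp": "20240101000000",
        }
    },
})
snapshot = parse_availability(sample)
```

A `None` result triggers step 2 (Save Page Now submission).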
1739
1740 **Archive Display:**
1741
1742 {{code}}
1743 Evidence Source: [Original URL]
1744 Archived: [Archive.org URL] (Captured: [date])
1745
1746 [View Original] [View Archive]
1747 {{/code}}
1748
1749 **Fallback Logic:**
1750
1751 * If original URL unavailable → Auto-redirect to archive
1752 * If archive unavailable → Display warning
1753 * If both unavailable → Flag for manual review
1754
1755 **API Integration:**
1756
1757 * Use Wayback Machine Availability API
1758 * Use Save Page Now API (SPNv2)
1759 * Rate limiting: 15 requests/minute (Wayback limit)
1760
1761 **Acceptance Criteria:**
1762
1763 * ✅ All evidence URLs auto-archived
1764 * ✅ Archive links displayed to users
1765 * ✅ Fallback to archive if original unavailable
1766 * ✅ API rate limits respected
1767 * ✅ Archive status visible in evidence display
1768
1769 == Category 4: Community Safety ==
1770
1771 === FR48: Contributor Safety Framework ===
1772
1773 **Priority:** CRITICAL
1774 **Fulfills:** UN-28 (Safe contribution environment)
1775 **Phase:** V1.0
1776
1777 **Purpose:** Protect contributors from harassment, doxxing, and coordinated attacks.
1778
1779 **Specification:**
1780
1783 **1. Privacy Protection:**
1784
1785 * **Optional Pseudonymity:** Contributors can use pseudonyms
1786 * **Email Privacy:** Emails never displayed publicly
1787 * **Profile Privacy:** Contributors control what's public
1788 * **IP Logging:** Only for abuse prevention, not public
1789
1790 **2. Harassment Prevention:**
1791
1792 * **Automated Toxicity Detection:** Flag abusive comments
1793 * **Personal Information Detection:** Auto-block doxxing attempts
1794 * **Coordinated Attack Detection:** Identify brigading patterns
1795 * **Rapid Response:** Moderator alerts for harassment
1796
1797 **3. Safety Features:**
1798
1799 * **Block Users:** Contributors can block harassers
1800 * **Private Contributions:** Option to contribute anonymously
1801 * **Report Harassment:** One-click harassment reporting
1802 * **Safety Resources:** Links to support resources
1803
1804 **4. Moderator Tools:**
1805
1806 * **Quick Ban:** Immediately block abusers
1807 * **Pattern Detection:** Identify coordinated attacks
1808 * **Appeal Process:** Fair review of moderation actions
1809 * **Escalation:** Serious threats escalated to authorities
1810
1811 **5. Trusted Contributor Protection:**
1812
1813 * **Enhanced Privacy:** Additional protection for high-profile contributors
1814 * **Verification:** Optional identity verification (not public)
1815 * **Legal Support:** Resources for contributors facing legal threats
1816
1817 **Acceptance Criteria:**
1818
1819 * ✅ Pseudonyms supported
1820 * ✅ Toxicity detection active
1821 * ✅ Doxxing auto-blocked
1822 * ✅ Harassment reporting functional
1823 * ✅ Moderator tools implemented
1824 * ✅ Safety policy published
1825
1826 == Category 5: Continuous Improvement ==
1827
1828 === FR49: A/B Testing Framework ===
1829
1830 **Priority:** CRITICAL
1831 **Fulfills:** Continuous system improvement
1832 **Phase:** V1.0
1833
1834 **Purpose:** Test and measure improvements to AKEL prompts, algorithms, and workflows.
1835
1836 **Specification:**
1837
1838 **Test Capabilities:**
1839
1840 1. **Prompt Variations:**
1841
1842 * Test different claim extraction prompts
1843 * Test different verdict generation prompts
1844 * Measure: Accuracy, clarity, completeness
1845
1846 2. **Algorithm Variations:**
1847
1848 * Test different source scoring algorithms
1849 * Test different confidence calculations
1850 * Measure: Audit accuracy, user satisfaction
1851
1852 3. **Workflow Variations:**
1853
1854 * Test different quality gate thresholds
1855 * Test different risk tier assignments
1856 * Measure: Publication rate, quality scores
1857
1858 **Implementation:**
1859
1860 * **Traffic Split:** 50/50 or 90/10 splits
1861 * **Randomization:** Consistent per claim (not per user)
1862 * **Metrics Collection:** Automatic for all variants
1863 * **Statistical Significance:** Minimum sample size calculation
1864 * **Rollout:** Winner promoted to 100% traffic
1865
1866 **A/B Test Workflow:**
1867
1868 {{code}}
1869 1. Hypothesis: "New prompt improves claim extraction"
1870 2. Design test: Control vs. Variant
1871 3. Define metrics: Extraction accuracy, completeness
1872 4. Run test: 7-14 days, minimum 100 claims each
1873 5. Analyze results: Statistical significance?
1874 6. Decision: Deploy winner or iterate
1875 {{/code}}
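The significance check in step 5 is, for pass/fail metrics like extraction accuracy, a two-proportion z-test. A minimal sketch with illustrative numbers:

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z-statistic comparing control vs. variant success rates
    (e.g. the extraction-accuracy pass rate in step 3)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 70/100 accurate extractions; variant: 85/100.
z = two_proportion_z(70, 100, 85, 100)
significant = abs(z) > 1.96  # two-sided test at ~95% confidence
```

With 100 claims per arm (the minimum in step 4), a 70% → 85% improvement clears the 1.96 threshold; smaller effects need larger samples.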
1876
**Acceptance Criteria:**

* ✅ A/B testing framework implemented
* ✅ Can test prompt variations
* ✅ Can test algorithm variations
* ✅ Metrics automatically collected
* ✅ Statistical significance calculated
* ✅ Results inform system improvements

=== FR54: Evidence Deduplication ===

**Priority:** CRITICAL (POC2/Beta)
**Fulfills:** Accurate evidence counting, quality metrics
**Phase:** POC2, Beta 0, V1.0

**Purpose:** Avoid counting the same source multiple times when it appears in different forms.

**Specification:**

**Deduplication Logic:**

1. **URL Normalization:**

* Remove tracking parameters (?utm_source=...)
* Normalize http/https
* Normalize www/non-www
* Handle redirects

2. **Content Similarity:**

* If two sources have >90% text similarity → Same source
* If one is subset of other → Same source
* Use fuzzy matching for minor differences

3. **Cross-Domain Syndication:**

* Detect wire service content (AP, Reuters)
* Mark as single source if syndicated
* Count original publication only

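Steps 1 and 2 can be sketched with the Python standard library alone. The `utm_` filter and the 0.9 threshold mirror the rules above; the function names are illustrative, and redirect resolution (which needs a network fetch) is out of scope here:

{{code language="python"}}
from difflib import SequenceMatcher
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url: str) -> str:
    """Canonical form for comparison: strip tracking parameters,
    collapse http/https and www/non-www."""
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if not k.startswith("utm_")])
    return urlunsplit(("https", host, parts.path.rstrip("/"), query, ""))

def same_source(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    """Fuzzy text similarity; subset containment also counts as duplicate."""
    if text_a in text_b or text_b in text_a:
        return True
    return SequenceMatcher(None, text_a, text_b).ratio() > threshold
{{/code}}

Two evidence items whose normalized URLs match, or whose texts pass `same_source`, would be counted once toward the unique total.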
**Display:**

{{code}}
Evidence Sources (3 unique, 5 total):

1. Original Article (NYTimes)
   - Also appeared in: WashPost, Guardian (syndicated)

2. Research Paper (Nature)

3. Official Statement (WHO)
{{/code}}

**Acceptance Criteria:**

* ✅ URL normalization works
* ✅ Content similarity detected
* ✅ Syndicated content identified
* ✅ Unique vs. total counts accurate
* ✅ Improves evidence quality metrics

== Additional Requirements (Lower Priority) ==

=== FR50: OSINT Toolkit Integration ===

**Fulfills:** Advanced media verification
**Phase:** V1.1

**Purpose:** Integrate open-source intelligence tools for advanced verification.

**Tools to Integrate:**

* InVID/WeVerify (video verification)
* Bellingcat toolkit
* Additional TBD based on V1.0 learnings

=== FR51: Video Verification System ===

**Fulfills:** UN-27 (Visual claims), advanced media verification
**Phase:** V1.1

**Purpose:** Verify video-based claims.

**Specification:**

* Keyframe extraction
* Reverse video search
* Deepfake detection (AI-powered)
* Metadata analysis
* Acoustic signature analysis

=== FR52: Interactive Detection Training ===

**Fulfills:** Media literacy education
**Phase:** V1.5

**Purpose:** Teach users to identify misinformation.

**Specification:**

* Interactive tutorials
* Practice exercises
* Detection quizzes
* Gamification elements

=== FR53: Cross-Organizational Sharing ===

**Fulfills:** Collaboration with other fact-checkers
**Phase:** V1.5

**Purpose:** Share findings with IFCN/EFCSN members.

**Specification:**

* API for fact-checking organizations
* Structured data exchange
* Privacy controls
* Attribution requirements

== Summary ==

**V1.0 Critical Requirements (Must Have):**

* FR44: ClaimReview Schema ✅
* FR45: Corrections Notification ✅
* FR46: Image Verification ✅
* FR47: Archive.org Integration ✅
* FR48: Contributor Safety ✅
* FR49: A/B Testing ✅
* FR54: Evidence Deduplication ✅
* NFR11: Quality Assurance Framework ✅
* NFR12: Security Controls ✅
* NFR13: Quality Metrics Dashboard ✅

**V1.1+ (Future):**

* FR50: OSINT Integration
* FR51: Video Verification
* FR52: Detection Training
* FR53: Cross-Org Sharing

**Total:** 10 critical requirements for V1.0 (7 functional, 3 non-functional)
== Enhanced Existing Requirements (POC1+) ==

=== FR7: Automated Verdicts (Enhanced with Quality Gates) ===

**POC1+ Enhancement:**

After AKEL generates a verdict, it passes through quality gates:

{{code}}
Workflow:
1. Extract claims
2. [GATE 1] Validate fact-checkable
3. Generate scenarios
4. Generate verdicts
5. [GATE 4] Validate confidence
6. Display to user
{{/code}}


**Updated Verdict States:**

* PUBLISHED
* INSUFFICIENT_EVIDENCE
* NON_FACTUAL_CLAIM
* PROCESSING
* ERROR

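A minimal sketch of this gated pipeline, mapping the workflow above onto the verdict states. The gate checks and the 0.6 threshold are illustrative assumptions; AKEL's real gate criteria are defined elsewhere:

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    confidence: float  # 0.0-1.0, produced by verdict generation
    state: str = "PROCESSING"

MIN_CONFIDENCE = 0.6  # hypothetical Gate 4 threshold

def is_fact_checkable(claim: str) -> bool:
    """Gate 1 stand-in: questions and opinion markers are not fact-checkable."""
    return not claim.rstrip().endswith("?") and "I think" not in claim

def run_quality_gates(verdict: Verdict) -> Verdict:
    if not is_fact_checkable(verdict.claim):   # [GATE 1]
        verdict.state = "NON_FACTUAL_CLAIM"
    elif verdict.confidence < MIN_CONFIDENCE:  # [GATE 4]
        verdict.state = "INSUFFICIENT_EVIDENCE"
    else:
        verdict.state = "PUBLISHED"
    return verdict
{{/code}}

The point of the design is that every verdict reaches the user interface with exactly one of the five states, so "failed a gate" is a displayable outcome rather than an error.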
=== FR4: Analysis Summary (Enhanced with Quality Metadata) ===

**POC1+ Enhancement:**

Display quality indicators:

{{code}}
Analysis Summary:
Verifiable Claims: 3/5
High Confidence Verdicts: 1
Medium Confidence: 2
Evidence Sources: 12
Avg Source Quality: 0.73
Quality Score: 8.5/10
{{/code}}
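
A sketch of how such a summary could be aggregated from per-claim results. The field names and the 0.75/0.5 confidence bands are illustrative assumptions, not FactHarbor's actual schema:

{{code language="python"}}
def summarize(claims: list[dict]) -> dict:
    """Aggregate per-claim analysis into the display fields above.

    Each claim dict is assumed to carry 'verifiable' (bool),
    'confidence' (0-1), and 'sources' (list of source-quality floats).
    """
    verifiable = [c for c in claims if c["verifiable"]]
    qualities = [q for c in verifiable for q in c["sources"]]
    return {
        "verifiable_claims": f"{len(verifiable)}/{len(claims)}",
        "high_confidence": sum(c["confidence"] >= 0.75 for c in verifiable),
        "medium_confidence": sum(0.5 <= c["confidence"] < 0.75 for c in verifiable),
        "evidence_sources": len(qualities),
        "avg_source_quality": round(sum(qualities) / len(qualities), 2) if qualities else 0.0,
    }
{{/code}}

With FR54 applied first, `evidence_sources` would count unique sources rather than raw mentions.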