Changes for page Requirements

Last modified by Robert Schaub on 2025/12/24 21:46

To version 4.1, edited by Robert Schaub on 2025/12/19 10:02

Change comment: There is no comment for this version

Summary: Page properties (1 modified, 0 added, 0 removed)
= Requirements =

**This page defines Roles, Content States, Rules, and System Requirements for FactHarbor.**

**Core Philosophy:** Invest in system improvement, not manual data correction. When AI makes errors, improve the algorithm and re-process automatically.

== Navigation ==

* **[[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]]** - What users need from FactHarbor (drives these requirements)
* **This page** - How we fulfill those needs through system design

(% class="box infomessage" %)
(((
**How to read this page:**

1. **User Needs drive Requirements**: See [[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]] for what users need
2. **Requirements define implementation**: This page shows how we fulfill those needs
3. **Functional Requirements (FR)**: Specific features and capabilities
4. **Non-Functional Requirements (NFR)**: Quality attributes (performance, security, etc.)

Each requirement references which User Needs it fulfills.
)))

== 1. Roles ==

**Fulfills**: UN-12 (Submit claims), UN-13 (Cite verdicts), UN-14 (API access)

FactHarbor uses three simple roles plus a reputation system.

=== 1.1 Reader ===

**Who**: Anyone (no login required)

**Can**:
* Browse and search claims
* View scenarios, evidence, verdicts, and confidence scores
* Flag issues or errors
* Use filters, search, and visualization tools
* Submit claims automatically (new claims added if not duplicates)

**Cannot**:
* Modify content
* Access edit history details

**User Needs served**: UN-1 (Trust assessment), UN-2 (Claim verification), UN-3 (Article summary with FactHarbor analysis summary), UN-4 (Social media fact-checking), UN-5 (Source tracing), UN-7 (Evidence transparency), UN-8 (Understanding disagreement), UN-12 (Submit claims)

=== 1.2 Contributor ===

**Who**: Registered users (earn reputation through contributions)

**Can**:
* Everything a Reader can do
* Edit claims, evidence, and scenarios
* ...
* Suggest improvements to AI-generated content
* Participate in discussions
* Earn reputation points for quality contributions

**Reputation System**:
* New contributors: Limited edit privileges
* Established contributors: Full edit access
* Trusted contributors (substantial reputation): Can approve certain changes
* Reputation earned through: Accepted edits, helpful flags, quality contributions
* Reputation lost through: Reverted edits, invalid flags, abuse

**Cannot**:
* Delete or hide content (only moderators can)
* Override moderation decisions

**User Needs served**: UN-13 (Cite and contribute)

=== 1.3 Moderator ===

**Who**: Trusted community members with a proven track record, appointed by the governance board

**Can**:
* Review flagged content
* Hide harmful or abusive content
* ...
* Issue warnings or temporary bans
* Make final decisions on content disputes
* Access full audit logs

**Cannot**:
* Change governance rules
* Permanently ban users without board approval
* Override technical quality gates

**Note**: Small team (3-5 initially), supported by automated moderation tools.

=== 1.4 Domain Trusted Contributors (Optional, Task-Specific) ===

**Who**: Subject matter specialists invited for specific high-stakes disputes

**Not a permanent role**: Contacted externally when needed for contested claims in their domain

**When used**:
* Medical claims with life/safety implications
* Legal interpretations with significant impact
* Scientific claims with high controversy
* Technical claims requiring specialized knowledge

**Process**:
* Moderator identifies need for expert input
* Contact expert externally (don't require them to be users)
* Trusted Contributor provides written opinion with sources
* Opinion added to claim record
* Trusted Contributor acknowledged in claim

**User Needs served**: UN-16 (Expert validation status)

== 2. Content States ==

**Fulfills**: UN-1 (Trust indicators), UN-16 (Review status transparency)

FactHarbor uses two content states. The focus is on transparency and confidence scoring, not gatekeeping.

=== 2.1 Published ===

**Status**: Visible to all users

**Includes**:
* AI-generated analyses (default state)
* User-contributed content
* Edited/improved content

**Quality Indicators** (displayed with content):
* **Confidence Score**: 0-100% (AI's confidence in analysis)
* **Source Quality Score**: 0-100% (based on source track record)
* ...
* **Completeness Score**: % of expected fields filled
* **Last Updated**: Date of most recent change
* **Edit Count**: Number of revisions
* **Review Status**: AI-generated / Human-reviewed / Expert-validated

**Automatic Warnings**:
* Confidence < 60%: "Low confidence - use caution"
* Source quality < 40%: "Sources may be unreliable"
* High controversy: "Disputed - multiple interpretations exist"
* Medical/Legal/Safety domain: "Seek professional advice"

**User Needs served**: UN-1 (Trust score), UN-9 (Methodology transparency), UN-15 (Evolution timeline), UN-16 (Review status)

=== 2.2 Hidden ===

**Status**: Not visible to regular users (only to moderators)

**Reasons**:
* Spam or advertising
* Personal attacks or harassment
* ...
* Privacy violations
* Deliberate misinformation (verified)
* Abuse or harmful content

**Process**:
* Automated detection flags content for moderator review
* Moderator confirms and hides
* Original author notified with reason
* Author can appeal to the board if they dispute the moderator's decision

**Note**: Content is hidden, not deleted (preserving the audit trail)

== 3. Contribution Rules ==

=== 3.1 All Contributors Must ===

* Provide sources for factual claims
* Use clear, neutral language in FactHarbor's own summaries
* Respect others and maintain civil discourse
* Accept community feedback constructively
* Focus on improving quality, not protecting ego

=== 3.2 AKEL (AI System) ===

**AKEL is the primary system**. Human contributions supplement and train AKEL.

**AKEL Must**:
* Mark all outputs as AI-generated
* Display confidence scores prominently
* ...
* Flag uncertainty clearly
* Identify contradictions in evidence
* Learn from human corrections

**When AKEL Makes Errors**:
1. Capture the error pattern (what, why, how common)
2. Improve the system (better prompt, model, validation)
3. Re-process affected claims automatically
4. Measure improvement (did quality increase?)

**Human Role**: Train AKEL through corrections, not replace AKEL

=== 3.3 Contributors Should ===

* Improve clarity and structure
* Add missing sources
* Flag errors for system improvement
* Suggest better ways to present information
* Participate in quality discussions

=== 3.4 Moderators Must ===

* Be impartial
* Document moderation decisions
* Respond to appeals promptly
* Use automated tools to scale efforts
* Focus on abuse/harm, not routine quality control

== 4. Quality Standards ==

**Fulfills**: UN-5 (Source reliability), UN-6 (Publisher track records), UN-7 (Evidence transparency), UN-9 (Methodology transparency)

=== 4.1 Source Requirements ===

**Track Record Over Credentials**:
* Sources evaluated by historical accuracy
* Correction policy matters
* Independence from conflicts of interest
* Methodology transparency

**Source Quality Database**:
* Automated tracking of source accuracy
* Correction frequency
* Reliability score (updated continuously)
* Users can see each source's track record

**No automatic trust** for government, academia, or media - all evaluated by track record.
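The Source Quality Database above combines historical accuracy and correction behavior into a continuously updated reliability score. The page does not prescribe a formula; the following is a minimal sketch under assumed weightings (the record fields, the neutral default of 50, and the capped correction bonus are all illustrative assumptions, not specified requirements):

```python
from dataclasses import dataclass


@dataclass
class SourceRecord:
    """Hypothetical track-record entry for one source."""
    checks: int        # claims from this source that were fact-checked
    accurate: int      # how many of those checks held up
    corrections: int   # corrections the source itself published


def reliability_score(rec: SourceRecord) -> float:
    """Sketch: 0-100 reliability from historical accuracy.

    Publishing corrections is treated as a mild positive signal,
    reflecting the "correction policy matters" criterion above.
    """
    if rec.checks == 0:
        return 50.0  # no track record yet: neutral default (assumption)
    accuracy = rec.accurate / rec.checks
    correction_bonus = min(rec.corrections / rec.checks, 0.1)  # capped
    return round(100 * min(accuracy + correction_bonus, 1.0), 1)


# A source with 45/50 accurate claims and 3 published corrections:
print(reliability_score(SourceRecord(checks=50, accurate=45, corrections=3)))
# prints: 96.0
```

The key design point, whatever the exact formula, is that the score derives purely from the track record — no institution type (government, academic, media) enters the calculation.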
**User Needs served**: UN-5 (Source provenance), UN-6 (Publisher reliability)

=== 4.2 Claim Requirements ===

* Clear subject and assertion
* Verifiable with available information
* Sourced (or explicitly marked as needing sources)
* Neutral language in FactHarbor summaries
* Appropriate context provided

**User Needs served**: UN-2 (Claim extraction and verification)

=== 4.3 Evidence Requirements ===

* Publicly accessible (or explain why not)
* Properly cited with attribution
* Relevant to the claim being evaluated
* Original source preferred over secondary

**User Needs served**: UN-7 (Evidence transparency)

=== 4.4 Confidence Scoring ===

**Automated confidence calculation based on**:
* Source quality scores
* Evidence consistency
* Contradiction detection
* Completeness of analysis
* Historical accuracy of similar claims

**Thresholds**:
* < 40%: Too low to publish (needs improvement)
* 40-60%: Published with "Low confidence" warning
* 60-80%: Published as standard
* 80-100%: Published as "High confidence"

**User Needs served**: UN-1 (Trust assessment), UN-9 (Methodology transparency)

== 5. Automated Risk Scoring ==

**Fulfills**: UN-10 (Manipulation detection), UN-16 (Appropriate review level)

**Replace manual risk tiers with continuous automated scoring**.

=== 5.1 Risk Score Calculation ===

**Factors** (weighted algorithm):
* **Domain sensitivity**: Medical, legal, safety auto-flagged higher
* **Potential impact**: Views, citations, spread
* **Controversy level**: Flags, disputes, edit wars
* **Uncertainty**: Low confidence, contradictory evidence
* **Source reliability**: Track record of sources used

**Score**: 0-100 (higher = more risk)

=== 5.2 Automated Actions ===

* **Score > 80**: Flag for moderator review before publication
* **Score 60-80**: Publish with prominent warnings
* **Score 40-60**: Publish with standard warnings
* **Score < 40**: Publish normally

**Continuous monitoring**: Risk score recalculated as new information emerges

**User Needs served**: UN-10 (Detect manipulation tactics), UN-16 (Review status)

== 6. System Improvement Process ==

**Core principle**: Fix the system, not just the data.

=== 6.1 Error Capture ===

**When users flag errors or make corrections**:
1. What was wrong? (categorize)
2. What should it have been?
3. Why did the system fail? (root cause)
4. How common is this pattern?
5. Store in ErrorPattern table (improvement queue)

=== 6.2 Weekly Improvement Cycle ===

1. **Review**: Analyze top error patterns
2. **Develop**: Create fix (prompt, model, validation)
3. **Test**: Validate fix on sample claims
4. **Deploy**: Roll out if quality improves
5. **Re-process**: Automatically update affected claims
6. **Monitor**: Track quality metrics

=== 6.3 Quality Metrics Dashboard ===

**Track continuously**:
* Error rate by category
* Source quality distribution
* ...
* Correction acceptance rate
* Re-work rate
* Claims processed per hour

**Goal**: 10% monthly improvement in error rate

== 7. Automated Quality Monitoring ==

**Replace manual audit sampling with automated monitoring**.

=== 7.1 Continuous Metrics ===

* **Source quality**: Track record database
* **Consistency**: Contradiction detection
* **Clarity**: Readability scores
* **Completeness**: Field validation
* **Accuracy**: User corrections tracked

=== 7.2 Anomaly Detection ===

**Automated alerts for**:
* Sudden quality drops
* Unusual patterns
* Contradiction clusters
* Source reliability changes
* User behavior anomalies

=== 7.3 Targeted Review ===

* Review only flagged items
* Random sampling for calibration (not quotas)
* Learn from corrections to improve automation

== 8. Functional Requirements ==

This section defines specific features that fulfill user needs.
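Because every requirement on this page cites the user needs it fulfills, the FR-to-UN mapping lends itself to a mechanical coverage check. A sketch (the mapping excerpt is copied from this page; the helper function and the idea of automating the check are illustrative assumptions):

```python
# Excerpt of the "Fulfills" mappings stated on this page.
FULFILLS = {
    "FR1": ["UN-2", "UN-4", "UN-12"],
    "FR2": ["UN-2"],
    "FR3": ["UN-11"],
    "FR4": ["UN-2", "UN-3", "UN-8"],
    "FR5": ["UN-5", "UN-7"],
    "FR7": ["UN-1", "UN-2", "UN-3", "UN-13"],
}


def needs_covered(fulfills: dict[str, list[str]]) -> set[str]:
    """All user needs fulfilled by at least one requirement."""
    return {un for uns in fulfills.values() for un in uns}


print(sorted(needs_covered(FULFILLS)))
```

Run against the full table (see the traceability matrix on the User Needs page), such a check would surface any user need no requirement claims to fulfill.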
=== 8.1 Claim Intake & Normalization ===

==== FR1 — Claim Intake ====

**Fulfills**: UN-2 (Claim extraction), UN-4 (Quick fact-checking), UN-12 (Submit claims)

* Users submit claims via simple form or API
* Claims can be text, URL, or image
* Duplicate detection (semantic similarity)
* Auto-categorization by domain

==== FR2 — Claim Normalization ====

**Fulfills**: UN-2 (Claim verification)

* Standardize to clear assertion format
* Extract key entities (who, what, when, where)
* Identify claim type (factual, predictive, evaluative)
* Link to existing similar claims

==== FR3 — Claim Classification ====

**Fulfills**: UN-11 (Filtered research)

* Domain: Politics, Science, Health, etc.
* Type: Historical fact, current statistic, prediction, etc.
* Risk score: Automated calculation
* Complexity: Simple, moderate, complex
=== 8.2 Scenario System ===

==== FR4 — Scenario Generation ====

**Fulfills**: UN-2 (Context-dependent verification), UN-3 (Article summary with FactHarbor analysis summary), UN-8 (Understanding disagreement)

**Automated scenario creation**:
* AKEL analyzes claim and generates likely scenarios (use-cases and contexts)
* Each scenario includes: assumptions, definitions, boundaries, evidence context
* Users can flag incorrect scenarios
* System learns from corrections

**Key Concept**: Scenarios represent different interpretations or contexts (e.g., "Clinical trials with healthy adults" vs. "Real-world data with diverse populations")

==== FR5 — Evidence Linking ====

**Fulfills**: UN-5 (Source tracing), UN-7 (Evidence transparency)

* Automated evidence discovery from sources
* Relevance scoring
* Contradiction detection
* Source quality assessment

==== FR6 — Scenario Comparison ====

**Fulfills**: UN-3 (Article summary with FactHarbor analysis summary), UN-8 (Understanding disagreement)

* Side-by-side comparison interface
* Highlight key differences between scenarios
* Show evidence supporting each scenario
* Display confidence scores per scenario

=== 8.3 Verdicts & Analysis ===

==== FR7 — Automated Verdicts ====

**Fulfills**: UN-1 (Trust score), UN-2 (Verification verdicts), UN-3 (Article summary with FactHarbor analysis summary), UN-13 (Cite verdicts)

* AKEL generates verdict based on evidence within each scenario
* **Likelihood range** displayed (e.g., "0.70-0.85 (likely true)") - NOT binary true/false
* **Uncertainty factors** explicitly listed (e.g., "Small sample sizes", "Long-term effects unknown")
* Confidence score displayed prominently
* Source quality indicators shown
* Contradictions noted
* Uncertainty acknowledged

**Key Innovation**: Detailed probabilistic verdicts with explicit uncertainty, not binary judgments

==== FR8 — Time Evolution ====

**Fulfills**: UN-15 (Verdict evolution timeline)

* Claims and verdicts update as new evidence emerges
* Version history maintained for all verdicts
* Changes highlighted
* Confidence score trends visible
* Users can see "as of date X, what did we know?"

=== 8.4 User Interface & Presentation ===

==== FR12 — Two-Panel Summary View (Article Summary with FactHarbor Analysis Summary) ====

**Fulfills**: UN-3 (Article Summary with FactHarbor Analysis Summary)

**Purpose**: Provide side-by-side comparison of what a document claims vs. FactHarbor's complete analysis of its credibility

**Left Panel: Article Summary**:
* Document title, source, and claimed credibility
* "The Big Picture" - main thesis or position change
* "Key Findings" - structured summary of document's main claims
* "Reasoning" - document's explanation for positions
* "Conclusion" - document's bottom line

**Right Panel: FactHarbor Analysis Summary**:
* FactHarbor's independent source credibility assessment
* Claim-by-claim verdicts with confidence scores
* Methodology assessment (strengths, limitations)
* Overall verdict on document quality
* Analysis ID for reference

**Design Principles**:
* No scrolling required - both panels visible simultaneously
* Visual distinction between "what they say" and "FactHarbor's analysis"
* Color coding for verdicts (supported, uncertain, refuted)
* Confidence percentages clearly visible
* Mobile responsive (panels stack vertically on small screens)

**Implementation Notes**:
* Generated automatically by AKEL for every analyzed document
* Updates when verdict evolves (maintains version history)
* Exportable as standalone summary report
* Shareable via permanent URL

=== 8.5 Workflow & Moderation ===

==== FR9 — Publication Workflow ====

**Fulfills**: UN-1 (Fast access to verified content), UN-16 (Clear review status)

**Simple flow**:
1. Claim submitted
2. AKEL processes (automated)
3. If confidence > threshold: Publish (labeled as AI-generated)
4. If confidence < threshold: Flag for improvement
5. If risk score > threshold: Flag for moderator

**No multi-stage approval process**

==== FR10 — Moderation ====

**Focus on abuse, not routine quality**:
* Automated abuse detection
* Moderators handle flags
* Quick response to harmful content
* Minimal involvement in routine content

==== FR11 — Audit Trail ====

**Fulfills**: UN-14 (API access to histories), UN-15 (Evolution tracking)

* All edits logged
* Version history public
* Moderation decisions documented
* System improvements tracked

== 9. Non-Functional Requirements ==

=== 9.1 NFR1 — Performance ===

**Fulfills**: UN-4 (Fast fact-checking), UN-11 (Responsive filtering)

* Claim processing: < 30 seconds
* Search response: < 2 seconds
* Page load: < 3 seconds
* 99% uptime

=== 9.2 NFR2 — Scalability ===

**Fulfills**: UN-14 (API access at scale)

* Handle 10,000 claims initially
* Scale to 1M+ claims
* Support 100K+ concurrent users
* Automated processing scales linearly

=== 9.3 NFR3 — Transparency ===

**Fulfills**: UN-7 (Evidence transparency), UN-9 (Methodology transparency), UN-13 (Citable verdicts), UN-15 (Evolution visibility)

* All algorithms open source
* All data exportable
* All decisions documented
* Quality metrics public

=== 9.4 NFR4 — Security & Privacy ===

* Follow [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]]
* Secure authentication
* Data encryption
* Regular security audits

=== 9.5 NFR5 — Maintainability ===

* Modular architecture
* Automated testing
* Continuous integration
* Comprehensive documentation

== 10. MVP Scope ==

**Phase 1 (Months 1-3): Read-Only MVP**

Build:
* Automated claim analysis
* Confidence scoring
* Source evaluation
* Browse/search interface
* User flagging system

**Goal**: Prove AI quality before adding user editing

**User Needs fulfilled in Phase 1**: UN-1, UN-2, UN-3, UN-4, UN-5, UN-6, UN-7, UN-8, UN-9, UN-12

**Phase 2 (Months 4-6): User Contributions**

Add only if needed:
* Simple editing (Wikipedia-style)
* Reputation system
* Basic moderation

**Additional User Needs fulfilled**: UN-13

**Phase 3 (Months 7-12): Refinement**

* Continuous quality improvement
* Feature additions based on real usage
* Scale infrastructure

**Additional User Needs fulfilled**: UN-14 (API access), UN-15 (Full evolution tracking)

**Deferred**:
* Federation (until multiple successful instances exist)
* Complex contribution workflows (focus on automation)
* Extensive role hierarchy (keep it simple)

== 11. Success Metrics ==

**System Quality** (track weekly):
* Error rate by category (target: -10%/month)
* Average confidence score (target: increase)
* Source quality distribution (target: more high-quality)
* Contradiction detection rate (target: increase)

**Efficiency** (track monthly):
* Claims processed per hour (target: increase)
* Human hours per claim (target: decrease)
* Automation coverage (target: >90%)
* Re-work rate (target: <5%)

**User Satisfaction** (track quarterly):
* User flag rate (issues found)
* Correction acceptance rate (flags valid)
* Return user rate
* Trust indicators (surveys)

**User Needs Metrics** (track quarterly):
* UN-1: % of users who understand trust scores
* UN-4: Time to verify a social media claim (target: <30s)
* UN-7: % of users who access evidence details
* UN-8: % of users who view multiple scenarios
* UN-15: % of users who check the evolution timeline

== 12. Requirements Traceability ==

For the full traceability matrix showing which requirements fulfill which user needs, see:

* [[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]] - Section 8 includes comprehensive mapping tables

== 13. Related Pages ==

* **[[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]]** - What users need (drives these requirements)
* [[Architecture>>FactHarbor.Specification.Architecture.WebHome]] - How requirements are implemented
* [[Data Model>>FactHarbor.Specification.Data Model.WebHome]] - Data structures supporting requirements
* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]] - User interaction workflows
* [[AKEL>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] - AI system fulfilling automation requirements
* [[Global Rules>>FactHarbor.Organisation.How-We-Work-Together.GlobalRules.WebHome]]
* [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]]