Changes for page Requirements
Last modified by Robert Schaub on 2025/12/24 21:46
From version 3.1
edited by Robert Schaub
on 2025/12/19 09:57
Change comment: There is no comment for this version
= Requirements =

This page defines **Roles**, **Content States**, **Rules**, and **System Principles** for FactHarbor.

**Core Philosophy:** Invest in system improvement, not manual data correction. When AI makes errors, improve the algorithm and re-process automatically.

== 1. Roles ==

FactHarbor uses three simple roles plus a reputation system.

=== 1.1 Reader ===

**Who**: Anyone (no login required)

**Can**:
* Browse and search claims
* View scenarios, evidence, verdicts, and confidence scores
* Flag issues or errors
* Use filters, search, and visualization tools
* Submit claims automatically (new claims added if not duplicates)

**Cannot**:
* Modify content
* Access edit history details

=== 1.2 Contributor ===

**Who**: Registered users (earn reputation through contributions)

**Can**:
* Everything a Reader can do
* Edit claims, evidence, and scenarios
* ...
* Suggest improvements to AI-generated content
* Participate in discussions
* Earn reputation points for quality contributions

**Reputation System**:
* New contributors: Limited edit privileges
* Established contributors (established reputation): Full edit access
* Trusted contributors (substantial reputation): Can approve certain changes
* Reputation earned through: Accepted edits, helpful flags, quality contributions
* Reputation lost through: Reverted edits, invalid flags, abuse

**Cannot**:
* Delete or hide content (only moderators)
* Override moderation decisions

=== 1.3 Moderator ===

**Who**: Trusted community members with a proven track record, appointed by the governance board

**Can**:
* Review flagged content
* Hide harmful or abusive content
* ...
* Issue warnings or temporary bans
* Make final decisions on content disputes
* Access full audit logs

**Cannot**:
* Change governance rules
* Permanently ban users without board approval
* Override technical quality gates

**Note**: Small team (3-5 initially), supported by automated moderation tools.

=== 1.4 Domain Trusted Contributors (Optional, Task-Specific) ===

**Who**: Subject matter specialists invited for specific high-stakes disputes

**Not a permanent role**: Contacted externally when needed for contested claims in their domain

**When used**:
* Medical claims with life/safety implications
* Legal interpretations with significant impact
* Scientific claims with high controversy
* Technical claims requiring specialized knowledge

**Process**:
* Moderator identifies need for expert input
* Contact expert externally (don't require them to be users)
* Trusted Contributor provides written opinion with sources
* Opinion added to claim record
* Trusted Contributor acknowledged in claim

== 2. Content States ==

FactHarbor uses two content states. Focus is on transparency and confidence scoring, not gatekeeping.

=== 2.1 Published ===

**Status**: Visible to all users

**Includes**:
* AI-generated analyses (default state)
* User-contributed content
* Edited/improved content

**Quality Indicators** (displayed with content):
* **Confidence Score**: 0-100% (AI's confidence in analysis)
* **Source Quality Score**: 0-100% (based on source track record)
* ...
* **Completeness Score**: % of expected fields filled
* **Last Updated**: Date of most recent change
* **Edit Count**: Number of revisions

**Automatic Warnings**:
* Confidence < 60%: "Low confidence - use caution"
* Source quality < 40%: "Sources may be unreliable"
* High controversy: "Disputed - multiple interpretations exist"
* Medical/Legal/Safety domain: "Seek professional advice"

=== 2.2 Hidden ===

**Status**: Not visible to regular users (only to moderators)

**Reasons**:
* Spam or advertising
* Personal attacks or harassment
* ...
* Privacy violations
* Deliberate misinformation (verified)
* Abuse or harmful content

**Process**:
* Automated detection flags content for moderator review
* Moderator confirms and hides
* Original author notified with reason
* Author can appeal to the board if they dispute the moderator's decision

**Note**: Content is hidden, not deleted (for audit trail)

== 3. Contribution Rules ==

=== 3.1 All Contributors Must ===

* Provide sources for factual claims
* Use clear, neutral language in FactHarbor's own summaries
* Respect others and maintain civil discourse
* Accept community feedback constructively
* Focus on improving quality, not protecting ego

=== 3.2 AKEL (AI System) ===

**AKEL is the primary system**. Human contributions supplement and train AKEL.

**AKEL Must**:
* Mark all outputs as AI-generated
* Display confidence scores prominently
* ...
* Flag uncertainty clearly
* Identify contradictions in evidence
* Learn from human corrections

**When AKEL Makes Errors**:
1. Capture the error pattern (what, why, how common)
2. Improve the system (better prompt, model, validation)
3. Re-process affected claims automatically
4. Measure improvement (did quality increase?)

**Human Role**: Train AKEL through corrections, not replace AKEL

=== 3.3 Contributors Should ===

* Improve clarity and structure
* Add missing sources
* Flag errors for system improvement
* Suggest better ways to present information
* Participate in quality discussions

=== 3.4 Moderators Must ===

* Be impartial
* Document moderation decisions
* Respond to appeals promptly
* Use automated tools to scale efforts
* Focus on abuse/harm, not routine quality control

== 4. Quality Standards ==

=== 4.1 Source Requirements ===

**Track Record Over Credentials**:
* Sources evaluated by historical accuracy
* Correction policy matters
* Independence from conflicts of interest
* Methodology transparency

**Source Quality Database**:
* Automated tracking of source accuracy
* Correction frequency
* Reliability score (updated continuously)
* Users can see source track record

**No automatic trust** for government, academia, or media - all evaluated by track record.
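The continuously updated reliability score described above could be maintained per source roughly as follows. This is a minimal sketch: the weighting (accuracy scaled to 95 points, a capped bonus for published corrections, a neutral 50 for unknown sources) and all field names are illustrative assumptions, not FactHarbor's actual formula or schema.

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """Illustrative track record for one source (not FactHarbor's actual schema)."""
    checked_claims: int = 0      # claims from this source that were fact-checked
    accurate_claims: int = 0     # of those, how many held up
    corrections_issued: int = 0  # corrections the source itself published

    def reliability_score(self) -> float:
        """0-100 score driven by historical accuracy, with a small bonus for
        a working correction policy. Unknown sources start at a neutral 50."""
        if self.checked_claims == 0:
            return 50.0
        accuracy = self.accurate_claims / self.checked_claims
        correction_bonus = float(min(self.corrections_issued, 5))  # capped at +5
        return min(100.0, accuracy * 95.0 + correction_bonus)

# A source with 40 of 50 checked claims accurate, and 2 published corrections:
src = SourceRecord(checked_claims=50, accurate_claims=40, corrections_issued=2)
print(src.reliability_score())  # 78.0
```

Capping the correction bonus keeps a source from inflating its score by issuing corrections in bulk, consistent with "correction policy matters" being a signal rather than the main driver.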
=== 4.2 Claim Requirements ===

* Clear subject and assertion
* Verifiable with available information
* Sourced (or explicitly marked as needing sources)
* Neutral language in FactHarbor summaries
* Appropriate context provided

=== 4.3 Evidence Requirements ===

* Publicly accessible (or explain why not)
* Properly cited with attribution
* Relevant to claim being evaluated
* Original source preferred over secondary

=== 4.4 Confidence Scoring ===

**Automated confidence calculation based on**:
* Source quality scores
* Evidence consistency
* Contradiction detection
* Completeness of analysis
* Historical accuracy of similar claims

**Thresholds**:
* < 40%: Too low to publish (needs improvement)
* 40-60%: Published with "Low confidence" warning
* 60-80%: Published as standard
* 80-100%: Published as "High confidence"

== 5. Automated Risk Scoring ==

**Replace manual risk tiers with continuous automated scoring**.

=== 5.1 Risk Score Calculation ===

**Factors** (weighted algorithm):
* **Domain sensitivity**: Medical, legal, safety auto-flagged higher
* **Potential impact**: Views, citations, spread
* **Controversy level**: Flags, disputes, edit wars
* **Uncertainty**: Low confidence, contradictory evidence
* **Source reliability**: Track record of sources used

**Score**: 0-100 (higher = more risk)

=== 5.2 Automated Actions ===

* **Score > 80**: Flag for moderator review before publication
* **Score 60-80**: Publish with prominent warnings
* **Score 40-60**: Publish with standard warnings
* **Score < 40**: Publish normally

**Continuous monitoring**: Risk score recalculated as new information emerges

== 6. System Improvement Process ==

**Core principle**: Fix the system, not just the data.

=== 6.1 Error Capture ===

**When users flag errors or make corrections**:
1. What was wrong? (categorize)
2. What should it have been?
3. Why did the system fail? (root cause)
4. How common is this pattern?
5. Store in ErrorPattern table (improvement queue)

=== 6.2 Weekly Improvement Cycle ===

1. **Review**: Analyze top error patterns
2. **Develop**: Create fix (prompt, model, validation)
3. **Test**: Validate fix on sample claims
4. **Deploy**: Roll out if quality improves
5. **Re-process**: Automatically update affected claims
6. **Monitor**: Track quality metrics

=== 6.3 Quality Metrics Dashboard ===

**Track continuously**:
* Error rate by category
* Source quality distribution
* ...
* Correction acceptance rate
* Re-work rate
* Claims processed per hour

**Goal**: 10% monthly improvement in error rate

== 7. Automated Quality Monitoring ==

**Replace manual audit sampling with automated monitoring**.

=== 7.1 Continuous Metrics ===

* **Source quality**: Track record database
* **Consistency**: Contradiction detection
* **Clarity**: Readability scores
* **Completeness**: Field validation
* **Accuracy**: User corrections tracked

=== 7.2 Anomaly Detection ===

**Automated alerts for**:
* Sudden quality drops
* Unusual patterns
* Contradiction clusters
* Source reliability changes
* User behavior anomalies

=== 7.3 Targeted Review ===

* Review only flagged items
* Random sampling for calibration (not quotas)
* Learn from corrections to improve automation

== 8. Claim Intake & Normalization ==

=== 8.1 FR1 – Claim Intake ===

* Users submit claims via simple form or API
* Claims can be text, URL, or image
* Duplicate detection (semantic similarity)
* Auto-categorization by domain

=== 8.2 FR2 – Claim Normalization ===

* Standardize to clear assertion format
* Extract key entities (who, what, when, where)
* Identify claim type (factual, predictive, evaluative)
* Link to existing similar claims

=== 8.3 FR3 – Claim Classification ===

* Domain: Politics, Science, Health, etc.
* Type: Historical fact, current stat, prediction, etc.
* Risk score: Automated calculation
* Complexity: Simple, moderate, complex

== 9. Scenario System ==

=== 9.1 FR4 – Scenario Generation ===

**Automated scenario creation**:
* AKEL analyzes claim and generates likely scenarios
* Each scenario includes: assumptions, evidence, conclusion
* Users can flag incorrect scenarios
* System learns from corrections
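The duplicate detection that FR1 (Section 8.1 above) calls for hinges on semantic similarity between a new claim and existing ones. A minimal sketch, using bag-of-words cosine similarity as a stand-in for real sentence embeddings; the 0.8 threshold is an illustrative assumption, not a FactHarbor constant.

```python
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts. A production system would use
    sentence embeddings; this bag-of-words version is only a stand-in."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def find_duplicate(new_claim: str, existing: list, threshold: float = 0.8):
    """Return the closest existing claim if it clears the threshold, else None."""
    best = max(existing, key=lambda c: similarity(new_claim, c), default=None)
    if best is not None and similarity(new_claim, best) >= threshold:
        return best
    return None

known = ["the earth orbits the sun", "vaccines cause autism"]
print(find_duplicate("the earth orbits the sun every year", known))
# the earth orbits the sun
```

When `find_duplicate` returns a match, the submission would be linked to the existing claim rather than creating a new one, which is the "new claims added if not duplicates" behavior described for Readers.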
=== 9.2 FR5 – Evidence Linking ===

* Automated evidence discovery from sources
* Relevance scoring
* Contradiction detection
* Source quality assessment

=== 9.3 FR6 – Scenario Comparison ===

* Side-by-side comparison interface
* Highlight key differences
* Show evidence supporting each
* Display confidence scores

== 10. Verdicts & Analysis ==

=== 10.1 FR7 – Automated Verdicts ===

* AKEL generates verdict based on evidence
* Confidence score displayed prominently
* Source quality indicators
* Contradictions noted
* Uncertainty acknowledged

=== 10.2 FR8 – Time Evolution ===

* Claims update as new evidence emerges
* Version history maintained
* Changes highlighted
* Confidence score trends visible
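Taken together, FR7 and FR8 suggest that a verdict is an append-only series of revisions rather than a single value. A minimal sketch; the field names and verdict labels are illustrative assumptions, not FactHarbor's data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VerdictHistory:
    """Append-only verdict record: updates are added, never overwritten,
    so change history and confidence trends stay visible (FR8)."""
    claim: str
    versions: list = field(default_factory=list)  # (date, verdict_label, confidence 0-100)

    def update(self, when: date, verdict: str, confidence: int) -> None:
        self.versions.append((when, verdict, confidence))

    def current(self):
        """Latest verdict shown to readers, with its confidence score (FR7)."""
        return self.versions[-1]

    def confidence_trend(self) -> list:
        """Confidence over time, for the trend display FR8 calls for."""
        return [conf for _, _, conf in self.versions]

v = VerdictHistory("Drug X reduces symptom Y")
v.update(date(2025, 1, 10), "uncertain", 45)   # early, thin evidence
v.update(date(2025, 6, 2), "likely true", 72)  # new evidence emerged
print(v.confidence_trend())  # [45, 72]
```

Keeping every revision also gives the public version history that FR11 (Audit Trail) requires for free.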
== 11. Workflow & Moderation ==

=== 11.1 FR9 – Publication Workflow ===

**Simple flow**:
1. Claim submitted
2. AKEL processes (automated)
3. If confidence > threshold: Publish
4. If confidence < threshold: Flag for improvement
5. If risk score > threshold: Flag for moderator

**No multi-stage approval process**

=== 11.2 FR10 – Moderation ===

**Focus on abuse, not routine quality**:
* Automated abuse detection
* Moderators handle flags
* Quick response to harmful content
* Minimal involvement in routine content

=== 11.3 FR11 – Audit Trail ===

* All edits logged
* Version history public
* Moderation decisions documented
* System improvements tracked

== 12. Technical Requirements ==

=== 12.1 NFR1 – Performance ===

* Claim processing: < 30 seconds
* Search response: < 2 seconds
* Page load: < 3 seconds
* 99% uptime

=== 12.2 NFR2 – Scalability ===

* Handle 10,000 claims initially
* Scale to 1M+ claims
* Support 100K+ concurrent users
* Automated processing scales linearly

=== 12.3 NFR3 – Transparency ===

* All algorithms open source
* All data exportable
* All decisions documented
* Quality metrics public

=== 12.4 NFR4 – Security & Privacy ===

* Follow [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]]
* Secure authentication
* Data encryption
* Regular security audits

=== 12.5 NFR5 – Maintainability ===

* Modular architecture
* Automated testing
* Continuous integration
* Comprehensive documentation

== 13. MVP Scope ==

**Phase 1 (Months 1-3): Read-Only MVP**

Build:
* Automated claim analysis
* Confidence scoring
* Source evaluation
* Browse/search interface
* User flagging system

**Goal**: Prove AI quality before adding user editing

**Phase 2 (Months 4-6): User Contributions**

Add only if needed:
* Simple editing (Wikipedia-style)
* Reputation system
* Basic moderation

**Phase 3 (Months 7-12): Refinement**

* Continuous quality improvement
* Feature additions based on real usage
* Scale infrastructure

**Deferred**:
* Federation (until multiple successful instances exist)
* Complex contribution workflows (focus on automation)
* Extensive role hierarchy (keep simple)

== 14. Success Metrics ==

**System Quality** (track weekly):
* Error rate by category (target: -10%/month)
* Average confidence score (target: increase)
* Source quality distribution (target: more high-quality)
* Contradiction detection rate (target: increase)

**Efficiency** (track monthly):
* Claims processed per hour (target: increase)
* Human hours per claim (target: decrease)
* Automation coverage (target: >90%)
* Re-work rate (target: <5%)

**User Satisfaction** (track quarterly):
* User flag rate (issues found)
* Correction acceptance rate (flags valid)
* Return user rate
* Trust indicators (surveys)

== 15. Related Pages ==

* [[Architecture>>FactHarbor.Specification.Architecture.WebHome]]
* [[Data Model>>FactHarbor.Specification.Data Model.WebHome]]
* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
* [[Global Rules>>FactHarbor.Organisation.How-We-Work-Together.GlobalRules.WebHome]]
* [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]]