= Requirements =

**This page defines Roles, Content States, Rules, and System Requirements for FactHarbor.**

**Core Philosophy:** Invest in system improvement, not manual data correction. When AI makes errors, improve the algorithm and re-process automatically.

== Navigation ==

* **[[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]]** - What users need from FactHarbor (drives these requirements)
* **This page** - How we fulfill those needs through system design

(% class="box infomessage" %)
(((
**How to read this page:**

1. **User Needs drive Requirements**: See [[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]] for what users need
2. **Requirements define implementation**: This page shows how we fulfill those needs
3. **Functional Requirements (FR)**: Specific features and capabilities
4. **Non-Functional Requirements (NFR)**: Quality attributes (performance, security, etc.)

Each requirement references which User Needs it fulfills.
)))

== 1. Roles ==

**Fulfills**: UN-12 (Submit claims), UN-13 (Cite verdicts), UN-14 (API access)

FactHarbor uses three simple roles plus a reputation system.

=== 1.1 Reader ===

**Who**: Anyone (no login required)

**Can**:
* Browse and search claims
* View scenarios, evidence, verdicts, and confidence scores
* Flag issues or errors
* Use filters, search, and visualization tools
* Submit claims (new claims are added automatically if they are not duplicates)

**Cannot**:
* Modify content
* Access edit history details

**User Needs served**: UN-1 (Trust assessment), UN-2 (Claim verification), UN-3 (Article summary with FactHarbor analysis summary), UN-4 (Social media fact-checking), UN-5 (Source tracing), UN-7 (Evidence transparency), UN-8 (Understanding disagreement), UN-12 (Submit claims)

=== 1.2 Contributor ===

**Who**: Registered users (earn reputation through contributions)
| 50 | |||
| 51 | **Can**: | ||
| 52 | * Everything a Reader can do | ||
| 53 | * Edit claims, evidence, and scenarios | ||
| 54 | * Add sources and citations | ||
| 55 | * Suggest improvements to AI-generated content | ||
| 56 | * Participate in discussions | ||
| 57 | * Earn reputation points for quality contributions | ||
| 58 | |||
**Reputation System** (a scoring sketch follows below):
* New contributors: Limited edit privileges
* Established contributors (higher reputation): Full edit access
* Trusted contributors (substantial reputation): Can approve certain changes
* Reputation earned through: Accepted edits, helpful flags, quality contributions
* Reputation lost through: Reverted edits, invalid flags, abuse
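
The rules above can be expressed as a small scoring table. The sketch below is illustrative only: the event names, point values, and privilege thresholds are assumptions, since this page deliberately does not fix concrete numbers.

{{code language="python"}}
# Minimal sketch of the reputation rules above. Event names, point values,
# and privilege thresholds are illustrative assumptions, not fixed policy.

REPUTATION_EVENTS = {
    "accepted_edit": +10,
    "helpful_flag": +5,
    "quality_contribution": +15,
    "reverted_edit": -10,
    "invalid_flag": -5,
    "abuse": -50,
}

PRIVILEGE_THRESHOLDS = [        # (minimum reputation, privilege level)
    (0, "limited"),             # new contributors
    (100, "full_edit"),         # established contributors
    (500, "approve_changes"),   # trusted contributors
]

def apply_event(reputation: int, event: str) -> int:
    """Adjust a contributor's reputation for a single event (never below zero)."""
    return max(0, reputation + REPUTATION_EVENTS[event])

def privilege_level(reputation: int) -> str:
    """Return the highest privilege level the reputation qualifies for."""
    level = "limited"
    for minimum, name in PRIVILEGE_THRESHOLDS:
        if reputation >= minimum:
            level = name
    return level
{{/code}}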
| 65 | |||
| 66 | **Cannot**: | ||
| 67 | * Delete or hide content (only moderators) | ||
| 68 | * Override moderation decisions | ||
| 69 | |||
| 70 | **User Needs served**: UN-13 (Cite and contribute) | ||
| 71 | |||
| 72 | === 1.3 Moderator === | ||
| 73 | |||
**Who**: Trusted community members with a proven track record, appointed by the governance board

**Can**:
* Review flagged content
* Hide harmful or abusive content
* Resolve disputes between contributors
* Issue warnings or temporary bans
* Make final decisions on content disputes
* Access full audit logs

**Cannot**:
* Change governance rules
* Permanently ban users without board approval
* Override technical quality gates

**Note**: Small team (3-5 moderators initially), supported by automated moderation tools.
| 90 | |||
| 91 | === 1.4 Domain Trusted Contributors (Optional, Task-Specific) === | ||
| 92 | |||
| 93 | **Who**: Subject matter specialists invited for specific high-stakes disputes | ||
| 94 | |||
| 95 | **Not a permanent role**: Contacted externally when needed for contested claims in their domain | ||
| 96 | |||
| 97 | **When used**: | ||
| 98 | * Medical claims with life/safety implications | ||
| 99 | * Legal interpretations with significant impact | ||
| 100 | * Scientific claims with high controversy | ||
| 101 | * Technical claims requiring specialized knowledge | ||
| 102 | |||
| 103 | **Process**: | ||
| 104 | * Moderator identifies need for expert input | ||
* Expert is contacted externally (not required to be a registered user)
* Trusted Contributor provides written opinion with sources
* Opinion added to claim record
* Trusted Contributor acknowledged in claim

**User Needs served**: UN-16 (Expert validation status)

== 2. Content States ==

**Fulfills**: UN-1 (Trust indicators), UN-16 (Review status transparency)

FactHarbor uses two content states. The focus is on transparency and confidence scoring, not gatekeeping.

=== 2.1 Published ===

**Status**: Visible to all users

**Includes**:
* AI-generated analyses (default state)
* User-contributed content
* Edited/improved content

**Quality Indicators** (displayed with content):
* **Confidence Score**: 0-100% (AI's confidence in analysis)
* **Source Quality Score**: 0-100% (based on source track record)
* **Controversy Flag**: If high dispute/edit activity
* **Completeness Score**: % of expected fields filled
* **Last Updated**: Date of most recent change
* **Edit Count**: Number of revisions
* **Review Status**: AI-generated / Human-reviewed / Expert-validated

**Automatic Warnings** (a threshold sketch follows this list):
* Confidence < 60%: "Low confidence - use caution"
* Source quality < 40%: "Sources may be unreliable"
* High controversy: "Disputed - multiple interpretations exist"
* Medical/Legal/Safety domain: "Seek professional advice"
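
As a sketch of how these warnings could be derived from the quality indicators above, the snippet below maps the 0-100% scores and the controversy flag to warning banners. The function name and signature are assumptions; the thresholds and banner texts come from this list.

{{code language="python"}}
# Illustrative mapping from quality indicators to automatic warning banners.
# Thresholds and banner texts are from this page; the function is an assumption.

SENSITIVE_DOMAINS = {"medical", "legal", "safety"}

def automatic_warnings(confidence: float, source_quality: float,
                       high_controversy: bool, domain: str) -> list[str]:
    """Return warning banners for a published analysis (scores on a 0-100 scale)."""
    warnings = []
    if confidence < 60:
        warnings.append("Low confidence - use caution")
    if source_quality < 40:
        warnings.append("Sources may be unreliable")
    if high_controversy:
        warnings.append("Disputed - multiple interpretations exist")
    if domain in SENSITIVE_DOMAINS:
        warnings.append("Seek professional advice")
    return warnings
{{/code}}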
| 141 | |||
| 142 | **User Needs served**: UN-1 (Trust score), UN-9 (Methodology transparency), UN-15 (Evolution timeline), UN-16 (Review status) | ||
| 143 | |||
| 144 | === 2.2 Hidden === | ||
| 145 | |||
| 146 | **Status**: Not visible to regular users (only to moderators) | ||
| 147 | |||
| 148 | **Reasons**: | ||
| 149 | * Spam or advertising | ||
| 150 | * Personal attacks or harassment | ||
| 151 | * Illegal content | ||
| 152 | * Privacy violations | ||
| 153 | * Deliberate misinformation (verified) | ||
| 154 | * Abuse or harmful content | ||
| 155 | |||
**Process**:
* Automated detection flags content for moderator review
* Moderator confirms and hides
* Original author notified with reason
* Author can appeal to the board if they dispute the moderator's decision
| 161 | |||
| 162 | **Note**: Content is hidden, not deleted (for audit trail) | ||
| 163 | |||
| 164 | == 3. Contribution Rules == | ||
| 165 | |||
| 166 | === 3.1 All Contributors Must === | ||
| 167 | |||
| 168 | * Provide sources for factual claims | ||
| 169 | * Use clear, neutral language in FactHarbor's own summaries | ||
| 170 | * Respect others and maintain civil discourse | ||
| 171 | * Accept community feedback constructively | ||
| 172 | * Focus on improving quality, not protecting ego | ||
| 173 | |||
| 174 | === 3.2 AKEL (AI System) === | ||
| 175 | |||
| 176 | **AKEL is the primary system**. Human contributions supplement and train AKEL. | ||
| 177 | |||
| 178 | **AKEL Must**: | ||
| 179 | * Mark all outputs as AI-generated | ||
| 180 | * Display confidence scores prominently | ||
| 181 | * Provide source citations | ||
| 182 | * Flag uncertainty clearly | ||
| 183 | * Identify contradictions in evidence | ||
| 184 | * Learn from human corrections | ||
| 185 | |||
| 186 | **When AKEL Makes Errors**: | ||
| 187 | 1. Capture the error pattern (what, why, how common) | ||
| 188 | 2. Improve the system (better prompt, model, validation) | ||
| 189 | 3. Re-process affected claims automatically | ||
| 190 | 4. Measure improvement (did quality increase?) | ||
| 191 | |||
| 192 | **Human Role**: Train AKEL through corrections, not replace AKEL | ||
| 193 | |||
| 194 | === 3.3 Contributors Should === | ||
| 195 | |||
| 196 | * Improve clarity and structure | ||
| 197 | * Add missing sources | ||
| 198 | * Flag errors for system improvement | ||
| 199 | * Suggest better ways to present information | ||
| 200 | * Participate in quality discussions | ||
| 201 | |||
| 202 | === 3.4 Moderators Must === | ||
| 203 | |||
| 204 | * Be impartial | ||
| 205 | * Document moderation decisions | ||
| 206 | * Respond to appeals promptly | ||
| 207 | * Use automated tools to scale efforts | ||
| 208 | * Focus on abuse/harm, not routine quality control | ||
| 209 | |||
| 210 | == 4. Quality Standards == | ||
| 211 | |||
| 212 | **Fulfills**: UN-5 (Source reliability), UN-6 (Publisher track records), UN-7 (Evidence transparency), UN-9 (Methodology transparency) | ||
| 213 | |||
| 214 | === 4.1 Source Requirements === | ||
| 215 | |||
| 216 | **Track Record Over Credentials**: | ||
| 217 | * Sources evaluated by historical accuracy | ||
| 218 | * Correction policy matters | ||
| 219 | * Independence from conflicts of interest | ||
| 220 | * Methodology transparency | ||
| 221 | |||
**Source Quality Database** (a scoring sketch follows below):
* Automated tracking of source accuracy
* Correction frequency
* Reliability score (updated continuously)
* Users can see source track record

**No automatic trust** for government, academia, or media - all evaluated by track record.
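
One way to maintain the continuously updated reliability score is a running track record per source, as sketched below. The record fields and the penalty weight are assumptions; only the inputs (historical accuracy and correction behavior) come from the list above.

{{code language="python"}}
# Illustrative running reliability score for a source, on the same 0-100 scale
# as the Source Quality Score indicator. Field names and weights are assumptions.

from dataclasses import dataclass

@dataclass
class SourceRecord:
    accurate_claims: int = 0
    inaccurate_claims: int = 0
    corrections_issued: int = 0   # inaccuracies the source later corrected

    def reliability_score(self) -> float:
        """0-100 score based on track record, recomputed as new claims are checked."""
        total = self.accurate_claims + self.inaccurate_claims
        if total == 0:
            return 50.0                       # unknown sources start neutral
        accuracy = self.accurate_claims / total
        uncorrected = max(0, self.inaccurate_claims - self.corrections_issued)
        penalty = 0.1 * uncorrected / total   # uncorrected errors weigh extra
        return round(100 * max(0.0, accuracy - penalty), 1)
{{/code}}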
| 229 | |||
| 230 | **User Needs served**: UN-5 (Source provenance), UN-6 (Publisher reliability) | ||
| 231 | |||
| 232 | === 4.2 Claim Requirements === | ||
| 233 | |||
| 234 | * Clear subject and assertion | ||
| 235 | * Verifiable with available information | ||
| 236 | * Sourced (or explicitly marked as needing sources) | ||
| 237 | * Neutral language in FactHarbor summaries | ||
| 238 | * Appropriate context provided | ||
| 239 | |||
| 240 | **User Needs served**: UN-2 (Claim extraction and verification) | ||
| 241 | |||
| 242 | === 4.3 Evidence Requirements === | ||
| 243 | |||
| 244 | * Publicly accessible (or explain why not) | ||
| 245 | * Properly cited with attribution | ||
| 246 | * Relevant to claim being evaluated | ||
| 247 | * Original source preferred over secondary | ||
| 248 | |||
| 249 | **User Needs served**: UN-7 (Evidence transparency) | ||
| 250 | |||
| 251 | === 4.4 Confidence Scoring === | ||
| 252 | |||
| 253 | **Automated confidence calculation based on**: | ||
| 254 | * Source quality scores | ||
| 255 | * Evidence consistency | ||
| 256 | * Contradiction detection | ||
| 257 | * Completeness of analysis | ||
| 258 | * Historical accuracy of similar claims | ||
| 259 | |||
**Thresholds** (a sketch follows this list):
* < 40%: Too low to publish (needs improvement)
* 40-60%: Published with "Low confidence" warning
* 60-80%: Published as standard
* 80-100%: Published as "High confidence"
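
A minimal sketch of how the calculation and thresholds above could be combined: a weighted sum of the listed factors mapped onto the publication bands. The factor weights are assumptions; the band boundaries are taken from this list.

{{code language="python"}}
# Illustrative confidence scoring. The weights are assumptions; the publication
# bands (<40, 40-60, 60-80, 80-100) are taken from the thresholds above.

FACTOR_WEIGHTS = {
    "source_quality": 0.30,
    "evidence_consistency": 0.25,
    "contradiction_absence": 0.20,
    "completeness": 0.15,
    "historical_accuracy": 0.10,
}

def confidence_score(factors: dict[str, float]) -> float:
    """Weighted combination of 0-1 factor scores, returned as a percentage."""
    return 100 * sum(FACTOR_WEIGHTS[name] * factors[name] for name in FACTOR_WEIGHTS)

def publication_band(confidence: float) -> str:
    """Map a confidence percentage onto the publication thresholds above."""
    if confidence < 40:
        return "not published - needs improvement"
    if confidence < 60:
        return 'published with "Low confidence" warning'
    if confidence < 80:
        return "published as standard"
    return 'published as "High confidence"'
{{/code}}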
| 265 | |||
| 266 | **User Needs served**: UN-1 (Trust assessment), UN-9 (Methodology transparency) | ||
| 267 | |||
| 268 | == 5. Automated Risk Scoring == | ||
| 269 | |||
| 270 | **Fulfills**: UN-10 (Manipulation detection), UN-16 (Appropriate review level) | ||
| 271 | |||
| 272 | **Replace manual risk tiers with continuous automated scoring**. | ||
| 273 | |||
| 274 | === 5.1 Risk Score Calculation === | ||
| 275 | |||
**Factors** (weighted algorithm; a combined sketch of scoring and actions appears at the end of this section):
* **Domain sensitivity**: Medical, legal, safety auto-flagged higher
* **Potential impact**: Views, citations, spread
* **Controversy level**: Flags, disputes, edit wars
* **Uncertainty**: Low confidence, contradictory evidence
* **Source reliability**: Track record of sources used

**Score**: 0-100 (higher = more risk)

=== 5.2 Automated Actions ===

* **Score > 80**: Flag for moderator review before publication
* **Score 60-80**: Publish with prominent warnings
* **Score 40-60**: Publish with standard warnings
* **Score < 40**: Publish normally

**Continuous monitoring**: Risk score recalculated as new information emerges
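
The sketch below combines the weighted factors from 5.1 with the automated actions from 5.2. The weights are assumptions; the action thresholds (80, 60, 40) come from the list above, and the score would simply be recomputed whenever the inputs change.

{{code language="python"}}
# Illustrative risk scoring. Weights are assumptions; the action thresholds
# mirror section 5.2 (>80 review, 60-80 prominent, 40-60 standard, <40 normal).

RISK_WEIGHTS = {
    "domain_sensitivity": 0.30,    # medical, legal, safety score higher
    "potential_impact": 0.25,      # views, citations, spread
    "controversy": 0.20,           # flags, disputes, edit wars
    "uncertainty": 0.15,           # low confidence, contradictory evidence
    "source_unreliability": 0.10,  # poor track record of cited sources
}

def risk_score(factors: dict[str, float]) -> float:
    """Weighted 0-100 risk score; each factor is supplied as a 0-1 value."""
    return 100 * sum(RISK_WEIGHTS[name] * factors[name] for name in RISK_WEIGHTS)

def automated_action(score: float) -> str:
    """Map a risk score onto the automated actions above."""
    if score > 80:
        return "flag for moderator review before publication"
    if score > 60:
        return "publish with prominent warnings"
    if score > 40:
        return "publish with standard warnings"
    return "publish normally"
{{/code}}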
| 293 | |||
| 294 | **User Needs served**: UN-10 (Detect manipulation tactics), UN-16 (Review status) | ||
| 295 | |||
| 296 | == 6. System Improvement Process == | ||
| 297 | |||
| 298 | **Core principle**: Fix the system, not just the data. | ||
| 299 | |||
| 300 | === 6.1 Error Capture === | ||
| 301 | |||
**When users flag errors or make corrections** (a record sketch follows this list):
1. What was wrong? (categorize)
2. What should it have been?
3. Why did the system fail? (root cause)
4. How common is this pattern?
5. Store in ErrorPattern table (improvement queue)
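
The five capture steps map naturally onto a small record in the improvement queue. The field names below mirror the questions above; the queue interface is an assumption, since this page only names an ErrorPattern table.

{{code language="python"}}
# Sketch of the ErrorPattern record described above. The queue interface is
# hypothetical; the captured fields correspond to steps 1-4.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ErrorPattern:
    category: str      # 1. what was wrong (categorized)
    expected: str      # 2. what it should have been
    root_cause: str    # 3. why the system failed
    occurrences: int   # 4. how common the pattern is
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def capture_error(queue: list[ErrorPattern], pattern: ErrorPattern) -> None:
    """Step 5: store the pattern in the improvement queue for the weekly cycle."""
    queue.append(pattern)
{{/code}}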
| 308 | |||
| 309 | === 6.2 Weekly Improvement Cycle === | ||
| 310 | |||
| 311 | 1. **Review**: Analyze top error patterns | ||
| 312 | 2. **Develop**: Create fix (prompt, model, validation) | ||
| 313 | 3. **Test**: Validate fix on sample claims | ||
| 314 | 4. **Deploy**: Roll out if quality improves | ||
| 315 | 5. **Re-process**: Automatically update affected claims | ||
| 316 | 6. **Monitor**: Track quality metrics | ||
| 317 | |||
| 318 | === 6.3 Quality Metrics Dashboard === | ||
| 319 | |||
| 320 | **Track continuously**: | ||
| 321 | * Error rate by category | ||
| 322 | * Source quality distribution | ||
| 323 | * Confidence score trends | ||
| 324 | * User flag rate (issues found) | ||
| 325 | * Correction acceptance rate | ||
| 326 | * Re-work rate | ||
| 327 | * Claims processed per hour | ||
| 328 | |||
**Goal**: 10% monthly reduction in error rate
| 330 | |||
| 331 | == 7. Automated Quality Monitoring == | ||
| 332 | |||
| 333 | **Replace manual audit sampling with automated monitoring**. | ||
| 334 | |||
| 335 | === 7.1 Continuous Metrics === | ||
| 336 | |||
| 337 | * **Source quality**: Track record database | ||
| 338 | * **Consistency**: Contradiction detection | ||
| 339 | * **Clarity**: Readability scores | ||
| 340 | * **Completeness**: Field validation | ||
| 341 | * **Accuracy**: User corrections tracked | ||
| 342 | |||
| 343 | === 7.2 Anomaly Detection === | ||
| 344 | |||
**Automated alerts for** (a detection sketch follows this list):
* Sudden quality drops
* Unusual patterns
* Contradiction clusters
* Source reliability changes
* User behavior anomalies
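
As one possible realization of the "sudden quality drops" alert, the sketch below compares the latest daily quality score against a rolling window. The window size and the 3-sigma rule are assumptions; the page only requires that anomalies raise alerts.

{{code language="python"}}
# Illustrative anomaly alert for sudden quality drops. The window size and the
# 3-sigma threshold are assumptions.

from statistics import mean, stdev

def quality_drop_alert(daily_scores: list[float], window: int = 30,
                       sigmas: float = 3.0) -> bool:
    """Return True if the latest daily quality score is unusually low."""
    if len(daily_scores) <= window:
        return False                          # not enough history yet
    history = daily_scores[-window - 1:-1]    # previous `window` days
    latest = daily_scores[-1]
    spread = stdev(history)
    if spread == 0:
        return latest < history[-1]           # flat history: any drop is unusual
    return latest < mean(history) - sigmas * spread
{{/code}}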
| 351 | |||
| 352 | === 7.3 Targeted Review === | ||
| 353 | |||
| 354 | * Review only flagged items | ||
| 355 | * Random sampling for calibration (not quotas) | ||
| 356 | * Learn from corrections to improve automation | ||
| 357 | |||
| 358 | == 8. Functional Requirements == | ||
| 359 | |||
| 360 | This section defines specific features that fulfill user needs. | ||
| 361 | |||
| 362 | === 8.1 Claim Intake & Normalization === | ||
| 363 | |||
| 364 | ==== FR1 — Claim Intake ==== | ||
| 365 | |||
| 366 | **Fulfills**: UN-2 (Claim extraction), UN-4 (Quick fact-checking), UN-12 (Submit claims) | ||
| 367 | |||
* Users submit claims via simple form or API
* Claims can be text, URL, or image
* Duplicate detection (semantic similarity; see the sketch below)
* Auto-categorization by domain
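
A minimal sketch of duplicate detection via semantic similarity: embed the new claim, compare it against existing claim embeddings, and reuse the closest match above a cutoff. The embedding step is left abstract and the 0.9 threshold is an assumption.

{{code language="python"}}
# Illustrative duplicate check via cosine similarity of claim embeddings.
# The embedding model is left abstract; the 0.9 cutoff is an assumption.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def find_duplicate(new_claim_vec: list[float],
                   existing: dict[str, list[float]],
                   threshold: float = 0.9) -> str | None:
    """Return the id of the most similar existing claim, if any passes the cutoff."""
    best_id, best_sim = None, threshold
    for claim_id, vec in existing.items():
        sim = cosine(new_claim_vec, vec)
        if sim >= best_sim:
            best_id, best_sim = claim_id, sim
    return best_id
{{/code}}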
| 372 | |||
| 373 | ==== FR2 — Claim Normalization ==== | ||
| 374 | |||
| 375 | **Fulfills**: UN-2 (Claim verification) | ||
| 376 | |||
| 377 | * Standardize to clear assertion format | ||
| 378 | * Extract key entities (who, what, when, where) | ||
| 379 | * Identify claim type (factual, predictive, evaluative) | ||
| 380 | * Link to existing similar claims | ||
| 381 | |||
| 382 | ==== FR3 — Claim Classification ==== | ||
| 383 | |||
| 384 | **Fulfills**: UN-11 (Filtered research) | ||
| 385 | |||
| 386 | * Domain: Politics, Science, Health, etc. | ||
| 387 | * Type: Historical fact, current stat, prediction, etc. | ||
| 388 | * Risk score: Automated calculation | ||
| 389 | * Complexity: Simple, moderate, complex | ||
| 390 | |||
| 391 | === 8.2 Scenario System === | ||
| 392 | |||
| 393 | ==== FR4 — Scenario Generation ==== | ||
| 394 | |||
| 395 | **Fulfills**: UN-2 (Context-dependent verification), UN-3 (Article summary with FactHarbor analysis summary), UN-8 (Understanding disagreement) | ||
| 396 | |||
| 397 | **Automated scenario creation**: | ||
| 398 | * AKEL analyzes claim and generates likely scenarios (use-cases and contexts) | ||
| 399 | * Each scenario includes: assumptions, definitions, boundaries, evidence context | ||
| 400 | * Users can flag incorrect scenarios | ||
| 401 | * System learns from corrections | ||
| 402 | |||
| 403 | **Key Concept**: Scenarios represent different interpretations or contexts (e.g., "Clinical trials with healthy adults" vs. "Real-world data with diverse populations") | ||
| 404 | |||
| 405 | ==== FR5 — Evidence Linking ==== | ||
| 406 | |||
| 407 | **Fulfills**: UN-5 (Source tracing), UN-7 (Evidence transparency) | ||
| 408 | |||
| 409 | * Automated evidence discovery from sources | ||
| 410 | * Relevance scoring | ||
| 411 | * Contradiction detection | ||
| 412 | * Source quality assessment | ||
| 413 | |||
| 414 | ==== FR6 — Scenario Comparison ==== | ||
| 415 | |||
| 416 | **Fulfills**: UN-3 (Article summary with FactHarbor analysis summary), UN-8 (Understanding disagreement) | ||
| 417 | |||
| 418 | * Side-by-side comparison interface | ||
| 419 | * Highlight key differences between scenarios | ||
| 420 | * Show evidence supporting each scenario | ||
| 421 | * Display confidence scores per scenario | ||
| 422 | |||
| 423 | === 8.3 Verdicts & Analysis === | ||
| 424 | |||
| 425 | ==== FR7 — Automated Verdicts ==== | ||
| 426 | |||
| 427 | **Fulfills**: UN-1 (Trust score), UN-2 (Verification verdicts), UN-3 (Article summary with FactHarbor analysis summary), UN-13 (Cite verdicts) | ||
| 428 | |||
* AKEL generates verdict based on evidence within each scenario
* **Likelihood range** displayed (e.g., "0.70-0.85 (likely true)") - NOT binary true/false
* **Uncertainty factors** explicitly listed (e.g., "Small sample sizes", "Long-term effects unknown")
* Confidence score displayed prominently
* Source quality indicators shown
* Contradictions noted
* Uncertainty acknowledged

**Key Innovation**: Detailed probabilistic verdicts with explicit uncertainty, not binary judgments (a data sketch follows below)
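
A sketch of how such a verdict could be represented: the field names are assumptions, while the likelihood range, uncertainty factors, and confidence score correspond directly to the bullets above.

{{code language="python"}}
# Illustrative verdict record with a likelihood range instead of a binary label.
# Field names are assumptions; the example values mirror the bullets above.

from dataclasses import dataclass, field

@dataclass
class ScenarioVerdict:
    scenario: str
    likelihood_low: float                  # e.g. 0.70
    likelihood_high: float                 # e.g. 0.85
    label: str                             # e.g. "likely true"
    confidence: float                      # 0-100, displayed prominently
    uncertainty_factors: list[str] = field(default_factory=list)
    contradictions: list[str] = field(default_factory=list)

example = ScenarioVerdict(
    scenario="Clinical trials with healthy adults",
    likelihood_low=0.70,
    likelihood_high=0.85,
    label="likely true",
    confidence=78.0,
    uncertainty_factors=["Small sample sizes", "Long-term effects unknown"],
)
{{/code}}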
| 438 | |||
| 439 | ==== FR8 — Time Evolution ==== | ||
| 440 | |||
| 441 | **Fulfills**: UN-15 (Verdict evolution timeline) | ||
| 442 | |||
| 443 | * Claims and verdicts update as new evidence emerges | ||
| 444 | * Version history maintained for all verdicts | ||
| 445 | * Changes highlighted | ||
| 446 | * Confidence score trends visible | ||
* Users can see "as of date X, what did we know?" (a lookup sketch follows below)
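
A minimal sketch of the "as of date X" lookup over a verdict's version history; the (effective date, verdict) record shape is an assumption.

{{code language="python"}}
# Illustrative "as of date X" lookup over a claim's verdict history.
# The (effective_date, verdict) record shape is an assumption.

from datetime import date

def verdict_as_of(history: list[tuple[date, str]], as_of: date) -> str | None:
    """Return the verdict that was current on `as_of` (history sorted by date)."""
    current = None
    for effective_date, verdict in history:
        if effective_date > as_of:
            break
        current = verdict
    return current

# Example: what did we know on 2024-06-01?
history = [(date(2024, 1, 10), "uncertain"), (date(2024, 5, 2), "likely true")]
assert verdict_as_of(history, date(2024, 6, 1)) == "likely true"
{{/code}}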
| 448 | |||
| 449 | === 8.4 User Interface & Presentation === | ||
| 450 | |||
| 451 | ==== FR12 — Two-Panel Summary View (Article Summary with FactHarbor Analysis Summary) ==== | ||
| 452 | |||
| 453 | **Fulfills**: UN-3 (Article Summary with FactHarbor Analysis Summary) | ||
| 454 | |||
**Purpose**: Provide a side-by-side comparison of what a document claims vs. FactHarbor's complete analysis of its credibility
| 456 | |||
| 457 | **Left Panel: Article Summary**: | ||
| 458 | * Document title, source, and claimed credibility | ||
| 459 | * "The Big Picture" - main thesis or position change | ||
| 460 | * "Key Findings" - structured summary of document's main claims | ||
| 461 | * "Reasoning" - document's explanation for positions | ||
| 462 | * "Conclusion" - document's bottom line | ||
| 463 | |||
| 464 | **Right Panel: FactHarbor Analysis Summary**: | ||
| 465 | * FactHarbor's independent source credibility assessment | ||
| 466 | * Claim-by-claim verdicts with confidence scores | ||
| 467 | * Methodology assessment (strengths, limitations) | ||
| 468 | * Overall verdict on document quality | ||
| 469 | * Analysis ID for reference | ||
| 470 | |||
| 471 | **Design Principles**: | ||
| 472 | * No scrolling required - both panels visible simultaneously | ||
| 473 | * Visual distinction between "what they say" and "FactHarbor's analysis" | ||
| 474 | * Color coding for verdicts (supported, uncertain, refuted) | ||
| 475 | * Confidence percentages clearly visible | ||
| 476 | * Mobile responsive (panels stack vertically on small screens) | ||
| 477 | |||
| 478 | **Implementation Notes**: | ||
| 479 | * Generated automatically by AKEL for every analyzed document | ||
| 480 | * Updates when verdict evolves (maintains version history) | ||
| 481 | * Exportable as standalone summary report | ||
| 482 | * Shareable via permanent URL | ||
| 483 | |||
| 484 | === 8.5 Workflow & Moderation === | ||
| 485 | |||
| 486 | ==== FR9 — Publication Workflow ==== | ||
| 487 | |||
| 488 | **Fulfills**: UN-1 (Fast access to verified content), UN-16 (Clear review status) | ||
| 489 | |||
**Simple flow** (a decision sketch follows below):
1. Claim submitted
2. AKEL processes (automated)
3. If confidence > threshold: Publish (labeled as AI-generated)
4. If confidence < threshold: Flag for improvement
5. If risk score > threshold: Flag for moderator

**No multi-stage approval process**
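
The flow above reduces to a single decision function. The sketch below reuses the 40% confidence floor from section 4.4 and the risk threshold of 80 from section 5.2; treating them as the thresholds meant here is an assumption.

{{code language="python"}}
# Illustrative publication decision for steps 3-5 of the flow above. The
# concrete threshold values are borrowed from sections 4.4 and 5.2.

def publication_decision(confidence: float, risk: float) -> str:
    if risk > 80:
        return "flag for moderator review"        # step 5
    if confidence < 40:
        return "flag for improvement"             # step 4
    return "publish (labeled as AI-generated)"    # step 3
{{/code}}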
| 498 | |||
| 499 | ==== FR10 — Moderation ==== | ||
| 500 | |||
| 501 | **Focus on abuse, not routine quality**: | ||
| 502 | * Automated abuse detection | ||
| 503 | * Moderators handle flags | ||
| 504 | * Quick response to harmful content | ||
| 505 | * Minimal involvement in routine content | ||
| 506 | |||
| 507 | ==== FR11 — Audit Trail ==== | ||
| 508 | |||
| 509 | **Fulfills**: UN-14 (API access to histories), UN-15 (Evolution tracking) | ||
| 510 | |||
| 511 | * All edits logged | ||
| 512 | * Version history public | ||
| 513 | * Moderation decisions documented | ||
| 514 | * System improvements tracked | ||
| 515 | |||
| 516 | == 9. Non-Functional Requirements == | ||
| 517 | |||
| 518 | === 9.1 NFR1 — Performance === | ||
| 519 | |||
| 520 | **Fulfills**: UN-4 (Fast fact-checking), UN-11 (Responsive filtering) | ||
| 521 | |||
| 522 | * Claim processing: < 30 seconds | ||
| 523 | * Search response: < 2 seconds | ||
| 524 | * Page load: < 3 seconds | ||
| 525 | * 99% uptime | ||
| 526 | |||
| 527 | === 9.2 NFR2 — Scalability === | ||
| 528 | |||
| 529 | **Fulfills**: UN-14 (API access at scale) | ||
| 530 | |||
| 531 | * Handle 10,000 claims initially | ||
| 532 | * Scale to 1M+ claims | ||
| 533 | * Support 100K+ concurrent users | ||
| 534 | * Automated processing scales linearly | ||
| 535 | |||
| 536 | === 9.3 NFR3 — Transparency === | ||
| 537 | |||
| 538 | **Fulfills**: UN-7 (Evidence transparency), UN-9 (Methodology transparency), UN-13 (Citable verdicts), UN-15 (Evolution visibility) | ||
| 539 | |||
| 540 | * All algorithms open source | ||
| 541 | * All data exportable | ||
| 542 | * All decisions documented | ||
| 543 | * Quality metrics public | ||
| 544 | |||
| 545 | === 9.4 NFR4 — Security & Privacy === | ||
| 546 | |||
| 547 | * Follow [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]] | ||
| 548 | * Secure authentication | ||
| 549 | * Data encryption | ||
| 550 | * Regular security audits | ||
| 551 | |||
| 552 | === 9.5 NFR5 — Maintainability === | ||
| 553 | |||
| 554 | * Modular architecture | ||
| 555 | * Automated testing | ||
| 556 | * Continuous integration | ||
| 557 | * Comprehensive documentation | ||
| 558 | |||
| 559 | == 10. MVP Scope == | ||
| 560 | |||
| 561 | **Phase 1 (Months 1-3): Read-Only MVP** | ||
| 562 | |||
| 563 | Build: | ||
| 564 | * Automated claim analysis | ||
| 565 | * Confidence scoring | ||
| 566 | * Source evaluation | ||
| 567 | * Browse/search interface | ||
| 568 | * User flagging system | ||
| 569 | |||
| 570 | **Goal**: Prove AI quality before adding user editing | ||
| 571 | |||
| 572 | **User Needs fulfilled in Phase 1**: UN-1, UN-2, UN-3, UN-4, UN-5, UN-6, UN-7, UN-8, UN-9, UN-12 | ||
| 573 | |||
| 574 | **Phase 2 (Months 4-6): User Contributions** | ||
| 575 | |||
| 576 | Add only if needed: | ||
| 577 | * Simple editing (Wikipedia-style) | ||
| 578 | * Reputation system | ||
| 579 | * Basic moderation | ||
| 580 | |||
| 581 | **Additional User Needs fulfilled**: UN-13 | ||
| 582 | |||
| 583 | **Phase 3 (Months 7-12): Refinement** | ||
| 584 | |||
| 585 | * Continuous quality improvement | ||
| 586 | * Feature additions based on real usage | ||
| 587 | * Scale infrastructure | ||
| 588 | |||
| 589 | **Additional User Needs fulfilled**: UN-14 (API access), UN-15 (Full evolution tracking) | ||
| 590 | |||
| 591 | **Deferred**: | ||
| 592 | * Federation (until multiple successful instances exist) | ||
| 593 | * Complex contribution workflows (focus on automation) | ||
| 594 | * Extensive role hierarchy (keep simple) | ||
| 595 | |||
| 596 | == 11. Success Metrics == | ||
| 597 | |||
| 598 | **System Quality** (track weekly): | ||
| 599 | * Error rate by category (target: -10%/month) | ||
| 600 | * Average confidence score (target: increase) | ||
| 601 | * Source quality distribution (target: more high-quality) | ||
| 602 | * Contradiction detection rate (target: increase) | ||
| 603 | |||
| 604 | **Efficiency** (track monthly): | ||
| 605 | * Claims processed per hour (target: increase) | ||
| 606 | * Human hours per claim (target: decrease) | ||
| 607 | * Automation coverage (target: >90%) | ||
| 608 | * Re-work rate (target: <5%) | ||
| 609 | |||
| 610 | **User Satisfaction** (track quarterly): | ||
| 611 | * User flag rate (issues found) | ||
| 612 | * Correction acceptance rate (flags valid) | ||
| 613 | * Return user rate | ||
| 614 | * Trust indicators (surveys) | ||
| 615 | |||
| 616 | **User Needs Metrics** (track quarterly): | ||
| 617 | * UN-1: % users who understand trust scores | ||
| 618 | * UN-4: Time to verify social media claim (target: <30s) | ||
| 619 | * UN-7: % users who access evidence details | ||
| 620 | * UN-8: % users who view multiple scenarios | ||
| 621 | * UN-15: % users who check evolution timeline | ||
| 622 | |||
| 623 | == 12. Requirements Traceability == | ||
| 624 | |||
For the full traceability matrix showing which requirements fulfill which user needs, see:
| 626 | |||
| 627 | * [[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]] - Section 8 includes comprehensive mapping tables | ||
| 628 | |||
| 629 | == 13. Related Pages == | ||
| 630 | |||
| 631 | * **[[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]]** - What users need (drives these requirements) | ||
| 632 | * [[Architecture>>FactHarbor.Specification.Architecture.WebHome]] - How requirements are implemented | ||
| 633 | * [[Data Model>>FactHarbor.Specification.Data Model.WebHome]] - Data structures supporting requirements | ||
| 634 | * [[Workflows>>FactHarbor.Specification.Workflows.WebHome]] - User interaction workflows | ||
| 635 | * [[AKEL>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] - AI system fulfilling automation requirements | ||
| 636 | * [[Global Rules>>FactHarbor.Organisation.How-We-Work-Together.GlobalRules.WebHome]] | ||
| 637 | * [[Privacy Policy>>FactHarbor.Organisation.How-We-Work-Together.Privacy-Policy]] |