Changes for page Requirements
Last modified by Robert Schaub on 2025/12/24 20:34
Summary

* Page properties (1 modified, 0 added, 0 removed)

Details

* Page properties
** Content
@@ -9,7 +9,6 @@
 **Who**: Anyone (no login required).
 
 **Can**:
-
 * Browse and search claims
 * View scenarios, evidence, verdicts, and timelines
 * Compare scenarios and explore assumptions
@@ -19,7 +19,6 @@
 * **Submit claims automatically** by providing text to analyze - new claims are added automatically unless equal claims already exist in the system
 
 **Cannot**:
-
 * Modify existing content
 * Access draft content
 * Participate in governance decisions
@@ -31,7 +31,6 @@
 **Who**: Registered and logged-in users (extends Reader capabilities).
 
 **Can**:
-
 * Everything a Reader can do
 * Submit claims
 * Submit evidence
@@ -41,7 +41,6 @@
 * Request human review of AI-generated content
 
 **Cannot**:
-
 * Publish or mark content as "reviewed" or "approved"
 * Override expert or maintainer decisions
 * Directly modify AKEL or quality gate configurations
@@ -51,7 +51,6 @@
 **Who**: Trusted community members, appointed by maintainers.
 
 **Can**:
-
 * Review contributions from Contributors and AKEL drafts
 * Validate AI-generated content (Mode 2 → Mode 3 transition)
 * Edit claims, scenarios, and evidence
@@ -62,7 +62,6 @@
 * Participate in audit sampling
 
 **Cannot**:
-
 * Approve Tier A content for "Human-Reviewed" status (requires Expert)
 * Change governance rules
 * Unilaterally change expert conclusions without process
@@ -69,7 +69,6 @@
 * Bypass quality gates
 
 **Note on AI-Drafted Content**:
-
 * Reviewers can validate AI-generated content (Mode 2) to promote it to "Human-Reviewed" (Mode 3)
 * For Tier B and C, Reviewers have approval authority
 * For Tier A, only Experts can grant "Human-Reviewed" status
@@ -79,7 +79,6 @@
 **Who**: Subject-matter specialists in specific domains (medicine, law, science, etc.).
 
 **Can**:
-
 * Everything a Reviewer can do
 * **Final authority** on Tier A content "Human-Reviewed" status
 * Validate complex or controversial claims in their domain
@@ -89,13 +89,11 @@
 * Override AKEL suggestions in their domain (with documentation)
 
 **Cannot**:
-
 * Change platform governance policies
 * Approve content outside their expertise domain
 * Bypass technical quality gates (but can flag for adjustment)
 
 **Specialization**:
-
 * Experts are domain-specific (e.g., "Medical Expert", "Legal Expert", "Climate Science Expert")
 * Cross-domain claims may require multiple expert reviews
 
@@ -104,7 +104,6 @@
 **Who**: Reviewers or Experts assigned to sampling audit duties.
 
 **Can**:
-
 * Review sampled AI-generated content against quality standards
 * Validate quality gate enforcement
 * Identify patterns in AI errors or hallucinations
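Editor's note: the role rules recorded in the hunks above amount to a tiered approval check (Reviewers approve Tier B/C, only Experts grant Tier A "Human-Reviewed" status). A minimal sketch of that check follows; the role ordering, enum, and function name are illustrative assumptions, not part of the specification.

{{code language="python"}}
# Illustrative sketch only. It mirrors the rules recorded above:
# "For Tier B and C, Reviewers have approval authority; for Tier A,
# only Experts can grant 'Human-Reviewed' status."
from enum import IntEnum


class Role(IntEnum):
    READER = 0
    CONTRIBUTOR = 1
    REVIEWER = 2
    EXPERT = 3


def can_grant_human_reviewed(role: Role, risk_tier: str) -> bool:
    """Return True if `role` may promote content of `risk_tier` to Human-Reviewed."""
    if risk_tier == "A":
        return role >= Role.EXPERT
    if risk_tier in ("B", "C"):
        return role >= Role.REVIEWER
    return False
{{/code}}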
@@ -113,19 +113,16 @@
 * Contribute to audit statistics and transparency reports
 
 **Cannot**:
-
 * Change audit sampling algorithms (maintainer responsibility)
 * Bypass normal review workflows
 * Audit content they personally created
 
 **Selection**:
-
 * Auditors selected based on domain expertise and review quality
 * Rotation to prevent audit fatigue
 * Stratified assignment (Tier A auditors need higher expertise)
 
 **Audit Focus**:
-
 * Tier A: Recommendation 30-50% sampling rate, expert auditors
 * Tier B: Recommendation 10-20% sampling rate, reviewer/expert auditors
 * Tier C: Recommendation 5-10% sampling rate, reviewer auditors
@@ -135,7 +135,6 @@
 **Who**: Maintainers or trusted long-term contributors.
 
 **Can**:
-
 * All Reviewer and Expert capabilities (cross-domain)
 * Manage user accounts and permissions
 * Handle disputes and conflicts
@@ -146,7 +146,6 @@
 * Oversee audit system performance
 
 **Cannot**:
-
 * Change core data model or architecture
 * Override technical system constraints
 * Make unilateral governance decisions without consensus
@@ -156,7 +156,6 @@
 **Who**: Core team members responsible for the platform.
 
 **Can**:
-
 * All Moderator capabilities
 * Change data model, architecture, and technical systems
 * Configure quality gates and AKEL parameters
@@ -168,7 +168,6 @@
 * Grant and revoke roles
 
 **Governance**:
-
 * Maintainers operate under organizational governance rules
 * Major policy changes require Governing Team approval
 * Technical decisions made collaboratively
@@ -178,7 +178,6 @@
 == Content Publication States ==
 
 === Mode 1: Draft ===
-
 * Not visible to public
 * Visible to contributor and reviewers
 * Can be edited by contributor or reviewers
@@ -185,20 +185,18 @@
 * Default state for failed quality gates
 
 === Mode 2: AI-Generated (Published) ===
-
 * **Public** and visible to all users
 * Clearly labeled as "AI-Generated, Awaiting Human Review"
 * Passed all automated quality gates
 * Risk tier displayed (A/B/C)
 * Users can:
-** Read and use content
-** Request human review
-** Flag for expert attention
+ ** Read and use content
+ ** Request human review
+ ** Flag for expert attention
 * Subject to sampling audits
 * Can be promoted to Mode 3 by reviewer/expert validation
 
 === Mode 3: Human-Reviewed (Published) ===
-
 * **Public** and visible to all users
 * Labeled as "Human-Reviewed" with reviewer/expert attribution
 * Passed quality gates + human validation
@@ -207,7 +207,6 @@
 * For Tier B/C, Reviewer approval sufficient
 
 === Rejected ===
-
 * Not visible to public
 * Visible to contributor with rejection reason
 * Can be resubmitted after addressing issues
@@ -218,7 +218,6 @@
 == Contribution Rules ==
 
 === All Contributors Must: ===
-
 * Provide sources for claims
 * Use clear, neutral language
 * Avoid personal attacks or insults
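Editor's note: the publication states recorded above (Mode 1 Draft, Mode 2 AI-Generated, Mode 3 Human-Reviewed, plus Rejected) describe a small state machine. A sketch of only the transitions the text names, with illustrative state names and no claim to be the platform's actual workflow engine:

{{code language="python"}}
# Minimal sketch, assuming illustrative state names.
# Only transitions explicitly described in the recorded content are listed:
# Mode 1 -> Mode 2 after passing quality gates, Mode 2 -> Mode 3 via
# reviewer/expert validation, Rejected -> Draft on resubmission.
ALLOWED_TRANSITIONS = {
    "mode1_draft": {"mode2_ai_generated"},          # passed all automated quality gates
    "mode2_ai_generated": {"mode3_human_reviewed"},  # reviewer/expert validation
    "mode3_human_reviewed": set(),                   # terminal in this sketch
    "rejected": {"mode1_draft"},                     # resubmitted after addressing issues
}


def can_transition(current: str, target: str) -> bool:
    """Check whether a publication-state change is one the text describes."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
{{/code}}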
@@ -226,7 +226,6 @@
 * Accept community feedback gracefully
 
 === AKEL (AI) Must: ===
-
 * Mark all outputs with `AuthorType = AI`
 * Pass quality gates before Mode 2 publication
 * Perform mandatory contradiction search
@@ -236,7 +236,6 @@
 * Submit to audit sampling
 
 === Reviewers Must: ===
-
 * Be impartial and evidence-based
 * Document reasoning for decisions
 * Escalate to experts when appropriate
@@ -244,7 +244,6 @@
 * Provide constructive feedback
 
 === Experts Must: ===
-
 * Stay within domain expertise
 * Disclose conflicts of interest
 * Document specialized terminology
@@ -256,7 +256,6 @@
 == Quality Standards ==
 
 === Source Requirements ===
-
 * Primary sources preferred over secondary
 * Publication date and author must be identifiable
 * Sources must be accessible (not paywalled when possible)
@@ -264,7 +264,6 @@
 * Echo chamber sources must be flagged
 
 === Claim Requirements ===
-
 * Falsifiable or evaluable
 * Clear definitions of key terms
 * Boundaries and scope stated
@@ -272,7 +272,6 @@
 * Uncertainty acknowledged
 
 === Evidence Requirements ===
-
 * Relevant to the claim and scenario
 * Reliability assessment provided
 * Methodology described (for studies)
@@ -288,7 +288,6 @@
 **Review**: Risk tiers periodically reviewed based on audit outcomes
 
 **Tier A Indicators**:
-
 * Medical diagnosis or treatment advice
 * Legal interpretation or advice
 * Election or voting information
@@ -297,7 +297,6 @@
 * Potential for significant harm
 
 **Tier B Indicators**:
-
 * Complex scientific causality
 * Contested policy domains
 * Historical interpretation with political implications
@@ -304,7 +304,6 @@
 * Significant economic impact claims
 
 **Tier C Indicators**:
-
 * Established historical facts
 * Simple definitions
 * Well-documented scientific consensus
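Editor's note: the Tier A/B/C indicator lists recorded above lend themselves to a first-pass tier suggestion. The following is a naive, illustrative keyword lookup only; the actual AKEL tiering logic is not specified here, the phrase lists are paraphrased from the indicators, the fallback tier is an arbitrary assumption, and Moderators and Experts can override any suggestion.

{{code language="python"}}
# Naive sketch: suggest a risk tier by matching indicator phrases drawn from
# the Tier A/B/C lists above. Real tier assignment is AKEL's job and can be
# overridden by Moderators and Experts.
TIER_INDICATORS = {
    "A": ["medical diagnosis", "treatment advice", "legal interpretation",
          "election", "voting information"],
    "B": ["scientific causality", "contested policy", "historical interpretation",
          "economic impact"],
    "C": ["established historical fact", "simple definition", "scientific consensus"],
}


def suggest_risk_tier(claim_text: str, fallback: str = "B") -> str:
    """Return the first tier whose indicator phrases appear in the claim text.

    The fallback tier is an assumption for this sketch, not a platform rule.
    """
    text = claim_text.lower()
    for tier in ("A", "B", "C"):
        if any(phrase in text for phrase in TIER_INDICATORS[tier]):
            return tier
    return fallback
{{/code}}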
@@ -312,371 +312,10 @@
 
 ----
 
-
-----
-
-== User Role Hierarchy Diagram ==
-
-The following diagram visualizes the complete role hierarchy:
-
-{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.User Class Diagram.WebHome"/}}
-
-----
-
-----
-
-== Role Hierarchy Diagrams ==
-
-=== User Class Diagram ===
-
-The following class diagram visualizes the complete user role hierarchy:
-
-{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.User Class Diagram.WebHome"/}}
-
-=== Human User Roles ===
-
-This diagram shows the two-track progression for human users:
-
-{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Human User Roles.WebHome"/}}
-
-=== Technical and System Users ===
-
-This diagram shows system processes and their management:
-
-{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Technical and System Users.WebHome"/}}
-
-**Key Design Principles**:
-
-* **Two tracks from Contributor**: Content Track (Reviewer) and Technical Track (Maintainer)
-* **Technical Users**: System processes (AKEL, bots) managed by Maintainers
-* **Separation of concerns**: Editorial authority independent from technical authority
-
-----
-
-
-
-----
-
-= Functional Requirements =
-
-
-
-This page defines what the FactHarbor system must **do** to fulfill its mission.
-
-Requirements are structured as FR (Functional Requirement) items and organized by capability area.
-
-----
-
-== Claim Intake & Normalization ==
-
-=== FR1 – Claim Intake ===
-
-The system must support Claim creation from:
-
-* Free-text input (from any Reader)
-* URLs (web pages, articles, posts)
-* Uploaded documents and transcripts
-* Structured feeds (optional, e.g. from partner platforms)
-* Automated ingestion (federation input)
-* AKEL extraction from multi-claim texts
-
-**Automatic submission**: Any Reader can submit text, and new claims are added automatically unless identical claims already exist.
-
-=== FR2 – Claim Normalization ===
-
-* Convert diverse inputs into short, structured, declarative claims
-* Preserve original phrasing for reference
-* Avoid hidden reinterpretation; differences between original and normalized phrasing must be visible
-
-=== FR3 – Claim Classification ===
-
-* Classify claims by topic, domain, and type (e.g., quantitative, causal, normative)
-* Assign risk tier (A/B/C) based on domain and potential impact
-* Suggest which node / experts are relevant
-
-=== FR4 – Claim Clustering ===
-
-* Group similar claims into Claim Clusters
-* Allow manual correction of cluster membership
-* Provide explanation why two claims are considered "same cluster"
-
-----
-
-== Scenario System ==
-
-=== FR5 – Scenario Creation ===
-
-* Contributors, Reviewers, and Experts can create scenarios
-* AKEL can propose draft scenarios
-* Each scenario is tied to exactly one Claim Cluster
-
-=== FR6 – Required Scenario Fields ===
-
-Each scenario includes:
-
-* Definitions (key terms)
-* Assumptions (explicit, testable where possible)
-* Boundaries (time, geography, population, conditions)
-* Scope of evidence considered
-* Intended decision / context (optional)
-
-=== FR7 – Scenario Versioning ===
-
-* Every change to a scenario creates a new version
-* Previous versions remain accessible with timestamps and rationale
-* ParentVersionID links versions
-
-=== FR8 – Scenario Comparison ===
-
-* Users can compare scenarios side by side
-* Show differences in assumptions, definitions, and evidence sets
-
-----
-
-== Evidence Management ==
-
-=== FR9 – Evidence Ingestion ===
-
-* Attach external sources (articles, studies, datasets, reports, transcripts) to Scenarios
-* Allow multiple pieces of evidence per Scenario
-* Support large file uploads (with size limits)
-
-=== FR10 – Evidence Assessment ===
-
-For each piece of evidence:
-
-* Assign reliability / quality ratings
-* Capture who rated it and why
-* Indicate known limitations, biases, or conflicts of interest
-* Track evidence version history
-
-=== FR11 – Evidence Linking ===
-
-* Link one piece of evidence to multiple scenarios if relevant
-* Make dependencies explicit (e.g., "Scenario A uses subset of evidence used in Scenario B")
-* Use ScenarioEvidenceLink table with RelevanceScore
-
-----
-
-== Verdicts & Truth Landscape ==
-
-=== FR12 – Scenario Verdicts ===
-
-For each Scenario:
-
-* Provide a **probability- or likelihood-based verdict**
-* Capture uncertainty and reasoning
-* Distinguish between AKEL draft and human-approved verdict
-* Support Mode 1 (draft), Mode 2 (AI-generated), Mode 3 (human-reviewed)
-
-=== FR13 – Truth Landscape ===
-
-* Aggregate all scenario-specific verdicts into a "truth landscape" for a claim
-* Make disagreements visible rather than collapsing them into a single binary result
-* Show parallel scenarios and their respective verdicts
-
-=== FR14 – Time Evolution ===
-
-* Show how verdicts and evidence evolve over time
-* Allow users to see "as of date X, what did we know?"
-* Maintain complete version history for auditing
-
-----
-
-== Workflow, Moderation & Audit ==
-
-=== FR15 – Workflow States ===
-
-* Draft → In Review → Published / Rejected
-* Separate states for Claims, Scenarios, Evidence, and Verdicts
-* Support Mode 1/2/3 publication model
-
-=== FR16 – Moderation & Abuse Handling ===
-
-* Allow Moderators to hide content or lock edits for abuse or legal reasons
-* Keep internal audit trail even if public view is restricted
-* Support user reporting and flagging
-
-=== FR17 – Audit Trail ===
-
-* Every significant action (create, edit, publish, delete/hide) is logged with:
-** Who did it
-** When (timestamp)
-** What changed (diffs)
-** Why (justification text)
-
-----
-
-== Quality Gates & AI Review ==
-
-=== FR18 – Quality Gate Validation ===
-
-Before AI-generated content (Mode 2) publication, enforce:
-
-* Gate 1: Source Quality
-* Gate 2: Contradiction Search (MANDATORY)
-* Gate 3: Uncertainty Quantification
-* Gate 4: Structural Integrity
-
-=== FR19 – Audit Sampling ===
-
-* Implement stratified sampling by risk tier
-* Recommendation: 30-50% Tier A, 10-20% Tier B, 5-10% Tier C
-* Support audit workflow and feedback loop
-
-=== FR20 – Risk Tier Assignment ===
-
-* AKEL suggests tier based on domain, keywords, impact
-* Moderators and Experts can override
-* Risk tier affects publication workflow
-
-----
-
-== Federation Requirements ==
-
-=== FR21 – Node Autonomy ===
-
-* Each node can run independently (local policies, local users, local moderation)
-* Nodes decide which other nodes to federate with
-* Trust levels: Trusted / Neutral / Untrusted
-
-=== FR22 – Data Sharing Modes ===
-
-Nodes must be able to:
-
-* Share claims and summaries only
-* Share selected claims, scenarios, and verdicts
-* Share full underlying evidence metadata where allowed
-* Opt-out of sharing sensitive or restricted content
-
-=== FR23 – Synchronization & Conflict Handling ===
-
-* Changes from remote nodes must be mergeable or explicitly conflict-marked
-* Conflicting verdicts are allowed and visible; not forced into consensus
-* Support push/pull/subscription synchronization
-
-=== FR24 – Federation Discovery ===
-
-* Discover other nodes and their capabilities (public endpoints, policies)
-* Allow whitelisting / blacklisting of nodes
-* Global identifier format: `factharbor://node_url/type/local_id`
-
-=== FR25 – Cross-Node AI Knowledge Exchange ===
-
-* Share vector embeddings for clustering
-* Share canonical claim forms
-* Share scenario templates
-* Share contradiction alerts
-* NEVER share model weights
-* NEVER override local governance
-
-----
-
-== Non-Functional Requirements ==
-
-=== NFR1 – Transparency ===
-
-* All assumptions, evidence, and reasoning behind verdicts must be visible
-* AKEL involvement must be clearly labeled
-* Users must be able to inspect the chain of reasoning and versions
-
-=== NFR2 – Security ===
-
-* Role-based access control
-* Transport-level security (HTTPS)
-* Secure storage of secrets (API keys, credentials)
-* Audit trails for sensitive actions
-
-=== NFR3 – Privacy & Compliance ===
-
-* Configurable data retention policies
-* Ability to redact or pseudonymize personal data when required
-* Compliance hooks for jurisdiction-specific rules (e.g. GDPR-like deletion requests)
-
-=== NFR4 – Performance ===
-
-* POC: typical interactions < 2 s
-* Release 1.0: < 300 ms for common read operations after caching
-* Degradation strategies under load
-
-=== NFR5 – Scalability ===
-
-* POC: 50 internal testers on one node
-* Beta 0: 100 external testers on one node
-* Release 1.0: **2000+ concurrent users** on a reasonably provisioned node
-
-Technical targets for Release 1.0:
-
-* Scalable monolith or early microservice architecture
-* Sharded vector database (for semantic search)
-* Optional IPFS or other decentralized storage for large artifacts
-* Horizontal scalability for read capacity
-
-=== NFR6 – Interoperability ===
-
-* Open, documented API
-* Modular AKEL that can be swapped or extended
-* Federation protocols that follow open standards where possible
-* Standard model for external integrations
-
-=== NFR7 – Observability & Operations ===
-
-* Metrics for performance, errors, and queue backlogs
-* Logs for key flows (claim intake, scenario changes, verdict updates, federation sync)
-* Health endpoints for monitoring
-
-=== NFR8 – Maintainability ===
-
-* Clear module boundaries (API, core services, AKEL, storage, federation)
-* Backward-compatible schema migration strategy where feasible
-* Configuration via files / environment variables, not hard-coded
-
-=== NFR9 – Usability ===
-
-* UI optimized for **exploring complexity**, not hiding it
-* Support for saved views, filters, and user-level preferences
-* Progressive disclosure: casual users see summaries, advanced users can dive deep
-
-----
-
-== Release Levels ==
-
-=== Proof of Concept (POC) ===
-
-* Single node
-* Limited user set (50 internal testers)
-* Basic claim → scenario → evidence → verdict flow
-* Minimal federation (optional)
-* AI-generated publication (Mode 2) demonstration
-* Quality gates active
-
-=== Beta 0 ===
-
-* One or few nodes
-* External testers (100)
-* Expanded workflows and basic moderation
-* Initial federation experiments
-* Audit sampling implemented
-
-=== Release 1.0 ===
-
-* 2000+ concurrent users
-* Scalable architecture
-* Sharded vector DB
-* IPFS optional
-* High automation (AKEL assistance)
-* Multi-node federation with full sync protocol
-* Mature audit system
-
-----
-
-
-
 == Related Pages ==
 
-
-
-* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification V0\.9\.18.AI Knowledge Extraction Layer (AKEL).WebHome]]
-* [[Automation>>FactHarbor.Specification V0\.9\.18.Automation.WebHome]]
+* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
+* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
 * [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
 * [[Governance>>FactHarbor.Organisation.Governance]]
+
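Editor's note: among the functional requirements removed in the diff above, FR24 quotes a global identifier format, `factharbor://node_url/type/local_id`. A minimal sketch of composing and splitting that format is given below; the helper names are hypothetical, and it assumes `node_url` contains no slashes, which the requirement does not spell out.

{{code language="python"}}
# Sketch only: build and split identifiers of the form quoted in FR24,
# factharbor://node_url/type/local_id. Assumes node_url has no '/' in it.
def make_global_id(node_url: str, entity_type: str, local_id: str) -> str:
    return f"factharbor://{node_url}/{entity_type}/{local_id}"


def parse_global_id(global_id: str) -> tuple[str, str, str]:
    prefix = "factharbor://"
    if not global_id.startswith(prefix):
        raise ValueError("not a factharbor identifier")
    # First segment is the node, second the entity type, remainder the local id.
    node_url, entity_type, local_id = global_id[len(prefix):].split("/", 2)
    return node_url, entity_type, local_id
{{/code}}

For example, `make_global_id("node.example.org", "claim", "1234")` would yield `factharbor://node.example.org/claim/1234`, and `parse_global_id` reverses it.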