Changes for page Requirements

Last modified by Robert Schaub on 2025/12/24 20:34

From version 5.1
edited by Robert Schaub
on 2025/12/12 21:13
Change comment: Rollback to version 3.1
To version 8.1
edited by Robert Schaub
on 2025/12/15 16:56
Change comment: Imported from XAR

Summary

Details

Page properties
Content
... ... @@ -1,267 +1,426 @@
1 1  = Requirements =
2 2  
3 -This chapter defines all functional, non-functional, user-role, and federation requirements for FactHarbor.
3 +This page defines **Roles**, **Responsibilities**, and **Rules** for contributors and users of FactHarbor.
4 4  
5 -It answers:
5 +== Roles ==
6 6  
7 -* Who can do what in the system?
8 -* What workflows must FactHarbor support?
9 -* What quality, transparency, and federation guarantees must be met?
7 +=== Reader ===
10 10  
11 -----
9 +**Who**: Anyone (no login required).
12 12  
13 -= User Roles =
11 +**Can**:
12 +* Browse and search claims
13 +* View scenarios, evidence, verdicts, and timelines
14 +* Compare scenarios and explore assumptions
15 +* Flag issues, errors, contradictions, or suspicious patterns
16 +* Use filters, search, and visualization tools
17 +* Create personal views (saved searches, bookmarks - local browser storage)
18 +* **Submit claims automatically** by providing text to analyze - new claims are added automatically unless an identical claim already exists in the system
14 14  
15 -== Reader ==
20 +**Cannot**:
21 +* Modify existing content
22 +* Access draft content
23 +* Participate in governance decisions
16 16  
17 -Responsibilities:
25 +**Note**: Readers can request human review of AI-generated content by flagging it.
18 18  
19 -* Browse and search claims
20 -* View scenarios, evidence, verdicts, and timelines
21 -* Compare scenarios and explore assumptions
22 -* Flag issues, errors, contradictions, or suspicious patterns
27 +=== Contributor ===
23 23  
24 -Permissions:
29 +**Who**: Registered and logged-in users (extends Reader capabilities).
25 25  
26 -* Read-only access to all published claims, scenarios, evidence, and verdicts
27 -* Use filters, search, and visualization tools (“truth landscape”, timelines, scenario comparison, etc.)
28 -* Create personal views (saved searches, bookmarks, etc.)
31 +**Can**:
32 +* Everything a Reader can do
33 +* Submit claims
34 +* Submit evidence
35 +* Provide feedback
36 +* Suggest scenarios
37 +* Flag content for review
38 +* Request human review of AI-generated content
29 29  
30 -Limitations:
40 +**Cannot**:
41 +* Publish or mark content as "reviewed" or "approved"
42 +* Override expert or maintainer decisions
43 +* Directly modify AKEL or quality gate configurations
31 31  
32 -* Cannot change shared content
33 -* Cannot publish new claims, scenarios, or verdicts
45 +=== Reviewer ===
34 34  
35 -----
47 +**Who**: Trusted community members, appointed by maintainers.
36 36  
37 -== Contributor ==
49 +**Can**:
50 +* Review contributions from Contributors and AKEL drafts
51 +* Validate AI-generated content (Mode 2 → Mode 3 transition)
52 +* Edit claims, scenarios, and evidence
53 +* Add clarifications or warnings
54 +* Change content status: `draft` → `in review` → `published` / `rejected`
55 +* Approve or reject **Tier B and C** content for "Human-Reviewed" status
56 +* Flag content for expert review
57 +* Participate in audit sampling
38 38  
39 -Responsibilities:
59 +**Cannot**:
60 +* Approve Tier A content for "Human-Reviewed" status (requires Expert)
61 +* Change governance rules
62 +* Unilaterally change expert conclusions without process
63 +* Bypass quality gates
40 40  
41 -* Submit claims
42 -* Propose new claim clusters (if automatic clustering is insufficient)
43 -* Draft scenarios (definitions, assumptions, boundaries)
44 -* Attach evidence (sources, documents, links, datasets)
45 -* Suggest verdict drafts and uncertainty ranges
46 -* Respond to reviewer or expert feedback
65 +**Note on AI-Drafted Content**:
66 +* Reviewers can validate AI-generated content (Mode 2) to promote it to "Human-Reviewed" (Mode 3)
67 +* For Tier B and C, Reviewers have approval authority
68 +* For Tier A, only Experts can grant "Human-Reviewed" status
47 47  
48 -Permissions:
70 +=== Expert (Domain-Specific) ===
49 49  
50 -* Everything a Reader can do
51 -* Create and edit **draft** claims, scenarios, evidence links, and verdict drafts
52 -* Comment on existing content and discuss assumptions
53 -* Propose corrections to misclassified or outdated content
72 +**Who**: Subject-matter specialists in specific domains (medicine, law, science, etc.).
54 54  
55 -Limitations:
74 +**Can**:
75 +* Everything a Reviewer can do
76 +* **Final authority** on Tier A content "Human-Reviewed" status
77 +* Validate complex or controversial claims in their domain
78 +* Define domain-specific quality standards
79 +* Set reliability thresholds for domain sources
80 +* Participate in risk tier assignment review
81 +* Override AKEL suggestions in their domain (with documentation)
56 56  
57 -* Cannot *publish* or mark content as “reviewed” or “approved”
58 -* Cannot override expert or maintainer decisions
59 -* Cannot change system-level settings, roles, or federation configuration
83 +**Cannot**:
84 +* Change platform governance policies
85 +* Approve content outside their expertise domain
86 +* Bypass technical quality gates (but can flag for adjustment)
60 60  
61 -----
88 +**Specialization**:
89 +* Experts are domain-specific (e.g., "Medical Expert", "Legal Expert", "Climate Science Expert")
90 +* Cross-domain claims may require multiple expert reviews
62 62  
63 -== Reviewer ==
92 +=== Auditor ===
64 64  
65 -Responsibilities:
94 +**Who**: Reviewers or Experts assigned to sampling audit duties.
66 66  
67 -* Review contributions from Contributors and AKEL drafts
68 -* Check internal consistency and clarity of scenarios
69 -* Validate that evidence is correctly linked and described
70 -* Ensure verdicts match the evidence and stated assumptions
71 -* Reject, request change, or accept content
96 +**Can**:
97 +* Review sampled AI-generated content against quality standards
98 +* Validate quality gate enforcement
99 +* Identify patterns in AI errors or hallucinations
100 +* Provide feedback for system improvement
101 +* Flag content for immediate review if errors found
102 +* Contribute to audit statistics and transparency reports
72 72  
73 -Permissions:
104 +**Cannot**:
105 +* Change audit sampling algorithms (maintainer responsibility)
106 +* Bypass normal review workflows
107 +* Audit content they personally created
74 74  
75 -* Everything a Contributor can do
76 -* Change content status from `draft` → `in review` → `published` / `rejected`
77 -* Send content back to Contributors with comments
78 -* Flag content for expert review
109 +**Selection**:
110 +* Auditors selected based on domain expertise and review quality
111 +* Rotation to prevent audit fatigue
112 +* Stratified assignment (Tier A auditors need higher expertise)
79 79  
80 -Limitations:
114 +**Audit Focus**:
115 +* Tier A: Recommended 30-50% sampling rate, expert auditors
116 +* Tier B: Recommended 10-20% sampling rate, reviewer/expert auditors
117 +* Tier C: Recommended 5-10% sampling rate, reviewer auditors
81 81  
82 -* Cannot modify system-wide configuration or federation topology
83 -* Cannot unilaterally change expert conclusions without process
119 +=== Moderator ===
84 84  
85 -----
121 +**Who**: Maintainers or trusted long-term contributors.
86 86  
87 -== Expert ==
123 +**Can**:
124 +* All Reviewer and Expert capabilities (cross-domain)
125 +* Manage user accounts and permissions
126 +* Handle disputes and conflicts
127 +* Enforce community guidelines
128 +* Suspend or ban abusive users
129 +* Finalize publication status for sensitive content
130 +* Review and adjust risk tier assignments
131 +* Oversee audit system performance
88 88  
89 -Responsibilities:
133 +**Cannot**:
134 +* Change core data model or architecture
135 +* Override technical system constraints
136 +* Make unilateral governance decisions without consensus
90 90  
91 -* Provide domain-specific judgment on scenarios, evidence, and verdicts
92 -* Refine assumptions and definitions in complex or ambiguous topics
93 -* Identify subtle biases, missing evidence, or misinterpretations
94 -* Propose improved verdicts and uncertainty assessments
138 +=== Maintainer ===
95 95  
96 -Permissions:
140 +**Who**: Core team members responsible for the platform.
97 97  
98 -* Everything a Reviewer can do
99 -* Attach expert annotations and signed opinions to scenarios and verdicts
100 -* Propose re-evaluation of already published content based on new evidence
142 +**Can**:
143 +* All Moderator capabilities
144 +* Change data model, architecture, and technical systems
145 +* Configure quality gates and AKEL parameters
146 +* Adjust audit sampling algorithms
147 +* Set and modify risk tier policies
148 +* Make platform-wide governance decisions
149 +* Access and modify backend systems
150 +* Deploy updates and fixes
151 +* Grant and revoke roles
101 101  
102 -Limitations:
153 +**Governance**:
154 +* Maintainers operate under organizational governance rules
155 +* Major policy changes require Governing Team approval
156 +* Technical decisions made collaboratively
103 103  
104 -* Expert status is scoped to specific domains
105 -* Cannot bypass moderation, abuse policies, or legal constraints
158 +----
106 106  
160 +== Content Publication States ==
161 +
162 +=== Mode 1: Draft ===
163 +* Not visible to public
164 +* Visible to contributor and reviewers
165 +* Can be edited by contributor or reviewers
166 +* Default state for failed quality gates
167 +
168 +=== Mode 2: AI-Generated (Published) ===
169 +* **Public** and visible to all users
170 +* Clearly labeled as "AI-Generated, Awaiting Human Review"
171 +* Passed all automated quality gates
172 +* Risk tier displayed (A/B/C)
173 +* Users can:
174 + ** Read and use content
175 + ** Request human review
176 + ** Flag for expert attention
177 +* Subject to sampling audits
178 +* Can be promoted to Mode 3 by reviewer/expert validation
179 +
180 +=== Mode 3: Human-Reviewed (Published) ===
181 +* **Public** and visible to all users
182 +* Labeled as "Human-Reviewed" with reviewer/expert attribution
183 +* Passed quality gates + human validation
184 +* Highest trust level
185 +* For Tier A, requires Expert approval
186 +* For Tier B/C, Reviewer approval sufficient
187 +
188 +=== Rejected ===
189 +* Not visible to public
190 +* Visible to contributor with rejection reason
191 +* Can be resubmitted after addressing issues
192 +* Rejection logged for transparency
193 +
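Taken together, Modes 1-3 and the Rejected state form a small publication state machine. The following is a minimal sketch in Python under assumed names (the enum, the transition table, and the role check are illustrative, not the specified data model); it encodes the transitions described above, including the rule that Tier A content needs an Expert to reach Human-Reviewed status.

```python
from enum import Enum

class PublicationState(Enum):
    DRAFT = "Mode 1: Draft"
    AI_GENERATED = "Mode 2: AI-Generated (Published)"
    HUMAN_REVIEWED = "Mode 3: Human-Reviewed (Published)"
    REJECTED = "Rejected"

# Transitions described above: drafts that pass all quality gates become public
# Mode 2 content; reviewer/expert validation promotes Mode 2 to Mode 3; rejected
# content can be resubmitted as a draft after the issues are addressed.
ALLOWED_TRANSITIONS = {
    (PublicationState.DRAFT, PublicationState.AI_GENERATED),
    (PublicationState.DRAFT, PublicationState.REJECTED),
    (PublicationState.AI_GENERATED, PublicationState.HUMAN_REVIEWED),
    (PublicationState.AI_GENERATED, PublicationState.REJECTED),
    (PublicationState.REJECTED, PublicationState.DRAFT),
}

def can_grant_human_reviewed(role: str, risk_tier: str) -> bool:
    """Tier A needs an Expert (or above); Reviewer approval suffices for Tiers B and C."""
    if risk_tier == "A":
        return role in {"Expert", "Moderator", "Maintainer"}
    return role in {"Reviewer", "Expert", "Moderator", "Maintainer"}

def transition(current: PublicationState, target: PublicationState) -> PublicationState:
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target

state = transition(PublicationState.DRAFT, PublicationState.AI_GENERATED)
assert can_grant_human_reviewed("Reviewer", "B")
assert not can_grant_human_reviewed("Reviewer", "A")
```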
107 107  ----
108 108  
109 -== Moderator ==
196 +== Contribution Rules ==
110 110  
111 -Responsibilities:
198 +=== All Contributors Must: ===
199 +* Provide sources for claims
200 +* Use clear, neutral language
201 +* Avoid personal attacks or insults
202 +* Respect intellectual property (cite sources)
203 +* Accept community feedback gracefully
112 112  
113 -* Handle abuse reports, spam, harassment, and coordinated manipulation
114 -* Enforce community guidelines and legal constraints
115 -* Manage user bans, content takedowns, and visibility restrictions
205 +=== AKEL (AI) Must: ===
206 +* Mark all outputs with `AuthorType = AI`
207 +* Pass quality gates before Mode 2 publication
208 +* Perform mandatory contradiction search
209 +* Disclose confidence levels and uncertainty
210 +* Provide traceable reasoning chains
211 +* Flag potential bubbles or echo chambers
212 +* Submit to audit sampling
116 116  
117 -Permissions:
214 +=== Reviewers Must: ===
215 +* Be impartial and evidence-based
216 +* Document reasoning for decisions
217 +* Escalate to experts when appropriate
218 +* Participate in audits when assigned
219 +* Provide constructive feedback
118 118  
119 -* Hide or temporarily disable access to abusive content
120 -* Ban or restrict users in line with policy
121 -* Edit or redact sensitive content (e.g., doxxing, illegal material)
221 +=== Experts Must: ===
222 +* Stay within domain expertise
223 +* Disclose conflicts of interest
224 +* Document specialized terminology
225 +* Provide reasoning for domain-specific decisions
226 +* Participate in Tier A audits
122 122  
123 -Limitations:
228 +----
124 124  
125 -* Does not change factual verdicts except where required for legal / safety reasons
126 -* Substantive fact changes must go through the review / expert process
230 +== Quality Standards ==
127 127  
232 +=== Source Requirements ===
233 +* Primary sources preferred over secondary
234 +* Publication date and author must be identifiable
235 +* Sources must be accessible (not paywalled when possible)
236 +* Contradictory sources must be acknowledged
237 +* Echo chamber sources must be flagged
238 +
239 +=== Claim Requirements ===
240 +* Falsifiable or evaluable
241 +* Clear definitions of key terms
242 +* Boundaries and scope stated
243 +* Assumptions made explicit
244 +* Uncertainty acknowledged
245 +
246 +=== Evidence Requirements ===
247 +* Relevant to the claim and scenario
248 +* Reliability assessment provided
249 +* Methodology described (for studies)
250 +* Limitations noted
251 +* Conflicting evidence acknowledged
252 +
128 128  ----
129 129  
130 -== Maintainer / Administrator ==
255 +== Risk Tier Assignment ==
131 131  
132 -Responsibilities:
257 +* **Automated (AKEL)**: Initial tier suggested based on domain, keywords, impact
258 +* **Human Validation**: Moderators or Experts can override AKEL suggestions
259 +* **Review**: Risk tiers periodically reviewed based on audit outcomes
133 133  
134 -* Maintain node configuration, security settings, and backups
135 -* Configure AKEL, storage, federation endpoints, and performance tuning
136 -* Manage role assignments (who is Reviewer, Expert, Moderator, etc.)
137 -* Approve software updates and schema migrations
261 +**Tier A Indicators**:
262 +* Medical diagnosis or treatment advice
263 +* Legal interpretation or advice
264 +* Election or voting information
265 +* Safety or security sensitive
266 +* Major financial decisions
267 +* Potential for significant harm
138 138  
139 -Permissions:
269 +**Tier B Indicators**:
270 +* Complex scientific causality
271 +* Contested policy domains
272 +* Historical interpretation with political implications
273 +* Significant economic impact claims
140 140  
141 -* All read/write access to configuration, but not necessarily content editorial authority
142 -* Define organization-level policies (e.g., which sources are allowed by default)
275 +**Tier C Indicators**:
276 +* Established historical facts
277 +* Simple definitions
278 +* Well-documented scientific consensus
279 +* Basic reference information
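As an illustration of how AKEL's initial tier suggestion could use the indicator lists above, here is a minimal keyword-based sketch. The keyword sets and function name are assumptions for illustration only; a real classifier would combine domain, keywords, and impact signals, and its output would remain overridable by Moderators and Experts.

```python
# Hypothetical keyword heuristics loosely derived from the indicator lists above.
TIER_A_KEYWORDS = {"diagnosis", "treatment", "legal advice", "election", "voting",
                   "safety", "security", "investment"}
TIER_B_KEYWORDS = {"causes", "policy", "historical", "economic impact"}

def suggest_risk_tier(claim_text: str) -> str:
    """Suggest an initial risk tier (A/B/C); humans can always override."""
    text = claim_text.lower()
    if any(keyword in text for keyword in TIER_A_KEYWORDS):
        return "A"
    if any(keyword in text for keyword in TIER_B_KEYWORDS):
        return "B"
    return "C"  # established facts, simple definitions, well-documented consensus

print(suggest_risk_tier("Vitamin X is an effective treatment for condition Y"))  # "A"
```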
143 143  
144 -Limitations:
281 +----
145 145  
146 -* Editorial decisions on controversial topics follow governance rules, not arbitrary admin choice
147 147  
148 148  ----
149 149  
150 -== AKEL (AI Knowledge Extraction Layer) ==
151 151  
152 -Responsibilities:
153 153  
154 -* Propose drafts — never final decisions
155 -* Normalize claims and extract candidate clusters
156 -* Draft scenarios, evidence candidates, and preliminary verdict suggestions
157 -* Propose re-evaluation when new evidence appears
158 158  
159 -Permissions:
160 160  
161 -* Create and update **machine-generated drafts** and suggestions
162 -* Never directly publish content without human approval
294 +----
163 163  
164 -Limitations:
296 +== Role Hierarchy Diagrams ==
165 165  
166 -* AKEL output is always labeled as AI-generated draft
167 -* Must be reviewable, auditable, and overridable by humans
298 +=== User Class Diagram ===
168 168  
300 +The following class diagram visualizes the complete user role hierarchy:
301 +
302 +{{include reference="Test.FactHarborV09.Specification.Diagrams.User Class Diagram.WebHome"/}}
303 +
304 +=== Human User Roles ===
305 +
306 +This diagram shows the two-track progression for human users:
307 +
308 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Human User Roles.WebHome"/}}
309 +
310 +=== Technical and System Users ===
311 +
312 +This diagram shows system processes and their management:
313 +
314 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Technical and System Users.WebHome"/}}
315 +
316 +**Key Design Principles**:
317 +* **Two tracks from Contributor**: Content Track (Reviewer) and Technical Track (Maintainer)
318 +* **Technical Users**: System processes (AKEL, bots) managed by Maintainers
319 +* **Separation of concerns**: Editorial authority independent from technical authority
320 +
169 169  ----
170 170  
326 +
171 171  = Functional Requirements =
172 172  
173 -This section defines what the system must **do**.
174 174  
330 +
331 +This page defines what the FactHarbor system must **do** to fulfill its mission.
332 +
333 +Requirements are structured as FR (Functional Requirement) items and organized by capability area.
334 +
335 +---
336 +
175 175  == Claim Intake & Normalization ==
176 176  
177 177  === FR1 – Claim Intake ===
178 178  
179 179  The system must support Claim creation from:
342 +* Free-text input (from any Reader)
343 +* URLs (web pages, articles, posts)
344 +* Uploaded documents and transcripts
345 +* Structured feeds (optional, e.g. from partner platforms)
346 +* Automated ingestion (federation input)
347 +* AKEL extraction from multi-claim texts
180 180  
181 -* Free-text input
182 -* URLs (web pages, articles, posts)
183 -* Uploaded documents and transcripts
184 -* Structured feeds (optional, e.g. from partner platforms)
349 +**Automatic submission**: Any Reader can submit text, and new claims are added automatically unless identical claims already exist.
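A minimal sketch of this automatic-submission rule, assuming an in-memory store and a trivial normalization step (both placeholders): a claim is added only when no identical claim is already present.

```python
existing_claims = {}  # maps normalized text -> claim record (in-memory stand-in)

def normalize(text: str) -> str:
    """Very rough normalization stand-in: trim, collapse whitespace, lowercase."""
    return " ".join(text.split()).lower()

def submit_claim(text: str, submitter: str = "anonymous-reader") -> dict:
    key = normalize(text)
    if key in existing_claims:
        return existing_claims[key]   # identical claim exists: no new record is created
    record = {"original_text": text, "normalized_text": key, "submitted_by": submitter}
    existing_claims[key] = record     # otherwise the claim is added automatically
    return record

first = submit_claim("The new bridge opened in 2021.")
second = submit_claim("The new bridge  opened in 2021. ")
assert first is second   # duplicate submission returns the existing claim
```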
185 185  
186 -Accepted sources:
187 -
188 -* Text entered by users
189 -* URLs
190 -* Uploaded documents
191 -* Transcripts
192 -* Automated ingestion (optional federation input)
193 -* AKEL extraction from multi-claim texts
194 -
195 195  === FR2 – Claim Normalization ===
196 196  
197 -* Convert diverse inputs into short, structured, declarative claims
198 -* Preserve original phrasing for reference
199 -* Avoid hidden reinterpretation; differences between original and normalized phrasing must be visible
353 +* Convert diverse inputs into short, structured, declarative claims
354 +* Preserve original phrasing for reference
355 +* Avoid hidden reinterpretation; differences between original and normalized phrasing must be visible
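One way to make this concrete at the data level is to keep the original phrasing alongside the normalized form and expose their difference directly. The field names below are illustrative, not the specified schema.

```python
import difflib
from dataclasses import dataclass

@dataclass
class NormalizedClaim:
    original_text: str     # preserved verbatim for reference (FR2)
    normalized_text: str   # short, structured, declarative form

    def phrasing_diff(self) -> str:
        """Make the difference between original and normalized phrasing visible."""
        return "\n".join(difflib.unified_diff(
            [self.original_text], [self.normalized_text],
            fromfile="original", tofile="normalized", lineterm=""))

claim = NormalizedClaim(
    original_text="Isn't it true that the factory dumped waste into the river?!",
    normalized_text="The factory dumped waste into the river.")
print(claim.phrasing_diff())
```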
200 200  
201 201  === FR3 – Claim Classification ===
202 202  
203 -* Classify claims by topic, domain, and type (e.g., quantitative, causal, normative)
204 -* Suggest which node / experts are relevant
359 +* Classify claims by topic, domain, and type (e.g., quantitative, causal, normative)
360 +* Assign risk tier (A/B/C) based on domain and potential impact
361 +* Suggest which node / experts are relevant
205 205  
206 206  === FR4 – Claim Clustering ===
207 207  
208 -* Group similar claims into Claim Clusters
209 -* Allow manual correction of cluster membership
210 -* Provide explanation why two claims are considered same cluster
365 +* Group similar claims into Claim Clusters
366 +* Allow manual correction of cluster membership
367 +* Provide explanation why two claims are considered "same cluster"
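A toy sketch of the clustering idea: claims whose embeddings are similar enough join the same cluster, and the similarity score doubles as the explanation for the grouping. The bag-of-words embedding and the threshold are stand-ins; a real deployment would use semantic embeddings and allow the manual corrections described above.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[token] * b[token] for token in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def same_cluster(claim_a: str, claim_b: str, threshold: float = 0.6) -> tuple[bool, str]:
    score = cosine_similarity(embed(claim_a), embed(claim_b))
    explanation = f"similarity {score:.2f} vs threshold {threshold}"  # FR4: explain the grouping
    return score >= threshold, explanation

print(same_cluster("the vaccine reduces transmission",
                   "the vaccine reduces transmission significantly"))
```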
211 211  
212 -----
369 +---
213 213  
214 214  == Scenario System ==
215 215  
216 216  === FR5 – Scenario Creation ===
217 217  
218 -* Contributors, Reviewers, and Experts can create scenarios
219 -* AKEL can propose draft scenarios
220 -* Each scenario is tied to exactly one Claim Cluster
375 +* Contributors, Reviewers, and Experts can create scenarios
376 +* AKEL can propose draft scenarios
377 +* Each scenario is tied to exactly one Claim Cluster
221 221  
222 222  === FR6 – Required Scenario Fields ===
223 223  
224 224  Each scenario includes:
382 +* Definitions (key terms)
383 +* Assumptions (explicit, testable where possible)
384 +* Boundaries (time, geography, population, conditions)
385 +* Scope of evidence considered
386 +* Intended decision / context (optional)
225 225  
226 -* Definitions (key terms)
227 -* Assumptions (explicit, testable where possible)
228 -* Boundaries (time, geography, population, conditions)
229 -* Scope of evidence considered
230 -* Intended decision / context (optional)
231 -
232 232  === FR7 – Scenario Versioning ===
233 233  
234 -* Every change to a scenario creates a new version
235 -* Previous versions remain accessible with timestamps and rationale
390 +* Every change to a scenario creates a new version
391 +* Previous versions remain accessible with timestamps and rationale
392 +* ParentVersionID links versions
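A minimal sketch of the versioning rule, with illustrative field names: every edit produces a new immutable record that points to its predecessor via ParentVersionID, so earlier versions stay reachable together with timestamp and rationale.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ScenarioVersion:
    version_id: int
    parent_version_id: Optional[int]   # ParentVersionID links versions (None for the first)
    text: str
    rationale: str
    created_at: datetime

versions = {}  # version_id -> ScenarioVersion; previous versions remain accessible

def save_new_version(text: str, rationale: str, parent: Optional[ScenarioVersion]) -> ScenarioVersion:
    version = ScenarioVersion(
        version_id=len(versions) + 1,
        parent_version_id=parent.version_id if parent else None,
        text=text, rationale=rationale,
        created_at=datetime.now(timezone.utc))
    versions[version.version_id] = version
    return version

v1 = save_new_version("Scenario covers 2010-2020, EU only.", "initial draft", None)
v2 = save_new_version("Scenario covers 2010-2022, EU only.", "extended time boundary", v1)
assert versions[v2.parent_version_id] is v1
```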
236 236  
237 237  === FR8 – Scenario Comparison ===
238 238  
239 -* Users can compare scenarios side by side
240 -* Show differences in assumptions, definitions, and evidence sets
396 +* Users can compare scenarios side by side
397 +* Show differences in assumptions, definitions, and evidence sets
241 241  
242 -----
399 +---
243 243  
244 244  == Evidence Management ==
245 245  
246 246  === FR9 – Evidence Ingestion ===
247 247  
248 -* Attach external sources (articles, studies, datasets, reports, transcripts) to Scenarios
249 -* Allow multiple pieces of evidence per Scenario
405 +* Attach external sources (articles, studies, datasets, reports, transcripts) to Scenarios
406 +* Allow multiple pieces of evidence per Scenario
407 +* Support large file uploads (with size limits)
250 250  
251 251  === FR10 – Evidence Assessment ===
252 252  
253 253  For each piece of evidence:
412 +* Assign reliability / quality ratings
413 +* Capture who rated it and why
414 +* Indicate known limitations, biases, or conflicts of interest
415 +* Track evidence version history
254 254  
255 -* Assign reliability / quality ratings
256 -* Capture who rated it and why
257 -* Indicate known limitations, biases, or conflicts of interest
258 -
259 259  === FR11 – Evidence Linking ===
260 260  
261 -* Link one piece of evidence to multiple scenarios if relevant
262 -* Make dependencies explicit (e.g., “Scenario A uses subset of evidence used in Scenario B”)
419 +* Link one piece of evidence to multiple scenarios if relevant
420 +* Make dependencies explicit (e.g., "Scenario A uses subset of evidence used in Scenario B")
421 +* Use ScenarioEvidenceLink table with RelevanceScore
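A sketch of the many-to-many link described above, with assumed field names: one evidence item can be attached to several scenarios, and each link carries its own RelevanceScore.

```python
from dataclasses import dataclass

@dataclass
class ScenarioEvidenceLink:
    scenario_id: int
    evidence_id: int
    relevance_score: float   # 0.0-1.0: how strongly this evidence bears on this scenario

links = [
    ScenarioEvidenceLink(scenario_id=1, evidence_id=42, relevance_score=0.9),
    ScenarioEvidenceLink(scenario_id=2, evidence_id=42, relevance_score=0.4),  # same evidence, other scenario
]

def evidence_for_scenario(scenario_id: int) -> list:
    """Return this scenario's evidence links, most relevant first."""
    return sorted((link for link in links if link.scenario_id == scenario_id),
                  key=lambda link: link.relevance_score, reverse=True)

print(evidence_for_scenario(1))
```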
263 263  
264 -----
423 +---
265 265  
266 266  == Verdicts & Truth Landscape ==
267 267  
... ... @@ -268,174 +268,217 @@
268 268  === FR12 – Scenario Verdicts ===
269 269  
270 270  For each Scenario:
430 +* Provide a **probability- or likelihood-based verdict**
431 +* Capture uncertainty and reasoning
432 +* Distinguish between AKEL draft and human-approved verdict
433 +* Support Mode 1 (draft), Mode 2 (AI-generated), Mode 3 (human-reviewed)
271 271  
272 -* Provide a **probability- or likelihood-based verdict**
273 -* Capture uncertainty and reasoning
274 -* Distinguish between AKEL draft and human-approved verdict
275 -
276 276  === FR13 – Truth Landscape ===
277 277  
278 -* Aggregate all scenario-specific verdicts into a “truth landscape” for a claim
279 -* Make disagreements visible rather than collapsing them into a single binary result
437 +* Aggregate all scenario-specific verdicts into a "truth landscape" for a claim
438 +* Make disagreements visible rather than collapsing them into a single binary result
439 +* Show parallel scenarios and their respective verdicts
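A sketch of the aggregation idea: the truth landscape is simply the set of per-scenario verdicts presented side by side, never collapsed into a single number. The verdict structure and labels are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ScenarioVerdict:
    scenario: str
    likelihood: float          # probability-style verdict (FR12)
    uncertainty: str
    reviewed_by_human: bool    # Mode 3 vs Mode 2

def truth_landscape(verdicts: list) -> list:
    """Aggregate without collapsing: every scenario keeps its own verdict."""
    return [{"scenario": v.scenario,
             "likelihood": v.likelihood,
             "uncertainty": v.uncertainty,
             "label": "Human-Reviewed" if v.reviewed_by_human else "AI-Generated"}
            for v in verdicts]

landscape = truth_landscape([
    ScenarioVerdict("Strict definition, 2010-2020 data", 0.25, "moderate", True),
    ScenarioVerdict("Broad definition, all available data", 0.70, "high", False),
])
for entry in landscape:
    print(entry)   # disagreement between scenarios stays visible
```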
280 280  
281 281  === FR14 – Time Evolution ===
282 282  
283 -* Show how verdicts and evidence evolve over time
284 -* Allow users to see “as of date X, what did we know?”
443 +* Show how verdicts and evidence evolve over time
444 +* Allow users to see "as of date X, what did we know?"
445 +* Maintain complete version history for auditing
285 285  
286 -----
447 +---
287 287  
288 288  == Workflow, Moderation & Audit ==
289 289  
290 290  === FR15 – Workflow States ===
291 291  
292 -* Draft → In Review → Published / Rejected
293 -* Separate states for Claims, Scenarios, Evidence, and Verdicts
453 +* Draft → In Review → Published / Rejected
454 +* Separate states for Claims, Scenarios, Evidence, and Verdicts
455 +* Support Mode 1/2/3 publication model
294 294  
295 295  === FR16 – Moderation & Abuse Handling ===
296 296  
297 -* Allow Moderators to hide content or lock edits for abuse or legal reasons
298 -* Keep internal audit trail even if public view is restricted
459 +* Allow Moderators to hide content or lock edits for abuse or legal reasons
460 +* Keep internal audit trail even if public view is restricted
461 +* Support user reporting and flagging
299 299  
300 300  === FR17 – Audit Trail ===
301 301  
302 -* Every significant action (create, edit, publish, delete/hide) is logged with:
303 - * Who did it
304 - * When
305 - * What changed
306 - * Why (short comment, optional but recommended)
465 +* Every significant action (create, edit, publish, delete/hide) is logged with:
466 + ** Who did it
467 + ** When (timestamp)
468 + ** What changed (diffs)
469 + ** Why (justification text)
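A minimal sketch of such an audit record, with assumed field names covering who, when, what changed, and why.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    actor: str            # who did it
    action: str           # create / edit / publish / delete / hide
    diff: str             # what changed
    justification: str    # why
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log = []

def record(actor: str, action: str, diff: str, justification: str) -> None:
    audit_log.append(AuditEntry(actor, action, diff, justification))

record("reviewer-17", "publish", "status: in review -> published", "evidence verified")
print(json.dumps([asdict(entry) for entry in audit_log], indent=2))
```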
307 307  
308 -----
471 +---
309 309  
310 -= Federation Requirements =
473 +== Quality Gates & AI Review ==
311 311  
312 -FactHarbor is designed to operate as a **federated network of nodes**.
475 +=== FR18 – Quality Gate Validation ===
313 313  
314 -=== FR18 – Node Autonomy ===
477 +Before publishing AI-generated content (Mode 2), enforce:
478 +* Gate 1: Source Quality
479 +* Gate 2: Contradiction Search (MANDATORY)
480 +* Gate 3: Uncertainty Quantification
481 +* Gate 4: Structural Integrity
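A sketch of how the four gates could be chained before Mode 2 publication. The individual check functions are placeholders; the point is that every gate must pass, and a failure at any gate (including the mandatory contradiction search) keeps the content in Draft.

```python
from typing import Callable

Draft = dict  # stand-in for an AKEL draft record

def source_quality_ok(draft: Draft) -> bool:
    return bool(draft.get("sources"))                       # Gate 1: at least one identifiable source

def contradiction_search_done(draft: Draft) -> bool:
    return draft.get("contradiction_search_ran", False)     # Gate 2: MANDATORY

def uncertainty_quantified(draft: Draft) -> bool:
    return "uncertainty" in draft                           # Gate 3: confidence/uncertainty disclosed

def structurally_valid(draft: Draft) -> bool:
    return all(key in draft for key in ("claim", "scenario"))  # Gate 4: required structure present

QUALITY_GATES: list = [
    ("Gate 1: Source Quality", source_quality_ok),
    ("Gate 2: Contradiction Search", contradiction_search_done),
    ("Gate 3: Uncertainty Quantification", uncertainty_quantified),
    ("Gate 4: Structural Integrity", structurally_valid),
]

def passes_quality_gates(draft: Draft) -> tuple:
    """Return (ok, failed_gate_names); any failure keeps the draft in Mode 1."""
    failures = [name for name, check in QUALITY_GATES if not check(draft)]
    return not failures, failures

ok, failed = passes_quality_gates({
    "claim": "...", "scenario": "...",
    "sources": ["https://example.org/report"],
    "contradiction_search_ran": True,
    "uncertainty": "moderate",
})
print(ok, failed)   # True, []
```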
315 315  
316 -* Each node can run independently (local policies, local users, local moderation)
317 -* Nodes decide which other nodes to federate with
483 +=== FR19 – Audit Sampling ===
318 318  
319 -=== FR19 – Data Sharing Modes ===
485 +* Implement stratified sampling by risk tier
486 +* Recommended rates: 30-50% for Tier A, 10-20% for Tier B, 5-10% for Tier C
487 +* Support audit workflow and feedback loop
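A sketch of stratified sampling at roughly the recommended rates; the rates come from the figures above, while the sampling mechanics and names are illustrative.

```python
import random

# Midpoints of the recommended ranges above; actual rates are a policy decision.
SAMPLING_RATES = {"A": 0.40, "B": 0.15, "C": 0.07}

def select_for_audit(published_items: list, seed: int = 0) -> list:
    """Stratified sampling: each item is drawn with the probability of its risk tier."""
    rng = random.Random(seed)
    return [item for item in published_items
            if rng.random() < SAMPLING_RATES[item["risk_tier"]]]

items = [{"id": i, "risk_tier": random.choice("ABC")} for i in range(1000)]
print(len(select_for_audit(items)), "of", len(items), "items sampled for audit")
```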
320 320  
321 -Nodes must be able to:
489 +=== FR20 – Risk Tier Assignment ===
322 322  
323 -* Share claims and summaries only
324 -* Share selected claims, scenarios, and verdicts
325 -* Share full underlying evidence metadata where allowed
326 -* Opt-out of sharing sensitive or restricted content
491 +* AKEL suggests tier based on domain, keywords, impact
492 +* Moderators and Experts can override
493 +* Risk tier affects publication workflow
327 327  
328 -=== FR20 – Synchronization & Conflict Handling ===
495 +---
329 329  
330 -* Changes from remote nodes must be mergeable or explicitly conflict-marked
331 -* Conflicting verdicts are allowed and visible; not forced into consensus
497 +== Federation Requirements ==
332 332  
333 -=== FR21 – Federation Discovery ===
499 +=== FR21 – Node Autonomy ===
334 334  
335 -* Discover other nodes and their capabilities (public endpoints, policies)
336 -* Allow whitelisting / blacklisting of nodes
501 +* Each node can run independently (local policies, local users, local moderation)
502 +* Nodes decide which other nodes to federate with
503 +* Trust levels: Trusted / Neutral / Untrusted
337 337  
338 -**Basic federation** (minimum):
505 +=== FR22 – Data Sharing Modes ===
339 339  
340 -* Subscribe to and import selected claims and scenarios from other nodes
341 -* Keep provenance (which node originated what)
342 -* Respect remote deletion / redaction notices where required by policy or law
507 +Nodes must be able to:
508 +* Share claims and summaries only
509 +* Share selected claims, scenarios, and verdicts
510 +* Share full underlying evidence metadata where allowed
511 +* Opt-out of sharing sensitive or restricted content
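A sketch of a per-node sharing policy covering these modes, with assumed names; the opt-out set models the exclusion of sensitive or restricted content.

```python
from dataclasses import dataclass, field
from enum import Enum

class SharingMode(Enum):
    CLAIMS_AND_SUMMARIES = 1        # share claims and summaries only
    CLAIMS_SCENARIOS_VERDICTS = 2   # share selected claims, scenarios, and verdicts
    FULL_EVIDENCE_METADATA = 3      # share full underlying evidence metadata where allowed

@dataclass
class NodeSharingPolicy:
    mode: SharingMode
    excluded_topics: set = field(default_factory=set)  # opt-out for sensitive/restricted content

    def may_share(self, item_kind: str, topic: str) -> bool:
        if topic in self.excluded_topics:
            return False
        allowed = {
            SharingMode.CLAIMS_AND_SUMMARIES: {"claim", "summary"},
            SharingMode.CLAIMS_SCENARIOS_VERDICTS: {"claim", "summary", "scenario", "verdict"},
            SharingMode.FULL_EVIDENCE_METADATA: {"claim", "summary", "scenario", "verdict", "evidence_metadata"},
        }[self.mode]
        return item_kind in allowed

policy = NodeSharingPolicy(SharingMode.CLAIMS_SCENARIOS_VERDICTS, excluded_topics={"ongoing-court-case"})
print(policy.may_share("verdict", "climate"), policy.may_share("verdict", "ongoing-court-case"))
```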
343 343  
344 -Advanced federation (later versions):
513 +=== FR23 – Synchronization & Conflict Handling ===
345 345  
346 -* Cross-node search
347 -* Federation-wide discovery and reputation signals
515 +* Changes from remote nodes must be mergeable or explicitly conflict-marked
516 +* Conflicting verdicts are allowed and visible; not forced into consensus
517 +* Support push/pull/subscription synchronization
348 348  
349 -----
519 +=== FR24 – Federation Discovery ===
350 350  
351 -= Non-Functional Requirements (NFR) =
521 +* Discover other nodes and their capabilities (public endpoints, policies)
522 +* Allow whitelisting / blacklisting of nodes
523 +* Global identifier format: `factharbor://node_url/type/local_id`
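A sketch of composing and parsing this identifier format; the helper names are assumptions, the format itself is as given above.

```python
from urllib.parse import urlparse

def make_global_id(node_url: str, item_type: str, local_id: str) -> str:
    """Compose the federation-wide identifier: factharbor://node_url/type/local_id"""
    return f"factharbor://{node_url}/{item_type}/{local_id}"

def parse_global_id(global_id: str) -> dict:
    parsed = urlparse(global_id)
    if parsed.scheme != "factharbor":
        raise ValueError("not a FactHarbor identifier")
    item_type, local_id = parsed.path.lstrip("/").split("/", 1)
    return {"node_url": parsed.netloc, "type": item_type, "local_id": local_id}

gid = make_global_id("node.example.org", "claim", "12345")
print(parse_global_id(gid))   # {'node_url': 'node.example.org', 'type': 'claim', 'local_id': '12345'}
```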
352 352  
353 -== NFR1Transparency ==
525 +=== FR25Cross-Node AI Knowledge Exchange ===
354 354  
355 -* All assumptions, evidence, and reasoning behind verdicts must be visible
356 -* AKEL involvement must be clearly labeled
357 -* Users must be able to inspect the chain of reasoning and versions
527 +* Share vector embeddings for clustering
528 +* Share canonical claim forms
529 +* Share scenario templates
530 +* Share contradiction alerts
531 +* NEVER share model weights
532 +* NEVER override local governance
358 358  
359 -== NFR2 – Security ==
534 +---
360 360  
361 -* Role-based access control
362 -* Transport-level security (HTTPS)
363 -* Secure storage of secrets (API keys, credentials)
364 -* Audit trails for sensitive actions
536 +== Non-Functional Requirements ==
365 365  
366 -== NFR3Privacy & Compliance ==
538 +=== NFR1Transparency ===
367 367  
368 -* Configurable data retention policies
369 -* Ability to redact or pseudonymize personal data when required
370 -* Compliance hooks for jurisdiction-specific rules (e.g. GDPR-like deletion requests)
540 +* All assumptions, evidence, and reasoning behind verdicts must be visible
541 +* AKEL involvement must be clearly labeled
542 +* Users must be able to inspect the chain of reasoning and versions
371 371  
372 -== NFR4Performance ==
544 +=== NFR2Security ===
373 373  
374 -* POC: typical interactions < 2 s
375 -* Release 1.0: < 300 ms for common read operations after caching
376 -* Degradation strategies under load (e.g. partial federation results, limited history)
546 +* Role-based access control
547 +* Transport-level security (HTTPS)
548 +* Secure storage of secrets (API keys, credentials)
549 +* Audit trails for sensitive actions
377 377  
378 -== NFR5Scalability ==
551 +=== NFR3Privacy & Compliance ===
379 379  
380 -* POC: **Fully automated text-to-truth-landscape** pipeline for validation.
381 -* Beta 0: ~100 external testers on one node
382 -* Release 1.0: **2000+ concurrent users** on a reasonably provisioned node
553 +* Configurable data retention policies
554 +* Ability to redact or pseudonymize personal data when required
555 +* Compliance hooks for jurisdiction-specific rules (e.g. GDPR-like deletion requests)
383 383  
384 -Suggested technical targets for Release 1.0:
557 +=== NFR4 – Performance ===
385 385  
386 -* Scalable monolith or early microservice architecture
387 -* Sharded vector database (for semantic search)
388 -* Optional IPFS or other decentralized storage for large artefacts
389 -* Horizontal scalability for read capacity
559 +* POC: typical interactions < 2 s
560 +* Release 1.0: < 300 ms for common read operations after caching
561 +* Degradation strategies under load
390 390  
391 -== NFR6Interoperability ==
563 +=== NFR5Scalability ===
392 392  
393 -* Open, documented API
394 -* Modular AKEL that can be swapped or extended
395 -* Federation protocols that follow open standards where possible
396 -* Standard model for external integrations (e.g. news platforms, research tools)
565 +* POC: 50 internal testers on one node
566 +* Beta 0: 100 external testers on one node
567 +* Release 1.0: **2000+ concurrent users** on a reasonably provisioned node
397 397  
398 -== NFR7 – Observability & Operations ==
569 +Technical targets for Release 1.0:
570 +* Scalable monolith or early microservice architecture
571 +* Sharded vector database (for semantic search)
572 +* Optional IPFS or other decentralized storage for large artifacts
573 +* Horizontal scalability for read capacity
399 399  
400 -* Metrics for performance, errors, and queue backlogs
401 -* Logs for key flows (claim intake, scenario changes, verdict updates, federation sync)
402 -* Health endpoints for monitoring
575 +=== NFR6 – Interoperability ===
403 403  
404 -== NFR8 – Maintainability ==
577 +* Open, documented API
578 +* Modular AKEL that can be swapped or extended
579 +* Federation protocols that follow open standards where possible
580 +* Standard model for external integrations
405 405  
406 -* Clear module boundaries (API, core services, AKEL, storage, federation)
407 -* Backward-compatible schema migration strategy where feasible
408 -* Configuration via files / environment variables, not hard-coded
582 +=== NFR7 – Observability & Operations ===
409 409  
410 -== NFR9 – Usability ==
584 +* Metrics for performance, errors, and queue backlogs
585 +* Logs for key flows (claim intake, scenario changes, verdict updates, federation sync)
586 +* Health endpoints for monitoring
411 411  
412 -* UI optimized for **exploring complexity**, not hiding it
413 -* Support for saved views, filters, and user-level preferences
414 -* Progressive disclosure: casual users see summaries, advanced users can dive deep
588 +=== NFR8 – Maintainability ===
415 415  
416 -----
590 +* Clear module boundaries (API, core services, AKEL, storage, federation)
591 +* Backward-compatible schema migration strategy where feasible
592 +* Configuration via files / environment variables, not hard-coded
417 417  
418 -= Release Levels =
594 +=== NFR9 – Usability ===
419 419  
596 +* UI optimized for **exploring complexity**, not hiding it
597 +* Support for saved views, filters, and user-level preferences
598 +* Progressive disclosure: casual users see summaries, advanced users can dive deep
599 +
600 +---
601 +
602 +== Release Levels ==
603 +
420 420  === Proof of Concept (POC) ===
421 421  
422 -* **Status:** Fully Automated "Text to Truth Landscape"
423 -* **Focus:** Validating automated extraction, scenario generation, and verdict computation without human-in-the-loop.
424 -* **Goal:** Demonstrate model capability on raw text input.
606 +* Single node
607 +* Limited user set (50 internal testers)
608 +* Basic claim → scenario → evidence → verdict flow
609 +* Minimal federation (optional)
610 +* AI-generated publication (Mode 2) demonstration
611 +* Quality gates active
425 425  
426 426  === Beta 0 ===
427 427  
428 -* One or few nodes
429 -* External testers (~100)
430 -* Expanded workflows and basic moderation
431 -* Initial federation experiments
615 +* One or few nodes
616 +* External testers (100)
617 +* Expanded workflows and basic moderation
618 +* Initial federation experiments
619 +* Audit sampling implemented
432 432  
433 433  === Release 1.0 ===
434 434  
435 -* 2000+ concurrent users
436 -* Scalable monolith or early microservices
437 -* Sharded vector DB
438 -* IPFS optional
439 -* High automation (AKEL assistance)
440 -* Multi-node federation
623 +* 2000+ concurrent users
624 +* Scalable architecture
625 +* Sharded vector DB
626 +* IPFS optional
627 +* High automation (AKEL assistance)
628 +* Multi-node federation with full sync protocol
629 +* Mature audit system
441 441  
631 +---
632 +
633 +
634 +
635 +== Related Pages ==
636 +
637 +
638 +
639 +* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
640 +* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
641 +* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
642 +* [[Governance>>FactHarbor.Organisation.Governance]]
643 +