Changes for page Requirements

Last modified by Robert Schaub on 2025/12/24 20:34

From version 6.1
edited by Robert Schaub
on 2025/12/14 18:59
Change comment: Imported from XAR
To version 5.1
edited by Robert Schaub
on 2025/12/12 21:13
Change comment: Rollback to version 3.1


Page properties
Content
... ... @@ -1,268 +1,441 @@
1 1  = Requirements =
2 2  
3 -This page defines **Roles**, **Responsibilities**, and **Rules** for contributors and users of FactHarbor.
3 +This chapter defines all functional, non-functional, user-role, and federation requirements for FactHarbor.
4 4  
5 -== Roles ==
5 +It answers:
6 6  
7 -=== Contributor ===
7 +* Who can do what in the system?
8 +* What workflows must FactHarbor support?
9 +* What quality, transparency, and federation guarantees must be met?
8 8  
9 -**Who**: Anyone (logged in or anonymous).
11 +----
10 10  
11 -**Can**:
12 -* Submit claims
13 -* Submit evidence
14 -* Provide feedback
15 -* Suggest scenarios
16 -* Flag content for review
17 -* Request human review of AI-generated content
13 += User Roles =
18 18  
19 -**Cannot**:
20 -* Publish or mark content as "reviewed" or "approved"
21 -* Override expert or maintainer decisions
22 -* Directly modify AKEL or quality gate configurations
15 +== Reader ==
23 23  
24 -=== Reviewer ===
17 +Responsibilities:
25 25  
26 -**Who**: Trusted community members, appointed by maintainers.
19 +* Browse and search claims
20 +* View scenarios, evidence, verdicts, and timelines
21 +* Compare scenarios and explore assumptions
22 +* Flag issues, errors, contradictions, or suspicious patterns
27 27  
28 -**Can**:
29 -* Review contributions from Contributors and AKEL drafts
30 -* Validate AI-generated content (Mode 2 → Mode 3 transition)
31 -* Edit claims, scenarios, and evidence
32 -* Add clarifications or warnings
33 -* Change content status: `draft` → `in review` → `published` / `rejected`
34 -* Approve or reject **Tier B and C** content for "Human-Reviewed" status
35 -* Flag content for expert review
36 -* Participate in audit sampling
24 +Permissions:
37 37  
38 -**Cannot**:
39 -* Approve Tier A content for "Human-Reviewed" status (requires Expert)
40 -* Change governance rules
41 -* Unilaterally change expert conclusions without process
42 -* Bypass quality gates
26 +* Read-only access to all published claims, scenarios, evidence, and verdicts
27 +* Use filters, search, and visualization tools (“truth landscape”, timelines, scenario comparison, etc.)
28 +* Create personal views (saved searches, bookmarks, etc.)
43 43  
44 -**Note on AI-Drafted Content**:
45 -* Reviewers can validate AI-generated content (Mode 2) to promote it to "Human-Reviewed" (Mode 3)
46 -* For Tier B and C, Reviewers have approval authority
47 -* For Tier A, only Experts can grant "Human-Reviewed" status
30 +Limitations:
48 48  
49 -=== Expert (Domain-Specific) ===
32 +* Cannot change shared content
33 +* Cannot publish new claims, scenarios, or verdicts
50 50  
51 -**Who**: Subject-matter specialists in specific domains (medicine, law, science, etc.).
35 +----
52 52  
53 -**Can**:
54 -* Everything a Reviewer can do
55 -* **Final authority** on Tier A content "Human-Reviewed" status
56 -* Validate complex or controversial claims in their domain
57 -* Define domain-specific quality standards
58 -* Set reliability thresholds for domain sources
59 -* Participate in risk tier assignment review
60 -* Override AKEL suggestions in their domain (with documentation)
37 +== Contributor ==
61 61  
62 -**Cannot**:
63 -* Change platform governance policies
64 -* Approve content outside their expertise domain
65 -* Bypass technical quality gates (but can flag for adjustment)
39 +Responsibilities:
66 66  
67 -**Specialization**:
68 -* Experts are domain-specific (e.g., "Medical Expert", "Legal Expert", "Climate Science Expert")
69 -* Cross-domain claims may require multiple expert reviews
41 +* Submit claims
42 +* Propose new claim clusters (if automatic clustering is insufficient)
43 +* Draft scenarios (definitions, assumptions, boundaries)
44 +* Attach evidence (sources, documents, links, datasets)
45 +* Suggest verdict drafts and uncertainty ranges
46 +* Respond to reviewer or expert feedback
70 70  
71 -=== Auditor ===
48 +Permissions:
72 72  
73 -**Who**: Reviewers or Experts assigned to sampling audit duties.
50 +* Everything a Reader can do
51 +* Create and edit **draft** claims, scenarios, evidence links, and verdict drafts
52 +* Comment on existing content and discuss assumptions
53 +* Propose corrections to misclassified or outdated content
74 74  
75 -**Can**:
76 -* Review sampled AI-generated content against quality standards
77 -* Validate quality gate enforcement
78 -* Identify patterns in AI errors or hallucinations
79 -* Provide feedback for system improvement
80 -* Flag content for immediate review if errors found
81 -* Contribute to audit statistics and transparency reports
55 +Limitations:
82 82  
83 -**Cannot**:
84 -* Change audit sampling algorithms (maintainer responsibility)
85 -* Bypass normal review workflows
86 -* Audit content they personally created
57 +* Cannot *publish* or mark content as “reviewed” or “approved”
58 +* Cannot override expert or maintainer decisions
59 +* Cannot change system-level settings, roles, or federation configuration
87 87  
88 -**Selection**:
89 -* Auditors selected based on domain expertise and review quality
90 -* Rotation to prevent audit fatigue
91 -* Stratified assignment (Tier A auditors need higher expertise)
61 +----
92 92  
93 -**Audit Focus**:
 94 -* Tier A: Recommended 30-50% sampling rate, expert auditors
 95 -* Tier B: Recommended 10-20% sampling rate, reviewer/expert auditors
 96 -* Tier C: Recommended 5-10% sampling rate, reviewer auditors
63 +== Reviewer ==
97 97  
98 -=== Moderator ===
65 +Responsibilities:
99 99  
100 -**Who**: Maintainers or trusted long-term contributors.
67 +* Review contributions from Contributors and AKEL drafts
68 +* Check internal consistency and clarity of scenarios
69 +* Validate that evidence is correctly linked and described
70 +* Ensure verdicts match the evidence and stated assumptions
 71 +* Reject, request changes, or accept content
101 101  
102 -**Can**:
103 -* All Reviewer and Expert capabilities (cross-domain)
104 -* Manage user accounts and permissions
105 -* Handle disputes and conflicts
106 -* Enforce community guidelines
107 -* Suspend or ban abusive users
108 -* Finalize publication status for sensitive content
109 -* Review and adjust risk tier assignments
110 -* Oversee audit system performance
73 +Permissions:
111 111  
112 -**Cannot**:
113 -* Change core data model or architecture
114 -* Override technical system constraints
115 -* Make unilateral governance decisions without consensus
75 +* Everything a Contributor can do
76 +* Change content status from `draft` → `in review` → `published` / `rejected`
77 +* Send content back to Contributors with comments
78 +* Flag content for expert review
116 116  
117 -=== Maintainer ===
80 +Limitations:
118 118  
119 -**Who**: Core team members responsible for the platform.
82 +* Cannot modify system-wide configuration or federation topology
83 +* Cannot unilaterally change expert conclusions without process
120 120  
121 -**Can**:
122 -* All Moderator capabilities
123 -* Change data model, architecture, and technical systems
124 -* Configure quality gates and AKEL parameters
125 -* Adjust audit sampling algorithms
126 -* Set and modify risk tier policies
127 -* Make platform-wide governance decisions
128 -* Access and modify backend systems
129 -* Deploy updates and fixes
130 -* Grant and revoke roles
85 +----
131 131  
132 -**Governance**:
133 -* Maintainers operate under organizational governance rules
134 -* Major policy changes require Governing Team approval
135 -* Technical decisions made collaboratively
87 +== Expert ==
136 136  
89 +Responsibilities:
90 +
91 +* Provide domain-specific judgment on scenarios, evidence, and verdicts
92 +* Refine assumptions and definitions in complex or ambiguous topics
93 +* Identify subtle biases, missing evidence, or misinterpretations
94 +* Propose improved verdicts and uncertainty assessments
95 +
96 +Permissions:
97 +
98 +* Everything a Reviewer can do
99 +* Attach expert annotations and signed opinions to scenarios and verdicts
100 +* Propose re-evaluation of already published content based on new evidence
101 +
102 +Limitations:
103 +
104 +* Expert status is scoped to specific domains
105 +* Cannot bypass moderation, abuse policies, or legal constraints
106 +
137 137  ----
138 138  
139 -== Content Publication States ==
109 +== Moderator ==
140 140  
141 -=== Mode 1: Draft ===
142 -* Not visible to public
143 -* Visible to contributor and reviewers
144 -* Can be edited by contributor or reviewers
145 -* Default state for failed quality gates
111 +Responsibilities:
146 146  
147 -=== Mode 2: AI-Generated (Published) ===
148 -* **Public** and visible to all users
149 -* Clearly labeled as "AI-Generated, Awaiting Human Review"
150 -* Passed all automated quality gates
151 -* Risk tier displayed (A/B/C)
152 -* Users can:
153 - ** Read and use content
154 - ** Request human review
155 - ** Flag for expert attention
156 -* Subject to sampling audits
157 -* Can be promoted to Mode 3 by reviewer/expert validation
113 +* Handle abuse reports, spam, harassment, and coordinated manipulation
114 +* Enforce community guidelines and legal constraints
115 +* Manage user bans, content takedowns, and visibility restrictions
158 158  
159 -=== Mode 3: Human-Reviewed (Published) ===
160 -* **Public** and visible to all users
161 -* Labeled as "Human-Reviewed" with reviewer/expert attribution
162 -* Passed quality gates + human validation
163 -* Highest trust level
164 -* For Tier A, requires Expert approval
165 -* For Tier B/C, Reviewer approval sufficient
117 +Permissions:
166 166  
167 -=== Rejected ===
168 -* Not visible to public
169 -* Visible to contributor with rejection reason
170 -* Can be resubmitted after addressing issues
171 -* Rejection logged for transparency
119 +* Hide or temporarily disable access to abusive content
120 +* Ban or restrict users in line with policy
121 +* Edit or redact sensitive content (e.g., doxxing, illegal material)
172 172  
123 +Limitations:
124 +
 125 +* Cannot change factual verdicts except where required for legal or safety reasons
126 +* Substantive fact changes must go through the review / expert process
127 +
173 173  ----
174 174  
175 -== Contribution Rules ==
130 +== Maintainer / Administrator ==
176 176  
177 -=== All Contributors Must: ===
178 -* Provide sources for claims
179 -* Use clear, neutral language
180 -* Avoid personal attacks or insults
181 -* Respect intellectual property (cite sources)
182 -* Accept community feedback gracefully
132 +Responsibilities:
183 183  
184 -=== AKEL (AI) Must: ===
185 -* Mark all outputs with `AuthorType = AI`
186 -* Pass quality gates before Mode 2 publication
187 -* Perform mandatory contradiction search
188 -* Disclose confidence levels and uncertainty
189 -* Provide traceable reasoning chains
190 -* Flag potential bubbles or echo chambers
191 -* Submit to audit sampling
134 +* Maintain node configuration, security settings, and backups
135 +* Configure AKEL, storage, federation endpoints, and performance tuning
136 +* Manage role assignments (who is Reviewer, Expert, Moderator, etc.)
137 +* Approve software updates and schema migrations
192 192  
193 -=== Reviewers Must: ===
194 -* Be impartial and evidence-based
195 -* Document reasoning for decisions
196 -* Escalate to experts when appropriate
197 -* Participate in audits when assigned
198 -* Provide constructive feedback
139 +Permissions:
199 199  
200 -=== Experts Must: ===
201 -* Stay within domain expertise
202 -* Disclose conflicts of interest
203 -* Document specialized terminology
204 -* Provide reasoning for domain-specific decisions
205 -* Participate in Tier A audits
 141 +* Full read/write access to configuration, but not necessarily editorial authority over content
142 +* Define organization-level policies (e.g., which sources are allowed by default)
206 206  
144 +Limitations:
145 +
146 +* Editorial decisions on controversial topics follow governance rules, not arbitrary admin choice
147 +
207 207  ----
208 208  
209 -== Quality Standards ==
150 +== AKEL (AI Knowledge Extraction Layer) ==
210 210  
211 -=== Source Requirements ===
212 -* Primary sources preferred over secondary
213 -* Publication date and author must be identifiable
214 -* Sources must be accessible (not paywalled when possible)
215 -* Contradictory sources must be acknowledged
216 -* Echo chamber sources must be flagged
152 +Responsibilities:
217 217  
218 -=== Claim Requirements ===
219 -* Falsifiable or evaluable
220 -* Clear definitions of key terms
221 -* Boundaries and scope stated
222 -* Assumptions made explicit
223 -* Uncertainty acknowledged
154 +* Propose drafts — never final decisions
155 +* Normalize claims and extract candidate clusters
156 +* Draft scenarios, evidence candidates, and preliminary verdict suggestions
157 +* Propose re-evaluation when new evidence appears
224 224  
225 -=== Evidence Requirements ===
226 -* Relevant to the claim and scenario
227 -* Reliability assessment provided
228 -* Methodology described (for studies)
229 -* Limitations noted
230 -* Conflicting evidence acknowledged
159 +Permissions:
231 231  
161 +* Create and update **machine-generated drafts** and suggestions
 162 +* Cannot publish content directly; human approval is always required
163 +
164 +Limitations:
165 +
166 +* AKEL output is always labeled as AI-generated draft
167 +* Must be reviewable, auditable, and overridable by humans
168 +
232 232  ----
233 233  
234 -== Risk Tier Assignment ==
171 += Functional Requirements =
235 235  
236 -**Automated (AKEL)**: Initial tier suggested based on domain, keywords, impact
237 -**Human Validation**: Moderators or Experts can override AKEL suggestions
238 -**Review**: Risk tiers periodically reviewed based on audit outcomes
173 +This section defines what the system must **do**.
239 239  
240 -**Tier A Indicators**:
241 -* Medical diagnosis or treatment advice
242 -* Legal interpretation or advice
243 -* Election or voting information
244 -* Safety or security sensitive
245 -* Major financial decisions
246 -* Potential for significant harm
175 +== Claim Intake & Normalization ==
247 247  
248 -**Tier B Indicators**:
249 -* Complex scientific causality
250 -* Contested policy domains
251 -* Historical interpretation with political implications
252 -* Significant economic impact claims
177 +=== FR1 – Claim Intake ===
253 253  
254 -**Tier C Indicators**:
255 -* Established historical facts
256 -* Simple definitions
257 -* Well-documented scientific consensus
258 -* Basic reference information
179 +The system must support Claim creation from:
259 259  
181 +* Free-text input
182 +* URLs (web pages, articles, posts)
183 +* Uploaded documents and transcripts
184 +* Structured feeds (optional, e.g. from partner platforms)
185 +
186 +Accepted sources:
187 +
188 +* Text entered by users
189 +* URLs
190 +* Uploaded documents
191 +* Transcripts
192 +* Automated ingestion (optional federation input)
193 +* AKEL extraction from multi-claim texts
194 +
195 +=== FR2 – Claim Normalization ===
196 +
197 +* Convert diverse inputs into short, structured, declarative claims
198 +* Preserve original phrasing for reference
 199 +* Avoid hidden reinterpretation; any difference between the original and the normalized phrasing must be visible
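
As a rough illustration, FR2's requirement that normalization never hides reinterpretation could be modeled like this; all names and fields are hypothetical, not FactHarbor's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class NormalizedClaim:
    """Sketch of FR2: the structured claim carries the original phrasing
    alongside the normalized form, so reviewers can see what changed."""
    original_text: str        # verbatim input, never rewritten
    normalized_text: str      # short, declarative restatement
    source_url: Optional[str] = None

    @property
    def was_rephrased(self) -> bool:
        # Differences between original and normalized phrasing must be
        # visible, not silently hidden (FR2).
        return self.original_text.strip() != self.normalized_text.strip()

claim = NormalizedClaim(
    original_text="Didn't unemployment basically double last year??",
    normalized_text="Unemployment doubled in the past year.",
)
```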
200 +
201 +=== FR3 – Claim Classification ===
202 +
203 +* Classify claims by topic, domain, and type (e.g., quantitative, causal, normative)
 204 +* Suggest which nodes and experts are relevant
205 +
206 +=== FR4 – Claim Clustering ===
207 +
208 +* Group similar claims into Claim Clusters
209 +* Allow manual correction of cluster membership
 210 +* Provide an explanation of why two claims are placed in the same cluster
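
The cluster-explanation requirement in FR4 can be sketched with a deliberately simple token-overlap score; a real implementation would use semantic embeddings, and every name below is illustrative rather than part of FactHarbor:

```python
def cluster_similarity(a: str, b: str) -> tuple[float, str]:
    """Toy stand-in for semantic clustering (FR4): returns a similarity
    score plus a human-readable explanation of why two claims match."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    shared = tokens_a & tokens_b
    # Jaccard similarity: shared vocabulary over combined vocabulary.
    score = len(shared) / len(tokens_a | tokens_b)
    explanation = f"shared terms: {sorted(shared)}"
    return score, explanation

score, why = cluster_similarity(
    "Unemployment doubled last year",
    "Unemployment doubled in the last year",
)
```

The key design point is the second return value: cluster membership must be explainable and manually correctable, not an opaque model output.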
211 +
260 260  ----
261 261  
262 -== Related Pages ==
214 +== Scenario System ==
263 263  
264 -* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
265 -* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
266 -* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
267 -* [[Governance>>FactHarbor.Organisation.Governance]]
216 +=== FR5 – Scenario Creation ===
268 268  
218 +* Contributors, Reviewers, and Experts can create scenarios
219 +* AKEL can propose draft scenarios
220 +* Each scenario is tied to exactly one Claim Cluster
221 +
222 +=== FR6 – Required Scenario Fields ===
223 +
224 +Each scenario includes:
225 +
226 +* Definitions (key terms)
227 +* Assumptions (explicit, testable where possible)
228 +* Boundaries (time, geography, population, conditions)
229 +* Scope of evidence considered
230 +* Intended decision / context (optional)
231 +
232 +=== FR7 – Scenario Versioning ===
233 +
234 +* Every change to a scenario creates a new version
235 +* Previous versions remain accessible with timestamps and rationale
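
A minimal sketch of FR7's append-only versioning, assuming a hypothetical `ScenarioHistory` container (not the actual data model):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScenarioVersion:
    number: int
    assumptions: tuple[str, ...]
    rationale: str            # why this version was created
    created_at: datetime      # timestamp required by FR7

class ScenarioHistory:
    """Append-only history: every edit creates a version, never overwrites."""
    def __init__(self) -> None:
        self._versions: list[ScenarioVersion] = []

    def commit(self, assumptions: tuple[str, ...], rationale: str) -> ScenarioVersion:
        v = ScenarioVersion(
            number=len(self._versions) + 1,
            assumptions=assumptions,
            rationale=rationale,
            created_at=datetime.now(timezone.utc),
        )
        self._versions.append(v)
        return v

    def all_versions(self) -> list[ScenarioVersion]:
        return list(self._versions)  # old versions stay accessible

history = ScenarioHistory()
history.commit(("2020-2024", "EU only"), rationale="initial draft")
history.commit(("2020-2024", "EU only", "official statistics"),
               rationale="narrowed evidence scope")
```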
236 +
237 +=== FR8 – Scenario Comparison ===
238 +
239 +* Users can compare scenarios side by side
240 +* Show differences in assumptions, definitions, and evidence sets
241 +
242 +----
243 +
244 +== Evidence Management ==
245 +
246 +=== FR9 – Evidence Ingestion ===
247 +
248 +* Attach external sources (articles, studies, datasets, reports, transcripts) to Scenarios
249 +* Allow multiple pieces of evidence per Scenario
250 +
251 +=== FR10 – Evidence Assessment ===
252 +
253 +For each piece of evidence:
254 +
255 +* Assign reliability / quality ratings
256 +* Capture who rated it and why
257 +* Indicate known limitations, biases, or conflicts of interest
258 +
259 +=== FR11 – Evidence Linking ===
260 +
261 +* Link one piece of evidence to multiple scenarios if relevant
 262 +* Make dependencies explicit (e.g., “Scenario A uses a subset of the evidence used in Scenario B”)
263 +
264 +----
265 +
266 +== Verdicts & Truth Landscape ==
267 +
268 +=== FR12 – Scenario Verdicts ===
269 +
270 +For each Scenario:
271 +
272 +* Provide a **probability- or likelihood-based verdict**
273 +* Capture uncertainty and reasoning
274 +* Distinguish between AKEL draft and human-approved verdict
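
FR12's likelihood-based verdict, with an explicit uncertainty range and a draft/approved flag, might look like this sketch (field names are assumptions, not the real schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Verdict:
    """Sketch of FR12: a probability with an explicit uncertainty
    interval, plus a flag separating AKEL drafts from human-approved
    verdicts."""
    probability: float            # best estimate that the scenario holds
    low: float                    # lower bound of the uncertainty range
    high: float                   # upper bound of the uncertainty range
    reasoning: str                # captured reasoning, required by FR12
    human_approved: bool = False  # False = AKEL draft

    def __post_init__(self) -> None:
        # Basic sanity check on the stated uncertainty.
        if not (0.0 <= self.low <= self.probability <= self.high <= 1.0):
            raise ValueError("bounds must satisfy 0 <= low <= p <= high <= 1")

draft = Verdict(probability=0.7, low=0.55, high=0.85,
                reasoning="three independent studies, one contradicting")
```

Keeping `human_approved` on the verdict itself (rather than as a workflow side channel) makes the AKEL/human distinction impossible to lose during federation or export.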
275 +
276 +=== FR13 – Truth Landscape ===
277 +
278 +* Aggregate all scenario-specific verdicts into a “truth landscape” for a claim
279 +* Make disagreements visible rather than collapsing them into a single binary result
280 +
281 +=== FR14 – Time Evolution ===
282 +
283 +* Show how verdicts and evidence evolve over time
284 +* Allow users to see “as of date X, what did we know?”
285 +
286 +----
287 +
288 +== Workflow, Moderation & Audit ==
289 +
290 +=== FR15 – Workflow States ===
291 +
292 +* Draft → In Review → Published / Rejected
293 +* Separate states for Claims, Scenarios, Evidence, and Verdicts
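
The FR15 state machine can be sketched as a transition table. The `in review → draft` (send-back) and `rejected → draft` (resubmission) edges are assumptions inferred from the reviewer workflow, not stated requirements:

```python
# Sketch of the FR15 workflow state machine: only listed transitions
# are legal; anything else is rejected.
TRANSITIONS: dict[str, set[str]] = {
    "draft": {"in review"},
    "in review": {"published", "rejected", "draft"},  # draft = send back (assumed)
    "published": set(),
    "rejected": {"draft"},                            # resubmission (assumed)
}

def transition(state: str, target: str) -> str:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state!r} -> {target!r}")
    return target

state = transition("draft", "in review")
state = transition(state, "published")
```

Per FR15, Claims, Scenarios, Evidence, and Verdicts would each carry their own instance of such a state, rather than sharing one.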
294 +
295 +=== FR16 – Moderation & Abuse Handling ===
296 +
297 +* Allow Moderators to hide content or lock edits for abuse or legal reasons
298 +* Keep internal audit trail even if public view is restricted
299 +
300 +=== FR17 – Audit Trail ===
301 +
302 +* Every significant action (create, edit, publish, delete/hide) is logged with:
 303 + ** Who did it
 304 + ** When
 305 + ** What changed
 306 + ** Why (short comment, optional but recommended)
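
A sketch of FR17's audit entry, capturing who/when/what/why in an append-only log (all names are illustrative only):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AuditEntry:
    actor: str                     # who did it
    action: str                    # create / edit / publish / hide
    target: str                    # what changed
    at: datetime                   # when
    comment: Optional[str] = None  # why (optional but recommended)

audit_log: list[AuditEntry] = []

def record(actor: str, action: str, target: str,
           comment: Optional[str] = None) -> AuditEntry:
    entry = AuditEntry(actor, action, target,
                       datetime.now(timezone.utc), comment)
    audit_log.append(entry)  # append-only: entries are never rewritten
    return entry

record("reviewer:anna", "publish", "scenario:42", comment="evidence complete")
```

Note the interaction with FR16: even when a Moderator hides content publicly, the corresponding entries stay in the internal trail.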
307 +
308 +----
309 +
310 += Federation Requirements =
311 +
312 +FactHarbor is designed to operate as a **federated network of nodes**.
313 +
314 +=== FR18 – Node Autonomy ===
315 +
316 +* Each node can run independently (local policies, local users, local moderation)
317 +* Nodes decide which other nodes to federate with
318 +
319 +=== FR19 – Data Sharing Modes ===
320 +
321 +Nodes must be able to:
322 +
323 +* Share claims and summaries only
324 +* Share selected claims, scenarios, and verdicts
325 +* Share full underlying evidence metadata where allowed
326 +* Opt-out of sharing sensitive or restricted content
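
One way to picture the FR19 sharing modes is a per-peer export filter; the mode names and record layout below are invented for illustration:

```python
def export_record(record: dict, mode: str) -> dict:
    """Sketch of FR19: a node exports more or less of a record
    depending on the sharing mode agreed with a peer node."""
    public = {"claim": record["claim"], "summary": record["summary"]}
    if mode == "claims_only":
        return public
    if mode == "with_verdicts":
        return {**public,
                "scenarios": record["scenarios"],
                "verdicts": record["verdicts"]}
    if mode == "full_metadata":
        return {**public,
                "scenarios": record["scenarios"],
                "verdicts": record["verdicts"],
                "evidence_metadata": record["evidence_metadata"]}
    raise ValueError(f"unknown sharing mode: {mode!r}")

record = {
    "claim": "Unemployment doubled last year",
    "summary": "Contested; depends on the measure used",
    "scenarios": ["official statistics", "survey-based"],
    "verdicts": [0.7, 0.4],
    "evidence_metadata": [{"source": "statistics office", "rating": "high"}],
}
shared = export_record(record, "claims_only")
```

Opt-outs for sensitive content would sit in front of this filter: restricted records simply never reach the export step.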
327 +
328 +=== FR20 – Synchronization & Conflict Handling ===
329 +
330 +* Changes from remote nodes must be mergeable or explicitly conflict-marked
331 +* Conflicting verdicts are allowed and visible; not forced into consensus
332 +
333 +=== FR21 – Federation Discovery ===
334 +
335 +* Discover other nodes and their capabilities (public endpoints, policies)
336 +* Allow whitelisting / blacklisting of nodes
337 +
338 +**Basic federation** (minimum):
339 +
340 +* Subscribe to and import selected claims and scenarios from other nodes
341 +* Keep provenance (which node originated what)
342 +* Respect remote deletion / redaction notices where required by policy or law
343 +
344 +Advanced federation (later versions):
345 +
346 +* Cross-node search
347 +* Federation-wide discovery and reputation signals
348 +
349 +----
350 +
351 += Non-Functional Requirements (NFR) =
352 +
353 +== NFR1 – Transparency ==
354 +
355 +* All assumptions, evidence, and reasoning behind verdicts must be visible
356 +* AKEL involvement must be clearly labeled
357 +* Users must be able to inspect the chain of reasoning and versions
358 +
359 +== NFR2 – Security ==
360 +
361 +* Role-based access control
362 +* Transport-level security (HTTPS)
363 +* Secure storage of secrets (API keys, credentials)
364 +* Audit trails for sensitive actions
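
Role-based access control (NFR2), combined with the inheriting role hierarchy from the User Roles chapter, might be sketched as follows; only a slice of the hierarchy is shown and all action names are hypothetical:

```python
# Each role inherits the permissions of the roles below it, as in the
# User Roles chapter ("Everything a Reader/Contributor/Reviewer can do").
ROLE_LEVEL = {"reader": 0, "contributor": 1, "reviewer": 2, "expert": 3}

# Minimum role required per action (illustrative action names).
REQUIRED = {
    "read": "reader",
    "create_draft": "contributor",
    "publish": "reviewer",
    "sign_opinion": "expert",
}

def allowed(role: str, action: str) -> bool:
    """A role may perform an action if it sits at or above the
    minimum level that the action requires."""
    return ROLE_LEVEL[role] >= ROLE_LEVEL[REQUIRED[action]]
```

A real system would also need the domain scoping of Expert status and the Moderator/Maintainer roles, which do not fit a single linear hierarchy.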
365 +
366 +== NFR3 – Privacy & Compliance ==
367 +
368 +* Configurable data retention policies
369 +* Ability to redact or pseudonymize personal data when required
370 +* Compliance hooks for jurisdiction-specific rules (e.g. GDPR-like deletion requests)
371 +
372 +== NFR4 – Performance ==
373 +
374 +* POC: typical interactions < 2 s
375 +* Release 1.0: < 300 ms for common read operations after caching
376 +* Degradation strategies under load (e.g. partial federation results, limited history)
377 +
378 +== NFR5 – Scalability ==
379 +
380 +* POC: **Fully automated text-to-truth-landscape** pipeline for validation.
381 +* Beta 0: ~100 external testers on one node
382 +* Release 1.0: **2000+ concurrent users** on a reasonably provisioned node
383 +
384 +Suggested technical targets for Release 1.0:
385 +
386 +* Scalable monolith or early microservice architecture
387 +* Sharded vector database (for semantic search)
388 +* Optional IPFS or other decentralized storage for large artefacts
389 +* Horizontal scalability for read capacity
390 +
391 +== NFR6 – Interoperability ==
392 +
393 +* Open, documented API
394 +* Modular AKEL that can be swapped or extended
395 +* Federation protocols that follow open standards where possible
396 +* Standard model for external integrations (e.g. news platforms, research tools)
397 +
398 +== NFR7 – Observability & Operations ==
399 +
400 +* Metrics for performance, errors, and queue backlogs
401 +* Logs for key flows (claim intake, scenario changes, verdict updates, federation sync)
402 +* Health endpoints for monitoring
403 +
404 +== NFR8 – Maintainability ==
405 +
406 +* Clear module boundaries (API, core services, AKEL, storage, federation)
407 +* Backward-compatible schema migration strategy where feasible
408 +* Configuration via files / environment variables, not hard-coded
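
The "configuration via files / environment variables" point can be illustrated with a small loader; the `FH_*` variable names and defaults are invented for this sketch:

```python
import os

def load_config(env=os.environ) -> dict:
    """Sketch of NFR8: configuration comes from the environment with
    explicit defaults, instead of values hard-coded in the modules."""
    return {
        "db_url": env.get("FH_DB_URL", "postgresql://localhost/factharbor"),
        "federation_enabled": env.get("FH_FEDERATION", "false").lower() == "true",
        "max_upload_mb": int(env.get("FH_MAX_UPLOAD_MB", "25")),
    }

# Passing a plain dict makes the loader trivially testable.
config = load_config({"FH_FEDERATION": "true"})
```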
409 +
410 +== NFR9 – Usability ==
411 +
412 +* UI optimized for **exploring complexity**, not hiding it
413 +* Support for saved views, filters, and user-level preferences
414 +* Progressive disclosure: casual users see summaries, advanced users can dive deep
415 +
416 +----
417 +
418 += Release Levels =
419 +
420 +=== Proof of Concept (POC) ===
421 +
422 +* **Status:** Fully Automated "Text to Truth Landscape"
423 +* **Focus:** Validating automated extraction, scenario generation, and verdict computation without human-in-the-loop.
424 +* **Goal:** Demonstrate model capability on raw text input.
425 +
426 +=== Beta 0 ===
427 +
428 +* One or few nodes
429 +* External testers (~100)
430 +* Expanded workflows and basic moderation
431 +* Initial federation experiments
432 +
433 +=== Release 1.0 ===
434 +
435 +* 2000+ concurrent users
436 +* Scalable monolith or early microservices
437 +* Sharded vector DB
438 +* IPFS optional
439 +* High automation (AKEL assistance)
440 +* Multi-node federation
441 +