Changes for page Requirements

Last modified by Robert Schaub on 2025/12/24 20:34

From version 7.1
edited by Robert Schaub
on 2025/12/14 22:27
Change comment: Imported from XAR
To version 3.1
edited by Robert Schaub
on 2025/12/12 15:41
Change comment: Imported from XAR

Page properties
Content
@@ -1,289 +1,441 @@
1 1  = Requirements =
2 2  
3 -This page defines **Roles**, **Responsibilities**, and **Rules** for contributors and users of FactHarbor.
3 +This chapter defines all functional, non-functional, user-role, and federation requirements for FactHarbor.
4 4  
5 -== Roles ==
5 +It answers:
6 6  
7 -=== Reader ===
7 +* Who can do what in the system?
8 +* What workflows must FactHarbor support?
9 +* What quality, transparency, and federation guarantees must be met?
8 8  
9 -**Who**: Anyone (no login required).
11 +----
10 10  
11 -**Can**:
12 -* Browse and search claims
13 -* View scenarios, evidence, verdicts, and timelines
14 -* Compare scenarios and explore assumptions
15 -* Flag issues, errors, contradictions, or suspicious patterns
16 -* Use filters, search, and visualization tools
17 -* Create personal views (saved searches, bookmarks - local browser storage)
18 -* **Submit claims automatically** by providing text to analyze; new claims are added automatically unless an equivalent claim already exists in the system
13 += User Roles =
19 19  
20 -**Cannot**:
21 -* Modify existing content
22 -* Access draft content
23 -* Participate in governance decisions
15 +== Reader ==
24 24  
25 -**Note**: Readers can request human review of AI-generated content by flagging it.
17 +Responsibilities:
26 26  
27 -=== Contributor ===
19 +* Browse and search claims
20 +* View scenarios, evidence, verdicts, and timelines
21 +* Compare scenarios and explore assumptions
22 +* Flag issues, errors, contradictions, or suspicious patterns
28 28  
29 -**Who**: Registered and logged-in users (extends Reader capabilities).
24 +Permissions:
30 30  
31 -**Can**:
32 -* Everything a Reader can do
33 -* Submit claims
34 -* Submit evidence
35 -* Provide feedback
36 -* Suggest scenarios
37 -* Flag content for review
38 -* Request human review of AI-generated content
26 +* Read-only access to all published claims, scenarios, evidence, and verdicts
27 +* Use filters, search, and visualization tools (“truth landscape”, timelines, scenario comparison, etc.)
28 +* Create personal views (saved searches, bookmarks, etc.)
39 39  
40 -**Cannot**:
41 -* Publish or mark content as "reviewed" or "approved"
42 -* Override expert or maintainer decisions
43 -* Directly modify AKEL or quality gate configurations
30 +Limitations:
44 44  
45 -=== Reviewer ===
32 +* Cannot change shared content
33 +* Cannot publish new claims, scenarios, or verdicts
46 46  
47 -**Who**: Trusted community members, appointed by maintainers.
35 +----
48 48  
49 -**Can**:
50 -* Review contributions from Contributors and AKEL drafts
51 -* Validate AI-generated content (Mode 2 → Mode 3 transition)
52 -* Edit claims, scenarios, and evidence
53 -* Add clarifications or warnings
54 -* Change content status: `draft` → `in review` → `published` / `rejected`
55 -* Approve or reject **Tier B and C** content for "Human-Reviewed" status
56 -* Flag content for expert review
57 -* Participate in audit sampling
37 +== Contributor ==
58 58  
59 -**Cannot**:
60 -* Approve Tier A content for "Human-Reviewed" status (requires Expert)
61 -* Change governance rules
62 -* Unilaterally change expert conclusions without process
63 -* Bypass quality gates
39 +Responsibilities:
64 64  
65 -**Note on AI-Drafted Content**:
66 -* Reviewers can validate AI-generated content (Mode 2) to promote it to "Human-Reviewed" (Mode 3)
67 -* For Tier B and C, Reviewers have approval authority
68 -* For Tier A, only Experts can grant "Human-Reviewed" status
41 +* Submit claims
42 +* Propose new claim clusters (if automatic clustering is insufficient)
43 +* Draft scenarios (definitions, assumptions, boundaries)
44 +* Attach evidence (sources, documents, links, datasets)
45 +* Suggest verdict drafts and uncertainty ranges
46 +* Respond to reviewer or expert feedback
69 69  
70 -=== Expert (Domain-Specific) ===
48 +Permissions:
71 71  
72 -**Who**: Subject-matter specialists in specific domains (medicine, law, science, etc.).
50 +* Everything a Reader can do
51 +* Create and edit **draft** claims, scenarios, evidence links, and verdict drafts
52 +* Comment on existing content and discuss assumptions
53 +* Propose corrections to misclassified or outdated content
73 73  
74 -**Can**:
75 -* Everything a Reviewer can do
76 -* **Final authority** on Tier A content "Human-Reviewed" status
77 -* Validate complex or controversial claims in their domain
78 -* Define domain-specific quality standards
79 -* Set reliability thresholds for domain sources
80 -* Participate in risk tier assignment review
81 -* Override AKEL suggestions in their domain (with documentation)
55 +Limitations:
82 82  
83 -**Cannot**:
84 -* Change platform governance policies
85 -* Approve content outside their expertise domain
86 -* Bypass technical quality gates (but can flag for adjustment)
57 +* Cannot *publish* or mark content as “reviewed” or “approved”
58 +* Cannot override expert or maintainer decisions
59 +* Cannot change system-level settings, roles, or federation configuration
87 87  
88 -**Specialization**:
89 -* Experts are domain-specific (e.g., "Medical Expert", "Legal Expert", "Climate Science Expert")
90 -* Cross-domain claims may require multiple expert reviews
61 +----
91 91  
92 -=== Auditor ===
63 +== Reviewer ==
93 93  
94 -**Who**: Reviewers or Experts assigned to sampling audit duties.
65 +Responsibilities:
95 95  
96 -**Can**:
97 -* Review sampled AI-generated content against quality standards
98 -* Validate quality gate enforcement
99 -* Identify patterns in AI errors or hallucinations
100 -* Provide feedback for system improvement
101 -* Flag content for immediate review if errors found
102 -* Contribute to audit statistics and transparency reports
67 +* Review contributions from Contributors and AKEL drafts
68 +* Check internal consistency and clarity of scenarios
69 +* Validate that evidence is correctly linked and described
70 +* Ensure verdicts match the evidence and stated assumptions
71 +* Reject, request changes, or accept content
103 103  
104 -**Cannot**:
105 -* Change audit sampling algorithms (maintainer responsibility)
106 -* Bypass normal review workflows
107 -* Audit content they personally created
73 +Permissions:
108 108  
109 -**Selection**:
110 -* Auditors selected based on domain expertise and review quality
111 -* Rotation to prevent audit fatigue
112 -* Stratified assignment (Tier A auditors need higher expertise)
75 +* Everything a Contributor can do
76 +* Change content status from `draft` → `in review` → `published` / `rejected`
77 +* Send content back to Contributors with comments
78 +* Flag content for expert review
113 113  
114 -**Audit Focus**:
115 -* Tier A: Recommended 30-50% sampling rate, expert auditors
116 -* Tier B: Recommended 10-20% sampling rate, reviewer/expert auditors
117 -* Tier C: Recommended 5-10% sampling rate, reviewer auditors
80 +Limitations:
118 118  
119 -=== Moderator ===
82 +* Cannot modify system-wide configuration or federation topology
83 +* Cannot unilaterally change expert conclusions without process
120 120  
121 -**Who**: Maintainers or trusted long-term contributors.
85 +----
122 122  
123 -**Can**:
124 -* All Reviewer and Expert capabilities (cross-domain)
125 -* Manage user accounts and permissions
126 -* Handle disputes and conflicts
127 -* Enforce community guidelines
128 -* Suspend or ban abusive users
129 -* Finalize publication status for sensitive content
130 -* Review and adjust risk tier assignments
131 -* Oversee audit system performance
87 +== Expert ==
132 132  
133 -**Cannot**:
134 -* Change core data model or architecture
135 -* Override technical system constraints
136 -* Make unilateral governance decisions without consensus
89 +Responsibilities:
137 137  
138 -=== Maintainer ===
91 +* Provide domain-specific judgment on scenarios, evidence, and verdicts
92 +* Refine assumptions and definitions in complex or ambiguous topics
93 +* Identify subtle biases, missing evidence, or misinterpretations
94 +* Propose improved verdicts and uncertainty assessments
139 139  
140 -**Who**: Core team members responsible for the platform.
96 +Permissions:
141 141  
142 -**Can**:
143 -* All Moderator capabilities
144 -* Change data model, architecture, and technical systems
145 -* Configure quality gates and AKEL parameters
146 -* Adjust audit sampling algorithms
147 -* Set and modify risk tier policies
148 -* Make platform-wide governance decisions
149 -* Access and modify backend systems
150 -* Deploy updates and fixes
151 -* Grant and revoke roles
98 +* Everything a Reviewer can do
99 +* Attach expert annotations and signed opinions to scenarios and verdicts
100 +* Propose re-evaluation of already published content based on new evidence
152 152  
153 -**Governance**:
154 -* Maintainers operate under organizational governance rules
155 -* Major policy changes require Governing Team approval
156 -* Technical decisions made collaboratively
102 +Limitations:
157 157  
104 +* Expert status is scoped to specific domains
105 +* Cannot bypass moderation, abuse policies, or legal constraints
106 +
158 158  ----
159 159  
160 -== Content Publication States ==
109 +== Moderator ==
161 161  
162 -=== Mode 1: Draft ===
163 -* Not visible to public
164 -* Visible to contributor and reviewers
165 -* Can be edited by contributor or reviewers
166 -* Default state for failed quality gates
111 +Responsibilities:
167 167  
168 -=== Mode 2: AI-Generated (Published) ===
169 -* **Public** and visible to all users
170 -* Clearly labeled as "AI-Generated, Awaiting Human Review"
171 -* Passed all automated quality gates
172 -* Risk tier displayed (A/B/C)
173 -* Users can:
174 - ** Read and use content
175 - ** Request human review
176 - ** Flag for expert attention
177 -* Subject to sampling audits
178 -* Can be promoted to Mode 3 by reviewer/expert validation
113 +* Handle abuse reports, spam, harassment, and coordinated manipulation
114 +* Enforce community guidelines and legal constraints
115 +* Manage user bans, content takedowns, and visibility restrictions
179 179  
180 -=== Mode 3: Human-Reviewed (Published) ===
181 -* **Public** and visible to all users
182 -* Labeled as "Human-Reviewed" with reviewer/expert attribution
183 -* Passed quality gates + human validation
184 -* Highest trust level
185 -* For Tier A, requires Expert approval
186 -* For Tier B/C, Reviewer approval sufficient
117 +Permissions:
187 187  
188 -=== Rejected ===
189 -* Not visible to public
190 -* Visible to contributor with rejection reason
191 -* Can be resubmitted after addressing issues
192 -* Rejection logged for transparency
119 +* Hide or temporarily disable access to abusive content
120 +* Ban or restrict users in line with policy
121 +* Edit or redact sensitive content (e.g., doxxing, illegal material)
193 193  
123 +Limitations:
124 +
125 +* Cannot change factual verdicts except where required for legal / safety reasons
126 +* Substantive fact changes must go through the review / expert process
127 +
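The publication states (Draft, AI-Generated, Human-Reviewed, Rejected) and the tier-based approval rule described above could be modelled as a small state and capability check. A minimal sketch; names such as `can_grant_human_reviewed` are illustrative assumptions, not part of the specification:

{{code language="python"}}
from enum import Enum

class PublicationState(Enum):
    DRAFT = "Draft"                    # Mode 1: not public
    AI_GENERATED = "AI-Generated"      # Mode 2: public, awaiting human review
    HUMAN_REVIEWED = "Human-Reviewed"  # Mode 3: public, validated by a human
    REJECTED = "Rejected"              # not public, rejection reason visible to contributor

class RiskTier(Enum):
    A = "A"  # highest risk (e.g. medical, legal, elections, safety)
    B = "B"  # contested or complex domains
    C = "C"  # well-established, low-risk information

def can_grant_human_reviewed(role: str, tier: RiskTier) -> bool:
    """Tier A promotion to Human-Reviewed requires an Expert;
    Tier B/C can be approved by a Reviewer or an Expert."""
    if tier is RiskTier.A:
        return role == "Expert"
    return role in ("Reviewer", "Expert")

assert can_grant_human_reviewed("Reviewer", RiskTier.B)
assert not can_grant_human_reviewed("Reviewer", RiskTier.A)
{{/code}}
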
194 194  ----
195 195  
196 -== Contribution Rules ==
130 +== Maintainer / Administrator ==
197 197  
198 -=== All Contributors Must: ===
199 -* Provide sources for claims
200 -* Use clear, neutral language
201 -* Avoid personal attacks or insults
202 -* Respect intellectual property (cite sources)
203 -* Accept community feedback gracefully
132 +Responsibilities:
204 204  
205 -=== AKEL (AI) Must: ===
206 -* Mark all outputs with `AuthorType = AI`
207 -* Pass quality gates before Mode 2 publication
208 -* Perform mandatory contradiction search
209 -* Disclose confidence levels and uncertainty
210 -* Provide traceable reasoning chains
211 -* Flag potential bubbles or echo chambers
212 -* Submit to audit sampling
134 +* Maintain node configuration, security settings, and backups
135 +* Configure AKEL, storage, federation endpoints, and performance tuning
136 +* Manage role assignments (who is Reviewer, Expert, Moderator, etc.)
137 +* Approve software updates and schema migrations
213 213  
214 -=== Reviewers Must: ===
215 -* Be impartial and evidence-based
216 -* Document reasoning for decisions
217 -* Escalate to experts when appropriate
218 -* Participate in audits when assigned
219 -* Provide constructive feedback
139 +Permissions:
220 220  
221 -=== Experts Must: ===
222 -* Stay within domain expertise
223 -* Disclose conflicts of interest
224 -* Document specialized terminology
225 -* Provide reasoning for domain-specific decisions
226 -* Participate in Tier A audits
141 +* Full read/write access to configuration, but not necessarily editorial authority over content
142 +* Define organization-level policies (e.g., which sources are allowed by default)
227 227  
144 +Limitations:
145 +
146 +* Editorial decisions on controversial topics follow governance rules, not arbitrary admin choice
147 +
228 228  ----
229 229  
230 -== Quality Standards ==
150 +== AKEL (AI Knowledge Extraction Layer) ==
231 231  
232 -=== Source Requirements ===
233 -* Primary sources preferred over secondary
234 -* Publication date and author must be identifiable
235 -* Sources must be accessible (not paywalled when possible)
236 -* Contradictory sources must be acknowledged
237 -* Echo chamber sources must be flagged
152 +Responsibilities:
238 238  
239 -=== Claim Requirements ===
240 -* Falsifiable or evaluable
241 -* Clear definitions of key terms
242 -* Boundaries and scope stated
243 -* Assumptions made explicit
244 -* Uncertainty acknowledged
154 +* Propose drafts — never final decisions
155 +* Normalize claims and extract candidate clusters
156 +* Draft scenarios, evidence candidates, and preliminary verdict suggestions
157 +* Propose re-evaluation when new evidence appears
245 245  
246 -=== Evidence Requirements ===
247 -* Relevant to the claim and scenario
248 -* Reliability assessment provided
249 -* Methodology described (for studies)
250 -* Limitations noted
251 -* Conflicting evidence acknowledged
159 +Permissions:
252 252  
161 +* Create and update **machine-generated drafts** and suggestions
162 +* Never directly publish content without human approval
163 +
164 +Limitations:
165 +
166 +* AKEL output is always labeled as AI-generated draft
167 +* Must be reviewable, auditable, and overridable by humans
168 +
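The role definitions above (Reader through Maintainer, plus AKEL as a draft-only actor) could be enforced with a simple role-to-capability map. A minimal sketch; the permission names are illustrative assumptions, not a normative list:

{{code language="python"}}
# Minimal sketch of role-based capability checks for the roles defined above.
# Permission names are illustrative; the real set is defined by the platform.
ROLE_PERMISSIONS = {
    "Reader":      {"read", "flag"},
    "Contributor": {"read", "flag", "create_draft", "comment"},
    "Reviewer":    {"read", "flag", "create_draft", "comment", "review", "publish"},
    "Expert":      {"read", "flag", "create_draft", "comment", "review", "publish",
                    "expert_annotation", "request_reevaluation"},
    "Moderator":   {"read", "flag", "hide_content", "restrict_user", "redact"},
    "Maintainer":  {"read", "configure_node", "assign_roles", "manage_federation"},
    "AKEL":        {"read", "create_draft"},  # drafts only, never direct publication
}

def has_permission(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

assert has_permission("AKEL", "create_draft")
assert not has_permission("AKEL", "publish")
assert not has_permission("Contributor", "publish")
{{/code}}
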
253 253  ----
254 254  
255 -== Risk Tier Assignment ==
171 += Functional Requirements =
256 256  
257 -**Automated (AKEL)**: Initial tier suggested based on domain, keywords, impact
258 -**Human Validation**: Moderators or Experts can override AKEL suggestions
259 -**Review**: Risk tiers periodically reviewed based on audit outcomes
173 +This section defines what the system must **do**.
260 260  
261 -**Tier A Indicators**:
262 -* Medical diagnosis or treatment advice
263 -* Legal interpretation or advice
264 -* Election or voting information
265 -* Safety or security sensitive
266 -* Major financial decisions
267 -* Potential for significant harm
175 +== Claim Intake & Normalization ==
268 268  
269 -**Tier B Indicators**:
270 -* Complex scientific causality
271 -* Contested policy domains
272 -* Historical interpretation with political implications
273 -* Significant economic impact claims
177 +=== FR1 – Claim Intake ===
274 274  
275 -**Tier C Indicators**:
276 -* Established historical facts
277 -* Simple definitions
278 -* Well-documented scientific consensus
279 -* Basic reference information
179 +The system must support Claim creation from:
280 280  
181 +* Free-text input
182 +* URLs (web pages, articles, posts)
183 +* Uploaded documents and transcripts
184 +* Structured feeds (optional, e.g. from partner platforms)
185 +
186 +Accepted sources:
187 +
188 +* Text entered by users
189 +* URLs
190 +* Uploaded documents
191 +* Transcripts
192 +* Automated ingestion (optional federation input)
193 +* AKEL extraction from multi-claim texts
194 +
195 +=== FR2 – Claim Normalization ===
196 +
197 +* Convert diverse inputs into short, structured, declarative claims
198 +* Preserve original phrasing for reference
199 +* Avoid hidden reinterpretation; differences between original and normalized phrasing must be visible
200 +
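A minimal sketch of how a normalized claim (FR2) could keep the original phrasing visible next to the structured form; the field names are illustrative assumptions:

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Claim:
    """Normalized claim that preserves the original phrasing for reference (FR2)."""
    claim_id: str
    original_text: str        # exactly as submitted (text, URL excerpt, transcript, ...)
    normalized_text: str      # short, structured, declarative form
    source_type: str          # "free_text", "url", "document", "transcript", "feed"
    normalization_notes: str = ""  # makes differences between the two phrasings explicit

claim = Claim(
    claim_id="c-001",
    original_text="They say the new bridge cost way over a billion!",
    normalized_text="The construction cost of the new bridge exceeded 1 billion EUR.",
    source_type="free_text",
    normalization_notes="Colloquial 'way over a billion' interpreted as '> 1 billion EUR'.",
)
{{/code}}
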
201 +=== FR3 – Claim Classification ===
202 +
203 +* Classify claims by topic, domain, and type (e.g., quantitative, causal, normative)
204 +* Suggest which nodes and experts are relevant
205 +
206 +=== FR4 – Claim Clustering ===
207 +
208 +* Group similar claims into Claim Clusters
209 +* Allow manual correction of cluster membership
210 +* Provide an explanation of why two claims are considered part of the same cluster
211 +
281 281  ----
282 282  
283 -== Related Pages ==
214 +== Scenario System ==
284 284  
285 -* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
286 -* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
287 -* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]]
288 -* [[Governance>>FactHarbor.Organisation.Governance]]
216 +=== FR5 – Scenario Creation ===
289 289  
218 +* Contributors, Reviewers, and Experts can create scenarios
219 +* AKEL can propose draft scenarios
220 +* Each scenario is tied to exactly one Claim Cluster
221 +
222 +=== FR6 – Required Scenario Fields ===
223 +
224 +Each scenario includes:
225 +
226 +* Definitions (key terms)
227 +* Assumptions (explicit, testable where possible)
228 +* Boundaries (time, geography, population, conditions)
229 +* Scope of evidence considered
230 +* Intended decision / context (optional)
231 +
232 +=== FR7 – Scenario Versioning ===
233 +
234 +* Every change to a scenario creates a new version
235 +* Previous versions remain accessible with timestamps and rationale
236 +
237 +=== FR8 – Scenario Comparison ===
238 +
239 +* Users can compare scenarios side by side
240 +* Show differences in assumptions, definitions, and evidence sets
241 +
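A minimal sketch of how FR7 and FR8 could be supported together: scenarios stored as immutable versions, with a field-by-field diff for side-by-side comparison. Field names are illustrative assumptions:

{{code language="python"}}
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

def _now() -> str:
    return datetime.now(timezone.utc).isoformat()

@dataclass(frozen=True)
class ScenarioVersion:
    scenario_id: str
    version: int
    definitions: dict      # key terms
    assumptions: tuple     # explicit, testable where possible
    boundaries: str        # time, geography, population, conditions
    rationale: str         # why this version was created (FR7)
    created_at: str = field(default_factory=_now)

def compare(a: ScenarioVersion, b: ScenarioVersion) -> dict:
    """Side-by-side differences for FR8: only fields that differ are returned."""
    da, db = asdict(a), asdict(b)
    return {k: (da[k], db[k]) for k in da if k != "created_at" and da[k] != db[k]}
{{/code}}
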
242 +----
243 +
244 +== Evidence Management ==
245 +
246 +=== FR9 – Evidence Ingestion ===
247 +
248 +* Attach external sources (articles, studies, datasets, reports, transcripts) to Scenarios
249 +* Allow multiple pieces of evidence per Scenario
250 +
251 +=== FR10 – Evidence Assessment ===
252 +
253 +For each piece of evidence:
254 +
255 +* Assign reliability / quality ratings
256 +* Capture who rated it and why
257 +* Indicate known limitations, biases, or conflicts of interest
258 +
259 +=== FR11 – Evidence Linking ===
260 +
261 +* Link one piece of evidence to multiple scenarios if relevant
262 +* Make dependencies explicit (e.g., “Scenario A uses a subset of the evidence used in Scenario B”)
263 +
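A sketch of an evidence record covering FR10 and FR11: a reliability rating with its rater and rationale, declared limitations, and links to several scenarios. The schema is an assumption for illustration, not a fixed data model:

{{code language="python"}}
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    evidence_id: str
    source_url: str
    reliability: float                    # e.g. 0.0 (unreliable) .. 1.0 (highly reliable)
    rated_by: str                         # who assigned the rating (FR10)
    rating_rationale: str                 # why the rating was assigned
    limitations: List[str] = field(default_factory=list)       # biases, conflicts of interest
    linked_scenarios: List[str] = field(default_factory=list)  # one item, many scenarios (FR11)

study = Evidence(
    evidence_id="e-042",
    source_url="https://example.org/study",
    reliability=0.7,
    rated_by="reviewer:jdoe",
    rating_rationale="Peer-reviewed, but small sample size.",
    limitations=["small sample", "industry funding"],
    linked_scenarios=["s-001", "s-003"],
)
{{/code}}
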
264 +----
265 +
266 +== Verdicts & Truth Landscape ==
267 +
268 +=== FR12 – Scenario Verdicts ===
269 +
270 +For each Scenario:
271 +
272 +* Provide a **probability- or likelihood-based verdict**
273 +* Capture uncertainty and reasoning
274 +* Distinguish between AKEL draft and human-approved verdict
275 +
276 +=== FR13 – Truth Landscape ===
277 +
278 +* Aggregate all scenario-specific verdicts into a “truth landscape” for a claim
279 +* Make disagreements visible rather than collapsing them into a single binary result
280 +
281 +=== FR14 – Time Evolution ===
282 +
283 +* Show how verdicts and evidence evolve over time
284 +* Allow users to see “as of date X, what did we know?”
285 +
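FR12 to FR14 could be combined by keeping one verdict per scenario (with uncertainty and an approval flag) and aggregating without collapsing disagreement. A minimal sketch, including an "as of date X" filter; the types are illustrative assumptions:

{{code language="python"}}
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ScenarioVerdict:
    scenario_id: str
    likelihood: float        # probability-style verdict (FR12)
    uncertainty: float       # e.g. half-width of a credible interval
    approved_by_human: bool  # distinguishes AKEL draft from human-approved verdict
    as_of: date

def truth_landscape(verdicts: List[ScenarioVerdict], cutoff: date) -> List[ScenarioVerdict]:
    """Return every scenario verdict known at `cutoff` (FR14), without
    merging them into a single binary answer (FR13)."""
    return [v for v in verdicts if v.as_of <= cutoff]

landscape = truth_landscape(
    [
        ScenarioVerdict("s-001", 0.8, 0.1, True, date(2025, 3, 1)),
        ScenarioVerdict("s-002", 0.3, 0.2, False, date(2025, 6, 1)),
    ],
    cutoff=date(2025, 4, 1),
)
print(landscape)  # only s-001 was known as of April 2025
{{/code}}
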
286 +----
287 +
288 +== Workflow, Moderation & Audit ==
289 +
290 +=== FR15 – Workflow States ===
291 +
292 +* Draft → In Review → Published / Rejected
293 +* Separate states for Claims, Scenarios, Evidence, and Verdicts
294 +
295 +=== FR16 – Moderation & Abuse Handling ===
296 +
297 +* Allow Moderators to hide content or lock edits for abuse or legal reasons
298 +* Keep internal audit trail even if public view is restricted
299 +
300 +=== FR17 – Audit Trail ===
301 +
302 +* Every significant action (create, edit, publish, delete/hide) is logged with:
303 +** Who did it
304 +** When
305 +** What changed
306 +** Why (short comment, optional but recommended)
307 +
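FR17's audit entries could be captured as append-only records. A minimal sketch; the storage backend and field names are not prescribed by this specification:

{{code language="python"}}
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class AuditEntry:
    actor: str         # who did it
    action: str        # create / edit / publish / delete / hide
    object_id: str     # what changed
    timestamp: str     # when
    comment: str = ""  # why (optional but recommended)

def log_action(log: list, actor: str, action: str, object_id: str, comment: str = "") -> None:
    """Append-only: entries are never modified after being written."""
    log.append(AuditEntry(actor, action, object_id,
                          datetime.now(timezone.utc).isoformat(), comment))

audit_log: list = []
log_action(audit_log, "reviewer:jdoe", "publish", "scenario:s-001", "Passed review")
print(json.dumps([asdict(e) for e in audit_log], indent=2))
{{/code}}
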
308 +----
309 +
310 += Federation Requirements =
311 +
312 +FactHarbor is designed to operate as a **federated network of nodes**.
313 +
314 +=== FR18 – Node Autonomy ===
315 +
316 +* Each node can run independently (local policies, local users, local moderation)
317 +* Nodes decide which other nodes to federate with
318 +
319 +=== FR19 – Data Sharing Modes ===
320 +
321 +Nodes must be able to:
322 +
323 +* Share claims and summaries only
324 +* Share selected claims, scenarios, and verdicts
325 +* Share full underlying evidence metadata where allowed
326 +* Opt out of sharing sensitive or restricted content
327 +
328 +=== FR20 – Synchronization & Conflict Handling ===
329 +
330 +* Changes from remote nodes must be mergeable or explicitly conflict-marked
331 +* Conflicting verdicts are allowed and visible; not forced into consensus
332 +
333 +=== FR21 – Federation Discovery ===
334 +
335 +* Discover other nodes and their capabilities (public endpoints, policies)
336 +* Allow whitelisting / blacklisting of nodes
337 +
338 +**Basic federation** (minimum):
339 +
340 +* Subscribe to and import selected claims and scenarios from other nodes
341 +* Keep provenance (which node originated what)
342 +* Respect remote deletion / redaction notices where required by policy or law
343 +
344 +**Advanced federation** (later versions):
345 +
346 +* Cross-node search
347 +* Federation-wide discovery and reputation signals
348 +
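The federation requirements above (FR18 to FR21 plus basic federation) could translate into a per-peer sharing policy and provenance fields on imported items. A minimal sketch with illustrative names:

{{code language="python"}}
from dataclasses import dataclass, field
from typing import Set

@dataclass
class FederationPolicy:
    """Per-node decision about what to share with a specific peer (FR19, FR21)."""
    peer: str
    mode: str = "claims_only"  # "claims_only" | "claims_scenarios_verdicts" | "full_metadata"
    excluded_topics: Set[str] = field(default_factory=set)  # opt out for sensitive content
    allowed: bool = True                                     # whitelist / blacklist decision

@dataclass
class ImportedClaim:
    claim_id: str
    origin_node: str       # provenance: which node originated the claim
    origin_version: int
    redacted: bool = False  # honour remote deletion / redaction notices

policy = FederationPolicy(peer="https://node.example.net",
                          mode="claims_scenarios_verdicts",
                          excluded_topics={"ongoing-court-cases"})
{{/code}}
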
349 +----
350 +
351 += Non-Functional Requirements (NFR) =
352 +
353 +== NFR1 – Transparency ==
354 +
355 +* All assumptions, evidence, and reasoning behind verdicts must be visible
356 +* AKEL involvement must be clearly labeled
357 +* Users must be able to inspect the chain of reasoning and versions
358 +
359 +== NFR2 – Security ==
360 +
361 +* Role-based access control
362 +* Transport-level security (HTTPS)
363 +* Secure storage of secrets (API keys, credentials)
364 +* Audit trails for sensitive actions
365 +
366 +== NFR3 – Privacy & Compliance ==
367 +
368 +* Configurable data retention policies
369 +* Ability to redact or pseudonymize personal data when required
370 +* Compliance hooks for jurisdiction-specific rules (e.g. GDPR-like deletion requests)
371 +
372 +== NFR4 – Performance ==
373 +
374 +* POC: typical interactions < 2 s
375 +* Release 1.0: < 300 ms for common read operations after caching
376 +* Degradation strategies under load (e.g. partial federation results, limited history)
377 +
378 +== NFR5 – Scalability ==
379 +
380 +* POC: **Fully automated text-to-truth-landscape** pipeline for validation.
381 +* Beta 0: ~100 external testers on one node
382 +* Release 1.0: **2000+ concurrent users** on a reasonably provisioned node
383 +
384 +Suggested technical targets for Release 1.0:
385 +
386 +* Scalable monolith or early microservice architecture
387 +* Sharded vector database (for semantic search)
388 +* Optional IPFS or other decentralized storage for large artefacts
389 +* Horizontal scalability for read capacity
390 +
391 +== NFR6 – Interoperability ==
392 +
393 +* Open, documented API
394 +* Modular AKEL that can be swapped or extended
395 +* Federation protocols that follow open standards where possible
396 +* Standard model for external integrations (e.g. news platforms, research tools)
397 +
398 +== NFR7 – Observability & Operations ==
399 +
400 +* Metrics for performance, errors, and queue backlogs
401 +* Logs for key flows (claim intake, scenario changes, verdict updates, federation sync)
402 +* Health endpoints for monitoring
403 +
404 +== NFR8 – Maintainability ==
405 +
406 +* Clear module boundaries (API, core services, AKEL, storage, federation)
407 +* Backward-compatible schema migration strategy where feasible
408 +* Configuration via files / environment variables, not hard-coded
409 +
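One possible way to satisfy "configuration via files / environment variables, not hard-coded": read settings with sensible defaults at startup. The variable names below are purely illustrative:

{{code language="python"}}
import os

# Illustrative settings only; the real variable names are not defined by this spec.
CONFIG = {
    "database_url": os.environ.get("FH_DATABASE_URL", "postgresql://localhost/factharbor"),
    "akel_endpoint": os.environ.get("FH_AKEL_ENDPOINT", "http://localhost:8081"),
    "federation_enabled": os.environ.get("FH_FEDERATION_ENABLED", "false").lower() == "true",
    "log_level": os.environ.get("FH_LOG_LEVEL", "INFO"),
}

if __name__ == "__main__":
    for key, value in CONFIG.items():
        print(f"{key} = {value}")
{{/code}}
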
410 +== NFR9 – Usability ==
411 +
412 +* UI optimized for **exploring complexity**, not hiding it
413 +* Support for saved views, filters, and user-level preferences
414 +* Progressive disclosure: casual users see summaries, advanced users can dive deep
415 +
416 +----
417 +
418 += Release Levels =
419 +
420 +=== Proof of Concept (POC) ===
421 +
422 +* **Status:** Fully Automated "Text to Truth Landscape"
423 +* **Focus:** Validating automated extraction, scenario generation, and verdict computation without human-in-the-loop.
424 +* **Goal:** Demonstrate model capability on raw text input.
425 +
426 +=== Beta 0 ===
427 +
428 +* One or few nodes
429 +* External testers (~100)
430 +* Expanded workflows and basic moderation
431 +* Initial federation experiments
432 +
433 +=== Release 1.0 ===
434 +
435 +* 2000+ concurrent users
436 +* Scalable monolith or early microservices
437 +* Sharded vector DB
438 +* IPFS optional
439 +* High automation (AKEL assistance)
440 +* Multi-node federation
441 +