Changes for page Workflows

Last modified by Robert Schaub on 2025/12/24 20:34

From version 7.6
edited by Robert Schaub
on 2025/12/16 20:28
Change comment: Renamed back-links.
To version 2.1
edited by Robert Schaub
on 2025/12/11 20:16
Change comment: Imported from XAR

Summary

Details

Page properties
Content
... ... @@ -1,410 +1,343 @@
1 1  = Workflows =
2 2  
3 -This page describes the core workflows for content creation, review, and publication in FactHarbor.
3 +This chapter defines the core workflows used across the FactHarbor system.
4 4  
5 -== Overview ==
5 +Each workflow describes:
6 6  
7 -FactHarbor workflows support three publication modes with risk-based review:
7 +* Purpose
8 +* Participants
9 +* Steps
10 +* Automation vs. manual work
11 +* Linear ASCII flow, where applicable: a → b → c → d
8 8  
9 -* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
10 -* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
11 -* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust
13 +Workflows included:
12 12  
13 -Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).
15 +1. Claim Workflow
16 +2. Scenario Workflow
17 +3. Evidence Workflow
18 +4. Verdict Workflow
19 +5. Re-evaluation Workflow
20 +6. Federation Synchronization Workflow
21 +7. User Role & Review Workflow
22 +8. AKEL Workflow
23 +9. Global Trigger Flow
24 +10. Entity Lifecycle Notes
14 14  
15 15  ----
16 16  
17 -== Claim Submission & Publication Workflow ==
28 +== Claim Workflow ==
18 18  
19 -=== Step 1: Claim Submission ===
30 +**Purpose:**
31 +Transform raw text or input material into a normalized, classified, deduplicated, and versioned claim ready for scenario evaluation.
20 20  
21 -**Actor**: Contributor or AKEL
33 +**Participants:**
34 +* Contributor
35 +* AKEL
36 +* Reviewer
22 22  
23 -**Actions**:
38 +=== Steps ===
24 24  
25 -* Submit claim text
26 -* Provide initial sources (optional for human contributors, mandatory for AKEL)
27 -* System assigns initial AuthorType (Human or AI)
40 +**1. Ingestion**
41 +* User submits text, URL, transcript, or multi-claim content
42 +* AKEL extracts one or multiple claims
28 28  
29 -**Output**: Claim draft created
44 +**2. Normalization**
45 +* Standardizes wording
46 +* Reduces ambiguity
47 +* Flags implicit assumptions
30 30  
31 -=== Step 2: AKEL Processing ===
49 +**3. Classification (AKEL draft → Reviewer confirm)**
50 +* ClaimType
51 +* Domain
52 +* Evaluability
53 +* SafetyCategory
32 32  
33 -**Automated Steps**:
55 +**4. Duplicate & Similarity Detection**
56 +* Embeddings created
57 +* Similar claims found
58 +* Reviewer merges, splits, or confirms uniqueness
34 34  
35 -1. Claim extraction and normalization
36 -2. Classification (domain, type, evaluability)
37 -3. Risk tier assignment (A/B/C suggested)
38 -4. Initial scenario generation
39 -5. Evidence search
40 -6. **Contradiction search** (mandatory)
41 -7. Quality gate validation
60 +**5. Version Creation**
61 +* New ClaimVersion stored
62 +* Every edit creates a new immutable version
42 42  
43 -**Output**: Processed claim with risk tier and quality gate results
64 +**6. Cluster Assignment**
65 +* AKEL proposes cluster membership
66 +* Reviewer confirms
44 44  
45 -=== Step 3: Quality Gate Checkpoint ===
68 +**7. Scenario Linking (optional)**
69 +* Existing scenarios connected
70 +* AKEL may propose new drafts
46 46  
47 -**Gates Evaluated**:
72 +**8. Publication**
73 +* Claim becomes active and visible
48 48  
49 -* Source quality
50 -* Contradiction search completion
51 -* Uncertainty quantification
52 -* Structural integrity
75 +**Flow:**
76 +Ingest → Normalize → Classify → Deduplicate → Cluster → Version → Publish
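The immutable-versioning rule in step 5 ("every edit creates a new immutable version") can be sketched as an append-only store. All names here (`ClaimVersion`, `ClaimHistory`, `add_edit`) are illustrative assumptions, not part of any actual FactHarbor API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: a stored version can never be mutated
class ClaimVersion:
    claim_id: str
    version: int
    text: str

class ClaimHistory:
    """Append-only version store: every edit yields a new ClaimVersion."""
    def __init__(self) -> None:
        self._versions: list[ClaimVersion] = []

    def add_edit(self, claim_id: str, text: str) -> ClaimVersion:
        v = ClaimVersion(claim_id, version=len(self._versions) + 1, text=text)
        self._versions.append(v)  # earlier versions stay untouched
        return v

    def latest(self) -> ClaimVersion:
        return self._versions[-1]
```

The design point is that edits never overwrite: normalization, reclassification, and merges each land as a fresh version, preserving the full lineage.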
53 53  
54 -**Outcomes**:
78 +----
55 55  
56 -* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
57 -* **Any gate fails** → Mode 1 (Draft), flag for human review
58 -* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue
80 +== Scenario Workflow ==
59 59  
60 -=== Step 4: Publication (Risk-Tier Dependent) ===
82 +**Purpose:**
83 +Define the specific analytic contexts needed to evaluate each claim.
61 61  
62 -**Tier C (Low Risk)**:
85 +**Participants:**
86 +* Contributor
87 +* Reviewer
88 +* Domain Expert
89 +* AKEL
63 63  
64 -* **Direct to Mode 2**: AI-generated, public, clearly labeled
65 -* User can request human review
66 -* Sampling audit applies
91 +=== Steps ===
67 67  
68 -**Tier B (Medium Risk)**:
93 +**1. Scenario Proposal**
94 +* Drafted by contributor or generated by AKEL
69 69  
70 -* **Direct to Mode 2**: AI-generated, public, clearly labeled
71 -* Higher audit sampling rate
72 -* High-engagement content may auto-escalate
96 +**2. Completion of Required Fields**
97 +Must include:
98 +* Definitions
99 +* Assumptions
100 +* ContextBoundary
101 +* EvaluationMethod
102 +* SafetyClass
103 +* VersionMetadata
73 73  
74 -**Tier A (High Risk)**:
105 +**3. Safety Interception (AKEL)**
106 +Flags:
107 +* non-falsifiable structures
108 +* pseudoscientific assumptions
109 +* unsafe contexts
75 75  
76 -* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
77 -* **Auto-escalated** to expert review queue
78 -* User warnings displayed
79 -* Highest audit sampling rate
111 +**4. Redundancy & Conflict Check**
112 +* Similar scenarios merged
113 +* Contradictions flagged
80 80  
81 -=== Step 5: Human Review (Optional for B/C, Escalated for A) ===
115 +**5. Reviewer Validation**
116 +Ensures clarity, neutrality, and methodological validity.
82 82  
83 -**Triggers**:
118 +**6. Expert Approval (mandatory for high-risk domains)**
84 84  
85 -* User requests review
86 -* Audit flags issues
87 -* High engagement (Tier B)
88 -* Automatic (Tier A)
120 +**7. Version Storage**
121 +* Each revision = new ScenarioVersion
89 89  
90 -**Process**:
123 +**Flow:**
124 +Draft → Validate → Safety Check → Review → Expert Approval → Version → Activate
91 91  
92 -1. Reviewer/Expert examines claim
93 -2. Validates quality gates
94 -3. Checks contradiction search results
95 -4. Assesses risk tier appropriateness
96 -5. Decision: Approve, Request Changes, or Reject
97 -
98 -**Outcomes**:
99 -
100 -* **Approved** → Mode 3 (Human-Reviewed)
101 -* **Changes Requested** → Back to contributor or AKEL for revision
102 -* **Rejected** → Rejected status with reasoning
103 -
104 104  ----
105 105  
106 -== Scenario Creation Workflow ==
128 +== Evidence Workflow ==
107 107  
108 -=== Step 1: Scenario Generation ===
130 +**Purpose:**
131 +Structure, classify, validate, version, and link evidence to scenarios.
109 109  
110 -**Automated (AKEL)**:
133 +**Participants:**
134 +* Contributor
135 +* Reviewer
136 +* Domain Expert
137 +* AKEL
111 111  
112 -* Generate scenarios for claim
113 -* Define boundaries, assumptions, context
114 -* Identify evaluation methods
139 +=== Steps ===
115 115  
116 -**Manual (Expert/Reviewer)**:
141 +**1. Evidence Submission**
142 +* File, dataset, URL, or extracted text
117 117  
118 -* Create custom scenarios
119 -* Refine AKEL-generated scenarios
120 -* Add domain-specific nuances
144 +**2. Metadata Extraction (AKEL)**
145 +* EvidenceType
146 +* Category
147 +* Provenance
148 +* Study design
149 +* ExtractionMethod
150 +* ReliabilityHints
121 121  
122 -=== Step 2: Scenario Validation ===
152 +**3. Relevance Check**
153 +Reviewer verifies which scenarios the evidence applies to.
123 123  
124 -**Quality Checks**:
155 +**4. Reliability Assessment**
156 +* AKEL proposes reliability
157 +* Reviewer confirms
158 +* Expert review for complex papers
125 125  
126 -* Completeness (definitions, boundaries, assumptions clear)
127 -* Relevance to claim
128 -* Evaluability
129 -* No circular logic
160 +**5. ScenarioEvidenceLink Creation**
161 +Each link stores:
162 +* relevance score
163 +* justification
164 +* evidence version
130 130  
131 -**Risk Tier Assignment**:
166 +**6. Versioning**
167 +* Any update = new EvidenceVersion
132 132  
133 -* Inherits from parent claim
134 -* Can be overridden by expert if scenario increases/decreases risk
169 +**Flow:**
170 +Submit → Extract Metadata → Evaluate Relevance → Score Reliability → Link → Version
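The link record from step 5 can be sketched as a small value object; the field names follow the list above, but the exact schema is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScenarioEvidenceLink:
    """Connects one pinned EvidenceVersion to one scenario (step 5)."""
    scenario_id: str
    evidence_id: str
    evidence_version: int   # pins the exact EvidenceVersion that was reviewed
    relevance_score: float  # 0.0 (unrelated) .. 1.0 (directly on point)
    justification: str      # reviewer's reason for the score

    def __post_init__(self) -> None:
        if not 0.0 <= self.relevance_score <= 1.0:
            raise ValueError("relevance_score must be in [0, 1]")
```

Pinning `evidence_version` rather than just `evidence_id` is what lets later evidence updates trigger re-evaluation instead of silently changing an already-reviewed link.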
135 135  
136 -=== Step 3: Scenario Publication ===
137 -
138 -**Mode 2 (AI-Generated)**:
139 -
140 -* Tier B/C scenarios can publish immediately
141 -* Subject to sampling audits
142 -
143 -**Mode 1 (Draft)**:
144 -
145 -* Tier A scenarios default to draft
146 -* Require expert validation for Mode 2 or Mode 3
147 -
148 148  ----
149 149  
150 -== Evidence Evaluation Workflow ==
174 +== Verdict Workflow ==
151 151  
152 -=== Step 1: Evidence Search & Retrieval ===
176 +**Purpose:**
177 +Generate likelihood estimates per scenario based on evidence and scenario structure.
153 153  
154 -**AKEL Actions**:
179 +**Participants:**
180 +* AKEL (drafts)
181 +* Reviewer
182 +* Domain Expert
155 155  
156 -* Search academic databases, reputable media
157 -* **Mandatory contradiction search** (counter-evidence, reservations)
158 -* Extract metadata (author, date, publication, methodology)
159 -* Assess source reliability
184 +=== Steps ===
160 160  
161 -**Quality Requirements**:
186 +**1. Evidence Aggregation**
187 +Collect relevant evidence versions.
162 162  
163 -* Primary sources preferred
164 -* Diverse perspectives included
165 -* Echo chambers flagged
166 -* Conflicting evidence acknowledged
189 +**2. Draft Verdict Generation (AKEL)**
190 +Outputs:
191 +* likelihood range
192 +* uncertainty factors
193 +* conflict detection
194 +* sensitivity analysis
167 167  
168 -=== Step 2: Evidence Summarization ===
196 +**3. Reasoning Draft**
197 +Structured explanation chain generated by AKEL.
169 169  
170 -**AKEL Generates**:
199 +**4. Reviewer Validation**
200 +Ensures the reasoning is sound, fits the evidence, and contains no hallucinations.
171 171  
172 -* Summary of evidence
173 -* Relevance assessment
174 -* Reliability score
175 -* Limitations and caveats
176 -* Conflicting evidence summary
202 +**5. Expert Review**
203 +Required for:
204 +* medicine
205 +* psychology
206 +* engineering
207 +* political misinformation
208 +* controversial or risky domains
177 177  
178 -**Quality Gate**: Structural integrity, source quality
210 +**6. Verdict Storage**
211 +* Every update creates a new VerdictVersion
179 179  
180 -=== Step 3: Evidence Review ===
213 +**Flow:**
214 +Aggregate → Draft Verdict → Draft Explanation → Review → Expert Approval → Version
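The spec does not define how AKEL turns evidence into a likelihood range. As a toy illustration only (not FactHarbor's actual method), a reliability-weighted mean widened by overall evidence weakness captures the "likelihood range + uncertainty factors" output of step 2:

```python
def draft_likelihood_range(evidence: list[tuple[float, float]]) -> tuple[float, float]:
    """Toy draft-verdict aggregation (illustrative, not the real algorithm).

    `evidence` holds (supports_probability, reliability) pairs, each in [0, 1].
    Returns a (low, high) likelihood range: a reliability-weighted mean,
    widened in proportion to how unreliable the evidence base is overall.
    """
    if not evidence:
        return (0.0, 1.0)  # no evidence: maximal uncertainty
    total_weight = sum(r for _, r in evidence)
    if total_weight == 0:
        return (0.0, 1.0)
    center = sum(p * r for p, r in evidence) / total_weight
    mean_reliability = total_weight / len(evidence)
    margin = 0.5 * (1.0 - mean_reliability)  # weaker evidence -> wider range
    return (max(0.0, center - margin), min(1.0, center + margin))
```

Whatever the real estimator looks like, the interface matters: a range plus explicit uncertainty, never a bare point value, so reviewers and experts in steps 4–5 have something to interrogate.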
181 181  
182 -**Reviewer/Expert Validates**:
183 -
184 -* Accuracy of summaries
185 -* Appropriateness of sources
186 -* Completeness of contradiction search
187 -* Reliability assessments
188 -
189 -**Outcomes**:
190 -
191 -* **Mode 2**: Evidence summaries published as AI-generated
192 -* **Mode 3**: After human validation
193 -* **Mode 1**: Failed quality checks or pending expert review
194 -
195 195  ----
196 196  
197 -== Verdict Generation Workflow ==
218 +== Re-evaluation Workflow ==
198 198  
199 -=== Step 1: Verdict Computation ===
220 +**Purpose:**
221 +Keep verdicts current when evidence or scenarios change.
200 200  
201 -**AKEL Computes**:
223 +=== Trigger Types ===
202 202  
203 -* Verdict across scenarios
204 -* Confidence scores
205 -* Uncertainty quantification
206 -* Key assumptions
207 -* Limitations
225 +* Evidence updated, disputed, retracted
226 +* Scenario assumptions changed
227 +* Claim reclassification
228 +* AKEL contradiction detection
229 +* Federation sync
208 208  
209 -**Inputs**:
231 +=== Steps ===
210 210  
211 -* Claim text
212 -* Scenario definitions
213 -* Evidence assessments
214 -* Contradiction search results
233 +**1. Trigger Detection**
234 +Re-evaluation engine receives event.
215 215  
216 -=== Step 2: Verdict Validation ===
236 +**2. Impact Analysis**
237 +Find affected:
238 +* scenarios
239 +* evidence links
240 +* verdicts
217 217  
218 -**Quality Gates**:
242 +**3. AKEL Draft Re-calculation**
243 +New:
244 +* likelihood
245 +* reasoning
246 +* uncertainty
219 219  
220 -* All four gates apply (source, contradiction, uncertainty, structure)
221 -* Reasoning chain must be traceable
222 -* Assumptions must be explicit
248 +**4. Reviewer Validation**
249 +**5. Expert Review** (high-risk)
250 +**6. Version Storage**
223 223  
224 -**Risk Tier Check**:
252 +**Flow:**
253 +Trigger → Analyze → Recompute → Review → Expert → Version
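Step 2's impact analysis is essentially a reachability walk over the dependency graph. A minimal sketch, assuming a simple adjacency-map representation (the real engine's data model is not specified here):

```python
from collections import deque

def affected_entities(graph: dict[str, set[str]], changed: str) -> set[str]:
    """Find everything downstream of a changed entity (step 2).

    `graph` maps an entity id to the ids that directly depend on it,
    e.g. evidence -> links -> verdicts.
    """
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, set()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen  # everything here needs AKEL re-calculation (step 3)
```

This is the same traversal the Global Trigger Flow relies on: one evidence retraction fans out to every link and verdict that transitively depends on it.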
225 225  
226 -* Tier A: Always requires expert validation for Mode 3
227 -* Tier B: Mode 2 allowed, audit sampling
228 -* Tier C: Mode 2 default
229 -
230 -=== Step 3: Verdict Publication ===
231 -
232 -**Mode 2 (AI-Generated Verdict)**:
233 -
234 -* Clear labeling with confidence scores
235 -* Uncertainty disclosure
236 -* Links to reasoning trail
237 -* User can request expert review
238 -
239 -**Mode 3 (Expert-Validated Verdict)**:
240 -
241 -* Human reviewer/expert stamp
242 -* Additional commentary (optional)
243 -* Highest trust level
244 -
245 245  ----
246 246  
247 -== Audit Workflow ==
257 +== Federation Synchronization Workflow ==
248 248  
249 -=== Step 1: Audit Sampling Selection ===
259 +**Purpose:**
260 +Exchange structured data between nodes.
250 250  
251 -**Stratified Sampling**:
262 +=== Steps ===
263 +1. Detect version changes
264 +1. Build bundle (diff + Merkle tree + signatures)
265 +1. Push to peers
266 +1. Validate lineage + hashes
267 +1. Resolve conflicts (merge or branch)
268 +1. Optional re-evaluation
252 252  
253 -* Risk tier priority (A > B > C)
254 -* Low confidence scores
255 -* High traffic content
256 -* Novel topics
257 -* User flags
270 +**Flow:**
271 +Change → Bundle → Push → Validate → Merge/Fork → Update
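The "Merkle tree + signatures" part of steps 2 and 4 can be sketched as follows: the sender ships a signed root over the version hashes, and the receiving node recomputes the root from the delivered versions to validate lineage and detect tampering. The bundle format here is an assumption; only the Merkle construction itself is standard.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root hash over a sync bundle's version payloads (sketch only)."""
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Any altered or missing version changes the recomputed root, so the peer can reject the bundle before the conflict-resolution step runs.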
258 258  
259 -**Sampling Rates** (Recommendations):
260 -
261 -* Tier A: 30-50%
262 -* Tier B: 10-20%
263 -* Tier C: 5-10%
264 -
265 -=== Step 2: Audit Execution ===
266 -
267 -**Auditor Actions**:
268 -
269 -1. Review sampled AI-generated content
270 -2. Validate quality gates were properly applied
271 -3. Check contradiction search completeness
272 -4. Assess reasoning quality
273 -5. Identify errors or hallucinations
274 -
275 -**Audit Outcome**:
276 -
277 -* **Pass**: Content remains in Mode 2, logged as validated
278 -* **Fail**: Content flagged for review, system improvement triggered
279 -
280 -=== Step 3: Feedback Loop ===
281 -
282 -**System Improvements**:
283 -
284 -* Failed audits analyzed for patterns
285 -* AKEL parameters adjusted
286 -* Quality gates refined
287 -* Risk tier assignments recalibrated
288 -
289 -**Transparency**:
290 -
291 -* Audit statistics published periodically
292 -* Patterns shared with community
293 -* System improvements documented
294 -
295 295  ----
296 296  
297 -== Mode Transition Workflow ==
275 +== User Role & Review Workflow ==
298 298  
299 -=== Mode 1 → Mode 2 ===
277 +**Purpose:**
278 +Ensure correctness, neutrality, safety, and resistance to manipulation.
300 300  
301 -**Requirements**:
280 +=== Steps ===
302 302  
303 -* All quality gates pass
304 -* Risk tier B or C (or A with warnings)
305 -* Contradiction search completed
282 +**1. Submission**
283 +Claim / scenario / evidence / verdict.
306 306  
307 -**Trigger**: Automatic upon quality gate validation
285 +**2. Auto-check (AKEL)**
286 +Flags unsafe content, contradictions, format issues.
308 308  
309 -=== Mode 2 → Mode 3 ===
288 +**3. Reviewer Validation**
310 310  
311 -**Requirements**:
290 +**4. Expert Validation**
291 +Required for sensitive domains.
312 312  
313 -* Human reviewer/expert validation
314 -* Quality standards confirmed
315 -* For Tier A: Expert approval required
316 -* For Tier B/C: Reviewer approval sufficient
293 +**5. Moderator Oversight**
294 +Triggered by suspicious behavior.
317 317  
318 -**Trigger**: Human review completion
296 +**Flow:**
297 +Submit → Auto-check → Review → Expert → Moderator (if needed)
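The flow above is a state machine with a small set of legal transitions. A minimal sketch, with illustrative state names (not FactHarbor's actual schema):

```python
# Allowed review-state transitions for Submit → Auto-check → Review →
# Expert → Moderator (if needed). State names are assumptions.
TRANSITIONS: dict[str, set[str]] = {
    "submitted":       {"auto_checked", "rejected"},
    "auto_checked":    {"reviewed", "rejected"},
    "reviewed":        {"expert_approved", "published", "rejected"},
    "expert_approved": {"published", "rejected"},
    "published":       {"moderated"},  # moderator oversight if flagged
    "moderated":       {"published", "rejected"},
}

def advance(state: str, target: str) -> str:
    """Move to `target`, refusing any transition the workflow forbids."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Encoding the workflow as data makes the manipulation-resistance property checkable: nothing reaches `published` without passing through the review states first.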
319 319  
320 -=== Mode 3 → Mode 1 (Demotion) ===
321 -
322 -**Rare - Only if**:
323 -
324 -* New evidence contradicts verdict
325 -* Error discovered in reasoning
326 -* Source retraction
327 -
328 -**Process**:
329 -
330 -1. Content flagged for re-evaluation
331 -2. Moved to draft (Mode 1)
332 -3. Re-processed through workflow
333 -4. Reason for demotion documented
334 -
335 335  ----
336 336  
337 -== User Actions Across Modes ==
301 +== AKEL Workflow ==
338 338  
339 -=== On Mode 1 (Draft) Content ===
303 +**Purpose:**
304 +Support extraction, drafting, structuring, and contradiction detection.
340 340  
341 -**Contributors**:
306 +=== Stages ===
342 342  
343 -* Edit their own drafts
344 -* Submit for review
308 +**A — Input Understanding:**
309 +Extraction, normalization, classification.
345 345  
346 -**Reviewers/Experts**:
311 +**B — Scenario Drafting:**
312 +Definitions, boundaries, assumptions.
347 347  
348 -* View and comment
349 -* Request changes
350 -* Approve for Mode 2 or Mode 3
314 +**C — Evidence Processing:**
315 +Retrieval, summarization, ranking.
351 351  
352 -=== On Mode 2 (AI-Generated) Content ===
317 +**D — Verdict Drafting:**
318 +Likelihood, explanations, uncertainties.
353 353  
354 -**All Users**:
320 +**E — Safety & Integrity:**
321 +Contradictions, hallucination detection.
355 355  
356 -* Read and use content
357 -* Request human review
358 -* Flag for expert attention
359 -* Provide feedback
323 +**F — Human Approval:**
324 +Reviewer and/or expert.
360 360  
361 -**Reviewers/Experts**:
326 +**Flow:**
327 +Input → Drafts → Integrity → Human Approval
362 362  
363 -* Validate for Mode 3 transition
364 -* Edit and refine
365 -* Adjust risk tier if needed
366 -
367 -=== On Mode 3 (Human-Reviewed) Content ===
368 -
369 -**All Users**:
370 -
371 -* Read with highest confidence
372 -* Still can flag if new evidence emerges
373 -
374 -**Reviewers/Experts**:
375 -
376 -* Update if needed
377 -* Trigger re-evaluation if new evidence
378 -
379 379  ----
380 380  
381 -== Diagram References ==
331 +== Global Trigger Flow (Cascade) ==
382 382  
383 -=== Claim and Scenario Lifecycle (Overview) ===
333 +Trigger Sources:
334 +* Claim change
335 +* Scenario change
336 +* Evidence change
337 +* Verdict contradiction
338 +* Federation update
339 +* AKEL model improvements
384 384  
385 -{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}
341 +**Cascade Flow:**
342 +Trigger → Dependency Graph → Re-evaluation → Updated Verdicts
386 386  
387 -=== Claim and Scenario Workflow ===
388 -
389 -{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}
390 -
391 -=== Evidence and Verdict Workflow ===
392 -
393 -{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
394 -
395 -=== Quality and Audit Workflow ===
396 -
397 -{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}
398 -
399 -
400 -
401 -{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
402 -
403 -----
404 -
405 -== Related Pages ==
406 -
407 -* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
408 -* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
409 -* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
410 -* [[Governance>>FactHarbor.Organisation.Governance]]