Changes for page Workflows

Last modified by Robert Schaub on 2025/12/24 20:34

From version 4.1
edited by Robert Schaub
on 2025/12/12 15:41
Change comment: Imported from XAR
To version 7.2
edited by Robert Schaub
on 2025/12/16 20:28
Change comment: Renamed back-links.

Summary

Details

Page properties
Content
... ... @@ -1,342 +1,410 @@
1 1  = Workflows =
2 2  
3 -This chapter defines the core workflows used across the FactHarbor system.
3 +This page describes the core workflows for content creation, review, and publication in FactHarbor.
4 4  
5 -Each workflow describes:
5 +== Overview ==
6 6  
7 -* Purpose
8 -* Participants
9 -* Steps
10 -* Automation vs. manual work
7 +FactHarbor workflows support three publication modes with risk-based review:
11 11  
12 -Workflows included:
9 +* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
10 +* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
11 +* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust
13 13  
14 -1. Claim Workflow
15 -2. Scenario Workflow
16 -3. Evidence Workflow
17 -4. Verdict Workflow
18 -5. Re-evaluation Workflow
19 -6. Federation Synchronization Workflow
20 -7. User Role & Review Workflow
21 -8. AKEL Workflow
22 -9. Global Trigger Flow
23 -10. Entity Lifecycle Notes
13 +Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).
24 24  
25 25  ----
26 26  
27 -== Claim Workflow ==
17 +== Claim Submission & Publication Workflow ==
28 28  
29 -**Purpose:**
30 -Transform raw text or input material into a normalized, classified, deduplicated, and versioned claim ready for scenario evaluation.
19 +=== Step 1: Claim Submission ===
31 31  
32 -**Participants:**
33 -* Contributor
34 -* AKEL
35 -* Reviewer
21 +**Actor**: Contributor or AKEL
36 36  
37 -=== Steps ===
23 +**Actions**:
38 38  
39 -1. **Ingestion**
40 -* User submits text, URL, transcript, or multi-claim content
41 -* AKEL extracts one or multiple claims
25 +* Submit claim text
26 +* Provide initial sources (optional for human contributors, mandatory for AKEL)
27 +* System assigns initial AuthorType (Human or AI)
42 42  
43 -1. **Normalization**
44 -* Standardizes wording
45 -* Reduces ambiguity
46 -* Flags implicit assumptions
29 +**Output**: Claim draft created
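
A minimal sketch of the data captured at this step, assuming hypothetical field and method names (only the AuthorType distinction and the source rules come from the text above):

{{code language="python"}}
from dataclasses import dataclass, field
from enum import Enum


class AuthorType(Enum):
    HUMAN = "human"
    AI = "ai"  # AKEL-submitted


@dataclass
class ClaimDraft:
    text: str
    author_type: AuthorType
    sources: list[str] = field(default_factory=list)  # optional for humans, mandatory for AKEL

    def submission_problems(self) -> list[str]:
        """Return submission problems; an empty list means the draft is accepted."""
        problems = []
        if not self.text.strip():
            problems.append("claim text is empty")
        if self.author_type is AuthorType.AI and not self.sources:
            problems.append("AKEL submissions must include initial sources")
        return problems
{{/code}}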
47 47  
48 -1. **Classification (AKEL draft → Reviewer confirm)**
49 -* ClaimType
50 -* Domain
51 -* Evaluability
52 -* SafetyCategory
31 +=== Step 2: AKEL Processing ===
53 53  
54 -1. **Duplicate & Similarity Detection**
55 -* Embeddings created
56 -* Similar claims found
57 -* Reviewer merges, splits, or confirms uniqueness
33 +**Automated Steps**:
58 58  
59 -1. **Version Creation**
60 -* New ClaimVersion stored
61 -* Every edit creates a new immutable version
35 +1. Claim extraction and normalization
36 +2. Classification (domain, type, evaluability)
37 +3. Risk tier assignment (A/B/C suggested)
38 +4. Initial scenario generation
39 +5. Evidence search
40 +6. **Contradiction search** (mandatory)
41 +7. Quality gate validation
62 62  
63 -1. **Cluster Assignment**
64 -* AKEL proposes cluster membership
65 -* Reviewer confirms
43 +**Output**: Processed claim with risk tier and quality gate results
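
Read as automation, the seven steps above form a fixed pipeline. A minimal sketch, with placeholder step functions standing in for the real AKEL components:

{{code language="python"}}
from typing import Callable


def _placeholder(name: str) -> Callable[[dict], dict]:
    """Stand-in for an AKEL component; a real step would enrich the claim record."""
    def step(claim: dict) -> dict:
        claim.setdefault("completed_steps", []).append(name)
        return claim
    return step


AKEL_PIPELINE = [
    _placeholder("normalize"),              # 1. extraction and normalization
    _placeholder("classify"),               # 2. domain, type, evaluability
    _placeholder("assign_risk_tier"),       # 3. suggested tier A/B/C
    _placeholder("generate_scenarios"),     # 4. initial scenarios
    _placeholder("search_evidence"),        # 5. supporting evidence
    _placeholder("search_contradictions"),  # 6. mandatory contradiction search
    _placeholder("run_quality_gates"),      # 7. quality gate validation
]


def akel_process(claim: dict) -> dict:
    """Run the automated steps in the order listed above."""
    for step in AKEL_PIPELINE:
        claim = step(claim)
    return claim
{{/code}}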
66 66  
67 -1. **Scenario Linking (optional)**
68 -* Existing scenarios connected
69 -* AKEL may propose new drafts
45 +=== Step 3: Quality Gate Checkpoint ===
70 70  
71 -1. **Publication**
72 -* Claim becomes active and visible
47 +**Gates Evaluated**:
73 73  
74 -**Flow:**
75 -Ingest → Normalize → Classify → Deduplicate → Cluster → Version → Publish
49 +* Source quality
50 +* Contradiction search completion
51 +* Uncertainty quantification
52 +* Structural integrity
76 76  
77 -----
54 +**Outcomes**:
78 78  
79 -== Scenario Workflow ==
56 +* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
57 +* **Any gate fails** → Mode 1 (Draft), flag for human review
58 +* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue
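
A minimal sketch of the checkpoint decision implied by the gates and outcomes above; gate names and return fields are illustrative, not a specified API:

{{code language="python"}}
def gate_checkpoint(gates: dict[str, bool], risk_tier: str) -> dict:
    """Map quality-gate results and risk tier to the checkpoint outcome.

    gates holds pass/fail results for "source_quality", "contradiction_search",
    "uncertainty" and "structural_integrity".
    """
    if not all(gates.values()):
        return {"mode": 1, "flag_for_human_review": True}   # any gate fails -> draft
    if risk_tier == "A":
        return {"mode": 2, "warnings": True, "escalate_to_expert_queue": True}
    return {"mode": 2, "warnings": False, "escalate_to_expert_queue": False}  # Tier B/C
{{/code}}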
80 80  
81 -**Purpose:**
82 -Define the specific analytic contexts needed to evaluate each claim.
60 +=== Step 4: Publication (Risk-Tier Dependent) ===
83 83  
84 -**Participants:**
85 -* Contributor
86 -* Reviewer
87 -* Domain Expert
88 -* AKEL
62 +**Tier C (Low Risk)**:
89 89  
90 -=== Steps ===
64 +* **Direct to Mode 2**: AI-generated, public, clearly labeled
65 +* User can request human review
66 +* Sampling audit applies
91 91  
92 -1. **Scenario Proposal**
93 -* Drafted by contributor or generated by AKEL
68 +**Tier B (Medium Risk)**:
94 94  
95 -1. **Completion of Required Fields**
96 -Must include:
97 -* Definitions
98 -* Assumptions
99 -* ContextBoundary
100 -* EvaluationMethod
101 -* SafetyClass
102 -* VersionMetadata
70 +* **Direct to Mode 2**: AI-generated, public, clearly labeled
71 +* Higher audit sampling rate
72 +* High-engagement content may auto-escalate
103 103  
104 -1. **Safety Interception (AKEL)**
105 -Flags:
106 -* non-falsifiable structures
107 -* pseudoscientific assumptions
108 -* unsafe contexts
74 +**Tier A (High Risk)**:
109 109  
110 -1. **Redundancy & Conflict Check**
111 -* Similar scenarios merged
112 -* Contradictions flagged
76 +* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
77 +* **Auto-escalated** to expert review queue
78 +* User warnings displayed
79 +* Highest audit sampling rate
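
The tier-dependent defaults above could be captured as configuration; a hypothetical encoding (keys and labels are illustrative):

{{code language="python"}}
# Hypothetical publication defaults per risk tier, following the bullets above.
TIER_PUBLICATION_DEFAULTS = {
    "C": {"publish_mode": 2, "warnings": False, "auto_escalate": False, "audit_sampling": "baseline"},
    "B": {"publish_mode": 2, "warnings": False, "auto_escalate": False, "audit_sampling": "elevated"},
    "A": {"publish_mode": 2, "warnings": True,  "auto_escalate": True,  "audit_sampling": "highest"},
}
# Not encoded here: users may always request human review (any tier), and
# high-engagement Tier B content may auto-escalate despite the default.
{{/code}}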
113 113  
114 -1. **Reviewer Validation**
115 -Ensures clarity, neutrality, and methodological validity.
81 +=== Step 5: Human Review (Optional for B/C, Escalated for A) ===
116 116  
117 -1. **Expert Approval (mandatory for high-risk domains)**
83 +**Triggers**:
118 118  
119 -1. **Version Storage**
120 -* Each revision = new ScenarioVersion
85 +* User requests review
86 +* Audit flags issues
87 +* High engagement (Tier B)
88 +* Automatic (Tier A)
121 121  
122 -**Flow:**
123 -Draft → Validate → Safety Check → Review → Expert Approval → Version → Activate
90 +**Process**:
124 124  
92 +1. Reviewer/Expert examines claim
93 +2. Validates quality gates
94 +3. Checks contradiction search results
95 +4. Assesses risk tier appropriateness
96 +5. Decision: Approve, Request Changes, or Reject
97 +
98 +**Outcomes**:
99 +
100 +* **Approved** → Mode 3 (Human-Reviewed)
101 +* **Changes Requested** → Back to contributor or AKEL for revision
102 +* **Rejected** → Rejected status with reasoning
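
A small sketch of how a review decision might map to the resulting state; the enum and state names are assumptions:

{{code language="python"}}
from enum import Enum


class ReviewDecision(Enum):
    APPROVE = "approve"
    REQUEST_CHANGES = "request_changes"
    REJECT = "reject"


def apply_review_decision(decision: ReviewDecision) -> str:
    """Translate a reviewer/expert decision into the resulting content state."""
    if decision is ReviewDecision.APPROVE:
        return "mode_3_human_reviewed"
    if decision is ReviewDecision.REQUEST_CHANGES:
        return "returned_for_revision"          # back to contributor or AKEL
    return "rejected_with_reasoning"
{{/code}}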
103 +
125 125  ----
126 126  
127 -== Evidence Workflow ==
106 +== Scenario Creation Workflow ==
128 128  
129 -**Purpose:**
130 -Structure, classify, validate, version, and link evidence to scenarios.
108 +=== Step 1: Scenario Generation ===
131 131  
132 -**Participants:**
133 -* Contributor
134 -* Reviewer
135 -* Domain Expert
136 -* AKEL
110 +**Automated (AKEL)**:
137 137  
138 -=== Steps ===
112 +* Generate scenarios for claim
113 +* Define boundaries, assumptions, context
114 +* Identify evaluation methods
139 139  
140 -1. **Evidence Submission**
141 -* File, dataset, URL, or extracted text
116 +**Manual (Expert/Reviewer)**:
142 142  
143 -1. **Metadata Extraction (AKEL)**
144 -* EvidenceType
145 -* Category
146 -* Provenance
147 -* Study design
148 -* ExtractionMethod
149 -* ReliabilityHints
118 +* Create custom scenarios
119 +* Refine AKEL-generated scenarios
120 +* Add domain-specific nuances
150 150  
151 -1. **Relevance Check**
152 -Reviewer verifies which scenarios the evidence applies to.
122 +=== Step 2: Scenario Validation ===
153 153  
154 -1. **Reliability Assessment**
155 -* AKEL proposes reliability
156 -* Reviewer confirms
157 -* Expert review for complex papers
124 +**Quality Checks**:
158 158  
159 -1. **ScenarioEvidenceLink Creation**
160 -Each link stores:
161 -* relevance score
162 -* justification
163 -* evidence version
126 +* Completeness (definitions, boundaries, assumptions clear)
127 +* Relevance to claim
128 +* Evaluability
129 +* No circular logic
164 164  
165 -1. **Versioning**
166 -* Any update = new EvidenceVersion
131 +**Risk Tier Assignment**:
167 167  
168 -**Flow:**
169 -Submit → Extract Metadata → Evaluate Relevance → Score Reliability → Link → Version
133 +* Inherits from parent claim
134 +* Can be overridden by expert if scenario increases/decreases risk
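
A minimal sketch of the completeness check and tier inheritance above; the required-field list and the expert_tier_override key are illustrative assumptions:

{{code language="python"}}
# Illustrative required fields, based on the completeness criteria listed above.
REQUIRED_FIELDS = ("definitions", "boundaries", "assumptions", "evaluation_method")


def validate_scenario(scenario: dict, claim_tier: str) -> dict:
    """Apply the completeness check and resolve the scenario's risk tier."""
    missing = [f for f in REQUIRED_FIELDS if not scenario.get(f)]
    tier = scenario.get("expert_tier_override") or claim_tier  # inherit unless overridden
    return {"complete": not missing, "missing_fields": missing, "risk_tier": tier}
{{/code}}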
170 170  
136 +=== Step 3: Scenario Publication ===
137 +
138 +**Mode 2 (AI-Generated)**:
139 +
140 +* Tier B/C scenarios can publish immediately
141 +* Subject to sampling audits
142 +
143 +**Mode 1 (Draft)**:
144 +
145 +* Tier A scenarios default to draft
146 +* Require expert validation for Mode 2 or Mode 3
147 +
171 171  ----
172 172  
173 -== Verdict Workflow ==
150 +== Evidence Evaluation Workflow ==
174 174  
175 -**Purpose:**
176 -Generate likelihood estimates per scenario based on evidence and scenario structure.
152 +=== Step 1: Evidence Search & Retrieval ===
177 177  
178 -**Participants:**
179 -* AKEL (drafts)
180 -* Reviewer
181 -* Domain Expert
154 +**AKEL Actions**:
182 182  
183 -=== Steps ===
156 +* Search academic databases, reputable media
157 +* **Mandatory contradiction search** (counter-evidence, reservations)
158 +* Extract metadata (author, date, publication, methodology)
159 +* Assess source reliability
184 184  
185 -1. **Evidence Aggregation**
186 -Collect relevant evidence versions.
161 +**Quality Requirements**:
187 187  
188 -1. **Draft Verdict Generation (AKEL)**
189 -Outputs:
190 -* likelihood range
191 -* uncertainty factors
192 -* conflict detection
193 -* sensitivity analysis
163 +* Primary sources preferred
164 +* Diverse perspectives included
165 +* Echo chambers flagged
166 +* Conflicting evidence acknowledged
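
These requirements can be checked mechanically over the retrieved evidence set; a sketch assuming hypothetical stance, source_type and outlet fields on each item:

{{code language="python"}}
def evidence_pool_warnings(items: list[dict]) -> list[str]:
    """Return warnings when the retrieved evidence misses the requirements above."""
    warnings = []
    stances = {item.get("stance") for item in items}  # e.g. "supports" / "contradicts"
    if "contradicts" not in stances:
        warnings.append("no counter-evidence found; contradiction search incomplete or must be justified")
    if not any(item.get("source_type") == "primary" for item in items):
        warnings.append("no primary sources in the pool")
    outlets = {item.get("outlet") for item in items if item.get("outlet")}
    if len(outlets) == 1:
        warnings.append("possible echo chamber: all items come from a single outlet")
    return warnings
{{/code}}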
194 194  
195 -1. **Reasoning Draft**
196 -Structured explanation chain generated by AKEL.
168 +=== Step 2: Evidence Summarization ===
197 197  
198 -1. **Reviewer Validation**
199 -Ensures logic, evidence fit, no hallucinations.
170 +**AKEL Generates**:
200 200  
201 -1. **Expert Review**
202 -Required for:
203 -* medicine
204 -* psychology
205 -* engineering
206 -* political misinformation
207 -* controversial or risky domains
172 +* Summary of evidence
173 +* Relevance assessment
174 +* Reliability score
175 +* Limitations and caveats
176 +* Conflicting evidence summary
208 208  
209 -1. **Verdict Storage**
210 -* Every update creates a new VerdictVersion
178 +**Quality Gate**: Structural integrity, source quality
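
A sketch of the summary record AKEL might emit per evidence item; field names mirror the list above but are otherwise assumptions:

{{code language="python"}}
from dataclasses import dataclass, field


@dataclass
class EvidenceSummary:
    summary: str
    relevance: float                          # 0.0-1.0 relevance to the scenario
    reliability: float                        # 0.0-1.0 source reliability score
    limitations: list[str] = field(default_factory=list)
    conflicting_evidence: str = ""            # summary of counter-evidence, if any
{{/code}}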
211 211  
212 -**Flow:**
213 -Aggregate → Draft Verdict → Draft Explanation → Review → Expert Approval → Version
180 +=== Step 3: Evidence Review ===
214 214  
182 +**Reviewer/Expert Validates**:
183 +
184 +* Accuracy of summaries
185 +* Appropriateness of sources
186 +* Completeness of contradiction search
187 +* Reliability assessments
188 +
189 +**Outcomes**:
190 +
191 +* **Mode 2**: Evidence summaries published as AI-generated
192 +* **Mode 3**: After human validation
193 +* **Mode 1**: Failed quality checks or pending expert review
194 +
215 215  ----
216 216  
217 -== Re-evaluation Workflow ==
197 +== Verdict Generation Workflow ==
218 218  
219 -**Purpose:**
220 -Keep verdicts current when evidence or scenarios change.
199 +=== Step 1: Verdict Computation ===
221 221  
222 -=== Trigger Types ===
201 +**AKEL Computes**:
223 223  
224 -* Evidence updated, disputed, retracted
225 -* Scenario assumptions changed
226 -* Claim reclassification
227 -* AKEL contradiction detection
228 -* Federation sync
203 +* Verdict across scenarios
204 +* Confidence scores
205 +* Uncertainty quantification
206 +* Key assumptions
207 +* Limitations
229 229  
230 -=== Steps ===
209 +**Inputs**:
231 231  
232 -1. **Trigger Detection**
233 -Re-evaluation engine receives event.
211 +* Claim text
212 +* Scenario definitions
213 +* Evidence assessments
214 +* Contradiction search results
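
A minimal sketch of turning per-scenario assessments into a verdict draft with confidence and uncertainty; the simple mean/spread aggregation is illustrative, not the specified method, and assumes at least one scenario score:

{{code language="python"}}
from statistics import mean, pstdev


def draft_verdict(scenario_likelihoods: dict[str, float]) -> dict:
    """Aggregate per-scenario likelihoods (0.0-1.0) into a verdict draft."""
    scores = list(scenario_likelihoods.values())
    likelihood = mean(scores)
    uncertainty = pstdev(scores) if len(scores) > 1 else 0.0   # spread as a crude proxy
    return {
        "likelihood": round(likelihood, 2),
        "uncertainty": round(uncertainty, 2),
        "per_scenario": scenario_likelihoods,  # keeps the reasoning trail traceable
    }
{{/code}}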
234 234  
235 -1. **Impact Analysis**
236 -Find affected:
237 -* scenarios
238 -* evidence links
239 -* verdicts
216 +=== Step 2: Verdict Validation ===
240 240  
241 -1. **AKEL Draft Re-calculation**
242 -New:
243 -* likelihood
244 -* reasoning
245 -* uncertainty
218 +**Quality Gates**:
246 246  
247 -1. **Reviewer Validation**
248 -1. **Expert Review** (high-risk)
249 -1. **Version Storage**
220 +* All four gates apply (source, contradiction, uncertainty, structure)
221 +* Reasoning chain must be traceable
222 +* Assumptions must be explicit
250 250  
251 -**Flow:**
252 -Trigger → Analyze → Recompute → Review → Expert → Version
224 +**Risk Tier Check**:
253 253  
226 +* Tier A: Always requires expert validation for Mode 3
227 +* Tier B: Mode 2 allowed, audit sampling
228 +* Tier C: Mode 2 default
229 +
230 +=== Step 3: Verdict Publication ===
231 +
232 +**Mode 2 (AI-Generated Verdict)**:
233 +
234 +* Clear labeling with confidence scores
235 +* Uncertainty disclosure
236 +* Links to reasoning trail
237 +* User can request expert review
238 +
239 +**Mode 3 (Expert-Validated Verdict)**:
240 +
241 +* Human reviewer/expert stamp
242 +* Additional commentary (optional)
243 +* Highest trust level
244 +
254 254  ----
255 255  
256 -== Federation Synchronization Workflow ==
247 +== Audit Workflow ==
257 257  
258 -**Purpose:**
259 -Exchange structured data between nodes.
249 +=== Step 1: Audit Sampling Selection ===
260 260  
261 -=== Steps ===
262 -1. Detect version changes
263 -1. Build bundle (diff + Merkle tree + signatures)
264 -1. Push to peers
265 -1. Validate lineage + hashes
266 -1. Resolve conflicts (merge or branch)
267 -1. Optional re-evaluation
251 +**Stratified Sampling**:
268 268  
269 -**Flow:**
270 -Change → Bundle → Push → Validate → Merge/Fork → Update
253 +* Risk tier priority (A > B > C)
254 +* Low confidence scores
255 +* High traffic content
256 +* Novel topics
257 +* User flags
271 271  
259 +**Sampling Rates** (Recommendations):
260 +
261 +* Tier A: 30-50%
262 +* Tier B: 10-20%
263 +* Tier C: 5-10%
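
A sketch of the sampling decision, using the midpoints of the recommended ranges above as illustrative baselines and hypothetical flags for the priority factors:

{{code language="python"}}
import random

# Midpoints of the recommended ranges above, as illustrative baseline rates.
BASE_AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}


def select_for_audit(item: dict) -> bool:
    """Decide whether a Mode 2 item is pulled into the audit queue."""
    rate = BASE_AUDIT_RATES[item["risk_tier"]]
    if item.get("confidence", 1.0) < 0.5:      # low-confidence content sampled more often
        rate = min(1.0, rate * 2)
    if item.get("user_flagged") or item.get("high_traffic") or item.get("novel_topic"):
        rate = min(1.0, rate * 2)
    return random.random() < rate
{{/code}}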
264 +
265 +=== Step 2: Audit Execution ===
266 +
267 +**Auditor Actions**:
268 +
269 +1. Review sampled AI-generated content
270 +2. Validate quality gates were properly applied
271 +3. Check contradiction search completeness
272 +4. Assess reasoning quality
273 +5. Identify errors or hallucinations
274 +
275 +**Audit Outcome**:
276 +
277 +* **Pass**: Content remains in Mode 2, logged as validated
278 +* **Fail**: Content flagged for review, system improvement triggered
279 +
280 +=== Step 3: Feedback Loop ===
281 +
282 +**System Improvements**:
283 +
284 +* Failed audits analyzed for patterns
285 +* AKEL parameters adjusted
286 +* Quality gates refined
287 +* Risk tier assignments recalibrated
288 +
289 +**Transparency**:
290 +
291 +* Audit statistics published periodically
292 +* Patterns shared with community
293 +* System improvements documented
294 +
272 272  ----
273 273  
274 -== User Role & Review Workflow ==
297 +== Mode Transition Workflow ==
275 275  
276 -**Purpose:**
277 -Ensure correctness, neutrality, safety, and resistance to manipulation.
299 +=== Mode 1 → Mode 2 ===
278 278  
279 -=== Steps ===
301 +**Requirements**:
280 280  
281 -1. **Submission**
282 -Claim / scenario / evidence / verdict.
303 +* All quality gates pass
304 +* Risk tier B or C (or A with warnings)
305 +* Contradiction search completed
283 283  
284 -1. **Auto-check (AKEL)**
285 -Flags unsafe content, contradictions, format issues.
307 +**Trigger**: Automatic upon quality gate validation
286 286  
287 -1. **Reviewer Validation**
309 +=== Mode 2 → Mode 3 ===
288 288  
289 -1. **Expert Validation**
290 -Required for sensitive domains.
311 +**Requirements**:
291 291  
292 -1. **Moderator Oversight**
293 -Triggered by suspicious behavior.
313 +* Human reviewer/expert validation
314 +* Quality standards confirmed
315 +* For Tier A: Expert approval required
316 +* For Tier B/C: Reviewer approval sufficient
294 294  
295 -**Flow:**
296 -Submit → Auto-check → Review → Expert → Moderator (if needed)
318 +**Trigger**: Human review completion
297 297  
320 +=== Mode 3 → Mode 1 (Demotion) ===
321 +
322 +**Rare - Only if**:
323 +
324 +* New evidence contradicts verdict
325 +* Error discovered in reasoning
326 +* Source retraction
327 +
328 +**Process**:
329 +
330 +1. Content flagged for re-evaluation
331 +2. Moved to draft (Mode 1)
332 +3. Re-processed through workflow
333 +4. Reason for demotion documented
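
A sketch of the transition guards described in this section; the context keys and integer mode codes are illustrative:

{{code language="python"}}
def can_transition(current_mode: int, target_mode: int, ctx: dict) -> bool:
    """Guard conditions for the three transitions described in this section."""
    if (current_mode, target_mode) == (1, 2):
        return ctx["all_gates_pass"] and ctx["contradiction_search_done"]
    if (current_mode, target_mode) == (2, 3):
        if ctx["risk_tier"] == "A":
            return ctx["expert_approved"]        # Tier A requires an expert
        return ctx["reviewer_approved"]          # Tier B/C: reviewer approval sufficient
    if (current_mode, target_mode) == (3, 1):
        return bool(ctx.get("demotion_reason"))  # demotion only with a documented reason
    return False
{{/code}}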
334 +
298 298  ----
299 299  
300 -== AKEL Workflow ==
337 +== User Actions Across Modes ==
301 301  
302 -**Purpose:**
303 -Support extraction, drafting, structuring, and contradiction detection.
339 +=== On Mode 1 (Draft) Content ===
304 304  
305 -=== Stages ===
341 +**Contributors**:
306 306  
307 -**A — Input Understanding:**
308 -Extraction, normalization, classification.
343 +* Edit their own drafts
344 +* Submit for review
309 309  
310 -**B — Scenario Drafting:**
311 -Definitions, boundaries, assumptions.
346 +**Reviewers/Experts**:
312 312  
313 -**C — Evidence Processing:**
314 -Retrieval, summarization, ranking.
348 +* View and comment
349 +* Request changes
350 +* Approve for Mode 2 or Mode 3
315 315  
316 -**D — Verdict Drafting:**
317 -Likelihood, explanations, uncertainties.
352 +=== On Mode 2 (AI-Generated) Content ===
318 318  
319 -**E — Safety & Integrity:**
320 -Contradictions, hallucination detection.
354 +**All Users**:
321 321  
322 -**F — Human Approval:**
323 -Reviewer and/or expert.
356 +* Read and use content
357 +* Request human review
358 +* Flag for expert attention
359 +* Provide feedback
324 324  
325 -**Flow:**
326 -Input → Drafts → Integrity → Human Approval
361 +**Reviewers/Experts**:
327 327  
363 +* Validate for Mode 3 transition
364 +* Edit and refine
365 +* Adjust risk tier if needed
366 +
367 +=== On Mode 3 (Human-Reviewed) Content ===
368 +
369 +**All Users**:
370 +
371 +* Read with highest confidence
372 +* Can still flag if new evidence emerges
373 +
374 +**Reviewers/Experts**:
375 +
376 +* Update if needed
377 +* Trigger re-evaluation if new evidence
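
The role and mode permissions above can be summarized as a matrix; a hypothetical encoding in which experts share the reviewer entries:

{{code language="python"}}
# Hypothetical permission matrix: mode -> role -> allowed actions.
PERMISSIONS = {
    1: {  # Draft
        "contributor": {"edit_own", "submit_for_review"},
        "reviewer": {"view", "comment", "request_changes", "approve"},
    },
    2: {  # AI-Generated
        "user": {"read", "request_review", "flag", "give_feedback"},
        "reviewer": {"validate_for_mode_3", "edit", "adjust_risk_tier"},
    },
    3: {  # Human-Reviewed
        "user": {"read", "flag_new_evidence"},
        "reviewer": {"update", "trigger_reevaluation"},
    },
}


def allowed(mode: int, role: str, action: str) -> bool:
    """Check whether a role may perform an action on content in a given mode."""
    return action in PERMISSIONS.get(mode, {}).get(role, set())
{{/code}}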
378 +
328 328  ----
329 329  
330 -== Global Trigger Flow (Cascade) ==
381 +== Diagram References ==
331 331  
332 -Trigger Sources:
333 -* Claim change
334 -* Scenario change
335 -* Evidence change
336 -* Verdict contradiction
337 -* Federation update
338 -* AKEL model improvements
383 +=== Claim and Scenario Lifecycle (Overview) ===
339 339  
340 -**Cascade Flow:**
341 -Trigger → Dependency Graph → Re-evaluation → Updated Verdicts
385 +{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}
342 342  
387 +=== Claim and Scenario Workflow ===
388 +
389 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}
390 +
391 +=== Evidence and Verdict Workflow ===
392 +
393 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
394 +
395 +=== Quality and Audit Workflow ===
396 +
397 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}
398 +
399 +=== Manual vs Automated Matrix ===
400 +
401 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
402 +
403 +----
404 +
405 +== Related Pages ==
406 +
407 +* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
408 +* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
409 +* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
410 +* [[Governance>>FactHarbor.Organisation.Governance]]