Changes for page Workflows

Last modified by Robert Schaub on 2025/12/24 20:34

From version 2.1
edited by Robert Schaub
on 2025/12/11 20:16
Change comment: Imported from XAR
To version 7.1
edited by Robert Schaub
on 2025/12/15 16:56
Change comment: Imported from XAR

Summary

Details

Page properties
Content
@@ -1,343 +1,368 @@
1 1  = Workflows =
2 2  
3 -This chapter defines the core workflows used across the FactHarbor system.
3 +This page describes the core workflows for content creation, review, and publication in FactHarbor.
4 4  
5 -Each workflow describes:
5 +== Overview ==
6 6  
7 -* Purpose
8 -* Participants
9 -* Steps
10 -* Automation vs. manual work
11 -* (Wherever applicable) linear ASCII flow: a → b → c → d
7 +FactHarbor workflows support three publication modes with risk-based review:
12 12  
13 -Workflows included:
9 +* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
10 +* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
11 +* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust
14 14  
15 -1. Claim Workflow
16 -2. Scenario Workflow
17 -3. Evidence Workflow
18 -4. Verdict Workflow
19 -5. Re-evaluation Workflow
20 -6. Federation Synchronization Workflow
21 -7. User Role & Review Workflow
22 -8. AKEL Workflow
23 -9. Global Trigger Flow
24 -10. Entity Lifecycle Notes
13 +Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).
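
A minimal sketch of how the publication modes, risk tiers, and content types above could be represented in code. This is illustrative only; the enum values and field names are assumptions, not part of the specification.

{{code language="python"}}
from dataclasses import dataclass
from enum import Enum

class PublicationMode(Enum):
    DRAFT = 1           # Mode 1: internal only
    AI_GENERATED = 2    # Mode 2: public, clearly labeled as AI-generated
    HUMAN_REVIEWED = 3  # Mode 3: public, validated by a human reviewer/expert

class RiskTier(Enum):
    A = "high"
    B = "medium"
    C = "low"

class ContentType(Enum):
    CLAIM = "claim"
    SCENARIO = "scenario"
    EVIDENCE = "evidence"
    VERDICT = "verdict"

@dataclass
class ContentItem:
    content_type: ContentType
    risk_tier: RiskTier
    mode: PublicationMode = PublicationMode.DRAFT  # every item starts life as a draft
{{/code}}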
25 25  
26 26  ----
27 27  
28 -== Claim Workflow ==
17 +== Claim Submission & Publication Workflow ==
29 29  
30 -**Purpose:**
31 -Transform raw text or input material into a normalized, classified, deduplicated, and versioned claim ready for scenario evaluation.
19 +=== Step 1: Claim Submission ===
32 32  
33 -**Participants:**
34 -* Contributor
35 -* AKEL
36 -* Reviewer
21 +**Actor**: Contributor or AKEL
37 37  
38 -=== Steps ===
23 +**Actions**:
24 +* Submit claim text
25 +* Provide initial sources (optional for human contributors, mandatory for AKEL)
26 +* System assigns initial AuthorType (Human or AI)
39 39  
40 -**1. Ingestion**
41 -* User submits text, URL, transcript, or multi-claim content
42 -* AKEL extracts one or multiple claims
28 +**Output**: Claim draft created
43 43  
44 -**2. Normalization**
45 -* Standardizes wording
46 -* Reduces ambiguity
47 -* Flags implicit assumptions
30 +=== Step 2: AKEL Processing ===
48 48  
49 -**3. Classification (AKEL draft → Reviewer confirm)**
50 -* ClaimType
51 -* Domain
52 -* Evaluability
53 -* SafetyCategory
32 +**Automated Steps**:
33 +1. Claim extraction and normalization
34 +2. Classification (domain, type, evaluability)
35 +3. Risk tier assignment (A/B/C suggested)
36 +4. Initial scenario generation
37 +5. Evidence search
38 +6. **Contradiction search** (mandatory)
39 +7. Quality gate validation
54 54  
55 -**4. Duplicate & Similarity Detection**
56 -* Embeddings created
57 -* Similar claims found
58 -* Reviewer merges, splits, or confirms uniqueness
41 +**Output**: Processed claim with risk tier and quality gate results
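
The seven automated steps can be read as a fixed pipeline. The sketch below uses hypothetical helper functions, stubbed here so the example runs; none of the function names come from the specification.

{{code language="python"}}
# Placeholder stages; a real implementation would call AKEL services instead.
def extract_and_normalize(text): return text.strip()
def classify(claim): return {"domain": "general", "claim_type": "factual", "evaluable": True}
def suggest_risk_tier(claim, cls): return "C"
def generate_scenarios(claim, cls): return [f"Default reading of: {claim}"]
def search_evidence(claim, scenarios): return []
def search_contradictions(claim, scenarios): return []   # mandatory step, even if nothing is found
def run_quality_gates(claim, evidence, contradictions):
    return {"source_quality": bool(evidence), "contradiction_search": True,
            "uncertainty_quantification": True, "structural_integrity": bool(claim)}

def process_claim(text):
    """Ordered pipeline mirroring the seven automated steps above."""
    claim = extract_and_normalize(text)                          # 1. extraction & normalization
    cls = classify(claim)                                        # 2. classification
    tier = suggest_risk_tier(claim, cls)                         # 3. suggested risk tier (A/B/C)
    scenarios = generate_scenarios(claim, cls)                   # 4. initial scenario generation
    evidence = search_evidence(claim, scenarios)                 # 5. evidence search
    contradictions = search_contradictions(claim, scenarios)     # 6. mandatory contradiction search
    gates = run_quality_gates(claim, evidence, contradictions)   # 7. quality gate validation
    return {"claim": claim, "risk_tier": tier, "gates": gates,
            "scenarios": scenarios, "evidence": evidence, "contradictions": contradictions}
{{/code}}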
59 59  
60 -**5. Version Creation**
61 -* New ClaimVersion stored
62 -* Every edit creates a new immutable version
43 +=== Step 3: Quality Gate Checkpoint ===
63 63  
64 -**6. Cluster Assignment**
65 -* AKEL proposes cluster membership
66 -* Reviewer confirms
45 +**Gates Evaluated**:
46 +* Source quality
47 +* Contradiction search completion
48 +* Uncertainty quantification
49 +* Structural integrity
67 67  
68 -**7. Scenario Linking (optional)**
69 -* Existing scenarios connected
70 -* AKEL may propose new drafts
51 +**Outcomes**:
52 +* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
53 +* **Any gate fails** → Mode 1 (Draft), flagged for human review
54 +* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue
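
One way to express this checkpoint, assuming the gate results arrive as a simple mapping; the gate names and dictionary shape are illustrative, not specified:

{{code language="python"}}
REQUIRED_GATES = ("source_quality", "contradiction_search",
                  "uncertainty_quantification", "structural_integrity")

def gates_pass(results: dict) -> bool:
    """True only if every required gate passes; a missing gate counts as a failure."""
    return all(results.get(gate, False) for gate in REQUIRED_GATES)

# A claim whose contradiction search never ran fails the checkpoint:
print(gates_pass({"source_quality": True, "uncertainty_quantification": True,
                  "structural_integrity": True}))  # False
{{/code}}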
71 71  
72 -**8. Publication**
73 -* Claim becomes active and visible
56 +=== Step 4: Publication (Risk-Tier Dependent) ===
74 74  
75 -**Flow:**
76 -Ingest → Normalize → Classify → Deduplicate → Cluster → Version → Publish
58 +**Tier C (Low Risk)**:
59 +* **Direct to Mode 2**: AI-generated, public, clearly labeled
60 +* User can request human review
61 +* Sampling audit applies
77 77  
78 -----
63 +**Tier B (Medium Risk)**:
64 +* **Direct to Mode 2**: AI-generated, public, clearly labeled
65 +* Higher audit sampling rate
66 +* High-engagement content may auto-escalate
79 79  
80 -== Scenario Workflow ==
68 +**Tier A (High Risk)**:
69 +* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
70 +* **Auto-escalated** to expert review queue
71 +* User warnings displayed
72 +* Highest audit sampling rate
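
A hedged sketch of the tier-dependent routing above. The mode numbers follow the overview; the audit rates are taken from the recommended bands in the Audit Workflow and are policy choices, not fixed values.

{{code language="python"}}
def initial_publication(tier: str, all_gates_pass: bool) -> dict:
    """Route a processed item to its initial publication state (illustrative only)."""
    if not all_gates_pass:
        # Any failed gate: stay in Mode 1 and flag for human review.
        return {"mode": 1, "escalate": True, "warnings": False, "audit_rate": 0.0}
    if tier == "A":
        # High risk: public with prominent warnings plus automatic expert escalation.
        return {"mode": 2, "escalate": True, "warnings": True, "audit_rate": 0.40}
    if tier == "B":
        return {"mode": 2, "escalate": False, "warnings": False, "audit_rate": 0.15}
    # Tier C: lowest audit sampling rate.
    return {"mode": 2, "escalate": False, "warnings": False, "audit_rate": 0.075}
{{/code}}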
81 81  
82 -**Purpose:**
83 -Define the specific analytic contexts needed to evaluate each claim.
74 +=== Step 5: Human Review (Optional for B/C, Escalated for A) ===
84 84  
85 -**Participants:**
86 -* Contributor
87 -* Reviewer
88 -* Domain Expert
89 -* AKEL
76 +**Triggers**:
77 +* User requests review
78 +* Audit flags issues
79 +* High engagement (Tier B)
80 +* Automatic (Tier A)
90 90  
91 -=== Steps ===
82 +**Process**:
83 +1. Reviewer/Expert examines claim
84 +2. Validates quality gates
85 +3. Checks contradiction search results
86 +4. Assesses risk tier appropriateness
87 +5. Decision: Approve, Request Changes, or Reject
92 92  
93 -**1. Scenario Proposal**
94 -* Drafted by contributor or generated by AKEL
89 +**Outcomes**:
90 +* **Approved** → Mode 3 (Human-Reviewed)
91 +* **Changes Requested** → Back to contributor or AKEL for revision
92 +* **Rejected** → Marked as rejected, with reasoning documented
95 95  
96 -**2. Completion of Required Fields**
97 -Must include:
98 -* Definitions
99 -* Assumptions
100 -* ContextBoundary
101 -* EvaluationMethod
102 -* SafetyClass
103 -* VersionMetadata
94 +----
104 104  
105 -**3. Safety Interception (AKEL)**
106 -Flags:
107 -* non-falsifiable structures
108 -* pseudoscientific assumptions
109 -* unsafe contexts
96 +== Scenario Creation Workflow ==
110 110  
111 -**4. Redundancy & Conflict Check**
112 -* Similar scenarios merged
113 -* Contradictions flagged
98 +=== Step 1: Scenario Generation ===
114 114  
115 -**5. Reviewer Validation**
116 -Ensures clarity, neutrality, and methodological validity.
100 +**Automated (AKEL)**:
101 +* Generate scenarios for claim
102 +* Define boundaries, assumptions, context
103 +* Identify evaluation methods
117 117  
118 -**6. Expert Approval (mandatory for high-risk domains)**
105 +**Manual (Expert/Reviewer)**:
106 +* Create custom scenarios
107 +* Refine AKEL-generated scenarios
108 +* Add domain-specific nuances
119 119  
120 -**7. Version Storage**
121 -* Each revision = new ScenarioVersion
110 +=== Step 2: Scenario Validation ===
122 122  
123 -**Flow:**
124 -Draft → Validate → Safety Check → Review → Expert Approval → Version → Activate
112 +**Quality Checks**:
113 +* Completeness (definitions, boundaries, assumptions clear)
114 +* Relevance to claim
115 +* Evaluability
116 +* No circular logic
125 125  
126 -----
118 +**Risk Tier Assignment**:
119 +* Inherits from parent claim
120 +* Can be overridden by an expert if the scenario increases or decreases risk
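
Inheritance with an expert override can be stated in a few lines; this is a sketch, and the function and argument names are assumptions:

{{code language="python"}}
from typing import Optional

VALID_TIERS = {"A", "B", "C"}

def scenario_risk_tier(parent_claim_tier: str, expert_override: Optional[str] = None) -> str:
    """A scenario inherits its parent claim's tier unless an expert explicitly overrides it."""
    if expert_override is not None:
        if expert_override not in VALID_TIERS:
            raise ValueError(f"Unknown risk tier: {expert_override}")
        return expert_override
    return parent_claim_tier

print(scenario_risk_tier("C"))                       # inherits "C" from the parent claim
print(scenario_risk_tier("C", expert_override="B"))  # expert judges the scenario as riskier
{{/code}}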
127 127  
128 -== Evidence Workflow ==
122 +=== Step 3: Scenario Publication ===
129 129  
130 -**Purpose:**
131 -Structure, classify, validate, version, and link evidence to scenarios.
124 +**Mode 2 (AI-Generated)**:
125 +* Tier B/C scenarios can be published immediately
126 +* Subject to sampling audits
132 132  
133 -**Participants:**
134 -* Contributor
135 -* Reviewer
136 -* Domain Expert
137 -* AKEL
128 +**Mode 1 (Draft)**:
129 +* Tier A scenarios default to draft
130 +* Require expert validation for Mode 2 or Mode 3
138 138  
139 -=== Steps ===
132 +----
140 140  
141 -**1. Evidence Submission**
142 -* File, dataset, URL, or extracted text
134 +== Evidence Evaluation Workflow ==
143 143  
144 -**2. Metadata Extraction (AKEL)**
145 -* EvidenceType
146 -* Category
147 -* Provenance
148 -* Study design
149 -* ExtractionMethod
150 -* ReliabilityHints
136 +=== Step 1: Evidence Search & Retrieval ===
151 151  
152 -**3. Relevance Check**
153 -Reviewer verifies which scenarios the evidence applies to.
138 +**AKEL Actions**:
139 +* Search academic databases and reputable media
140 +* **Mandatory contradiction search** (counter-evidence, reservations)
141 +* Extract metadata (author, date, publication, methodology)
142 +* Assess source reliability
154 154  
155 -**4. Reliability Assessment**
156 -* AKEL proposes reliability
157 -* Reviewer confirms
158 -* Expert review for complex papers
144 +**Quality Requirements**:
145 +* Primary sources preferred
146 +* Diverse perspectives included
147 +* Echo chambers flagged
148 +* Conflicting evidence acknowledged
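
A minimal sketch of what an evidence search result could carry so that the mandatory contradiction search is enforceable downstream; all names are illustrative.

{{code language="python"}}
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceBundle:
    """Result of an AKEL evidence search for one scenario."""
    supporting: List[dict] = field(default_factory=list)
    contradicting: List[dict] = field(default_factory=list)  # counter-evidence and reservations
    contradiction_search_done: bool = False                  # must be True before gates can pass

    def meets_search_requirements(self) -> bool:
        # The search itself is mandatory; finding no counter-evidence is acceptable,
        # skipping the search is not.
        return self.contradiction_search_done
{{/code}}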
159 159  
160 -**5. ScenarioEvidenceLink Creation**
161 -Each link stores:
162 -* relevance score
163 -* justification
164 -* evidence version
150 +=== Step 2: Evidence Summarization ===
165 165  
166 -**6. Versioning**
167 -* Any update = new EvidenceVersion
152 +**AKEL Generates**:
153 +* Summary of evidence
154 +* Relevance assessment
155 +* Reliability score
156 +* Limitations and caveats
157 +* Conflicting evidence summary
168 168  
169 -**Flow:**
170 -Submit → Extract Metadata → Evaluate Relevance → Score Reliability → Link → Version
159 +**Quality Gate**: Structural integrity, source quality
171 171  
172 -----
161 +=== Step 3: Evidence Review ===
173 173  
174 -== Verdict Workflow ==
163 +**Reviewer/Expert Validates**:
164 +* Accuracy of summaries
165 +* Appropriateness of sources
166 +* Completeness of contradiction search
167 +* Reliability assessments
175 175  
176 -**Purpose:**
177 -Generate likelihood estimates per scenario based on evidence and scenario structure.
169 +**Outcomes**:
170 +* **Mode 2**: Evidence summaries published as AI-generated
171 +* **Mode 3**: After human validation
172 +* **Mode 1**: Failed quality checks or pending expert review
178 178  
179 -**Participants:**
180 -* AKEL (drafts)
181 -* Reviewer
182 -* Domain Expert
174 +----
183 183  
184 -=== Steps ===
176 +== Verdict Generation Workflow ==
185 185  
186 -**1. Evidence Aggregation**
187 -Collect relevant evidence versions.
178 +=== Step 1: Verdict Computation ===
188 188  
189 -**2. Draft Verdict Generation (AKEL)**
190 -Outputs:
191 -* likelihood range
192 -* uncertainty factors
193 -* conflict detection
194 -* sensitivity analysis
180 +**AKEL Computes**:
181 +* Verdict across scenarios
182 +* Confidence scores
183 +* Uncertainty quantification
184 +* Key assumptions
185 +* Limitations
195 195  
196 -**3. Reasoning Draft**
197 -Structured explanation chain generated by AKEL.
187 +**Inputs**:
188 +* Claim text
189 +* Scenario definitions
190 +* Evidence assessments
191 +* Contradiction search results
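
The outputs and inputs listed above suggest a verdict record shaped roughly as follows; the field names are assumptions for illustration.

{{code language="python"}}
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VerdictDraft:
    """Illustrative shape of an AKEL-computed verdict."""
    per_scenario: Dict[str, str]    # scenario id -> verdict label
    confidence: Dict[str, float]    # scenario id -> confidence score in [0, 1]
    uncertainty: str                # quantified uncertainty, stated explicitly
    assumptions: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)
    contradiction_findings: List[str] = field(default_factory=list)
{{/code}}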
198 198  
199 -**4. Reviewer Validation**
200 -Ensures logic, evidence fit, no hallucinations.
193 +=== Step 2: Verdict Validation ===
201 201  
202 -**5. Expert Review**
203 -Required for:
204 -* medicine
205 -* psychology
206 -* engineering
207 -* political misinformation
208 -* controversial or risky domains
195 +**Quality Gates**:
196 +* All four gates apply (source, contradiction, uncertainty, structure)
197 +* Reasoning chain must be traceable
198 +* Assumptions must be explicit
209 209  
210 -**6. Verdict Storage**
211 -* Every update creates a new VerdictVersion
200 +**Risk Tier Check**:
201 +* Tier A: Always requires expert validation for Mode 3
202 +* Tier B: Mode 2 allowed, audit sampling
203 +* Tier C: Mode 2 default
212 212  
213 -**Flow:**
214 -Aggregate → Draft Verdict → Draft Explanation → Review → Expert Approval → Version
205 +=== Step 3: Verdict Publication ===
215 215  
207 +**Mode 2 (AI-Generated Verdict)**:
208 +* Clear labeling with confidence scores
209 +* Uncertainty disclosure
210 +* Links to reasoning trail
211 +* User can request expert review
212 +
213 +**Mode 3 (Expert-Validated Verdict)**:
214 +* Human reviewer/expert stamp
215 +* Additional commentary (optional)
216 +* Highest trust level
217 +
216 216  ----
217 217  
218 -== Re-evaluation Workflow ==
220 +== Audit Workflow ==
219 219  
220 -**Purpose:**
221 -Keep verdicts current when evidence or scenarios change.
222 +=== Step 1: Audit Sampling Selection ===
222 222  
223 -=== Trigger Types ===
224 +**Stratified Sampling**:
225 +* Risk tier priority (A > B > C)
226 +* Low confidence scores
227 +* High traffic content
228 +* Novel topics
229 +* User flags
224 224  
225 -* Evidence updated, disputed, retracted
226 -* Scenario assumptions changed
227 -* Claim reclassification
228 -* AKEL contradiction detection
229 -* Federation sync
231 +**Sampling Rates** (Recommendations):
232 +* Tier A: 30-50%
233 +* Tier B: 10-20%
234 +* Tier C: 5-10%
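
A simple stratified-sampling sketch using the midpoints of the recommended bands; real rates are a policy decision, and factors such as low confidence, high traffic, or user flags would raise an item's selection probability.

{{code language="python"}}
import random

AUDIT_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}  # illustrative midpoints of the bands above

def select_for_audit(items, rates=AUDIT_RATES, seed=None):
    """Stratified sample: each item gets a tier-dependent chance of being audited."""
    rng = random.Random(seed)
    return [item for item in items if rng.random() < rates[item["risk_tier"]]]

published = [{"id": i, "risk_tier": random.choice("ABC")} for i in range(1000)]
audited = select_for_audit(published, seed=42)
print(len(audited), "items selected for audit")
{{/code}}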
230 230  
231 -=== Steps ===
236 +=== Step 2: Audit Execution ===
232 232  
233 -**1. Trigger Detection**
234 -Re-evaluation engine receives event.
238 +**Auditor Actions**:
239 +1. Review sampled AI-generated content
240 +2. Validate quality gates were properly applied
241 +3. Check contradiction search completeness
242 +4. Assess reasoning quality
243 +5. Identify errors or hallucinations
235 235  
236 -**2. Impact Analysis**
237 -Find affected:
238 -* scenarios
239 -* evidence links
240 -* verdicts
245 +**Audit Outcome**:
246 +* **Pass**: Content remains in Mode 2, logged as validated
247 +* **Fail**: Content flagged for review, system improvement triggered
241 241  
242 -**3. AKEL Draft Re-calculation**
243 -New:
244 -* likelihood
245 -* reasoning
246 -* uncertainty
249 +=== Step 3: Feedback Loop ===
247 247  
248 -**4. Reviewer Validation**
249 -**5. Expert Review** (high-risk)
250 -**6. Version Storage**
251 +**System Improvements**:
252 +* Failed audits analyzed for patterns
253 +* AKEL parameters adjusted
254 +* Quality gates refined
255 +* Risk tier assignments recalibrated
251 251  
252 -**Flow:**
253 -Trigger → Analyze → Recompute → Review → Expert → Version
257 +**Transparency**:
258 +* Audit statistics published periodically
259 +* Patterns shared with community
260 +* System improvements documented
254 254  
255 255  ----
256 256  
257 -== Federation Synchronization Workflow ==
264 +== Mode Transition Workflow ==
258 258  
259 -**Purpose:**
260 -Exchange structured data between nodes.
266 +=== Mode 1 → Mode 2 ===
261 261  
262 -=== Steps ===
263 -1. Detect version changes
264 -1. Build bundle (diff + Merkle tree + signatures)
265 -1. Push to peers
266 -1. Validate lineage + hashes
267 -1. Resolve conflicts (merge or branch)
268 -1. Optional re-evaluation
268 +**Requirements**:
269 +* All quality gates pass
270 +* Risk tier B or C (or A with warnings)
271 +* Contradiction search completed
269 269  
270 -**Flow:**
271 -Change → Bundle → Push → Validate → Merge/Fork → Update
273 +**Trigger**: Automatic upon quality gate validation
272 272  
275 +=== Mode 2 → Mode 3 ===
276 +
277 +**Requirements**:
278 +* Human reviewer/expert validation
279 +* Quality standards confirmed
280 +* For Tier A: Expert approval required
281 +* For Tier B/C: Reviewer approval sufficient
282 +
283 +**Trigger**: Human review completion
284 +
285 +=== Mode 3 → Mode 1 (Demotion) ===
286 +
287 +**Rare; only if**:
288 +* New evidence contradicts verdict
289 +* Error discovered in reasoning
290 +* Source retraction
291 +
292 +**Process**:
293 +1. Content flagged for re-evaluation
294 +2. Moved to draft (Mode 1)
295 +3. Re-processed through workflow
296 +4. Reason for demotion documented
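
The three transitions above form a small state machine; a sketch of how they could be checked (the transition table and names are illustrative):

{{code language="python"}}
# Permitted mode transitions and the condition each one requires.
ALLOWED_TRANSITIONS = {
    (1, 2): "all quality gates pass (Tier B/C, or Tier A with warnings)",
    (2, 3): "human reviewer/expert validation",
    (3, 1): "demotion: contradicting evidence, reasoning error, or source retraction",
}

def check_transition(current_mode: int, target_mode: int, reason: str = "") -> None:
    """Raise if the requested mode change is not one the workflow permits."""
    if (current_mode, target_mode) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"Transition {current_mode} -> {target_mode} is not defined")
    if current_mode == 3 and not reason:
        raise ValueError("Demotion from Mode 3 must document a reason")

check_transition(2, 3, reason="expert validation completed")  # permitted
{{/code}}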
297 +
273 273  ----
274 274  
275 -== User Role & Review Workflow ==
300 +== User Actions Across Modes ==
276 276  
277 -**Purpose:**
278 -Ensure correctness, neutrality, safety, and resistance to manipulation.
302 +=== On Mode 1 (Draft) Content ===
279 279  
280 -=== Steps ===
304 +**Contributors**:
305 +* Edit their own drafts
306 +* Submit for review
281 281  
282 -**1. Submission**
283 -Claim / scenario / evidence / verdict.
308 +**Reviewers/Experts**:
309 +* View and comment
310 +* Request changes
311 +* Approve for Mode 2 or Mode 3
284 284  
285 -**2. Auto-check (AKEL)**
286 -Flags unsafe content, contradictions, format issues.
313 +=== On Mode 2 (AI-Generated) Content ===
287 287  
288 -**3. Reviewer Validation**
315 +**All Users**:
316 +* Read and use content
317 +* Request human review
318 +* Flag for expert attention
319 +* Provide feedback
289 289  
290 -**4. Expert Validation**
291 -Required for sensitive domains.
321 +**Reviewers/Experts**:
322 +* Validate for Mode 3 transition
323 +* Edit and refine
324 +* Adjust risk tier if needed
292 292  
293 -**5. Moderator Oversight**
294 -Triggered by suspicious behavior.
326 +=== On Mode 3 (Human-Reviewed) Content ===
295 295  
296 -**Flow:**
297 -Submit → Auto-check → Review → Expert → Moderator (if needed)
328 +**All Users**:
329 +* Read with highest confidence
330 +* Can still flag content if new evidence emerges
298 298  
332 +**Reviewers/Experts**:
333 +* Update if needed
334 +* Trigger re-evaluation if new evidence emerges
335 +
299 299  ----
300 300  
301 -== AKEL Workflow ==
338 +== Diagram References ==
302 302  
303 -**Purpose:**
304 -Support extraction, drafting, structuring, and contradiction detection.
340 +=== Claim and Scenario Lifecycle (Overview) ===
305 305  
306 -=== Stages ===
342 +{{include reference="Test.FactHarborV09.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}
307 307  
308 -**A — Input Understanding:**
309 -Extraction, normalization, classification.
344 +=== Claim and Scenario Workflow ===
310 310  
311 -**B — Scenario Drafting:**
312 -Definitions, boundaries, assumptions.
346 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}
313 313  
314 -**C — Evidence Processing:**
315 -Retrieval, summarization, ranking.
348 +=== Evidence and Verdict Workflow ===
316 316  
317 -**D — Verdict Drafting:**
318 -Likelihood, explanations, uncertainties.
350 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
319 319  
320 -**E — Safety & Integrity:**
321 -Contradictions, hallucination detection.
352 +=== Quality and Audit Workflow ===
322 322  
323 -**F — Human Approval:**
324 -Reviewer and/or expert.
354 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}
325 325  
326 -**Flow:**
327 -Input → Drafts → Integrity → Human Approval
328 328  
329 -----
330 330  
331 -== Global Trigger Flow (Cascade) ==
358 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
332 332  
333 -Trigger Sources:
334 -* Claim change
335 -* Scenario change
336 -* Evidence change
337 -* Verdict contradiction
338 -* Federation update
339 -* AKEL model improvements
360 +----
340 340  
341 -**Cascade Flow:**
342 -Trigger → Dependency Graph → Re-evaluation → Updated Verdicts
362 +== Related Pages ==
343 343  
364 +* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
365 +* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
366 +* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
367 +* [[Governance>>FactHarbor.Organisation.Governance]]
368 +