Changes for page Workflows

Last modified by Robert Schaub on 2025/12/24 20:34

From version 7.1
edited by Robert Schaub
on 2025/12/15 16:56
Change comment: Imported from XAR
To version 3.1
edited by Robert Schaub
on 2025/12/12 09:32
Change comment: Imported from XAR

Details

Page properties
Content
... ... @@ -1,368 +1,127 @@
1 1  = Workflows =
2 2  
3 -This page describes the core workflows for content creation, review, and publication in FactHarbor.
3 +This chapter defines the core workflows used across the FactHarbor system.
4 4  
5 -== Overview ==
5 +Each workflow describes:
6 +* Purpose
7 +* Participants
8 +* Steps
9 +* Automation vs. manual work
6 6  
7 -FactHarbor workflows support three publication modes with risk-based review:
11 +== 1. Claim Workflow ==
8 8  
9 -* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
10 -* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
11 -* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust
13 +**Purpose:** Transform raw text or input material into a normalized, classified, deduplicated, and versioned claim.
12 12  
13 -Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).
15 +**Participants:**
16 +* Contributor
17 +* AKEL
18 +* Reviewer
14 14  
15 -----
20 +**Steps:**
21 +1. **Ingestion**: User submits text/URL; AKEL extracts claims.
22 +1. **Normalization**: Standardize wording, reduce ambiguity.
23 +1. **Classification**: Domain, Evaluability, Safety (AKEL draft → Human confirm).
24 +1. **Duplicate Detection**: Check embeddings for existing claims.
25 +1. **Version Creation**: Store new ClaimVersion.
26 +1. **Cluster Assignment**: Assign to Claim Cluster.
27 +1. **Scenario Linking**: Connect to existing or draft new scenarios.
27 +1. **Publication**: Make the claim publicly visible.
16 16  
17 -== Claim Submission & Publication Workflow ==
30 +**Flow:** Ingest → Normalize → Classify → Deduplicate → Cluster → Version → Publish
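The duplicate-detection step above (step 4) can be sketched as a nearest-neighbour lookup over claim embeddings. This is a minimal illustration only, not the FactHarbor implementation: the function names, the embedding vectors, and the 0.9 similarity threshold are all assumptions.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def find_duplicate(new_vec, existing, threshold=0.9):
    """Return the id of the closest existing claim above threshold, else None.

    `existing` maps claim ids to embedding vectors (hypothetical shape).
    """
    best_id, best_sim = None, threshold
    for claim_id, vec in existing.items():
        sim = cosine(new_vec, vec)
        if sim >= best_sim:
            best_id, best_sim = claim_id, sim
    return best_id
```

A near-identical embedding resolves to the existing claim; a sufficiently different one creates a new ClaimVersion instead.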
18 18  
19 -=== Step 1: Claim Submission ===
32 +== 2. Scenario Workflow ==
20 20  
21 -**Actor**: Contributor or AKEL
34 +**Purpose:** Define the specific analytic contexts needed to evaluate each claim.
22 22  
23 -**Actions**:
24 -* Submit claim text
25 -* Provide initial sources (optional for human contributors, mandatory for AKEL)
26 -* System assigns initial AuthorType (Human or AI)
36 +**Steps:**
37 +1. **Scenario Proposal**: Drafted by contributor or AKEL.
38 +1. **Required Fields**: Definitions, Assumptions, ContextBoundary, EvaluationMethod, SafetyClass.
39 +1. **Safety Interception**: AKEL flags non-falsifiable or unsafe content.
40 +1. **Conflict Check**: Merge similar scenarios, flag contradictions.
41 +1. **Reviewer Validation**: Ensure clarity and validity.
42 +1. **Expert Approval**: Mandatory for high-risk domains.
43 +1. **Version Storage**: Save ScenarioVersion.
27 27  
28 -**Output**: Claim draft created
45 +**Flow:** Draft → Validate → Safety Check → Review → Expert → Version → Activate
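The required-fields check in step 2 can be expressed as a simple validation pass before a ScenarioVersion is stored. The field names come from the list above; the dict-based record shape is an assumption for illustration.

```python
# Fields a scenario draft must carry before versioning (from the workflow above).
REQUIRED_FIELDS = ("Definitions", "Assumptions", "ContextBoundary",
                   "EvaluationMethod", "SafetyClass")

def missing_fields(scenario: dict) -> list:
    """Return the required fields that are absent or empty in a draft."""
    return [f for f in REQUIRED_FIELDS if not scenario.get(f)]
```

A draft with an empty result list proceeds to the safety and conflict checks; anything else is returned to the contributor.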
29 29  
30 -=== Step 2: AKEL Processing ===
47 +== 3. Evidence Workflow ==
31 31  
32 -**Automated Steps**:
33 -1. Claim extraction and normalization
34 -2. Classification (domain, type, evaluability)
35 -3. Risk tier assignment (A/B/C suggested)
36 -4. Initial scenario generation
37 -5. Evidence search
38 -6. **Contradiction search** (mandatory)
39 -7. Quality gate validation
49 +**Purpose:** Structure, classify, validate, version, and link evidence to scenarios.
40 40  
41 -**Output**: Processed claim with risk tier and quality gate results
51 +**Steps:**
52 +1. **Submission**: File, URL, or text.
53 +1. **Metadata Extraction**: Type, Category, Provenance, ReliabilityHints.
54 +1. **Relevance Check**: Verify applicability to scenario.
55 +1. **Reliability Assessment**: Score reliability (Reviewer + Expert).
56 +1. **Link Creation**: Create ScenarioEvidenceLink with relevance score.
57 +1. **Versioning**: Update EvidenceVersion.
42 42  
43 -=== Step 3: Quality Gate Checkpoint ===
59 +**Flow:** Submit → Extract → Relevance → Reliability → Link → Version
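Step 5 (Link Creation) pairs a scenario with a piece of evidence and records a relevance score. The entity name ScenarioEvidenceLink comes from the workflow above; the dataclass shape and the [0, 1] score range are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScenarioEvidenceLink:
    """Immutable link between one scenario and one piece of evidence."""
    scenario_id: str
    evidence_id: str
    relevance: float  # assumed range: 0.0 (irrelevant) .. 1.0 (directly on point)

    def __post_init__(self):
        # Reject out-of-range scores at creation time.
        if not 0.0 <= self.relevance <= 1.0:
            raise ValueError("relevance must be in [0, 1]")
```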
44 44  
45 -**Gates Evaluated**:
46 -* Source quality
47 -* Contradiction search completion
48 -* Uncertainty quantification
49 -* Structural integrity
61 +== 4. Verdict Workflow ==
50 50  
51 -**Outcomes**:
52 -* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
53 -* **Any gate fails** → Mode 1 (Draft), flag for human review
54 -* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue
63 +**Purpose:** Generate likelihood estimates **per scenario** based on evidence.
55 55  
56 -=== Step 4: Publication (Risk-Tier Dependent) ===
65 +**Steps:**
66 +1. **Aggregation**: Collect linked evidence for a specific scenario.
67 +1. **Draft Verdict**: AKEL proposes likelihood and uncertainty for that scenario.
68 +1. **Reasoning**: AKEL drafts explanation chain.
69 +1. **Validation**: Reviewer checks the reasoning chain and screens for hallucinations.
70 +1. **Expert Review**: Required for sensitive topics.
71 +1. **Storage**: Save VerdictVersion.
57 57  
58 -**Tier C (Low Risk)**:
59 -* **Direct to Mode 2**: AI-generated, public, clearly labeled
60 -* User can request human review
61 -* Sampling audit applies
73 +**Flow:** Aggregate → Draft → Reasoning → Review → Expert → Version
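Steps 1-2 (Aggregation and Draft Verdict) can be sketched as a reliability-weighted mean of evidence stances, mapped to a likelihood in [0, 1]. The chapter does not specify FactHarbor's actual aggregation formula, so this one is purely illustrative.

```python
def draft_likelihood(evidence):
    """Aggregate (stance, reliability) pairs for one scenario.

    stance: -1.0 (refutes) .. +1.0 (supports); reliability: 0.0 .. 1.0.
    Returns a likelihood in [0, 1]; 0.5 when no usable evidence exists.
    """
    total_weight = sum(r for _, r in evidence)
    if total_weight == 0:
        return 0.5  # no evidence: maximal uncertainty
    mean_stance = sum(s * r for s, r in evidence) / total_weight
    return (mean_stance + 1) / 2  # map [-1, 1] onto [0, 1]
```

Evenly balanced evidence lands at 0.5, which is one way the draft verdict can surface its uncertainty to the reviewer.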
62 62  
63 -**Tier B (Medium Risk)**:
64 -* **Direct to Mode 2**: AI-generated, public, clearly labeled
65 -* Higher audit sampling rate
66 -* High-engagement content may auto-escalate
75 +== 5. Re-evaluation Workflow ==
67 67  
68 -**Tier A (High Risk)**:
69 -* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
70 -* **Auto-escalated** to expert review queue
71 -* User warnings displayed
72 -* Highest audit sampling rate
77 +**Purpose:** Keep verdicts current when inputs change.
73 73  
74 -=== Step 5: Human Review (Optional for B/C, Escalated for A) ===
79 +**Steps:**
80 +1. **Trigger**: Evidence update, Scenario change, or Contradiction.
81 +1. **Impact Analysis**: Identify affected nodes.
82 +1. **Re-calculation**: AKEL proposes new likelihoods.
83 +1. **Validation**: Human review.
84 +1. **Storage**: New version.
75 75  
76 -**Triggers**:
77 -* User requests review
78 -* Audit flags issues
79 -* High engagement (Tier B)
80 -* Automatic (Tier A)
86 +**Flow:** Trigger → Analyze → Recompute → Review → Version
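Step 2 (Impact Analysis) amounts to walking a dependency graph from the changed node to every verdict downstream of it. The graph representation (a dict from node id to its dependents) is an assumption for this sketch.

```python
from collections import deque

def affected_nodes(changed, dependents):
    """Breadth-first walk returning every node downstream of `changed`.

    `dependents` maps a node id to the ids that depend on it,
    e.g. evidence -> scenarios -> verdicts.
    """
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

Only the returned nodes need re-calculation, which keeps a single evidence update from re-running the whole corpus.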
81 81  
82 -**Process**:
83 -1. Reviewer/Expert examines claim
84 -2. Validates quality gates
85 -3. Checks contradiction search results
86 -4. Assesses risk tier appropriateness
87 -5. Decision: Approve, Request Changes, or Reject
88 +== 6. Federation Synchronization Workflow ==
88 88  
89 -**Outcomes**:
90 -* **Approved** → Mode 3 (Human-Reviewed)
91 -* **Changes Requested** → Back to contributor or AKEL for revision
92 -* **Rejected** → Rejected status with reasoning
90 +**Purpose:** Exchange structured data between nodes.
93 93  
94 -----
92 +**Steps:**
93 +1. Detect Version Changes.
94 +1. Build Signed Bundle (Merkle tree).
95 +1. Push/Pull to Peers.
96 +1. Validate Signatures & Lineage.
97 +1. Resolve Conflicts (Merge/Fork).
98 +1. Trigger Re-evaluation.
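Step 2 (Build Signed Bundle) references a Merkle tree. A minimal root computation over version hashes is sketched below; the real bundle format and signature scheme are not specified in this chapter, so everything beyond "a Merkle root commits to a list of leaves" is an assumption.

```python
from hashlib import sha256

def merkle_root(leaves):
    """Compute a Merkle root (hex) over a list of byte-string leaves."""
    if not leaves:
        return sha256(b"").hexdigest()
    level = [sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

Peers validating a bundle recompute the root from the received version hashes and compare it against the signed value; any tampering changes the root.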
95 95  
96 -== Scenario Creation Workflow ==
100 +== 7. User Role & Review Workflow ==
97 97  
98 -=== Step 1: Scenario Generation ===
102 +**Purpose:** Ensure correctness and safety through layered human review.
99 99  
100 -**Automated (AKEL)**:
101 -* Generate scenarios for claim
102 -* Define boundaries, assumptions, context
103 -* Identify evaluation methods
104 +**Steps:**
105 +1. Submission.
106 +1. Auto-check (AKEL).
107 +1. Reviewer Validation.
108 +1. Expert Validation (if needed).
109 +1. Moderator Oversight (if flagged).
104 104  
105 -**Manual (Expert/Reviewer)**:
106 -* Create custom scenarios
107 -* Refine AKEL-generated scenarios
108 -* Add domain-specific nuances
111 +== 8. AKEL Workflow ==
109 109  
110 -=== Step 2: Scenario Validation ===
113 +**Stages:**
114 +* Input Understanding
115 +* Scenario Drafting
116 +* Evidence Processing
117 +* Verdict Drafting
118 +* Safety & Integrity
119 +* Human Approval
111 111  
112 -**Quality Checks**:
113 -* Completeness (definitions, boundaries, assumptions clear)
114 -* Relevance to claim
115 -* Evaluability
116 -* No circular logic
121 +== 9. Global Trigger Flow (Cascade) ==
117 117  
118 -**Risk Tier Assignment**:
119 -* Inherits from parent claim
120 -* Can be overridden by expert if scenario increases/decreases risk
123 +**Sources:** Claim/Scenario/Evidence change, Verdict contradiction, Federation update.
121 121  
122 -=== Step 3: Scenario Publication ===
125 +**Flow:** Trigger → Dependency Graph → Re-evaluation → Updated Verdicts
123 123  
124 -**Mode 2 (AI-Generated)**:
125 -* Tier B/C scenarios can publish immediately
126 -* Subject to sampling audits
127 -
128 -**Mode 1 (Draft)**:
129 -* Tier A scenarios default to draft
130 -* Require expert validation for Mode 2 or Mode 3
131 -
132 -----
133 -
134 -== Evidence Evaluation Workflow ==
135 -
136 -=== Step 1: Evidence Search & Retrieval ===
137 -
138 -**AKEL Actions**:
139 -* Search academic databases, reputable media
140 -* **Mandatory contradiction search** (counter-evidence, reservations)
141 -* Extract metadata (author, date, publication, methodology)
142 -* Assess source reliability
143 -
144 -**Quality Requirements**:
145 -* Primary sources preferred
146 -* Diverse perspectives included
147 -* Echo chambers flagged
148 -* Conflicting evidence acknowledged
149 -
150 -=== Step 2: Evidence Summarization ===
151 -
152 -**AKEL Generates**:
153 -* Summary of evidence
154 -* Relevance assessment
155 -* Reliability score
156 -* Limitations and caveats
157 -* Conflicting evidence summary
158 -
159 -**Quality Gate**: Structural integrity, source quality
160 -
161 -=== Step 3: Evidence Review ===
162 -
163 -**Reviewer/Expert Validates**:
164 -* Accuracy of summaries
165 -* Appropriateness of sources
166 -* Completeness of contradiction search
167 -* Reliability assessments
168 -
169 -**Outcomes**:
170 -* **Mode 2**: Evidence summaries published as AI-generated
171 -* **Mode 3**: After human validation
172 -* **Mode 1**: Failed quality checks or pending expert review
173 -
174 -----
175 -
176 -== Verdict Generation Workflow ==
177 -
178 -=== Step 1: Verdict Computation ===
179 -
180 -**AKEL Computes**:
181 -* Verdict across scenarios
182 -* Confidence scores
183 -* Uncertainty quantification
184 -* Key assumptions
185 -* Limitations
186 -
187 -**Inputs**:
188 -* Claim text
189 -* Scenario definitions
190 -* Evidence assessments
191 -* Contradiction search results
192 -
193 -=== Step 2: Verdict Validation ===
194 -
195 -**Quality Gates**:
196 -* All four gates apply (source, contradiction, uncertainty, structure)
197 -* Reasoning chain must be traceable
198 -* Assumptions must be explicit
199 -
200 -**Risk Tier Check**:
201 -* Tier A: Always requires expert validation for Mode 3
202 -* Tier B: Mode 2 allowed, audit sampling
203 -* Tier C: Mode 2 default
204 -
205 -=== Step 3: Verdict Publication ===
206 -
207 -**Mode 2 (AI-Generated Verdict)**:
208 -* Clear labeling with confidence scores
209 -* Uncertainty disclosure
210 -* Links to reasoning trail
211 -* User can request expert review
212 -
213 -**Mode 3 (Expert-Validated Verdict)**:
214 -* Human reviewer/expert stamp
215 -* Additional commentary (optional)
216 -* Highest trust level
217 -
218 -----
219 -
220 -== Audit Workflow ==
221 -
222 -=== Step 1: Audit Sampling Selection ===
223 -
224 -**Stratified Sampling**:
225 -* Risk tier priority (A > B > C)
226 -* Low confidence scores
227 -* High traffic content
228 -* Novel topics
229 -* User flags
230 -
231 -**Sampling Rates** (Recommendations):
232 -* Tier A: 30-50%
233 -* Tier B: 10-20%
234 -* Tier C: 5-10%
235 -
236 -=== Step 2: Audit Execution ===
237 -
238 -**Auditor Actions**:
239 -1. Review sampled AI-generated content
240 -2. Validate quality gates were properly applied
241 -3. Check contradiction search completeness
242 -4. Assess reasoning quality
243 -5. Identify errors or hallucinations
244 -
245 -**Audit Outcome**:
246 -* **Pass**: Content remains in Mode 2, logged as validated
247 -* **Fail**: Content flagged for review, system improvement triggered
248 -
249 -=== Step 3: Feedback Loop ===
250 -
251 -**System Improvements**:
252 -* Failed audits analyzed for patterns
253 -* AKEL parameters adjusted
254 -* Quality gates refined
255 -* Risk tier assignments recalibrated
256 -
257 -**Transparency**:
258 -* Audit statistics published periodically
259 -* Patterns shared with community
260 -* System improvements documented
261 -
262 -----
263 -
264 -== Mode Transition Workflow ==
265 -
266 -=== Mode 1 → Mode 2 ===
267 -
268 -**Requirements**:
269 -* All quality gates pass
270 -* Risk tier B or C (or A with warnings)
271 -* Contradiction search completed
272 -
273 -**Trigger**: Automatic upon quality gate validation
274 -
275 -=== Mode 2 → Mode 3 ===
276 -
277 -**Requirements**:
278 -* Human reviewer/expert validation
279 -* Quality standards confirmed
280 -* For Tier A: Expert approval required
281 -* For Tier B/C: Reviewer approval sufficient
282 -
283 -**Trigger**: Human review completion
284 -
285 -=== Mode 3 → Mode 1 (Demotion) ===
286 -
287 -**Rare - Only if**:
288 -* New evidence contradicts verdict
289 -* Error discovered in reasoning
290 -* Source retraction
291 -
292 -**Process**:
293 -1. Content flagged for re-evaluation
294 -2. Moved to draft (Mode 1)
295 -3. Re-processed through workflow
296 -4. Reason for demotion documented
297 -
298 -----
299 -
300 -== User Actions Across Modes ==
301 -
302 -=== On Mode 1 (Draft) Content ===
303 -
304 -**Contributors**:
305 -* Edit their own drafts
306 -* Submit for review
307 -
308 -**Reviewers/Experts**:
309 -* View and comment
310 -* Request changes
311 -* Approve for Mode 2 or Mode 3
312 -
313 -=== On Mode 2 (AI-Generated) Content ===
314 -
315 -**All Users**:
316 -* Read and use content
317 -* Request human review
318 -* Flag for expert attention
319 -* Provide feedback
320 -
321 -**Reviewers/Experts**:
322 -* Validate for Mode 3 transition
323 -* Edit and refine
324 -* Adjust risk tier if needed
325 -
326 -=== On Mode 3 (Human-Reviewed) Content ===
327 -
328 -**All Users**:
329 -* Read with highest confidence
330 -* Still can flag if new evidence emerges
331 -
332 -**Reviewers/Experts**:
333 -* Update if needed
334 -* Trigger re-evaluation if new evidence
335 -
336 -----
337 -
338 -== Diagram References ==
339 -
340 -=== Claim and Scenario Lifecycle (Overview) ===
341 -
342 -{{include reference="Test.FactHarborV09.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}
343 -
344 -=== Claim and Scenario Workflow ===
345 -
346 -{{include reference="Test.FactHarborV09.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}
347 -
348 -=== Evidence and Verdict Workflow ===
349 -
350 -{{include reference="Test.FactHarborV09.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
351 -
352 -=== Quality and Audit Workflow ===
353 -
354 -{{include reference="Test.FactHarborV09.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}
355 -
356 -
357 -
358 -{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
359 -
360 -----
361 -
362 -== Related Pages ==
363 -
364 -* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
365 -* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
366 -* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
367 -* [[Governance>>FactHarbor.Organisation.Governance]]
368 -
127 +{{include reference="FactHarbor.Specification.Diagrams.Global Trigger Cascade.WebHome"/}}