Changes for page Workflows

Last modified by Robert Schaub on 2025/12/24 20:34

From version 5.1
edited by Robert Schaub
on 2025/12/12 21:50
Change comment: Rollback to version 3.1
To version 7.2
edited by Robert Schaub
on 2025/12/16 20:28
Change comment: Renamed back-links.

Summary

Details

Page properties
Content
... ... @@ -1,127 +1,410 @@
1 1  = Workflows =
2 2  
3 -This chapter defines the core workflows used across the FactHarbor system.
3 +This page describes the core workflows for content creation, review, and publication in FactHarbor.
4 4  
5 -Each workflow describes:
6 -* Purpose
7 -* Participants
8 -* Steps
9 -* Automation vs. manual work
5 +== Overview ==
10 10  
11 -== 1. Claim Workflow ==
7 +FactHarbor workflows support three publication modes with risk-based review:
12 12  
13 -**Purpose:** Transform raw text or input material into a normalized, classified, deduplicated, and versioned claim.
9 +* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
10 +* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
11 +* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust
14 14  
15 -**Participants:**
16 -* Contributor
17 -* AKEL
18 -* Reviewer
13 +Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).
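The three publication modes and risk tiers above can be captured as constants. A minimal Python sketch (names are illustrative, not part of the FactHarbor codebase):

```python
from enum import IntEnum

class Mode(IntEnum):
    """Publication modes as described in the overview."""
    DRAFT = 1           # internal only: failed gates or pending review
    AI_GENERATED = 2    # public, clearly labeled as AI-generated
    HUMAN_REVIEWED = 3  # public, human-reviewed, highest trust

# Risk tiers ordered from highest to lowest risk
RISK_TIERS = ("A", "B", "C")
```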
19 19  
20 -**Steps:**
21 -1. **Ingestion**: User submits text/URL; AKEL extracts claims.
22 -1. **Normalization**: Standardize wording, reduce ambiguity.
23 -1. **Classification**: Domain, Evaluability, Safety (AKEL draft → Human confirm).
24 -1. **Duplicate Detection**: Check embeddings for existing claims.
25 -1. **Version Creation**: Store new ClaimVersion.
26 -1. **Cluster Assignment**: Assign to Claim Cluster.
27 -1. **Scenario Linking**: Connect to existing or draft new scenarios.
28 -1. **Publication**: Make visible.
15 +----
29 29  
30 -**Flow:** Ingest → Normalize → Classify → Deduplicate → Cluster → Version → Publish
17 +== Claim Submission & Publication Workflow ==
31 31  
32 -== 2. Scenario Workflow ==
19 +=== Step 1: Claim Submission ===
33 33  
34 -**Purpose:** Define the specific analytic contexts needed to evaluate each claim.
21 +**Actor**: Contributor or AKEL
35 35  
36 -**Steps:**
37 -1. **Scenario Proposal**: Drafted by contributor or AKEL.
38 -1. **Required Fields**: Definitions, Assumptions, ContextBoundary, EvaluationMethod, SafetyClass.
39 -1. **Safety Interception**: AKEL flags non-falsifiable or unsafe content.
40 -1. **Conflict Check**: Merge similar scenarios, flag contradictions.
41 -1. **Reviewer Validation**: Ensure clarity and validity.
42 -1. **Expert Approval**: Mandatory for high-risk domains.
43 -1. **Version Storage**: Save ScenarioVersion.
23 +**Actions**:
44 44  
45 -**Flow:** Draft → Validate → Safety Check → Review → Expert → Version → Activate
25 +* Submit claim text
26 +* Provide initial sources (optional for human contributors, mandatory for AKEL)
27 +* System assigns initial AuthorType (Human or AI)
46 46  
47 -== 3. Evidence Workflow ==
29 +**Output**: Claim draft created
48 48  
49 -**Purpose:** Structure, classify, validate, version, and link evidence to scenarios.
31 +=== Step 2: AKEL Processing ===
50 50  
51 -**Steps:**
52 -1. **Submission**: File, URL, or text.
53 -1. **Metadata Extraction**: Type, Category, Provenance, ReliabilityHints.
54 -1. **Relevance Check**: Verify applicability to scenario.
55 -1. **Reliability Assessment**: Score reliability (Reviewer + Expert).
56 -1. **Link Creation**: Create ScenarioEvidenceLink with relevance score.
57 -1. **Versioning**: Update EvidenceVersion.
33 +**Automated Steps**:
58 58  
59 -**Flow:** Submit → Extract → Relevance → Reliability → Link → Version
35 +1. Claim extraction and normalization
36 +2. Classification (domain, type, evaluability)
37 +3. Risk tier assignment (A/B/C suggested)
38 +4. Initial scenario generation
39 +5. Evidence search
40 +6. **Contradiction search** (mandatory)
41 +7. Quality gate validation
60 60  
61 -== 4. Verdict Workflow ==
43 +**Output**: Processed claim with risk tier and quality gate results
62 62  
63 -**Purpose:** Generate likelihood estimates **per scenario** based on evidence.
45 +=== Step 3: Quality Gate Checkpoint ===
64 64  
65 -**Steps:**
66 -1. **Aggregation**: Collect linked evidence for a specific scenario.
67 -1. **Draft Verdict**: AKEL proposes likelihood and uncertainty for that scenario.
68 -1. **Reasoning**: AKEL drafts explanation chain.
69 -1. **Validation**: Reviewer checks logic and hallucinations.
70 -1. **Expert Review**: Required for sensitive topics.
71 -1. **Storage**: Save VerdictVersion.
47 +**Gates Evaluated**:
72 72  
73 -**Flow:** Aggregate → Draft → Reasoning → Review → Expert → Version
49 +* Source quality
50 +* Contradiction search completion
51 +* Uncertainty quantification
52 +* Structural integrity
74 74  
75 -== 5. Re-evaluation Workflow ==
54 +**Outcomes**:
76 76  
77 -**Purpose:** Keep verdicts current when inputs change.
56 +* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
57 +* **Any gate fails** → Mode 1 (Draft), flag for human review
58 +* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue
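The checkpoint outcomes above amount to a small decision function over the four gates and the risk tier. A sketch under assumed names (the gate keys and flag strings are illustrative):

```python
def publication_decision(gates: dict, tier: str) -> dict:
    """Map quality-gate results and a risk tier to a publication mode.

    gates: pass/fail per gate, e.g. {"source_quality": True,
           "contradiction_search": True, "uncertainty": True,
           "structure": True}
    tier:  risk tier "A", "B", or "C"
    """
    if not all(gates.values()):
        # Any gate fails -> Mode 1 (Draft), flagged for human review
        return {"mode": 1, "flags": ["human_review"]}
    if tier == "A":
        # Tier A: Mode 2 with warnings, auto-escalated to the expert queue
        return {"mode": 2, "flags": ["warnings", "expert_queue"]}
    # Tier B/C with all gates passing: straight to Mode 2
    return {"mode": 2, "flags": []}
```

For example, a Tier C claim passing all gates publishes directly as Mode 2, while a single failed gate sends any claim back to draft.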
78 78  
79 -**Steps:**
80 -1. **Trigger**: Evidence update, Scenario change, or Contradiction.
81 -1. **Impact Analysis**: Identify affected nodes.
82 -1. **Re-calculation**: AKEL proposes new likelihoods.
83 -1. **Validation**: Human review.
84 -1. **Storage**: New version.
60 +=== Step 4: Publication (Risk-Tier Dependent) ===
85 85  
86 -**Flow:** Trigger → Analyze → Recompute → Review → Version
62 +**Tier C (Low Risk)**:
87 87  
88 -== 6. Federation Synchronization Workflow ==
64 +* **Direct to Mode 2**: AI-generated, public, clearly labeled
65 +* User can request human review
66 +* Sampling audit applies
89 89  
90 -**Purpose:** Exchange structured data between nodes.
68 +**Tier B (Medium Risk)**:
91 91  
92 -**Steps:**
93 -1. Detect Version Changes.
94 -1. Build Signed Bundle (Merkle tree).
95 -1. Push/Pull to Peers.
96 -1. Validate Signatures & Lineage.
97 -1. Resolve Conflicts (Merge/Fork).
98 -1. Trigger Re-evaluation.
70 +* **Direct to Mode 2**: AI-generated, public, clearly labeled
71 +* Higher audit sampling rate
72 +* High-engagement content may auto-escalate
99 99  
100 -== 7. User Role & Review Workflow ==
74 +**Tier A (High Risk)**:
101 101  
102 -**Purpose:** Ensure correctness and safety.
76 +* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
77 +* **Auto-escalated** to expert review queue
78 +* User warnings displayed
79 +* Highest audit sampling rate
103 103  
104 -**Steps:**
105 -1. Submission.
106 -1. Auto-check (AKEL).
107 -1. Reviewer Validation.
108 -1. Expert Validation (if needed).
109 -1. Moderator Oversight (if flagged).
81 +=== Step 5: Human Review (Optional for B/C, Escalated for A) ===
110 110  
111 -== 8. AKEL Workflow ==
83 +**Triggers**:
112 112  
113 -**Stages:**
114 -* Input Understanding
115 -* Scenario Drafting
116 -* Evidence Processing
117 -* Verdict Drafting
118 -* Safety & Integrity
119 -* Human Approval
85 +* User requests review
86 +* Audit flags issues
87 +* High engagement (Tier B)
88 +* Automatic (Tier A)
120 120  
121 -== 9. Global Trigger Flow (Cascade) ==
90 +**Process**:
122 122  
123 -**Sources:** Claim/Scenario/Evidence change, Verdict contradiction, Federation update.
92 +1. Reviewer/Expert examines claim
93 +2. Validates quality gates
94 +3. Checks contradiction search results
95 +4. Assesses risk tier appropriateness
96 +5. Decision: Approve, Request Changes, or Reject
124 124  
125 -**Flow:** Trigger → Dependency Graph → Re-evaluation → Updated Verdicts
98 +**Outcomes**:
126 126  
127 -{{include reference="FactHarbor.Specification.Diagrams.Global Trigger Cascade.WebHome"/}}
100 +* **Approved** → Mode 3 (Human-Reviewed)
101 +* **Changes Requested** → Back to contributor or AKEL for revision
102 +* **Rejected** → Rejected status with reasoning
103 +
104 +----
105 +
106 +== Scenario Creation Workflow ==
107 +
108 +=== Step 1: Scenario Generation ===
109 +
110 +**Automated (AKEL)**:
111 +
112 +* Generate scenarios for claim
113 +* Define boundaries, assumptions, context
114 +* Identify evaluation methods
115 +
116 +**Manual (Expert/Reviewer)**:
117 +
118 +* Create custom scenarios
119 +* Refine AKEL-generated scenarios
120 +* Add domain-specific nuances
121 +
122 +=== Step 2: Scenario Validation ===
123 +
124 +**Quality Checks**:
125 +
126 +* Completeness (definitions, boundaries, assumptions clear)
127 +* Relevance to claim
128 +* Evaluability
129 +* No circular logic
130 +
131 +**Risk Tier Assignment**:
132 +
133 +* Inherits from parent claim
134 +* Can be overridden by expert if scenario increases/decreases risk
135 +
136 +=== Step 3: Scenario Publication ===
137 +
138 +**Mode 2 (AI-Generated)**:
139 +
140 +* Tier B/C scenarios can publish immediately
141 +* Subject to sampling audits
142 +
143 +**Mode 1 (Draft)**:
144 +
145 +* Tier A scenarios default to draft
146 +* Require expert validation for Mode 2 or Mode 3
147 +
148 +----
149 +
150 +== Evidence Evaluation Workflow ==
151 +
152 +=== Step 1: Evidence Search & Retrieval ===
153 +
154 +**AKEL Actions**:
155 +
156 +* Search academic databases, reputable media
157 +* **Mandatory contradiction search** (counter-evidence, reservations)
158 +* Extract metadata (author, date, publication, methodology)
159 +* Assess source reliability
160 +
161 +**Quality Requirements**:
162 +
163 +* Primary sources preferred
164 +* Diverse perspectives included
165 +* Echo chambers flagged
166 +* Conflicting evidence acknowledged
167 +
168 +=== Step 2: Evidence Summarization ===
169 +
170 +**AKEL Generates**:
171 +
172 +* Summary of evidence
173 +* Relevance assessment
174 +* Reliability score
175 +* Limitations and caveats
176 +* Conflicting evidence summary
177 +
178 +**Quality Gate**: Structural integrity, source quality
179 +
180 +=== Step 3: Evidence Review ===
181 +
182 +**Reviewer/Expert Validates**:
183 +
184 +* Accuracy of summaries
185 +* Appropriateness of sources
186 +* Completeness of contradiction search
187 +* Reliability assessments
188 +
189 +**Outcomes**:
190 +
191 +* **Mode 2**: Evidence summaries published as AI-generated
192 +* **Mode 3**: After human validation
193 +* **Mode 1**: Failed quality checks or pending expert review
194 +
195 +----
196 +
197 +== Verdict Generation Workflow ==
198 +
199 +=== Step 1: Verdict Computation ===
200 +
201 +**AKEL Computes**:
202 +
203 +* Verdict across scenarios
204 +* Confidence scores
205 +* Uncertainty quantification
206 +* Key assumptions
207 +* Limitations
208 +
209 +**Inputs**:
210 +
211 +* Claim text
212 +* Scenario definitions
213 +* Evidence assessments
214 +* Contradiction search results
215 +
216 +=== Step 2: Verdict Validation ===
217 +
218 +**Quality Gates**:
219 +
220 +* All four gates apply (source, contradiction, uncertainty, structure)
221 +* Reasoning chain must be traceable
222 +* Assumptions must be explicit
223 +
224 +**Risk Tier Check**:
225 +
226 +* Tier A: Always requires expert validation for Mode 3
227 +* Tier B: Mode 2 allowed, audit sampling
228 +* Tier C: Mode 2 default
229 +
230 +=== Step 3: Verdict Publication ===
231 +
232 +**Mode 2 (AI-Generated Verdict)**:
233 +
234 +* Clear labeling with confidence scores
235 +* Uncertainty disclosure
236 +* Links to reasoning trail
237 +* User can request expert review
238 +
239 +**Mode 3 (Expert-Validated Verdict)**:
240 +
241 +* Human reviewer/expert stamp
242 +* Additional commentary (optional)
243 +* Highest trust level
244 +
245 +----
246 +
247 +== Audit Workflow ==
248 +
249 +=== Step 1: Audit Sampling Selection ===
250 +
251 +**Stratified Sampling**:
252 +
253 +* Risk tier priority (A > B > C)
254 +* Low confidence scores
255 +* High traffic content
256 +* Novel topics
257 +* User flags
258 +
259 +**Sampling Rates** (recommended):
260 +
261 +* Tier A: 30-50%
262 +* Tier B: 10-20%
263 +* Tier C: 5-10%
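The stratified selection above can be sketched as a per-item sampling probability: start at the lower bound of the tier's recommended range and move toward the upper bound for priority signals. The weighting of the signals is an assumption for illustration; only the tier ranges come from the table above.

```python
import random

# Recommended audit sampling ranges per risk tier (from the table above)
SAMPLING_RANGES = {"A": (0.30, 0.50), "B": (0.10, 0.20), "C": (0.05, 0.10)}

def audit_probability(tier: str, low_confidence: bool = False,
                      user_flagged: bool = False) -> float:
    """Illustrative: boost toward the range's upper bound on priority signals."""
    lo, hi = SAMPLING_RANGES[tier]
    p = lo
    if low_confidence:
        p += (hi - lo) / 2   # assumed weighting, halfway up the range
    if user_flagged:
        p = hi               # user flags push to the maximum rate
    return min(p, hi)

def select_for_audit(tier: str, **signals) -> bool:
    """Bernoulli draw against the computed sampling probability."""
    return random.random() < audit_probability(tier, **signals)
```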
264 +
265 +=== Step 2: Audit Execution ===
266 +
267 +**Auditor Actions**:
268 +
269 +1. Review sampled AI-generated content
270 +2. Validate quality gates were properly applied
271 +3. Check contradiction search completeness
272 +4. Assess reasoning quality
273 +5. Identify errors or hallucinations
274 +
275 +**Audit Outcome**:
276 +
277 +* **Pass**: Content remains in Mode 2, logged as validated
278 +* **Fail**: Content flagged for review, system improvement triggered
279 +
280 +=== Step 3: Feedback Loop ===
281 +
282 +**System Improvements**:
283 +
284 +* Failed audits analyzed for patterns
285 +* AKEL parameters adjusted
286 +* Quality gates refined
287 +* Risk tier assignments recalibrated
288 +
289 +**Transparency**:
290 +
291 +* Audit statistics published periodically
292 +* Patterns shared with community
293 +* System improvements documented
294 +
295 +----
296 +
297 +== Mode Transition Workflow ==
298 +
299 +=== Mode 1 → Mode 2 ===
300 +
301 +**Requirements**:
302 +
303 +* All quality gates pass
304 +* Risk tier B or C (or A with warnings)
305 +* Contradiction search completed
306 +
307 +**Trigger**: Automatic upon quality gate validation
308 +
309 +=== Mode 2 → Mode 3 ===
310 +
311 +**Requirements**:
312 +
313 +* Human reviewer/expert validation
314 +* Quality standards confirmed
315 +* For Tier A: Expert approval required
316 +* For Tier B/C: Reviewer approval sufficient
317 +
318 +**Trigger**: Human review completion
319 +
320 +=== Mode 3 → Mode 1 (Demotion) ===
321 +
322 +**Rare**, only if:
323 +
324 +* New evidence contradicts verdict
325 +* Error discovered in reasoning
326 +* Source retraction
327 +
328 +**Process**:
329 +
330 +1. Content flagged for re-evaluation
331 +2. Moved to draft (Mode 1)
332 +3. Re-processed through workflow
333 +4. Reason for demotion documented
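The three transitions in this section form a small state machine: only 1→2, 2→3, and the rare 3→1 demotion are defined, and each has its own trigger. A sketch (guard descriptions paraphrase the text above; function names are illustrative):

```python
# Defined mode transitions and their triggers
ALLOWED = {
    (1, 2): "all quality gates pass; Tier B/C, or Tier A with warnings",
    (2, 3): "human validation: expert for Tier A, reviewer for Tier B/C",
    (3, 1): "demotion: contradicting evidence, reasoning error, or retraction",
}

def required_approver(tier: str) -> str:
    # Mode 2 -> 3: Tier A needs an expert; a reviewer suffices for B/C
    return "expert" if tier == "A" else "reviewer"

def transition(current: int, target: int, reason: str) -> dict:
    """Apply a mode transition, rejecting anything not in ALLOWED.

    The reason is always recorded, which covers the requirement that
    demotions (3 -> 1) document why the content was sent back to draft.
    """
    if (current, target) not in ALLOWED:
        raise ValueError(f"mode {current} -> {target} is not a defined transition")
    return {"mode": target, "reason": reason, "condition": ALLOWED[(current, target)]}
```

Note there is no direct 1→3 path: draft content must pass the quality gates into Mode 2 before human validation can promote it.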
334 +
335 +----
336 +
337 +== User Actions Across Modes ==
338 +
339 +=== On Mode 1 (Draft) Content ===
340 +
341 +**Contributors**:
342 +
343 +* Edit their own drafts
344 +* Submit for review
345 +
346 +**Reviewers/Experts**:
347 +
348 +* View and comment
349 +* Request changes
350 +* Approve for Mode 2 or Mode 3
351 +
352 +=== On Mode 2 (AI-Generated) Content ===
353 +
354 +**All Users**:
355 +
356 +* Read and use content
357 +* Request human review
358 +* Flag for expert attention
359 +* Provide feedback
360 +
361 +**Reviewers/Experts**:
362 +
363 +* Validate for Mode 3 transition
364 +* Edit and refine
365 +* Adjust risk tier if needed
366 +
367 +=== On Mode 3 (Human-Reviewed) Content ===
368 +
369 +**All Users**:
370 +
371 +* Read with highest confidence
372 +* Still can flag if new evidence emerges
373 +
374 +**Reviewers/Experts**:
375 +
376 +* Update if needed
377 +* Trigger re-evaluation if new evidence
378 +
379 +----
380 +
381 +== Diagram References ==
382 +
383 +=== Claim and Scenario Lifecycle (Overview) ===
384 +
385 +{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}
386 +
387 +=== Claim and Scenario Workflow ===
388 +
389 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}
390 +
391 +=== Evidence and Verdict Workflow ===
392 +
393 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}
394 +
395 +=== Quality and Audit Workflow ===
396 +
397 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}
398 +
399 +
400 +
401 +{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}
402 +
403 +----
404 +
405 +== Related Pages ==
406 +
407 +* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
408 +* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
409 +* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
410 +* [[Governance>>FactHarbor.Organisation.Governance]]