Wiki source code of Workflows
Last modified by Robert Schaub on 2025/12/24 20:35
= Workflows =

This page describes the core workflows for content creation, review, and publication in Test.FactHarborV09.

== 1. Overview ==

FactHarbor workflows support three publication modes with risk-based review:

* **Mode 1 (Draft)**: Internal only; content that failed quality gates or is pending review
* **Mode 2 (AI-Generated)**: Public with an AI-generated label; all quality gates passed
* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status; highest trust level

Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).

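The mode and tier vocabulary above can be captured in a small sketch (Python; the `Mode` and `RiskTier` names are illustrative assumptions, not identifiers from the FactHarbor codebase):

```python
from enum import Enum

class Mode(Enum):
    """The three publication modes described above."""
    DRAFT = 1           # internal only
    AI_GENERATED = 2    # public, labeled as AI-generated
    HUMAN_REVIEWED = 3  # public, human-reviewed, highest trust

class RiskTier(Enum):
    """Risk tiers that drive review strictness."""
    A = "high"
    B = "medium"
    C = "low"

# Each content item (Claim, Scenario, Evidence, Verdict) carries both:
item = {"type": "Claim", "mode": Mode.AI_GENERATED, "tier": RiskTier.B}
print(item["mode"].name, item["tier"].value)
```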
== 2. Claim Submission & Publication Workflow ==

=== 2.1 Step 1: Claim Submission ===

**Actor**: Contributor or AKEL

**Actions**:

* Submit claim text
* Provide initial sources (optional for human contributors, mandatory for AKEL)
* System assigns initial AuthorType (Human or AI)

**Output**: Claim draft created

=== 2.2 Step 2: AKEL Processing ===

**Automated Steps**:

1. Claim extraction and normalization
2. Classification (domain, type, evaluability)
3. Risk tier assignment (A/B/C suggested)
4. Initial scenario generation
5. Evidence search
6. **Contradiction search** (mandatory)
7. Quality gate validation

**Output**: Processed claim with risk tier and quality gate results

=== 2.3 Step 3: Quality Gate Checkpoint ===

**Gates Evaluated**:

* Source quality
* Contradiction search completion
* Uncertainty quantification
* Structural integrity

**Outcomes**:

* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
* **Any gate fails** → Mode 1 (Draft), flagged for human review
* **Tier A** → Mode 2 with warnings, auto-escalated to the expert review queue

=== 2.4 Step 4: Publication (Risk-Tier Dependent) ===

**Tier C (Low Risk)**:

* **Direct to Mode 2**: AI-generated, public, clearly labeled
* Users can request human review
* Sampling audits apply

**Tier B (Medium Risk)**:

* **Direct to Mode 2**: AI-generated, public, clearly labeled
* Higher audit sampling rate
* High-engagement content may auto-escalate

**Tier A (High Risk)**:

* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
* **Auto-escalated** to the expert review queue
* User warnings displayed
* Highest audit sampling rate

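Taken together, Steps 3 and 4 form one routing decision. A minimal sketch, assuming boolean gate results and a tier letter (the function and gate names are hypothetical, not the actual implementation):

```python
# The four quality gates from Step 3 (names are illustrative).
GATES = ("source_quality", "contradiction_search",
         "uncertainty_quantification", "structural_integrity")

def route(gate_results: dict, tier: str) -> str:
    """Map quality-gate results plus risk tier to a publication outcome."""
    if not all(gate_results.get(gate) for gate in GATES):
        return "Mode 1 (Draft), flagged for human review"
    if tier == "A":
        return "Mode 2 with warnings, auto-escalated to expert queue"
    return "Mode 2 (AI-Generated)"  # Tier B or C publish directly

all_pass = {gate: True for gate in GATES}
print(route(all_pass, "C"))
print(route({**all_pass, "uncertainty_quantification": False}, "B"))
```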
=== 2.5 Step 5: Human Review (Optional for B/C, Escalated for A) ===

**Triggers**:

* User requests review
* Audit flags issues
* High engagement (Tier B)
* Automatic (Tier A)

**Process**:

1. Reviewer/expert examines the claim
2. Validates quality gates
3. Checks contradiction search results
4. Assesses risk tier appropriateness
5. Decision: Approve, Request Changes, or Reject

**Outcomes**:

* **Approved** → Mode 3 (Human-Reviewed)
* **Changes Requested** → Back to the contributor or AKEL for revision
* **Rejected** → Rejected status with documented reasoning

== 3. Scenario Creation Workflow ==

=== 3.1 Step 1: Scenario Generation ===

**Automated (AKEL)**:

* Generate scenarios for the claim
* Define boundaries, assumptions, and context
* Identify evaluation methods

**Manual (Expert/Reviewer)**:

* Create custom scenarios
* Refine AKEL-generated scenarios
* Add domain-specific nuances

=== 3.2 Step 2: Scenario Validation ===

**Quality Checks**:

* Completeness (definitions, boundaries, and assumptions are clear)
* Relevance to the claim
* Evaluability
* No circular logic

**Risk Tier Assignment**:

* Inherits from the parent claim
* Can be overridden by an expert if the scenario increases or decreases risk

=== 3.3 Step 3: Scenario Publication ===

**Mode 2 (AI-Generated)**:

* Tier B/C scenarios can be published immediately
* Subject to sampling audits

**Mode 1 (Draft)**:

* Tier A scenarios default to draft
* Require expert validation before moving to Mode 2 or Mode 3

== 4. Evidence Evaluation Workflow ==

=== 4.1 Step 1: Evidence Search & Retrieval ===

**AKEL Actions**:

* Search academic databases and reputable media
* **Mandatory contradiction search** (counter-evidence, reservations)
* Extract metadata (author, date, publication, methodology)
* Assess source reliability

**Quality Requirements**:

* Primary sources preferred
* Diverse perspectives included
* Echo chambers flagged
* Conflicting evidence acknowledged

=== 4.2 Step 2: Evidence Summarization ===

**AKEL Generates**:

* Summary of the evidence
* Relevance assessment
* Reliability score
* Limitations and caveats
* Summary of conflicting evidence

**Quality Gates**: Structural integrity, source quality

=== 4.3 Step 3: Evidence Review ===

**Reviewer/Expert Validates**:

* Accuracy of summaries
* Appropriateness of sources
* Completeness of the contradiction search
* Reliability assessments

**Outcomes**:

* **Mode 2**: Evidence summaries published as AI-generated
* **Mode 3**: After human validation
* **Mode 1**: Failed quality checks or pending expert review

== 5. Verdict Generation Workflow ==

=== 5.1 Step 1: Verdict Computation ===

**AKEL Computes**:

* Verdict across scenarios
* Confidence scores
* Uncertainty quantification
* Key assumptions
* Limitations

**Inputs**:

* Claim text
* Scenario definitions
* Evidence assessments
* Contradiction search results

=== 5.2 Step 2: Verdict Validation ===

**Quality Gates**:

* All four gates apply (source quality, contradiction search, uncertainty quantification, structural integrity)
* Reasoning chain must be traceable
* Assumptions must be explicit

**Risk Tier Check**:

* Tier A: Always requires expert validation for Mode 3
* Tier B: Mode 2 allowed, subject to audit sampling
* Tier C: Mode 2 by default

=== 5.3 Step 3: Verdict Publication ===

**Mode 2 (AI-Generated Verdict)**:

* Clear labeling with confidence scores
* Uncertainty disclosure
* Links to the reasoning trail
* Users can request expert review

**Mode 3 (Expert-Validated Verdict)**:

* Human reviewer/expert stamp
* Additional commentary (optional)
* Highest trust level

== 6. Audit Workflow ==

=== 6.1 Step 1: Audit Sampling Selection ===

**Stratified Sampling Criteria**:

* Risk tier priority (A > B > C)
* Low confidence scores
* High-traffic content
* Novel topics
* User flags

**Sampling Rates** (recommended):

* Tier A: 30-50%
* Tier B: 10-20%
* Tier C: 5-10%

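Per-tier selection with the recommended rates could look like this sketch (Python; rates use the midpoints of the ranges above, and all names are assumptions — the real selection also weighs confidence, traffic, novelty, and user flags):

```python
import random

# Midpoints of the recommended per-tier sampling ranges above.
SAMPLE_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items, seed=42):
    """Draw each item with its tier's probability, so higher-risk
    tiers are audited proportionally more often."""
    rng = random.Random(seed)  # fixed seed keeps an audit run reproducible
    return [item for item in items if rng.random() < SAMPLE_RATE[item["tier"]]]

pool = [{"id": i, "tier": tier} for i, tier in enumerate("ABC" * 200)]
audited = select_for_audit(pool)
print(f"audited {len(audited)} of {len(pool)} items")
```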
=== 6.2 Step 2: Audit Execution ===

**Auditor Actions**:

1. Review sampled AI-generated content
2. Validate that quality gates were properly applied
3. Check contradiction search completeness
4. Assess reasoning quality
5. Identify errors or hallucinations

**Audit Outcomes**:

* **Pass**: Content remains in Mode 2, logged as validated
* **Fail**: Content flagged for review, system improvement triggered

=== 6.3 Step 3: Feedback Loop ===

**System Improvements**:

* Failed audits analyzed for patterns
* AKEL parameters adjusted
* Quality gates refined
* Risk tier assignments recalibrated

**Transparency**:

* Audit statistics published periodically
* Patterns shared with the community
* System improvements documented

== 7. Mode Transition Workflow ==

=== 7.1 Mode 1 → Mode 2 ===

**Requirements**:

* All quality gates pass
* Risk tier B or C (or A with warnings)
* Contradiction search completed

**Trigger**: Automatic upon quality gate validation

=== 7.2 Mode 2 → Mode 3 ===

**Requirements**:

* Human reviewer/expert validation
* Quality standards confirmed
* Tier A: Expert approval required
* Tier B/C: Reviewer approval sufficient

**Trigger**: Completion of human review

=== 7.3 Mode 3 → Mode 1 (Demotion) ===

**Rare; occurs only if**:

* New evidence contradicts the verdict
* An error is discovered in the reasoning
* A source is retracted

**Process**:

1. Content flagged for re-evaluation
2. Moved to draft (Mode 1)
3. Re-processed through the workflow
4. Reason for demotion documented

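The transitions above define a small state machine, sketched here (mode numbers and triggers come from this section; the function name is hypothetical, and the direct draft-to-Mode-3 approval mentioned in Section 8 is omitted for brevity):

```python
# Transitions defined in Section 7 (demotion returns content to Mode 1).
ALLOWED_TRANSITIONS = {
    (1, 2): "all quality gates pass (automatic)",
    (2, 3): "human reviewer/expert validation",
    (3, 1): "demotion: new evidence, reasoning error, or source retraction",
}

def transition(current: int, target: int) -> str:
    """Return the trigger for a legal mode change, or raise ValueError."""
    trigger = ALLOWED_TRANSITIONS.get((current, target))
    if trigger is None:
        raise ValueError(f"no direct transition from Mode {current} to Mode {target}")
    return trigger

print(transition(1, 2))
```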
== 8. User Actions Across Modes ==

=== 8.1 On Mode 1 (Draft) Content ===

**Contributors**:

* Edit their own drafts
* Submit for review

**Reviewers/Experts**:

* View and comment
* Request changes
* Approve for Mode 2 or Mode 3

=== 8.2 On Mode 2 (AI-Generated) Content ===

**All Users**:

* Read and use content
* Request human review
* Flag for expert attention
* Provide feedback

**Reviewers/Experts**:

* Validate for Mode 3 transition
* Edit and refine
* Adjust risk tier if needed

=== 8.3 On Mode 3 (Human-Reviewed) Content ===

**All Users**:

* Read with highest confidence
* Can still flag content if new evidence emerges

**Reviewers/Experts**:

* Update if needed
* Trigger re-evaluation if new evidence emerges

== 9. Diagram References ==

=== 9.1 Claim and Scenario Lifecycle (Overview) ===

{{include reference="Archive.FactHarbor V0\.9\.23 Lost Data.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}

=== 9.2 Claim and Scenario Workflow ===

{{include reference="Test.FactHarborV09.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}

=== 9.3 Evidence and Verdict Workflow ===

{{include reference="Test.FactHarborV09.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}

=== 9.4 Quality and Audit Workflow ===

{{include reference="Test.FactHarborV09.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

=== 9.5 Manual vs. Automated Matrix ===

{{include reference="Test.FactHarborV09.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}

== 10. Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>Test.FactHarborV09.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Test.FactHarborV09.Specification.Automation.WebHome]]
* [[Requirements (Roles)>>Test.FactHarborV09.Specification.Requirements.WebHome]]
* [[Governance>>Test.FactHarborV09.Organisation.Governance]]