= Workflows =

This page describes the core workflows for content creation, review, and publication in FactHarbor.

== Overview ==

FactHarbor workflows support three publication modes with risk-based review:

* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust

Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).

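The modes, tiers, and content types above can be pictured as a small data model. A minimal sketch follows (the Python names are illustrative only, not part of the FactHarbor implementation):

{{code language="python"}}
from enum import Enum

class PublicationMode(Enum):
    DRAFT = 1           # Mode 1: internal only
    AI_GENERATED = 2    # Mode 2: public, labeled as AI-generated
    HUMAN_REVIEWED = 3  # Mode 3: public, human-reviewed, highest trust

class RiskTier(Enum):
    A = "high"
    B = "medium"
    C = "low"

class ContentType(Enum):
    CLAIM = "claim"
    SCENARIO = "scenario"
    EVIDENCE = "evidence"
    VERDICT = "verdict"
{{/code}}
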
----

== Claim Submission & Publication Workflow ==

=== Step 1: Claim Submission ===

**Actor**: Contributor or AKEL

**Actions**:

* Submit claim text
* Provide initial sources (optional for human contributors, mandatory for AKEL)
* System assigns initial AuthorType (Human or AI)

**Output**: Claim draft created

=== Step 2: AKEL Processing ===

**Automated Steps**:

1. Claim extraction and normalization
2. Classification (domain, type, evaluability)
3. Risk tier assignment (A/B/C suggested)
4. Initial scenario generation
5. Evidence search
6. **Contradiction search** (mandatory)
7. Quality gate validation

**Output**: Processed claim with risk tier and quality gate results

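The processing order is fixed, and the contradiction search may never be skipped. A sketch of that invariant (the stage and function names are assumptions for illustration, not AKEL's real interfaces):

{{code language="python"}}
# Illustrative only: the AKEL stages in their fixed order. Each entry would
# dispatch to the corresponding subsystem; here we only record execution order
# and enforce that the mandatory contradiction search has run.
PIPELINE = [
    "extract_and_normalize",
    "classify",
    "suggest_risk_tier",
    "generate_initial_scenarios",
    "search_evidence",
    "search_contradictions",   # mandatory
    "validate_quality_gates",
]

def process(claim_text):
    completed = []
    for step in PIPELINE:
        completed.append(step)  # real code would invoke the subsystem here
    if "search_contradictions" not in completed:
        raise RuntimeError("contradiction search is mandatory")
    return {"claim": claim_text, "completed_steps": completed}
{{/code}}
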
=== Step 3: Quality Gate Checkpoint ===

**Gates Evaluated**:

* Source quality
* Contradiction search completion
* Uncertainty quantification
* Structural integrity

**Outcomes**:

* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
* **Any gate fails** → Mode 1 (Draft), flag for human review
* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue

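The checkpoint decision can be summed up in a few lines. A sketch, assuming gate results arrive as booleans (names are illustrative):

{{code language="python"}}
def checkpoint_outcome(gates, tier):
    """gates: dict of the four gate results (source_quality,
    contradiction_search, uncertainty, structural_integrity) -> bool.
    tier: "A", "B", or "C". Returns (publication_mode, notes)."""
    if not all(gates.values()):
        return 1, ["stay in draft", "flag for human review"]
    if tier == "A":
        return 2, ["show warnings", "auto-escalate to expert queue"]
    return 2, []  # Tier B or C: publish as AI-generated

print(checkpoint_outcome(
    {"source_quality": True, "contradiction_search": True,
     "uncertainty": True, "structural_integrity": True},
    "A"))
{{/code}}
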
=== Step 4: Publication (Risk-Tier Dependent) ===

**Tier C (Low Risk)**:

* **Direct to Mode 2**: AI-generated, public, clearly labeled
* User can request human review
* Sampling audit applies

**Tier B (Medium Risk)**:

* **Direct to Mode 2**: AI-generated, public, clearly labeled
* Higher audit sampling rate
* High-engagement content may auto-escalate

**Tier A (High Risk)**:

* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
* **Auto-escalated** to expert review queue
* User warnings displayed
* Highest audit sampling rate

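The three tier policies above differ only in a handful of flags, so they can be captured as configuration. A sketch (field names are assumptions, not an official schema):

{{code language="python"}}
TIER_POLICY = {
    "C": {"initial_mode": 2, "warnings": False, "auto_escalate": False,
          "audit_sampling": "standard"},
    "B": {"initial_mode": 2, "warnings": False, "auto_escalate": "on high engagement",
          "audit_sampling": "higher"},
    "A": {"initial_mode": 2, "warnings": True, "auto_escalate": True,
          "audit_sampling": "highest"},
}
{{/code}}
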
=== Step 5: Human Review (Optional for B/C, Escalated for A) ===

**Triggers**:

* User requests review
* Audit flags issues
* High engagement (Tier B)
* Automatic (Tier A)

**Process**:

1. Reviewer/Expert examines claim
2. Validates quality gates
3. Checks contradiction search results
4. Assesses risk tier appropriateness
5. Decision: Approve, Request Changes, or Reject

**Outcomes**:

* **Approved** → Mode 3 (Human-Reviewed)
* **Changes Requested** → Back to contributor or AKEL for revision
* **Rejected** → Rejected status with reasoning

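A sketch of how the three review decisions map onto content state (illustrative names, not the real review service):

{{code language="python"}}
def apply_review_decision(decision, reasoning=""):
    if decision == "approve":
        return {"mode": 3, "status": "human-reviewed"}
    if decision == "request_changes":
        return {"mode": 1, "status": "returned to contributor or AKEL for revision"}
    if decision == "reject":
        if not reasoning:
            raise ValueError("a rejection must include reasoning")
        return {"status": "rejected", "reasoning": reasoning}
    raise ValueError(f"unknown decision: {decision}")
{{/code}}
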
----

== Scenario Creation Workflow ==

=== Step 1: Scenario Generation ===

**Automated (AKEL)**:

* Generate scenarios for claim
* Define boundaries, assumptions, context
* Identify evaluation methods

**Manual (Expert/Reviewer)**:

* Create custom scenarios
* Refine AKEL-generated scenarios
* Add domain-specific nuances

=== Step 2: Scenario Validation ===

**Quality Checks**:

* Completeness (definitions, boundaries, assumptions clear)
* Relevance to claim
* Evaluability
* No circular logic

**Risk Tier Assignment**:

* Inherits from parent claim
* Can be overridden by expert if scenario increases/decreases risk

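Tier inheritance with an expert override is easy to state precisely. A sketch (illustrative, assuming tiers are passed as the letters A/B/C):

{{code language="python"}}
def scenario_risk_tier(parent_claim_tier, expert_override=None):
    """A scenario inherits its parent claim's tier unless an expert
    overrides it because the scenario increases or decreases risk."""
    valid = {"A", "B", "C"}
    if expert_override is not None:
        if expert_override not in valid:
            raise ValueError("tier must be A, B, or C")
        return expert_override
    return parent_claim_tier

assert scenario_risk_tier("B") == "B"
assert scenario_risk_tier("B", expert_override="A") == "A"
{{/code}}
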
=== Step 3: Scenario Publication ===

**Mode 2 (AI-Generated)**:

* Tier B/C scenarios can publish immediately
* Subject to sampling audits

**Mode 1 (Draft)**:

* Tier A scenarios default to draft
* Require expert validation for Mode 2 or Mode 3

----

== Evidence Evaluation Workflow ==

=== Step 1: Evidence Search & Retrieval ===

**AKEL Actions**:

* Search academic databases, reputable media
* **Mandatory contradiction search** (counter-evidence, reservations)
* Extract metadata (author, date, publication, methodology)
* Assess source reliability

**Quality Requirements**:

* Primary sources preferred
* Diverse perspectives included
* Echo chambers flagged
* Conflicting evidence acknowledged

=== Step 2: Evidence Summarization ===

**AKEL Generates**:

* Summary of evidence
* Relevance assessment
* Reliability score
* Limitations and caveats
* Conflicting evidence summary

**Quality Gate**: Structural integrity, source quality

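One possible shape for the evidence summary record that AKEL produces (the field names are assumptions for illustration, not the actual FactHarbor data model):

{{code language="python"}}
from dataclasses import dataclass, field

@dataclass
class EvidenceSummary:
    summary: str                      # summary of the evidence
    relevance: str                    # relevance assessment, e.g. "high"/"medium"/"low"
    reliability_score: float          # 0.0 - 1.0
    limitations: list = field(default_factory=list)   # limitations and caveats
    conflicting_evidence: str = ""    # summary of counter-evidence found
{{/code}}
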
=== Step 3: Evidence Review ===

**Reviewer/Expert Validates**:

* Accuracy of summaries
* Appropriateness of sources
* Completeness of contradiction search
* Reliability assessments

**Outcomes**:

* **Mode 2**: Evidence summaries published as AI-generated
* **Mode 3**: After human validation
* **Mode 1**: Failed quality checks or pending expert review

----

== Verdict Generation Workflow ==

=== Step 1: Verdict Computation ===

**AKEL Computes**:

* Verdict across scenarios
* Confidence scores
* Uncertainty quantification
* Key assumptions
* Limitations

**Inputs**:

* Claim text
* Scenario definitions
* Evidence assessments
* Contradiction search results

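The inputs and outputs listed above suggest a simple interface for verdict computation. A sketch with placeholder logic (names and shapes are assumptions, not AKEL's real API):

{{code language="python"}}
from dataclasses import dataclass

@dataclass
class Verdict:
    per_scenario: dict     # scenario id -> verdict label
    confidence: dict       # scenario id -> confidence score (0.0 - 1.0)
    uncertainty: str       # uncertainty quantification statement
    key_assumptions: list
    limitations: list

def compute_verdict(claim_text, scenarios, evidence, contradictions):
    """Placeholder: real AKEL logic would weigh evidence and contradictions
    per scenario; only the input/output shape is shown here."""
    return Verdict(per_scenario={}, confidence={}, uncertainty="not computed",
                   key_assumptions=[], limitations=["placeholder implementation"])
{{/code}}
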
=== Step 2: Verdict Validation ===

**Quality Gates**:

* All four gates apply (source, contradiction, uncertainty, structure)
* Reasoning chain must be traceable
* Assumptions must be explicit

**Risk Tier Check**:

* Tier A: Always requires expert validation for Mode 3
* Tier B: Mode 2 allowed, audit sampling
* Tier C: Mode 2 default

=== Step 3: Verdict Publication ===

**Mode 2 (AI-Generated Verdict)**:

* Clear labeling with confidence scores
* Uncertainty disclosure
* Links to reasoning trail
* User can request expert review

**Mode 3 (Expert-Validated Verdict)**:

* Human reviewer/expert stamp
* Additional commentary (optional)
* Highest trust level

----

== Audit Workflow ==

=== Step 1: Audit Sampling Selection ===

**Stratified Sampling**:

* Risk tier priority (A > B > C)
* Low confidence scores
* High traffic content
* Novel topics
* User flags

**Sampling Rates** (Recommendations):

* Tier A: 30-50%
* Tier B: 10-20%
* Tier C: 5-10%

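A minimal sketch of per-tier sampling using the recommended rates (midpoints chosen for illustration; the stratification criteria above, such as low confidence or user flags, are not modeled here):

{{code language="python"}}
import random

# Midpoints of the recommended ranges: Tier A 30-50%, B 10-20%, C 5-10%
SAMPLING_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items, seed=0):
    """items: dicts with at least a 'tier' key ('A', 'B', or 'C')."""
    rng = random.Random(seed)
    return [item for item in items if rng.random() < SAMPLING_RATE[item["tier"]]]

sample = select_for_audit([{"id": i, "tier": "B"} for i in range(100)])
print(len(sample), "of 100 Tier B items selected")
{{/code}}
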
=== Step 2: Audit Execution ===

**Auditor Actions**:

1. Review sampled AI-generated content
2. Validate quality gates were properly applied
3. Check contradiction search completeness
4. Assess reasoning quality
5. Identify errors or hallucinations

**Audit Outcome**:

* **Pass**: Content remains in Mode 2, logged as validated
* **Fail**: Content flagged for review, system improvement triggered

=== Step 3: Feedback Loop ===

**System Improvements**:

* Failed audits analyzed for patterns
* AKEL parameters adjusted
* Quality gates refined
* Risk tier assignments recalibrated

**Transparency**:

* Audit statistics published periodically
* Patterns shared with community
* System improvements documented

----

== Mode Transition Workflow ==

=== Mode 1 → Mode 2 ===

**Requirements**:

* All quality gates pass
* Risk tier B or C (or A with warnings)
* Contradiction search completed

**Trigger**: Automatic upon quality gate validation

=== Mode 2 → Mode 3 ===

**Requirements**:

* Human reviewer/expert validation
* Quality standards confirmed
* For Tier A: Expert approval required
* For Tier B/C: Reviewer approval sufficient

**Trigger**: Human review completion

=== Mode 3 → Mode 1 (Demotion) ===

**Rare - Only if**:

* New evidence contradicts verdict
* Error discovered in reasoning
* Source retraction

**Process**:

1. Content flagged for re-evaluation
2. Moved to draft (Mode 1)
3. Re-processed through workflow
4. Reason for demotion documented

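The transitions described in this section form a small state table. A sketch (covering only the transitions listed above; names are illustrative):

{{code language="python"}}
ALLOWED_TRANSITIONS = {
    (1, 2): "all quality gates pass; Tier B/C, or Tier A with warnings",
    (2, 3): "human reviewer/expert validation (expert required for Tier A)",
    (3, 1): "demotion: new evidence, reasoning error, or source retraction",
}

def can_transition(current_mode, target_mode):
    return (current_mode, target_mode) in ALLOWED_TRANSITIONS

assert can_transition(1, 2)
assert can_transition(3, 1)
{{/code}}
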
----

== User Actions Across Modes ==

=== On Mode 1 (Draft) Content ===

**Contributors**:

* Edit their own drafts
* Submit for review

**Reviewers/Experts**:

* View and comment
* Request changes
* Approve for Mode 2 or Mode 3

=== On Mode 2 (AI-Generated) Content ===

**All Users**:

* Read and use content
* Request human review
* Flag for expert attention
* Provide feedback

**Reviewers/Experts**:

* Validate for Mode 3 transition
* Edit and refine
* Adjust risk tier if needed

=== On Mode 3 (Human-Reviewed) Content ===

**All Users**:

* Read with highest confidence
* Still can flag if new evidence emerges

**Reviewers/Experts**:

* Update if needed
* Trigger re-evaluation if new evidence

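The actions above amount to a role-by-mode permission lookup. A sketch ("reviewer" stands in for Reviewers/Experts; action names are illustrative):

{{code language="python"}}
ACTIONS = {
    (1, "contributor"): {"edit_own_draft", "submit_for_review"},
    (1, "reviewer"):    {"view", "comment", "request_changes", "approve"},
    (2, "user"):        {"read", "request_human_review", "flag", "give_feedback"},
    (2, "reviewer"):    {"validate_for_mode_3", "edit", "adjust_risk_tier"},
    (3, "user"):        {"read", "flag_new_evidence"},
    (3, "reviewer"):    {"update", "trigger_re_evaluation"},
}

def allowed(mode, role, action):
    return action in ACTIONS.get((mode, role), set())

assert allowed(2, "user", "request_human_review")
assert not allowed(1, "user", "read")  # drafts are internal only
{{/code}}
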
----

== Diagram References ==

=== Claim and Scenario Lifecycle (Overview) ===

{{include reference="Archive.FactHarbor V0\.9\.23 Lost Data.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}

=== Claim and Scenario Workflow ===

{{include reference="Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}

=== Evidence and Verdict Workflow ===

{{include reference="Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}

=== Quality and Audit Workflow ===

{{include reference="FactHarbor.Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

=== Manual vs Automated Matrix ===

{{include reference="Archive.FactHarbor V0\.9\.23 Lost Data.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}

----

== Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>Archive.FactHarbor V0\.9\.18 copy.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>Archive.FactHarbor V0\.9\.18 copy.Specification.Automation.WebHome]]
* [[Requirements (Roles)>>Archive.FactHarbor V0\.9\.18 copy.Specification.Requirements.WebHome]]
* [[Governance>>FactHarbor.Organisation.Governance]]