Last modified by Robert Schaub on 2025/12/24 20:30
= Workflows =

This page describes the core workflows for content creation, review, and publication in FactHarbor.

== 1. Overview ==

FactHarbor workflows support three publication modes with risk-based review:

* **Mode 1 (Draft)**: Internal only, failed quality gates or pending review
* **Mode 2 (AI-Generated)**: Public with AI-generated label, passed quality gates
* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status, highest trust

Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).


== 2. Claim Submission & Publication Workflow ==

=== 2.1 Step 1: Claim Submission ===

**Actor**: Contributor or AKEL

**Actions**:
* Submit claim text
* Provide initial sources (optional for human contributors, mandatory for AKEL)
* System assigns initial AuthorType (Human or AI)

**Output**: Claim draft created

=== 2.2 Step 2: AKEL Processing ===

**Automated Steps**:
1. Claim extraction and normalization
2. Classification (domain, type, evaluability)
3. Risk tier assignment (A/B/C suggested)
4. Initial scenario generation
5. Evidence search
6. **Contradiction search** (mandatory)
7. Quality gate validation
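Viewed as code, the automated steps above form a fixed pipeline. A minimal sketch in Python, assuming each stage is a function that takes and returns a claim record; the stage names are placeholders for illustration, not FactHarbor's actual API:

```python
# Illustrative AKEL pipeline order; each stage transforms a claim record (dict).
AKEL_PIPELINE = [
    "extract_and_normalize",
    "classify",                  # domain, type, evaluability
    "assign_risk_tier",          # suggested A/B/C
    "generate_scenarios",
    "search_evidence",
    "search_contradictions",     # mandatory
    "validate_quality_gates",
]

def run_pipeline(claim: dict, stages: dict) -> dict:
    """Run the claim through each stage in the fixed AKEL order."""
    for name in AKEL_PIPELINE:
        claim = stages[name](claim)
    return claim
```

The fixed ordering matters: contradiction search must complete before quality gate validation, since one of the gates checks its completion.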

**Output**: Processed claim with risk tier and quality gate results

=== 2.3 Step 3: Quality Gate Checkpoint ===

**Gates Evaluated**:
* Source quality
* Contradiction search completion
* Uncertainty quantification
* Structural integrity

**Outcomes**:
* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
* **Any gate fails** → Mode 1 (Draft), flag for human review
* **Tier A** → Mode 2 with warnings + auto-escalate to expert queue
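The checkpoint logic above can be sketched as a small routing function. This is an illustrative sketch only: the gate names, tier strings, and result fields are assumptions, not FactHarbor's implementation.

```python
from enum import Enum

class Mode(Enum):
    DRAFT = 1           # Mode 1
    AI_GENERATED = 2    # Mode 2
    HUMAN_REVIEWED = 3  # Mode 3

# The four gates evaluated at the checkpoint (names are illustrative).
GATES = ("source_quality", "contradiction_search",
         "uncertainty_quantification", "structural_integrity")

def route_after_gates(tier: str, results: dict) -> dict:
    """Map gate results and risk tier to a publication decision."""
    if not all(results.get(g, False) for g in GATES):
        # Any gate fails -> stay in draft, flag for human review
        return {"mode": Mode.DRAFT, "flag_for_review": True, "warnings": False}
    if tier == "A":
        # Tier A passes gates but publishes with warnings and escalates to experts
        return {"mode": Mode.AI_GENERATED, "flag_for_review": True, "warnings": True}
    # Tier B/C with all gates passing publish directly as Mode 2
    return {"mode": Mode.AI_GENERATED, "flag_for_review": False, "warnings": False}
```

Note that gate failure dominates the tier: even a Tier C claim stays in Mode 1 if a single gate fails.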

=== 2.4 Step 4: Publication (Risk-Tier Dependent) ===

**Tier C (Low Risk)**:
* **Direct to Mode 2**: AI-generated, public, clearly labeled
* User can request human review
* Sampling audit applies

**Tier B (Medium Risk)**:
* **Direct to Mode 2**: AI-generated, public, clearly labeled
* Higher audit sampling rate
* High-engagement content may auto-escalate

**Tier A (High Risk)**:
* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
* **Auto-escalated** to expert review queue
* User warnings displayed
* Highest audit sampling rate

=== 2.5 Step 5: Human Review (Optional for B/C, Escalated for A) ===

**Triggers**:
* User requests review
* Audit flags issues
* High engagement (Tier B)
* Automatic (Tier A)

**Process**:
1. Reviewer/Expert examines claim
2. Validates quality gates
3. Checks contradiction search results
4. Assesses risk tier appropriateness
5. Decision: Approve, Request Changes, or Reject

**Outcomes**:
* **Approved** → Mode 3 (Human-Reviewed)
* **Changes Requested** → Back to contributor or AKEL for revision
* **Rejected** → Rejected status with reasoning


== 3. Scenario Creation Workflow ==

=== 3.1 Step 1: Scenario Generation ===

**Automated (AKEL)**:
* Generate scenarios for claim
* Define boundaries, assumptions, context
* Identify evaluation methods

**Manual (Expert/Reviewer)**:
* Create custom scenarios
* Refine AKEL-generated scenarios
* Add domain-specific nuances

=== 3.2 Step 2: Scenario Validation ===

**Quality Checks**:
* Completeness (definitions, boundaries, assumptions clear)
* Relevance to claim
* Evaluability
* No circular logic

**Risk Tier Assignment**:
* Inherits from parent claim
* Can be overridden by expert if scenario increases/decreases risk

=== 3.3 Step 3: Scenario Publication ===

**Mode 2 (AI-Generated)**:
* Tier B/C scenarios can publish immediately
* Subject to sampling audits

**Mode 1 (Draft)**:
* Tier A scenarios default to draft
* Require expert validation for Mode 2 or Mode 3


== 4. Evidence Evaluation Workflow ==

=== 4.1 Step 1: Evidence Search & Retrieval ===

**AKEL Actions**:
* Search academic databases, reputable media
* **Mandatory contradiction search** (counter-evidence, reservations)
* Extract metadata (author, date, publication, methodology)
* Assess source reliability

**Quality Requirements**:
* Primary sources preferred
* Diverse perspectives included
* Echo chambers flagged
* Conflicting evidence acknowledged

=== 4.2 Step 2: Evidence Summarization ===

**AKEL Generates**:
* Summary of evidence
* Relevance assessment
* Reliability score
* Limitations and caveats
* Conflicting evidence summary

**Quality Gate**: Structural integrity, source quality

=== 4.3 Step 3: Evidence Review ===

**Reviewer/Expert Validates**:
* Accuracy of summaries
* Appropriateness of sources
* Completeness of contradiction search
* Reliability assessments

**Outcomes**:
* **Mode 2**: Evidence summaries published as AI-generated
* **Mode 3**: After human validation
* **Mode 1**: Failed quality checks or pending expert review


== 5. Verdict Generation Workflow ==

=== 5.1 Step 1: Verdict Computation ===

**AKEL Computes**:
* Verdict across scenarios
* Confidence scores
* Uncertainty quantification
* Key assumptions
* Limitations

**Inputs**:
* Claim text
* Scenario definitions
* Evidence assessments
* Contradiction search results

=== 5.2 Step 2: Verdict Validation ===

**Quality Gates**:
* All four gates apply (source, contradiction, uncertainty, structure)
* Reasoning chain must be traceable
* Assumptions must be explicit

**Risk Tier Check**:
* Tier A: Always requires expert validation for Mode 3
* Tier B: Mode 2 allowed, audit sampling
* Tier C: Mode 2 default

=== 5.3 Step 3: Verdict Publication ===

**Mode 2 (AI-Generated Verdict)**:
* Clear labeling with confidence scores
* Uncertainty disclosure
* Links to reasoning trail
* User can request expert review

**Mode 3 (Expert-Validated Verdict)**:
* Human reviewer/expert stamp
* Additional commentary (optional)
* Highest trust level


== 6. Audit Workflow ==

=== 6.1 Step 1: Audit Sampling Selection ===

**Stratified Sampling**:
* Risk tier priority (A > B > C)
* Low confidence scores
* High traffic content
* Novel topics
* User flags

**Sampling Rates** (Recommendations):
* Tier A: 30-50%
* Tier B: 10-20%
* Tier C: 5-10%
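A hedged sketch of how the stratified selection might be implemented. The recommended rate bands above are used at roughly their midpoints; the item fields and the flag/confidence adjustments are illustrative assumptions, not the actual sampling policy:

```python
import random

# Midpoints of the recommended bands: A 30-50%, B 10-20%, C 5-10%.
SAMPLING_RATES = {"A": 0.40, "B": 0.15, "C": 0.075}

def select_for_audit(items, rng=random.random):
    """Pick Mode 2 items for audit, weighting tier, user flags, and low confidence."""
    selected = []
    for item in items:
        # item: {"tier": "A"|"B"|"C", "user_flagged": bool, "confidence": float}
        rate = SAMPLING_RATES[item["tier"]]
        if item.get("user_flagged"):
            rate = 1.0                 # user flags always enter the audit queue
        elif item.get("confidence", 1.0) < 0.5:
            rate = min(1.0, rate * 2)  # low confidence doubles the sampling rate
        if rng() < rate:
            selected.append(item)
    return selected
```

Passing `rng` explicitly keeps the selection testable and reproducible; in production a seeded generator would make audit draws replayable.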

=== 6.2 Step 2: Audit Execution ===

**Auditor Actions**:
1. Review sampled AI-generated content
2. Validate quality gates were properly applied
3. Check contradiction search completeness
4. Assess reasoning quality
5. Identify errors or hallucinations

**Audit Outcome**:
* **Pass**: Content remains in Mode 2, logged as validated
* **Fail**: Content flagged for review, system improvement triggered

=== 6.3 Step 3: Feedback Loop ===

**System Improvements**:
* Failed audits analyzed for patterns
* AKEL parameters adjusted
* Quality gates refined
* Risk tier assignments recalibrated

**Transparency**:
* Audit statistics published periodically
* Patterns shared with community
* System improvements documented


== 7. Mode Transition Workflow ==

=== 7.1 Mode 1 → Mode 2 ===

**Requirements**:
* All quality gates pass
* Risk tier B or C (or A with warnings)
* Contradiction search completed

**Trigger**: Automatic upon quality gate validation

=== 7.2 Mode 2 → Mode 3 ===

**Requirements**:
* Human reviewer/expert validation
* Quality standards confirmed
* For Tier A: Expert approval required
* For Tier B/C: Reviewer approval sufficient

**Trigger**: Human review completion
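The approval rule above (Expert required for Tier A, Reviewer sufficient for B/C) reduces to a small check. The role names and their ranking are assumptions for illustration; an Expert is treated as outranking a Reviewer:

```python
# Hypothetical role ranking: an Expert can approve anything a Reviewer can.
ROLE_RANK = {"reviewer": 1, "expert": 2}

def can_promote_to_mode3(tier: str, approver_role: str) -> bool:
    """Check whether the approver's role suffices for a Mode 2 -> Mode 3 promotion."""
    required = "expert" if tier == "A" else "reviewer"
    return ROLE_RANK.get(approver_role, 0) >= ROLE_RANK[required]
```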

=== 7.3 Mode 3 → Mode 1 (Demotion) ===

**Rare - Only if**:
* New evidence contradicts verdict
* Error discovered in reasoning
* Source retraction

**Process**:
1. Content flagged for re-evaluation
2. Moved to draft (Mode 1)
3. Re-processed through workflow
4. Reason for demotion documented
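The three transitions in this section form a small state machine: promotion 1→2 and 2→3, plus the rare demotion 3→1. A minimal sketch, with illustrative trigger names:

```python
# Allowed mode transitions and the (illustrative) trigger each one requires.
ALLOWED = {
    (1, 2): "quality_gates_pass",     # 7.1: automatic on gate validation
    (2, 3): "human_review_approved",  # 7.2: reviewer/expert approval
    (3, 1): "demotion",               # 7.3: new evidence, reasoning error, retraction
}

def transition(mode: int, trigger: str) -> int:
    """Return the new mode, or raise if the transition is not allowed."""
    for (src, dst), required in ALLOWED.items():
        if src == mode and trigger == required:
            return dst
    raise ValueError(f"invalid transition from Mode {mode} via {trigger!r}")
```

Encoding the graph explicitly makes illegal moves (such as Mode 1 jumping straight to Mode 3) fail loudly instead of silently.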


== 8. User Actions Across Modes ==

=== 8.1 On Mode 1 (Draft) Content ===

**Contributors**:
* Edit their own drafts
* Submit for review

**Reviewers/Experts**:
* View and comment
* Request changes
* Approve for Mode 2 or Mode 3

=== 8.2 On Mode 2 (AI-Generated) Content ===

**All Users**:
* Read and use content
* Request human review
* Flag for expert attention
* Provide feedback

**Reviewers/Experts**:
* Validate for Mode 3 transition
* Edit and refine
* Adjust risk tier if needed

=== 8.3 On Mode 3 (Human-Reviewed) Content ===

**All Users**:
* Read with highest confidence
* Can still flag content if new evidence emerges

**Reviewers/Experts**:
* Update if needed
* Trigger re-evaluation if new evidence emerges


== 9. Diagram References ==

=== 9.1 Claim and Scenario Lifecycle (Overview) ===

{{include reference="FactHarbor.Organisation.Diagrams.Claim and Scenario Lifecycle (Overview).WebHome"/}}

=== 9.2 Claim and Scenario Workflow ===

{{include reference="FactHarbor.Specification.Diagrams.Claim and Scenario Workflow.WebHome"/}}

=== 9.3 Evidence and Verdict Workflow ===

{{include reference="FactHarbor.Specification.Diagrams.Evidence and Verdict Workflow.WebHome"/}}

=== 9.4 Quality and Audit Workflow ===

{{include reference="FactHarbor.Specification.Diagrams.Quality and Audit Workflow.WebHome"/}}

=== 9.5 Manual vs Automated Matrix ===

{{include reference="FactHarbor.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}


== 10. Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[Governance>>FactHarbor.Organisation.Governance]]