Wiki source code of Workflows
Version 6.1 by Robert Schaub on 2025/12/14 18:59
= Workflows =

This page describes the core workflows for content creation, review, and publication in FactHarbor.

== Overview ==

FactHarbor workflows support three publication modes with risk-based review:

* **Mode 1 (Draft)**: Internal only; failed quality gates or pending review
* **Mode 2 (AI-Generated)**: Public with an AI-generated label; passed quality gates
* **Mode 3 (Human-Reviewed)**: Public with human-reviewed status; highest trust level

Workflows vary by **Risk Tier** (A/B/C) and **Content Type** (Claim, Scenario, Evidence, Verdict).
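
As a rough illustration of this vocabulary, the modes, tiers, and content types could be captured in a small shared data model. This is a sketch only; the type and member names (for example PublicationMode) are assumptions, not part of the FactHarbor schema.

{{code language="python"}}
# Illustrative sketch only: names are assumptions, not FactHarbor's actual data model.
from enum import Enum

class PublicationMode(Enum):
    DRAFT = 1           # Mode 1: internal only
    AI_GENERATED = 2    # Mode 2: public, clearly labeled as AI-generated
    HUMAN_REVIEWED = 3  # Mode 3: public, human-reviewed, highest trust

class RiskTier(Enum):
    A = "high"
    B = "medium"
    C = "low"

class ContentType(Enum):
    CLAIM = "claim"
    SCENARIO = "scenario"
    EVIDENCE = "evidence"
    VERDICT = "verdict"
{{/code}}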

----

== Claim Submission & Publication Workflow ==

=== Step 1: Claim Submission ===

**Actor**: Contributor or AKEL

**Actions**:
* Submit claim text
* Provide initial sources (optional for human contributors, mandatory for AKEL)
* System assigns initial AuthorType (Human or AI)

**Output**: Claim draft created

=== Step 2: AKEL Processing ===

**Automated Steps**:
1. Claim extraction and normalization
2. Classification (domain, type, evaluability)
3. Risk tier assignment (A/B/C suggested)
4. Initial scenario generation
5. Evidence search
6. **Contradiction search** (mandatory)
7. Quality gate validation

**Output**: Processed claim with risk tier and quality gate results
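
A minimal sketch of how these seven steps might be orchestrated is shown below. The function names and stub bodies are assumptions for illustration only; AKEL's actual pipeline is specified on the AKEL page.

{{code language="python"}}
# Illustrative orchestration of the automated steps above; bodies are stubs, not AKEL's logic.
def extract_and_normalize(text: str) -> str:
    return " ".join(text.split())  # stub: whitespace normalization only

def process_claim(text: str) -> dict:
    claim = {"text": extract_and_normalize(text)}            # 1. extraction and normalization
    claim["classification"] = {"domain": "general",          # 2. classification (stub values)
                               "evaluable": True}
    claim["risk_tier"] = "C"                                  # 3. suggested tier (stub; A/B/C)
    claim["scenarios"] = []                                   # 4. initial scenario generation (stub)
    claim["evidence"] = []                                    # 5. evidence search (stub)
    claim["contradictions"] = []                              # 6. mandatory contradiction search (stub)
    claim["gate_results"] = {gate: True for gate in           # 7. quality gate validation (stub)
        ("source_quality", "contradiction_search",
         "uncertainty_quantification", "structural_integrity")}
    return claim
{{/code}}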

=== Step 3: Quality Gate Checkpoint ===

**Gates Evaluated**:
* Source quality
* Contradiction search completion
* Uncertainty quantification
* Structural integrity

**Outcomes**:
* **All gates pass** → Proceed to Mode 2 publication (if Tier B or C)
* **Any gate fails** → Mode 1 (Draft), flag for human review
* **Tier A** → Mode 2 with warnings and auto-escalation to the expert queue
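
A sketch of this routing logic, assuming the gate results arrive as a simple mapping (the function and field names are illustrative, not the production implementation):

{{code language="python"}}
# Illustrative sketch of the checkpoint outcomes above; not the production decision logic.
def route_after_gates(gate_results: dict, risk_tier: str) -> dict:
    if not all(gate_results.values()):
        # Any failed gate keeps the content in Mode 1 and flags it for human review.
        return {"mode": 1, "flag_for_review": True, "warnings": False}
    if risk_tier == "A":
        # Tier A publishes as Mode 2 with warnings and auto-escalates to the expert queue.
        return {"mode": 2, "escalate_to_experts": True, "warnings": True}
    # Tiers B and C proceed directly to Mode 2 publication.
    return {"mode": 2, "escalate_to_experts": False, "warnings": False}
{{/code}}

For example, a Tier B claim whose four gates all pass would be routed straight to Mode 2 without escalation.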

=== Step 4: Publication (Risk-Tier Dependent) ===

**Tier C (Low Risk)**:
* **Direct to Mode 2**: AI-generated, public, clearly labeled
* User can request human review
* Sampling audit applies

**Tier B (Medium Risk)**:
* **Direct to Mode 2**: AI-generated, public, clearly labeled
* Higher audit sampling rate
* High-engagement content may auto-escalate

**Tier A (High Risk)**:
* **Mode 2 with warnings**: AI-generated, public, prominent disclaimers
* **Auto-escalated** to expert review queue
* User warnings displayed
* Highest audit sampling rate

=== Step 5: Human Review (Optional for B/C, Escalated for A) ===

**Triggers**:
* User requests review
* Audit flags issues
* High engagement (Tier B)
* Automatic (Tier A)

**Process**:
1. Reviewer/Expert examines claim
2. Validates quality gates
3. Checks contradiction search results
4. Assesses risk tier appropriateness
5. Decision: Approve, Request Changes, or Reject

**Outcomes**:
* **Approved** → Mode 3 (Human-Reviewed)
* **Changes Requested** → Returned to the contributor or AKEL for revision
* **Rejected** → Rejected status with reasoning
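
The three outcomes above amount to a small status transition, sketched here with assumed decision values and status names:

{{code language="python"}}
# Illustrative sketch of the Step 5 outcomes; decision strings and statuses are assumptions.
def apply_review_decision(decision: str) -> dict:
    if decision == "approve":
        return {"status": "human_reviewed", "mode": 3}       # Approved → Mode 3
    if decision == "request_changes":
        return {"status": "revision_requested"}              # returned to contributor or AKEL
    if decision == "reject":
        return {"status": "rejected", "reasoning_required": True}
    raise ValueError(f"unknown review decision: {decision}")
{{/code}}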

----

== Scenario Creation Workflow ==

=== Step 1: Scenario Generation ===

**Automated (AKEL)**:
* Generate scenarios for claim
* Define boundaries, assumptions, context
* Identify evaluation methods

**Manual (Expert/Reviewer)**:
* Create custom scenarios
* Refine AKEL-generated scenarios
* Add domain-specific nuances
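
A sketch of what a scenario record covering these fields might look like (field names and types are assumptions, not the FactHarbor schema):

{{code language="python"}}
# Illustrative scenario record; all field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    claim_id: str
    description: str
    boundaries: str = ""                                   # scope within which the claim is evaluated
    assumptions: list[str] = field(default_factory=list)
    context: str = ""
    evaluation_methods: list[str] = field(default_factory=list)
    risk_tier: str = "C"                                    # inherited from the parent claim by default
    author_type: str = "AI"                                 # "AI" (AKEL-generated) or "Human"
{{/code}}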

=== Step 2: Scenario Validation ===

**Quality Checks**:
* Completeness (definitions, boundaries, assumptions clear)
* Relevance to claim
* Evaluability
* No circular logic

**Risk Tier Assignment**:
* Inherits from parent claim
* Can be overridden by an expert if the scenario increases or decreases risk

=== Step 3: Scenario Publication ===

**Mode 2 (AI-Generated)**:
* Tier B/C scenarios can be published immediately
* Subject to sampling audits

**Mode 1 (Draft)**:
* Tier A scenarios default to draft
* Require expert validation for Mode 2 or Mode 3

----

== Evidence Evaluation Workflow ==

=== Step 1: Evidence Search & Retrieval ===

**AKEL Actions**:
* Search academic databases and reputable media
* **Mandatory contradiction search** (counter-evidence, reservations)
* Extract metadata (author, date, publication, methodology)
* Assess source reliability

**Quality Requirements**:
* Primary sources preferred
* Diverse perspectives included
* Echo chambers flagged
* Conflicting evidence acknowledged

=== Step 2: Evidence Summarization ===

**AKEL Generates**:
* Summary of evidence
* Relevance assessment
* Reliability score
* Limitations and caveats
* Conflicting evidence summary

**Quality Gates**: Structural integrity and source quality
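
A minimal sketch of an evidence summary record with the outputs listed above (field names are illustrative assumptions):

{{code language="python"}}
# Illustrative evidence summary record; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class EvidenceSummary:
    claim_id: str
    source: str                                    # publication or database entry
    author: str
    date: str
    methodology: str
    summary: str                                   # summary of the evidence
    relevance: str                                 # relevance assessment
    reliability_score: float                       # e.g. 0.0 (unreliable) to 1.0 (highly reliable)
    limitations: list[str] = field(default_factory=list)    # limitations and caveats
    conflicting_evidence: str = ""                 # output of the mandatory contradiction search
{{/code}}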

=== Step 3: Evidence Review ===

**Reviewer/Expert Validates**:
* Accuracy of summaries
* Appropriateness of sources
* Completeness of contradiction search
* Reliability assessments

**Outcomes**:
* **Mode 2**: Evidence summaries published as AI-generated
* **Mode 3**: After human validation
* **Mode 1**: Failed quality checks or pending expert review

----

== Verdict Generation Workflow ==

=== Step 1: Verdict Computation ===

**AKEL Computes**:
* Verdict across scenarios
* Confidence scores
* Uncertainty quantification
* Key assumptions
* Limitations

**Inputs**:
* Claim text
* Scenario definitions
* Evidence assessments
* Contradiction search results
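
A sketch of a verdict record combining the computed outputs and their inputs (names are assumptions, not the FactHarbor schema):

{{code language="python"}}
# Illustrative verdict record; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim_id: str
    per_scenario: dict[str, str] = field(default_factory=dict)    # scenario id → verdict label
    confidence: dict[str, float] = field(default_factory=dict)    # scenario id → confidence score
    uncertainty: str = ""                                          # quantified uncertainty statement
    key_assumptions: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    reasoning_trail: list[str] = field(default_factory=list)       # traceable reasoning chain
{{/code}}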

=== Step 2: Verdict Validation ===

**Quality Gates**:
* All four gates apply (source, contradiction, uncertainty, structure)
* Reasoning chain must be traceable
* Assumptions must be explicit

**Risk Tier Check**:
* Tier A: Always requires expert validation for Mode 3
* Tier B: Mode 2 allowed, audit sampling
* Tier C: Mode 2 default

=== Step 3: Verdict Publication ===

**Mode 2 (AI-Generated Verdict)**:
* Clear labeling with confidence scores
* Uncertainty disclosure
* Links to reasoning trail
* User can request expert review

**Mode 3 (Expert-Validated Verdict)**:
* Human reviewer/expert stamp
* Additional commentary (optional)
* Highest trust level

----

== Audit Workflow ==

=== Step 1: Audit Sampling Selection ===

**Stratified Sampling**:
* Risk tier priority (A > B > C)
* Low confidence scores
* High-traffic content
* Novel topics
* User flags

**Sampling Rates** (Recommendations):
* Tier A: 30-50%
* Tier B: 10-20%
* Tier C: 5-10%
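
The recommended rates could drive a simple stratified sampler along the lines of the sketch below. The base rates are midpoints of the ranges above; the doubling of the rate for priority signals is purely an assumption for illustration.

{{code language="python"}}
# Illustrative stratified sampler; base rates are midpoints of the recommended ranges above.
import random

BASE_SAMPLING_RATE = {"A": 0.40, "B": 0.15, "C": 0.075}   # Tier A 30-50%, B 10-20%, C 5-10%

def select_for_audit(item: dict) -> bool:
    rate = BASE_SAMPLING_RATE[item["risk_tier"]]
    # Priority signals (assumed weighting): low confidence, user flags, high traffic, novel topics.
    if item.get("confidence", 1.0) < 0.5 or item.get("user_flagged"):
        rate = min(1.0, rate * 2)
    if item.get("high_traffic") or item.get("novel_topic"):
        rate = min(1.0, rate * 2)
    return random.random() < rate
{{/code}}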

=== Step 2: Audit Execution ===

**Auditor Actions**:
1. Review sampled AI-generated content
2. Validate quality gates were properly applied
3. Check contradiction search completeness
4. Assess reasoning quality
5. Identify errors or hallucinations

**Audit Outcome**:
* **Pass**: Content remains in Mode 2, logged as validated
* **Fail**: Content flagged for review, system improvement triggered

=== Step 3: Feedback Loop ===

**System Improvements**:
* Failed audits analyzed for patterns
* AKEL parameters adjusted
* Quality gates refined
* Risk tier assignments recalibrated

**Transparency**:
* Audit statistics published periodically
* Patterns shared with the community
* System improvements documented

----

== Mode Transition Workflow ==

=== Mode 1 → Mode 2 ===

**Requirements**:
* All quality gates pass
* Risk tier B or C (or A with warnings)
* Contradiction search completed

**Trigger**: Automatic upon quality gate validation

=== Mode 2 → Mode 3 ===

**Requirements**:
* Human reviewer/expert validation
* Quality standards confirmed
* For Tier A: Expert approval required
* For Tier B/C: Reviewer approval sufficient

**Trigger**: Human review completion

=== Mode 3 → Mode 1 (Demotion) ===

**Rare; only if**:
* New evidence contradicts verdict
* Error discovered in reasoning
* Source retraction

**Process**:
1. Content flagged for re-evaluation
2. Moved to draft (Mode 1)
3. Re-processed through workflow
4. Reason for demotion documented
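
The permitted transitions and their triggers can be summarized as a small rule table; the sketch below encodes them, with demotions required to carry a documented reason (constant and function names are assumptions):

{{code language="python"}}
# Illustrative transition rules; the documented reasons mirror the sections above.
ALLOWED_TRANSITIONS = {
    (1, 2): "all quality gates pass; contradiction search completed",
    (2, 3): "human reviewer/expert validation",
    (3, 1): "demotion: contradicting evidence, reasoning error, or source retraction",
}

def validate_transition(from_mode: int, to_mode: int, reason: str = "") -> bool:
    if (from_mode, to_mode) not in ALLOWED_TRANSITIONS:
        return False
    if (from_mode, to_mode) == (3, 1) and not reason:
        return False   # demotions must document the reason
    return True
{{/code}}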

----

== User Actions Across Modes ==

=== On Mode 1 (Draft) Content ===

**Contributors**:
* Edit their own drafts
* Submit for review

**Reviewers/Experts**:
* View and comment
* Request changes
* Approve for Mode 2 or Mode 3

=== On Mode 2 (AI-Generated) Content ===

**All Users**:
* Read and use content
* Request human review
* Flag for expert attention
* Provide feedback

**Reviewers/Experts**:
* Validate for Mode 3 transition
* Edit and refine
* Adjust risk tier if needed

=== On Mode 3 (Human-Reviewed) Content ===

**All Users**:
* Read with highest confidence
* Can still flag content if new evidence emerges

**Reviewers/Experts**:
* Update content if needed
* Trigger re-evaluation if new evidence emerges

----

== Diagram References ==

{{include reference="FactHarbor.Organisation.Diagrams.Claim-Scenario-Lifecycle"/}}

{{include reference="FactHarbor.Specification.Diagrams.Manual vs Automated matrix.WebHome"/}}

----

== Related Pages ==

* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]]
* [[Automation>>FactHarbor.Specification.Automation.WebHome]]
* [[Requirements (Roles)>>FactHarbor.Specification.Requirements.WebHome]]
* [[Governance>>FactHarbor.Organisation.Governance]]