
Version 1.1 by Robert Schaub on 2025/12/19 08:55

= User Needs =

This page defines the user needs that drive FactHarbor's requirements and design decisions.

**Template**: As a <specific user role>, I want to <action/goal>, so that I can <benefit/outcome>

**Purpose**: User needs inform functional requirements (FR) and non-functional requirements (NFR). Each need maps to one or more requirements that fulfill it.

== 1. Core Reading & Discovery ==

=== UN-1: Trust Assessment at a Glance ===
**As** an article reader (any content type),
**I want** to see a trust score and overall verdict summary at a glance,
**so that** I can quickly decide whether the content is worth reading in detail.

**Maps to**: FR7 (Automated Verdicts), NFR3 (Transparency)

=== UN-2: Claim Extraction and Verification ===
**As** an article reader,
**I want** to see the key factual claims extracted from content, with verification verdicts (likelihood ranges plus uncertainty ratings) for each relevant scenario,
**so that** I can distinguish proven facts from speculation and understand context-dependent truth.

**Maps to**: FR1 (Claim Intake), FR4 (Scenario Generation), FR7 (Automated Verdicts)
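UN-2's likelihood ranges and uncertainty ratings imply a per-scenario verdict record attached to each extracted claim. The sketch below is a minimal illustration only: the field names `likelihood_range` and `uncertainty_factors` are taken from the Data Model references on this page, but the exact types, the claim text, and the scenarios are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioVerdict:
    """One verdict per scenario interpretation of a claim (illustrative shape)."""
    scenario: str                           # the interpretation being assessed
    likelihood_range: tuple                 # e.g. (0.7, 0.9) = 70-90% likely true
    uncertainty_factors: list = field(default_factory=list)

@dataclass
class ExtractedClaim:
    """A factual claim pulled from content, with one verdict per scenario."""
    text: str
    verdicts: list = field(default_factory=list)

claim = ExtractedClaim(
    text="The policy reduced emissions by 12%.",
    verdicts=[
        ScenarioVerdict("Relative to the 2019 baseline", (0.7, 0.9), ["single source"]),
        ScenarioVerdict("Relative to projected growth", (0.3, 0.5), ["model-dependent"]),
    ],
)
```

The same claim carries different likelihoods under different interpretations, which is exactly the context-dependent truth UN-2 asks the reader to see.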

=== UN-3: Summary with Verdict Context ===
**As** an article reader,
**I want** to see a concise summary of the article's main claims alongside verdict summaries for each scenario,
**so that** I can quickly understand both what is claimed and how credible those claims are under different interpretations.

**Maps to**: FR7 (Automated Verdicts), FR6 (Scenario Comparison)

=== UN-4: Social Media Fact-Checking ===
**As** a social media user,
**I want** to check claims in posts before sharing,
**so that** I can avoid spreading misinformation.

**Maps to**: FR1 (Claim Intake), FR7 (Automated Verdicts), NFR1 (Performance - fast processing)

== 2. Source Tracing & Credibility ==

=== UN-5: Source Provenance and Track Records ===
**As** an article reader,
**I want** to trace each piece of evidence back to its original source and see that source's historical track record,
**so that** I can assess the reliability of the information chain and learn which sources are consistently trustworthy.

**Maps to**: FR5 (Evidence Linking), Section 4.1 (Source Requirements - track record system)
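A source's track record can be thought of as its historical accuracy over previously checked claims. The function below is a deliberately naive stand-in for whatever scoring Section 4.1 actually specifies (which may weight by recency, claim importance, etc.); the neutral 0.5 prior for unknown sources is an assumption made for the example.

```python
def track_record_score(outcomes):
    """Fraction of a source's past checked claims that verified as accurate.

    outcomes: list of booleans, True = claim verified as accurate.
    Assumption for illustration: sources with no history score a neutral 0.5.
    """
    if not outcomes:
        return 0.5
    return sum(outcomes) / len(outcomes)

# e.g. a source with 8 accurate reports out of 10 checked claims
score = track_record_score([True] * 8 + [False] * 2)  # 0.8
```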

=== UN-6: Publisher Reliability History ===
**As** an article reader,
**I want** to see historical accuracy track records for sources and publishers,
**so that** I can learn which outlets are consistently reliable over time.

**Maps to**: Section 4.1 (Source Requirements), Data Model (Source entity with track_record_score)

== 3. Understanding the Analysis ==

=== UN-7: Evidence Transparency ===
**As** a skeptical reader,
**I want** to see the evidence and reasoning behind each verdict,
**so that** I can judge whether I agree with the assessment and form my own conclusions.

**Maps to**: FR5 (Evidence Linking), NFR3 (Transparency)

=== UN-8: Understanding Disagreement and Consensus ===
**As** an article reader,
**I want** to see which scenarios have strong supporting evidence versus which have conflicting evidence or high uncertainty,
**so that** I can understand where legitimate disagreement exists versus where consensus is clear.

**Maps to**: FR6 (Scenario Comparison), FR7 (Automated Verdicts - uncertainty factors), AKEL Gate 2 (Contradiction Search)

=== UN-9: Methodology Transparency ===
**As** an article reader,
**I want** to understand how likelihood ranges and confidence scores are calculated,
**so that** I can trust the verification process itself.

**Maps to**: NFR3 (Transparency), Architecture (documented algorithms), AKEL (Quality Gates)

== 4. Pattern Recognition & Learning ==

=== UN-10: Manipulation Tactics Detection ===
**As** an article reader,
**I want** to see common manipulation tactics or logical fallacies identified in content,
**so that** I can recognize them elsewhere and become a more critical consumer of information.

**Maps to**: AKEL (Bubble Detection), Section 5 (Automated Risk Scoring)

=== UN-11: Filtered Research ===
**As** a researcher,
**I want** to filter content by verification status, confidence levels, and source quality,
**so that** I can work only with reliable information appropriate for my research needs.

**Maps to**: FR1 (Claim Classification), Section 4.4 (Confidence Scoring), NFR1 (Performance)
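UN-11 amounts to a conjunction of filters over claim records. A minimal sketch, assuming hypothetical field names (`status`, `confidence`, `source_score`) that stand in for whatever the real schema calls them:

```python
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    """Illustrative claim record; field names are assumptions, not the real schema."""
    text: str
    status: str          # e.g. "verified", "disputed", "unchecked"
    confidence: float    # confidence score, 0.0-1.0
    source_score: float  # source track-record score, 0.0-1.0

def filter_claims(claims, min_confidence=0.0, min_source_score=0.0, statuses=None):
    """Return only the claims matching all of the researcher's criteria."""
    return [
        c for c in claims
        if c.confidence >= min_confidence
        and c.source_score >= min_source_score
        and (statuses is None or c.status in statuses)
    ]

claims = [
    ClaimRecord("A", "verified", 0.9, 0.8),
    ClaimRecord("B", "disputed", 0.4, 0.9),
    ClaimRecord("C", "verified", 0.8, 0.3),
]
# Keep only verified claims from reasonably reliable sources
reliable = filter_claims(claims, min_confidence=0.7,
                         min_source_score=0.5, statuses={"verified"})
```

Here only claim "A" survives: "B" fails the status and confidence filters, "C" fails the source-quality filter.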

== 5. Taking Action ==

=== UN-12: Submit Unchecked Claims ===
**As** a reader who finds unchecked claims,
**I want** to submit them for verification,
**so that** I can help expand fact-checking coverage and contribute to the knowledge base.

**Maps to**: FR1 (Claim Intake), Section 1.1 (Reader role)

=== UN-13: Cite FactHarbor Verdicts ===
**As** a content creator,
**I want** to cite FactHarbor verdicts when sharing content,
**so that** I can add credibility to what I publish and help my audience distinguish fact from speculation.

**Maps to**: FR7 (Automated Verdicts), NFR3 (Transparency - exportable data)

== 6. Professional Use ==

=== UN-14: API Access for Integration ===
**As** a journalist/researcher,
**I want** API access to verification data and claim histories,
**so that** I can integrate fact-checking into my professional workflow without manual lookups.

**Maps to**: Architecture (REST API), NFR2 (Scalability), FR11 (Audit Trail)
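The REST API itself is specified on the Architecture page; the snippet below only illustrates the kind of integration UN-14 describes. The base URL, endpoint path, and `since` query parameter are all invented for the example and carry no authority.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- the real API is defined in the Architecture spec
BASE_URL = "https://api.factharbor.example/v1"

def claim_history_url(claim_id, since=None):
    """Build a URL for fetching a claim's verification history (illustrative only)."""
    url = "{}/claims/{}/history".format(BASE_URL, claim_id)
    if since:
        # e.g. only verdict changes after a given date
        url += "?" + urlencode({"since": since})
    return url

url = claim_history_url("c-123", since="2025-01-01")
# https://api.factharbor.example/v1/claims/c-123/history?since=2025-01-01
```

A journalist's tooling would call such an endpoint instead of looking claims up by hand, which is the whole point of the need.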

== 7. Understanding Evolution & Trust Labels ==

=== UN-15: Verdict Evolution Timeline ===
**As** an article reader,
**I want** to see how a claim's verdict has evolved over time, with clear timestamps,
**so that** I can understand whether the current assessment is stable or has recently changed based on new evidence.

**Maps to**: FR8 (Time Evolution), Data Model (Versioned entities), NFR3 (Transparency)

=== UN-16: AI vs. Human Review Status ===
**As** an article reader,
**I want** to know whether a verdict was AI-generated, human-reviewed, or expert-validated,
**so that** I can gauge the appropriate level of trust and understand the review process used.

**Maps to**: AKEL (Publication Modes), Section 5 (Risk Tiers), Data Model (AuthorType field)

== 8. User Need → Requirements Mapping Summary ==

This section provides a consolidated view of how user needs drive system requirements.

=== 8.1 Functional Requirements Coverage ===

|=FR#|=Requirement|=Fulfills User Needs
|FR1|Claim Intake|UN-2, UN-4, UN-12
|FR4|Scenario Generation|UN-2, UN-3
|FR5|Evidence Linking|UN-5, UN-7
|FR6|Scenario Comparison|UN-3, UN-8
|FR7|Automated Verdicts|UN-1, UN-2, UN-3, UN-4, UN-13
|FR8|Time Evolution|UN-15
|FR11|Audit Trail|UN-14, UN-16

=== 8.2 Non-Functional Requirements Coverage ===

|=NFR#|=Requirement|=Fulfills User Needs
|NFR1|Performance|UN-4 (fast fact-checking), UN-11 (responsive filtering)
|NFR2|Scalability|UN-14 (API access at scale)
|NFR3|Transparency|UN-1, UN-7, UN-9, UN-13, UN-15

=== 8.3 AKEL System Coverage ===

|=AKEL Component|=Fulfills User Needs
|Quality Gates|UN-9 (methodology transparency)
|Contradiction Search (Gate 2)|UN-8 (understanding disagreement)
|Bubble Detection|UN-10 (manipulation tactics)
|Publication Modes|UN-16 (AI vs. human review status)
|Risk Tiers|UN-16 (appropriate review level)

=== 8.4 Data Model Coverage ===

|=Entity|=Fulfills User Needs
|Source (with track_record_score)|UN-5, UN-6 (source reliability)
|Scenario|UN-2, UN-3, UN-8 (context-dependent truth)
|Verdict (with likelihood_range, uncertainty_factors)|UN-1, UN-2, UN-3, UN-8 (detailed assessment)
|Versioned entities|UN-15 (evolution timeline)
|AuthorType field|UN-16 (AI vs. human status)

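The entities above can be pictured together in one small sketch. Entity and field names (`track_record_score`, `AuthorType`, versioned verdicts) come from the table; everything else, including the enum values and the example dates, is an assumption for illustration, and the Data Model page remains authoritative.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class AuthorType(Enum):
    """Who produced or validated a verdict (UN-16); values are illustrative."""
    AI_GENERATED = "ai_generated"
    HUMAN_REVIEWED = "human_reviewed"
    EXPERT_VALIDATED = "expert_validated"

@dataclass
class Source:
    """Source entity with its historical reliability score (UN-5, UN-6)."""
    name: str
    track_record_score: float  # 0.0-1.0

@dataclass
class VerdictVersion:
    """One entry in a verdict's evolution timeline (UN-15)."""
    created_at: datetime
    likelihood_range: tuple
    author_type: AuthorType

# A verdict revised after human review -- the timeline UN-15 asks to expose
history = [
    VerdictVersion(datetime(2025, 3, 1), (0.4, 0.7), AuthorType.AI_GENERATED),
    VerdictVersion(datetime(2025, 6, 1), (0.6, 0.8), AuthorType.HUMAN_REVIEWED),
]
latest = max(history, key=lambda v: v.created_at)
```

Keeping every version rather than overwriting the verdict is what lets a reader see both the current assessment and how it got there.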
== 9. User Need Gaps & Future Considerations ==

This section identifies user needs that may emerge as the platform matures:

**Potential Future Needs**:
* **Collaborative annotation**: Users want to discuss verdicts with others
* **Personal tracking**: Users want to track claims they're following
* **Custom alerts**: Users want notifications when tracked claims are updated
* **Export capabilities**: Users want to export claim analyses for their own documentation
* **Comparative analysis**: Users want to compare how different fact-checkers rate the same claim

**When to address**: These needs should be considered when:
1. User feedback explicitly requests them
1. Usage metrics show users attempting these workflows
1. Competitive analysis shows these as differentiators

**Principle**: Start simple (the current user needs); add complexity only when metrics prove it necessary.

== 10. Related Pages ==

* [[Requirements>>FactHarbor.Specification.Requirements.WebHome]] - Parent page with roles, rules, and functional requirements
* [[Architecture>>FactHarbor.Specification.Architecture.WebHome]] - How requirements are implemented
* [[Data Model>>FactHarbor.Specification.Data Model.WebHome]] - Data structures supporting user needs
* [[AKEL (AI Knowledge Extraction Layer)>>FactHarbor.Specification.AI Knowledge Extraction Layer (AKEL).WebHome]] - AI system fulfilling automation needs
* [[Workflows>>FactHarbor.Specification.Workflows.WebHome]] - User interaction workflows