= Ideal Customer Profile =

This page defines FactHarbor's ideal customer segments and partner profiles to guide product development, marketing, and partnership strategy.

== 1. Purpose ==

Understanding who benefits most from FactHarbor helps us:
* **Product Development**: Prioritize features that serve core user needs
* **Marketing**: Communicate value effectively to target audiences
* **Partnerships**: Identify and cultivate strategic relationships
* **Resource Allocation**: Focus limited resources on highest-impact activities

**Philosophy**: FactHarbor serves users who want to **understand**, not just believe. Our ideal customers share a frustration with binary "true/false" verdicts and value transparent reasoning they can inspect.

== 2. Primary User Segments ==

=== 2.1 Journalists & Newsrooms ===

**Profile**:
* Working journalists at news organizations (local to international)
* Fact-checkers and verification specialists
* Editorial teams producing investigative or political content

**Core Needs** (from User Needs documentation):
* **UN-4**: Fast social media fact-checking (≤15 seconds to initial verdict)
* **UN-14**: API integration into professional workflows (see the sketch at the end of this section)
* **UN-5/UN-6**: Source provenance and publisher reliability tracking
* **UN-7**: Evidence transparency for editorial review

**Key Pain Points**:
* Time pressure with breaking news and viral content
* Need to verify claims quickly without sacrificing accuracy
* Difficulty tracing claims to original sources
* Binary fact-check verdicts lack nuance for complex stories

**Value Proposition**:
FactHarbor provides structured, scenario-based analysis that reveals **how** conclusions are reached, saving time while providing the context needed for accurate reporting.

**Success Indicators**:
* Reduced time spent on claim verification
* Ability to cite FactHarbor analyses in published work
* Improved editorial confidence in complex stories

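Since UN-14 calls for API integration, here is a minimal sketch of what a newsroom-side call could look like. The endpoint URL, request fields, and response shape are illustrative assumptions, not a published FactHarbor API.

{{code language="typescript"}}
// Hypothetical newsroom integration sketch (UN-14).
// ASSUMPTION: endpoint URL, request fields, and response shape are
// illustrative; they are not a published FactHarbor API.

interface ClaimSubmission {
  claimText: string;   // the claim to verify
  sourceUrl?: string;  // where the claim was seen (optional)
}

interface AnalysisSummary {
  analysisId: string;
  verdictLabel: string;          // nuanced, scenario-based label
  confidence: [number, number];  // explicit confidence range (UN-1)
  evidenceUrl: string;           // link to inspectable evidence (UN-7)
}

async function checkClaim(submission: ClaimSubmission): Promise<AnalysisSummary> {
  const response = await fetch("https://factharbor.example/api/v1/claims", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(submission),
  });
  if (!response.ok) {
    throw new Error(`Claim submission failed: ${response.status}`);
  }
  return (await response.json()) as AnalysisSummary;
}
{{/code}}
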
=== 2.2 Researchers & Academics ===

**Profile**:
* University researchers (political science, communications, media studies)
* Think tank analysts
* PhD students studying misinformation
* Data scientists working on verification systems

**Core Needs**:
* **UN-7**: Complete evidence transparency
* **UN-9**: Methodology transparency (auditable reasoning)
* **UN-13**: Ability to cite FactHarbor verdicts in academic work
* **UN-15**: Verdict evolution timeline (how assessments change with new evidence)

**Key Pain Points**:
* Existing fact-checks are methodologically opaque
* Need structured data for quantitative analysis
* Difficulty comparing how claims are assessed across sources
* Binary verdicts hide important uncertainty

**Value Proposition**:
FactHarbor provides **transparent, structured methodology** that can be cited, analyzed, and built upon. The Evidence Model approach creates reusable data for academic research.

**Success Indicators**:
* Academic papers citing FactHarbor methodology
* Researchers using FactHarbor data in studies
* Methodology validation by academic institutions

=== 2.3 Educators ===

**Profile**:
* University professors (media literacy, critical thinking, journalism)
* High school teachers (civics, social studies, media studies)
* Librarians and information literacy specialists
* Corporate trainers (media literacy programs)

**Core Needs**:
* **UN-3**: Article summaries with FactHarbor analysis for teaching materials
* **UN-8**: Understanding disagreement and consensus (why experts differ)
* **UN-9**: Methodology transparency for pedagogical purposes
* **UN-7**: Evidence transparency to teach source evaluation

**Key Pain Points**:
* Existing fact-checks don't show the reasoning process needed for teaching
* Hard to teach critical thinking with black-box verdicts
* Need tools that demonstrate **how** to evaluate claims
* Limited resources for curriculum development

**Value Proposition**:
FactHarbor teaches the **process** of evidence evaluation, not just the answer. Students see explicit assumptions, multiple scenarios, and how confidence levels are determined.

**Success Indicators**:
* Educators integrating FactHarbor into curricula
* Student engagement with evidence exploration features
* Educational institution partnerships

=== 2.4 Policy Analysts ===

**Profile**:
* Government policy advisors
* NGO research staff
* Legislative aides
* Regulatory analysts

**Core Needs**:
* **UN-2/UN-3**: Context-dependent analysis (claims true under some conditions, false under others)
* **UN-8**: Understanding why reasonable people disagree
* **UN-1**: Trust assessment with explicit confidence ranges
* **UN-17**: In-article claim highlighting for briefing documents

**Key Pain Points**:
* Policy questions rarely have simple true/false answers
* Need to understand stakeholder perspectives and their evidence
* Difficulty synthesizing information from multiple sources
* Risk of appearing biased when presenting controversial topics

**Value Proposition**:
FactHarbor's **scenario-based analysis** explicitly maps how conclusions depend on assumptions, enabling policy analysts to present balanced, well-sourced briefings.

**Success Indicators**:
* Policy briefs citing FactHarbor analyses
* Repeat usage for complex policy questions
* Feedback on improved briefing quality

=== 2.5 Content Consumers (General Public) ===

**Profile**:
* Social media users seeking to verify viral claims
* Engaged citizens following news and politics
* People making decisions based on contested information
* Anyone who has been frustrated by oversimplified fact-checks

**Core Needs**:
* **UN-1**: Trust assessment at a glance (immediate visual understanding)
* **UN-4**: Fast social media fact-checking
* **UN-12**: Ability to submit unchecked claims
* **UN-17**: In-article claim highlighting when reading content

**Key Pain Points**:
* Don't trust fact-checkers' authority
* Want to understand reasoning, not just accept verdicts
* Time-constrained but want to make informed decisions
* Frustrated by partisan accusations about fact-checkers

**Value Proposition**:
FactHarbor shows **reasoning you can inspect**. Trust comes from transparent methodology, not authority. You can form your own judgment based on visible evidence.

**Success Indicators**:
* User retention (return visits)
* Time spent exploring evidence details
* Claims submitted for verification
* User satisfaction with transparency

== 3. B2B Partner Segments ==

=== 3.1 Media Organizations ===

**Priority**: HIGH (Tier 1)

**Target Partners**:
* Swiss Broadcasting (SRG SSR, SRF, RTS, RSI)
* Major newspapers (Tamedia, NZZ)
* Regional news organizations
* Digital-first news outlets

**Partnership Value**:
* **For Partners**: Automated initial analysis saves journalist time; structured evidence for reader transparency
* **For FactHarbor**: Validation, use cases, credibility, potential funding

**Engagement Model**:
* API integration for newsroom tools
* Embedded analysis widgets (see the sketch below)
* Co-branded fact-checking initiatives
* Pilot programs for election coverage

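As a minimal sketch of what an embedded analysis widget could look like on a partner site: it fetches a published analysis and renders a badge linking back to the full, inspectable analysis. The endpoint and response fields are assumptions, not an existing FactHarbor embed API.

{{code language="typescript"}}
// Hypothetical embed-widget sketch for partner sites.
// ASSUMPTION: the endpoint and response fields are illustrative.

interface EmbeddedAnalysis {
  verdictLabel: string;  // short, nuanced verdict summary
  analysisUrl: string;   // link to the full analysis
}

async function mountFactHarborBadge(analysisId: string, host: HTMLElement): Promise<void> {
  const res = await fetch(`https://factharbor.example/api/v1/analyses/${analysisId}`);
  if (!res.ok) {
    throw new Error(`Failed to load analysis: ${res.status}`);
  }
  const analysis = (await res.json()) as EmbeddedAnalysis;

  // Link back to the full analysis rather than showing a bare verdict,
  // consistent with the transparency philosophy above.
  const link = document.createElement("a");
  link.href = analysis.analysisUrl;
  link.textContent = `FactHarbor: ${analysis.verdictLabel}`;
  host.appendChild(link);
}
{{/code}}
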
=== 3.2 Fact-Checking Organizations ===

**Priority**: HIGH (Tier 1)

**Target Partners**:
* IFCN (International Fact-Checking Network) members
* EFCSN (European Fact-Checking Standards Network) members
* dpa Fact-Checking (DACH region)
* Correctiv (Germany)
* Full Fact (UK)

**Partnership Value**:
* **For Partners**: Technology platform, scalability, methodology alignment
* **For FactHarbor**: Credibility, network access, ecosystem integration

**Engagement Model**:
* Open-source technology sharing
* ClaimReview schema collaboration (example below)
* Joint methodology development
* Cross-referencing and data sharing

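ClaimReview is the schema.org markup fact-checkers publish so that search engines can surface verdicts. Here is a minimal example of what a FactHarbor-published record could look like, expressed as a TypeScript object for illustration: the field names come from schema.org, while the URL and verdict wording are placeholders.

{{code language="typescript"}}
// Minimal ClaimReview record (schema.org/ClaimReview) as a TypeScript
// object. Field names follow schema.org; the URL and verdict wording
// are hypothetical placeholders.
const claimReview = {
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  url: "https://factharbor.example/analyses/12345",
  claimReviewed: "Example claim text as it circulated online.",
  author: { "@type": "Organization", name: "FactHarbor" },
  datePublished: "2025-01-01",
  reviewRating: {
    "@type": "Rating",
    // ClaimReview expects a single rating; a scenario-based verdict
    // can be summarized in alternateName, with numeric bounds kept
    // for downstream tooling.
    ratingValue: 3,
    bestRating: 5,
    worstRating: 1,
    alternateName: "True under some conditions (see scenarios)",
  },
};

console.log(JSON.stringify(claimReview, null, 2));
{{/code}}
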
=== 3.3 Academic Institutions ===

**Priority**: HIGH (Tier 1)

**Target Partners**:
* ETH Zurich / University of Zurich (Swiss, research collaboration)
* Duke Reporters' Lab (ClaimReview, Tech & Check)
* Harvard Shorenstein Center (network access)
* Stanford Internet Observatory (misinformation research)
* Oxford Reuters Institute (journalism research)

**Partnership Value**:
* **For Partners**: Research platform, real-world data, novel methodology to study
* **For FactHarbor**: Academic validation, grant access (Innosuisse), publications

**Engagement Model**:
* Research partnerships
* Student thesis projects
* Co-authored publications
* Conference presentations
* Joint grant applications

=== 3.4 Funding Organizations ===

**Priority**: MEDIUM (Tier 2)

**Target Partners**:
* Knight Foundation (journalism innovation)
* Google News Initiative (fact-checking fund)
* Innosuisse (Swiss research/innovation grants)
* Gebert Rüf Foundation (Swiss innovation)
* Prototype Fund Switzerland

**Partnership Value**:
* **For Partners**: Support innovative, transparent approach to misinformation
* **For FactHarbor**: Operational funding, validation, network access

**Engagement Model**:
* Grant applications
* Progress reporting
* Impact documentation
* Network participation

== 4. Common Customer Characteristics ==

=== 4.1 Unifying Frustrations ===

All ideal customers share frustration with:
* Binary "true/false" verdicts that hide complexity
* Opaque methodology ("trust us" authority model)
* Lack of explicit assumptions and confidence ranges
* Inability to see evidence and reasoning process
* No way to understand why experts disagree

=== 4.2 Unifying Values ===

All ideal customers value:
* **Transparency**: Visible reasoning chains and methodology
* **Nuance**: Context-dependent truth (scenarios)
* **Independence**: Forming own judgment from evidence
* **Integrity**: Non-profit, open-source, no hidden agenda
* **Accessibility**: Understanding without specialized expertise

=== 4.3 Decision Criteria ===

When evaluating fact-checking tools, ideal customers prioritize:
1. **Methodology Transparency**: Can I see how conclusions are reached?
2. **Evidence Quality**: Are sources traceable and credible?
3. **Nuance Handling**: Does it acknowledge complexity?
4. **Speed & Usability**: Can I use it in my workflow?
5. **Trust & Independence**: Is there hidden bias or agenda?

== 5. Customer Journey ==

=== 5.1 Awareness ===

**How they find us**:
* Academic publications citing FactHarbor
* Referrals from fact-checking organizations
* Search engine results (ClaimReview schema visibility)
* Media coverage of misinformation topics
* Social media discussions about fact-checking

=== 5.2 Evaluation ===

**What they assess**:
* Methodology documentation (open and detailed?)
* Sample analyses (quality and transparency?)
* Open-source code (auditable?)
* Non-profit status (trustworthy?)
* User experience (usable?)

=== 5.3 Adoption ===

**How they start**:
* Submit a claim they're curious about
* Explore an existing analysis in depth
* Review methodology documentation
* Test with a known case to validate quality
* Integrate API into existing workflow

=== 5.4 Retention ===

**Why they return**:
* Consistent quality and transparency
* Time savings in verification workflow
* Unique value (scenario analysis not available elsewhere)
* Trust in methodology
* Community participation

== 6. Anti-Personas (Not Our Target) ==

=== 6.1 Confirmation Seekers ===

**Profile**: Users who want verdicts that confirm their existing beliefs

**Why Not Ideal**:
* Will be frustrated by nuanced, scenario-based analysis
* May reject conclusions that don't match expectations
* Not looking for transparent reasoning, but for validation

**How to Handle**:
* Don't compromise methodology to satisfy them
* The transparency may eventually convert some

=== 6.2 Speed-Only Users ===

**Profile**: Users who only want instant answers, with no interest in evidence

**Why Not Ideal**:
* Don't value FactHarbor's core differentiator (transparency)
* Would be better served by simpler binary fact-checkers
* Won't engage with evidence or scenarios

**How to Handle**:
* Provide quick summary views (UN-1: trust at a glance)
* Make deeper exploration available but not required

=== 6.3 Bad-Faith Actors ===

**Profile**: Users seeking to game or manipulate the system

**Why Not Ideal**:
* Waste resources
* Damage system integrity
* Not genuine users

**How to Handle**:
* AKEL detection of manipulation patterns
* Moderation for flagged escalations
* Transparent ban policies

== 7. Metrics and Validation ==

=== 7.1 Segment Metrics ===

Track for each segment (a sketch of one possible event shape follows the list):
* **Acquisition**: How many from each segment?
* **Activation**: Do they complete first analysis?
* **Engagement**: Do they explore evidence?
* **Retention**: Do they return?
* **Referral**: Do they recommend others?

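One lightweight way to make these five metrics measurable is to log a segment-tagged event per user action. A minimal sketch, assuming nothing about FactHarbor's actual analytics stack: the segment names and funnel stages mirror this page, everything else is illustrative.

{{code language="typescript"}}
// Sketch of a segment-tagged funnel event (assumed shape, not an
// existing FactHarbor schema). One event is logged per user action;
// the five funnel stages mirror the list above.
type Segment =
  | "journalist"
  | "researcher"
  | "educator"
  | "policy_analyst"
  | "content_consumer";

type FunnelStage =
  | "acquisition"  // first visit, attributed to a channel
  | "activation"   // completed first analysis
  | "engagement"   // explored evidence details
  | "retention"    // returned after N days
  | "referral";    // shared or recommended

interface FunnelEvent {
  userId: string;
  segment: Segment;
  stage: FunnelStage;
  occurredAt: string; // ISO 8601 timestamp
}

// Example: an educator returning for a second session.
const event: FunnelEvent = {
  userId: "u-001",
  segment: "educator",
  stage: "retention",
  occurredAt: new Date().toISOString(),
};
console.log(event);
{{/code}}
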
=== 7.2 Segment-Specific Success Indicators ===

|=Segment|=Key Success Metric
|Journalists|API calls per newsroom; time saved per verification
|Researchers|Papers citing FactHarbor; data downloads
|Educators|Curricula integrations; student engagement
|Policy Analysts|Briefings citing FactHarbor; repeat usage
|Content Consumers|Retention rate; evidence exploration rate

=== 7.3 Partnership Metrics ===

|=Partner Type|=Success Metric
|Media|Integration count; co-published analyses
|Fact-Checkers|Data sharing volume; methodology alignment
|Academic|Papers published; grants received
|Funders|Grants awarded; renewal rate

== 8. Related Pages ==

* [[User Needs>>FactHarbor.Specification.Requirements.User Needs.WebHome]] - Detailed user need definitions
* [[Requirements>>FactHarbor.Specification.Requirements.WebHome]] - How user needs map to requirements
* [[Partnership Strategy>>FactHarbor.Organisation.Partnership-Strategy]] - Partnership opportunity details
* [[Funding & Partnerships>>FactHarbor.Organisation.Funding-Partnerships]] - Funding sources and contacts
* [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]] - How FactHarbor is structured