Wiki source code of Ideal Customer Profile (ICP)
Last modified by Robert Schaub on 2026/02/08 08:32
= Ideal Customer Profile =

This page defines FactHarbor's ideal customer segments and partner profiles to guide product development, marketing, and partnership strategy.

== 1. Purpose ==

Understanding who benefits most from FactHarbor helps us:

* **Product Development**: Prioritize features that serve core user needs
* **Marketing**: Communicate value effectively to target audiences
* **Partnerships**: Identify and cultivate strategic relationships
* **Resource Allocation**: Focus limited resources on highest-impact activities

**Philosophy**: FactHarbor serves users who want to **understand**, not just believe. Our ideal customers share a frustration with binary "true/false" verdicts and value transparent reasoning they can inspect.

== 2. Primary User Segments ==

=== 2.1 Journalists & Newsrooms ===

**Profile**:

* Working journalists at news organizations (local to international)
* Fact-checkers and verification specialists
* Editorial teams producing investigative or political content

**Core Needs** (from User Needs documentation):

* **UN-4**: Fast social media fact-checking (≤15 seconds to initial verdict)
* **UN-14**: API integration into professional workflows
* **UN-5/UN-6**: Source provenance and publisher reliability tracking
* **UN-7**: Evidence transparency for editorial review

**Key Pain Points**:

* Time pressure with breaking news and viral content
* Need to verify claims quickly without sacrificing accuracy
* Difficulty tracing claims to original sources
* Binary fact-check verdicts lack nuance for complex stories

**Value Proposition**:
FactHarbor provides structured, scenario-based analysis that reveals **how** conclusions are reached, saving time while providing the context needed for accurate reporting.

**Success Indicators**:

* Reduced time spent on claim verification
* Ability to cite FactHarbor analyses in published work
* Improved editorial confidence in complex stories

=== 2.2 Researchers & Academics ===

**Profile**:

* University researchers (political science, communications, media studies)
* Think tank analysts
* PhD students studying misinformation
* Data scientists working on verification systems

**Core Needs**:

* **UN-7**: Complete evidence transparency
* **UN-9**: Methodology transparency (auditable reasoning)
* **UN-13**: Ability to cite FactHarbor verdicts in academic work
* **UN-15**: Verdict evolution timeline (how assessments change with new evidence)

**Key Pain Points**:

* Existing fact-checks are methodologically opaque
* Need structured data for quantitative analysis
* Difficulty comparing how claims are assessed across sources
* Binary verdicts hide important uncertainty

**Value Proposition**:
FactHarbor provides **transparent, structured methodology** that can be cited, analyzed, and built upon. The Evidence Model approach creates reusable data for academic research.

**Success Indicators**:

* Academic papers citing FactHarbor methodology
* Researchers using FactHarbor data in studies
* Methodology validation by academic institutions

=== 2.3 Educators ===

**Profile**:

* University professors (media literacy, critical thinking, journalism)
* High school teachers (civics, social studies, media studies)
* Librarians and information literacy specialists
* Corporate trainers (media literacy programs)

**Core Needs**:

* **UN-3**: Article summaries with FactHarbor analysis for teaching materials
* **UN-8**: Understanding disagreement and consensus (why experts differ)
* **UN-9**: Methodology transparency for pedagogical purposes
* **UN-7**: Evidence transparency to teach source evaluation

**Key Pain Points**:

* Fact-checks don't show the reasoning process needed for teaching
* Hard to teach critical thinking with black-box verdicts
* Need tools that demonstrate **how** to evaluate claims
* Limited resources for curriculum development

**Value Proposition**:
FactHarbor teaches the **process** of evidence evaluation, not just the answer. Students see explicit assumptions, multiple scenarios, and how confidence levels are determined.

**Success Indicators**:

* Educators integrating FactHarbor into curricula
* Student engagement with evidence exploration features
* Educational institution partnerships

=== 2.4 Policy Analysts ===

**Profile**:

* Government policy advisors
* NGO research staff
* Legislative aides
* Regulatory analysts

**Core Needs**:

* **UN-2/UN-3**: Context-dependent analysis (claims true under some conditions, false under others)
* **UN-8**: Understanding why reasonable people disagree
* **UN-1**: Trust assessment with explicit confidence ranges
* **UN-17**: In-article claim highlighting for briefing documents

**Key Pain Points**:

* Policy questions rarely have simple true/false answers
* Need to understand stakeholder perspectives and their evidence
* Difficulty synthesizing information from multiple sources
* Risk of appearing biased when presenting controversial topics

**Value Proposition**:
FactHarbor's **scenario-based analysis** explicitly maps how conclusions depend on assumptions, enabling policy analysts to present balanced, well-sourced briefings.

**Success Indicators**:

* Policy briefs citing FactHarbor analyses
* Repeat usage for complex policy questions
* Feedback on improved briefing quality

=== 2.5 Content Consumers (General Public) ===

**Profile**:

* Social media users seeking to verify viral claims
* Engaged citizens following news and politics
* People making decisions based on contested information
* Anyone who has been frustrated by oversimplified fact-checks

**Core Needs**:

* **UN-1**: Trust assessment at a glance (immediate visual understanding)
* **UN-4**: Fast social media fact-checking
* **UN-12**: Ability to submit unchecked claims
* **UN-17**: In-article claim highlighting when reading content

**Key Pain Points**:

* Don't trust fact-checkers' authority
* Want to understand reasoning, not just accept verdicts
* Time-constrained but want to make informed decisions
* Frustrated by partisan accusations about fact-checkers

**Value Proposition**:
FactHarbor shows **reasoning you can inspect**. Trust comes from transparent methodology, not authority. You can form your own judgment based on visible evidence.

**Success Indicators**:

* User retention (return visits)
* Time spent exploring evidence details
* Claims submitted for verification
* User satisfaction with transparency

== 3. B2B Partner Segments ==

=== 3.1 Media Organizations ===

**Priority**: HIGH (Tier 1)

**Target Partners**:

* Swiss Broadcasting (SRG SSR, SRF, RTS, RSI)
* Major newspapers (Tamedia, NZZ)
* Regional news organizations
* Digital-first news outlets

**Partnership Value**:

* **For Partners**: Automated initial analysis saves journalist time; structured evidence for reader transparency
* **For FactHarbor**: Validation, use cases, credibility, potential funding

**Engagement Model**:

* API integration for newsroom tools
* Embedded analysis widgets
* Co-branded fact-checking initiatives
* Pilot programs for election coverage
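
The API integration above (UN-14), combined with the UN-4 speed target from section 2.1, can be sketched as a minimal newsroom client loop: submit a claim, then poll until an initial verdict arrives or the 15-second budget runs out. This is an illustrative sketch only; the endpoint paths, field names, and the injected `fetch` transport are hypothetical, not a published FactHarbor API.

```python
import time

INITIAL_VERDICT_BUDGET_S = 15  # UN-4: initial verdict within 15 seconds

def poll_initial_verdict(fetch, claim_text, budget_s=INITIAL_VERDICT_BUDGET_S,
                         interval_s=1.0, clock=time.monotonic, sleep=time.sleep):
    """Submit a claim and poll for an initial verdict within the time budget.

    `fetch(path, payload)` is an injected transport (an HTTP client in
    production, a stub in tests) returning a dict. Returns the initial
    verdict dict, or None if the budget is exhausted first.
    """
    job = fetch("/v1/claims", {"text": claim_text})      # hypothetical endpoint
    deadline = clock() + budget_s
    while clock() < deadline:
        status = fetch(f"/v1/claims/{job['id']}", None)  # hypothetical endpoint
        if status.get("initial_verdict") is not None:
            return status["initial_verdict"]
        sleep(interval_s)
    return None
```

Injecting the transport keeps the time budget logic testable without a network; a real newsroom plugin would pass an HTTP client as `fetch`.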

=== 3.2 Fact-Checking Organizations ===

**Priority**: HIGH (Tier 1)

**Target Partners**:

* IFCN (International Fact-Checking Network) members
* EFCSN (European Fact-Checking Standards Network) members
* dpa Fact-Checking (DACH region)
* Correctiv (Germany)
* Full Fact (UK)

**Partnership Value**:

* **For Partners**: Technology platform, scalability, methodology alignment
* **For FactHarbor**: Credibility, network access, ecosystem integration

**Engagement Model**:

* Open-source technology sharing
* ClaimReview schema collaboration
* Joint methodology development
* Cross-referencing and data sharing
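
ClaimReview, referenced above, is the schema.org vocabulary that fact-checkers publish as JSON-LD so search engines can surface verdicts alongside results. A minimal sketch of how an analysis might be expressed in it; the URL, rating scale, and verdict label here are illustrative placeholders, not FactHarbor's actual markup:

```python
import json

def claim_review_jsonld(claim, verdict_label, rating_value, url):
    """Build a minimal schema.org ClaimReview record as a JSON-LD string."""
    record = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "url": url,                       # page where the analysis is published
        "claimReviewed": claim,           # the claim text being assessed
        "author": {"@type": "Organization", "name": "FactHarbor"},
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": rating_value,  # position on the numeric scale below
            "bestRating": 5,
            "worstRating": 1,
            "alternateName": verdict_label,  # human-readable verdict label
        },
    }
    return json.dumps(record, indent=2)
```

Embedding such a record in a page's `<script type="application/ld+json">` block is what gives the "search engine visibility" mentioned in section 5.1.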

=== 3.3 Academic Institutions ===

**Priority**: HIGH (Tier 1)

**Target Partners**:

* ETH Zurich / University of Zurich (Swiss, research collaboration)
* Duke Reporters' Lab (ClaimReview, Tech & Check)
* Harvard Shorenstein Center (network access)
* Stanford Internet Observatory (misinformation research)
* Oxford Reuters Institute (journalism research)

**Partnership Value**:

* **For Partners**: Research platform, real-world data, novel methodology to study
* **For FactHarbor**: Academic validation, grant access (Innosuisse), publications

**Engagement Model**:

* Research partnerships
* Student thesis projects
* Co-authored publications
* Conference presentations
* Joint grant applications

=== 3.4 Funding Organizations ===

**Priority**: MEDIUM (Tier 2)

**Target Partners**:

* Knight Foundation (journalism innovation)
* Google News Initiative (fact-checking fund)
* Swiss Innosuisse (research/innovation grants)
* Gebert RĂĽf Foundation (Swiss innovation)
* Prototype Fund Switzerland

**Partnership Value**:

* **For Partners**: Support an innovative, transparent approach to misinformation
* **For FactHarbor**: Operational funding, validation, network access

**Engagement Model**:

* Grant applications
* Progress reporting
* Impact documentation
* Network participation

== 4. Common Customer Characteristics ==

=== 4.1 Unifying Frustrations ===

All ideal customers share frustration with:

* Binary "true/false" verdicts that hide complexity
* Opaque methodology ("trust us" authority model)
* Lack of explicit assumptions and confidence ranges
* Inability to see evidence and reasoning process
* No way to understand why experts disagree

=== 4.2 Unifying Values ===

All ideal customers value:

* **Transparency**: Visible reasoning chains and methodology
* **Nuance**: Context-dependent truth (scenarios)
* **Independence**: Forming own judgment from evidence
* **Integrity**: Non-profit, open-source, no hidden agenda
* **Accessibility**: Understanding without specialized expertise

=== 4.3 Decision Criteria ===

When evaluating fact-checking tools, ideal customers prioritize:

1. **Methodology Transparency**: Can I see how conclusions are reached?
2. **Evidence Quality**: Are sources traceable and credible?
3. **Nuance Handling**: Does it acknowledge complexity?
4. **Speed & Usability**: Can I use it in my workflow?
5. **Trust & Independence**: Is there hidden bias or agenda?

== 5. Customer Journey ==

=== 5.1 Awareness ===

**How they find us**:

* Academic publications citing FactHarbor
* Referrals from fact-checking organizations
* Search engine results (ClaimReview schema visibility)
* Media coverage of misinformation topics
* Social media discussions about fact-checking

=== 5.2 Evaluation ===

**What they assess**:

* Methodology documentation (open and detailed?)
* Sample analyses (quality and transparency?)
* Open-source code (auditable?)
* Non-profit status (trustworthy?)
* User experience (usable?)

=== 5.3 Adoption ===

**How they start**:

* Submit a claim they're curious about
* Explore an existing analysis in depth
* Review methodology documentation
* Test with a known case to validate quality
* Integrate the API into an existing workflow

=== 5.4 Retention ===

**Why they return**:

* Consistent quality and transparency
* Time savings in verification workflow
* Unique value (scenario analysis not available elsewhere)
* Trust in methodology
* Community participation

== 6. Anti-Personas (Not Our Target) ==

=== 6.1 Confirmation Seekers ===

**Profile**: Users who want verdicts that confirm their existing beliefs

**Why Not Ideal**:

* Will be frustrated by nuanced, scenario-based analysis
* May reject conclusions that don't match expectations
* Not looking for transparent reasoning, but for validation

**How to Handle**:

* Don't compromise methodology to satisfy them
* The transparency may eventually convert some

=== 6.2 Speed-Only Users ===

**Profile**: Users who only want instant answers, with no interest in evidence

**Why Not Ideal**:

* Don't value FactHarbor's core differentiator (transparency)
* Would be better served by simpler binary fact-checkers
* Won't engage with evidence or scenarios

**How to Handle**:

* Provide quick summary views (UN-1: trust at a glance)
* Make deeper exploration available but not required

=== 6.3 Bad-Faith Actors ===

**Profile**: Users seeking to game or manipulate the system

**Why Not Ideal**:

* Waste resources
* Damage system integrity
* Not genuine users

**How to Handle**:

* AKEL detection of manipulation patterns
* Moderation for flagged escalations
* Transparent ban policies

== 7. Metrics and Validation ==

=== 7.1 Segment Metrics ===

Track for each segment:

* **Acquisition**: How many from each segment?
* **Activation**: Do they complete first analysis?
* **Engagement**: Do they explore evidence?
* **Retention**: Do they return?
* **Referral**: Do they recommend others?
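
The funnel above can be quantified per segment from raw usage events. A sketch, assuming a simple event log of `(segment, user_id, event)` tuples; the event names (`acquired`, `activated`, `returned`) are illustrative stand-ins for however these stages are actually instrumented:

```python
from collections import defaultdict

def segment_funnel(events):
    """Compute per-segment funnel rates from (segment, user_id, event) tuples.

    Rates are fractions of that segment's acquired users; deduplicated by
    user, so repeat events don't inflate the numbers.
    """
    users = defaultdict(lambda: defaultdict(set))  # segment -> event -> user ids
    for segment, user, event in events:
        users[segment][event].add(user)
    report = {}
    for segment, by_event in users.items():
        acquired = len(by_event.get("acquired", set()))
        denom = acquired or 1  # avoid division by zero for empty segments
        report[segment] = {
            "acquired": acquired,
            "activation_rate": len(by_event.get("activated", set())) / denom,
            "retention_rate": len(by_event.get("returned", set())) / denom,
        }
    return report
```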

=== 7.2 Segment-Specific Success Indicators ===

| Segment | Key Success Metric |
|---|---|
| Journalists | API calls per newsroom; time saved per verification |
| Researchers | Papers citing FactHarbor; data downloads |
| Educators | Curricula integrations; student engagement |
| Policy Analysts | Briefings citing FactHarbor; repeat usage |
| Content Consumers | Retention rate; evidence exploration rate |

=== 7.3 Partnership Metrics ===

| Partner Type | Success Metric |
|---|---|
| Media | Integration count; co-published analyses |
| Fact-Checkers | Data sharing volume; methodology alignment |
| Academic | Papers published; grants received |
| Funders | Grants awarded; renewal rate |

== 8. Related Pages ==

* [[User Needs>>Archive.FactHarbor 2026\.02\.08.Specification.Requirements.User Needs.WebHome]] - Detailed user need definitions
* [[Requirements>>Archive.FactHarbor 2026\.02\.08.Specification.Requirements.WebHome]] - How user needs map to requirements
* [[Partnership Strategy>>FactHarbor.Organisation.Partnership-Strategy]] - Partnership opportunity details
* [[Funding & Partnerships>>FactHarbor.Organisation.Funding-Partnerships]] - Funding sources and contacts
* [[Organisational Model>>FactHarbor.Organisation.Organisational-Model]] - How FactHarbor is structured