{{info}}
**Current Implementation (v2.6.33)** - Uses the Vercel AI SDK for multi-provider abstraction. The provider is selected via the ##LLM_PROVIDER## environment variable.
{{/info}}

= LLM Abstraction Architecture =

{{mermaid}}

graph TB
    subgraph Pipelines[Triple-Path Pipelines]
        ORCH[Orchestrated orchestrated.ts]
        CANON[Monolithic Canonical]
        DYN[Monolithic Dynamic]
    end

    subgraph AISDK[Vercel AI SDK]
        SDK[AI SDK Core generateText generateObject]
        STREAM[Streaming Support streamText]
    end

    subgraph Providers[LLM Providers]
        ANT[Anthropic Claude 3.5 Sonnet DEFAULT]
        OAI[OpenAI GPT-4o]
        GOO[Google Gemini 1.5]
        MIS[Mistral Large]
    end

    subgraph Config[Configuration]
        ENV[Environment Variables LLM_PROVIDER FH_DETERMINISTIC]
    end

    ORCH --> SDK
    CANON --> SDK
    DYN --> SDK
    SDK --> ANT
    SDK --> OAI
    SDK --> GOO
    SDK --> MIS
    ENV --> SDK

{{/mermaid}}
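For orientation, a minimal sketch of how ##LLM_PROVIDER## selection could be wired through the AI SDK. The ##@ai-sdk/*## packages and their callable provider exports are the SDK's official API; the ##getModel## helper and the pinned model IDs are illustrative, not the exact implementation:

{{code language="typescript"}}
import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';
import type { LanguageModel } from 'ai';

// Map LLM_PROVIDER to an AI SDK model instance. Anthropic is the default.
// Model IDs are illustrative; the real code may pin different versions.
export function getModel(): LanguageModel {
  switch (process.env.LLM_PROVIDER ?? 'anthropic') {
    case 'openai':
      return openai('gpt-4o');
    case 'google':
      return google('gemini-1.5-pro');
    case 'mistral':
      return mistral('mistral-large-latest');
    case 'anthropic':
    default:
      return anthropic('claude-3-5-sonnet-20241022');
  }
}
{{/code}}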

== Current Implementation ==

|= Feature |= Status |= Notes
| **Multi-provider support** | Implemented | Anthropic, OpenAI, Google, Mistral
| **Provider selection** | Implemented | Via ##LLM_PROVIDER## env var
| **Deterministic mode** | Implemented | ##FH_DETERMINISTIC=true## sets temperature to 0
| **Automatic failover** | Not implemented | Manual provider switch only
| **Per-stage provider** | Not implemented | Single provider for all stages
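Because per-stage providers are not supported, all three pipelines call the SDK entry points from the diagram against the same configured model. A minimal ##generateObject## call through the shared abstraction might look like this (the schema and prompt are purely illustrative, and ##getModel## is the sketch above):

{{code language="typescript"}}
import { generateObject } from 'ai';
import { z } from 'zod';

// All pipelines share the single configured model; getModel() is the
// illustrative selector sketched above.
const { object } = await generateObject({
  model: getModel(),
  schema: z.object({ summary: z.string() }), // illustrative schema
  prompt: 'Summarize the document in one sentence.',
});
{{/code}}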

== Environment Variables ==

|= Variable |= Default |= Options
| ##LLM_PROVIDER## | anthropic | anthropic, openai, google, mistral
| ##FH_DETERMINISTIC## | true | true = temperature 0, false = provider default temperature
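The deterministic flag maps directly to the sampling temperature. A sketch of how ##FH_DETERMINISTIC## could be applied to a ##generateText## call (the flag handling shown here is an assumption about the wiring, not the verified implementation):

{{code language="typescript"}}
import { generateText } from 'ai';

// FH_DETERMINISTIC=true (the default) pins temperature to 0 for
// reproducible output; otherwise the provider default is used.
const deterministic = (process.env.FH_DETERMINISTIC ?? 'true') === 'true';

const { text } = await generateText({
  model: getModel(), // illustrative selector from the sketch above
  prompt: 'Classify this ticket.', // illustrative prompt
  ...(deterministic ? { temperature: 0 } : {}),
});
{{/code}}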

== Provider Details ==

|= Provider |= Model |= Use Case
| **Anthropic** | Claude 3.5 Sonnet | Default, best reasoning
| **OpenAI** | GPT-4o | Alternative, fast
| **Google** | Gemini 1.5 Pro | Alternative, long context
| **Mistral** | Large | Alternative, EU data residency

== Future Enhancements ==

* **Automatic failover**: Chain providers for resilience (see the sketch below)
* **Per-stage optimization**: Different providers per pipeline stage
* **Cost tracking**: Monitor and optimize LLM costs
* **Local models**: Ollama/vLLM for on-premises deployment
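Automatic failover is not implemented today; a future version might chain providers roughly as follows. This is entirely speculative and does not describe current behavior:

{{code language="typescript"}}
import { generateText } from 'ai';
import type { LanguageModel } from 'ai';

// Speculative failover chain - NOT current behavior. Tries each model
// in order and falls through to the next on error.
async function generateWithFailover(models: LanguageModel[], prompt: string) {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await generateText({ model, prompt });
    } catch (err) {
      lastError = err; // e.g., rate limit or outage; try the next provider
    }
  }
  throw lastError;
}
{{/code}}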