feat(ai): refresh AI models with deprecation support and multi-provider defaults (BREAKING: deploy server before frontend please) #16503
Conversation
```diff
 getDefaultSpeedModel(): RegisteredAIModel {
-  const defaultModelId = this.twentyConfigService.get(
+  const defaultModelIds = this.twentyConfigService.get(
     'DEFAULT_AI_SPEED_MODEL_ID',
   );
-  let model = this.getModel(defaultModelId);
+  let model = this.getFirstAvailableModelFromList(defaultModelIds);

   if (!model) {
     const availableModels = this.getAvailableModels();
```
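A plausible sketch of the `getFirstAvailableModelFromList` helper this diff introduces — the implementation details (registry shape, trimming) are assumptions for illustration, not the PR's actual code:

```typescript
interface RegisteredAIModel {
  modelId: string;
  provider: string;
}

// Illustrative registry where only an Anthropic model is configured.
const registry = new Map<string, RegisteredAIModel>([
  [
    'claude-haiku-4-5-20251001',
    { modelId: 'claude-haiku-4-5-20251001', provider: 'anthropic' },
  ],
]);

// Walk the comma-separated list in order and return the first registered model.
function getFirstAvailableModelFromList(
  modelIds: string,
): RegisteredAIModel | undefined {
  return modelIds
    .split(',')
    .map((id) => id.trim())
    .map((id) => registry.get(id))
    .find((model) => model !== undefined);
}

const model = getFirstAvailableModelFromList(
  'gpt-4.1-mini,claude-haiku-4-5-20251001,grok-3-mini',
);
// gpt-4.1-mini is not registered here, so the Anthropic entry wins.
console.log(model?.modelId);
```

Note that, as with the old single-ID lookup, the helper can still return `undefined` when nothing in the list is registered — which is exactly the case the review comment below flags.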
Bug: The ChatExecutionService does not handle the case where getDefaultPerformanceModel() returns undefined, leading to a potential server crash when accessing properties on the result.
Severity: CRITICAL | Confidence: High
🔍 Detailed Analysis
The getDefaultPerformanceModel() method can return undefined if no AI models are configured, for instance, when no provider API keys are set. The ChatExecutionService.streamChat() method calls this function to get registeredModel but does not check if the result is undefined. It then immediately attempts to access properties like registeredModel.provider and registeredModel.model. This will cause a TypeError and crash the server when a user initiates a chat in an environment with no configured AI models. Other services that call similar methods correctly include guards for this scenario.
💡 Suggested Fix
In ChatExecutionService.streamChat(), add a check to verify that the value returned from getDefaultPerformanceModel() is not undefined before attempting to access its properties. If it is undefined, the service should handle the error gracefully, for example, by throwing a specific error or returning an informative message to the user, preventing the server from crashing.
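A minimal sketch of such a guard, assuming the shapes described in the report (the stub registry and error message are illustrative, not the project's actual code):

```typescript
// Hypothetical shape of a registry entry, per the report.
interface RegisteredAIModel {
  provider: string;
  model: string;
}

class AIModelRegistryService {
  // Stand-in: returns undefined when no provider API keys are configured.
  getDefaultPerformanceModel(): RegisteredAIModel | undefined {
    return undefined;
  }
}

class ChatExecutionService {
  constructor(private readonly registry: AIModelRegistryService) {}

  streamChat(): string {
    const registeredModel = this.registry.getDefaultPerformanceModel();

    // Guard before any property access, instead of crashing with a TypeError.
    if (!registeredModel) {
      throw new Error(
        'No AI models are configured. Set a provider API key to enable chat.',
      );
    }

    return `${registeredModel.provider}/${registeredModel.model}`;
  }
}
```

With the guard in place, an unconfigured environment produces a catchable, descriptive error rather than a server crash.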
🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
Location:
packages/twenty-server/src/engine/metadata-modules/ai/ai-models/services/ai-model-registry.service.ts#L164-L171
Potential issue: The `getDefaultPerformanceModel()` method can return `undefined` if no AI models are configured, for instance, when no provider API keys are set. The `ChatExecutionService.streamChat()` method calls this function to get `registeredModel` but does not check if the result is `undefined`. It then immediately attempts to access properties like `registeredModel.provider` and `registeredModel.model`. This will cause a `TypeError` and crash the server when a user initiates a chat in an environment with no configured AI models. Other services that call similar methods correctly include guards for this scenario.
Force-pushed from ff87abe to 2e388a0
2 issues found across 9 files
Prompt for AI agents (all 2 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="packages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/openai-models.const.ts">
<violation number="1" location="packages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/openai-models.const.ts:13">
P2: Likely typo in `contextWindowTokens`: `1047576` should be `1048576` (1M tokens = 2^20). The value is 1000 tokens less than the standard 1M context window.</violation>
</file>
<file name="packages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/ai-models-types.const.ts">
<violation number="1" location="packages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/ai-models-types.const.ts:35">
P2: The `| string` escape hatch defeats the type safety of the `ModelId` union. In TypeScript, `'literal' | string` collapses to just `string`, making all specific model IDs meaningless as type constraints. Consider using a branded type pattern or a separate `CustomModelId` type:
```typescript
type KnownModelId = typeof DEFAULT_FAST_MODEL | typeof DEFAULT_SMART_MODEL | 'gpt-4o' | ...;
type CustomModelId = string & { __brand: 'CustomModelId' };
export type ModelId = KnownModelId | CustomModelId;
```
Or if custom models need validation, keep strict typing and handle unknown models at runtime.</violation>
</file>
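To make the reviewer's point concrete, here is a runnable sketch of the branded-type pattern; all names are illustrative stand-ins, not the constants from `ai-models-types.const.ts`:

```typescript
// Illustrative only: the real ModelId union lives in ai-models-types.const.ts.
type KnownModelId = 'gpt-4.1' | 'gpt-4.1-mini';
type CustomModelId = string & { __brand: 'CustomModelId' };
type ModelId = KnownModelId | CustomModelId;

// A plain `KnownModelId | string` would collapse to `string`; the brand keeps
// arbitrary strings out unless they pass through an explicit validator.
function asCustomModelId(id: string): CustomModelId {
  // Hypothetical validation; real code might check a registry or id pattern.
  if (id.trim().length === 0) {
    throw new Error('Model id must be non-empty');
  }
  return id as CustomModelId;
}

const known: ModelId = 'gpt-4.1'; // literal still accepted
const custom: ModelId = asCustomModelId('my-custom-finetune'); // branded string
// const typo: ModelId = 'gpt-4.1-typo'; // compile error without the brand cast
console.log(known, custom);
```

The brand exists only at the type level, so the cast is a no-op at runtime; its value is that the compiler rejects raw strings that never went through the validator.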
Resolved review threads:
- ...ages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/openai-models.const.ts
- ...es/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/ai-models-types.const.ts
🚀 Preview Environment Ready! Your preview environment is available at: http://bore.pub:32976. This environment will automatically shut down when the PR is closed or after 5 hours.
Force-pushed from 75eb51d to 929c496
…er defaults

- Add latest AI models: GPT-4.1, o3, o4-mini, Claude 4.5 (Opus/Sonnet/Haiku), Grok 4.1
- Mark deprecated models: GPT-4o, GPT-4o-mini, GPT-4-turbo, Claude Opus 4, Claude Sonnet 4
- Add `deprecated` flag to AIModelConfig interface
- Split models into separate files per provider for maintainability
- Support comma-separated default model lists for multi-provider fallback
- Filter deprecated models from dropdowns while keeping them usable for existing agents
- Update config variables to prefer newer models with fallbacks across providers
Force-pushed from 929c496 to 0825362
Hey @FelixMalfait! After you've done the QA of your Pull Request, you can mark it as done here. Thank you!
Summary

- `deprecated` flag

Changes

New Models Added

Deprecated Models

Config Changes

Default model configs now support comma-separated fallback lists:

```
DEFAULT_AI_SPEED_MODEL_ID=gpt-4.1-mini,claude-haiku-4-5-20251001,grok-3-mini
DEFAULT_AI_PERFORMANCE_MODEL_ID=gpt-4.1,claude-sonnet-4-5-20250929,grok-4
```

Test plan
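A quick sketch of how these fallback lists are expected to resolve in order; the `available` set here is an assumption (only an Anthropic key configured), not the server's actual logic:

```typescript
// Assumed availability: only Anthropic models are configured in this example.
const available = new Set([
  'claude-haiku-4-5-20251001',
  'claude-sonnet-4-5-20250929',
]);

// Return the first entry in the comma-separated list that is available.
function resolveDefault(list: string): string | undefined {
  return list
    .split(',')
    .map((id) => id.trim())
    .find((id) => available.has(id));
}

// With OpenAI/xAI unavailable, the Anthropic entry in each list wins.
console.log(resolveDefault('gpt-4.1-mini,claude-haiku-4-5-20251001,grok-3-mini'));
console.log(resolveDefault('gpt-4.1,claude-sonnet-4-5-20250929,grok-4'));
```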