feat(ai): refresh AI models with deprecation support and multi-provider defaults (BREAKING: deploy server before frontend please) #16503

Merged
FelixMalfait merged 1 commit into main from feat/ai-models-refresh-deprecation
Dec 11, 2025
Conversation

@FelixMalfait
Member

Summary

  • Add latest AI models from OpenAI (GPT-4.1, o3, o4-mini), Anthropic (Claude 4.5 Opus/Sonnet/Haiku), and xAI (Grok 4.1)
  • Mark deprecated models (GPT-4o, GPT-4o-mini, GPT-4-turbo, Claude Opus 4, Claude Sonnet 4) with a deprecated flag
  • Split AI models into separate files per provider for better maintainability
  • Support comma-separated default model lists for automatic fallback across providers (works out of the box for self-hosters regardless of which provider they configure)
  • Filter deprecated models from dropdown selection while keeping them functional for existing agents

Changes

New Models Added

| Provider  | Models                                              |
| --------- | --------------------------------------------------- |
| OpenAI    | gpt-4.1, gpt-4.1-mini, o3, o4-mini                  |
| Anthropic | claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5 |
| xAI       | grok-4-1-fast-reasoning                             |

Deprecated Models

  • gpt-4o, gpt-4o-mini, gpt-4-turbo (OpenAI)
  • claude-opus-4-20250514, claude-sonnet-4-20250514 (Anthropic)
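The deprecation behavior described above (hidden from dropdowns, still usable by existing agents) can be sketched as follows. This is an illustrative assumption based on the PR description, not the actual twenty-server code; the field names and the `MODELS` list are hypothetical.

```typescript
// Hypothetical shape of a model config entry with the new deprecated flag.
interface AIModelConfig {
  modelId: string;
  deprecated?: boolean;
}

const MODELS: AIModelConfig[] = [
  { modelId: 'gpt-4.1' },
  { modelId: 'gpt-4o', deprecated: true },
];

// Dropdowns list only non-deprecated models...
const selectableModels = MODELS.filter((m) => !m.deprecated);

// ...but lookup by id still resolves deprecated entries, so agents that
// already reference gpt-4o keep working.
const findModel = (id: string): AIModelConfig | undefined =>
  MODELS.find((m) => m.modelId === id);
```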

Config Changes

Default model configs now support comma-separated fallback lists:

  • DEFAULT_AI_SPEED_MODEL_ID=gpt-4.1-mini,claude-haiku-4-5-20251001,grok-3-mini
  • DEFAULT_AI_PERFORMANCE_MODEL_ID=gpt-4.1,claude-sonnet-4-5-20250929,grok-4
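The intended fallback behavior can be sketched like this. `getFirstAvailableModelFromList` is the helper name visible in the diff, but its body and the `AVAILABLE_MODEL_IDS` set below are illustrative assumptions: the first id in the comma-separated list whose provider is actually configured wins.

```typescript
// Assume only an Anthropic API key is configured on this self-hosted instance.
const AVAILABLE_MODEL_IDS = new Set(['claude-haiku-4-5-20251001']);

// Walk the comma-separated list and return the first configured model id.
function getFirstAvailableModelFromList(ids: string): string | undefined {
  return ids
    .split(',')
    .map((id) => id.trim())
    .find((id) => AVAILABLE_MODEL_IDS.has(id));
}

const speedModel = getFirstAvailableModelFromList(
  'gpt-4.1-mini,claude-haiku-4-5-20251001,grok-3-mini',
);
// speedModel is the Anthropic entry, since that is the only configured provider
```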

Test plan

  • Unit tests pass
  • Typecheck passes
  • Lint passes
  • Verify deprecated models don't appear in model dropdowns
  • Verify agents with deprecated models still work correctly
  • Verify default model fallback works when only one provider is configured

Contributor

@greptile-apps greptile-apps bot left a comment


Comment on lines 164 to 171
```diff
   getDefaultSpeedModel(): RegisteredAIModel {
-    const defaultModelId = this.twentyConfigService.get(
+    const defaultModelIds = this.twentyConfigService.get(
       'DEFAULT_AI_SPEED_MODEL_ID',
     );
-    let model = this.getModel(defaultModelId);
+    let model = this.getFirstAvailableModelFromList(defaultModelIds);

     if (!model) {
       const availableModels = this.getAvailableModels();
```

Bug: The ChatExecutionService does not handle the case where getDefaultPerformanceModel() returns undefined, leading to a potential server crash when accessing properties on the result.
Severity: CRITICAL | Confidence: High

🔍 Detailed Analysis

The getDefaultPerformanceModel() method can return undefined if no AI models are configured, for instance, when no provider API keys are set. The ChatExecutionService.streamChat() method calls this function to get registeredModel but does not check if the result is undefined. It then immediately attempts to access properties like registeredModel.provider and registeredModel.model. This will cause a TypeError and crash the server when a user initiates a chat in an environment with no configured AI models. Other services that call similar methods correctly include guards for this scenario.

💡 Suggested Fix

In ChatExecutionService.streamChat(), add a check to verify that the value returned from getDefaultPerformanceModel() is not undefined before attempting to access its properties. If it is undefined, the service should handle the error gracefully, for example, by throwing a specific error or returning an informative message to the user, preventing the server from crashing.
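A minimal sketch of the suggested guard: the real signature of `ChatExecutionService.streamChat()` is not shown in this thread, so the function shape below is an assumption; only the guard pattern itself is the point.

```typescript
// Assumed minimal shape of a registered model entry.
type RegisteredAIModel = { provider: string; model: string };

// Resolve the default model, failing with a clear error instead of
// crashing later on a TypeError when no provider is configured.
function resolveModelOrThrow(
  getDefaultPerformanceModel: () => RegisteredAIModel | undefined,
): RegisteredAIModel {
  const registeredModel = getDefaultPerformanceModel();

  if (!registeredModel) {
    throw new Error(
      'No default AI model available: configure at least one provider API key.',
    );
  }

  return registeredModel;
}
```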

Location: packages/twenty-server/src/engine/metadata-modules/ai/ai-models/services/ai-model-registry.service.ts#L164-L171

@FelixMalfait force-pushed the feat/ai-models-refresh-deprecation branch from ff87abe to 2e388a0 on December 11, 2025 at 15:01
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 9 files

packages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/openai-models.const.ts:13

P2: Likely typo in `contextWindowTokens`: `1047576` should be `1048576` (1M tokens = 2^20). The value is 1,000 tokens less than the standard 1M context window.

packages/twenty-server/src/engine/metadata-modules/ai/ai-models/constants/ai-models-types.const.ts:35

P2: The `| string` escape hatch defeats the type safety of the `ModelId` union. In TypeScript, `'literal' | string` collapses to just `string`, making all the specific model IDs meaningless as type constraints. Consider a branded type pattern or a separate `CustomModelId` type:

```typescript
type KnownModelId = typeof DEFAULT_FAST_MODEL | typeof DEFAULT_SMART_MODEL | 'gpt-4o' | ...;
type CustomModelId = string & { __brand: 'CustomModelId' };
export type ModelId = KnownModelId | CustomModelId;
```

Or, if custom models need validation, keep strict typing and handle unknown models at runtime.

@github-actions
Contributor

github-actions bot commented Dec 11, 2025

🚀 Preview Environment Ready!

Your preview environment is available at: http://bore.pub:32976

This environment will automatically shut down when the PR is closed or after 5 hours.

@FelixMalfait changed the title from "feat(ai): refresh AI models with deprecation support and multi-provider defaults" to "feat(ai): refresh AI models with deprecation support and multi-provider defaults (BREAKING: deploy server before frontend please)" on Dec 11, 2025
@FelixMalfait force-pushed the feat/ai-models-refresh-deprecation branch 2 times, most recently from 75eb51d to 929c496 on December 11, 2025 at 15:58
…er defaults

- Add latest AI models: GPT-4.1, o3, o4-mini, Claude 4.5 (Opus/Sonnet/Haiku), Grok 4.1
- Mark deprecated models: GPT-4o, GPT-4o-mini, GPT-4-turbo, Claude Opus 4, Claude Sonnet 4
- Add deprecated flag to AIModelConfig interface
- Split models into separate files per provider for maintainability
- Support comma-separated default model lists for multi-provider fallback
- Filter deprecated models from dropdowns while keeping them usable for existing agents
- Update config variables to prefer newer models with fallbacks across providers
@FelixMalfait force-pushed the feat/ai-models-refresh-deprecation branch from 929c496 to 0825362 on December 11, 2025 at 16:07
@FelixMalfait merged commit a137273 into main on Dec 11, 2025
68 checks passed
@FelixMalfait deleted the feat/ai-models-refresh-deprecation branch on December 11, 2025 at 20:47
@twenty-eng-sync

Hey @FelixMalfait! After you've done the QA of your Pull Request, you can mark it as done here. Thank you!
