Conversation

thomtrp commented Dec 2, 2025
- If >5000 workflows per hour, new ones should fail.
- If >100 workflows per minute, new ones should be set as NOT_STARTED, except for manual triggers.
- When enqueueing, we check whether there are NOT_STARTED workflow runs that may be queued. If yes, we call the associated job.
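The two limits above could be modeled as one token bucket per tier. The sketch below is a minimal in-memory illustration only: the PR's actual `ThrottlerService` is Redis-backed, and `TokenBucket`, `decide`, and the refill math here are hypothetical stand-ins.

```typescript
// In-memory sketch of the two-tier throttle (hypothetical names; the real
// implementation is a Redis-backed ThrottlerService).
class TokenBucket {
  private tokens: number;
  private lastRefillMs: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPeriodMs: number,
    nowMs: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefillMs = nowMs;
  }

  // Refill proportionally to elapsed time, then try to take one token.
  tryConsume(nowMs: number = Date.now()): boolean {
    const elapsedMs = Math.max(0, nowMs - this.lastRefillMs);
    const refill = (elapsedMs / this.refillPeriodMs) * this.capacity;
    this.tokens = Math.min(this.capacity, this.tokens + refill);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Hard limit: 5000 runs/hour. Soft limit: 100 runs/minute.
const hardBucket = new TokenBucket(5000, 3_600_000);
const softBucket = new TokenBucket(100, 60_000);

type RunOutcome = 'FAILED' | 'NOT_STARTED' | 'ENQUEUED';

const decide = (isManualTrigger: boolean, nowMs: number): RunOutcome => {
  if (!hardBucket.tryConsume(nowMs)) return 'FAILED'; // hard limit: fail the run
  if (!softBucket.tryConsume(nowMs) && !isManualTrigger) {
    return 'NOT_STARTED'; // soft limit: park the run, except for manual triggers
  }
  return 'ENQUEUED';
};
```

Note that in this sketch the hard bucket is always consumed first, so parked (NOT_STARTED) runs still count against the hourly budget; whether the real service does the same is a detail of the PR.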
Greptile Overview

Greptile Summary: This PR implements a two-tier throttling system for workflow execution, with hard and soft limits.

Key Changes:
Issues Found:
Confidence Score: 3/5

Important Files Changed
File Analysis
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant WorkflowRunner as WorkflowRunnerService
    participant NotStartedSvc as NotStartedRunsService
    participant Throttler as ThrottlerService
    participant Cache as Redis Cache
    participant DB as Database
    participant Queue as Message Queue
    Client->>WorkflowRunner: run(workflowVersionId, payload)
    WorkflowRunner->>WorkflowRunner: Check if manual trigger
    WorkflowRunner->>NotStartedSvc: throttleOrThrowIfHardLimitReached()
    NotStartedSvc->>Throttler: tokenBucketThrottleOrThrow(hard-throttle key, 1, 5000, 3600000)
    Throttler->>Cache: get(hard-throttle tokens)
    Cache-->>Throttler: current token state
    alt Hard limit exceeded (>5000/hour)
        Throttler-->>NotStartedSvc: throw ThrottlerException
        NotStartedSvc-->>WorkflowRunner: throw
        WorkflowRunner->>WorkflowRunner: createFailedWorkflowRun()
        WorkflowRunner->>DB: create workflow run (status=FAILED, error="Throttle limit reached")
        WorkflowRunner-->>Client: return {workflowRunId}
    else Hard limit OK
        Throttler->>Cache: set(hard-throttle tokens - 1)
        Throttler-->>NotStartedSvc: return remaining tokens
    end
    WorkflowRunner->>NotStartedSvc: throttleAndReturnRemainingRunsToEnqueueCount()
    NotStartedSvc->>Throttler: tokenBucketThrottleOrThrow(soft-throttle key, 1, 100, 60000)
    Throttler->>Cache: get(soft-throttle tokens)
    Cache-->>Throttler: current token state
    alt Soft limit exceeded (>100/min) AND NOT manual trigger
        Throttler-->>NotStartedSvc: throw ThrottlerException
        NotStartedSvc-->>WorkflowRunner: throw
        WorkflowRunner->>WorkflowRunner: createNotStartedWorkflowRun()
        WorkflowRunner->>DB: create workflow run (status=NOT_STARTED)
        WorkflowRunner->>NotStartedSvc: increaseWorkflowRunNotStartedCount()
        NotStartedSvc->>Cache: increment NOT_STARTED count
        WorkflowRunner-->>Client: return {workflowRunId}
    else Soft limit OK OR manual trigger
        Throttler->>Cache: set(soft-throttle tokens - 1)
        Throttler-->>NotStartedSvc: return remaining tokens
        NotStartedSvc-->>WorkflowRunner: return remainingCount
    end
    WorkflowRunner->>WorkflowRunner: enqueueWorkflowRunAndPotentialAwaitingRuns()
    WorkflowRunner->>DB: create workflow run (status=ENQUEUED)
    WorkflowRunner->>Queue: add(RunWorkflowJob, {workflowRunId})
    WorkflowRunner->>NotStartedSvc: getNotStartedRunsCountFromCache()
    NotStartedSvc->>Cache: get(NOT_STARTED count)
    Cache-->>NotStartedSvc: current count
    NotStartedSvc-->>WorkflowRunner: return count
    alt remainingCount > 0 AND notStartedCount > 0
        WorkflowRunner->>Queue: add(WorkflowEnqueueAwaitingRunsJob, {workspaceId})
        Note over Queue: Background job will process<br/>NOT_STARTED workflows later
    end
    WorkflowRunner-->>Client: return {workflowRunId}
```
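The tail of this flow — deciding whether to schedule the background job that drains NOT_STARTED runs — can be sketched as follows. All names here are illustrative stand-ins, not the PR's actual service methods.

```typescript
// Sketch: schedule the awaiting-runs job only when there is both spare
// throttle capacity and a backlog, mirroring the diagram's
// "remainingCount > 0 AND notStartedCount > 0" guard. Names are hypothetical.
type QueueAdd = (jobName: string, data: { workspaceId: string }) => void;

const enqueueAwaitingRunsIfNeeded = (
  remainingRunsToEnqueueCount: number,
  notStartedRunsCount: number,
  workspaceId: string,
  addToQueue: QueueAdd,
): boolean => {
  if (remainingRunsToEnqueueCount > 0 && notStartedRunsCount > 0) {
    addToQueue('WorkflowEnqueueAwaitingRunsJob', { workspaceId });
    return true;
  }
  // No capacity or no backlog: nothing to schedule.
  return false;
};
```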
@Process(WorkflowEnqueueAwaitingRunsJob.name)
async handle({ workspaceId }: { workspaceId: string }): Promise<void> {
  await this.workflowEnqueueAwaitingRunsWorkspaceService.enqueueRuns({

@@ -0,0 +1,3 @@
export const getWorkflowRunNotStartedCountCacheKey = (
  workspaceId: string,
): string => `workflow-run-not-started-count:${workspaceId}`;
Why not use the workspace cache service?
const remainingWorkflowRunToEnqueueCount =
  await this.workflowRunQueueWorkspaceService.getRemainingRunsToEnqueueCountFromDatabase(
const notStartedRunsCount =
  await this.workflowNotStartedRunsWorkspaceService.getNotStartedRunsCountFromDatabase(
We should maintain this count in cache.
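The suggestion above — read the count from cache and fall back to the database only on a cold cache — could look like this sketch. `CacheStore`, `NotStartedRunsCounter`, and `countFromDb` are hypothetical names, not the PR's actual services.

```typescript
// Hypothetical sketch of a cache-first NOT_STARTED counter with a
// database fallback for recomputation on cache miss.
interface CacheStore {
  get(key: string): Promise<number | undefined>;
  set(key: string, value: number): Promise<void>;
}

const notStartedCountKey = (workspaceId: string): string =>
  `workflow-run-not-started-count:${workspaceId}`;

class NotStartedRunsCounter {
  constructor(
    private readonly cache: CacheStore,
    // Fallback used to recompute the count when the cache is cold.
    private readonly countFromDb: (workspaceId: string) => Promise<number>,
  ) {}

  async get(workspaceId: string): Promise<number> {
    const cached = await this.cache.get(notStartedCountKey(workspaceId));
    if (cached !== undefined) return cached;
    // Cache miss: recompute from the database and backfill the cache.
    const fresh = await this.countFromDb(workspaceId);
    await this.cache.set(notStartedCountKey(workspaceId), fresh);
    return fresh;
  }

  async increment(workspaceId: string, delta = 1): Promise<void> {
    const current = await this.get(workspaceId);
    await this.cache.set(notStartedCountKey(workspaceId), current + delta);
  }
}
```

With a real Redis backend, `increment` would use an atomic INCR rather than a read-then-set, to avoid races between concurrent workers.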
@@ -50,7 +50,7 @@ export class WorkflowHandleStaledRunsWorkspaceService {
  },
);

await this.workflowRunQueueWorkspaceService.recomputeWorkflowRunQueuedCount(
await this.workflowNotStartedRunsWorkspaceService.recomputeWorkflowRunNotStartedCount(
import { WorkflowEnqueueAwaitingRunsWorkspaceService } from 'src/modules/workflow/workflow-runner/workflow-run-queue/workspace-services/workflow-enqueue-awaiting-runs.workspace-service';

@Processor({ queueName: MessageQueue.workflowQueue, scope: Scope.REQUEST })
export class WorkflowEnqueueAwaitingRunsJob {
Rename this to a "workflow enqueue not started" job. (I would also add an optional workflowRunId.)
@Processor({ queueName: MessageQueue.workflowQueue, scope: Scope.REQUEST })
export class WorkflowEnqueueAwaitingRunsJob {
  constructor(
    private readonly workflowEnqueueAwaitingRunsWorkspaceService: WorkflowEnqueueAwaitingRunsWorkspaceService,
Rename to workflowEnqueueRunWorkspaceService.
import { getWorkflowRunNotStartedCountCacheKey } from 'src/modules/workflow/workflow-runner/workflow-run-queue/utils/get-workflow-run-not-started-count-cache-key.util';

@Injectable()
export class WorkflowNotStartedRunsWorkspaceService {
  triggerPayload: payload,
});

await this.enqueueWorkflowRun(workspaceId, workflowRunId);
if (remainingRunsToEnqueueCount > 0 && currentNotStartedRunsCount > 0) {
  await this.messageQueueService.add<{ workspaceId: string }>(
    WorkflowEnqueueAwaitingRunsJob.name,
Pass the priority workflowRunId here if needed.
@@ -244,4 +253,136 @@ export class WorkflowRunnerWorkspaceService {
    status: newStatus,
  };
}

private async checkThrottleLimits(
No need to handle the soft limit here.
…rkflow-run-queue/jobs/workflow-enqueue-awaiting-runs.job.ts
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
  workflowRunId,
});

await this.messageQueueService.add<RunWorkflowJobData>(
I feel this should also go through the workflow-run-enqueue job.

The enqueue notion means NOT_STARTED => ENQUEUED. The service only looks for not-started workflow runs. In the case of an already running workflow, like here, we still need to send the job separately. So there are three such cases: delays (here), forms, and workflows with too many steps, where we cut execution and re-send the job to the queue.
export type WorkflowRunEnqueueJobData = {
  workspaceId: string;
  workflowRunId?: string;
};
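Assuming the optional workflowRunId marks a priority run (as suggested earlier in the review), the enqueue job could drain the NOT_STARTED backlog with that run moved to the front. `orderRuns` below is a hypothetical helper, not code from the PR.

```typescript
// Sketch: order the NOT_STARTED backlog so an optional priority run
// (the job data's workflowRunId) is enqueued first. Hypothetical helper.
const orderRuns = (
  notStartedRunIds: string[],
  priorityRunId?: string,
): string[] => {
  if (priorityRunId === undefined) return notStartedRunIds;
  // Move the priority run to the front; keep the rest in original order.
  return [
    priorityRunId,
    ...notStartedRunIds.filter((id) => id !== priorityRunId),
  ];
};
```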