The Workflow Monitor gives you a time-based, run-level view of every execution of a deployed workflow. Use it to track performance, inspect inputs and outputs, diagnose failures, and review the history of runs and configuration changes. Monitoring is only available for workflows deployed to production. Workflows in design or debug phases are not tracked in the Workflow Monitor.

Workflow Monitor

The Workflow Monitor provides two analytics views:
  • All runs: Shows all workflow run instances, including runs triggered by events, schedules, or API calls.
  • Model runs: Shows individual AI node executions within each workflow run.
Both tabs display summary metrics at the top:
  • Total runs / requests: Total number of workflow runs or AI node calls in the selected period.
  • Response time (P90): The response time below which 90% of runs fall.
  • Response time (P99): The response time below which 99% of runs fall.
  • Failure rate: Percentage of runs that ended in failure.
These metrics update dynamically when you apply filters or change the date range.
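To make the P90/P99 metrics concrete, here is a minimal nearest-rank percentile sketch. This only illustrates what "the response time below which 90% of runs fall" means; the platform's own computation may differ (for example, interpolated or streaming estimates), and the sample values are made up.

```python
# Minimal sketch: nearest-rank percentile over response times (ms).
# Illustrative only; not the platform's actual implementation.
import math

def percentile(times_ms, p):
    """Return the value below which p% of samples fall (nearest rank)."""
    ordered = sorted(times_ms)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

times = [120, 95, 430, 210, 180, 150, 90, 300, 250, 110]
p90 = percentile(times, 90)  # 90% of these runs completed at or below this value
p99 = percentile(times, 99)
```

With these ten samples, P90 is 300 ms and P99 is 430 ms, which is why P99 is the more sensitive indicator of worst-case latency.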

Access the Workflow Monitor

  1. On the Workflows page, click the workflow you want to monitor. The workflow must be in Deployed status.
  2. In the left navigation pane, click Workflow runs.
  3. Click the All runs or Model runs tab.
  4. Click any row to open the detailed run panel on the right.

All Runs Tab

Each row in the All runs tab represents one workflow execution and shows:
  • Run ID: Unique identifier for the run.
  • Status: One of the following:
    • In Progress: The run is actively executing.
    • Waiting: Execution is paused, waiting for a response from an external system (typically an async API node).
    • Success: The run completed without errors.
    • Failed: The run did not complete successfully.
  • Response time: Total time from request to output.
  • Nodes executed: Number of nodes that ran in this execution.
  • Start time / End time: When the run began and ended.
  • Type: The trigger type—Event-based, Schedule-based, or API-based.
  • Source: The service or schedule that initiated the run (for example, Gmail for an event-based trigger).

Model Runs Tab

The Model runs tab tracks each AI node call separately. If a workflow has three AI nodes, each run produces three entries in this tab. The tab is empty if the workflow has no AI nodes. Each entry shows:
  • Request ID: Unique identifier for the AI node call.
  • Status: In Progress, Waiting, Success, or Failed.
  • Node name: The name of the AI node.
  • Model name: The AI model used.
  • Connection / Deployment name: The connection or deployment linked to the model.
  • Response time: Time taken for the AI node to complete.
  • Start time / End time: Execution window for the AI node.

Viewing Execution Results

Clicking any row in the Workflow Monitor opens a detailed run panel. This panel mirrors the Run dialog on the workflow canvas and shows:
  • Run ID / Request ID: Unique identifier for the run.
  • Response time: Total execution time.
  • Debug icon: Opens the debug log for the run.
  • Input section: The inputs passed to the workflow.
  • Flow log: Node-by-node execution log.
    • Success: Shows the same debug output as in the canvas debug panel.
    • Failure: Shows failure details and error information for the failed node. For AI nodes, you can expand the node to see scanner information.
  • Output section: The final output from the workflow (available for successful runs). You can copy the output and view token usage.

Execution History

Search and Filter Runs

Use the search and filter options to narrow down the list of runs in the Workflow Monitor.
Text search: Use the search box in the top-right corner to find runs by keyword.
Time-based search:
  1. Click the calendar button in the top-right corner.
  2. Select a predefined range (last day, week, month, or year) or set custom dates.
  3. Click Apply.
Custom filters:
  1. Click the filter icon.
  2. Click + Add filter.
  3. Select a Column, Operator (for example, Is Equal To or Is Not Equal To), and Value.
  4. Click Apply.
Combine multiple filters using AND/OR operators for more precise results.
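A custom filter amounts to a column/operator/value predicate, and combining filters applies AND/OR across those predicates. A minimal sketch of that evaluation, assuming hypothetical column names (`status`, `type`) and run records:

```python
# Sketch of column/operator/value filters combined with AND.
# Column names and run records here are hypothetical examples.
def matches(run, column, operator, value):
    if operator == "Is Equal To":
        return run.get(column) == value
    if operator == "Is Not Equal To":
        return run.get(column) != value
    raise ValueError(f"Unsupported operator: {operator}")

runs = [
    {"status": "Failed", "type": "API-based"},
    {"status": "Success", "type": "Event-based"},
    {"status": "Failed", "type": "Event-based"},
]

# (status Is Equal To "Failed") AND (type Is Equal To "Event-based")
filtered = [
    r for r in runs
    if matches(r, "status", "Is Equal To", "Failed")
    and matches(r, "type", "Is Equal To", "Event-based")
]
```

Swapping `and` for `or` in the comprehension corresponds to combining the same two filters with the OR operator instead.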

Trigger History

The Triggers page shows all configured triggers for a deployed workflow and their recent activity. To access it, go to Monitoring > Triggers in the left navigation pane. Each trigger entry shows the following details.
Event-based triggers:
  • Provider, trigger name, description, setup time, and last run.
Schedule-based triggers:
  • Frequency, description, setup time, start and end time, time zone, last run, and next scheduled run.
Inactive triggers show a warning with recommended actions. If a trigger is inactive:
  • Redeploy the workflow to refresh connections.
  • Retest the auth profile in Settings > Security & Control > Authorization profiles.

Audit Logs

Audit logs track all user actions and system events for a workflow, including logins, role changes, and configuration updates. To access them, click Audit logs in the left navigation pane. Each log entry includes:
  • Event name: The action or event that occurred.
  • Category: The module or entity affected.
  • User name: The user who performed the action.
  • Date and time: When the event occurred.
  • Description: Details about the action.
You can filter audit logs by date range, category, event, or user.

Change Log

The change log tracks every edit made to a workflow’s node configuration. It records the timestamp, the user responsible, and a description of each change. To access it, click the History/Log icon in the top-right corner of the workflow canvas. Filter the change log by:
  • Date: Select a date range using the calendar icon.
  • User: View changes made by a specific team member.
  • Node type: Filter by the type of node that was changed.

Debug Workflow Runs

Run and Inspect a Workflow

  1. On the workflow canvas, click the Run flow icon in the top-right corner. The Run dialog opens with Input, Flow log, and Output sections.
  2. Click the Debug icon to open the debug log. The log populates in real time as the workflow executes.
  3. When the run completes:
    • Success: Copy the output using the copy icon. The total execution time is shown.
    • Failure: An error message appears. The output key is empty, and the output is shown in JSON format.
You can stop a running workflow at any time and restart it by clicking Run flow again.

Debug Log

The debug log captures a detailed record of each step in the workflow execution. It shows:
  • Flow input values: The values passed when the run was triggered.
  • Flow-level log: Initiation and progress details at the workflow level.
  • Node-level information: Success or failure status for each node.
  • Node success / failure links: Links to additional details per node outcome.
  • Tool calling details (AI nodes only): Logs of any workflows called during execution, including inputs (JSON), responses, and errors. A separate trace panel shows step-by-step execution of each called workflow.
  • Node metrics for each node:
    • Initiated on: When the node was triggered.
    • Executed on: When the node finished.
    • Total time taken: Duration of node execution.
    • Tokens (AI nodes only): Token usage for the node.
Expand the debug panel to full screen for a cleaner layout. In full-screen mode, all nodes align to the left, and clicking any node shows its input, output, and metrics in a side-by-side view.

Time Metrics for API and AI Nodes

The debug log shows timing breakdowns that help identify performance bottlenecks. API nodes — synchronous mode:
  • Node processing time: Time taken by the node to process and complete.
  • API response time: Time spent waiting for a response from the external API.
API nodes — asynchronous mode:
  • Node paused at: Timestamp when the node paused waiting for the async response.
  • Node resumed at: Timestamp when the node resumed after receiving the response.
  • Total wait time: Duration between pause and resume.
  • Node processing time: Time the platform spent processing the node after it resumed.
AI nodes:
  • Node processing time: Time taken by the node to complete execution.
  • LLM response time: Time taken for the connected AI model to return a response.
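For async API nodes, the metrics relate simply: total wait time is the interval between the pause and resume timestamps. A quick sketch of that arithmetic, using made-up timestamp values:

```python
# Sketch: deriving "Total wait time" for an async API node from the
# pause/resume timestamps shown in the debug log. Values are made up.
from datetime import datetime

paused_at = datetime.fromisoformat("2024-05-01T10:00:05")
resumed_at = datetime.fromisoformat("2024-05-01T10:02:35")

total_wait = resumed_at - paused_at  # duration between pause and resume
print(total_wait.total_seconds())    # 150.0 seconds of external wait
```

A long total wait time with a short node processing time points at the external system, not the platform, as the bottleneck.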

View Parallel and Sequential Execution in Logs

Sequential flows: Nodes appear in the debug panel one after another in execution order. This view helps trace linear flow progression.
Parallel flows: Each parallel branch appears as an indented block under the parent node, labeled A, B, C, and so on. You can expand or collapse each branch. All branches run simultaneously, but the workflow waits for all of them to complete before proceeding to the next step.
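The parallel-branch behavior described above, where all branches run at once and the flow only continues after every branch finishes, is the classic fork-join pattern. A minimal sketch with placeholder branch bodies:

```python
# Fork-join sketch mirroring parallel branches A, B, C: all branches run
# concurrently, and the flow proceeds only after every branch completes.
# Branch bodies are placeholders for real node work.
from concurrent.futures import ThreadPoolExecutor

def branch(name):
    # Placeholder for the work done inside one parallel branch.
    return f"branch {name} done"

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(branch, label) for label in ("A", "B", "C")]
    results = [f.result() for f in futures]  # blocks until all branches finish

# Only after this join does the workflow move to the next step.
```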

Debug Loop Nodes

When a loop completes, the debug panel shows:
  • Inputs received by the Loop node.
  • Per-iteration outputs from child nodes.
  • Aggregated results in the output field.
  • Errors encountered during any iteration.
Click the icon next to the Loop node in the debug panel to open the Loop Runs view:
  • View all iterations with statuses: Running, Completed, or Failed.
  • Failed iterations are highlighted in red.
  • Click any iteration to inspect its step-by-step node execution.
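Conceptually, the Loop Runs view surfaces one record per iteration with a status, plus the aggregated results in the output field. A sketch of that shape, with hypothetical item values and field names:

```python
# Sketch of what the Loop Runs view surfaces: one record per iteration
# with a status, plus aggregated outputs. Items and fields are hypothetical.
def run_loop(items, child_node):
    iterations, outputs = [], []
    for index, item in enumerate(items):
        try:
            result = child_node(item)
            iterations.append({"iteration": index, "status": "Completed"})
            outputs.append(result)
        except Exception as exc:
            # Failed iterations are recorded (highlighted in red in the UI).
            iterations.append(
                {"iteration": index, "status": "Failed", "error": str(exc)}
            )
    return iterations, outputs  # aggregated results land in the output field

def double_or_fail(item):
    if item < 0:
        raise ValueError("negative input")
    return item * 2

iterations, outputs = run_loop([1, -1, 3], double_or_fail)
```

Here the second iteration fails but the loop continues, which corresponds to the Continue on error behavior discussed under Loop Node Issues.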

Troubleshoot Failures

Workflow Run Errors

In the All runs tab, click a failed run to view error details. Each error entry includes:
  • The HTTP status code returned.
  • A message describing the error.
  • Suggestions for resolving it.
Error categories:
  • Authorization: API key authorization failed for the workflow.
  • Data Validation: Input field or API call data did not pass validation.
  • Content Filter: An AI node violated a guardrail threshold.
  • Internal Server Error: A technical issue occurred on the platform server.
  • Network: A connectivity issue caused the request to fail or time out.
Common error scenarios:
  • Mandatory input field is missing: Data Validation (400 Bad Request)
  • Incorrect data type for an input field: Data Validation (400 Bad Request)
  • Empty input value: Data Validation (400 Bad Request)
  • Request payload exceeds size limit: Data Validation (413 Payload Too Large)
  • Server-side failure: Internal Server Error (500 Internal Server Error)
  • Request timeout: Network (408 Request Timeout)
  • Guardrail threshold exceeded at an AI node: Content Filter (403 Forbidden)
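If you process failed runs programmatically, the scenarios above reduce to a status-code-to-category lookup. This sketch simply restates the table; it is not a platform API:

```python
# Sketch: classifying a failed run by its HTTP status code.
# This restates the error-category table; it is not a platform API.
ERROR_CATEGORIES = {
    400: "Data Validation",        # missing, invalid, or empty input fields
    403: "Content Filter",         # guardrail threshold exceeded at an AI node
    408: "Network",                # request timeout
    413: "Data Validation",        # payload exceeds size limit
    500: "Internal Server Error",  # server-side failure
}

def classify(status_code):
    return ERROR_CATEGORIES.get(status_code, "Unknown")
```

Note that one category can cover several status codes: both 400 and 413 fall under Data Validation.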

Timeout Behavior and Run Status

The way timeouts affect runs depends on whether the workflow and its API nodes are configured as synchronous or asynchronous.
  • Sync workflow, sync API node: The request is fulfilled immediately. Status shows In Progress while running.
  • Sync workflow, async API node (API node timeout < sync timeout): The workflow pauses with Waiting status until the external system responds, then resumes to In Progress.
  • Async workflow, sync API node: The workflow executes and sends the response to the callback URL. Status shows In Progress while running.
  • Async workflow, async API node: The workflow pauses with Waiting status and resumes when the external system responds. If the external system retries the same callback URL, it receives a notification that the request was already fulfilled.
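The "already fulfilled" behavior in the async/async case is idempotent callback handling: the first callback for a run resumes it, and any retry of the same callback is acknowledged without being reprocessed. A minimal in-memory sketch of that idea (the platform's actual mechanism is not documented here, and the run ID and payload are made up):

```python
# Minimal sketch of idempotent callback handling for async/async runs:
# the first callback for a run resumes it; a retry of the same callback
# receives an "already fulfilled" notification instead of re-running anything.
fulfilled_runs = set()

def handle_callback(run_id, payload):
    if run_id in fulfilled_runs:
        return {"status": "already fulfilled"}
    fulfilled_runs.add(run_id)
    # ...resume the waiting workflow with `payload` here...
    return {"status": "resumed"}

first = handle_callback("run-42", {"result": "ok"})   # resumes the run
retry = handle_callback("run-42", {"result": "ok"})   # duplicate delivery
```

This is why external systems that retry on timeout do not cause a run to execute twice.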

Loop Node Issues

  • Loop input is missing or empty
    • Likely cause: The input list is undefined or resolves to null or empty.
    • Fix: Set the Loop Input Source to a valid array, such as {{inputs.items}}. Verify in the debug log.
  • Child nodes not executing
    • Likely cause: Nodes are placed outside the loop container.
    • Fix: Drag the nodes into the loop container on the canvas. Only nodes inside the container run per iteration.
  • Loop stops when one item fails
    • Likely cause: Error handling is set to stop on failure.
    • Fix: Change the error handling option to Continue on error to skip failed iterations.
  • Output variable conflicts
    • Likely cause: The output field name is reused elsewhere in the flow.
    • Fix: Use a unique name for the output field to avoid overwriting data.

Trigger Issues

  • Output variable is undefined: Open the Start node and define all required output variables. Save and re-run the workflow.
  • Trigger is inactive or has an authentication error: Redeploy the workflow to refresh connections, retest the auth profile in Settings > Security & Control > Authorization profiles, and verify the trigger is active before running.