Task Execution Logs provides in-depth insights into task execution data to help you assess your AI Agent’s performance in executing tasks. To view the Task Execution Logs dashboard:
  1. Click Analytics on the left navigation pane. The Analytics panel opens with the list of reports.
  2. Click Task Execution Logs under the Automation section of the Analytics panel.
  3. Select appropriate filters and click Apply.
The Task Execution Logs page shows information in the following sections:
  • Failed Task: Indicates the number of unsuccessful tasks.
  • API Calls: Displays Service node and Webhook node execution data and the number of failed services during AI Agent interactions.
  • Script Execution: Displays analytics data for script node executions and the number of failed scripts during AI Agent interactions.
  • Debug Log: Custom debug logs including user conversations from all channels for analyzing your AI Agent.
  • Pinned: Pinned Task Execution Log records for easy access and viewing.

Task Execution Logs Fields

Failed Task

When user utterances are successfully mapped to an intent but the task cannot be completed, those utterances appear under this tab. Group them by task and failure type to analyze and resolve issues.

Failed Task — Type of Issues

The following types of issues can occur during a failed task:
  • Task aborted by user
  • Alternate task initiated
  • Chat interface refreshed
  • Human-agent transfer
  • Authorization attempt failure — Max attempts reached
  • Incorrect entity failure — Max attempts reached
  • Script failure
  • Service failure
  • Inactivity or External Events (from version 8.0): the conversation session and the in-progress task are closed due to inactivity or external events
  • Utterances: Utterances mapped to an intent but where the task failed. Grouped by similarity by default. Click the Utterances header to disable Group by Utterances.
  • Task Name: The task identified for the user utterance. Click the Task Name header to enable Group by Task.
  • Failure Point: Nodes or points in the task execution journey where the failure occurred. Click an entry to view the complete conversation with markers for intent detection and failure/drop-out points.
  • Type of Issue: The reason for failure in the case of Task Failure records.
  • UserID: The end user’s ID. View metrics by Kore User ID or Channel User ID. Channel-specific IDs are shown only for users who interacted during the selected period.
  • Language: The language in which the conversation occurred. Defaults to all enabled languages.
  • Date & Time: Date and time of the chat. Sort by Newest to Oldest or Oldest to Newest.

Performance

Developers can monitor all scripts and API services across the AI Agent’s tasks from a single window. The Performance tab displays information in two sections: API Calls and Script Execution.

API Calls

The API Calls section provides information on API execution performance based on the following metrics:
  • Node name, type, and task name
  • Success %
  • Total number of calls with 200 responses and with non-200 responses (view the actual response code from the details page opened by clicking the service row)
  • Average response times
  • Node Name: The name of the service, script, or Webhook within the task. Click the Node Name header to enable Group by NodeName.
  • Type: Indicates whether it is a script, service, or Webhook. Webhook details are included from version 7.0.
  • Task Name: The task identified for the user utterance. Click the Task Name header to enable Group by Task.
  • Success%: The percentage of service or script runs that executed successfully.
  • 2XX Responses: The percentage of service or script runs that returned a 2xx response.
  • Non 2XX Responses: The percentage of service or script runs that returned a non-2xx response.
  • Avg Response Time: The average response time of the script or service. Sortable from High to Low or Low to High under the Performance tab.
  • Status Code: Filter service executions by status code via More Filters > Status Code. Success code: 200. Non-success codes: 304, 400, 401, 403, 404, 408, 409, 500, 502, 503, and 504.

Script Execution

The Script Execution section provides information on script execution performance based on the following metrics:
  • Node name and task name
  • Success %
  • Average response times
  • Alerts if a script or service is failing consecutively
  • Node Name: The name of the service, script, or Webhook within the task. Click the Node Name header to enable Group by NodeName.
  • Task Name: The task identified for the user utterance. Click the Task Name header to enable Group by Task.
  • Success%: The percentage of service or script runs that executed successfully.
  • Avg Response Time: The average response time of the script or service. Sortable from High to Low or Low to High under the Performance tab.

Debug Log

Custom debug statements entered in the Script node using koreDebugger.log("debug statement") are displayed on this tab. Debug statements must be in string format. Logs include user conversations from all channels and are useful for AI Agent analysis, especially in case of failures. The details include:
  • The actual statement defined during AI Agent definition
  • Date and time of logging
  • Channel
  • User ID (along with channel-specific ID)
  • Language of interaction
  • Task name, if available
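Because debug statements must be strings, non-string values such as objects should be serialized before logging. The sketch below is a minimal, hedged example: the small stub stands in for the platform-provided koreDebugger object (available only inside a Script node), and the node name prefix and cart object are illustrative assumptions, not platform requirements.

```javascript
// Stub for running this sketch outside the platform; inside a Script
// node, koreDebugger is provided and this stub would be omitted.
const koreDebugger = {
  logs: [],
  log(statement) { this.logs.push(statement); }
};

// Debug statements must be strings: serialize objects before logging.
const cart = { items: 3, total: 42.5 }; // hypothetical task data
koreDebugger.log("cart state: " + JSON.stringify(cart));

// Optional convention (assumption): prefixing the statement with the
// node name makes the entry easier to trace on the Debug Log tab.
koreDebugger.log("[buildDataForCarousel] entries built: " + String(cart.items));
```

Logging `JSON.stringify(cart)` rather than `cart` itself avoids the unhelpful "[object Object]" output that implicit string conversion would produce.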
To view chat history details associated with a debug session:
  1. Click a logged record.
  2. The corresponding window shows Details and Chat History tabs.
  3. Under Details, find the task name, channel, language, and flow.
  4. Click the Chat History tab to find the chat transcript where the log is recorded.
    • If the debug log is generated from an AI Agent message, you are navigated to that specific message in the transcript.
    • If the debug log is not part of the AI Agent message, you are navigated to the latest message added before the debug statement.
For universal AI Agents, debug statements from the universal and linked AI Agents are included in the logs. Debug logs also include error messages related to BotKit, such as when the platform could not reach the BotKit or when the BotKit did not acknowledge the message, with details like <endpoint>, <error code>, and <response time>.
  • Log: Description of the debug log. For example: getIndex is not defined.
  • Task Name: The task identified for the user utterance. Click the Task Name header to enable Group by Task.
  • Debug Point: The point or node in the conversation where the error is identified. For example: buildDataForCarousel.
  • Channel: The specific channel where the conversation occurred.
  • Language: The language in which the conversation occurred. Defaults to all enabled languages.
  • UserID: The end user’s ID. View metrics by Kore User ID or Channel User ID. Channel-specific IDs are shown only for users who interacted during the selected period.
  • Date & Time: Date and time of the chat. Sort by Newest to Oldest or Oldest to Newest.

Pinned

Records pinned from Failed Tasks, API Calls, and Script Execution are displayed here. The fields available in the Pinned section pertain to Task Execution Logs.

Storage Limitations

The platform imposes restrictions on the number of log statements retained per AI Agent:
  • Only the latest 700 statements per AI Agent are stored.
  • Statements older than 7 days are removed.

Task Execution Logs Analysis

Features

  • Filter information by user utterances, intent, user ID, date period, channel, language, and custom tags. See Filter Criteria for more details.
  • Complete meta-information is stored for later analysis, including the original user utterance, channel, extracted entities, custom tags, and detailed Task Execution Logs.
  • Pin important records for later tracking. Pinned records appear on the Pinned tab.
  • Sort by Date and Time (Oldest to Newest, Newest to Oldest). Export insights data as a CSV file.

Fields Matrix

A fields matrix shows field availability across the Task Execution Log tabs (Failed Tasks, API Calls, Script Execution, Pinned, and Debug Log) for the following fields: Utterances, Intent, Traits, UserID, Language, Date & Time, Prompt Type, Task Name, Node Name, Failure Point, Type of Issue, Type, Total Runs, Success%, 2XX Responses, Non 2XX Responses, Avg Response Time, Log, Debug Point, Channel, and GenAI Features*.
* GenAI features include Answer Generation, Zero-Shot Model, Rephrase User Query, and Rephrase Responses.