The LLM and Gen AI Usage Logs provide detailed information about requests sent to LLMs and the corresponding responses, enabling AI Agent designers to track usage, compare performance across LLM features, and refine prompts and settings. The log analysis focuses on the following key areas:
  • Request-response dynamics: Analysis of request-response dynamics between user prompts and model responses offers insights into prompt and model performance in specific scenarios.
  • Payload details: Analyzing the payload data exchanged during interactions allows for effective monitoring and optimization of advanced AI functionalities.
To access the logs, go to Analytics > Gen AI Analytics > Usage Logs. Click any record to view the log summary and payload details.

Field Descriptions

You can sort the data by either Newest to Oldest or Oldest to Newest. Click a record to view the Summary and Payload Details.

Summary

Overview

  • Description: Extra details about the node and task name linked to the feature.
  • Date & Time: The timestamp of the call made to the LLM.
  • Language: The language in which the conversation occurred. For multi-lingual AI Agents, you can select specific languages to filter conversations. The page shows all enabled languages by default.
  • Channel: The communication channel or platform used for the interaction with the LLM.
  • Session ID: Identifier for the session.
  • Status: Status of the call made to the LLM: “Success” or “Failure”.

User Details

  • User: The AI Agent designer or end user who made a call to the LLM.
  • User ID: The distinct identifier of the end user engaged in the conversation. You can view metrics based on the Kore User ID or Channel User ID. Channel-specific IDs are shown only for users who interacted with the AI Agent during the selected period.

Generative AI

  • Feature: The platform feature making calls to the LLM.
  • Model: The Large Language Model to which the request was made.
  • Prompt Name: The prompt used with the model at the node/task level. Pre-built prompts are named “Default”.
  • Request Tokens: The individual parts of the input text (words, punctuation) given to the model to create a response. These are the basis for the model’s understanding and output generation.
  • Integration Type: The type of integration used (for example, System or Custom).
  • Response Duration: The time taken by the LLM to generate the response.
  • Response Tokens: The pieces of generated output (words, punctuation) that make up the model’s response.
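For example, exported log records could be aggregated per model to compare token consumption and response latency across LLM features. The record structure and field names below are illustrative assumptions, not the platform's export schema:

```python
from collections import defaultdict

# Hypothetical exported log records; field names mirror the Generative AI
# summary fields above but are assumptions, not the platform's export format.
records = [
    {"model": "gpt-4", "request_tokens": 512, "response_tokens": 128, "response_duration_ms": 900},
    {"model": "gpt-4", "request_tokens": 256, "response_tokens": 64, "response_duration_ms": 700},
    {"model": "claude-3", "request_tokens": 300, "response_tokens": 150, "response_duration_ms": 1100},
]

# Accumulate per-model totals for tokens, calls, and duration.
totals = defaultdict(lambda: {"request_tokens": 0, "response_tokens": 0, "calls": 0, "duration_ms": 0})
for rec in records:
    agg = totals[rec["model"]]
    agg["request_tokens"] += rec["request_tokens"]
    agg["response_tokens"] += rec["response_tokens"]
    agg["duration_ms"] += rec["response_duration_ms"]
    agg["calls"] += 1

# Report average response duration per model alongside token totals.
for model, agg in totals.items():
    avg_ms = agg["duration_ms"] / agg["calls"]
    print(f"{model}: {agg['request_tokens']} in / {agg['response_tokens']} out, avg {avg_ms:.0f} ms")
```

A roll-up like this makes it easy to spot a prompt or model that consumes disproportionate tokens or responds slowly.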

Guardrails

  • Configured Guardrails: The guardrails enabled for the call: Restrict Toxicity, Restrict Topics, Detect Prompt Injections, or Filter Response.
  • Outcome: Indicates whether the guardrail was Detected, Not Detected, or Not Applicable.
  • Risk Score: The calculated risk score on a scale of 0 to 1. If no violation is detected, the score is 0.
Guardrail details are displayed only if they are configured.
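When reviewing guardrail outcomes in bulk, detections could be triaged by risk score. The rows and the 0.5 threshold below are illustrative assumptions; the platform does not define a review threshold:

```python
# Hypothetical log rows; each carries the guardrail name, its Outcome,
# and its Risk Score (0 to 1) as described above.
rows = [
    {"guardrail": "Restrict Toxicity", "outcome": "Detected", "risk_score": 0.82},
    {"guardrail": "Detect Prompt Injections", "outcome": "Not Detected", "risk_score": 0.0},
    {"guardrail": "Restrict Topics", "outcome": "Detected", "risk_score": 0.35},
]

THRESHOLD = 0.5  # assumed review threshold, not a platform setting

# Keep only detections whose risk score warrants manual review.
flagged = [r for r in rows if r["outcome"] == "Detected" and r["risk_score"] >= THRESHOLD]
```

Low-score detections can still be logged for trend analysis while only high-score ones are escalated.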

Payload Details

  • Request Payload: The user’s input or question sent to the LLM, along with any extra details needed for the model to give a good response.
  • Response Payload: The LLM’s answer to the input it receives, in text format with additional information required to present the response.
  • Tokens Used: The pieces of generated output showing the model’s response, making up the structured parts of the LLM’s text.
  • Stream: If stream = true, the response is delivered incrementally, token by token, in real time.
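A payload record could be inspected programmatically, for example to check whether streaming was used and how many tokens the call consumed. The structure below is a sketch; the actual payload shape depends on the model provider:

```python
# Hypothetical payload snapshot; keys are assumptions modeled on the
# Payload Details fields above, not the platform's actual schema.
payload_details = {
    "request_payload": {
        "messages": [{"role": "user", "content": "Reset my password"}],
        "stream": True,
    },
    "response_payload": {"content": "Sure, I can help with that."},
    "tokens_used": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
}

# If stream = true, the response was delivered incrementally, token by token.
is_streamed = payload_details["request_payload"].get("stream", False)

# Total tokens consumed by this call.
total_tokens = payload_details["tokens_used"]["total_tokens"]
```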

Filter Criteria

You can view the LLM and Gen AI logs data based on specific filter criteria. Learn more.