Token Data Collection
The platform automatically captures usage data for pre-built models (OpenAI, Azure OpenAI, and Anthropic) regardless of the prompts used. For custom models and Amazon Bedrock models, you must map the Request and Response Token Keys in the custom prompts to enable tracking. Without this mapping, the platform cannot calculate consumption, which may result in unmonitored usage and unexpected costs. To access the dashboard, go to Analytics > Gen AI Analytics > Overview.
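To make the key mapping concrete, here is a minimal sketch of how mapped Request and Response Token Keys let a platform read usage counts out of a custom model's response. The dot-separated key paths, field names, and payload shape below are illustrative assumptions (loosely modeled on an Anthropic-on-Bedrock-style response), not the platform's actual schema.

```python
# Hypothetical sketch: extracting token usage from a custom model response
# via configured key paths. All names below are assumptions for illustration.

def get_by_path(payload: dict, path: str):
    """Walk a dot-separated key path through a nested response payload."""
    value = payload
    for key in path.split("."):
        value = value[key]
    return value

# Example token-key mapping as it might appear in a custom prompt config
token_keys = {
    "request_tokens": "usage.input_tokens",    # assumed request-side key
    "response_tokens": "usage.output_tokens",  # assumed response-side key
}

# Assumed shape of a custom model / Bedrock response payload
response = {"usage": {"input_tokens": 412, "output_tokens": 128}}

request_tokens = get_by_path(response, token_keys["request_tokens"])
response_tokens = get_by_path(response, token_keys["response_tokens"])
total_tokens = request_tokens + response_tokens
```

Without such a mapping, the platform has no way to locate the token counts in the response body, which is why consumption for custom and Bedrock models goes untracked.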
Filters
Apply filters at the top of the page by module, model, feature, and date range to focus on specific interactions and drill down into targeted performance metrics.
Metrics
The following key metrics are displayed:
- Total Tokens Used: The total number of tokens consumed in requests and responses during the selected period.
- Total Requests: The total number of LLM calls made during the selected period.
- Success Rate: The percentage of LLM requests that succeeded during the selected period.
- Average Latency: The average response time for LLM requests.
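The four metrics above can be sketched as simple aggregations over per-request log records. The record fields (`tokens`, `success`, `latency_ms`) are assumptions chosen for illustration, not the platform's internal schema.

```python
# Illustrative sketch: deriving the dashboard metrics from assumed
# per-request log records within a selected period.

records = [
    {"tokens": 540, "success": True, "latency_ms": 820},
    {"tokens": 300, "success": True, "latency_ms": 640},
    {"tokens": 150, "success": False, "latency_ms": 1210},
]

total_tokens = sum(r["tokens"] for r in records)      # request + response tokens
total_requests = len(records)                          # number of LLM calls
success_rate = 100 * sum(r["success"] for r in records) / total_requests
avg_latency = sum(r["latency_ms"] for r in records) / total_requests
```

Filtering by module, model, feature, or date range simply restricts which records enter these aggregations before they are computed.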