AI-Assisted Manual Audit enables supervisors and QA teams to evaluate voice and chat interactions using a hybrid approach that combines AI-generated insights with manual scoring.
The audit workspace provides a unified view of transcripts, interaction metadata, timeline navigation, and AI insights such as summaries, topics, sentiment, resolution status, and metric cards.
Evaluators can review AI outputs, validate or override scores, add comments, and submit final audit results for coaching and compliance. The framework supports inbound and outbound evaluation flows, duration-aware scoring (including exclusions for short interactions), and structured QA forms for consistent assessment.
Key Capabilities:
| Capability | Description |
|---|---|
| AI Summary and Insights | View AI-generated summaries of key moments and outcomes, including topics, sentiment, resolution status, and key interaction metrics. |
| Multi-language Support | Audit interactions across all supported languages. |
| Sentiment and Emotion Analysis | Track sentiment and emotional shifts throughout the interaction. |
| Automated QA (Auto QA) Scoring | AI evaluates interactions against configured metrics to automatically score performance. |
| Audit Logs | Track audit actions and AI execution details, and review the evaluation history. |
| Timeline and Keyword Navigation | Jump to key events, metrics, and important terms in the transcript. |
| Custom Fields | Review business metadata for each interaction in the Conversation Details tab. |
| Manual Evaluation | Override AI results and manually score interactions as Yes, No, or N/A. |
| Speech and Behavior Analysis | Evaluate silence, cross-talk, hold, and transfer behavior. |
| Direction-Aware Evaluation and Forms | View the resolved contact direction (Inbound or Outbound) in the Conversation Details tab. Interactions with no form assigned for the detected direction display the status No form assigned. |
| Duration Handling | Exclude short interactions from Auto QA (manual audit allowed). See the sketch after this table. |
| Hold and Transfer Etiquette | Evaluate agent behavior during hold and transfer events. |
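The duration-handling rule amounts to a pre-check before Auto QA runs. A minimal Python sketch, assuming a hypothetical configurable threshold (the 30-second default below is illustrative, not the product's actual value):

```python
def route_interaction(duration_seconds: float, min_duration_seconds: float = 30.0) -> str:
    """Decide how an interaction is scored based on its duration.

    Short interactions are excluded from Auto QA but remain open
    for manual audit; the 30-second default is illustrative only.
    """
    if duration_seconds < min_duration_seconds:
        return "manual_audit_only"   # excluded from Auto QA
    return "auto_qa_and_manual"      # eligible for both flows

print(route_interaction(12.0))   # manual_audit_only
print(route_interaction(240.0))  # auto_qa_and_manual
```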
Prerequisites
Before using AI-Assisted Manual Audit, confirm the following:
- Auto QA Permission: Access to manage metric types in Quality AI General Settings.
- QA Access: Permission to perform self-assignment and auditing.
- Role-Based Access: Appropriate permissions assigned based on your organizational role.
- GenAI Settings: Enable sentiment, emotions, and topic modeling features as required.
- Metric Settings: Enable Speech, Playbook, Hold Etiquette, and related metrics when needed.
Access AI-Assisted Manual Audit
Navigate to Quality AI > Analyze > Conversation Mining > Interactions > AI-Assisted Manual Audit.
You can open interactions from:
- Conversation Mining: View all conversations within your assigned queues.
- Allocations: View all interactions assigned to you for evaluation.
Audit Workflow
- Select an interaction.
- Review transcript, timeline, and AI insights.
- Select Assign to Me if the interaction is unassigned.
- Complete all required metrics.
- Add comments if needed.
- Select Submit to finalize the audit.
Audit Screen Overview
| Tab | Description |
|---|---|
| Audit | Primary workspace for reviewing transcripts, evaluating performance metrics, and analyzing AI insights. |
| Conversation Details | Interaction metadata, including Channel and Direction, start and end times, agent, queue, and audit scores. |
| Audit Logs | Full audit trail of system and user actions, including GenAI execution records. |
Audit Tab
The Audit tab is the primary workspace for manual review and AI-assisted evaluation.
| Section | Description |
|---|---|
| Transcript and Timeline (Left) | Displays the full interaction with speaker labels, timestamps, keyword highlights, and (for voice) synchronized audio playback with event markers for navigation. |
| Metrics Panel (Center) | Shows configured evaluation metrics with AI-generated scores, adherence status (Yes, No, or N/A), and manual override controls. |
| AI Insights (Overview Panel) (Right) | Provides AI-generated analysis, including summary, topics, intents, sentiment, resolution, and key performance indicators. |
Comments (Bottom Panel): Enables feedback management at both message and metric levels, with direct navigation to relevant points in the transcript.
Audit Evaluation
AI Overview
Displays conversation insights through AI-powered widgets, helping supervisors evaluate key metrics without reading full transcripts. The panel shows high-level evaluation details and scoring:
| Element | Description |
|---|---|
| Kore Evaluation Score | Displays the Auto QA score for the interaction (before manual evaluation). |
| Points Total | Shows the achieved score against the maximum possible score (for example, 4.00 / 100). |
| Configured Topics | Lists taxonomy-based topics detected in the interaction. |
| Generated Topics | Displays AI-discovered topics beyond configured taxonomy. |
| Overall Resolution | Indicates resolution status (for example, Resolved or Not Applicable). |
| Sentiment | Displays overall interaction sentiment. Shows No Analysis Found if sentiment is unavailable. |
Score Summary
Displays key behavioral and linguistic metrics:
| Metric | Description |
|---|---|
| Empathy Score | Measures agent empathy based on conversation analysis. |
| Sentiment Score | Aggregated sentiment score for the interaction. |
| Crutch Word Score | Measures usage of filler words (for example, um, uh). |
If analysis is unavailable, these values display as NA.
Topics and Intents
| Element | Description |
|---|---|
| Topics | Identifies key discussion themes using NLP and taxonomy-based classification. |
| Configured Intents | Intents mapped to predefined taxonomy with click-through navigation. |
| Generated Intents | AI-detected intents with sentiment indicators. |
Resolution and Sentiment
| Element | Description |
|---|---|
| Overall Resolution | Indicates whether the interaction was successfully resolved. |
| Topic Sentiment | Displays sentiment (positive, neutral, negative) for each topic. |
Generated Topics
- Expands topic discovery beyond configured taxonomy.
- Provides topic-level sentiment insights.
- Enhances visibility into conversation patterns.
Transcript and Timeline
The Transcript provides a unified, time-synchronized view of the interaction across chat and voice.
- Speaker-separated conversation (Agent and Customer).
- Timestamped utterances.
- Keyword highlighting and navigation.
- Inline and clickable comments.
- Audio playback with synchronized transcript (voice only).
Sentiment Analysis
Shows the overall sentiment of the customer and agent across three phases of the call.
| Phase | Description |
|---|---|
| Call Opening | From agent transfer to issue identification. |
| Development | From issue identification to resolution discussion. |
| Call Closing | From resolution discussion to call termination. |
Sentiment Ratio
Displays the distribution of sentiment across the interaction as a percentage breakdown (Positive, Neutral, Negative). If no data is available, it shows No Sentiment Ratio Found.
| Sentiment | Meaning |
|---|---|
| Positive | Customer satisfaction, successful resolution. |
| Neutral | Standard interaction without strong emotion. |
| Negative | Dissatisfaction or unresolved issues. |
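To illustrate how such a breakdown can be computed, here is a minimal sketch that tallies per-message sentiment labels; message-level labels as input are an assumption, not the product's internal algorithm:

```python
from collections import Counter

def sentiment_ratio(labels: list[str]) -> dict[str, float]:
    """Return the percentage share of each sentiment label."""
    if not labels:
        return {}  # the UI would show "No Sentiment Ratio Found"
    counts = Counter(labels)
    return {s: round(100 * counts[s] / len(labels), 1)
            for s in ("Positive", "Neutral", "Negative")}

print(sentiment_ratio(["Positive", "Neutral", "Neutral", "Negative", "Positive"]))
# {'Positive': 40.0, 'Neutral': 40.0, 'Negative': 20.0}
```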
Sentiment Patterns
| Pattern | Meaning |
|---|---|
| Negative → Positive | Recovery and successful resolution |
| Positive → Positive | Consistent positive experience |
| Neutral → Positive | Improved experience |
| Positive → Negative | Service degradation |
| Neutral → Negative | Missed expectations |
| Negative → Negative | Persistent dissatisfaction |
Resolution-Aware Scoring prioritizes final sentiment and produces a weighted interaction score.
Scores use a 1-10 scale (5 = Neutral, 7 = Positive) and produce a final classification of Positive, Neutral, or Negative.
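The exact weighting is not documented here, but a resolution-aware score that prioritizes final sentiment could look like the following sketch. The phase weights are illustrative assumptions; the 1-10 scale and cutoffs follow the description above:

```python
def weighted_interaction_score(opening: float, development: float, closing: float) -> str:
    """Combine per-phase sentiment scores (1-10 scale, 5 = Neutral, 7 = Positive)
    into a final classification, weighting the closing phase most heavily.

    The 0.2 / 0.3 / 0.5 weights are illustrative, not the product's values.
    """
    score = 0.2 * opening + 0.3 * development + 0.5 * closing
    if score >= 7:
        return "Positive"
    if score >= 5:
        return "Neutral"
    return "Negative"

# Negative-to-positive recovery: the closing phase dominates the result.
print(weighted_interaction_score(opening=4, development=6, closing=9))  # Positive
```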
Emotion Analysis
Tracks emotional signals for both agent and customer across the timeline.
- Agent Emotions: empathy, patience, happiness, frustration, confusion.
- Customer Emotions: satisfaction, anger, confusion, churn risk, escalation.
Emotions are ranked by duration percentage from highest to lowest. The top three emotions for each participant are displayed with timeline-based visualization and emoticon indicators.
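A ranking of this kind can be sketched as follows, assuming a hypothetical emotion-to-duration mapping as input:

```python
def top_emotions(durations: dict[str, float], n: int = 3) -> list[tuple[str, float]]:
    """Rank emotions by their share of total tracked duration and keep the top n."""
    total = sum(durations.values())
    ranked = sorted(durations.items(), key=lambda kv: kv[1], reverse=True)
    return [(emotion, round(100 * secs / total, 1)) for emotion, secs in ranked[:n]]

agent = {"empathy": 120.0, "patience": 90.0, "happiness": 40.0, "frustration": 10.0}
print(top_emotions(agent))
# [('empathy', 46.2), ('patience', 34.6), ('happiness', 15.4)]
```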
Emotions
| Section | Description |
|---|---|
| Agent Top Emotions | Dominant agent emotions |
| Customer Top Emotions | Dominant customer emotions |
Displays No Emotion Found if unavailable.
Conversation Insights
| Metric | Description |
|---|---|
| Customer Talk Ratio | % of time customer speaks |
| Agent Talk Ratio | % of time agent speaks |
| Silence | % of inactive time |
Agent Speech Insights
| Metric | Description |
|---|---|
| Speaking Rate | Words per minute |
| Crutch Words | Filler word count |
| Empathy Score | Speech-based empathy |
Transcript and Timeline
Displays the full interaction with:
- Speaker-separated messages
- Timestamps
- Keyword highlights
- Audio playback (voice only)
- Clickable navigation markers
By Question (Audit)
Evaluates agent performance against predefined audit questions using AI detection and manual validation.
| Element | Description |
|---|---|
| Question Card | Displays evaluation question. |
| Evaluation Marking Options | - Yes (Adhered): The agent clearly performed the required action with evidence in the conversation log. - No (Not Adhered): The agent failed to perform the required action, or the action was incomplete. - N/A (Not Applicable): The situation didn’t arise, or the requirement doesn’t apply to this interaction. For Dynamic By Question metrics, the system marks N/A when it doesn’t detect the trigger intent in the conversation. |
| Auto QA Result | AI-generated outcome. |
| AI Justification | Explains the AI's scoring decisions with supporting evidence and timestamps. |
| View Chat | Navigates to transcript location. |
| Add Comment | Adds metric-level feedback. |
| Audit Progress Bar | Displays completion percentage of required metrics. |
AI Justification
The AI Justification section uses LLM-generated explanations to clarify Auto QA decisions in By Question metrics. It explains Adhered, Not Adhered, and Not Applicable outcomes and improves transparency by showing why a metric passes, fails, or isn’t evaluated.
What AI Justification displays:
| Scenario | What the Justification Shows |
|---|---|
| Adhered (Yes) | The system confirms the agent met the expected behavior and may include supporting context in the reasoning. |
| Not Adhered (Omission) | The system identifies that the expected behavior wasn’t observed across the interaction. No timestamps are displayed because no specific message or event caused the failure. |
| Not Adhered (Violation Event) | The system identifies a specific message or behavior that caused non-adherence. Timestamps are included only for violation scenarios where a specific conversation event causes the failure (for example, when an agent displays rude behavior). |
| Not Applicable (Dynamic By Question) | The system generates an LLM-based justification explaining why the trigger intent was not detected in the conversation. This displays in the Reasoning section and helps users refine trigger prompts during design time. |

Each justification includes two elements:

| Element | Description |
|---|---|
| Outcome | Displays the evaluation result as either Adhered or Not Adhered. The system doesn’t display Not Applicable as a standard outcome; instead, AI Justification handles it for Dynamic By Question metrics. |
| Reasoning | Provides an expandable explanation of the evaluation decision. It explains why the outcome was assigned, highlights relevant conversation context or missing behavior, and gives evaluation-specific justification rather than a generic summary. |
Dynamic By Question – Not Applicable Justification
For Dynamic By Question metrics, when the system doesn’t detect the trigger intent, it generates an LLM-based justification explaining the absence of relevant conversational evidence and why the intent doesn’t qualify for evaluation. This helps users refine and optimize trigger prompts during design time.
Timestamp Generation Rule
- Timestamps are included: When a specific agent message or event causes a violation (for example, rude or non-compliant behavior).
- Timestamps are not included: For omission scenarios where expected behavior is missing and no specific message or event caused the outcome.
Timestamp Examples
- Valid (Violation-Based): Professionalism metric → Agent used rude language → timestamps displayed for relevant messages.
- Invalid (Omission-Based): Greeting metric → Agent doesn’t greet → no timestamps displayed.
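Interpreted as pseudocode, the rule reduces to checking whether concrete violation events exist. This sketch reflects that interpretation, not the product's implementation:

```python
from dataclasses import dataclass

@dataclass
class Justification:
    outcome: str           # "Adhered" or "Not Adhered"
    reasoning: str
    timestamps: list[str]  # empty for omission-based failures

def build_justification(adhered: bool, reasoning: str,
                        violation_events: list[str]) -> Justification:
    """Attach timestamps only when specific events caused the failure."""
    if adhered:
        return Justification("Adhered", reasoning, [])
    # Violation: a concrete message/event caused the failure -> include timestamps.
    # Omission: expected behavior never occurred -> no timestamps to point at.
    return Justification("Not Adhered", reasoning, violation_events)

rude = build_justification(False, "Agent used rude language.", ["00:02:14", "00:05:37"])
missed_greeting = build_justification(False, "No greeting was observed.", [])
print(rude.timestamps)            # ['00:02:14', '00:05:37']
print(missed_greeting.timestamps) # []
```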
Adherence Filter Status
Filter and sort compliance metrics by adherence status:
| Status | Description |
|---|---|
| Adhered | The response fully meets the compliance requirement. |
| Not Adhered | The response doesn’t meet the compliance requirement. Timestamps are displayed only for violation scenarios where a specific message or event causes the issue. They aren’t displayed for omission-based failures. |
| Not Applicable | For static metrics, the metric isn’t relevant to the interaction context. For Dynamic By Question metrics, the trigger intent isn’t detected, and the system provides an LLM-based justification explaining why. |
Keyword-Based Conversation Analysis
Keyword filters applied on the Conversation Mining page carry over to the Audit screen. The transcript view shows the full conversation with keyword highlighting.
| Feature | Description |
|---|---|
| Timeline Integration | Visual markers show exact keyword positions on the timeline. Select a marker to jump to that point in the transcript. |
| Keyword Highlighting | The system highlights matched keywords inline using up to eight distinct colors. It doesn’t highlight excluded keywords. |
| QA Question Mapping | Keyword matches link to relevant QA questions and their scoring impact in the AI Overview panel, including speaker attribution and count. |
| Context Display | Selecting a keyword expands the surrounding transcript and shows speaker labels, sentiment, and QA impact. |
| Expand or Collapse View | The Keywords Found panel expands when keyword filters are active and collapses when none are applied. |
| Speaker Filtering | Filter keyword hits by agent or customer. |
| Session Preservation | The system saves keyword filters for the session until you clear them. |
| Clear Filter Keywords | Removes all keyword filters (include and exclude) from the transcript. Other active filters remain unchanged. |
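As a rough illustration of the highlighting behavior, the following sketch cycles included keywords through a fixed eight-color palette; the palette names are placeholders, not the product's actual colors:

```python
from itertools import cycle

PALETTE = ["color-1", "color-2", "color-3", "color-4",
           "color-5", "color-6", "color-7", "color-8"]  # placeholder names

def assign_keyword_colors(include_keywords: list[str]) -> dict[str, str]:
    """Map each included keyword to one of up to eight distinct colors.

    Excluded keywords are never highlighted, so they are not passed in.
    """
    return dict(zip(include_keywords, cycle(PALETTE)))

print(assign_keyword_colors(["refund", "cancel", "escalate"]))
# {'refund': 'color-1', 'cancel': 'color-2', 'escalate': 'color-3'}
```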
Omissions
Highlights cases where the agent doesn’t follow configured compliance requirements, such as playbook steps or dialog tasks. This includes omitted playbook steps for playbook metrics and omitted dialog tasks for dialog metrics. This section displays only when the relevant metrics are configured for the interaction.
Violations
Highlights speech metric violations that occurred during the call (for example, Cross Talk, Dead Air, or Speaking Rate Violation). Each violation includes a timestamp so you can navigate directly to that point in the recording.
Violations apply to Voice channel interactions only.
By Playbook
Enables evaluators to assess adherence to configured playbook metrics.
Displays for each playbook metric:
- Configured minimum adherence.
- Observed adherence within the interaction.
- Missing steps not completed during the interaction.
- Expected vs. observed steps in a dropdown format.
To audit Speech and Playbook metrics, enable Audit Speech Metrics and Audit Playbook Metrics under Settings. If not enabled, these metrics show in view-only mode.
Adherence Scoring Logic
| Result | Condition |
|---|
| Adhered | Similarity score meets or exceeds the configured threshold (for example, ≥ 60%). |
| Not Adhered | Similarity score < configured threshold. |
| N/A | Trigger not detected in the interaction. |
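The table above maps directly onto a small decision function; a sketch assuming similarity scores on a 0-100 scale:

```python
def playbook_adherence(similarity: float | None, threshold: float = 60.0) -> str:
    """Classify a playbook step using the configured similarity threshold.

    `similarity` is None when the step's trigger was not detected at all.
    The 60% threshold mirrors the example above; real values are configurable.
    """
    if similarity is None:
        return "N/A"            # trigger not detected in the interaction
    if similarity >= threshold:
        return "Adhered"
    return "Not Adhered"

print(playbook_adherence(72.5))  # Adhered
print(playbook_adherence(41.0))  # Not Adhered
print(playbook_adherence(None))  # N/A
```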
By Value
Tracks value-related metrics during interaction evaluations. Leverages GenAI to analyze agent behavior beyond predefined scripts.
Agent Adherence fields:
- Source System Value - Value obtained from the source system.
- Agent Mentioned Value - Value mentioned by the agent during the conversation.
- AI Justification - Explanation of the AI’s adherence decision.
- GenAI-based adherence - Combines business rule validation with tolerance range analysis.
- Custom script adherence - Includes the agent-mentioned value and business rule justification.
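The tolerance-range idea can be illustrated by comparing the Source System Value with the Agent Mentioned Value; the relative tolerance and the justification wording below are assumptions:

```python
def value_adherence(source_value: float, agent_value: float,
                    tolerance_pct: float = 5.0) -> tuple[bool, str]:
    """Check whether the agent-mentioned value falls within a tolerance
    band around the source system value (the 5% tolerance is illustrative)."""
    allowed = abs(source_value) * tolerance_pct / 100
    adhered = abs(agent_value - source_value) <= allowed
    reason = (f"Agent stated {agent_value}; source system value is {source_value} "
              f"(within ±{tolerance_pct}% tolerance: {adhered}).")
    return adhered, reason

print(value_adherence(source_value=100.0, agent_value=103.0))  # (True, ...)
print(value_adherence(source_value=100.0, agent_value=112.0))  # (False, ...)
```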
By AI Agent
Delivers advanced sentiment analysis through GenAI, enabling Post-Interaction Sentiment Analytics and Key Emotion Moments. Integrates with GenAI Copilot to leverage LLMs for detailed post-interaction insights.
Key capabilities:
- Real-time AI-driven analysis.
- Sentiment and emotion detection.
- Topic modeling and intent recognition.
- Predictive analytics.
AI Justification fields (for GenAI-based evaluations):
- Clear reasoning for the AI’s outcome (Adhered, Not Adhered, or Not Applicable) with observation time.
- Evidence of trigger presence or absence for dynamic adherence types.
- Specific agent behaviors that influenced the metric outcome.
- Timestamps for all relevant conversation segments.
Adherence Filter Status
Filter and sort compliance questions by adherence status:
| Status | Description |
|---|
| Adhered | The response fully meets the compliance requirement. |
| Not Adhered | The response does not meet the compliance requirement. |
| Not Applicable | The question is not relevant to this specific context. |
Conversation Insights
Provides AI-generated overviews of customer interactions without requiring a full transcript review.
| Metric | Description |
|---|---|
| Customer Talk Ratio | Percentage of total call duration the customer is speaking. |
| Agent Talk Ratio | Percentage of total call duration the agent is speaking. |
| Silence Percentage | Call time in which neither party speaks (excludes hold time). |
| Speaking Rate | Agent speech speed in Words Per Minute (WPM). |
Conversation Insights are available for voice interactions only.
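These ratios can be derived from speaker-labeled speech segments. A minimal sketch, assuming (speaker, start, end) segments and a call duration that already excludes hold time:

```python
def conversation_insights(segments: list[tuple[str, float, float]],
                          call_duration: float) -> dict[str, float]:
    """Compute talk ratios and silence % from speaker-labeled segments.

    `segments` holds (speaker, start_sec, end_sec); hold time is assumed
    to be excluded from `call_duration`, per the definition above.
    """
    talk = {"agent": 0.0, "customer": 0.0}
    for speaker, start, end in segments:
        talk[speaker] += end - start
    spoken = sum(talk.values())
    return {
        "customer_talk_ratio": round(100 * talk["customer"] / call_duration, 1),
        "agent_talk_ratio": round(100 * talk["agent"] / call_duration, 1),
        "silence_pct": round(100 * max(call_duration - spoken, 0) / call_duration, 1),
    }

segs = [("agent", 0, 30), ("customer", 35, 55), ("agent", 60, 90)]
print(conversation_insights(segs, call_duration=100.0))
# {'customer_talk_ratio': 20.0, 'agent_talk_ratio': 60.0, 'silence_pct': 20.0}
```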
Agent Speech Insights
Displays agent-specific performance metrics.
| Metric | Description |
|---|---|
| Speaking Rate | Words Per Minute value. |
| Crutch Words | Count of filler words (for example, “um,” “uh,” “like”). |
| Empathy Score | Measurement of empathy in agent utterances. |
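Speaking rate and crutch-word counts follow directly from the agent's transcript text; a sketch with an illustrative filler-word list:

```python
import re

CRUTCH_WORDS = {"um", "uh", "like"}  # illustrative list, not the product's

def speech_insights(agent_text: str, talk_seconds: float) -> dict[str, float]:
    """Compute words-per-minute and a crutch-word count for agent speech."""
    words = re.findall(r"[a-z']+", agent_text.lower())
    crutch = sum(1 for w in words if w in CRUTCH_WORDS)
    wpm = len(words) / (talk_seconds / 60)
    return {"speaking_rate_wpm": round(wpm, 1), "crutch_words": crutch}

print(speech_insights("Um, so, like I said, uh, your refund is processed.", 10.0))
# {'speaking_rate_wpm': 60.0, 'crutch_words': 3}
```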
Comments
Displays all feedback submitted by auditors during the evaluation process. Comments appear both inline in the Transcript and in the Comments tab. Commenter details are displayed based on privacy settings (for example, Hide Auditor Details).
To add a message-level comment:
- Select Assign to Me to begin auditing the conversation.
- Hover over any message in the Transcript section; a Comment icon displays.
- Select the Comment icon and enter a comment Name and Comment text (both required).
- Edit or delete your comment before sending.
- Select Send to publish the comment.
When submitted, the comment displays:
- Inline in the Transcript, linked to the corresponding message.
- In the Comments tab with the comment title, text, and commenter details (based on privacy settings).
Auditors and supervisors can add comments in By Question, By Value, and By AI Agent metrics when the audit is self-assigned.
| Type | Description |
|---|
| Metric Comments | Added to specific evaluation criteria (By Question, By Value, or By AI Agent). Select + Add Comment, enter your comment, and then select Save. |
| Message Comments | Contextual comments added at the message level in the Transcript. Support click-through navigation for quick review. |
Click-Through Navigation
All users — including agents without QA permissions — can select a comment to navigate to the related message. The system centers the commented message in the Transcript window, enabling agents to review feedback from supervisors and QA auditors.
Near-Miss Scenarios
Near-miss evaluations flag responses that closely resemble, but do not fully meet, adherence standards. Applicable only in Deterministic Adherence mode.
How it works:
- The system compares agent responses against predefined similarity thresholds.
- Near-miss cases are flagged for auditor review (see the sketch after this list).
- When you select View, the evaluation is marked Yes (highlighted in green), and the relevant customer response is highlighted.
- By Question metrics are selected by default and cannot be deselected.
- Auditors can only audit the metric types they have selected.
- Supervisor score calculation includes all enabled metric types.
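Under Deterministic Adherence, a near miss can be modeled as a similarity score that lands just below the configured threshold; the 10-point flagging margin below is an assumption, not a documented value:

```python
def classify_response(similarity: float, threshold: float = 60.0,
                      near_miss_margin: float = 10.0) -> str:
    """Flag responses that land just under the adherence threshold.

    The 10-point margin is illustrative; the product's actual
    near-miss band is not documented here.
    """
    if similarity >= threshold:
        return "adhered"
    if similarity >= threshold - near_miss_margin:
        return "near_miss"   # surfaced for auditor review
    return "not_adhered"

print(classify_response(64.0))  # adhered
print(classify_response(55.0))  # near_miss
print(classify_response(30.0))  # not_adhered
```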
Self-Assignment for Audit
QA users (auditors or supervisors) can self-assign unclaimed interactions for auditing. Interactions that are already audited, completed, or assigned to another user are not eligible.
To self-assign an interaction:
- Navigate to the Conversation Mining page.
- Select an interaction that is not audited or assigned.
- Select Assign to Me. A success message confirms the assignment.
- The interaction is marked as Self-Assigned on the Audit Allocations page and becomes unavailable for reassignment.
Only users with QA permission can add feedback comments at any point in the conversation, regardless of the evaluation metrics.
Audit Submission
The Submit button is enabled only when the interaction is assigned to you through Audit Allocations.
Before submitting:
- If By Question, By Value, or By AI Agent metrics are present, select appropriate responses for all required audit questions.
- Ensure the adherence percentage totals 100%.
- Select Submit.
After submission:
- The interaction is marked as Self-Assigned on the Audit Allocations page.
- The audited interaction is unavailable for reassignment.
- A completed and submitted interaction cannot be re-audited.
Agent access to scored interactions is controlled by the Agent Access to Scored Interactions setting:
| Setting | What agents see |
|---|---|
| Only manually audited interactions | Supervisor Audit Score interactions with Date & Time and Queues. |
| Manually audited and Auto QA scored | Kore Evaluation Score (Auto QA) and Supervisor Audited Score interactions. |
Hide Auditor Details for Agent:
- On - Auditor details are anonymized in the audit screen.
- Off - Auditor details are visible.
Only supervisors can view auditor details; agents cannot.
Search
Provides keyword search across the entire transcript to locate specific topics, compliance issues, customer concerns, or training opportunities.
Conversation Details Tab
Provides contextual and audit-related information for the selected interaction, including direction and custom fields, helping supervisors review the interaction context before or after evaluation.
Conversation Details:
- Start Time, Termination Time, End Time, Agent Name, Queue, Channel, Contact Direction, Customer Phone, CSAT, Disposition, Evaluation Form, and Language.
Audit Details:
- Auditor Name, Audited Date, Audit Score, and Kore Evaluation Score.
Identifiers (each includes a copy icon):
| Identifier | Example Value |
|---|---|
| Call ID | NA |
| Session ID | 699d3d5ef39661f7c0aa4b95 |
| Channel User ID | NA |
| Channel Direction | Inbound or Outbound. |
| Call Conversation ID | NA |
| Agent Conversation ID | c-358c3b1-d472-4c2a-89bd-eebcca3dxxxx |
| User ID | u-e481d17b-aba0-5110-9377-05bc36f0xxxx |
You can also use Assign to Me on this tab to assign the interaction to yourself for audit.
Custom Fields
The Custom Fields section displays business-specific metadata ingested with this conversation from Express File or Agent AI integrations. Each field shows as a header-value pair. If no custom fields are available, the section displays No custom fields found.
Custom fields reflect only the fields ingested with this specific conversation record. Fields vary across conversations based on the source integration and the metadata included at ingestion time.
The right panel displays the following additional interaction context:
- Start Time: Timestamp when the conversation started (for example, 10 April 2026, 7:00:00 AM).
- Termination Time: Timestamp when the system terminated the conversation.
- End Time: Timestamp when the conversation ended.
- Agent: Name of the agent who handled the interaction.
- Queue: Queue assigned to the interaction (for example, AWSPublicQueue).
- Channel & Direction: Channel type and contact direction (for example, Voice, Inbound).
- Customer Phone: Customer’s phone number, if available.
- CSAT: Customer satisfaction score, if collected.
- Disposition: Disposition applied to the conversation.
- Evaluation Form: Name of the evaluation form applied (for example, New Points Based).
- Language: Language of the conversation (for example, English).
Audit Logs Tab
Provides a complete audit trail of the interaction evaluation process: it records system and user actions, logs GenAI-based metric executions, tracks status changes, and displays audit progress for transparency and compliance.
Log entries capture:
| Detail | Description |
|---|---|
| Log creation and updates | Records audit creation and updates with user ID, display name, and timestamps. |
| Supervisor and reviewer changes | Tracks who made each change and what was modified. |
| AI model execution data | Logs model version, execution duration, request/response token counts, and enabled GenAI features. |
Each execution log entry includes:
- Date and Time of execution.
- GenAI Feature Name (for example, By Hold Adherence).
- Language.
- Model Name (for example, GPT-4o).
- Integration Type (System or Custom).
- Prompt Name and Type (Default or Custom).
- Request Token Count, Response Token Count, and Response Duration.
- Execution Status (Success or Failure).
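Conceptually, each execution log entry carries fields like the following; this dataclass is a reader's mental model of the entry shape assembled from the list above, not an actual API schema:

```python
from dataclasses import dataclass

@dataclass
class GenAIExecutionLog:
    """Shape of one execution log entry, per the fields listed above."""
    timestamp: str              # date and time of execution
    feature_name: str           # for example, "By Hold Adherence"
    language: str
    model_name: str             # for example, "GPT-4o"
    integration_type: str       # "System" or "Custom"
    prompt_name: str
    prompt_type: str            # "Default" or "Custom"
    request_tokens: int
    response_tokens: int
    response_duration_ms: int
    status: str                 # "Success" or "Failure"

entry = GenAIExecutionLog("2026-04-10T07:02:11Z", "By Hold Adherence", "English",
                          "GPT-4o", "System", "Hold Etiquette", "Default",
                          1432, 287, 2140, "Success")
print(entry.status)  # Success
```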
Payload Visibility: View Request and Response payloads with options to expand/collapse, format or compact, copy to clipboard, or open in full-screen mode for debugging.
Select Assign to Me to assign the interaction to yourself for audit. The system records who assigned it and when, and displays the assigned user in the header and audit history.