AI-Assisted Manual Audit lets supervisors and QA teams evaluate voice and chat interactions using AI insights combined with manual review. Use it to assess performance, enforce compliance, and deliver targeted coaching. Key capabilities:
  • Conversation Insights – AI-generated summaries of key moments and outcomes.
  • Multi-language Support – Audit interactions across all supported languages.
  • Topics & Intents – Identify customer purpose and discussion themes.
  • Emotion Analysis – Track sentiment and emotional shifts throughout an interaction.
  • Automated QA – Score interactions against configured metrics.
  • Audit Logs – Review detailed evaluation history.

Prerequisites

Before using AI-Assisted Manual Audit, confirm the following:
  • AutoQA Permission – Access to manage metric types in Quality AI General Settings.
  • QA Access – Permission to perform self-assignment and auditing.
  • Role-Based Access – Appropriate permissions assigned based on your organizational role.

Access AI-Assisted Manual Audit

Navigate to Quality AI > ANALYZE > Conversation Mining > Interactions > AI-Assisted Manual Audit. You can open interactions for audit from two places:
  • Conversation Mining – View all conversations within your assigned queues.
  • Allocations – View all interactions assigned to you for evaluation.

Audit Screen Overview

The audit screen has three tabs:
  • Audit – Main workspace for evaluating transcripts, metrics, and AI insights.
  • Conversation Details – Interaction metadata including start/end time, agent, queue, and audit scores.
  • Audit Logs – Full audit trail of system and user actions, including GenAI execution records.

Audit Tab

The Audit tab is divided into three sections:
  • Transcript – Full conversation dialogue for review and verification.
  • Audit Evaluation – Configured metrics and a high-level summary of AI analysis.
  • AI Overview – AI-powered widgets for assessing performance, compliance, and sentiment.

AI Overview

Displays conversation insights through AI-powered widgets so supervisors can evaluate key metrics without reading full transcripts.

Topics

Lists all subjects and themes discussed in the conversation (for example, Customer Support Process).
  • Identifies topics across multiple languages.
  • Extracts topics using AI-based natural language processing.
  • Supports category-based performance analysis.
  • Identifies training needs based on topic patterns.

Intents

Captures the customer’s purpose and desired outcome.
  • Analyzes conversation context, phrasing, and patterns.
  • Each intent appears as an individual chip.
  • Supports measuring intent resolution success rates.

Configured Topics and Resolution

Provides visibility into detected topics, sentiment, and resolution status for more effective QA and coaching.
  • Configured Intents – Intents detected based on your taxonomy. Click to jump to the detection point in the transcript.
  • Generated Intents – Intents detected by AI. Each includes a color indicator for sentiment (positive, negative, or neutral).
  • Overall Resolution – The resolution status of the conversation.
  • Topic Sentiment – Sentiment detected for each topic. Click an L3 topic to see how its resolution was addressed in the transcript.

Generated Topics

Uses taxonomy-based topic discovery to expand analytics on the Audit screen.
  • Supports topic discovery and topic-level sentiment detection.
  • Displays positive, negative, or neutral sentiment for each discovered topic.

Transcript

The Transcript presents a unified timeline of the full interaction, showing both agent and customer behavior, events, and emotions. It supports real-time audio navigation with transcript details.

Sentiment Analysis

Shows the overall sentiment of the customer and agent across three phases of the call.
  • Call Opening – From agent transfer to issue identification.
  • Development – From issue identification to resolution discussion.
  • Call Closing – From resolution discussion to call termination.

Sentiment Ratio

Shows how sentiment was distributed across the interaction as a percentage breakdown.
  • Positive – Customer satisfaction, successful resolution.
  • Neutral – Standard interaction without strong emotion.
  • Negative – Dissatisfaction or unresolved issues.
Sentiment Patterns:
  • Pattern A: Negative → Positive – Customer satisfaction recovery.
  • Pattern B: Positive → Positive – Consistent positive experience.
  • Pattern C: Neutral → Positive – Effective positive experience creation.
  • Pattern D: Positive → Negative – Service degradation requiring attention.
  • Pattern E: Neutral → Negative – Missed opportunities or failures.
  • Pattern F: Negative → Negative – Persistent dissatisfaction requiring escalation.
  • Pattern G: Positive → Neutral – Adequate service delivery.
  • Pattern H: Negative → Neutral – Partial improvement opportunity.
  • Pattern I: Neutral → Neutral – Steady interaction without emotional impact.
Resolution-Aware Scoring uses a weighted algorithm that prioritizes final customer sentiment, applying exponential weighting to recent messages. Scores use a 1–10 scale (5 = Neutral, 7 = Positive) and produce a final classification of Positive, Neutral, or Negative.
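A minimal sketch of how the pattern lookup and resolution-aware scoring could fit together, assuming exponential weights that favor the most recent messages and assuming label cut-offs implied by the documented anchors (scores below 5 read as Negative, 5 up to 7 as Neutral, 7 and above as Positive). The per-message scores and decay constant are illustrative, not the product's actual algorithm.

```python
import math

def classify(score: float) -> str:
    """Map a 1-10 sentiment score to a label (cut-offs assumed from 5 = Neutral, 7 = Positive)."""
    if score >= 7:
        return "Positive"
    if score >= 5:
        return "Neutral"
    return "Negative"

def resolution_aware_score(scores: list[float], decay: float = 0.5) -> float:
    """Exponentially weight messages so the most recent (closing) sentiment dominates."""
    n = len(scores)
    weights = [math.exp(-decay * (n - 1 - i)) for i in range(n)]  # newest message gets weight 1.0
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# (opening, closing) sentiment -> pattern letter, mirroring the Sentiment Patterns list above.
PATTERNS = {
    ("Negative", "Positive"): "A", ("Positive", "Positive"): "B", ("Neutral", "Positive"): "C",
    ("Positive", "Negative"): "D", ("Neutral", "Negative"): "E", ("Negative", "Negative"): "F",
    ("Positive", "Neutral"): "G", ("Negative", "Neutral"): "H", ("Neutral", "Neutral"): "I",
}

# Hypothetical per-message sentiment scores (oldest -> newest) for one interaction.
message_scores = [3, 4, 5, 7, 8, 9]

final_score = resolution_aware_score(message_scores)
opening = classify(message_scores[0])   # sentiment at the call opening
closing = classify(final_score)         # resolution-aware final classification
print(f"Final score {final_score:.1f} -> {closing}, pattern {PATTERNS[(opening, closing)]}")
```

With these sample values the weighted score lands near 7.6, so the interaction classifies as Positive and matches Pattern A (Negative → Positive).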

Emotions

Ranks agent and customer emotions across the interaction timeline, with emotional states such as anger, frustration, and satisfaction detected throughout. Agent emotions tracked:
  • Patience – Handling difficult situations calmly.
  • Happy – Positive attitude and engagement.
  • Empathy – Understanding and compassionate responses.
  • Confusion – Uncertainty about processes or information.
  • Fear – Anxiety or hesitation in responses.
  • Anger – Frustration or irritation (coaching opportunity).
Customer emotions tracked:
  • Happy – Satisfaction and positive experience.
  • Anger – Frustration requiring attention.
  • Confusion – Need for clarification.
  • Sadness – Disappointment requiring empathy.
  • Fear – Anxiety about products or services.
  • Escalation – Rising frustration levels.
  • Churn Risk – Departure probability indicators.
Emotions are ranked from highest to lowest by duration percentage. The top three emotions are shown for each party, with timeline visualization and emoticon indicators.
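As a rough sketch of how that ranking could be derived from per-emotion durations, assuming each emotion's total duration is known (the labels and seconds below are hypothetical placeholders, not product data):

```python
# Hypothetical per-emotion durations in seconds for one party (agent or customer).
agent_emotion_seconds = {"Patience": 120, "Empathy": 95, "Confusion": 30, "Anger": 10}

total = sum(agent_emotion_seconds.values())  # 255 seconds of detected emotion

# Convert each emotion to a percentage of detected-emotion time,
# rank from highest to lowest, and keep the top three for display.
ranked = sorted(
    ((emotion, seconds / total * 100) for emotion, seconds in agent_emotion_seconds.items()),
    key=lambda item: item[1],
    reverse=True,
)
for emotion, pct in ranked[:3]:
    print(f"{emotion}: {pct:.0f}%")   # Patience: 47%, Empathy: 37%, Confusion: 12%
```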

Audit Evaluation

By Question

Evaluates agent performance on specific inquiry types using configurable evaluation forms. Each criterion is scored individually, supported by AI-powered quality assurance for consistency and precision. The Audit Progress Bar at the top right of the panel shows the completion percentage based on answered audit questions.
Evaluation Marking (Yes / No / N/A):
  • Yes – The agent clearly performed the required action with evidence in the conversation log.
  • No – The agent failed to perform the required action, or the action was incomplete.
  • N/A – The situation did not arise, or the requirement was not applicable in this interaction.

Keyword-Based Conversation Analysis

Keyword filters applied on the Conversation Mining page carry over to the Audit screen. The transcript view shows the full conversation with keyword highlighting.
  • Timeline Integration – Visual markers show exact keyword positions on the timeline. Click a marker to jump to that point in the transcript.
  • Keyword Highlighting – Matched keywords are highlighted inline in the transcript, color-coded with up to 8 distinct colors. Excluded keywords are not highlighted.
  • QA Question Mapping – Keyword matches are linked to relevant QA questions and scoring impact in the AI Overview panel, with speaker attribution and count.
  • Context Display – Selecting a keyword expands the surrounding transcript and shows speaker labels, sentiment, and QA impact.
  • Expand/Collapse View – The Keywords Found panel expands when keyword filters are active and collapses when none are applied.
  • Speaker Filtering – Navigate by keyword hits filtered to Agent only or Customer only.
  • Session Preservation – Keyword filters are saved in the user session until manually cleared.
  • Clear Filter Keywords – Removes all keyword filters (include and exclude) from the transcript. Other filters (date, sentiment, QA score) remain active.
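As a simplified illustration of the highlighting and speaker filtering described above, keyword matches could be located and assigned one of up to eight colors roughly as follows. The keywords, transcript turns, and color palette are invented for this example; the product's actual matching and rendering rules may differ.

```python
from itertools import cycle

# Hypothetical include-filter keywords and a transcript of (speaker, text) turns.
keywords = ["refund", "cancellation", "escalate"]
transcript = [
    ("Customer", "I would like a refund for my last order."),
    ("Agent", "I can process the refund and review the cancellation policy."),
]

# Up to 8 distinct highlight colors; assigning them to keywords in order is an assumption.
palette = cycle(["#e57373", "#64b5f6", "#81c784", "#ffd54f",
                 "#ba68c8", "#4db6ac", "#ff8a65", "#a1887f"])
keyword_colors = {kw: next(palette) for kw in keywords}

# Record every match with its speaker so hits can be filtered to Agent only or Customer only.
matches = []
for turn_index, (speaker, text) in enumerate(transcript):
    lowered = text.lower()
    for kw in keywords:
        if kw in lowered:
            matches.append({"turn": turn_index, "speaker": speaker,
                            "keyword": kw, "color": keyword_colors[kw]})

agent_only = [m for m in matches if m["speaker"] == "Agent"]
print(len(matches), "matches,", len(agent_only), "from the agent")
```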

Omissions

Highlights instances where the agent failed to follow configured compliance elements, such as playbook steps or dialog tasks.
  • Omitted playbook steps (for playbook metrics).
  • Omitted dialog tasks (for dialog metrics).
  • Only shown when relevant metrics are configured for the interaction.

Violations

Highlights speech metric violations that occurred during the call (for example, Cross Talk, Dead Air, or Speaking Rate Violation). Each violation includes a timestamp so you can navigate directly to that point in the recording.

By Playbook

Enables evaluators to assess adherence to configured playbook metrics. Displays for each playbook metric:
  • Configured minimum adherence.
  • Observed adherence within the interaction.
  • Missing steps not completed during the interaction.
  • Expected vs. observed steps in a dropdown format.
To audit Speech and Playbook metrics, enable Audit Speech Metrics and Audit Playbook Metrics under Settings. If not enabled, these metrics appear in view-only mode.
Playbook Adherence Scoring Logic
  • Adhered – Similarity score ≥ configured threshold (for example, ≥ 60%).
  • Not Adhered – Similarity score < configured threshold.
  • N/A – Trigger not detected in the interaction.
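A minimal sketch of this scoring logic, assuming a similarity score between 0 and 1, a configurable threshold defaulting to 0.6 to mirror the 60% example, and a boolean trigger flag:

```python
def playbook_step_result(trigger_detected: bool, similarity: float, threshold: float = 0.6) -> str:
    """Classify a single playbook step using the adherence scoring logic described above."""
    if not trigger_detected:
        return "N/A"            # trigger never occurred in the interaction
    if similarity >= threshold:
        return "Adhered"        # similarity score meets or exceeds the configured threshold
    return "Not Adhered"        # trigger occurred but the agent's wording fell short

print(playbook_step_result(True, 0.72))   # Adhered
print(playbook_step_result(True, 0.41))   # Not Adhered
print(playbook_step_result(False, 0.90))  # N/A
```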

By Value

Tracks value-related metrics during interaction evaluations. Leverages GenAI to analyze agent behavior beyond predefined scripts. Agent Adherence fields:
  • Source System Value – Value obtained from the source system.
  • Agent Mentioned Value – Value mentioned by the agent during the conversation.
  • AI Justification – Explanation of the AI’s adherence decision.
  • GenAI-based adherence – Combines business rule validation with tolerance range analysis.
  • Custom script adherence – Includes the agent-mentioned value and business rule justification.
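The GenAI-based adherence item above combines business rule validation with tolerance range analysis; a hypothetical sketch of such a tolerance check might look like the following. The field names, the 5% tolerance, and the sample values are assumptions for illustration only.

```python
def value_adherence(source_value: float, agent_value: float | None, tolerance_pct: float = 5.0) -> dict:
    """Compare the agent-mentioned value with the source-system value within a tolerance band."""
    if agent_value is None:
        return {"adhered": False, "ai_justification": "Agent did not mention the expected value."}
    deviation_pct = abs(agent_value - source_value) / source_value * 100
    return {
        "source_system_value": source_value,
        "agent_mentioned_value": agent_value,
        "adhered": deviation_pct <= tolerance_pct,
        "ai_justification": (
            f"Agent quoted {agent_value}, which is {deviation_pct:.1f}% away from the "
            f"source value {source_value} (tolerance {tolerance_pct}%)."
        ),
    }

print(value_adherence(source_value=49.99, agent_value=50.00))  # within tolerance -> adhered
```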

By AI Agent

Delivers advanced sentiment analysis through GenAI, enabling Post-Interaction Sentiment Analytics and Key Emotion Moments. Integrates with GenAI Copilot to leverage LLMs for detailed post-interaction insights. Key capabilities:
  • Real-time AI-driven analysis.
  • Sentiment and emotion detection.
  • Topic modeling and intent recognition.
  • Predictive analytics.
AI Justification fields (for GenAI-based evaluations):
  • Clear reasoning for the AI’s Yes/No outcome (adhered/not adhered/not applicable) with observation time.
  • Evidence of trigger presence or absence for dynamic adherence types.
  • Specific agent behaviors that influenced the metric outcome.
  • Timestamps for all relevant conversation segments.
Adherence Filter Status
Filter and sort compliance questions by adherence status:
  • Adhered – The response fully meets the compliance requirement.
  • Not Adhered – The response does not meet the compliance requirement.
  • Not Applicable – The question is not relevant to this specific context.

Conversation Insights

Provides AI-generated overviews of customer interactions without requiring a full transcript review.
  • Customer Talk Ratio – Percentage of total call duration the customer is speaking.
  • Agent Talk Ratio – Percentage of total call duration the agent is speaking.
  • Silence Percentage – Call time in which neither party speaks (excludes hold time).
  • Speaking Rate – Agent speech speed in Words Per Minute (WPM).
Conversation Insights are available for voice interactions only.
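As a back-of-the-envelope illustration, these metrics could be derived from diarized speech segments along these lines (the segment times and word counts are made up; the product computes them internally):

```python
# Hypothetical diarized segments: (speaker, start_sec, end_sec, words_spoken).
segments = [
    ("Agent", 0, 20, 55),
    ("Customer", 22, 35, 30),
    ("Agent", 36, 70, 95),
    ("Customer", 75, 90, 40),
]
call_duration = 95  # seconds, excluding hold time

agent_talk = sum(end - start for spk, start, end, _ in segments if spk == "Agent")
customer_talk = sum(end - start for spk, start, end, _ in segments if spk == "Customer")
silence = call_duration - agent_talk - customer_talk

agent_words = sum(words for spk, *_, words in segments if spk == "Agent")
speaking_rate_wpm = agent_words / (agent_talk / 60)

print(f"Agent Talk Ratio: {agent_talk / call_duration:.0%}")
print(f"Customer Talk Ratio: {customer_talk / call_duration:.0%}")
print(f"Silence Percentage: {silence / call_duration:.0%}")
print(f"Speaking Rate: {speaking_rate_wpm:.0f} WPM")
```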

Agent Speech Insights

Displays agent-specific performance metrics.
  • Speaking Rate – Words Per Minute value.
  • Crutch Words – Count of filler words (for example, “um,” “uh,” “like”).
  • Empathy Score – Measurement of empathy in agent utterances.
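For example, a crutch-word count could be tallied along these lines (the filler vocabulary and sample utterance are assumptions; the product's detection is likely more nuanced):

```python
import re
from collections import Counter

CRUTCH_WORDS = {"um", "uh", "like"}  # assumed filler vocabulary

def count_crutch_words(utterance: str) -> Counter:
    """Count occurrences of known filler words in one agent utterance."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return Counter(tok for tok in tokens if tok in CRUTCH_WORDS)

print(count_crutch_words("Um, so, like, the refund should, uh, arrive soon."))
# Counter({'um': 1, 'like': 1, 'uh': 1})
```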

Comments

Displays all feedback submitted by auditors during the evaluation process. Comments appear both inline in the Transcript and in the Comments tab. Commenter details are shown based on privacy settings (for example, Hide Auditor Details).

Adding a Message-Level Comment

  1. Click Assign to Me to begin auditing the conversation.
  2. Hover over any message in the Transcript section; a Comment icon appears.
  3. Click the Comment icon and enter a comment Name and Comment text (both required).
  4. Add or delete your comment before sending.
  5. Click Send to publish the comment.
Once submitted, comments appear:
  • Inline in the Transcript, linked to the corresponding message.
  • In the Comments tab with the comment title, text, and commenter details (based on privacy settings).
Auditors and supervisors can add comments in By Question, By Value, and By AI Agent metrics when the audit is self-assigned.

Comment Types

  • Metric Comments – Added to specific evaluation criteria (By Question, By Value, or By AI Agent). Click + Add Comment, enter your comment, then click Save.
  • Message Comments – Contextual comments added at the message level in the Transcript. Support click-through navigation for quick review.

Click-Through Navigation

All users — including agents without QA permissions — can click a comment to navigate to the related message. The system centers the commented message in the Transcript window, enabling agents to review feedback from supervisors and QA auditors.

Near-Miss Scenarios

Near-miss evaluations flag responses that closely resemble, but do not fully meet, adherence standards. Applicable only in Deterministic Adherence mode. How it works:
  1. The system compares agent responses against predefined similarity thresholds.
  2. Near-miss cases are flagged for auditor review.
  3. When you click the View button, the evaluation is marked Yes (highlighted in green) and the relevant customer response is highlighted.
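A small sketch of how near-miss flagging could sit alongside the deterministic similarity check described above, assuming a configured threshold and an assumed band just below it (the 0.6 threshold and 0.1 band width are illustrative):

```python
def evaluate_response(similarity: float, threshold: float = 0.6, near_miss_band: float = 0.1) -> str:
    """Classify an agent response; responses just under the threshold are flagged for review."""
    if similarity >= threshold:
        return "Adhered"
    if similarity >= threshold - near_miss_band:
        return "Near miss - flag for auditor review"
    return "Not Adhered"

print(evaluate_response(0.74))  # Adhered
print(evaluate_response(0.55))  # Near miss - flag for auditor review
print(evaluate_response(0.30))  # Not Adhered
```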
  • By Question metrics are selected by default and cannot be deselected.
  • Auditors can only audit the metric types they have selected.
  • Supervisor score calculation includes all enabled metric types.

Self-Assignment for Audit

QA users (auditors or supervisors) can self-assign unclaimed interactions for auditing. Interactions that are already audited, completed, or assigned to another user are not eligible. To self-assign an interaction:
  1. Navigate to the Conversation Mining page.
  2. Select an interaction that is not yet audited or assigned.
  3. Click Assign to Me. A success message confirms the assignment.
  4. The interaction is marked as Self-Assigned on the Audit Allocations page and becomes unavailable for reassignment.
Only users with QA permission can add feedback comments at any point in the conversation, regardless of the evaluation metrics.

Audit Submission

The Submit button is enabled only when the interaction is assigned to you through Audit Allocations. Before submitting:
  1. If By Question, By Value, or By AI Agent metrics are present, select appropriate responses for all required audit questions.
  2. Ensure the adherence percentage totals 100%.
  3. Click Submit.
After submission:
  • The interaction is marked as Self-Assigned on the Audit Allocations page.
  • The audited interaction is unavailable for reassignment.
  • A completed and submitted interaction cannot be re-audited.
Agent access to scored interactions is controlled by the Agent Access to Scored Interactions setting:
  • Only manually audited interactions – Agents see Supervisor Audit Score interactions with Date & Time and Queues.
  • Manually audited and Auto QA scored – Agents see Kore Evaluation Score (Auto QA) and Supervisor Audited Score interactions.
Hide Auditor Details for Agent:
  • On – Auditor details are anonymized in the audit screen.
  • Off – Auditor details are visible.
Only supervisors can view auditor details; agents cannot.

Search

Provides keyword search across the entire transcript to locate specific topics, compliance issues, customer concerns, or training opportunities.

Conversation Details Tab

Provides contextual information about the interaction for review before or after evaluation. Conversation Details:
  • Start Time, Termination Time, End Time, Agent Name, Queue, Customer Phone, CSAT, Disposition, Evaluation Form, and Language.
Audit Details:
  • Auditor Name, Audited Date, Audit Score, and Kore Evaluation Score.
Identifiers (each includes a copy icon; example values shown):
  • Call ID – NA
  • Session ID – 699d3d5ef39661f7c0aa4b95
  • Channel User ID – NA
  • Call Conversation ID – NA
  • Agent Conversation ID – c-358c3b1-d472-4c2a-89bd-eebcca3dxxxx
  • User ID – u-e481d17b-aba0-5110-9377-05bc36f0xxxx
You can also use Assign to Me on this tab to assign the interaction to yourself for audit.

Audit Logs Tab

Provides a complete audit trail of the evaluation process, recording system and user actions, GenAI metric executions, and status changes. Log entries capture:
  • Log creation and updates – Records audit creation and updates with user ID, display name, and timestamps.
  • Supervisor and reviewer changes – Tracks who made each change and what was modified.
  • AI model execution data – Logs model version, execution duration, request/response token counts, and enabled GenAI features.
Each execution log entry includes:
  • Date and Time of execution.
  • GenAI Feature Name (for example, By Hold Adherence).
  • Language.
  • Model Name (for example, GPT-4o).
  • Integration Type (System or Custom).
  • Prompt Name and Type (Default or Custom).
  • Request Token Count, Response Token Count, and Response Duration.
  • Execution Status (Success or Failure).
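For illustration, one execution log entry can be pictured as a record carrying the fields listed above; every value below is a placeholder, not real log data.

```python
# Hypothetical shape of one GenAI execution log entry; all values are invented placeholders.
execution_log_entry = {
    "date_time": "2025-01-15T10:42:07Z",
    "genai_feature_name": "By Hold Adherence",
    "language": "English",
    "model_name": "GPT-4o",
    "integration_type": "System",        # System or Custom
    "prompt_name": "Default Adherence Prompt",
    "prompt_type": "Default",            # Default or Custom
    "request_token_count": 1834,
    "response_token_count": 212,
    "response_duration_ms": 950,
    "execution_status": "Success",       # Success or Failure
}
```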
Payload Visibility: View Request and Response payloads with options to expand/collapse, format or compact, copy to clipboard, or open in full-screen mode for debugging.
Select Assign to Me to assign a log entry to yourself for audit. The system records who assigned it and when, and displays the assigned user in the header and audit history.