Agent Flows are intelligent conversational workflows that combine Dialog Tasks with Agent Nodes to deliver autonomous, goal-driven customer service experiences.
DialogGPT orchestrates intent identification and routing, while Agent Nodes handle conversation execution within individual use cases. This enables AI Agents to autonomously plan, reason, and execute multi-step actions.
Key Components
| Component | Role |
|---|---|
| Dialog Tasks | Define the scope and structure of a use case (e.g., “Web Check-In Assistance”, “Account Balance Inquiry”) |
| Agent Nodes | Provide agentic capabilities: natural language understanding, slot filling, confirmation handling, and tool invocation within business rules |
| Deterministic Nodes | Entity, message, and service nodes for regulated or compliance-critical steps |
The hybrid design lets you combine deterministic nodes for strict control with Agent Nodes for flexible, natural conversation handling.
Choosing an Approach
Deterministic
Use when:
- Regulatory compliance requires exact wording and predictable behavior (legal disclaimers, financial disclosures)
- Audit trails and full traceability are required
- Conversation follows a fixed, linear path
- Consistent responses matter more than natural conversation
Examples: Account verification, payment processing, medical triage, loan applications
Agentic
Use when:
- Natural, human-like interaction is the priority
- User inputs vary widely for the same intent
- Ease of maintenance matters (a single Agent Node handles increasing complexity)
- You’re building toward more autonomous experiences
Examples: Product recommendations, general inquiries, travel planning, content discovery
Hybrid
Use when:
- Mixed requirements exist — some steps need strict control, others benefit from flexibility
- Transitioning gradually from deterministic to agentic
- Different use cases within the same app have different needs
Examples:
- Banking: transactions are deterministic, general inquiries are agentic
- Healthcare: appointment booking is deterministic, health information is agentic
Trade-offs
| Factor | Deterministic | Agentic |
|---|---|---|
| Performance | Faster, predictable latency | Dependent on LLM response times |
| Cost | Lower (no LLM calls for responses) | Higher LLM API costs, lower dev/maintenance costs |
| Scalability | Scales linearly with complexity | Single agent handles growing complexity |
| User experience | Consistent but potentially rigid | Natural and engaging; requires prompt engineering |
Scoping Agent Flows
Avoid use cases that are too granular (dialog bloat, maintenance overhead) or too broad (poor accuracy, weak semantic matching). Find the middle ground with clear semantic boundaries.
When to Split a Use Case
Split into separate flows when:
- Users have distinct goals (“Book appointment” vs. “Cancel appointment”)
- Fulfillment uses different backend APIs or workflows
- Training phrases are semantically distinct
- Required entities or business rules differ
When to Keep Together
Keep as a single flow when:
- Variations express the same goal (“What’s my balance?” / “How much do I have?”)
- The same API handles all variations
- Training phrases overlap significantly
- Required entities are identical
Writing Effective Descriptions
Good descriptions are:
- Semantically rich: Activate more embedding dimensions
- Action and goal-oriented: Include the primary action and desired outcome
- Contextual: Explain when users typically need this
Poor scoping (high collision risk) — all retrieved for “where is my order?”:
Check_Order_Status, Track_Order, View_Order_Details, Get_Order_Information
Well-scoped examples:
- Track_Order_Shipment — Track shipping and delivery status for orders in transit. Users want to know WHERE their package is and WHEN it will arrive. Includes tracking numbers, carrier information, and estimated delivery dates.
- View_Order_History — View past completed orders. Users want to see WHAT they ordered, when, and final totals. For historical reference, NOT active in-transit shipments.
- Modify_Pending_Order — Make changes to an order that hasn’t shipped. Users want to UPDATE their order — change address, cancel items, or adjust quantities. Only for orders still processing, not yet dispatched.
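The collision risk above can be made concrete with a toy sketch. Intent retrieval of this kind typically ranks use-case descriptions by embedding similarity to the user query; the vectors and values below are illustrative stand-ins, not real embeddings or platform code.

```javascript
// Toy cosine similarity over hand-made "embedding" vectors (illustrative only).
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Overlapping descriptions produce near-identical vectors, so both score high:
const trackOrder = [0.9, 0.1, 0.0];
const orderStatus = [0.88, 0.12, 0.01];
// A well-separated description emphasizes different dimensions:
const viewHistory = [0.2, 0.9, 0.1];

const query = [0.92, 0.08, 0.0]; // stands in for "where is my order?"
console.log(cosineSimilarity(query, trackOrder) > cosineSimilarity(query, viewHistory)); // true
```

The point of the sketch: when two descriptions collapse onto nearly the same vector (like trackOrder and orderStatus), retrieval cannot separate them; semantically distinct wording pushes each use case toward its own region of the embedding space.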
Dialog Tasks
Dialog Tasks define the scope and structure of each use case. Each task consists of interconnected nodes that retrieve information, perform actions, connect to external services, and send messages to users.
Related features:
- Sub-intent management and Node Grouping — Configure sub-intents using group nodes or configure a task as a sub-intent
- Component Transitions — Configure if-else conditions between nodes based on custom criteria
- Voice and IVR Integration — Enable voice interaction (see Voice Call Properties)
- User and Error Prompt Management — Customize messaging at each node
- Context Object — Share data across tasks, intents, and FAQs (see Context Object)
Creating Dialog Tasks
Navigate to Automation > Dialogs, then click Create Dialog.
For optimal performance, limit dialog tasks to 50 or fewer. Exceeding this may cause sluggish UI response and increased latency.
Three creation methods are available:
From Scratch
- Click Start From Scratch.
- Enter an Intent Name (required) and Intent Description (recommended). Add up to 5 secondary descriptions to broaden semantic coverage and improve intent detection accuracy.
- Set availability: Customer Use Case, Agent AI Use Case, or both.
- Configure Intent Settings: set the task as sub-intent only or hide it from help.
- Set Analytics - Containment Type: Abandonment as Self-Service or Drop Off.
- Optionally set Conversation Context, Intent Preconditions, or Context Output.
- Click Proceed.
Generate with AI
- Click Generate with AI.
- Enter an Intent Name and a meaningful Description, then click Generate.
- Preview the generated flow. Click Regenerate with a revised description to refine.
- Click Proceed when satisfied.
The platform auto-defines entities, prompts, error prompts, service tasks, and other parameters. Customize as needed after generation.
If no description is provided, only an error prompt node is generated. A meaningful description is strongly recommended.
From Marketplace Templates
- Click Marketplace and browse categories and integrations. Configured integrations are labeled Installed.
- For Dialog Action Templates (API call templates): select an integration, then click Install on the desired template.
- For Dialog Templates (pre-created flows): select a template, click Install, configure name and description, set up utterances and channel experience, then click Finish.
Marketplace templates require the integration to be configured in your AI Agent first.
Session Management
Session variables persist data across tasks, dialogs, and users. Use them in JavaScript within dialog nodes.
JavaScript API
"EnterpriseContext" : {
"get" : function(key){},
"put" : function(key, value, ttl){}, // ttl in minutes
"delete" : function(key){}
},
"BotContext" : {
"get" : function(key){},
"put" : function(key, value, ttl){},
"delete" : function(key){}
},
"UserContext" : {
"get" : function(key){} // read-only
},
"UserSession" : {
"get" : function(key){},
"put" : function(key, value, ttl){},
"delete" : function(key){}
},
"BotUserSession" : {
"get" : function(key){},
"put" : function(key, value, ttl){},
"delete" : function(key){}
}
put(), get(), and delete() support EnterpriseContext, BotContext, UserSession, and BotUserSession. UserContext supports get() only. All methods operate on root-level objects — nested paths are not supported.
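The call pattern can be sketched with a minimal in-memory mock. The real platform injects these objects into script nodes at runtime; the mock below only mirrors the get/put/delete signatures listed above so the pattern is runnable on its own (TTL handling is omitted for brevity).

```javascript
// Minimal mock of one session store, mirroring the documented signatures.
// Illustrative only; not the platform implementation.
function makeStore() {
  const data = {};
  return {
    get: function (key) { return data[key]; },
    put: function (key, value, ttl) { data[key] = value; }, // ttl in minutes, ignored in this mock
    delete: function (key) { delete data[key]; }
  };
}

const BotUserSession = makeStore();
BotUserSession.put("lastOrderId", "ORD-1042", 60);
console.log(BotUserSession.get("lastOrderId")); // "ORD-1042"
BotUserSession.delete("lastOrderId");
console.log(BotUserSession.get("lastOrderId")); // undefined
```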
Session Variable Types
| Type | Scope | Description |
|---|---|---|
| EnterpriseContext | All apps, all users, all sessions | Enterprise-wide key-value store. Use carefully to avoid unnecessary data exposure. |
| BotContext | All users of a specific app | App-level shared variables (e.g., default currency based on user location) |
| UserContext | All apps for a user (read-only) | System-provided user data |
| UserSession | All apps for a specific user | User-specific data shared across all apps (e.g., home address for commerce and delivery apps) |
| BotUserSession | Specific app + specific user | Per-user, per-app data (e.g., source and destination for a travel app) |
UserContext Read-Only Keys
| Key | Value |
|---|---|
| _id | Kore.ai user ID |
| emailId | Email address |
| firstName / lastName | Name |
| profImage | Avatar filename |
| profColor | Account color |
| activationStatus | active, inactive, suspended, or locked |
| jTitle | Job title |
| orgId | Organization ID |
| customData | Custom data passed via web SDK |
| identities | Alternate user IDs (val, type) |
Standard Keys
| Key | Purpose |
|---|---|
| _labels_ | Returns a friendly label for a GUID (e.g., project name instead of numeric ID) |
| _tenant_ | Returns the tenant name for enterprise apps (e.g., JIRA subdomain in a URL) |
| _fields_ | Stores end-user action task inputs not included in the payload response |
| _last_run | UTC timestamp of the last web service poll in ISO 8601 format |
Method Limitations
delete(): Removes root-level objects only. To delete nested keys, use delete context.session.BotUserSession.{path}. You cannot delete a root-level object using this syntax.
put(): Inserts at root-level only. BotUserSession.put("Company.Address", val) is not supported.
get(): Retrieves root-level objects only. BotUserSession.get("Company.name") is not supported.
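A common workaround for the root-level restriction is to store the whole object under one key and edit nested fields on the retrieved object. The sketch below uses a toy in-memory stand-in for BotUserSession so the pattern is runnable; the real object is supplied by the platform.

```javascript
// Illustrative stand-in for BotUserSession (root-level key-value semantics).
const store = {};
const BotUserSession = {
  get: (key) => store[key],
  put: (key, value) => { store[key] = value; },
  delete: (key) => { delete store[key]; }
};

// Not supported: BotUserSession.put("Company.Address", "12 Main St")
// Supported: put the whole object at the root...
BotUserSession.put("Company", { name: "Acme", Address: "12 Main St" });

// ...then read or update nested fields via the retrieved object:
const company = BotUserSession.get("Company");
company.Address = "99 Elm Ave";
BotUserSession.put("Company", company);
console.log(BotUserSession.get("Company").Address); // "99 Elm Ave"
```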
Context Object
The Context object persists data throughout dialog execution and across all intents (dialog tasks, action tasks, alert tasks, FAQs). The NLP engine populates intent, entities, and history automatically.
The context object has a size limit of 1024 KB. The platform notifies designers when this limit is approached. In future releases, conversations that exceed the limit may be discarded.
Usage: Reference context keys in URLs (https://example.com/{{context.entities.topic}}/rss), script nodes, entity nodes, and SDK payloads. Update context key values in script nodes to influence dialog execution.
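A typical script-node usage can be sketched as follows. The `context` object is provided by the platform at runtime; the sample values, the entity name `topic`, and the custom key `isPriorityTopic` are illustrative assumptions.

```javascript
// Stand-in for the platform-provided context object (sample values only).
const context = {
  entities: { topic: "payments" },
  intent: "Get_Help"
};

// Build a URL from an entity value (mirrors the {{context.entities.topic}} pattern):
const feedUrl = "https://example.com/" + context.entities.topic + "/rss";

// Set a developer-defined key that a later transition condition can check:
context.isPriorityTopic = (context.entities.topic === "payments");

console.log(feedUrl);                 // "https://example.com/payments/rss"
console.log(context.isPriorityTopic); // true
```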
Context Object Keys
| Key | Scope | Description | Syntax |
|---|---|---|---|
| intent | Dialog | Recognized intent | context.intent.<intent name> |
| entities | Dialog | Key-value pairs of user-provided entity values | context.entities.<entity name> |
| traits | Dialog | Traits set for the given context | — |
| currentLanguage | Global | Current conversation language | — |
| suggestedLanguages | Global | Languages detected from the user’s first utterance, ordered by confidence; reset each conversation | — |
| history | Global | Array of node execution records (nodeId, state, type, componentName, timestamp) | — |
| onHoldTasks | Dialog | Read-only array of tasks on hold during the current conversation | — |
| <service node name>.response | Dialog | HTTP response from a Service node (statusCode, body) | context.<node name>.response.body |
| resultsFound | Dialog | true if results were returned | — |
| message_tone | Global | Tone emotions and scores for the current node | — |
| dialog_tone | Global | Average tone emotions and scores for the full dialog session | — |
| Developer Defined Key | Dialog | Custom key-value pair set by the developer | context.<varName> |
| UserQuery | Dialog | Original and rephrased user query | context.UserQuery.originalUserQuery, context.UserQuery.rephrasedUserQuery |
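Reading a Service node response in a downstream script node follows the context.<node name>.response pattern above. In this sketch, the node name GetBalance, the body shape, and the custom key balanceText are assumptions for illustration; `context` is platform-provided at runtime.

```javascript
// Stand-in for the runtime context after a Service node named "GetBalance" ran.
const context = {
  GetBalance: {
    response: {
      statusCode: 200,
      body: { accountId: "AC-881", balance: 2450.75 }
    }
  }
};

const res = context.GetBalance.response;
if (res.statusCode === 200) {
  // Stash a message for a later Message node to render:
  context.balanceText = "Your balance is $" + res.body.balance.toFixed(2);
} else {
  context.balanceText = "Sorry, we could not fetch your balance.";
}
console.log(context.balanceText); // "Your balance is $2450.75"
```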
Node States
| State | Description |
|---|---|
| processing | Platform begins processing the node |
| processed | Node and connections processed; next node found but not yet moved to |
| waitingForUserInput | User prompted but input not yet received |
| pause | Dialog paused while another task runs |
| resume | Paused dialog continues after the other task completes |
| waitingForServerResponse | Async server response pending |
| error | Error occurred (loop limit reached, server failure, script error) |
| end | Dialog reached the end of the flow |
Tone Levels
Tone level ranges from -3 (definitely suppressed) to +3 (definitely expressed). Tone names: angry, disgust, fear, sad, joy, positive.
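A script node can branch on this tone data, for example by picking the strongest expressed emotion. The exact shape of context.message_tone is assumed here as an array of entries with a tone name and a level in the -3 to +3 range described above; treat it as a sketch, not the platform's data contract.

```javascript
// Sample tone data in an assumed shape (tone_name plus level in -3..+3).
const messageTone = [
  { tone_name: "angry", level: 2 },
  { tone_name: "sad", level: -1 },    // negative level: tone suppressed
  { tone_name: "positive", level: 1 }
];

// Keep only expressed tones (level > 0) and take the strongest:
const expressed = messageTone.filter(t => t.level > 0);
const dominant = expressed.sort((a, b) => b.level - a.level)[0];
console.log(dominant.tone_name); // "angry"
```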
Reuse entity values across dialogs: Set reuseEntityWords: true in preconditions to automatically carry entity values from a parent dialog into downstream dialogs without re-prompting the user.
Voice Call Properties
Voice call properties configure AI Agent behavior for voice channels: IVR, Twilio, IVR-AudioCodes, and Kore.ai Voice Gateway.
Enable a voice channel first, then configure properties at two levels:
- App level: Set during channel enablement.
- Component level: Override per node — applicable to Entity, Message, Confirmation, Agent Node, and Standard Responses.
Access node-level properties in the Dialog Builder by selecting a node and opening the IVR Properties section.
App-Level Channel Settings
| Field | Description | Channels |
|---|---|---|
| IVR Data Extraction Key | Syntax for extracting filled data; overridable at entity/confirmation node level | IVR |
| End of Conversation Behavior | Trigger a task/script/message, or terminate the call at end of conversation | IVR, Twilio, AudioCodes, Voice Gateway |
| Call Termination Handler | Dialog task to run when call ends in error | IVR, Twilio, AudioCodes, Voice Gateway |
| Call Control Parameters | Property name-value pairs for VXML definitions / AudioCodes session parameters | IVR, AudioCodes |
| Threshold Key | Variable where ASR confidence levels are stored (pre-populated; do not change unless necessary) | IVR |
| ASR Confidence Threshold | Range 0–1.0; defines when IVR hands control to the AI Agent | IVR |
| Timeout Prompt | Default prompt when the user doesn’t respond within the timeout period | IVR, Twilio, AudioCodes, Voice Gateway |
| Grammar | VXML grammar for speech/DTMF input (custom text or URL) | IVR |
| No Match Prompt | Default prompt when user input doesn’t match defined grammar | IVR |
| Barge-In | Allow user input while a prompt is playing | IVR, Twilio, AudioCodes, Voice Gateway |
| Timeout | Max wait for user input (1–60 seconds) | IVR, Twilio, AudioCodes, Voice Gateway |
| No. of Retries | Max retry attempts (1–10) | IVR, Twilio, AudioCodes |
| Log | Send chat log to IVR system | IVR |
Node-Level Voice Settings
Applicable to: Entity, Message, Confirmation, Agent Node, and Standard Responses.
| Field | Description | Channels |
|---|---|---|
| Initial Prompts | Prompts played when IVR first executes the node | IVR, Twilio, AudioCodes, Voice Gateway |
| Timeout Prompts | Prompts when user doesn’t respond in time. Supports Customize Retries Behavior (1–10 retries) and Behavior on Exceeding Retries (call termination handler, initiate dialog, or jump to node). Retries customization applies to IVR only, at Entity, Confirmation, and Message nodes. | IVR, Twilio, AudioCodes, Voice Gateway |
| Timeout | Preset (1–60 sec) or select an environment variable. Non-numeric or >60-second variable values fall back to the channel-level timeout. | IVR, Twilio, AudioCodes, Voice Gateway |
| No Match Prompts | Prompts when input doesn’t match grammar. Supports customizable retries (IVR only). | IVR |
| Error Prompts | Prompts when input is an invalid entity type. Supports customizable retries (IVR only). | IVR, Twilio, AudioCodes, Voice Gateway |
| Grammar | Speech/DTMF grammar (custom text or URL) | IVR, Twilio |
| No. of Retries | Max retries (1–10); overrides app-level setting | IVR, Twilio, AudioCodes, Voice Gateway |
| Behavior on Exceeding Retries | Call termination handler, initiate dialog, or jump to node | IVR, Twilio, AudioCodes, Voice Gateway |
| Barge-In | Allow input during prompt (default: No) | IVR, Twilio, AudioCodes, Voice Gateway |
| Call Control Parameters | Node-level VXML/AudioCodes parameters; overrides app-level values | IVR, AudioCodes, Voice Gateway |
| Log | Send chat log to IVR (default: No) | IVR |
| Recording | Recording state at this node (default: Stop) | IVR |
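The Timeout fallback rule from the table above (non-numeric or over-60-second environment variable values fall back to the channel-level timeout) can be sketched as a small validation function. The function name and the lower-bound check are assumptions for illustration.

```javascript
// Sketch of the documented fallback: use the environment variable's value
// only if it parses as a number within the 1-60 second range; otherwise
// fall back to the channel-level timeout.
function resolveNodeTimeout(envValue, channelTimeout) {
  const seconds = Number(envValue);
  if (!Number.isFinite(seconds) || seconds < 1 || seconds > 60) {
    return channelTimeout; // fall back to the channel-level setting
  }
  return seconds;
}

console.log(resolveNodeTimeout("15", 10));  // 15
console.log(resolveNodeTimeout("abc", 10)); // 10 (non-numeric)
console.log(resolveNodeTimeout("90", 10));  // 10 (over 60 seconds)
```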
Additional app-level-only settings:
| Field | Description | Channel |
|---|---|---|
| Locale Definition | Sets the xml:lang attribute in VXML to enhance ASR language recognition | IVR |
| Document Type Definitions | DTD settings (Status, Public ID, System ID) for VXML structure validation | IVR |
| Fallback Redirection | Redirect URL used when the call hangs up (default: disabled) | IVR |
| VXML Error Threshold | Max VXML errors before corrective action (default: 3); customizable to 1, 2, or 3 | IVR |
| Propagate Values to Linked Apps | Propagates Voice Call Properties from a Universal App to linked apps (default: disabled) | IVR |
Multiple prompts can be defined per prompt type; they play in the defined order across successive retries, which avoids repeating the same message. Drag prompts to reorder them.
Configuring Grammar
At least one Speech Grammar must be defined for IVR. Supported systems:
Nuance
- Set Enable Transcription to No.
- In Grammar: select Speech or DTMF, then enter the VXML path to dlm.zip, for example https://nuance.kore.ai/downloads/kore_dlm.zip?nlptype=krypton&dlm_weight=0.2&lang=en-US (adjust the path and language code for your setup).
- Click Add Grammar and add the path to nle.zip using the same steps.
- Save.
Voximal / UniMRCP
- Set Enable Transcription to Yes.
- Enter the transcription engine source:
  - Voximal: builtin:grammar/text
  - UniMRCP: builtin:grammar/transcribe
- Leave the Grammar section blank — the transcription source handles speech vetting.
- Save.
Building Multilingual Applications
AI for Service supports 100+ languages. A multilingual application has two key components: input processing and response processing.
| Approach | How it works | Best for |
|---|---|---|
| Native Multilingual | Processes input in the original language using BGEM3 embeddings and multilingual LLMs — no translation overhead | Contextual understanding across languages, lower latency, cost optimization |
| Translation-Based | Converts input to the app’s default language before processing | Language-specific business logic, legacy integrations, single-language data processing |
Response Processing
| Approach | Level | Best for |
|---|---|---|
| Locale-Specific Responses | Node-level | Compliance-critical content, brand messaging, regulated industries (authored per language) |
| Translation Engines (Google Cloud, Microsoft, custom) | App-level | Broad language coverage (50+ languages), transactional messages, rapid deployment |
| LLM-Based Translation and Rephrasing | App-level or node-level | Conversational tone, cultural adaptation, dynamic context-dependent messaging |
Locale-Specific trade-offs:
| Advantages | Limitations |
|---|---|
| Full wording control | High maintenance overhead |
| Culturally appropriate | Hard to scale across many nodes |
| No translation cost or latency | Requires multilingual content creators |
Translation Engine trade-offs:
| Advantages | Limitations |
|---|---|
| Fast and cost-effective | Less wording control |
| Supports 100+ languages automatically | May miss cultural nuances; limited context awareness |
| Minimal setup and maintenance | Cannot adjust tone post-translation |
LLM-Based trade-offs:
| Advantages | Limitations |
|---|---|
| Context-aware and culturally adaptive | Higher latency and cost per response |
| Combines translation and rephrasing in one call | Requires prompt engineering expertise |
| Flexible prompt customization | Less deterministic output |
| Can personalize based on user context | May require guardrails for regulated industries |
Hybrid Patterns
| Pattern | When to Use |
|---|---|
| Translation Engine + LLM Rephrasing | Dynamic responses with broad language coverage; maintain tone consistency at scale. Note: two API calls increase latency. |
| Locale-Specific + LLM Rephrasing | Maximum flexibility — control base content while enabling personalized delivery |
| Agent Node Business Rules | Add a language instruction directly in the Agent Node: “Always respond in the same language as the user input, maintaining consistent terminology and cultural context.” No separate translation configuration needed. |
Decision Guide
| Requirement | Recommended Approach |
|---|---|
| Compliance-critical content | Locale-specific responses only |
| 50+ languages, transactional | Translation Engine |
| Conversational tone matters | LLM-based rephrasing |
| Dynamic responses + broad coverage | Translation Engine + LLM rephrasing |
| Specific content + personalization | Locale-specific + LLM rephrasing |
Testing Checklist
- Verify language detection accuracy.
- Review translations with native speakers.
- Test edge cases: mixed-language input, special characters, RTL languages.
- Measure latency across approaches.
- Monitor API costs per interaction.
Common Pitfalls
- Compliance content: Always use locale-specific responses for legal text; never rely on automated translation.
- Double translation: Don’t enable both a Translation Engine and LLM translation simultaneously — this causes double translation and unpredictable output.
- Skipping native speaker review: Translations may be technically correct but culturally inappropriate. Always validate with native speakers.