Connect large language models (LLMs) to power your AI Agent’s generative AI capabilities. You can use pre-built integrations, bring your own model, or deploy Kore.ai’s fine-tuned XO GPT models.

Integration Options

  • Pre-built Integrations: Direct connections to Azure OpenAI, OpenAI, Anthropic, and Amazon Bedrock with out-of-the-box support and pre-configured prompt templates.
  • Custom LLM (BYO): Connect any externally hosted or enterprise-hosted model. Works with the Platform’s Auth Profiles module so you can use your preferred authentication mechanism.
  • Kore.ai XO GPT: Fine-tuned models for enterprise conversational AI, including conversation summarization, query rephrasing, and dialog orchestration.

Authorization

Authorization establishes a secure connection between the Platform and your LLM provider. Each provider requires different credentials.
  • Azure OpenAI: API Key, Sub-Domain, Deployment ID. The API Key authenticates your account; the Sub-Domain identifies your Azure resource endpoint; the Deployment ID specifies the deployed model.
  • OpenAI: API Key. Authenticates your OpenAI account.
  • Anthropic: API Key. Authenticates your Anthropic account.
  • Amazon Bedrock: Access Key ID, Secret Access Key, Region, Model ID. The access keys authenticate your AWS account; the Region identifies the service region; the Model ID specifies the Bedrock model.
  • Custom LLM: Endpoint, Authorization, Headers. The Endpoint is the API URL; Authorization defines the authentication method or credentials; Headers are additional key-value pairs.
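The credential fields above ultimately surface as the HTTP authentication each provider's API expects. A rough sketch (header names follow the providers' public APIs; Amazon Bedrock is omitted because it signs requests with AWS Signature v4 rather than a static header):

```python
def auth_headers(provider: str, credentials: dict) -> dict:
    """Build the HTTP auth headers each provider's API expects.

    Amazon Bedrock is omitted: it signs requests with AWS SigV4
    rather than sending a static credential header.
    """
    if provider == "azure-openai":
        return {"api-key": credentials["api_key"]}
    if provider == "openai":
        return {"Authorization": f"Bearer {credentials['api_key']}"}
    if provider == "anthropic":
        return {"x-api-key": credentials["api_key"],
                "anthropic-version": "2023-06-01"}
    if provider == "custom":
        # Custom LLM: caller supplies whatever headers its auth profile needs.
        return dict(credentials.get("headers", {}))
    raise ValueError(f"unknown provider: {provider}")
```

This is only a sketch of the wire-level shape; the Platform builds these headers for you from the fields you enter.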

Dynamic Variables

Use variables instead of static credential values to keep secrets secure and reusable across environments. The Platform substitutes actual values at runtime, preventing exposure of sensitive information and simplifying credential updates. Benefits:
  • Prevent API key exposure by using secure environment variables.
  • Simplify key rotation—update variables in one place without reconfiguring integrations.
Set up all required variables before configuring the LLM. See App Variables.
Variable access by feature type:
  • Runtime features: content and environment variables.
  • Designtime features: content, context, and environment variables.
The following fields support variables per provider:
  • Azure OpenAI: API Key, Sub-Domain, Model Deployment IDs
  • OpenAI: API Key
  • Anthropic: API Key
  • Amazon Bedrock: IAM Role ARN, Amazon STS API, Amazon Resource Name (ARN), Endpoint, Headers (optional)
  • Custom LLM: Endpoint, Authorization, Headers
Variable configuration examples:
  • Pre-built LLM (Azure OpenAI): In the Test Connection pop-up, enter sample values. Select the checkbox to save them for future use.
  • Provider’s new LLM (Azure OpenAI): Add the model with dynamic variables, then enter sample values in the Test Connection pop-up and select the checkbox to save them.
  • Amazon Bedrock: Enter the test payload in the request prompt pop-up.
  • Custom LLM: Enter sample values for endpoints and headers, then enter the test payload and click Test. Use the checkbox to save sample values.
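Mechanically, runtime substitution amounts to replacing placeholders with values from a secure store just before the request is sent. A minimal sketch (the `{{env.NAME}}` syntax here is illustrative, not necessarily the Platform's exact placeholder format; see App Variables for that):

```python
import re

def substitute(template: str, env: dict) -> str:
    """Replace {{env.NAME}} placeholders with runtime values.

    The {{env.NAME}} syntax is illustrative; check the Platform's
    App Variables documentation for the exact placeholder format.
    """
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in env:
            raise KeyError(f"undefined variable: {name}")
        return env[name]
    return re.sub(r"\{\{env\.(\w+)\}\}", repl, template)
```

Because the secret lives only in the variable store, rotating a key means updating one variable rather than editing every integration that uses it.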

Configure Pre-built Integrations

Pre-built integrations provide direct connections to leading AI providers using pre-configured APIs and prompt templates. You can use default prompts or create custom ones.
The Platform supports Azure OpenAI and OpenAI integrations for the Chat Completions API only. To use the Responses API, configure a Custom LLM.
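For reference, a Chat Completions call to an Azure OpenAI deployment has this shape. The URL pattern and api-key header follow Azure's public API; the default api-version value below is an assumption, so confirm the versions your resource supports:

```python
def chat_completions_request(sub_domain: str, deployment_id: str,
                             api_key: str, messages: list,
                             api_version: str = "2024-02-01"):
    """Assemble the URL, headers, and JSON body for an Azure OpenAI
    Chat Completions call; send with any HTTP client."""
    url = (f"https://{sub_domain}.openai.azure.com/openai/deployments/"
           f"{deployment_id}/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = {"messages": messages}
    return url, headers, body
```

Note how the Sub-Domain and Deployment ID from the Authorization fields map directly into the URL, which is why both are required.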
The Platform regularly integrates new models from providers. To use a model not yet available as a pre-built integration, see Add a New Model to a Pre-built Integration.

Azure OpenAI

Azure OpenAI is an out-of-the-box (OOB) integration. You can authorize all models using variables and add newly launched models to the OOB integration. For newly launched models, you must use custom prompts—the Platform does not provide system prompts or templates. Steps:
  1. Go to Generative AI Tools > Models Library > Configure Now for Azure OpenAI, then click Next.
  2. Complete Authorization.
  3. In the Models section, toggle on the required model and enter the Deployment ID.
  4. (Optional) Click +Add to add a new model.
  5. Read the Policy Guidelines, select the checkbox, and click Save.
  6. In the Connection Status pop-up:
    • Successful: click Next.
    • Failed: hover over the warning icon to view the error. Click Cancel to update details, or Next to save the failed configuration.
  7. (Optional) Enable Token Usage Limit to track usage. You can set this later from the More options menu.
  8. Click Save > Confirm & Save. The model appears in the Models Library.
Next: enable GenAI Features.

OpenAI

OpenAI is an out-of-the-box (OOB) integration. You can authorize all models using variables and add newly launched models to the OOB integration. For newly launched models, you must use custom prompts. Steps:
  1. Go to Generative AI Tools > Models Library > Configure Now for OpenAI, then click Next.
  2. Complete Authorization.
  3. (Optional) Click +Add to add a new model.
  4. Read the Policy Guidelines, select the checkbox, and click Save.
  5. In the Connection Status pop-up:
    • Successful: click Next.
    • Failed: hover over the warning icon to view the error. Click Cancel to update details, or Next to save the failed configuration.
  6. (Optional) Enable Token Usage Limit to track usage. You can set this later from the More options menu.
  7. Click Save > Confirm & Save. The model appears in the Models Library.
Next: enable GenAI Features.

Anthropic

Anthropic is an out-of-the-box (OOB) integration. You can authorize models using variables. The Platform does not provide system prompts or templates—you must use custom prompts. Steps:
  1. Go to Generative AI Tools > Models Library > Configure Now for Anthropic, then click Next.
  2. Complete Authorization.
  3. Click +Add to add a new model.
  4. Read the Policy Guidelines, select the checkbox, and click Save.
  5. In the Connection Status pop-up:
    • Successful: click Next.
    • Failed: hover over the warning icon to view the error. Click Cancel to update details, or Next to save the failed configuration.
  6. (Optional) Enable Token Usage Limit to track usage. You can set this later from the More options menu.
  7. Click Save > Confirm & Save. The model appears in the Models Library.
Next: add Prompts.

Amazon Bedrock

Amazon Bedrock is an out-of-the-box (OOB) integration. The Platform does not provide system prompts or templates—you must use custom prompts. Setup requires two phases: configuring your AWS account, then configuring the integration in the Platform.

Phase A: Configure Your AWS Account

Prerequisites: Ensure you have the necessary IAM permissions in your AWS account. See Policies and Permissions in AWS IAM.
1. Create an IAM Role
Create an IAM role that grants the Platform access to invoke Amazon Bedrock models. See AWS IAM role creation and IAM policy examples for Bedrock. Example IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    }
  ]
}
2. Set the Trust Policy
Allow the Platform to assume your IAM role. Replace <kore-arn> with the AWS account ID provided by the Platform.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<kore-arn>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
For private/on-premises deployments, the trust policy must point to your internal AWS IAM role.
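Before moving on to Phase B, it can help to sanity-check the trust policy programmatically, since a missing principal or action is the most common cause of a failed AssumeRole. A small sketch (the ARN values in the test are placeholders):

```python
import json

def trusts_principal(trust_policy: str, principal_arn: str) -> bool:
    """Return True if any Allow statement in the trust policy lets
    principal_arn call sts:AssumeRole."""
    for stmt in json.loads(trust_policy).get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        aws = stmt.get("Principal", {}).get("AWS", [])
        aws = [aws] if isinstance(aws, str) else aws
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if principal_arn in aws and "sts:AssumeRole" in actions:
            return True
    return False
```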
3. Set the STS Endpoint
Use the STS endpoint for the region where your IAM role resides. The STS region must match the IAM role region, not necessarily the model region. See the AWS STS endpoints list. Example:
https://sts.us-east-1.amazonaws.com/

Phase B: Register Your IAM Role with Kore.ai

After creating the IAM role, raise a support ticket with Kore.ai to add your IAM Role ARN to the Platform’s trust policy. This enables the Platform to assume your role and invoke Bedrock.
  1. Raise a support ticket with your IAM Role ARN.
  2. Wait for confirmation that the role has been registered.
Without this step, the Platform cannot assume your IAM role. Both your AWS account and Kore.ai’s environment must explicitly trust each other for cross-account access.

Configure the Integration

Steps:
  1. Go to Generative AI Tools > Models Library > Configure Now for Amazon Bedrock.
  2. On the Authorization tab, enter the following:
    • Provider Name: A name to identify the provider or group of models.
    • Model Name: A unique name for the language model.
    • IAM Role ARN: Enables the Platform to securely access resources without long-term access keys.
    • Amazon STS API: The AssumeRole API endpoint for the AWS region where your IAM role resides. Used to generate temporary credentials.
    • Amazon Resource Name (ARN): The Bedrock ARN that grants your IAM role access to the specific model.
    • Endpoint: The URL to interact with the model’s API.
    • Headers (optional): Additional metadata headers for the model API.
  3. Read the Policy Guidelines, select the checkbox, and click Next.
  4. In the request prompt pop-up, enter the test payload and click Test. Use the checkbox to save the payload.
  5. In the Connection Status pop-up:
    • Successful: click Next.
    • Failed: hover over the warning icon to view the error. Click Cancel to update details, or Next to save the failed configuration.
  6. (Optional) Enable Token Usage Limit to track usage. You can set this later from the More options menu.
  7. Click Save > Confirm & Save. The model appears in the Models Library.
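The region-sensitive pieces of this configuration (the Amazon STS API field and the model Endpoint) follow AWS's public endpoint patterns. A sketch, with both patterns treated as assumptions to confirm against the AWS endpoint documentation:

```python
def sts_endpoint(role_region: str) -> str:
    """Regional STS AssumeRole endpoint; per the setup above, use the
    region associated with the IAM role, not the model's region."""
    return f"https://sts.{role_region}.amazonaws.com/"

def bedrock_invoke_url(model_region: str, model_id: str) -> str:
    """InvokeModel endpoint on the Bedrock runtime host for a model;
    the host/path pattern here is an assumption from AWS's public docs."""
    return (f"https://bedrock-runtime.{model_region}.amazonaws.com"
            f"/model/{model_id}/invoke")
```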
Next: add Prompts.

Add a New Model to a Pre-built Integration

This feature is available only for Automation AI and Search AI.
You can add newly launched models to an existing OOB integration (OpenAI, Azure OpenAI, or Anthropic) without waiting for the Platform to add them. New models initially require custom prompts; the Platform adds system prompts over time. Benefits:
  • Immediate access to newly launched models.
  • Custom prompt support from day one.
  • Maintains platform security and authentication standards.
Steps: You can add a new model during initial integration setup or at any time afterward from the More options menu. For example, in Azure OpenAI: on the Authorization tab, go to the Models section and click + Add. Enter the Model ID, Model Display Name, Description, and Deployment ID, then click Next. Repeat to add more models.
  • Model ID: assigned by the LLM provider.
  • Model Display Name: user-defined; used across the Platform after integration is enabled.

Configure Custom LLM Integration

The Platform supports bring-your-own (BYO) model integrations with any externally hosted or enterprise-hosted LLM. You can create custom prompts optimized for your model and use case. This framework works with the Platform’s Auth Profiles module.
The Platform offers generative AI features for English and non-English NLU and AI Agent languages.
Steps:
  1. Go to Generative AI Tools > Models Library > Configure Now for Custom LLM.
  2. On the Configuration tab, enter the Integration Name, Model Name, Endpoint, and Headers.
  3. On the Auth tab, select an existing authorization profile or create a new one. See App Authorization Overview.
    OAuth v2.0 and Kerberos SPNEGO auth profiles are supported for Custom LLM integration.
  4. Read the Policy Guidelines, select the checkbox, and click Next.
  5. In the request prompt pop-up, enter the test payload and click Next to check the connection. Use the checkbox to save the payload.
  6. In the Connection Status pop-up:
    • Successful: click Next.
    • Failed: hover over the warning icon to view the error. Click Cancel to update details, or Next to save the failed configuration.
  7. (Optional) Enable Token Usage Limit to track usage. You can set this later from the More options menu.
  8. Click Save > Confirm & Save. The model appears in the Models Library.
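Putting the configuration and auth steps together, a BYO call reduces to merging the auth profile's credentials with the configured headers before posting the test payload. A sketch with hypothetical field names (auth_profile, type, and access_token are illustrative, not the Platform's actual schema):

```python
def build_custom_llm_request(endpoint: str, auth_profile: dict,
                             headers: dict, payload: dict) -> dict:
    """Merge auth-profile credentials with configured headers for a
    bring-your-own model call; all field names are illustrative."""
    merged = {"Content-Type": "application/json", **headers}
    if auth_profile.get("type") == "oauth2":
        # OAuth v2.0 profile: attach the bearer token the profile obtained.
        merged["Authorization"] = f"Bearer {auth_profile['access_token']}"
    return {"url": endpoint, "headers": merged, "json": payload}
```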
Next: add Prompts.

Configure Kore.ai XO GPT

Kore.ai XO GPT provides fine-tuned LLMs optimized for enterprise conversational AI. Capabilities include conversation summarization, user query rephrasing, vector generation, answer generation, and DialogGPT conversation orchestration. These models are evaluated for accuracy, safety, and production readiness. Steps:
  1. Go to Generative AI Tools > Models Library.
  2. Click Enable Now for Kore.ai XO GPT.
  3. On the Models tab, toggle on the required models.
  4. Read the Policy Guidelines, select the checkbox, and click Save.
  5. The success message confirms configuration. XO GPT is now listed in the Models Library.
Next: use these models in GenAI Features.

Manage Token Usage

Token usage tracking gives you visibility into LLM consumption and performance across AI for Service. Track consumption, request volume, and median latency by module, model, and feature. Detailed breakdowns are available in Performance Analytics. Data collection:
  • Pre-built models (OpenAI, Azure OpenAI, Anthropic): the Platform captures usage data automatically, regardless of prompts used.
  • Custom models and Amazon Bedrock: you must map Request and Response Token Keys in custom prompts to enable tracking. Without this mapping, usage is unmonitored and costs may be unexpected.
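The token-key mapping exists because the Platform needs to know where in your model's response JSON the counts live. A sketch of the extraction, using dotted paths as a hypothetical mapping format (the example response mirrors OpenAI's usage object; your model's keys will differ):

```python
def extract_token_usage(response: dict, prompt_key: str,
                        completion_key: str) -> dict:
    """Read token counts from a response using dotted key paths,
    mirroring the Request/Response Token Key mapping described above."""
    def dig(obj, path):
        # Walk nested dicts: "usage.prompt_tokens" -> response["usage"]["prompt_tokens"]
        for part in path.split("."):
            obj = obj[part]
        return obj
    return {"prompt_tokens": dig(response, prompt_key),
            "completion_tokens": dig(response, completion_key)}
```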
You can enable token usage tracking during initial integration or anytime after from More options. Set Token Limit:
  • Maximum Tokens: Token usage limit for notification purposes. Exceeding this limit triggers an alert but does not block usage.
  • Duration: Number of days before the token limit resets automatically. Maximum: 90 days.
  • Schedule Start Date: Start date for the usage cycle.
Enable Usage Notifications:
  • Usage Notification: Toggle on to receive alerts when usage reaches the threshold.
  • Send Notification at: Threshold percentage (predefined or custom). Maximum 5 alerts.
  • Send to Users: Email addresses of users who should receive alert emails.

Reset or Delete an Integration

If you no longer need a configured LLM, you can remove it using the Reset Configuration (pre-built) or Delete (custom) option. What happens when you reset or delete:
  • Removes all integration details (keys, endpoints, deployment names, etc.).
  • Removes the model from the selection list for all LLM features and disables those features. You can select another configured model.
  • Deletes all related prompts and responses.
This change affects only the in-development copy of the app. Changes apply to the published version when you next publish the app with NLP configurations.
Steps:
  1. Go to Generative AI Tools > Models Library.
  2. Click the three-dot menu (More options) for the integration.
  3. Click Reset Configuration or Delete.
  4. Click Reset or Delete in the confirmation dialog.
  5. The success message confirms removal.