Assistant Configurations in the Admin Hub give administrators control over communication, security, pipeline behavior, and resource management across the AI deployment.
Overview
| Configuration | Purpose |
|---|---|
| Announcements | Publish banner messages to users |
| Email Templates | Customize invitation emails for users and admins |
| Guardrails | Enforce PII detection and content restrictions |
| MCP Server | Manage Model Context Protocol servers and agent tools |
| Orchestration Settings | Configure the AI query processing pipeline |
| Rate Limits | Control query point consumption per user category |
| Scheduler Settings | Manage automated agent execution permissions |
| Attachment Settings | Control file attachment uploads and processing |
Announcements
The Announcements feature lets administrators create, manage, and publish organizational messages as banners at the top of the user interface.
Navigation: Account Hub → Assistant Configurations → Announcements
Create an Announcement
- Click Create Announcement.
- Complete the required fields:
| Field | Details |
|---|---|
| Name | Up to 80 characters |
| Description | Up to 800 characters; supports bold, italic, underline, hyperlinks, colors, bullets, and numbered lists |
| Announcement Type | Banner (currently the only option) |
| Publish To | Everyone in the account, or specific agents/users |
Actions
| Action | Description |
|---|---|
| Preview | View the banner before publishing |
| Publish | Make the announcement visible (requires Name and Description) |
| Unpublish | Remove a published announcement from view |
| Delete | Permanently remove any announcement, published or not |
Manage Published Announcements
Editing a published announcement prompts you to either Discard changes or Publish changes; updates reach users only after you publish again. This step keeps changes intentional and controlled.
End-User Experience
- Announcements appear as banners at the top of the screen.
- Multiple announcements rotate every 10 seconds.
- Each banner includes a Dismiss button:
  - Dismissing hides the banner for the current session.
  - The banner reappears after a page refresh or after one hour.
Email Templates
Email Templates let you customize the invitation emails sent when users are added to the account, a workspace, or specific applications.
Navigation: Account Hub → Assistant Configurations → Email Templates
Template Categories
Admin Templates
| Template | Sent When |
|---|---|
| Admin Invitation | A user is added as an Admin |
| Custom Admin Invitation | A user is assigned a custom admin role |
| Workspace Invitation | A user is invited to a specific workspace |
Application Templates
Each provisioned application maintains its own separate Member Invitation template, sent when a user is added as a member of that application.
Customize a Template
- Use the toggle to enable or disable the template.
- Click the template to open the editor.
- Edit the subject line and email body.
- Preview the email to verify appearance.
- Add links to relevant resources (such as the Admin Console) for easy access.
Template Management
- Templates are organized by application; all provisioned applications appear in the list.
- Default subject and body text are provided and can be modified.
- Changes apply to all future invitations of that type.
- Admins and Custom Admins can review and adjust templates before the system sends invitations.
Guardrails
Guardrails is a multi-layered security and compliance framework that automatically scans AI inputs and outputs to protect sensitive data and enforce content policies.
Navigation: Account Hub → Assistant Configurations → Guardrails
Setup
- Go to Guardrails in Account Hub.
- Review pre-configured PII detection rules.
- Set up content restrictions for your organization.
- Validate configurations using the integrated testing suite.
Interface Layout
The Guardrails interface has three sections: PII Settings, Ban Topics, and Testing Suite.
PII Settings
Pre-configured rules detect common PII types:
| Rule | Detects |
|---|---|
| Email Address | Email patterns across various formats |
| Universal Phone Numbers | Domestic and international phone formats |
| Social Security Numbers | SSN patterns with format validation |
| Credit Card Numbers | Major credit card number formats |
Add or Update a Rule
Create custom detection patterns using regular expressions:
| Field | Description |
|---|---|
| Name | Unique identifier for the rule |
| Description | What the rule detects |
| Regex | Pattern used to match sensitive data |
| Action | How matched data appears to unauthorized users |
Action types:
| Action | Behavior |
|---|---|
| Redaction | Removes sensitive data from text |
| Static Replacement | Substitutes with predefined safe text |
| Masking | Replaces with placeholder characters (e.g., **** or XXX) |
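The three action types above can be illustrated with a small regex sketch. This is purely illustrative: the `apply_rule` helper, the sample patterns, and the placeholder text are assumptions for demonstration, not the platform's implementation.

```python
import re

def apply_rule(text, regex, action, replacement="[REDACTED]"):
    """Hypothetical sketch of a PII rule: match `regex` and transform matches
    according to the configured action type."""
    if action == "redaction":
        # Removes sensitive data from text entirely
        return re.sub(regex, "", text)
    if action == "static_replacement":
        # Substitutes matches with predefined safe text
        return re.sub(regex, replacement, text)
    if action == "masking":
        # Replaces each match with placeholder characters of the same length
        return re.sub(regex, lambda m: "*" * len(m.group()), text)
    raise ValueError(f"unknown action: {action}")

# Sample patterns, loosely modeled on the pre-configured rules above
EMAIL = r"[\w.+-]+@[\w-]+\.[\w.]+"
SSN = r"\b\d{3}-\d{2}-\d{4}\b"

text = "Contact jane@example.com, SSN 123-45-6789."
text = apply_rule(text, EMAIL, "masking")
text = apply_rule(text, SSN, "redaction")
print(text)  # email masked with asterisks, SSN removed
```

In practice a single production-grade regex rarely covers every format of a PII type, which is why the platform pairs each rule with the Testing Suite described below.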
Ban Topics
Ban Topics prevents AI models from engaging with prohibited subjects by dynamically modifying input prompts in real time, ensuring consistent compliance across all AI interactions.
| Feature | Description |
|---|---|
| Template System | Pre-defined restriction sets based on common industry standards |
| Custom Prompts | Create detailed, context-specific content restrictions |
| Enable/Disable | Toggle built-in or custom restrictions individually as needed |
Testing Suite
Validate guardrail configurations in an isolated environment before deploying them:
- Run full or targeted validation of PII rules and banned topics.
- Use manual, AI-driven, or batch test inputs, including boundary condition testing.
- Analyze results by rule, impact status, processing duration, and output.
- Preview data sanitization visually.
MCP Server
The MCP Server feature lets administrators manage Model Context Protocol (MCP) servers, which expose agents as tools for task execution and workflow management. Currently, the MCP Server operates exclusively within the Agent Platform. Support for connecting external MCP-compatible clients is under active development and will be available in a future release.
Navigation: Account Hub → Assistant Configurations → MCP Server
Default MCP Server
The platform includes a pre-configured Default MCP Server available immediately, with no additional setup required. It includes:
- All pre-built agents, available as tools.
- Integrated enterprise knowledge sources.
The Default MCP Server is designed to work within the Agent Platform environment. Compatibility with third-party MCP clients is planned for a future release.
View Server Details
Each MCP Server has a unique URL used to connect external applications.
To access the Default MCP Server configuration:
- Navigate to the MCP Server page.
- Select Default MCP Server.
- Copy the server URL from the dialog.
Key Features
| Feature | Description |
|---|---|
| Pre-built Agent Integration | Access all platform agents as tools through a single server |
| Enterprise Knowledge Sources | Use organizational knowledge bases within agent workflows |
| URL-based Access | Connect external systems using the server URL |
| Copy Functionality | Copy server URLs directly from the interface |
Orchestration Settings
Orchestration Settings is a pipeline management framework that lets administrators activate, configure, and manage individual components of the AI query processing workflow.
Navigation: Account Hub → Assistant Configurations → Orchestration Settings
Setup
- Go to Orchestration Settings under Assistant Configurations.
- Review available pipeline components and their current status.
- Configure components based on your organizational requirements.
- Test the configuration with sample queries.
- Monitor performance and adjust settings as needed.
Pipeline Components
| Component | Function |
|---|---|
| Guardrail Enforcement | Applies safety and compliance checks to AI responses |
| Small Talk Handling | Enables casual, conversational responses for a natural user experience |
| Intelligent Agent Routing | Routes queries to the most appropriate agent based on intent |
| Enterprise Knowledge Lookup | Searches configured internal knowledge bases for accurate responses |
| Fallback to AI Knowledge | Uses LLM-derived knowledge when primary sources are unavailable |
Guardrail Enforcement
Automatically applies security policies from your Guardrails configuration across all AI interactions.
- Enabled: Security policies are enforced across all pipeline queries.
- Disabled: Enforcement is removed from the pipeline.
- This setting cannot be toggled here; it is controlled exclusively through the Guardrails interface.
Small Talk Handling
Enables casual, contextual conversation for an improved user experience.
- Toggle on or off based on organizational preferences.
- Uses professionally crafted, pre-configured prompts (not editable) to ensure a consistent brand voice.
Intelligent Agent Routing
Analyzes incoming queries and directs them to the most appropriate AI agent.
- Uses predefined routing algorithms optimized for accuracy and efficiency.
- Routing logic is locked to ensure consistent performance and scalability.
Enterprise Knowledge Lookup
Integrates your organization’s knowledge base into the AI query pipeline so responses draw from authoritative internal sources.
| State | Behavior |
|---|---|
| Configured and Enabled | Knowledge base is queried automatically for all pipeline requests |
| Configured but Disabled | Knowledge base is accessible only through the compose bar’s agent selector |
| Not Configured | Component remains inactive until Enterprise Knowledge is set up |
Enterprise Knowledge remains accessible through the compose bar’s agent selector even when disabled in the query pipeline.
Fallback to AI Knowledge
Provides responses from LLM knowledge when primary systems cannot respond adequately.
| Option | Description |
|---|---|
| Toggle | Enable or disable based on operational requirements |
| LLM Model | Select from available language models |
| Web Search | Optional integration; available for supported models only |
| Custom Prompt | Editable prompt for tailored AI response behavior |
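The components above can be sketched as an ordered pipeline. Every function body below is a stub invented for illustration; the ordering follows the component list, but the names, toggles, and logic are assumptions, not the platform's implementation.

```python
def apply_guardrails(query):                      # Guardrail Enforcement
    # Stub: real enforcement applies the Guardrails PII and topic rules
    return query.replace("123-45-6789", "[REDACTED]")

def is_small_talk(query):                         # Small Talk Handling
    return query.lower().rstrip("!? ") in {"hi", "hello", "how are you"}

def route_to_agent(query):                        # Intelligent Agent Routing
    return "hr_agent" if "leave" in query.lower() else "general_agent"

KNOWLEDGE = {"leave policy": "Employees accrue 1.5 days of leave per month."}

def knowledge_lookup(query):                      # Enterprise Knowledge Lookup
    return next((a for k, a in KNOWLEDGE.items() if k in query.lower()), None)

def process_query(query, knowledge_enabled=True, fallback_enabled=True):
    query = apply_guardrails(query)
    if is_small_talk(query):
        return "Hello! How can I help?"
    agent = route_to_agent(query)
    if knowledge_enabled:
        answer = knowledge_lookup(query)
        if answer is not None:
            return answer
    if fallback_enabled:                          # Fallback to AI Knowledge
        return f"[{agent} answered from general LLM knowledge]"
    return "No answer available."

print(process_query("Hi!"))
print(process_query("What is the leave policy?"))
```

Note how disabling Enterprise Knowledge Lookup in this sketch sends the query straight to the fallback, mirroring the Configured but Disabled behavior described above.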
Rate Limits
Rate Limits control how many query points users consume over a configurable time window, helping manage system resources and usage.
Navigation: Account Hub → Assistant Configurations → Rate Limits
Point Consumption
| Query Type | Points | Examples |
|---|---|---|
| Simple | 1 point | Small talk, context-free queries, GPT responses without knowledge |
| Advanced | 3 points | GPT with knowledge, context-aware queries, follow-up queries |
User Categories
| Category | Description |
|---|---|
| Moderate Users | Default for all account users |
| Power Users | Added to a separate list; subject to different rate limits |
Admins can add users to, remove users from, or clear the entire Power User list.
Time Windows
Select the duration over which points are tracked:
- 1 hour
- 3 hours
- 6 hours
- 12 hours
View point consumption per query in the Logs tab of the dashboard.
All queries — including small talk and interrupted queries — consume points. Points are not counted if an error occurs while answering a query.
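The accounting above can be sketched as a simple tally over a time window. The point values come from the table, but the log structure and window logic here are illustrative assumptions, not the platform's API.

```python
from datetime import datetime, timedelta

POINTS = {"simple": 1, "advanced": 3}

def points_in_window(queries, now, window_hours):
    """Sum points for queries inside the window; errored queries cost nothing."""
    cutoff = now - timedelta(hours=window_hours)
    return sum(
        POINTS[q["type"]]
        for q in queries
        if q["time"] >= cutoff and not q.get("error", False)
    )

now = datetime(2025, 1, 1, 12, 0)
log = [
    {"type": "simple",   "time": now - timedelta(minutes=30)},              # 1 point
    {"type": "advanced", "time": now - timedelta(hours=2)},                 # 3 points
    {"type": "advanced", "time": now - timedelta(hours=2), "error": True},  # 0: errored
    {"type": "advanced", "time": now - timedelta(hours=5)},                 # outside 3h window
]
print(points_in_window(log, now, window_hours=3))  # → 4
```

Widening the window to 6 hours would also count the 5-hour-old advanced query, raising the total to 7.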
Scheduler Settings
Scheduler Settings lets administrators configure account-wide controls for automated agent execution, including availability, usage limits, and workspace permissions.
Navigation: Account Hub → Assistant Configurations → Scheduler Settings
Enable Scheduler
Control whether users can access scheduling functionality for agents.
| State | Effect |
|---|---|
| On | Users can create and manage schedulers for their agents |
| Off | Scheduling is disabled for all users |
Scheduling Limit per User
Set the maximum number of active schedulers each user can create simultaneously:
- No Limit
- 5 schedulers
- 10 schedulers
- 20 schedulers
- 30 schedulers
Select the limit that aligns with your organization’s automation needs and resource capacity.
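The limit check is straightforward; this sketch shows the logic, with the option values mirroring the list above. The function name and `None` convention for No Limit are assumptions for illustration, not the platform's implementation.

```python
ALLOWED_LIMITS = {None, 5, 10, 20, 30}  # None represents the "No Limit" option

def can_create_scheduler(active_count, limit):
    """Hypothetical check run before a user creates another active scheduler."""
    assert limit in ALLOWED_LIMITS, "limit must be one of the configured options"
    return limit is None or active_count < limit

print(can_create_scheduler(4, 5))       # True: one slot left
print(can_create_scheduler(5, 5))       # False: already at the limit
print(can_create_scheduler(100, None))  # True: No Limit
```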
Workspace Owner Permissions
When enabled, workspace owners can:
- Add schedulers during the agent creation workflow.
- Publish agents with pre-configured schedules.
- Set schedulers to run automatically for all end users.
End users receive agents with active schedulers but retain control to disable any scheduler they don’t need.
For agent-level scheduler configuration, see Schedule Trigger.
Attachment Settings
Attachment Settings let administrators control whether end users can upload file attachments in the compose bar and configure how the platform processes those attachments.
Navigation: Account Hub → Assistant Configurations → Attachment Settings
Enable Uploading Attachments in Compose Bar
Use the Enable Uploading Attachments in Compose Bar toggle to control whether end users can upload files when asking queries.
| State | Effect |
|---|---|
| On | End users see the attachment option in the compose bar and can upload files alongside their queries |
| Off | The platform hides the attachment option from the compose bar for all end users |
You can enable attachment uploads only if at least one LLM model is configured in the platform. If no LLM model is available, the toggle remains disabled and the platform displays a banner prompting you to configure an LLM model first.
LLM Model Selection
When you enable attachment uploads, the platform displays an LLM model selection dropdown below the toggle.
- The platform selects the default LLM model automatically.
- You can change the LLM model used for processing attachments at any time.
- Changes to the LLM model take effect immediately for all subsequent queries.
Allow Attachments Larger Than Model Context
A separate toggle controls whether end users can upload files that exceed the selected model’s context size.
| State | Behavior |
|---|---|
| Off | The platform restricts uploaded files to the selected model’s context size limit |
| On | The platform accepts files larger than the model’s context size and processes them using RAG (Retrieval-Augmented Generation) |
When enabled, select one of the following RAG processing options:
| Option | Description |
|---|---|
| Platform Built-in RAG | Uses the platform’s built-in RAG capability to process and retrieve content from large attachments. Displays a Select Embedding Model dropdown; the Kore embedding model is selected by default. Only embedding-capable models appear in the list. |
| LLM-Provided RAG | Uses the selected LLM’s native RAG or file-handling capabilities. Available only if the chosen LLM model supports the required APIs. |
Switching between RAG options or changing the embedding model removes all currently uploaded files across the account. The platform prompts you to confirm before applying the change.
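The routing the settings above describe can be sketched as a single decision: files within the model's context size are processed directly, and larger files are accepted only when the oversize toggle is on, then routed to RAG. The 4-characters-per-token estimate and all names here are illustrative assumptions, not the platform's actual sizing logic.

```python
def route_attachment(file_chars, context_tokens, allow_oversize, rag_option="platform"):
    """Hypothetical sketch of the attachment-processing decision."""
    est_tokens = file_chars // 4      # rough heuristic, not a real tokenizer
    if est_tokens <= context_tokens:
        return "direct"               # fits in the model's context window
    if not allow_oversize:
        return "rejected"             # toggle off: restrict to context size
    return f"rag:{rag_option}"        # toggle on: process via the chosen RAG option

print(route_attachment(8_000, 8_192, allow_oversize=False))    # direct
print(route_attachment(100_000, 8_192, allow_oversize=False))  # rejected
print(route_attachment(100_000, 8_192, allow_oversize=True))   # rag:platform
```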