This guide helps you migrate your existing SearchAssist application to Search AI. It covers configuration changes, deprecated features, and the steps required to set up sources, extraction strategies, answer generation, and business rules in Search AI.

Overview

Search AI is the next-generation evolution of SearchAssist. It moves beyond traditional keyword-based search by using advanced techniques and the latest LLMs for indexing, search, and answer generation—resulting in more accurate, relevant, and natural answers. A key difference is platform integration. SearchAssist is a standalone application; Search AI is part of the AI for Service platform. This integration lets users build applications and access multiple AI-powered products within a unified ecosystem. Because of these advanced techniques, many configurations that were required in SearchAssist are now handled automatically in Search AI, making setup simpler.

Architecture at a Glance

Search AI introduces a modernized architecture with greater flexibility, transparency, and control over how search and answers are generated. The new design replaces rigid, predefined workflows with a modular and configurable framework. Key changes:
  • Indexes: Search AI uses a single index for both search results and answers. SearchAssist maintained separate indexes for each.
  • Chunk Workbench: Search AI provides a Chunk Workbench to review and refine how ingested content is split before indexing, improving downstream results.
  • Extraction Techniques and Advanced Vector Models: Search AI introduces an Extraction Module supporting multiple strategies and advanced vector generation techniques for more accurate, context-aware results.
  • Agentic RAG: Search AI uses Agentic Retrieval-Augmented Generation to enhance user queries with contextual information, improving precision and naturalness of answers.
The diagram below shows the architectural components of Search AI.

Migration Steps

Creating an App

Search AI is a product within the AI for Service platform.
  1. Go to the platform homepage. After signing in, you are directed to the landing page showing your existing apps.
  2. Click Create New under Search AI.
  3. Provide the app details and click Create App.
  4. The app is created and you are taken to the search configurations.
Learn more.

Migrating Sources

Websites

  1. Go to the Websites page in your Search AI app.
  2. Set up a web crawl for each web source. Provide the URL or sitemap file, or use the Upload URL option for a CSV file with multiple sitemaps.
  3. Configure the crawl settings for each source. The following SearchAssist crawling options are available under Advanced Crawl Configurations:
    • Crawl Options
    • Use Cookies
    • JavaScript Rendered
    • Crawl beyond sitemap
    • Respect robots.txt
Learn more.

File Upload

  1. Go to the Documents page.
  2. Upload your files and directories.
  3. Organize the content into directories.
Learn more.

FAQs

  1. Manage FAQs using the Knowledge Module under Automation AI.
  2. Export existing FAQs from SearchAssist and import them into Automation AI as part of its Knowledge Graph.
  3. Alternatively, manually add FAQs or extract them from web pages or files.
  4. You can also use the JSON connector to add FAQs by saving the question as chunkTitle and the answer as chunkText.
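The question-to-chunkTitle and answer-to-chunkText mapping in step 4 can be sketched as follows. The FAQ pairs below are illustrative placeholders; only the chunkTitle and chunkText field names come from the connector format described above, and any other fields your import requires should be taken from the JSON connector documentation:

```python
import json

# Hypothetical FAQ pairs exported from SearchAssist
faqs = [
    ("How do I reset my password?", "Use the Forgot Password link on the sign-in page."),
    ("Where can I view my invoices?", "Open Billing > Invoices in your account settings."),
]

# Map each question/answer onto the fields the JSON connector expects:
# the question becomes chunkTitle, the answer becomes chunkText.
records = [{"chunkTitle": question, "chunkText": answer} for question, answer in faqs]

print(json.dumps(records, indent=2))
```

The resulting JSON array can then be uploaded through the JSON connector.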

Connectors

  • Import data from third-party applications using their respective connectors. Search AI supports 60+ connectors.
  • For each connector, refer to the relevant documentation for setup information.
  • The SearchAssist Sync Specific Content option is available in Search AI as Ingest Filtered Content. Select Ingest Filtered Content, go to Advanced Filters, and define your filtering conditions.
  • Search AI supports Access Control (RACLs) for connectors. Refer to the RACL documentation for details.
Learn more.

Structured Data

  • Import structured data using the JSON connector.
  • Prepare the structured data in the expected JSON format using the sample file, then upload it.
  • Alternatively, import structured data via APIs. Refer to this guide for details.
Learn more.

Migrating Extraction Configuration

In SearchAssist, there is no dedicated Extraction module. Extraction happens automatically:
  • When Extractive Answers are enabled, a snippet extraction module using rule-based methods is added.
  • If not enabled, simple text extraction is applied by default.
In Search AI, a dedicated Extraction module lets you choose how content is processed before indexing. Extraction techniques can vary by content type. Text extraction is applied to all content by default, but you can select more advanced strategies like Layout-aware or HTML-aware extraction. Rule-based extraction is no longer supported.

Text Extraction

By default, text extraction is applied to all content. No changes are needed if you rely on this method.

Rule-Based Extraction

Rule-based snippet extraction (via Extractive Answers in SearchAssist) is deprecated in Search AI. The recommended alternative is Layout-aware Extraction, which provides higher accuracy without manual rules. For HTML-based content, use HTML-aware Extraction. Search AI also offers other advanced extraction techniques to suit your content type. Learn more. To change the default, go to the Extract page and select the strategy that fits your content.

Migrating Workbench Stages

In SearchAssist, the Document Workbench processes and enriches documents from different sources. In Search AI, this capability is extended with the addition of a Chunk Workbench, allowing enrichment at both the document and chunk level.
  • Use the Document Workbench to apply transformations during extraction.
  • Use the Chunk Workbench to refine or enrich content after it is split into chunks.

Set Up Document Workbench

  1. Go to the Extract page.
  2. Open the Content Strategy configured for your content type.
  3. Add and configure the stage as required.
Refer to this guide for configuration details.

Set Up Chunk Workbench

The Chunk Workbench is a new workbench in Search AI for processing chunks. Learn more.
  1. Go to the Enrich page.
  2. Click +New Stage and add a new Chunk Workbench stage.
  3. Refer to this guide for supported stage types and configuration details.

Points to Note

  • The Document Workbench and Chunk Workbench in Search AI support a common set of stages.
  • Search AI only supports script conditions for the Custom Script Stage. If you wrote conditions as scripts for any other stage, convert the entire stage to a Custom Script stage or rewrite the conditions in the “basic” format.
  • Search AI does not support the Rename action through the workbench. Use Manage Schema to edit field names or descriptions instead.
  • The following SearchAssist stages are deprecated in Search AI because advanced semantic embeddings make them unnecessary:
    • Entity Extraction
    • Traits Extraction
    • Keyword Extraction
    • Semantic Meaning
  • The SearchAssist Snippet Extraction stage (automatically added when Extractive Answers is enabled) is deprecated. Instead, configure an appropriate extraction strategy on the Extract page.
  • The Custom LLM Prompt stage is available as Transform Documents with LLM in the Document Workbench and Enrich Chunks with LLM in the Chunk Workbench.
  • Use the API Stage for any other custom chunk processing.
  • Use the simulator to test the application’s behavior after processing.
  • Field names have been updated to the unified schema. Ensure you use the correct field names during migration.
Learn more.

Migrating Answer Snippets

As in SearchAssist, answers can be generated using Extractive Answers or Generative Answers. Search AI presents Generative Answers by default. To verify, go to the Answer Configuration page.

Generative Answers

Generative Answers are selected with default configurations. Update the configurations as required. For Generative Answers to work, configure and enable the LLM for answer generation:
  1. Go to Generative AI Tools > Models Library.
  2. Configure the required model.
  3. Go to Gen AI features.
  4. Select the configured model for Answer Generation. A default prompt is used; create a custom prompt if needed.
Custom configurations from SearchAssist—such as response token size and chunk order—are available in the Answer Configurations.

Extractive Answers

In SearchAssist, extractive answers use rule-based chunking. In Search AI, configure extraction strategies and enable Extractive Answers to achieve the same outcome.
  1. Go to the Answer Configuration page, select Extractive Answers, and configure the options.
  2. Go to the Extraction page in Search AI.
  3. Configure Layout-Aware Extraction using the General template for PDFs, DOCX, and documents with tables or images.
  4. Configure Advanced HTML Extraction using the General template for HTML content.
  5. Alternatively, choose any strategy best suited for your content type.
These steps ensure that headers, paragraphs, tables, and images are captured correctly, producing answer snippets similar to SearchAssist’s Extractive Model. Learn more.

Migrating Search Configuration

The following settings are deprecated because the Search AI processing pipeline uses semantic similarity and enhanced retrieval methods, removing the need for traditional keyword-based search configurations:
  • Weights
  • Presentable
  • Prefix Search
  • Search Relevance
Additional notes:
  • Small Talk can be managed through conversational flows in Automation AI. Go to the Small Talk page and provide the details. Learn more.
  • Synonym and Stop Word support is not currently available in Search AI. It will be added in upcoming releases.
  • Spell correction is not required since Search AI does not rely on keyword matching.

Migrating Business Rules

In Search AI, semantic search replaces traditional keyword-based search, so NLP-based rules from SearchAssist are deprecated. Configure Contextual Business Rules manually instead. The process for setting contextual rules in Search AI is the same as in SearchAssist. However, field names have been updated to the unified schema, so some names may have changed. Review and select the correct field names when migrating. The context object in the condition block now refers to the context object and session variables provided by the AI for Service platform. Learn more.

Migrating Facets

Facets are supported in Search AI as filters. Currently, facets are applied only when results are returned through the Advanced Search API; to include them in the response, set the isFacetsEnable field in the API request. To create facets or filters:
  1. Go to the Search Results page.
  2. Manually create the required filters. A default filter that organizes content into tabs by sourceType is already available.
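As a sketch, a request body that enables facets for the Advanced Search API might be built like this. Apart from the isFacetsEnable field named above, the payload shape and the "query" field name are assumptions for illustration; consult the Advanced Search API reference for the exact contract:

```python
import json

def build_search_request(query: str, include_facets: bool = True) -> dict:
    # "query" is an assumed field name for illustration;
    # isFacetsEnable is the field documented above.
    return {
        "query": query,
        "isFacetsEnable": include_facets,  # ask the API to return facet buckets
    }

print(json.dumps(build_search_request("warranty policy"), indent=2))
```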

Other Updates

Language Support

Search AI supports 100+ languages—all languages supported by the underlying LLM and embedding models. Refer to this guide for language-specific configuration and recommendations.

LLM Configuration

Search AI uses the AI for Service platform’s model library, a common place to configure LLMs and prompts for all Gen AI capabilities. The Gen AI features page lets developers choose models and prompts for different Search AI features. All configurations are available under Generative AI Tools. Learn more.

Custom Configurations

The following table describes updates to custom configurations from SearchAssist.

Configuration | Migration Notes
Chunk Extraction Method | Multiple extraction methods are available, including the text and layout-aware methods from SearchAssist. Configure on the Extract page. Learn more.
Chunk Token Size | Configure when defining the Extraction Strategy for content. Learn more.
Chunk Vector Fields | Available in Vector configuration.
Number Of Chunks | Replaced by the Token Budget field, which dynamically calculates the number of chunks based on the LLM and extraction strategies. Available in Answer Configuration.
Rewrite Query | Enhanced with two options: (1) Query Rephrase for the Advanced Search API via Agentic RAG; (2) Rephrase User Query to reconstruct incomplete or ambiguous inputs using conversation context.
Chunk Retrieval Strategy | Some legacy retrieval methods are deprecated. Configure Vector Retrieval and Hybrid Retrieval via the Retrieval page. Learn more.
Enable Vector Search | Enable using configurations on the Retrieval page.
Chunk Deviation Percent | Configure in Retrieval under Proximity Threshold.
Rerank Chunks | Available as an advanced configuration.
Rerank Chunk Fields | Available as an advanced configuration.
Maximum re-rank chunks | Available as an advanced configuration.
Chunk Order | Available as a config field on the Answer Configuration page.
Snippet Selection | Deprecated. No longer supported.
Snippet Selection LLM | Deprecated. No longer supported.
Max Token Size | Implemented using the Token Budget field in Answer Generation. In SearchAssist, Max Token Size defined the total tokens sent to the LLM (prompt + chunks + output). Token Budget specifies tokens allocated only for retrieved chunks, excluding system prompt and LLM output tokens. Learn more.
top_p | In SearchAssist, this was specific to OpenAI. In Search AI, configure it for any LLM that supports it via Generative AI Tools > GenAI Features > Advanced Settings for the Answer Generation feature. Learn more.
Response Size | Use the Response Length parameter in Answer Configuration (Generative Answers).
Answer Response Length | Use the Response Length parameter in Answer Configuration (Extractive Answers).
Enable Page Body Cleaning | Enable via the Automatic Cleaning option in web crawl configuration. Navigate to Advanced Crawl Configurations for a web crawl and select Automatic Cleaning under Processing Options.
Custom Vector Model | Set the Embedding model on the Vector Configuration page.
Hypothetical Embeddings | Deprecated. No longer supported.
Crawl Delay | Configure in Advanced Crawl Configurations for a web crawl. This field applies only when the JavaScript Rendered option is enabled.
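To illustrate the Token Budget behavior described above (a budget that covers only retrieved chunks, not the prompt or output), here is a minimal sketch of how a budget caps the number of chunks sent to the LLM. The greedy top-ranked selection is an assumption for illustration, not Search AI's actual implementation:

```python
def chunks_within_budget(chunk_token_counts: list[int], token_budget: int) -> list[int]:
    """Keep top-ranked chunks (in retrieval order) until the token budget
    reserved for retrieved chunks is exhausted; return their indices."""
    selected, used = [], 0
    for index, tokens in enumerate(chunk_token_counts):
        if used + tokens > token_budget:
            break
        selected.append(index)
        used += tokens
    return selected

# e.g. chunks of 120, 200, 150, and 300 tokens against a 500-token budget:
# 120 + 200 + 150 = 470 fits, adding 300 would exceed 500
print(chunks_within_budget([120, 200, 150, 300], 500))  # → [0, 1, 2]
```

This is why the fixed Number Of Chunks setting is no longer needed: the count falls out of the budget and the chunk sizes your extraction strategy produces.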

Unified Schema

Search AI introduces a Unified Schema that standardizes how content from diverse sources is ingested and managed. The schema supports a broad range of connectors and content types, ensuring consistency across ingestion and retrieval. Developers can also define up to 50 custom fields to capture domain-specific metadata. Wherever schema fields are referenced in Search AI (e.g., workbench, business rules, filters), use the correct field names to avoid mapping or retrieval errors.

Training

Search AI enhances the training process with greater automation and efficiency. During training, design-time configurations are applied and ingested content is split into chunks. Unlike SearchAssist, Search AI introduces Automatic Training, which runs during both initial ingestion and incremental updates, reducing the need for manual intervention. Manual training may still be required in certain scenarios. Learn more about app training.

New Crawler

The new web crawler is faster, more reliable, and LLM-ready. It processes more pages in less time, reduces errors and interruptions, and automatically converts content into Markdown for seamless use with AI/LLM applications. It also handles JavaScript-heavy pages more efficiently for smoother, more consistent crawling.

Application Administration

Workspace Management

Search AI is part of the AI for Service platform. The platform organizes users and resources through Workspaces. Each workspace functions as a container that manages users, applications, and access controls. Workspaces can be shared between users to collaborate on all apps within the workspace. Learn how to create and manage workspaces.

App Sharing

Search AI supports App Sharing, allowing applications to be shared with other users for collaboration. Shared users can access and work on the same application based on their assigned permissions. To share your app:
  1. Go to the User Management page.
  2. Click Invite User and provide the user details and role.
  3. Send the invite.
Learn more about User Management and Roles.

Change Logs

Change logs are maintained at the platform level and are common to all application components, including Search AI. To view change logs:
  1. Navigate to App Settings.
  2. Open the Change Logs page. By default, logs for all modules are shown.
  3. To filter for Search AI logs, click More Filters, select Search AI from the Modules dropdown, then click Add.
Learn more.

App Analytics

Search AI provides analytics at the platform level, consolidating insights across all integrated applications. To view answer insights:
  1. Navigate to the Analytics page in the Search AI module.
  2. Under the Search AI section, open Answer Insights to view search-related metrics.
Learn more.

App Testing

Search AI supports testing at multiple levels:
  1. Document and Chunk Workbench: Provides a simulator to test the functionality and behavior of processing stages during development.
  2. Answer Generation: Use the Test Answers tool on the Answer Generation page to validate end-to-end search behavior after all configurations are made.
  3. Complete Application Testing: The Test option (top-right of each page) lets you test the complete agent, including all integrated modules, using both voice and chat widgets.

Deprecated Features

Result Ranking

In SearchAssist, developers relied on manual result ranking rules. In Search AI, Business rules and Agentic RAG provide more flexible, context-aware control over retrieval, eliminating the need for per-query ranking rules. This feature is deprecated.

Search Interface Configurations

Custom interfaces can be built using Search AI’s public APIs. The AI for Service Platform also provides agent experiences across 40+ channels that can be integrated with minimal configuration.

Traits

Traits are deprecated in Search AI. Since Search AI uses semantic embeddings and does not rely on traditional search relevance tools, traits are not required. Vector representations and keyword relevance via hybrid search capture the relationship between terms based on context.