This guide covers how to validate your Search AI configuration with the built-in testing and debugging tools, so you can confirm answer quality before deployment.

Navigation: Configuration > Testing and Debugging

Testing Answers

Access Testing

  1. Navigate to Configuration > Answer Generation or Configuration > Testing and Debugging
  2. Click Test Answers
  3. Enter a query
  4. Review the generated answer
  5. Use the debug option to analyze how the answer was generated

Testing Workflow

| Step | Action | Purpose |
| --- | --- | --- |
| 1 | Enter test query | Simulate user input |
| 2 | Review answer | Verify response quality and accuracy |
| 3 | Open debug view | Understand how the answer was generated |
| 4 | Analyze chunks | Check which content was used |
| 5 | Refine configuration | Adjust settings based on findings |
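The workflow above can be sketched as a small test harness. Note that `run_test_query` and the field names in its response are hypothetical stand-ins for the platform's Test Answers call and debug payload, used here only to illustrate the loop:

```python
# Sketch of the test-and-review loop. `run_test_query` is a hypothetical
# stand-in for the platform's Test Answers call; a real integration would
# go through the product UI.
def run_test_query(query: str) -> dict:
    # Stubbed response shaped roughly like the debug view's output (assumed).
    return {
        "answer": f"Stub answer for: {query}",
        "qualified_chunks": [{"id": "c1", "score": 0.82}],
        "timing_ms": {"retrieval": 120, "llm": 900},
    }

def review(result: dict) -> list[str]:
    """Flag findings that suggest a configuration change."""
    findings = []
    if not result["qualified_chunks"]:
        findings.append("no chunks qualified: check indexing and thresholds")
    if sum(result["timing_ms"].values()) > 3000:
        findings.append("slow response: review LLM call count")
    return findings

result = run_test_query("How do I reset my password?")
print(result["answer"])
print(review(result))  # an empty list means no obvious issues
```

Running every representative query through a loop like this after each configuration change makes step 5 (refine, then retest) repeatable rather than ad hoc.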

Debug Information

The debug view provides comprehensive insights into answer generation.

Debug Components

| Component | Description |
| --- | --- |
| Qualified Chunks | Chunks selected and used to generate the answer |
| Retrieval Details | How chunks were identified and ranked |
| LLM Request/Response | Full prompt sent and response received (for generative answers) |
| Processing Time | Time taken by each component |
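A minimal sketch of reading these components from a debug payload. The JSON field names below are assumptions for illustration, not the product's actual schema:

```python
# Stubbed debug payload covering the four components listed above
# (field names are illustrative assumptions).
debug_payload = {
    "qualified_chunks": [
        {"id": "doc1#3", "score": 0.91},
        {"id": "doc2#7", "score": 0.78},
    ],
    "retrieval": {"strategy": "hybrid", "top_k": 5},
    "llm": {"prompt_tokens": 1200, "response_tokens": 250},
    "processing_time_ms": {"retrieval": 140, "llm": 980},
}

# Two quick reads: end-to-end latency and the highest-ranked chunk.
total_ms = sum(debug_payload["processing_time_ms"].values())
top_chunk = max(debug_payload["qualified_chunks"], key=lambda c: c["score"])
print(f"total latency: {total_ms} ms, top chunk: {top_chunk['id']}")
```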

Agentic RAG Debugging

When Agentic RAG is enabled, an additional Retrieval tab appears showing:
| Information | Description |
| --- | --- |
| Agent Sequence | Order in which agents were invoked |
| Agent Input | Data sent to the LLM by each agent |
| Agent Output | Results returned from each agent |
| LLM Timing | Time taken per LLM call |
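When reviewing the Retrieval tab, it can help to summarize the trace: the invocation order, the total LLM time, and the slowest agent. The trace structure below is an assumption for illustration:

```python
# Illustrative agent trace in the shape the Retrieval tab describes
# (agent names and fields are assumptions).
agent_trace = [
    {"agent": "query_planner", "llm_ms": 300},
    {"agent": "retriever", "llm_ms": 0},
    {"agent": "answer_composer", "llm_ms": 1100},
]

sequence = [step["agent"] for step in agent_trace]          # agent sequence
llm_total = sum(step["llm_ms"] for step in agent_trace)     # LLM timing
slowest = max(agent_trace, key=lambda s: s["llm_ms"])["agent"]
print(sequence, llm_total, slowest)
```

If `llm_total` dominates the response time, that points at the "Slow responses" row in the troubleshooting table: too many or too-slow LLM calls.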

Answer Insights

The Answer Insights feature provides analytics for query-response interactions.

Available Data

| Feature | Description |
| --- | --- |
| Query Grouping | View all answers for grouped queries |
| Search Logs | Filter logs by answer and channel |
| Detailed View | Query overview, debug info, LLM details |
| Performance Tracking | Monitor answer quality over time |

Accessing Answer Insights

Navigate to Analytics > Search AI > Answer Insights

Debugging Checklist

Common Issues and Solutions

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No results returned | Content not indexed | Verify content sources and extraction settings |
| Poor relevance | Threshold too high/low | Adjust similarity score threshold |
| Missing information | Chunks too small | Increase chunk size or token budgets |
| Incomplete answers | Insufficient context | Increase Top K chunks or token budget |
| Business rules not applying | Condition mismatch | Test with debug to verify rule triggers |
| Slow responses | Too many LLM calls | Review Agentic RAG agent usage |
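The troubleshooting table can be kept at hand in a test script as a simple lookup; the symptom keys below are illustrative names, not product identifiers:

```python
# The common-issues table as a lookup (symptom keys are illustrative).
FIXES = {
    "no_results": "Verify content sources and extraction settings",
    "poor_relevance": "Adjust similarity score threshold",
    "missing_information": "Increase chunk size or token budgets",
    "incomplete_answers": "Increase Top K chunks or token budget",
    "rules_not_applying": "Test with debug to verify rule triggers",
    "slow_responses": "Review Agentic RAG agent usage",
}

def suggest_fix(symptom: str) -> str:
    # Default advice when the symptom is not in the table.
    return FIXES.get(symptom, "Open debug view and inspect qualified chunks")

print(suggest_fix("poor_relevance"))
```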

Configuration Verification

| Check | Location | What to Verify |
| --- | --- | --- |
| Retrieval Strategy | Configuration > Retrieval | Vector vs. Hybrid selection |
| Thresholds | Configuration > Retrieval | Similarity, proximity, Top K values |
| Answer Type | Configuration > Answer Generation | Extractive vs. Generative |
| LLM Settings | Configuration > Answer Generation | Model, prompt, temperature |
| Business Rules | Configuration > Business Rules | Active rules and conditions |
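The verification pass above can also be automated over an exported configuration. The dict keys and export format below are assumptions for the sketch:

```python
# Hypothetical exported configuration (keys are illustrative assumptions).
config = {
    "retrieval": {"strategy": "hybrid", "similarity_threshold": 0.6, "top_k": 5},
    "answer_generation": {"type": "generative", "temperature": 0.2},
    "business_rules": [{"name": "boost-faq", "active": True}],
}

def verify(config: dict) -> list[str]:
    """Return a list of problems; empty means the checks passed."""
    problems = []
    if config["retrieval"]["strategy"] not in ("vector", "hybrid"):
        problems.append("unknown retrieval strategy")
    if not 0.0 <= config["retrieval"]["similarity_threshold"] <= 1.0:
        problems.append("similarity threshold out of range")
    if config["answer_generation"]["type"] not in ("extractive", "generative"):
        problems.append("unknown answer type")
    if not any(r["active"] for r in config["business_rules"]):
        problems.append("no active business rules")
    return problems

print(verify(config))
```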

Best Practices

Testing Strategy

  1. Test incrementally - Validate each configuration change before moving to the next
  2. Use varied queries - Test different query types, lengths, and phrasings
  3. Include edge cases - Test ambiguous queries and boundary conditions
  4. Compare results - Document before/after when making changes

Debug Analysis

  1. Review qualified chunks - Ensure relevant content is being selected
  2. Check chunk rankings - Verify highest-ranked chunks are most relevant
  3. Analyze LLM prompts - Confirm context is properly structured
  4. Monitor timing - Identify performance bottlenecks
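Step 2 of the debug analysis (check chunk rankings) can be expressed as a one-line invariant over the qualified chunks; the chunk shape here is a stubbed assumption:

```python
# Ranked chunks as the debug view might list them (shape is an assumption).
chunks = [
    {"id": "a", "score": 0.90},
    {"id": "b", "score": 0.70},
    {"id": "c", "score": 0.65},
]

# Rankings should be in non-increasing score order.
scores = [c["score"] for c in chunks]
assert scores == sorted(scores, reverse=True), "ranking out of order"
print("ranking consistent")
```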

Ongoing Monitoring

  1. Track Answer Insights - Review analytics regularly
  2. Monitor feedback - Enable user feedback and review ratings
  3. Iterate configuration - Continuously refine based on data
  4. Document changes - Keep records of configuration modifications

Testing Scenarios

Scenario 1: Basic Answer Validation

1. Enter a simple factual query
2. Verify answer accuracy
3. Check the source citation
4. Confirm the response time is acceptable

Scenario 2: Retrieval Quality Check

1. Enter a query that matches specific content
2. Open the debug view
3. Verify that the expected chunks are qualified
4. Check the similarity scores
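Scenario 2 can be written as assertions: given the qualified chunks from the debug view (stubbed below; field names are assumptions), confirm the expected content qualified and all scores clear the configured threshold:

```python
# Stubbed debug output for a query targeting a known article
# (IDs, threshold, and fields are illustrative assumptions).
expected_ids = {"kb-article-42"}
threshold = 0.6
qualified = [
    {"id": "kb-article-42", "score": 0.88},
    {"id": "kb-article-7", "score": 0.64},
]

found = {c["id"] for c in qualified}
assert expected_ids <= found, f"missing chunks: {expected_ids - found}"
assert all(c["score"] >= threshold for c in qualified)
print("retrieval quality check passed")
```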

Scenario 3: Business Rule Verification

1. Configure a test rule with known conditions
2. Enter a query that should trigger the rule
3. Open the debug view
4. Confirm the rule was applied correctly
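Scenario 3 in miniature: a toy rule and a check that a matching query triggers it. The rule model below is illustrative, not the product's actual business-rule format:

```python
# Toy business rule: trigger on a keyword match (illustrative model).
rule = {"name": "boost-pricing", "contains": "price", "action": "boost:pricing-docs"}

def applied_rules(query: str, rules: list[dict]) -> list[str]:
    """Return the names of rules whose condition matches the query."""
    return [r["name"] for r in rules if r["contains"] in query.lower()]

print(applied_rules("What is the price of the Pro plan?", [rule]))  # ['boost-pricing']
```

Testing with both a matching and a non-matching query confirms the condition fires only when intended, which is what the debug view verifies in step 4.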

Scenario 4: Agentic RAG Testing

1. Enable Agentic RAG
2. Enter a complex query
3. Review the Retrieval tab in the debug view
4. Verify the agent sequence and outputs

Quick Reference

Debug Tab Contents

| Tab | Shows |
| --- | --- |
| Qualified Chunks | Selected content for answer |
| Retrieval | Agent processing (Agentic RAG only) |
| LLM Details | Prompt and response data |

Key Metrics to Monitor

| Metric | Healthy Range |
| --- | --- |
| Response Time | < 3 seconds (varies by LLM) |
| Chunk Relevance | Top chunks match query intent |
| Answer Accuracy | Matches source content |
| User Feedback | Positive ratings trending up |
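The two numeric metrics above lend themselves to a quick automated health check; the metric values below are stubbed, and the 3-second budget comes from the table (it varies by LLM):

```python
# Stubbed metric readings (values are illustrative; the 3-second budget
# is from the table above and varies by LLM).
metrics = {"response_time_s": 2.4, "positive_feedback_ratio": 0.81}

healthy = (
    metrics["response_time_s"] < 3.0
    and metrics["positive_feedback_ratio"] > 0.5
)
print("healthy" if healthy else "needs attention")
```

Chunk relevance and answer accuracy are judgment calls against the source content, so they remain manual review items rather than thresholds.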