The Ranking and Resolver (R&R) engine receives outputs from the Fundamental Meaning (FM), Machine Learning (ML), and Knowledge Graph (KG) engines and determines the single winning intent for each user utterance.

R&R Versions

  • Version 1 (default): Rescores intents from all NLP engines and ranks them. Works with all ML models.
  • Version 2: Ranks definitive matches from the ML and KG engines only (no rescoring, no FM). Recommended for Few-shot ML models.
Enable Version 2:
  1. Go to Natural Language > NLU Config > Ranking and Resolver Engine.
  2. Set Rank and Resolver Version to Version 2.
  3. Confirm in the dialog.
Version 2 works best with Few-shot ML and KG models. FM engine configurations are disabled in V2. Changing versions alters how winning intents are determined — verify with Utterance Testing before deploying.
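The behavioral difference between the two versions can be sketched in Python. This is an illustrative model only; the `Match` structure and function names are hypothetical, not platform APIs:

```python
from dataclasses import dataclass

@dataclass
class Match:
    engine: str       # "ML", "KG", or "FM"
    intent: str
    score: float      # 0.0 - 1.0
    definitive: bool

def resolve_v1(matches: list[Match]) -> list[Match]:
    """Version 1: rescore intents from all NLP engines, then rank them."""
    return sorted(matches, key=lambda m: m.score, reverse=True)

def resolve_v2(matches: list[Match]) -> list[Match]:
    """Version 2: rank only definitive matches from ML and KG; FM is ignored."""
    eligible = [m for m in matches
                if m.definitive and m.engine in ("ML", "KG")]
    return sorted(eligible, key=lambda m: m.score, reverse=True)
```

Note how `resolve_v2` drops FM matches and non-definitive matches entirely, which is why it pairs well with Few-shot ML models that emit definitive matches.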

Engine Outputs (What R&R Receives)

  • ML
    • Definitive match: fuzzy score ≥ 95% against the user input.
    • Probable match: confidence scores per intent; top 5 matching utterances per intent (score > 0.3 threshold).
  • KG
    • Definitive match: fuzzy score ≥ 95% against the user input.
    • Probable match: confidence scores for questions matching ≥ 50% of path terms and ≥ 60% word match; includes the synonyms, nodes, paths, and traits matched.
  • FM
    • Definitive match: pattern match or exact task name match.
    • Probable match: partial label matches, including synonyms.
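As a rough illustration of the 95% definitive-match threshold, a fuzzy score against a trained utterance can be approximated with Python's `difflib`. The platform's actual fuzzy scorer is not public; this is an analogy only:

```python
from difflib import SequenceMatcher

def fuzzy_score(user_input: str, trained_utterance: str) -> float:
    """Similarity in [0, 1]; a stand-in for the platform's own scorer."""
    return SequenceMatcher(None, user_input.lower(),
                           trained_utterance.lower()).ratio()

def is_definitive(user_input: str, trained_utterance: str) -> bool:
    # ML and KG report a definitive match at a fuzzy score of 95% or higher.
    return fuzzy_score(user_input, trained_utterance) >= 0.95
```

An exact or near-exact restatement of a trained utterance clears the threshold; a loosely related paraphrase falls into probable-match territory instead.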

How R&R Decides the Winning Intent

Definitive matches found:
  • If any engine returns a definitive match, all probable matches are discarded.
  • Single definitive match → that intent wins.
  • Multiple definitive matches from different engines → ambiguous; the user is prompted to choose.
Only probable matches found:
  1. Score each top-5 ML utterance against the probable intents; take the highest score per intent.
  2. Score alternate/modified KG questions; take the highest score per intent.
  3. Rank all scores; the top scorer wins.
  4. If the top two intents are within 2% of each other → ambiguous.
  5. If only FM or only ML found a probable match → that intent wins.
  6. If only KG found a probable match:
    • Score > 80% → wins.
    • 60% < score ≤ 80% → wins, shown as a “Did you mean?” suggestion.
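The probable-match resolution steps above can be sketched as follows. This is a simplified model; the function and parameter names are illustrative, not platform APIs, and `intent_scores` is assumed to already hold the best score per intent after steps 1 and 2:

```python
def resolve_probable(intent_scores: dict[str, float],
                     source_engines: dict[str, set[str]]):
    """Pick a winner when only probable matches exist.

    intent_scores: best score per intent (max over top-5 ML utterances
                   and alternate/modified KG questions).
    source_engines: which engines produced each intent's match.
    """
    ranked = sorted(intent_scores.items(), key=lambda kv: kv[1], reverse=True)
    top_intent, top_score = ranked[0]

    # Top two within 2% of each other -> ambiguous.
    if len(ranked) > 1 and top_score - ranked[1][1] <= 0.02:
        return ("ambiguous", [ranked[0][0], ranked[1][0]])

    engines = source_engines[top_intent]
    if engines <= {"FM"} or engines <= {"ML"}:
        # Only FM or only ML found a probable match -> it wins.
        return ("win", top_intent)
    if engines == {"KG"}:
        # KG-only probable matches are gated by score.
        if top_score > 0.80:
            return ("win", top_intent)
        if top_score > 0.60:
            return ("did_you_mean", top_intent)
        return ("no_match", None)
    return ("win", top_intent)
```

For example, a lone KG probable match scoring 0.70 is surfaced as a “Did you mean?” suggestion rather than winning outright.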
Ambiguity resolution:
  • Disambiguation Dialog — multiple definitive matches; user chooses.
  • Did You Mean Dialog — multiple or low-confidence probable matches; user confirms.
Both dialogs are customizable in NLP Standard Responses.

Thresholds and Configuration

Go to Natural Language > NLU Config > Ranking and Resolver Engine.
  • Prefer Definitive Matches: When enabled (default), definitive matches win over probable matches. When disabled, all matches are rescored together.
  • Rescoring of Intents: When disabled, all qualified intents are presented to the user for selection (no rescoring).
  • Negative Patterns: Enable to filter out intents matched by FM or ML that match negative patterns.
  • Proximity of Probable Matches: The maximum score gap between the top and next probable intent for them to be treated as equally important.
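How these settings gate the decision flow can be sketched as follows. The field names and the 0.02 proximity default are assumptions for illustration, not the platform's configuration schema:

```python
from dataclasses import dataclass

@dataclass
class RRConfig:
    prefer_definitive_matches: bool = True    # platform default: enabled
    rescoring_of_intents: bool = True
    use_negative_patterns: bool = False
    proximity_of_probable_matches: float = 0.02  # assumed gap, per the 2% rule

def pick(definitive: list[str], probable: list[tuple[str, float]],
         cfg: RRConfig):
    if cfg.prefer_definitive_matches and definitive:
        # Definitive matches win outright; probable matches are discarded.
        return definitive[0] if len(definitive) == 1 else "ambiguous"
    if not cfg.rescoring_of_intents:
        # Without rescoring, all qualified intents go to the user.
        return "present_all"
    ranked = sorted(probable, key=lambda p: p[1], reverse=True)
    if (len(ranked) > 1 and
            ranked[0][1] - ranked[1][1] <= cfg.proximity_of_probable_matches):
        return "ambiguous"
    return ranked[0][0] if ranked else "default_intent"
```

Widening `proximity_of_probable_matches` makes close runners-up surface as ambiguous more often; disabling `prefer_definitive_matches` sends definitive matches through rescoring alongside probable ones.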

Dependency Parsing Model

Enables intent scoring based on word dependencies (not just word presence and position).
  • Model 1 (default): Based on word presence and position; scored by FM only.
  • Model 2: Based on a dependency matrix; scored by FM, then rescored by R&R.
Configure at NLU Config > Ranking and Resolver Engine > Dependency Parsing Model:
  • Minimum Match Score: threshold for a probable match (0.0–1.0; default 0.5).
  • Advanced Configurations: JSON editor for custom weights. Click Restore to Default to reset.
Dependency Parsing Model is supported in select languages only.

Detection Scenarios

  • FM definitive match: FM intent wins; ML had no match; KG's probable match is discarded.
  • ML definitive match: ML intent wins; FM's probable match is discarded.
  • KG definitive match (100% path match and 100% cosine score): KG intent wins; probable matches from ML and FM are discarded.
  • Multiple probable matches across engines: R&R rescores all; the top scorer wins; if the top two are within 2%, they are presented as ambiguous.
  • Two probable intents with close scores: both are presented as “Did you mean?” options.
  • No match from any engine: the default intent is triggered.