Reference and procedures for managing Custom prompts in the Prompts Library.

Add a Custom Prompt

Prerequisites

Integrate a pre-built or custom LLM before creating a prompt. See LLM Integration.

Steps

  1. Go to Generative AI Tools > Prompts Library.
  2. Click + New Prompt (top right).
  3. Enter the Prompt Name, then select the Feature and Model.
  4. The Configuration section (endpoint URL, auth, headers) is auto-populated from the model integration and is read-only.
  5. In the Request section, create a prompt or import an existing one.
    To import an existing prompt:
    1. Click Import from Prompts and Requests Library.
    2. Select the Feature, Model, and Prompt. Hover over a prompt and click Preview Prompt to review it before importing.
      You can interchange prompts between features.
    3. Click Confirm to import the prompt into the JSON body.
    To create a prompt from scratch, click Start from scratch and enter the JSON request for the LLM.
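If you start from scratch, the JSON body follows the request format of the integrated model. As an illustration only, a minimal chat-completion request for an OpenAI-style model might look like the sketch below; the model name, temperature, and the `{{User_Input}}` context variable are placeholders, so confirm the exact body and variable syntax against your model's API and the Platform's context-variable conventions.

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "{{User_Input}}" }
  ],
  "temperature": 0.7
}
```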
  6. (Optional) Toggle Stream Response to enable streaming, which sends the response incrementally in real time instead of waiting for the complete response.
  • Add "stream": true to the custom prompt when streaming is enabled. The saved prompt displays a “streaming” tag.
  • Enabling streaming disables the “Exit Scenario” field. Streaming applies only to Agent Node and Prompt Node features using OpenAI and Azure OpenAI models.
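With streaming enabled, the request body carries the extra flag alongside the rest of the prompt. A sketch, again assuming an OpenAI-style body with a placeholder context variable:

```json
{
  "model": "gpt-4o",
  "stream": true,
  "messages": [
    { "role": "user", "content": "{{User_Input}}" }
  ]
}
```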
  7. Fill in the Sample Context Values and click Test. If the test succeeds, the LLM response is displayed; otherwise, an error message appears.
  8. Map the response key: in the JSON response, double-click the key that holds the relevant information (for example, content). The Platform generates a Response Path for that location. Click Save.
  9. Click Lookup Path to validate the path.
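The response path depends entirely on the shape of your model's JSON response. For example, if the model returns an OpenAI-style body like the one below, double-clicking the content key would produce a path along the lines of choices[0].message.content:

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The requested summary..."
      }
    }
  ]
}
```

If your model nests the answer differently, the generated path will differ accordingly.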
  10. Review the Actual Response and Expected Response:
    • Green (match): Click Save. Skip to step 12.
    • Red (mismatch): Click Configure to open the Post Processor Script editor.
      1. Enter the Post Processor Script and click Save & Test.
      2. Verify the result, then click Save. The responses turn green.
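A post-processor script typically reshapes the raw LLM response into the form the Platform expects. The JavaScript sketch below is illustrative only: the function name, the sample object, and the assumption that the response is OpenAI-style (choices[0].message.content) are all hypothetical, and the exact variables available inside the script editor depend on your Platform's scripting reference.

```javascript
// Hypothetical helper: pull the assistant text out of an
// OpenAI-style response object, tolerating missing fields.
function extractContent(llmResponse) {
  const choice = (llmResponse.choices || [])[0] || {};
  const message = choice.message || {};
  // Return the trimmed text, or an empty string if absent.
  return typeof message.content === "string" ? message.content.trim() : "";
}

// Sample response of the assumed shape.
const sampleResponse = {
  choices: [
    { message: { role: "assistant", content: "  Hello from the model.  " } }
  ]
};

console.log(extractContent(sampleResponse)); // prints "Hello from the model."
```

The defensive `|| {}` fallbacks keep the script from throwing when the model returns an error body instead of the expected structure.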
  11. (Optional) If Token Usage Limits are enabled for your custom model, map the token keys for accurate tracking:
    • Request Tokens key: usage.input_tokens
    • Response Tokens key: usage.output_tokens
    Without this mapping, the Platform can’t calculate token consumption, which may lead to untracked usage and unexpected costs.
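The keys above assume the model reports token counts in a usage object of the response, for example:

```json
{
  "usage": {
    "input_tokens": 412,
    "output_tokens": 87
  }
}
```

If your model nests the counts under different keys (some APIs use prompt_tokens and completion_tokens, for instance), adjust the Request Tokens and Response Tokens key paths to match the actual response.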
  12. Click Save. The prompt appears in the Prompts Library.