Test Data
Populate variables in your prompts using imported or AI-generated datasets.

Import Test Data
Upload a CSV file to populate variable values automatically.

Requirements:
- Column names must match variables exactly (case-sensitive): {{Name}} requires a column named “Name”
- Data must start from the first row and column
- Maximum 10 rows imported at once

Steps:
- Click Test data > Import test data.
- Upload a CSV file or select from existing datasets.
- Preview the data and click Proceed.
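Because column matching is case-sensitive, a mismatched header silently leaves a variable unfilled. A minimal pre-check sketch (the `check_csv_columns` helper is illustrative, not part of the product):

```python
import csv
import io
import re

def check_csv_columns(prompt: str, csv_text: str) -> list[str]:
    """Return prompt variables that have no matching CSV column (case-sensitive)."""
    variables = set(re.findall(r"\{\{(.+?)\}\}", prompt))
    reader = csv.reader(io.StringIO(csv_text))
    columns = set(next(reader))  # header row must be the first row of the file
    return sorted(variables - columns)

prompt = "Write a greeting for {{Name}} who works as a {{Role}}."
csv_text = "Name,role\nAda,Engineer\n"  # "role" is lowercase, so it won't match
print(check_csv_columns(prompt, csv_text))  # → ['Role']
```

Running a check like this before uploading makes it obvious which columns need renaming.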
Generate Test Data
Create synthetic datasets using AI, with no manual data entry required.

Limits: Maximum 5 rows generated at once.

Steps:
- Enter a prompt with variables (e.g., {{Job Title}}, {{Experience}}).
- Click Test data > Generate test data.
- Select a model, specify row count, and click Generate.
- Preview the dataset and click Apply value to the variables.
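Applying a dataset row to a prompt amounts to simple placeholder substitution. A rough sketch of that idea (the `render` helper and sample rows are illustrative assumptions, not the product's internals):

```python
import re

def render(prompt: str, row: dict[str, str]) -> str:
    """Substitute each {{Variable}} placeholder with the value from one dataset row."""
    return re.sub(r"\{\{(.+?)\}\}", lambda m: row[m.group(1)], prompt)

prompt = "Write a job ad for a {{Job Title}} with {{Experience}} of experience."
dataset = [
    {"Job Title": "Data Analyst", "Experience": "3 years"},
    {"Job Title": "DevOps Engineer", "Experience": "5 years"},
]
for row in dataset:
    print(render(prompt, row))
# → Write a job ad for a Data Analyst with 3 years of experience.
# → Write a job ad for a DevOps Engineer with 5 years of experience.
```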
Sharing
Share prompts with other users for collaboration. Version history and settings are shared based on commit status.

| Scenario | What’s Shared |
|---|---|
| Before committing | Inputs, outputs, settings (no version history) |
| After committing | Full version history with all changes |
| Multiple contributors | Owner designation preserved; all versions visible |
Steps:
- Click the three dots icon > Share.
- Select users (must be part of your account).
- Assign a role and click Share.
Roles and Permissions
| Role | Permissions |
|---|---|
| Full (Owner) | View, edit, restore, commit, delete; manage users and API keys |
| Edit (Collaborator) | All Owner permissions except delete |
| View (Viewer) | View prompts and versions only |
Versioning
Track prompt iterations by committing versions. Each commit creates a shareable record of the prompt’s evolution.

Key points:
- Versions are auto-named sequentially (V1, V2, V3…)
- Must generate output before committing
- Latest committed version is the default (can be changed)
- Use any version as a draft without altering the original
- No limit on number of versions
Steps:
- Generate output, then click Commit.
- Click Versions to view all saved versions.
- Select a version and click Use as a draft to edit it.
- Use Mark as default version to set your preferred version.
Prompt API
Access prompts programmatically via version-specific API endpoints.

How it works:
- API endpoint is generated after the first commit
- Calls return both SystemPrompt and HumanPrompt
- If no version specified, returns the default version
- Supports cURL, Python, and Node.js
- Create multiple keys per endpoint
- Keys can be copied only once; they can be deleted but not reused
- Deleting a key invalidates all external integrations using it
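The docs state that a call returns both SystemPrompt and HumanPrompt, but the endpoint URL, auth header, and response field names are not shown here, so everything below is an assumed shape, not the actual API contract. One hedged sketch of consuming such a response in Python:

```python
import json

# Hypothetical response body; the field names "SystemPrompt" and
# "HumanPrompt" follow the docs, but the exact JSON shape is an assumption.
raw_response = json.dumps({
    "SystemPrompt": "You are a helpful recruiting assistant.",
    "HumanPrompt": "Write a job ad for a {{Job Title}}.",
})

# A real call might look like this (endpoint path and header name are assumptions):
# req = urllib.request.Request(
#     "https://<your-host>/api/prompts/<prompt-id>?version=V2",
#     headers={"Authorization": "Bearer <api-key>"},
# )

def to_messages(body: str) -> list[dict[str, str]]:
    """Map the returned prompt pair onto a chat-style message list."""
    data = json.loads(body)
    return [
        {"role": "system", "content": data["SystemPrompt"]},
        {"role": "user", "content": data["HumanPrompt"]},
    ]

messages = to_messages(raw_response)
print(messages[0]["role"])  # → system
```

If no version is specified in the call, the default version's prompts would be the ones returned.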
Bookmark a Model
Mark your preferred model for consistent output. The bookmarked model is saved with committed versions.

Steps:
- Click the Bookmark model with its settings icon.
- If committing without a bookmark, you’ll be prompted to select a preferred model.
Draft History
Capture complete canvas snapshots (prompts, variables, and outputs) at different points in time.

Steps:
- Click the three dots icon > Draft history.
- View saved drafts with their inputs and outputs.
- Click Restore to revert to a previous state.
| Feature | What’s Saved |
|---|---|
| Draft History | Full canvas: prompts, variables, and generated outputs |
| Versions | Prompts only (no outputs) |
Regenerate Output
Re-run prompts selectively to refine results without regenerating everything.

Regeneration levels:
- Cell-level: Regenerate a single output
- Column-level: Regenerate all outputs for one model
Use this to:
- Fine-tune prompts for better quality
- Compare model performance
- Reduce bias with prompt adjustments
- Preserve good outputs while fixing specific issues