Monitoring > Custom Scripts provides run-level and log-level visibility into custom scripts deployed in your AI for Process account. It tracks executions across API nodes, Function nodes, and direct API calls for the selected period.

Prerequisites

  • At least one custom script must be deployed and executed via API call or API/Function node.
  • If no script has been deployed and executed yet, or if it has been deployed but not run, the message "No data to display" is shown.
  • If a previously executed script is undeployed, existing run and log data remains accessible. No new runs or logs are generated until the script is redeployed and executed again.

Access Custom Scripts monitoring

  1. Go to Settings in the AI for Process top menu.
  2. In the left menu, select Monitoring > Custom scripts.
  3. If accessing for the first time, select a script from the dropdown.
The dashboard loads data for the last week by default. Select a date or date range to change the view.

Dashboard overview

The dashboard has two tabs for analyzing script performance:
  • All Runs — Run-level data including status, deployed version, response time, function, and source.
  • Logs — Log-level data for functions executed within the script, including input, output, errors, and debug data.

Customize the view

  • Script Name dropdown — Switch between deployed and executed scripts.
  • Time Selection filter — View data for a specific past period or the current day. See Time-based filtering.
  • Filter By — Multi-field, multi-level filter for targeted analysis. See Column filtering.
In All Runs, all columns except Executed On can be used as filters. In Logs, all columns except Timestamp can be used as filters.

UI features

  • Tooltips: Hover over metrics for additional information.
  • ID copying: Click the copy icon when hovering over a Run ID or Log ID.
  • Script selection: Switch between scripts using the dropdown.
  • Status indicators: Green labels for successful runs, red labels for failed runs, and In Progress for scripts currently deploying.
  • Navigation: Use the arrow buttons or keyboard shortcuts (K for previous, J for next) to move between records.

All Runs

The All Runs tab shows performance metrics and run-level metadata for the selected script.

Performance metrics

  • Total Runs — Total executions since deployment. Indicates usage volume and billing impact.
  • Response Time (P90) — 90% of runs complete within this time. Lower values indicate reliable performance.
  • Response Time (P99) — 99% of runs complete within this time. Higher values suggest performance outliers or issues.
  • Failure Rate — Percentage of failed runs. For example, 1 failure in 3 runs = 33.33%.
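These metrics can be reproduced from raw run data. A minimal Python sketch (the run records and the nearest-rank percentile method are illustrative assumptions, not the platform's exact computation):

```python
# Illustrative only: compute P90 response time and failure rate
# from a list of run records, mirroring the dashboard's metrics.
import math

def percentile(values, pct):
    """Nearest-rank percentile: pct% of runs complete within this value."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical run records: (status, response time in ms; None if failed)
runs = [("Success", 120), ("Success", 180), ("Failed", None)]

times = [t for _, t in runs if t is not None]
p90 = percentile(times, 90)
failure_rate = sum(1 for s, _ in runs if s == "Failed") / len(runs) * 100

print(f"P90: {p90} ms, Failure rate: {failure_rate:.2f}%")
# → P90: 180 ms, Failure rate: 33.33%  (1 failure in 3 runs)
```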

Run-level data

  • Run ID — Unique identifier for the script run.
  • Status — Success, Failed, or In Progress.
  • Deployment Version — Version number, incremented with each deployment.
  • Response Time — Execution duration. Empty for failed or in-progress runs.
  • Function — Name of the executed function.
  • Executed On — Date and time of execution.
  • Source Type — Workflow or API (from endpoint).
  • Source — Name of the triggering source.

Best practices for All Runs

  • Use the P90 and P99 thresholds to identify underperforming runs with unusually high response times.
  • Analyze the Source and Source Type to diagnose failures, delayed response times, and other issues.
  • Click a run record to open the record view for that run.

Logs

The Logs tab shows execution logs captured during script runs.
Log visibility depends on how the script is configured:
  • Default logging (print(), console.log()): Logs appear only after the run completes.
  • korelogger: Logs populate in real-time, with structured log levels (Info, Debug, Warning, Error). See Enhanced logging.
  • Failed runs can generate logs if logging is implemented correctly.

Performance metrics

Total Logs indicates the total number of logs recorded during execution. This metric helps determine:
  • Script activity level — how many actions or events were logged.
  • Debugging depth — more logs indicate detailed logging, which aids debugging.
  • Execution complexity — a high log count may indicate multiple operations or functions.
  • Error visibility — whether sufficient logging is available to trace issues.

Log-level data

  • Log ID — Unique log identifier.
  • Log Level — Stdout, Stderr, Info, Debug, Warning, or Error. See Enhanced logging for gVisor-supported log levels.
  • Log Message — Recorded message for the specific action.
  • Timestamp — Date and time of the log entry.

Best practices for Logs

  • Analyze the input and output for each run (identified by Run ID) using log data: Log ID, Log level, Log message, and Timestamp.
  • Use the input and output code editors in the record view to analyze and troubleshoot logs.

Filter runs and logs

Time-based filtering

Use the Custom time selection dropdown (top-right) to filter runs or logs by a specific past period or the current day.
Data is displayed only if the selected script ran during the selected period.
For details on using the calendar widget, see Time-based Audit Logs.

Column filtering

Apply column filters to narrow down runs or logs. This works similarly to the Audit Logs filter, with an additional contains operator that matches results including a specific keyword or value. For example, filtering Log message contains “Adding” shows only logs where the message includes that string.
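The effect of the contains operator can be illustrated in Python; the log records below are invented, and case-sensitive substring matching is an assumption about the filter's behavior:

```python
# Illustrative only: how a "contains" filter narrows log records.
logs = [
    {"log_id": "log_1", "message": "Adding item to cart"},
    {"log_id": "log_2", "message": "Checkout complete"},
    {"log_id": "log_3", "message": "Adding discount code"},
]

# Keep only logs whose message includes the keyword, as a
# Log message contains "Adding" filter would.
keyword = "Adding"
matched = [log for log in logs if keyword in log["message"]]

print([log["log_id"] for log in matched])  # → ['log_1', 'log_3']
```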

Add a column filter

  1. Select the All Runs or Logs tab.
  2. Click the Filter icon (top-right).
  3. Click + Add Filter.
  4. In the Filter By window, select a column, operator, and enter a value.
    Enter multiple values in the Enter Value field by pressing Tab after each entry. The system filters on all entered values.
  5. Click Apply.
The filter count is displayed on the Filter icon.

Multiple filters

Combine filters using AND or OR operators for multi-level filtering. AND and OR operators cannot be mixed in the same filter set. See Add multiple filters for details.
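The AND/OR semantics map directly onto all() and any() over individual filter predicates; a Python sketch with an invented run record:

```python
# Illustrative only: combining column filters with AND vs OR.
run = {"status": "Failed", "source_type": "API", "function": "sync_data"}

filters = [
    lambda r: r["status"] == "Failed",
    lambda r: r["source_type"] == "Workflow",
]

# AND: every filter must match; OR: any filter may match.
# The dashboard applies one operator to the whole set, never a mix.
matches_and = all(f(run) for f in filters)
matches_or = any(f(run) for f in filters)

print(matches_and, matches_or)  # → False True
```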

Record view

Click any run in All Runs to open the record view. The record view shows:
  • Run ID
  • Log-level details: Log ID, Log level, Log message, and Timestamp
  • JSON editors showing the script’s input and the function’s output
  • Navigation buttons (or use K for previous, J for next)
Use the record view to:
  • Trace a specific run for debugging.
  • Inspect input and output values.
  • Identify failures, performance bottlenecks, unexpected inputs or outputs, and misconfigured logic.

Enhanced logging

AI for Process supports two logging options for custom scripts running on the gVisor service.
  • Standard logging (print() in Python, console.log() in JavaScript): Logs appear in the Logs tab only after the script execution completes (success or failure).
  • korelogger: Logs stream in real-time as they are generated. Recommended for live monitoring and debugging due to its log-level control and immediate visibility.

Option 1: Standard logging

Standard logging uses default logging functions. Logs are captured as stdout during script execution. Example (Python):
def check_print_function():
    print("Checking print function...")
    print("Print function is working!")
    return
Output captured in stdout:
Checking print function...
Print function is working!

Option 2: korelogger

The korelogger library supports structured log levels and enables real-time log streaming. Logs are also captured in stdout in this format:
<LOG_LEVEL> :: <LOG_MESSAGE>
You can modify the log format as required.
Example (Python):
import korelogger
def call_openai_chat(prompt):
    korelogger.debug("Debug log using korelogger")
    korelogger.info("Info log using korelogger")
    korelogger.warning("Warning log using korelogger")
    korelogger.error("Error log using korelogger")
    return
Output captured in stdout:
DEBUG :: Debug log using korelogger
INFO :: Info log using korelogger
WARNING :: Warning log using korelogger
ERROR :: Error log using korelogger
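Because korelogger writes each entry to stdout in the <LOG_LEVEL> :: <LOG_MESSAGE> shape shown above, the lines are straightforward to parse back into structured records; a minimal sketch using sample lines from the output above:

```python
# Illustrative only: split korelogger stdout lines back into
# (level, message) pairs using the "<LOG_LEVEL> :: <LOG_MESSAGE>" format.
stdout_lines = [
    "DEBUG :: Debug log using korelogger",
    "ERROR :: Error log using korelogger",
]

entries = []
for line in stdout_lines:
    level, _, message = line.partition(" :: ")
    entries.append({"level": level, "message": message})

errors = [e for e in entries if e["level"] == "ERROR"]
print(errors)
# → [{'level': 'ERROR', 'message': 'Error log using korelogger'}]
```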
Log trace format:
{
    "name": "gvisor_info_log",
    "context": {
        "trace_id": "0x3453665abxxxxxxxxxxxxxxxxxxxxxxx",
        "span_id": "0x7e3xxxxxxxxxxxx",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": null,
    "start_time": "2025-05-14T06:07:27.238927Z",
    "end_time": "2025-05-14T06:07:27.238966Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "traceparent": "00-abxxxxxxxxxxxxxxxxxxxxxxxxxx0-12345xxxxxxxxxxxf-01",
        "run_id": "run_12345",
        "deployment_id": "deploy_67890",
        "source": "api_call",
        "source_type": "test",
        "log.message": "Using korelogger to log",
        "log.level": "INFO",
        "log.trace_id": "00-abcdef12345xxxxxxxxxxxxxxxxxxxx0-12345xxxxxxxxxxf-01",
        "log.meta.msg": "Using korelogger to log",
        "log.meta.pid": "41",
        "log.meta.logid": "4XXXXXX5-5XX0-4XX6-bXX8-4XXXXXXXXXX6"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "service.name": "gvisor-py-normal",
            "service.instance.id": "4XXXXXX1-9XX9-4XXb-9XXc-aXXXXXXXXXX1",
            "deployment.environment": "rnd-xxx.example.com"
        },
        "schema_url": ""
    }
}
Each log entry uses these identifying markers:
  • traceparent — Links related operations together.
  • run_id — Identifies each script execution.
  • deployment_id — Tracks which version of the script ran.
  • source — Shows where the log came from.
  • source_type — Categorizes the type of source.
Log messages and levels are available as log.message and log.level in the attributes field.
You can modify the structure of the attributes field as required.
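Given a trace entry in the format above, the identifying markers and log fields all live under attributes; a Python sketch pulling them out (the trace dict is abbreviated from the sample above):

```python
# Illustrative only: read the identifying markers out of a korelogger
# trace entry, whose "attributes" field carries the run metadata.
trace = {
    "name": "gvisor_info_log",
    "attributes": {
        "run_id": "run_12345",
        "deployment_id": "deploy_67890",
        "source": "api_call",
        "log.level": "INFO",
        "log.message": "Using korelogger to log",
    },
}

attrs = trace["attributes"]
summary = {
    "run": attrs["run_id"],            # identifies this script execution
    "version": attrs["deployment_id"], # which deployment produced it
    "level": attrs["log.level"],
    "message": attrs["log.message"],
}
print(summary)
```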

Export runs and logs

Export All Runs or Logs data as a .csv file. The export reflects the selected date range and applied column filters.
  1. Select the All Runs or Logs tab.
  2. Click the Ellipses button (top-right) and select Export.
A confirmation message is displayed when the download completes. If an error occurs during export, an error message is displayed.
Files are saved with these naming conventions:
  • Runs data: <scriptname>_runs_data (example: Qbalance_runs_data)
  • Logs data: <scriptname>_logs_data (example: Qbalance_logs_data)
Each user’s export runs independently. One user’s cancellation or adjustment does not affect another user’s export.
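Exported runs data can be post-processed offline. A sketch assuming the CSV headers mirror the All Runs column names (the header names and sample rows are assumptions, not a documented export schema):

```python
# Illustrative only: compute a failure rate from an exported runs CSV.
# Assumes the export headers match the All Runs columns, e.g. "Status".
import csv
import io

# Stand-in for open("Qbalance_runs_data.csv") with invented rows.
csv_text = """Run ID,Status,Response Time
run_1,Success,120
run_2,Failed,
run_3,Success,180
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
failed = sum(1 for r in rows if r["Status"] == "Failed")
print(f"Failure rate: {failed / len(rows) * 100:.2f}%")
# → Failure rate: 33.33%
```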