

The AMP SDK is a lightweight, open-source TypeScript/JavaScript library that connects your AI agents and LLM applications to the Agent Management Platform (AMP). Once integrated, the SDK captures telemetry, sessions, traces, and spans from your application and streams them into your AMP project for real-time monitoring, policy evaluation, and performance analytics. The SDK is built on the OpenTelemetry (OTEL) GenAI Semantic Conventions and is designed with a zero-impact guarantee: it never affects your application's performance or reliability. All telemetry sends are non-blocking, errors are silently caught, and failed batches are dropped rather than retried.
The AMP SDK is actively maintained on GitHub. Installation steps, configuration options, API methods, code examples, and troubleshooting are documented in the GitHub repository and kept up to date with each release. Refer to the GitHub README as the single source of truth for all SDK technical details.
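The zero-impact guarantee described above can be sketched as a fire-and-forget exporter. The names below are illustrative only, not the SDK's actual API; refer to the GitHub README for the real interface.

```typescript
// Minimal sketch of a non-blocking, fail-silent telemetry exporter.
// All names here are hypothetical; the SDK's real API is in the GitHub README.

type TelemetryEvent = { name: string; timestamp: number };

class FireAndForgetExporter {
  private queue: TelemetryEvent[] = [];

  // record() is synchronous and cheap: it only appends to an in-memory queue,
  // so the host application is never blocked by network I/O.
  record(event: TelemetryEvent): void {
    this.queue.push(event);
  }

  // flush() hands the current batch to a sender without awaiting the result.
  // Rejections are swallowed and the failed batch is dropped, never retried.
  flush(send: (batch: TelemetryEvent[]) => Promise<void>): void {
    const batch = this.queue;
    this.queue = [];
    send(batch).catch(() => {
      // silently drop the failed batch
    });
  }
}
```

The key design choice this illustrates: telemetry work is decoupled from the caller's control flow, so a slow or failing ingestion endpoint can never surface as an error in your agent code.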

Key Capabilities

The SDK serves as the telemetry bridge between your AI application and the Agent Management Platform. It enables the following platform capabilities:
Observability: Stream traces in real time; track sessions, LLM calls, tool executions, RAG retrievals, and agent workflows within your AMP project.
Governance: Feed telemetry into AMP policies for automated evaluation of correctness, safety, and compliance.
Analytics: Capture token usage, cost, and latency data that powers the AMP analytics dashboards and business baselines.
Intelligence: Provide the raw trace data that AMP enriches with bias detection, toxicity scoring, and quality metrics.
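As a small illustration of the analytics side, per-call cost can be derived from the token counts the SDK captures. The rates below are made-up placeholders, not real pricing:

```typescript
// Illustrative cost estimate from token usage; rates are placeholder values.
interface TokenUsage { inputTokens: number; outputTokens: number }

function estimateCostUsd(
  usage: TokenUsage,
  inputRatePerToken = 0.000002,   // hypothetical $/input token
  outputRatePerToken = 0.000006,  // hypothetical $/output token
): number {
  return usage.inputTokens * inputRatePerToken + usage.outputTokens * outputRatePerToken;
}

const cost = estimateCostUsd({ inputTokens: 1000, outputTokens: 500 });
```

In practice this aggregation happens inside the AMP analytics dashboards; the sketch only shows what kind of computation the captured usage data enables.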

How the SDK Fits into AMP

The Agent Management Platform uses a Project as the organizational unit for managing an AI agent's lifecycle. The SDK is the integration layer that connects your agent code to a project. When you create a project in AMP, the platform generates an API key, which you pass to the SDK when initializing it in your application. From that point on, every trace the SDK sends is automatically associated with that project: it appears in the project's Observability views, feeds into policies, and populates analytics dashboards.
Your Application  →  AMP SDK  →  AMP Ingestion Service  →  AMP Project
     (code)         (library)        (API)                  (dashboard)
The SDK organizes telemetry into a three-level hierarchy:
Session: Groups related traces together, for example, all turns in a multi-turn conversation or all steps in a user journey.
Trace: Represents one end-to-end operation, such as a single user query or a pipeline run. Each trace contains one or more spans.
Span: The smallest unit of work, such as an LLM call, a tool execution, a vector search, or an agent decision. Spans can be nested using parent-child relationships to represent complex workflows.
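The hierarchy above can be modeled roughly like this. The types are hypothetical stand-ins, not the SDK's real types, which are documented in the GitHub README:

```typescript
// Illustrative model of the Session → Trace → Span hierarchy.

interface Span {
  name: string;
  kind: "llm" | "tool" | "rag" | "agent";
  parent?: Span;    // parent-child nesting represents complex workflows
  children: Span[];
}

interface Trace {
  id: string;       // one end-to-end operation, e.g. a single user query
  spans: Span[];
}

interface Session {
  id: string;       // groups related traces, e.g. all turns in a conversation
  traces: Trace[];
}

// One conversation turn: an agent span with a nested vector-search span.
const agentSpan: Span = { name: "plan-answer", kind: "agent", children: [] };
const searchSpan: Span = { name: "vector-search", kind: "rag", parent: agentSpan, children: [] };
agentSpan.children.push(searchSpan);

const session: Session = {
  id: "session-1",
  traces: [{ id: "trace-1", spans: [agentSpan] }],
};
```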

Integrating Your Agent

1. Create a project and copy the API key

Go to Projects → Create Project in AMP. The platform generates an API key when the project is created. Copy this key; it links your application to this project.
2. Install and configure the SDK

Follow the installation and configuration instructions in the AMP SDK GitHub repository. The README provides package installation commands, required and optional configuration parameters, and environment variable setup.
3. Instrument your application

Use the SDK's trace and span APIs to capture telemetry from your AI workflows. The GitHub README includes a Quick Start guide and working examples for every supported span type.
4. Monitor in the dashboard

Open your project in AMP. Telemetry appears in real time under Observability → RUNS. Use dashboards, policies, and evaluations to monitor and govern your agent's behavior.
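Put together, the four steps above look roughly like the following. The client class and method names are stand-ins so the sketch is self-contained; the SDK's actual API is defined in the GitHub README.

```typescript
// Hypothetical walkthrough of the integration steps; not the SDK's real API.

// Stand-in for the SDK client so this sketch runs on its own.
class AmpClientSketch {
  constructor(private apiKey: string) {}

  startTrace(name: string) {
    const spans: string[] = [];
    return {
      recordSpan: (span: string) => { spans.push(span); },
      end: () => ({ name, spans }),  // a real client would enqueue a send here
    };
  }
}

// Steps 1-2: initialize with the API key copied from your AMP project.
const client = new AmpClientSketch("your-project-api-key");

// Step 3: instrument one user query as a trace containing two spans.
const trace = client.startTrace("answer-user-query");
trace.recordSpan("llm.chat");     // one LLM call
trace.recordSpan("tool.search");  // one tool execution
const result = trace.end();

// Step 4: in a real integration, this trace would now appear under
// Observability → RUNS in your project's dashboard.
```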

SDK Documentation on GitHub

The complete SDK documentation is maintained alongside the source code in the GitHub repository. This ensures that the documentation always reflects the latest release, including any new features, configuration changes, or API updates.

Full SDK Documentation

The README covers installation, configuration, core concepts (Session → Trace → Span), decorator-based instrumentation, OTEL attribute mappings, the complete API reference, and troubleshooting.

Working Examples

Runnable TypeScript examples for every supported span type (LLM, RAG, Tool, Agent, and Orchestration), plus multi-turn Session management.
For quick reference, the GitHub README is organized into the following sections:
Installation: Prerequisites, package installation, verification, and runnable example scripts.
Quick Start: Minimal integration code to send your first trace.
Configuration: Required and optional parameters, batching, networking, debugging, and environment variables.
Core Concepts: The Session → Trace → Span hierarchy, parent-child span relationships, and OpenTelemetry compliance.
Supported Attributes & OTEL Mapping: Complete attribute tables for Session, Trace, LLM, Tool, RAG, Agent, Orchestration, and Custom spans.
Examples: Six working examples (Simple LLM Chat, RAG Pipeline, Tool Execution, Multi-Agent Workflow, Session Management, and a comprehensive demo).
API Reference: Full method signatures for the AMP client, Session, Trace, and all Span types.
Advanced Topics: Batching and performance tuning, custom attributes, and error handling.
Troubleshooting: Solutions for connection errors, invalid API keys, missing traces, and memory usage.
Do not rely on this page for SDK-specific technical details such as method signatures, configuration parameters, or code examples. These may change between releases. Always refer to the GitHub repository for the latest information.

Additional Resources

npm Package

Install from the npm registry.

GitHub Issues

Report bugs or request features.

Support

Contact the Kore.ai support team.
