This concise introduction to the AgenticAI Core SDK consolidates concepts, setup, and a minimal end-to-end workflow into a single logical flow for software developers and system integrator partners.
Documentation Index
Fetch the complete documentation index at: https://koreai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
| Section | Purpose |
|---|---|
| 1. What Is AgenticAI Core | What the SDK does and when to use it |
| 2. Architecture and Core Concepts | Mental model and key building blocks |
| 3. Installation and Setup | Local workspace and Platform configuration |
| 4. First Application Flow | Minimal end-to-end example |
| 5. Development Lifecycle | Build → test → package → deploy |
| 6. Beginner Best Practices | Practical guidance |
| 7. Advanced Concepts and Next Steps | Where to go after this guide |
1. What Is AgenticAI Core
AgenticAI Core is a Python SDK for building and deploying multi-agent AI applications. You define agents, tools, models, and orchestration in Python (design-time) and execute them through a runtime that exposes an MCP server. Typical uses include customer-facing assistants, workflow automation, and domain-specific AI systems that require multiple specialized agents.
2. Architecture and Core Concepts
Design-Time and Runtime
Design-time focuses on describing what exists in your application, while runtime focuses on how requests are executed.
Main Building Blocks
An App is the top-level container that groups agents and defines orchestration. Agents perform domain-specific work and may reason (REACT) or proxy to external systems. Tools are callable actions, typically Python functions registered with @Tool.register. LLM models define how agents reason, while prompts constrain behavior. Memory stores persist context, and orchestration controls routing between agents.
Together, a request flows as: an incoming request reaches the App, orchestration routes it to an agent, the agent reasons with its LLM model and calls tools as needed, memory stores persist context across turns, and the result is returned to the caller.
3. Installation and Setup
Prerequisites
You need Python 3.10+, pip, and Git. The SDK is provided through a private workspace repository; contact support for access.
Workspace Setup
Clone the workspace and run the setup script. The script creates a .venv virtual environment and installs agenticai-core and its dependencies. Always keep the virtual environment named .venv.
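The setup described above can be sketched as the following shell session. The repository URL is a placeholder, and the setup script name is an assumption; use the script shipped in your workspace.

```shell
# Clone the private workspace (URL is a placeholder from your support contact)
git clone <your-workspace-repo-url>
cd <workspace>

# Run the workspace setup script (name is an assumption; check your workspace).
# It creates the .venv virtual environment and installs agenticai-core.
./setup.sh

# Activate the environment before working with the SDK
source .venv/bin/activate
```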
Platform Configuration
Before building applications, configure access to the AgenticAI platform. Create an application and API key on the Platform, then define environment variables. Create separate configs for different environments, such as .env/dev, .env/staging, or .env/prod.
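A per-environment config file might look like the sketch below. The variable names are illustrative assumptions, not the SDK's documented names; use the names your Platform application provides.

```shell
# .env/dev — hypothetical variable names, shown only to illustrate the layout
AGENTIC_PLATFORM_URL=<platform-base-url>
AGENTIC_APP_ID=<your-app-id>
AGENTIC_API_KEY=<your-api-key>
```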
4. First Application Flow
This section shows the smallest useful end-to-end path.
Step 1: Define a Tool
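A minimal tool is a plain Python function registered with the @Tool.register decorator named earlier in this guide. The import path, and the fallback so the sketch runs without the private SDK installed, are assumptions.

```python
# Sketch only: @Tool.register is the decorator named in this guide; the import
# path is a guess, and the fallback below lets the sketch run without the SDK.
try:
    from agenticai_core import Tool  # private SDK; module name is an assumption
    register = Tool.register
except ImportError:
    def register(fn):
        # No-op stand-in for Tool.register when the SDK is not installed
        return fn

@register
def get_balance(account_id: str) -> dict:
    """Return the balance for an account (mocked data for illustration)."""
    balances = {"acct-001": 2500.00}
    return {"account_id": account_id, "balance": balances.get(account_id, 0.0)}
```

Keeping the function body a plain, typed Python function makes it easy to unit test before it is ever attached to an agent.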
Step 2: Define an Agent and App
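App and Agent are the class names this guide introduces, but every constructor parameter below is an assumption about the real API, and the stand-in classes exist only so the sketch can be read and run without the private SDK.

```python
# Illustrative sketch: App and Agent come from this guide; the parameters shown
# are assumptions. Stand-in classes keep the sketch runnable without the SDK.
try:
    from agenticai_core import App, Agent  # import path is an assumption
except ImportError:
    class Agent:
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

    class App:
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

def get_balance(account_id: str) -> dict:
    """The tool from Step 1, repeated so this sketch is self-contained."""
    return {"account_id": account_id, "balance": 0.0}

# One narrowly focused agent with a single tool attached
support_agent = Agent(
    name="support",
    description="Answers account questions using the get_balance tool",
    tools=[get_balance],
)

# The App is the top-level container grouping the agents
app = App(name="quickstart_app", agents=[support_agent])
```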
Step 3: Run Locally
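Running locally goes through the workspace's run.py CLI; the command names below (config, start, test) are listed in the CLI Reference further down, but their options vary, so treat this as a sketch.

```shell
# Point the workspace at your environment configuration
python run.py config

# Start the local runtime, which exposes the MCP server
python run.py start

# In another terminal, exercise the running app with a test request
python run.py test
```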
Step 4: Package and Deploy
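Packaging and deployment also go through run.py; the commands below are from the CLI Reference, while the comments about the KAR archive and deployment identifier restate guidance from this guide.

```shell
# Build the KAR archive for the app
python run.py package

# Deploy to the Platform; save the returned deployment identifier for later
python run.py deploy

# Verify the deployment completed
python run.py status
```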
5. Development Lifecycle
The typical lifecycle is linear and repeatable: build, test locally, package, and deploy.
6. Beginner Best Practices
Keep agents narrowly focused, design tools to do one thing well, and test locally before every deployment. Use .venv consistently to avoid oversized packages, and save deployment identifiers returned by the Platform for later testing.
7. Advanced Concepts and Next Steps
| Topic | Summary |
|---|---|
| SDK Architecture | Covers the SDK’s two phases — design-time configuration and runtime execution — and its two components: the agentic-core library (design-time builder models, runtime abstract classes, and the MCP server) and the workspace (project scaffold with the run.py CLI). Also details memory store configuration with scope and retention options, structured tool logging, and the full deployment sequence from packaging a KAR archive to container provisioning. |
| CLI Reference | Documents all eight run.py commands — config, package, start, deploy, publish, status, undeploy, and test — with full syntax, options, and examples for each. Includes environment configuration for dev, staging, and production environments, a complete lifecycle workflow from local development to production deployment, and troubleshooting guidance for common issues. |
| API Reference | Reference for design-time builder classes (App, Agent, Tool, LlmModel, MemoryStore, EnvVariable, Prompt, Icon) and runtime service APIs (RequestContext, Logger, Tracer), with constructor parameters and method signatures. Covers accessing session context and environment variables, performing memory CRUD operations with field projections, emitting structured logs at four severity levels, and integrating distributed tracing into tools and orchestrators. |
| Examples | Walks through two complete applications: a single-agent banking assistant with custom MCP tools for balance checks and fund transfers, a session-scoped memory store, and a keyword-routing orchestrator; and a multi-agent customer service app with three specialized agents (support, billing, technical) that routes requests by intent and escalates between agents. Both examples include full implementation code, project structure, and CLI commands to package, deploy, and test end to end. |
| Build Applications | Covers end-to-end app assembly: defining App, configuring agents (LLM + prompts), registering tools, setting up memory stores, configuring advanced features, implementing a custom orchestrator, and starting the MCP server. Includes a complete example and best practices. |
| Create Agents | Explains agent configuration in depth: autonomous vs proxy types, REACT pattern, roles (WORKER/SUPERVISOR), prompt design, LLM settings, tool attachment (builder and direct), metadata, icons, real-time flags, and conversion to AgentMeta for orchestration. |
| Work with Tools | Details tool creation and usage: MCP tools via @Tool.register, inline tools, tool library, and knowledge tools. Covers request context access, memory operations inside tools, structured logging, tracing, agent integration, and best practices for error handling and performance. |
| Memory Stores | Describes design-time memory configuration (schema, namespaces, scope, retention), adding stores to apps, and runtime CRUD operations (set_content, get_content, delete_content). Explains scope types, retention policies, projections, schema validation, security, and performance guidance. |
| Prompts and LLM Config | Covers LLM model setup across providers (OpenAI, Anthropic, Azure), parameter tuning (temperature, tokens, top_p, penalties), builder patterns, structured prompt design, template variables, supervisor prompts, security rules, and optimization strategies for cost and quality. |
| Custom Orchestration | Explains implementing AbstractOrchestrator, message handling protocol (user/tool roles), ToolCall and route_to_user, routing strategies (keyword, round-robin, task-based), stateful orchestration with memory, tracing integration, and common multi-agent flow patterns. |