This concise introduction to the AgenticAI Core SDK consolidates concepts, setup, and a minimal end-to-end workflow into a single logical flow for software developers and system integrator partners.
Section                                Purpose
1. What Is AgenticAI Core              What the SDK does and when to use it
2. Architecture and Core Concepts      Mental model and key building blocks
3. Installation and Setup              Local workspace and Platform configuration
4. First Application Flow              Minimal end-to-end example
5. Development Lifecycle               Build → test → package → deploy
6. Beginner Best Practices             Practical guidance
7. Advanced Concepts and Next Steps    Where to go after this guide

1. What Is AgenticAI Core

AgenticAI Core is a Python SDK for building and deploying multi-agent AI applications. You define agents, tools, models, and orchestration in Python (design-time) and execute them through a runtime that exposes an MCP server. Typical uses include customer-facing assistants, workflow automation, and domain-specific AI systems that require multiple specialized agents.

2. Architecture and Core Concepts

Design-Time and Runtime

Design-time focuses on describing what exists in your application, while runtime focuses on how requests are executed.
Design-Time  ->  App + Agents + Tools + Models
Runtime      ->  MCP Server -> Orchestrator -> Agents -> Tools

Main Building Blocks

An App is the top-level container that groups agents and defines orchestration. Agents perform domain-specific work and may reason (REACT) or proxy to external systems. Tools are callable actions, typically Python functions registered with @Tool.register. LLM models define how agents reason, while prompts constrain behavior. Memory stores persist context, and orchestration controls routing between agents. Together, a request flows as:
Client -> MCP Server -> Orchestrator -> Agent -> (LLM <-> Tools) -> Response
For details, see the SDK architecture.
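The request flow above can be sketched in plain Python. This is an illustration of the control flow only, not the SDK's actual classes; the agent, tool, and orchestrator below are hypothetical stand-ins:

```python
import asyncio

async def get_balance_tool(account_id: str) -> dict:
    # Stand-in tool: a real tool would call an external system.
    return {"balance": 1000}

async def demo_agent(message: str) -> str:
    # A real agent would let its LLM decide which tool to call (REACT);
    # here one tool call is hard-wired to show the flow.
    result = await get_balance_tool("ACC-1")
    return f"Your balance is {result['balance']}"

async def orchestrator(message: str) -> str:
    # With a single agent, every request routes to it.
    return await demo_agent(message)

response = asyncio.run(orchestrator("What's my balance?"))
```

The response travels back through the same chain it came in on: tool result to agent, agent answer to orchestrator, orchestrator to client.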

3. Installation and Setup

Prerequisites

You need Python 3.10+, pip, and Git. The SDK is provided through a private workspace repository; contact support for access.

Workspace Setup

Clone the workspace and run the setup script:
git clone <workspace-repository-url>
cd workspace
chmod +x .setup.sh
./.setup.sh
The script creates a .venv virtual environment and installs agenticai-core and its dependencies. Always keep the virtual environment named .venv.

Platform Configuration

Before building applications, configure access to the AgenticAI platform. Create an application and API key in the AgenticAI platform, then define environment variables. Create separate configs for different environments such as .env/dev, .env/staging, or .env/prod.
# .env/dev
KORE_HOST=https://agent-platform.kore.ai
APP_API_KEY=your_api_key
TRACING_ENABLED=True
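These files are plain KEY=VALUE pairs. If you ever need to load one manually, for example in a local test script, a minimal parser looks like the sketch below; the run.py CLI presumably handles this for you, so this is only an illustration:

```python
def load_env_file(path: str) -> dict:
    """Parse a simple KEY=VALUE file, skipping blank lines and # comments."""
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            # Split on the first '=' only, so values may contain '='.
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Example usage: os.environ.update(load_env_file(".env/dev"))
```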
After successful configuration, your workspace looks like:
workspace/
├── .venv/                    # Virtual environment (created by setup)
├── .env/                     # Environment configurations (you create)
│   ├── dev
│   ├── staging
│   └── prod
├── lib/                      # Pre-installed libraries
│   ├── agenticai_core-0.1.0-py3-none-any.whl
│   └── kore_api-1.0.0-py3-none-any.whl
├── src/
│   ├── tools/                # Your custom tools
│   ├── orchestrator/         # Your custom orchestrators
│   └── app.py                # Your application definition
├── bin/                      # Generated archives (created on archive)
├── examples/                 # Example applications
├── .scripts/                 # Utility scripts
├── requirements.txt          # Dependency list
├── run.py                    # CLI entry point
├── .setup.sh                 # Setup script
└── README.md                 # Workspace documentation

4. First Application Flow

This section shows the smallest useful end-to-end path.

Step 1: Define a Tool

from agenticai_core.designtime.models.tool import Tool

@Tool.register(name="Get_Balance", description="Get account balance")
async def get_balance(account_id: str) -> dict:
    # Placeholder implementation; look up the real balance here.
    return {"balance": 1000}
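Conceptually, @Tool.register records the function and its metadata in a registry so the runtime can expose it over MCP. A minimal stand-in (not the SDK's actual implementation) makes the idea concrete:

```python
import asyncio
from typing import Any, Callable, Dict

# Toy registry standing in for whatever the SDK keeps internally.
TOOL_REGISTRY: Dict[str, Callable[..., Any]] = {}

def register(name: str, description: str):
    """Toy decorator: record the coroutine under its tool name."""
    def decorator(fn):
        fn.tool_description = description
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@register(name="Get_Balance", description="Get account balance")
async def get_balance(account_id: str) -> dict:
    return {"balance": 1000}

# The runtime can now look the tool up by name and invoke it.
result = asyncio.run(TOOL_REGISTRY["Get_Balance"]("ACC-1"))
```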

Step 2: Define an Agent and App

from agenticai_core.designtime.models import App, Agent

app = App(
    name="Sample App",
    agents=[Agent(name="DemoAgent")]
)
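Design-time objects like App and Agent are declarative: they describe structure rather than execute anything. Illustrative dataclass stand-ins (not the SDK's classes) show the shape of that description:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    name: str

@dataclass
class App:
    name: str
    agents: List[Agent] = field(default_factory=list)

# Building the object graph is all that happens at design time;
# execution belongs to the runtime.
app = App(name="Sample App", agents=[Agent(name="DemoAgent")])
```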

Step 3: Run Locally

source .venv/bin/activate
python run.py start -H localhost -P 8080

Step 4: Package and Deploy

python run.py package -o sample-app
python run.py config -u dev
python run.py deploy -f bin/sample-app/application.kar

5. Development Lifecycle

The typical lifecycle is linear and repeatable:
Clone -> Setup -> Develop tools -> Test Local -> Package -> Deploy -> Test E2E -> Monitor
Local testing happens before packaging, and deployment artifacts are always generated from the workspace.

6. Beginner Best Practices

Keep agents narrowly focused, design tools to do one thing well, and test locally before every deployment. Use .venv consistently to avoid oversized packages, and save deployment identifiers returned by the Platform for later testing.

7. Advanced Concepts and Next Steps

SDK Architecture: Covers the SDK’s two phases (design-time configuration and runtime execution) and its two components: the agenticai-core library (design-time builder models, runtime abstract classes, and the MCP server) and the workspace (project scaffold with the run.py CLI). Also details memory store configuration with scope and retention options, structured tool logging, and the full deployment sequence from packaging a KAR archive to container provisioning.
CLI Reference: Documents all eight run.py commands (config, package, start, deploy, publish, status, undeploy, and test) with full syntax, options, and examples for each. Includes environment configuration for dev, staging, and production, a complete lifecycle workflow from local development to production deployment, and troubleshooting guidance for common issues.
API Reference: Reference for design-time builder classes (App, Agent, Tool, LlmModel, MemoryStore, EnvVariable, Prompt, Icon) and runtime service APIs (RequestContext, Logger, Tracer), with constructor parameters and method signatures. Covers accessing session context and environment variables, performing memory CRUD operations with field projections, emitting structured logs at four severity levels, and integrating distributed tracing into tools and orchestrators.
Examples: Walks through two complete applications: a single-agent banking assistant with custom MCP tools for balance checks and fund transfers, a session-scoped memory store, and a keyword-routing orchestrator; and a multi-agent customer service app with three specialized agents (support, billing, technical) that routes requests by intent and escalates between agents. Both examples include full implementation code, project structure, and CLI commands to package, deploy, and test end to end.
Build Applications: Covers end-to-end app assembly: defining App, configuring agents (LLM + prompts), registering tools, setting up memory stores, configuring advanced features, implementing a custom orchestrator, and starting the MCP server. Includes a complete example and best practices.
Create Agents: Explains agent configuration in depth: autonomous vs proxy types, the REACT pattern, roles (WORKER/SUPERVISOR), prompt design, LLM settings, tool attachment (builder and direct), metadata, icons, real-time flags, and conversion to AgentMeta for orchestration.
Work with Tools: Details tool creation and usage: MCP tools via @Tool.register, inline tools, the tool library, and knowledge tools. Covers request context access, memory operations inside tools, structured logging, tracing, agent integration, and best practices for error handling and performance.
Memory Stores: Describes design-time memory configuration (schema, namespaces, scope, retention), adding stores to apps, and runtime CRUD operations (set_content, get_content, delete_content). Explains scope types, retention policies, projections, schema validation, security, and performance guidance.
Prompts and LLM Config: Covers LLM model setup across providers (OpenAI, Anthropic, Azure), parameter tuning (temperature, tokens, top_p, penalties), builder patterns, structured prompt design, template variables, supervisor prompts, security rules, and optimization strategies for cost and quality.
Custom Orchestration: Explains implementing AbstractOrchestrator, the message handling protocol (user/tool roles), ToolCall and route_to_user, routing strategies (keyword, round-robin, task-based), stateful orchestration with memory, tracing integration, and common multi-agent flow patterns.
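The keyword-routing strategy mentioned under Custom Orchestration can be sketched in a few lines. This is illustrative only; a real orchestrator implements the SDK's AbstractOrchestrator interface, and the agent names here are hypothetical:

```python
def route_by_keyword(message: str, routes: dict, default: str) -> str:
    """Return the first agent whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, agent in routes.items():
        if keyword in lowered:
            return agent
    return default

routes = {"bill": "BillingAgent", "error": "TechnicalAgent"}
chosen = route_by_keyword("I have a billing question", routes, "SupportAgent")
```

Requests matching no keyword fall through to the default agent, which is a simple way to guarantee every message gets handled.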