# Tool Calling
How agents identify, invoke, and process tools during execution.

## Overview
Tool calling is the mechanism by which agents interact with external systems. The LLM identifies when a tool is needed, selects the appropriate one, invokes it with the correct parameters, and incorporates the results into its response.

## The Tool Calling Process
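The end-to-end flow can be sketched as follows. This is a minimal illustration assuming an OpenAI-style chat completions client; the `get_weather` tool, the model name, and the hard-coded result are placeholders, not part of any specific platform API.

```python
import json
from openai import OpenAI  # assumption: OpenAI Python SDK; other providers follow a similar flow

client = OpenAI()

# 1. Describe the tool to the model (name, purpose, parameters).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# 2. The model decides a tool is needed and emits a tool call with arguments.
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
assistant_msg = response.choices[0].message
messages.append(assistant_msg)

# 3. The application executes the tool and sends the result back.
for call in assistant_msg.tool_calls or []:
    args = json.loads(call.function.arguments)
    result = {"city": args["city"], "temperature_c": 18}  # stand-in for a real weather lookup
    messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})

# 4. The model incorporates the tool result into its final answer.
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```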
## How the LLM Selects Tools
The LLM uses tool descriptions to decide which tool to invoke.

### Tool Definition
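As a sketch, a tool definition in the common function-calling format pairs a name with a natural-language description and a JSON Schema for its parameters (the `search_orders` tool below is hypothetical):

```python
# Hypothetical tool definition in the common function-calling schema.
search_orders_tool = {
    "type": "function",
    "function": {
        "name": "search_orders",
        "description": (
            "Search the order database by customer email or order ID. "
            "Use this when the user asks about the status, contents, or "
            "history of an order."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "customer_email": {
                    "type": "string",
                    "description": "Email address of the customer who placed the order.",
                },
                "order_id": {
                    "type": "string",
                    "description": "Exact order identifier, e.g. 'ORD-1234'.",
                },
            },
            "required": [],
        },
    },
}
```

The description fields are the main signal the LLM reads when matching a query to a tool and extracting its arguments.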
### Selection Criteria
The LLM considers:

- Query intent — What is the user trying to accomplish?
- Tool descriptions — Which tool’s description matches?
- Required information — What data does the tool provide?
- Parameter availability — Can required parameters be extracted?
### Writing Effective Descriptions
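As a rough illustration (the exact wording is a suggestion, not a platform requirement), an effective description states what the tool does, when to use it, and what it returns:

```python
# Too vague: gives the model little to match against.
weak_description = "Gets data."

# More effective: what it does, when to use it, and what it returns.
strong_description = (
    "Look up the current account balance for a customer by account ID. "
    "Use this when the user asks how much money is in an account. "
    "Returns the balance amount and currency code."
)
```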
## Multiple Tool Calling
Agents can invoke multiple tools in a single turn.

### Sequential Execution
Tools run one after another when the output of one call is needed as input to the next.
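A sketch of a driver loop, again assuming an OpenAI-style client: each dependent call needs its own model round trip, because the model must see the previous tool's output before it can produce the next call. The `execute_tool` dispatcher is supplied by the application (an example is sketched under Agentic Apps below).

```python
import json

def run_until_done(client, messages, tools, execute_tool, model="gpt-4o"):
    """Call the model repeatedly; each round runs the requested tools and feeds
    their output back, so a later call (e.g. book_flight) can depend on an
    earlier one (e.g. search_flights)."""
    while True:
        response = client.chat.completions.create(model=model, messages=messages, tools=tools)
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content  # no more tools needed; final answer
        messages.append(msg)
        for call in msg.tool_calls:
            result = execute_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
```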
### Parallel Execution

Independent tools run simultaneously for faster responses.
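When the model returns several independent tool calls in a single turn, they can be run concurrently, for example with a thread pool (`execute_tool` is the same application-supplied dispatcher as above):

```python
import json
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls_in_parallel(tool_calls, execute_tool, messages):
    """Execute independent tool calls from one model turn concurrently."""
    with ThreadPoolExecutor() as pool:
        futures = {
            call.id: pool.submit(execute_tool, call.function.name, json.loads(call.function.arguments))
            for call in tool_calls
        }
    # Leaving the `with` block waits for all submitted work to finish.
    for call_id, future in futures.items():
        messages.append({"role": "tool", "tool_call_id": call_id, "content": json.dumps(future.result())})
```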
## Tool Calling in Different Contexts

### Agentic Apps
Agents dynamically select tools based on reasoning.
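In an agentic app the set of tools is typically registered up front and the model picks among them at run time. A sketch of a registry-and-dispatch pattern (the tool names and functions are hypothetical); this `execute_tool` is the dispatcher assumed by the execution loops above:

```python
import json

# Hypothetical application functions exposed as tools.
def search_docs(query: str) -> dict:
    return {"results": [f"doc matching {query!r}"]}  # stand-in for a real search

def create_ticket(title: str, body: str) -> dict:
    return {"ticket_id": "TICKET-1"}  # stand-in for a real ticketing API

TOOL_REGISTRY = {
    "search_docs": search_docs,
    "create_ticket": create_ticket,
}

def execute_tool(name: str, arguments: dict) -> dict:
    """Dispatch a model-selected tool call to the matching Python function."""
    return TOOL_REGISTRY[name](**arguments)
```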
### Workflow Tools (AI Nodes)

AI nodes in workflows can be configured with tool access.
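The configuration shape varies by workflow product; as a purely hypothetical sketch, a node definition might pin the model and restrict which tools it can reach:

```python
# Purely hypothetical node configuration; key names are illustrative only.
ai_node_config = {
    "node_type": "ai",
    "model": "gpt-4o",
    "prompt": "Summarize the ticket and look up related orders if needed.",
    "tools": ["search_orders", "get_customer_profile"],  # only these tools are exposed
    "tool_choice": "auto",
}
```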
## Tool Choice Modes

Control how the LLM interacts with tools:

| Mode | Behavior |
|---|---|
| auto | LLM decides whether to use tools |
| required | LLM must use at least one tool |
| none | Tools are disabled for this request |
| specific | LLM must use a specified tool |
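With an OpenAI-style client these modes map onto the `tool_choice` request parameter (reusing `client`, `messages`, and `tools` from the earlier sketches); the forced-tool form at the end corresponds to the specific mode in the table:

```python
# Let the model decide (default).
client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools, tool_choice="auto")

# Force at least one tool call.
client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools, tool_choice="required")

# Disable tools for this request.
client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools, tool_choice="none")

# Force a specific tool.
client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools,
    tool_choice={"type": "function", "function": {"name": "search_orders"}},
)
```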
## Error Handling
### Tool Execution Failures
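There is no single required pattern here, but a common one is to catch the failure and return an error payload as the tool result, so the model can explain or retry instead of the run aborting. A sketch:

```python
import json

def safe_execute(call, tool_fn):
    """Run a tool call and convert any exception into an error result for the model."""
    try:
        result = tool_fn(**json.loads(call.function.arguments))
        return {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
    except Exception as exc:  # surface all failures to the model rather than crashing
        return {
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps({"error": f"{type(exc).__name__}: {exc}"}),
        }
```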
### Invalid Parameters
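Model-generated arguments are not guaranteed to match the declared schema. One way to guard against this is to validate before executing; this sketch uses the third-party `jsonschema` package, which is an assumption rather than a platform requirement:

```python
import json
from jsonschema import ValidationError, validate

def parse_arguments(call, parameters_schema):
    """Parse and validate model-generated arguments before executing the tool."""
    try:
        args = json.loads(call.function.arguments)
        validate(instance=args, schema=parameters_schema)
        return args, None
    except (json.JSONDecodeError, ValidationError) as exc:
        # Returning the validation message lets the model retry with corrected arguments.
        return None, {"error": f"Invalid parameters: {exc}"}
```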
### Timeout Handling
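A sketch of bounding tool execution time with a worker thread, so a hung external system does not stall the whole turn; the 10-second default is arbitrary:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

# Shared pool so a timed-out (still running) tool does not block later calls.
_pool = ThreadPoolExecutor(max_workers=4)

def execute_with_timeout(tool_fn, args, timeout_s=10.0):
    """Run a tool in a worker thread and stop waiting after timeout_s seconds.

    The worker thread may keep running in the background; the caller simply
    stops waiting and reports a timeout result to the model.
    """
    future = _pool.submit(tool_fn, **args)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        return {"error": f"Tool call exceeded {timeout_s} seconds"}
```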
## Observability
Track tool usage for debugging and optimization.

### Execution Trace
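A trace entry typically captures enough to reconstruct what happened. One possible record shape (field names are illustrative):

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class ToolCallTrace:
    """One record per tool invocation, for debugging and replay."""
    tool_name: str
    arguments: dict
    result: dict | None = None
    error: str | None = None
    started_at: float = field(default_factory=time.time)
    duration_ms: float = 0.0
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
```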
### Metrics
- Invocation count — How often each tool is used
- Success rate — Tool reliability
- Latency — Execution time
- Token impact — Tokens used for tool calls
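A sketch of aggregating these per tool in memory; a real deployment would export them to a metrics backend:

```python
from collections import defaultdict

class ToolMetrics:
    """Track invocation count, success rate, and latency per tool."""

    def __init__(self):
        self.calls = defaultdict(int)
        self.failures = defaultdict(int)
        self.total_latency_ms = defaultdict(float)

    def record(self, tool_name, latency_ms, ok):
        self.calls[tool_name] += 1
        self.total_latency_ms[tool_name] += latency_ms
        if not ok:
            self.failures[tool_name] += 1

    def success_rate(self, tool_name):
        calls = self.calls[tool_name]
        return 1.0 if calls == 0 else (calls - self.failures[tool_name]) / calls

    def avg_latency_ms(self, tool_name):
        calls = self.calls[tool_name]
        return 0.0 if calls == 0 else self.total_latency_ms[tool_name] / calls
```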
## Supported Models
Tool calling requires models with function calling support:

| Provider | Models |
|---|---|
| OpenAI | GPT-4, GPT-4o, GPT-3.5-turbo |
| Azure OpenAI | GPT-4, GPT-3.5-turbo |
| Anthropic | Claude 3 Opus, Sonnet, Haiku; Claude 3.5 Sonnet |
| Google | Gemini 1.5 Pro, Gemini 1.5 Flash |