AI for Process supports a wide range of AI models across major providers. You can use platform-hosted open-source models or connect to external providers through Easy Integration.

Open-Source Models

AI for Process supports over thirty open-source models and provides them as a service. Platform-hosted models can be optimized prior to deployment, making them ideal for private environments or specialized applications. The supported model variants, grouped by provider, are:

Amazon
• amazon/MistralLite

Argilla
• argilla/notus-7b-v1
• argilla/notux-8x7b-v1

DeepSeek
• deepseek-ai/DeepSeek-R1-Distill-Llama-8B
• deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
• deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
• deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

EleutherAI
• EleutherAI/gpt-j-6b
• EleutherAI/gpt-neo-1.3B
• EleutherAI/gpt-neo-125m
• EleutherAI/gpt-neo-2.7B
• EleutherAI/gpt-neox-20b

Facebook
• facebook/opt-1.3b
• facebook/opt-2.7b
• facebook/opt-350m
• facebook/opt-6.7b

Google
• google/flan-t5-base
• google/flan-t5-large
• google/flan-t5-small
• google/flan-t5-xl
• google/flan-t5-xxl
• google/gemma-2-27b-it
• google/gemma-2-9b-it
• google/gemma-2b
• google/gemma-2b-it
• google/gemma-3-12b-it
• google/gemma-7b
• google/gemma-7b-it

Helsinki-NLP
• Helsinki-NLP/opus-mt-es-en

HuggingFaceH4
• HuggingFaceH4/zephyr-7b-alpha
• HuggingFaceH4/zephyr-7b-beta

Meta Llama
• meta-llama/Llama-2-13b-hf
• meta-llama/Llama-2-7b-hf
• meta-llama/Llama-3.2-1B
• meta-llama/Llama-3.2-1B-Instruct
• meta-llama/Llama-3.2-3B
• meta-llama/Llama-3.2-3B-Instruct
• meta-llama/Llama-3.2-11B-Vision-Instruct
• meta-llama/Llama-Guard-4-12B
• meta-llama/Meta-Llama-3-8B
• meta-llama/Meta-Llama-3-8B-Instruct
• meta-llama/Meta-Llama-3.1-8B
• meta-llama/Meta-Llama-3.1-8B-Instruct

Microsoft
• microsoft/phi-1
• microsoft/phi-1_5
• microsoft/phi-2
• microsoft/Phi-3-medium-128k-instruct
• microsoft/Phi-3-medium-4k-instruct
• microsoft/Phi-3-mini-128k-instruct
• microsoft/Phi-3-mini-4k-instruct

Mistral AI
• mistralai/Mistral-7B-Instruct-v0.1
• mistralai/Mistral-7B-Instruct-v0.2
• mistralai/Mistral-7B-Instruct-v0.3
• mistralai/Mistral-7B-v0.1
• mistralai/Mistral-Nemo-Instruct-2407
• mistralai/Mixtral-8x7B-Instruct-v0.1
• mistralai/Mixtral-8x7B-v0.1

OpenAI
• GPT2

OpenAI Community
• openai-community/gpt2-large
• openai-community/gpt2-medium
• openai-community/gpt2-xl

Stable Diffusion
• stabilityai/stable-diffusion-xl-base-1.0
• stabilityai/stable-diffusion-2-1
• stable-diffusion-v1-5/stable-diffusion-v1-5
(Available only in the Text-to-image node; no Prompt Studio support.)

T5
• t5-base
• t5-large
• t5-small

TII
• tiiuae/falcon-40b
• tiiuae/falcon-40b-instruct
• tiiuae/falcon-7b
• tiiuae/falcon-7b-instruct
• tiiuae/falcon-rw-1b

Xiaomi
• MiMo-7B-VL-RL

Structured Output

Platform-hosted open-source models can produce structured JSON responses, making outputs consistent and easy to parse.
  • Structured output support depends on the optimization technique used: it is available with no optimization or with vLLM.
  • Models optimized with CT2, fine-tuned models, Hugging Face imports, and locally imported models do not support structured output.
The following models support structured JSON output (with vLLM, with no optimization, or both):
amazon/MistralLite
argilla/notus-7b-v1
EleutherAI/gpt-j-6b
facebook/opt-1.3b
facebook/opt-2.7b
facebook/opt-350m
facebook/opt-6.7b
google/gemma-2b
google/gemma-2b-it
google/gemma-7b
google/gemma-7b-it
HuggingFaceH4/zephyr-7b-alpha
HuggingFaceH4/zephyr-7b-beta
meta-llama/Llama-2-7b-chat-hf
meta-llama/Llama-2-7b-hf
meta-llama/Llama-3.2-1B
meta-llama/Llama-3.2-1B-Instruct
meta-llama/Llama-3.2-3B
meta-llama/Llama-3.2-3B-Instruct
meta-llama/Meta-Llama-3-8B
meta-llama/Meta-Llama-3-8B-Instruct
meta-llama/Meta-Llama-3.1-8B
meta-llama/Meta-Llama-3.1-8B-Instruct
microsoft/Phi-3-medium-128k-instruct
microsoft/Phi-3-medium-4k-instruct
microsoft/Phi-3-mini-128k-instruct
microsoft/Phi-3-mini-4k-instruct
microsoft/phi-1
microsoft/phi-1_5
microsoft/phi-2
mistralai/Mistral-7B-Instruct-v0.1
mistralai/Mistral-7B-Instruct-v0.2
mistralai/Mistral-7B-Instruct-v0.3
mistralai/Mistral-7B-v0.1
openai-community/gpt2-large
openai-community/gpt2-medium
openai-community/gpt2-xl
tiiuae/falcon-7b
tiiuae/falcon-7b-instruct
tiiuae/falcon-rw-1b
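Structured output is typically enforced by attaching a JSON Schema to the request. A minimal sketch of building such a request body and validating the reply; the `response_format` shape follows the OpenAI/vLLM convention and is an assumption here, not a documented AI for Process parameter:

```python
import json

# JSON Schema describing the structure we want the model to return
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["vendor", "total"],
}

# Request body for an OpenAI-compatible chat endpoint; the "response_format"
# field mirrors the OpenAI/vLLM convention and is an assumption, not a
# confirmed platform parameter.
payload = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [{"role": "user", "content": "Extract the vendor and total."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "invoice", "schema": invoice_schema},
    },
}

def parse_structured(raw: str) -> dict:
    """Parse the model's reply and check that the required keys are present."""
    data = json.loads(raw)
    missing = [k for k in invoice_schema["required"] if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Example with a well-formed reply
print(parse_structured('{"vendor": "Acme", "total": 99.5}'))
```

Validating the reply on the client side is worthwhile even with schema-guided decoding, since only some optimization paths (vLLM or no optimization) guarantee conforming output.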

External Models for Easy Integration

With Easy Integration, you can connect to external model providers such as OpenAI, Anthropic, Google, Cohere, and Amazon Bedrock. No infrastructure setup is required: authenticate and start deploying models. The following table lists all external models supported in AI for Process:
Anthropic
• claude-3-5-sonnet-20240620
• claude-3-haiku-20240307
• claude-3-opus-20240229
• claude-3-sonnet-20240229
• claude-2.1
• claude-2.0
• claude-3-7-sonnet-20250219
• claude-3-5-sonnet-20241022
• claude-3-5-haiku-20241022
• claude-sonnet-4-20250514
• claude-opus-4-20250514
• claude-opus-4-1-20250805
• Claude Sonnet Vision (Available only for the Image-to-text node; no Prompt Studio support.)

Azure OpenAI
• GPT-4
• GPT-3.5-Turbo
• GPT-4o-Mini
• GPT-4o
• GPT-4.1
• GPT-4.1-mini
• GPT-4.1-nano
• GPT-4.5-preview
• O1-Mini
• O1
• O3-Mini

Cohere
• command-light-nightly
• command-light
• command
• command-nightly

Google
• gemini-1.5-flash-latest
• gemini-1.5-pro
• gemini-1.0-pro
• gemini-2.5-pro
• gemini-2.0-flash
• gemini-2.0-flash-lite
• gemini-2.5-flash-preview-05-20
• gemini-2.5-flash

OpenAI
• gpt-4o
• gpt-4o-mini
• gpt-3.5-turbo
• gpt-3.5-turbo-1106
• gpt-4-0613
• o1-preview
• o1-mini
• o3-mini
• gpt-4-0125-preview
• gpt-4-turbo-preview
• gpt-4-1106-preview
• gpt-5-2025-08-07
• gpt-5-nano-2025-08-07
• gpt-5-mini-2025-08-07
• gpt-5-chat-latest
• gpt-4
• whisper (Available only for the Audio-to-text node; no Prompt Studio support.)
• whisper-1
• gpt-4o-realtime-preview
• gpt-4o-mini-realtime-preview
• gpt-4.1-2025-04-14
• gpt-4.1-mini-2025-04-14
• gpt-4.1-nano-2025-04-14
• gpt-4.5-preview-2025-02-27
• dall-e-3
• dall-e-2
• text-embedding-3-small
• text-embedding-3-large
• text-embedding-ada-002