Custom LLM Configuration
Introduction
This release introduces Custom LLM Configuration in Unifize.
Organizations can now explicitly configure how AI behaves across structured workflows, including prompt governance and model selection. This improves consistency, auditability, and execution control when using AI within checklist-based fields.
AI behavior no longer depends on implicit defaults; it is governed by explicit configuration defined at the organization level.
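To make this concrete, here is a minimal sketch of what an organization-level configuration payload could look like. The field names, values, and validation helper are illustrative assumptions, not Unifize's actual schema.

```python
# A minimal sketch of an organization-level LLM configuration.
# All field names and values are illustrative assumptions,
# not Unifize's actual schema.
org_llm_config = {
    "organization_id": "org_acme",
    "provider": "azure_openai",   # must be an approved provider
    "model": "gpt-4o",            # explicit model selection
    "system_prompt": (
        "You are an assistant operating inside a regulated "
        "quality-management workflow. Answer only from the "
        "checklist context provided."
    ),
    "temperature": 0.2,           # deterministic-leaning output
}

def validate_config(config: dict) -> None:
    """Reject configurations missing governed fields (illustrative)."""
    required = {"organization_id", "provider", "model", "system_prompt"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"Missing governed fields: {sorted(missing)}")

validate_config(org_llm_config)
```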
Release Capabilities
Supported Model Providers
Unifize supports integration with a broad ecosystem of AI model providers. This enables flexibility across cloud, enterprise, hosted, and private model deployments.
Supported providers include:
Enterprise & Foundation Model Providers
OpenAI
Anthropic
Azure OpenAI / Azure AI
AWS Bedrock
AWS SageMaker
Google Vertex AI
Google Gemini (AI Studio)
Cohere
Mistral
Groq
NVIDIA NIM
IBM Watsonx
Hugging Face
Databricks
Snowflake
Together AI
Replicate
Perplexity
xAI
Additional Platforms & Model Hosts
DeepSeek
DeepInfra
Fireworks AI
ElevenLabs
AssemblyAI
Meta Llama API
Ollama / Ollama Chat
OpenRouter
Vercel AI Gateway
SambaNova
Predibase
Novita AI
Nscale
Lambda AI
Cloudflare AI Workers
GitHub Models
GitHub Copilot
Watsonx Text
Voyage AI
Volcengine
Triton
vLLM / Hosted vLLM
This breadth allows organizations to use approved vendors, regional deployments, or private model infrastructure without redesigning workflows.
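Because the provider and model live in configuration rather than in the workflow definition, changing vendors becomes a configuration edit. The sketch below illustrates that separation; the provider keys, completion functions, and run_ai_field dispatcher are hypothetical, not the Unifize API.

```python
# Illustrative dispatcher: workflow code asks for an AI completion and
# never names a vendor; the organization configuration decides which
# backend runs. Provider keys and completion functions are hypothetical.
from typing import Callable

def openai_complete(model: str, prompt: str) -> str:
    return f"[openai:{model}] {prompt[:40]}..."

def bedrock_complete(model: str, prompt: str) -> str:
    return f"[aws_bedrock:{model}] {prompt[:40]}..."

PROVIDERS: dict[str, Callable[[str, str], str]] = {
    "openai": openai_complete,
    "aws_bedrock": bedrock_complete,
}

def run_ai_field(config: dict, prompt: str) -> str:
    """Route a checklist-field prompt to the provider named in the
    organization configuration."""
    complete = PROVIDERS[config["provider"]]
    return complete(config["model"], prompt)

# Switching vendors is a config edit, not a workflow redesign:
print(run_ai_field({"provider": "openai", "model": "gpt-4o"},
                   "Summarize deviation DV-102"))
print(run_ai_field({"provider": "aws_bedrock",
                    "model": "anthropic.claude-3-5-sonnet"},
                   "Summarize deviation DV-102"))
```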
Before vs After
Before: AI behavior relied on implicit or static configuration.
After: AI behavior is governed by explicit organization-level configuration.

Before: Limited visibility into how prompts were applied.
After: Clear System Prompt control and precedence.

Before: Model flexibility was constrained.
After: Explicit model selection is supported.

Before: AI execution behavior was harder to audit.
After: Execution is configuration-driven and traceable.
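As one illustration of what System Prompt precedence could look like in practice, the sketch below resolves an effective prompt from broader to narrower scopes. The three-level hierarchy (organization, workflow, field) and its ordering are assumptions for illustration, not documented precedence rules.

```python
# Hypothetical precedence resolution: the most specific configured
# System Prompt wins, falling back to broader scopes. The scope names
# and their ordering are illustrative assumptions.
from typing import Optional

def effective_system_prompt(
    org_prompt: Optional[str],
    workflow_prompt: Optional[str],
    field_prompt: Optional[str],
) -> str:
    # Field-level overrides workflow-level, which overrides org-level.
    for prompt in (field_prompt, workflow_prompt, org_prompt):
        if prompt:
            return prompt
    raise ValueError("No System Prompt configured at any level")

print(effective_system_prompt("Org default prompt", None, None))
print(effective_system_prompt("Org default prompt", None,
                              "Field-specific prompt"))
```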