Custom LLM Configuration

Introduction

This release introduces Custom LLM Configuration in Unifize.

Organizations can now explicitly configure how AI behaves across structured workflows, including prompt governance and model selection. This improves consistency, auditability, and execution control when using AI within checklist-based fields.

AI behavior is no longer dependent on implicit defaults. It is now governed through defined configuration at the organization level.

Release Capabilities

  1. Organization-Level System Prompt

    1. Admins can define a System Prompt for the organization (see the configuration sketch after this list).

    2. The System Prompt applies across AI-enabled fields.

    3. Changes are effective for new executions.

    4. Access is restricted to authorized administrators.

  2. Model Selection

    1. Organizations can select which AI model is used for execution.

    2. Model selection is explicit and configurable.

    3. Switching models does not change governance or field-level controls.

  3. Structured AI Execution

    1. AI is triggered only within defined AI-powered checklist fields.

    2. Output is returned in a structured format aligned to Unifize schemas (see the sketch after this list).

    3. AI execution is scoped to the record and field context.

    4. AI remains assistive: user validation is required before applying outputs.

  4. Deterministic Configuration Behavior

    1. AI behavior is governed by explicit configuration.

    2. Prompt precedence is enforced.

    3. No hidden or implicit AI instructions are applied.

    4. Configuration changes are traceable.
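
Taken together, these capabilities make AI behavior a declared artifact rather than an implicit default. As a minimal sketch, assuming hypothetical field names such as `system_prompt` and `model` (this is not Unifize's actual configuration schema), an organization-level configuration and one plausible prompt-precedence rule might look like this:

```python
# Illustrative sketch only: field names and structure are assumptions,
# not Unifize's actual configuration schema.

org_ai_config = {
    "organization_id": "org-123",   # hypothetical identifier
    "system_prompt": (
        "You are an assistant for quality workflows. "
        "Answer only from the provided record and field context."
    ),
    "model": "gpt-4o",              # explicit, admin-selected model
}


def resolve_prompt(org_config: dict, field_prompt: str) -> str:
    """One plausible precedence rule: the organization-level System Prompt
    comes first, followed by the field-level prompt, with no hidden
    instructions appended in between."""
    return f"{org_config['system_prompt']}\n\n{field_prompt}"


print(resolve_prompt(org_ai_config, "Summarize the deviation described in this record."))
```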
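
Structured AI Execution means outputs arrive as schema-aligned data scoped to a record and field, and nothing is applied without user confirmation. The sketch below illustrates that flow; `FieldSuggestion` and `apply_if_confirmed` are hypothetical names introduced for illustration, not part of Unifize's API.

```python
from dataclasses import dataclass


@dataclass
class FieldSuggestion:
    """Hypothetical shape of a schema-aligned AI output for one checklist field."""
    record_id: str
    field_id: str
    value: str


def apply_if_confirmed(suggestion: FieldSuggestion, user_confirmed: bool) -> bool:
    """AI remains assistive: a suggested value is applied only after explicit
    user validation. This gate is a sketch, not Unifize's real API."""
    if not user_confirmed:
        return False
    # A real implementation would write the value to the record's field here.
    print(f"Applied {suggestion.value!r} to {suggestion.record_id}/{suggestion.field_id}")
    return True


suggestion = FieldSuggestion("rec-001", "root-cause", "Seal degradation due to heat cycling")
apply_if_confirmed(suggestion, user_confirmed=True)
```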

Supported Model Providers

Unifize supports integration with a broad ecosystem of AI model providers. This enables flexibility across cloud, enterprise, hosted, and private model deployments.

Supported providers include:

Enterprise & Foundation Model Providers

  • OpenAI

  • Anthropic

  • Azure OpenAI / Azure AI

  • AWS Bedrock

  • AWS SageMaker

  • Google Vertex AI

  • Google Gemini (AI Studio)

  • Cohere

  • Mistral

  • Groq

  • NVIDIA NIM

  • IBM Watsonx

  • Hugging Face

  • Databricks

  • Snowflake

  • Together AI

  • Replicate

  • Perplexity

  • xAI

Additional Platforms & Model Hosts

  • DeepSeek

  • DeepInfra

  • Fireworks AI

  • ElevenLabs

  • AssemblyAI

  • Meta Llama API

  • Ollama / Ollama Chat

  • OpenRouter

  • Vercel AI Gateway

  • SambaNova

  • Predibase

  • Novita AI

  • Nscale

  • Lambda AI

  • Cloudflare AI Workers

  • GitHub Models

  • GitHub Copilot

  • Watsonx Text

  • Voyage AI

  • Volcengine

  • Triton

  • vLLM / Hosted vLLM

This allows organizations to use approved vendors, regional deployments, or private model infrastructure without redesigning workflows.
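
Because model selection is explicit, moving between approved vendors, regional deployments, or private infrastructure is a configuration change rather than a workflow redesign. As a sketch, assuming provider-prefixed model identifiers (a convention common to multi-provider routing layers, not a confirmed Unifize format):

```python
# Illustrative only: provider-prefixed model identifiers and the policy
# names below are assumptions, not Unifize's actual format.

approved_models = {
    "cloud":    "openai/gpt-4o",
    "regional": "azure/eu-west-deployment",   # hypothetical regional deployment
    "private":  "ollama/llama3",              # self-hosted model
}


def select_model(deployment_policy: str) -> str:
    """Switching vendors is a one-line configuration change; prompts,
    field-level controls, and governance stay untouched."""
    return approved_models[deployment_policy]


print(select_model("private"))
```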

Before vs After

Before: AI behavior relied on implicit or static configuration.
After: AI behavior is governed by explicit organization-level configuration.

Before: Limited visibility into how prompts were applied.
After: Clear System Prompt control and precedence.

Before: Model flexibility was constrained.
After: Explicit model selection is supported.

Before: AI execution behavior was harder to audit.
After: Execution is configuration-driven and traceable.
