Model Hub
Accessing the Model Hub to select, configure, and manage Large Language Models (LLMs) used by agents.
The Model Hub is your gateway to the diverse array of Large Language Models (LLMs) that power the intelligence of your Agents and Skills. Uptiq’s architecture is designed to be model-agnostic, meaning you have the control to select the best model for any specific task—whether you prioritize speed, accuracy, or cost.
Accessing and Selecting LLMs
The platform allows for precise selection of the model used for any AI-driven task, such as prompt execution or classification.
Model Selection: When configuring an AI Skill (like the Prompt Skill or Intent Classification Skill), a dropdown menu allows you to select the appropriate model (e.g., Gemini-2.5-Flash, gpt-4). The platform supports a continuously growing list of LLMs.
Agent-Level Default: While individual skills can override the selection, the Agent Builder allows you to set a default Agent Mode (Advanced, Balanced, Flash), which correlates to the quality and cost of the underlying model used for that Agent's core reasoning.
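The override-vs-default relationship can be sketched as a small resolution rule: a skill with an explicit model keeps it; otherwise the Agent Mode decides. The `AgentConfig`/`SkillConfig` names and the mode-to-model mapping below are illustrative assumptions, not Uptiq's actual API.

```python
# Illustrative sketch: skill-level model choice overrides the agent-level
# default. All names and the mode mapping here are assumptions for
# demonstration, not documented Uptiq structures.
from dataclasses import dataclass, field
from typing import Optional

# Assumed mapping from Agent Mode to an underlying model tier.
MODE_TO_MODEL = {
    "Flash": "gemini-2.5-flash",
    "Balanced": "gpt-4",
    "Advanced": "gpt-4",
}

@dataclass
class SkillConfig:
    name: str
    model: Optional[str] = None  # None means "inherit the agent default"

@dataclass
class AgentConfig:
    mode: str = "Balanced"
    skills: list = field(default_factory=list)

def resolve_model(agent: AgentConfig, skill: SkillConfig) -> str:
    """Skill-level selection wins; otherwise fall back to the Agent Mode."""
    return skill.model or MODE_TO_MODEL[agent.mode]

agent = AgentConfig(mode="Flash", skills=[
    SkillConfig("Prompt Skill", model="gpt-4"),  # explicit override
    SkillConfig("Intent Classification"),        # inherits agent default
])

print(resolve_model(agent, agent.skills[0]))  # gpt-4
print(resolve_model(agent, agent.skills[1]))  # gemini-2.5-flash
```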
Configuration and Optimization
Certain Skills provide advanced controls to fine-tune the LLM's behavior and performance for enterprise-specific needs:
| Setting | Purpose | Recommended Use Case |
| --- | --- | --- |
| Temperature | Controls the creativity or "randomness" of the model's response. | Use a low temperature (e.g., 0.2) for factual tasks such as RAG, where grounding and consistency are critical; use a higher temperature for creative text generation or brainstorming. |
| Response Format | Defines the expected output structure from the model. | Defaults to plain text. Set this to JSON when a subsequent workflow step, such as a Mapper or Ruleset Skill, requires structured output (e.g., a list of extracted entities or a decision object). |
| Num of Conversation Turns | Sets the number of previous messages the LLM should remember. | Used primarily for multi-turn conversational Agents to maintain context and history, keeping the Agent relevant across a longer interaction. |
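The three settings above can be seen working together in a request sketch: temperature and response format shape the payload, while the turn count trims the history passed to the model. The payload shape is a generic chat-completion style assumed for illustration; it is not a documented Uptiq structure.

```python
# Hedged sketch of how Temperature, Response Format, and Num of
# Conversation Turns might shape an LLM request. The payload layout is an
# assumption for illustration only.
def build_request(prompt: str,
                  history: list,
                  temperature: float = 0.2,       # low for factual / RAG tasks
                  response_format: str = "text",  # "json" for Mapper/Ruleset steps
                  num_turns: int = 4) -> dict:
    # Keep only the last `num_turns` messages so the model retains recent
    # context without carrying an unbounded history.
    trimmed = history[-num_turns:]
    request = {
        "messages": trimmed + [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    if response_format == "json":
        request["response_format"] = {"type": "json_object"}
    return request

history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
req = build_request("Summarize my balance history", history,
                    temperature=0.2, response_format="json", num_turns=4)
print(len(req["messages"]))  # 5: four remembered turns plus the new prompt
```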
Specialized AI Capabilities
Uptiq uses its integrated models to provide specialized, pre-packaged AI Skills, ensuring complex tasks are handled reliably and consistently:
PII Guard: This specialized AI Skill automatically detects and masks sensitive data (SSNs, emails, phone numbers) in text, replacing each value with a generic placeholder, independent of the primary LLM used for reasoning.
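The detect-and-replace behavior can be illustrated with a minimal regex-based masker. A production PII Guard would use far more robust detection; the patterns and placeholder labels below are simplified assumptions.

```python
# Minimal sketch of PII masking, illustrating detect-and-replace with
# generic placeholders. Patterns are deliberately simplified; real PII
# detection is considerably more involved.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII value with a generic placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Call 555-123-4567 or email jane@example.com, SSN 123-45-6789."))
# Call [PHONE] or email [EMAIL], SSN [SSN].
```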
Intent Classification: This Skill uses a dedicated LLM to determine the user's underlying goal (e.g., CheckAccountBalance or ApplyForLoan) from a natural language query, which is crucial for routing the workflow correctly.