
How do I configure the OpenAI chat model node?#

A chat model is a type of Large Language Model (LLM) that returns text that is statistically likely to meet the requirements of the user prompt.

The OpenAI chat model uses tokenisation, embedding, and transformation functions to determine responses to prompts.

Step 1 - Add the OpenAI chat model#

  • Click the Models button on the Agent.
  • Search for OpenAI chat model in the node search field.
  • Click the model to link it to the agent automatically.

Step 2 - Set OpenAI parameters#

  • Choose an OpenAI credential.
  • Choose an OpenAI model from those available in your account.

Step 3 - Decide how to add parameters#

There are three ways to add parameters to Fixed or Expression fields in a given node:

| Method | Description | Additional information |
| --- | --- | --- |
| Load from previous nodes | Click Execute previous nodes in the Input panel to load parameters from connected and configured nodes | Not available for Triggers |
| Add a fixed value | Click Fixed on available fields to enter plain-text or JSON values | Learn how to edit fixed responses |
| Add a JSON expression | Click Expression on available fields to enter a JSON expression | Learn how to edit expressions |

Note

The Expression editor loads all possible parameters from connected nodes. These can then be added to fields as required.

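For example, an Expression field can reference a value output by a previous node using the editor's double-brace syntax. The field name `chatInput` below is a hypothetical placeholder; substitute whichever field your connected node actually outputs:

```
{{ $json.chatInput }}
```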
Step 4 - Define optional LLM token parameters#

Click Add option to define any of the following fields:

| Field | Parameter | Description | Additional information |
| --- | --- | --- | --- |
| Frequency penalty | frequency_penalty | Penalizes repeated tokens in the text sent by the Agent node, which may otherwise result in repetition | Positive decimal values decrease repetition |
| Maximum number of tokens | max_tokens | The maximum number of tokens to generate | OpenAI max_tokens parameter |
| Presence penalty | presence_penalty | Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics | OpenAI presence_penalty parameter |

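To make these fields concrete, here is a sketch of how the token parameters above might appear in a Chat Completions request body. The model name and all values are illustrative assumptions, not recommendations:

```python
# Illustrative request payload; parameter names match the table above.
# Model name and values are example assumptions, not recommendations.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarise the customer ticket."}],
    "frequency_penalty": 0.5,   # positive decimals reduce verbatim repetition
    "max_tokens": 256,          # upper bound on generated tokens
    "presence_penalty": 0.3,    # positive values nudge the model toward new topics
}

# OpenAI accepts penalty values between -2.0 and 2.0.
for key in ("frequency_penalty", "presence_penalty"):
    assert -2.0 <= payload[key] <= 2.0
```
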
Step 5 - Define optional LLM response parameters#

Click Add option to define any of the following fields:

| Field | Parameter | Description | Additional information |
| --- | --- | --- | --- |
| Response format | response_format | Choose plain text or valid JSON | |
| Timeout | timeout | Integer value representing the milliseconds before an LLM response should time out | |
| Max retries | max_retries | Integer value representing the number of times the LLM should retry processing the prompt to create a response | |

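As a sketch, the response fields above map onto client-side settings like these (the values are illustrative, not recommendations):

```python
# Illustrative response-handling options mirroring the fields above.
response_options = {
    "response_format": {"type": "json_object"},  # request valid JSON instead of plain text
    "timeout": 60_000,    # milliseconds to wait before the request times out
    "max_retries": 2,     # how many times to retry a failed request
}

# Timeout and retries should both be non-negative integers.
assert isinstance(response_options["timeout"], int) and response_options["timeout"] > 0
assert response_options["max_retries"] >= 0
```
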
Step 6 - Define optional sampling methods#

Choose one of the following to control how repetitive or varied responses are:

| Field | Parameter | Description | Additional information |
| --- | --- | --- | --- |
| Sampling temperature | temperature | Values closer to zero make responses more focused and repetitive; higher values make them more varied | OpenAI sampling temperature parameter |
| Top P | top_p | Scale of decimal values between 0 and 1 that determines the diversity of responses using nucleus sampling | OpenAI top_p nucleus sampling parameter |
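A brief sketch of the two sampling controls: OpenAI's general guidance is to adjust one or the other, not both. The values below are illustrative examples:

```python
# Pick one sampling control per request; values are illustrative.
deterministic = {"temperature": 0.2}  # near zero: focused, repetitive output
diverse = {"top_p": 0.9}              # nucleus sampling over the top 90% of probability mass

# temperature ranges from 0 to 2; top_p is a probability mass between 0 and 1.
assert 0.0 <= deterministic["temperature"] <= 2.0
assert 0.0 < diverse["top_p"] <= 1.0
```
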