When using OpenAI or ChatGPT in your Zaps, you can adjust the parameters of the large language model (LLM) to improve its results.
You can adjust these parameters:
- Temperature
- Maximum Length
- Top P
Temperature
Temperature controls the level of randomness in the model's responses. Lowering the temperature results in less random, more predictable responses, best suited for tasks such as following instructions. Higher temperatures result in more creative responses, best suited for creative writing. This field is useful in scenarios where you want to control the predictability of the output.
You can set the temperature between 0.0 and 2.0.
If you set the Temperature field, it is recommended to avoid setting the Top P field.
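Under the hood, temperature rescales the probabilities the model assigns to candidate words before one is picked. Here is a minimal sketch of that idea in Python (illustrative only, not how OpenAI actually implements it; the logit values are made up):

```python
import math

def apply_temperature(logits, temperature):
    """Divide each logit by the temperature, then softmax into probabilities.

    Lower temperature sharpens the distribution (more predictable choices);
    higher temperature flattens it (more random choices).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]           # hypothetical scores for three candidate words
cool = apply_temperature(logits, 0.2)  # near-deterministic: top word dominates
hot = apply_temperature(logits, 2.0)   # flatter: other words get a real chance
```

With a temperature of 0.2, the top-scoring word ends up with nearly all the probability; at 2.0, the same three words are much closer together.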
Maximum Length
Maximum Length allows you to set the maximum number of tokens used across your input and output. This means the limit includes the combined count of tokens from your prompt and the model’s response.
You can set the Maximum Length field with a number value. Token limits will vary depending on what specific model you use, such as different versions of the GPT models. To find the token limits of the model you’re working with, refer to OpenAI’s documentation.
If you leave this field blank, Zapier will attempt to calculate this field automatically, based on your selected model, to avoid going over your model’s output token limit.
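Because the limit covers the prompt and the response combined, the response budget is whatever the prompt leaves over. A small sketch of that arithmetic (the context window and token counts below are made-up numbers; check OpenAI's documentation for your model's real limits, and use a tokenizer to count prompt tokens accurately):

```python
def max_output_tokens(context_window, prompt_tokens, requested_max=None):
    """Return the output token budget: the context window minus the prompt,
    capped by an explicit Maximum Length setting if one was provided."""
    remaining = context_window - prompt_tokens
    if requested_max is None:
        return max(remaining, 0)  # blank field: use everything that's left
    return max(min(requested_max, remaining), 0)

# Illustrative numbers only, not a real model's limits.
budget = max_output_tokens(context_window=8192, prompt_tokens=500)
```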
Top P
Setting the Top P field helps a language model choose the best words when it’s writing. It does this by only looking at the most likely candidates. For example, if the Top P field is set to 0.1, the model only considers the smallest set of words that together make up the top 10% of the probability, the words most likely to be a good fit. This way, Top P helps the model pick words carefully, making sure each word it chooses is one of the best options.
This field is useful to use when wanting to maintain a balance between creativity and relevance. It allows for more randomness but within a controlled and focused set of possibilities.
You can set the Top P field between 0.0 and 1.0.
If you set the Top P field, it is recommended to avoid setting the Temperature field.
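In more concrete terms, Top P (often called nucleus sampling) keeps the smallest set of words whose probabilities add up to the chosen threshold and discards the rest. A minimal sketch (the word probabilities here are invented for illustration):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p, then renormalize so the kept probabilities sum to 1.

    probs maps each candidate token to its probability.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break  # the "nucleus" is complete; drop everything less likely
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}
nucleus = top_p_filter(probs, 0.8)  # keeps only "the" and "a"
```

With Top P at 0.8, "the" and "a" together reach the 80% threshold, so the unlikely "zebra" can never be chosen.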
Other advanced fields for OpenAI
Stop Sequences
Stop sequences are a specific sequence of words, phrases, or characters that instruct the model to stop generating further content. This is useful when controlling where the model ends its response, especially if you’re using structured formats such as emails, letters, or scripts, where a specific closing phrase is expected.
When a stop sequence is encountered, the model’s response will be truncated at that point. This means the output may end with or just before the stop sequence.
You can add up to 4 sequences in the Stop Sequence field. The model will stop generating content upon reaching the first occurrence of any of these stop sequences.
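The truncation behavior can be sketched in a few lines of Python (illustrative only; the sample text and stop sequences are made up):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut the text at the earliest occurrence of any stop sequence.
    The stop sequence itself is not included in the output."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)  # stop at whichever sequence appears first
    return text[:cut]

reply = "Best regards,\nJordan\nP.S. One more thing..."
trimmed = truncate_at_stop(reply, ["P.S.", "Sincerely,"])
# everything from "P.S." onward is dropped
```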
Frequency Penalty
This field allows you to penalize words that appear too frequently in the text. This is useful in scenarios where you want to avoid excessive repetition of certain words or phrases, but are okay with them appearing multiple times, as long as they aren’t overused. For example, in a detailed article, certain key terms might naturally need to be mentioned multiple times, but you don’t want them to dominate the text.
To set this field, you can add a number between -2.0 and 2.0.
A higher penalty value discourages the model from reusing words it has already used, making it less likely to repeat phrases exactly.
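One common way to picture this: each word's score is reduced in proportion to how many times it has already appeared, so a word used three times is penalized more than a word used once. A sketch of that idea (illustrative only, with made-up scores; not OpenAI's actual implementation):

```python
def apply_frequency_penalty(logits, counts, penalty):
    """Subtract penalty * (number of times the token already appeared)
    from each token's score, so heavily repeated words become less likely."""
    return {t: score - penalty * counts.get(t, 0)
            for t, score in logits.items()}

logits = {"cloud": 2.0, "sky": 2.0}     # hypothetical scores for the next word
counts = {"cloud": 3}                   # "cloud" has appeared 3 times already
adjusted = apply_frequency_penalty(logits, counts, penalty=0.5)
# "cloud" drops to 0.5 while "sky" stays at 2.0
```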
Presence Penalty
This field allows you to penalize words or phrases that have already been used, encouraging the model to introduce new concepts. This is useful for tasks such as generating ideas, where you want unique items, or when you want each sentence or paragraph to introduce new concepts rather than reiterate what has already been said.
To set this field, you can add a number between -2.0 and 2.0.
Positive values penalize tokens that have already appeared in the text so far, increasing the model’s likelihood to talk about new topics.
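Compared with the frequency penalty above, the presence penalty is a flat, one-time reduction: it only matters whether a word has appeared at all, not how often. A sketch of the difference (illustrative scores only, not OpenAI's actual implementation):

```python
def apply_presence_penalty(logits, seen, penalty):
    """Subtract a flat penalty from any token that has already appeared,
    regardless of how many times, nudging the model toward new topics."""
    return {t: score - (penalty if t in seen else 0.0)
            for t, score in logits.items()}

logits = {"cats": 1.5, "dogs": 1.5}   # hypothetical scores for the next word
seen = {"cats"}                       # "cats" has been mentioned already
adjusted = apply_presence_penalty(logits, seen, penalty=0.6)
# "cats" drops to 0.9 while the unmentioned "dogs" stays at 1.5
```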