When using Google Vertex AI actions in your Zaps, you can adjust the parameters of the large language model (LLM) to improve its results.
You can adjust these parameters:
- Temperature
- Max Output Tokens
- Top P
- Top K
Temperature
Temperature controls the degree of randomness in the model's response. A lower value makes the model more deterministic, selecting the most probable tokens, while a higher value produces more unexpected, creative responses.
You can set the temperature between 0.0 and 1.0.
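Conceptually, temperature rescales the model's raw scores (logits) before they are turned into probabilities. The sketch below is illustrative only; the function name and values are assumptions, not part of the Vertex AI API. Note that a temperature of exactly 0 is typically treated as greedy (always pick the top token) rather than divided through.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities, scaled by temperature.

    Lower temperatures sharpen the distribution toward the most
    probable token; higher temperatures flatten it out.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                         # hypothetical token scores
cool = softmax_with_temperature(logits, 0.2)     # near-deterministic
warm = softmax_with_temperature(logits, 1.0)     # more spread out
```

With the low temperature, almost all probability mass lands on the top-scoring token; with the higher temperature, the other tokens keep a realistic chance of being picked.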
Max Output Tokens
Max Output Tokens is how you define the maximum number of tokens to generate as a response. A token is approximately 4 characters, so 100 tokens correspond to roughly 60-80 words.
You can set the Max Output Tokens between 1 and 1024.
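The 4-characters-per-token figure is only a rule of thumb, but it is handy for sanity-checking whether a limit will truncate your output. This helper is a hypothetical sketch, not a real tokenizer:

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4 characters per token
    rule of thumb. Real tokenizers will differ per model."""
    return max(1, round(len(text) / 4))

def fits_in_limit(text, max_output_tokens):
    """Check whether an expected response of this size would fit
    under a given Max Output Tokens setting (1-1024)."""
    return estimate_tokens(text) <= max_output_tokens
```

For example, a 400-character draft estimates to about 100 tokens, so a Max Output Tokens setting of 100 or more should accommodate it.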
Top P
The topP field specifies a cumulative probability threshold: the model considers tokens from most to least probable until their combined probability reaches topP. It works differently from temperature; instead of increasing or decreasing the randomness of the scores themselves, it limits which tokens are candidates for selection.
A lower topP will result in more focused text, whereas a higher topP will result in more diverse text.
You can set the topP between 0.0 and 1.0.
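This cumulative-threshold behavior is often called nucleus sampling. Here is a minimal sketch of the filtering step, with a made-up probability distribution; the function name is illustrative, not part of the Vertex AI API:

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of token indices whose cumulative
    probability reaches top_p (top-p / nucleus filtering)."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for index, p in ranked:
        kept.append(index)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]   # hypothetical probabilities for 4 tokens
nucleus_filter(probs, 0.8)       # only the two most probable tokens survive
nucleus_filter(probs, 1.0)       # all four tokens remain candidates
```

At topP 0.8 the model would only sample from the two strongest candidates (0.5 + 0.3 reaches the threshold), which is why lower values produce more focused text.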
Top K
The topK field specifies the number of most probable tokens to consider when generating a response. For example, a topK of 10 means that the model will only consider the top 10 most probable tokens when generating a response. This can help reduce the amount of randomness in the generated text, while still allowing for some creativity.
You can set the topK between 1 and 40.
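Where topP keeps tokens until a probability budget is spent, topK simply keeps a fixed number of candidates. A minimal sketch, again with an invented distribution and an illustrative function name:

```python
def top_k_filter(probs, k):
    """Keep only the indices of the k most probable tokens."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k]

probs = [0.1, 0.4, 0.3, 0.2]  # hypothetical probabilities for 4 tokens
top_k_filter(probs, 2)        # the two strongest candidates remain
```

With k=2, only the tokens with probabilities 0.4 and 0.3 stay in the pool, no matter how the remaining probability mass is distributed, which is the key difference from topP's adaptive cutoff.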