Chat Settings
The chat settings affect only the current chat session and override the default chat settings. To open the chat settings, tap the button in the right corner.
Select the model to be used for the current chat session.
This defines the maximum size of the context, in tokens, that the model will use. Since different models calculate tokens differently, this is an approximation. If the field is empty, the context limit is set to the maximum possible value for the model.
Keep in mind: Your new, unsent messages and unsent attached files are not included in the context window limit. Adjust the context limit accordingly if attaching large files.
Note: The context window limit does not account for system parameters required for the app to function. These typically add no more than an extra 1,000 tokens per request, so it is advisable to set the limit about 1,000 tokens above your actual needs.
Important: Be cautious when using an empty context limit with long chats or large files, as this can quickly lead to high costs.
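Because token counts are model-specific approximations, a rough rule of thumb for English text is about four characters per token. The sketch below estimates a sensible context limit under that assumption; the 4-characters-per-token ratio and the 1,000-token overhead are illustrative assumptions, not exact values used by the app.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text (assumption)."""
    return len(text) // 4

def suggested_context_limit(conversation: str, attachments: str = "") -> int:
    """Estimated tokens needed, plus ~1,000 tokens of headroom for system parameters."""
    return estimate_tokens(conversation) + estimate_tokens(attachments) + 1000
```

For example, a 20,000-character attached file is roughly 5,000 tokens, so a context limit of about 6,000 leaves comfortable headroom.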
This defines the model's maximum response length in tokens.
This setting varies by model but generally influences the predictability or creativity of the model's output. Switching between models resets this value.
Available on certain models only. This setting controls the repetition of specific phrases or words in the generated text. Switching between models resets this value.
System prompts provide instructions and information to the model regarding the nature of the chat.
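Under the hood, a system prompt is typically sent as the first message of each request, ahead of the user's own messages. A minimal sketch of that shape (the exact request format used by the app is an assumption):

```python
# The system prompt travels as the first entry in the messages list,
# followed by the user's chat messages (a common chat-API convention).
messages = [
    {"role": "system", "content": "You are a concise assistant that answers in bullet points."},
    {"role": "user", "content": "Summarize the meeting notes."},
]
```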
Available only for models that support tools.
Toggle this setting to enable or disable the model's use of tools.
Available only for models that support tools.
Toggle this setting to allow or disallow the model's ability to generate images.
Select the model to be used for image generation.
This depends on the model. Choose the desired resolution for the generated images.
This depends on the model:
Standard: Default setting.
HD: Allocates more time for image generation, resulting in higher quality images but also increased latency and price.
This depends on the model:
Vivid: Encourages the model to generate hyper-real and dramatic images.
Natural: Promotes a more realistic, less hyper-real appearance in the generated images.
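The resolution, quality, and style options above map onto typical image-generation request parameters. A hedged sketch of assembling such a request payload (the `size`/`quality`/`style` field names follow a common API convention; the app's actual request format is an assumption):

```python
def build_image_request(prompt: str, size: str = "1024x1024",
                        quality: str = "standard", style: str = "vivid") -> dict:
    """Assemble an image-generation request; values mirror the settings above."""
    if quality not in ("standard", "hd"):
        raise ValueError("quality must be 'standard' or 'hd'")
    if style not in ("vivid", "natural"):
        raise ValueError("style must be 'vivid' or 'natural'")
    return {"prompt": prompt, "size": size, "quality": quality, "style": style}
```

Note the trade-off encoded here: `hd` quality improves results but increases both latency and price, exactly as described above.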
Press Change to open the context memory panel. To learn more, refer to the Context Memory section.