MT Settings
This section contains the basic configuration options for each machine translation engine.

Web Scraping
Show Browser: Opens a browser window showing the translation page.
User Agent: Specifies the browser identity string (not recommended to change unless necessary). An example is shown below.
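For context, the user agent is the identity string a browser sends with every request, and the translation site may behave differently depending on it. The snippet below is a hypothetical Python illustration (using the requests library, not part of the application) of what such a string looks like and where it travels in a request; the URL and the string itself are placeholders.

```python
# Hypothetical illustration: a typical desktop Chrome User-Agent string and
# how a scripted request would present it. The exact string your installation
# uses will differ; the application's default is usually the safest choice.
import requests

CHROME_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
)

response = requests.get(
    "https://example.com/translate",    # placeholder URL, not a real engine
    headers={"User-Agent": CHROME_UA},  # the identity string the server sees
    timeout=10,
)
print(response.status_code)
```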
LLM Web
Show Browser: Opens a browser window showing the LLM translation page.
Combine Prompt with Source Text: If enabled, the source text is merged with the custom prompt (see the sketch below).
Prompt: The base prompt used by the LLM.
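The sketch below shows one plausible way the combine option could behave; the function and variable names are illustrative, not the application's actual code.

```python
# Minimal sketch (assumed behavior): when "Combine Prompt with Source Text"
# is enabled, the text to translate is appended to the custom prompt before
# it is handed to the LLM web page.
def build_message(prompt: str, source_text: str, combine: bool) -> str:
    if combine:
        # One message containing both the instruction and the text
        return f"{prompt}\n\n{source_text}"
    # Otherwise only the source text is sent as-is
    return source_text

print(build_message("Translate the following into English:", "Bonjour le monde", True))
```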
LLM API
API Key: Your personal API key used to authenticate with the LLM service (keep it private).
Model: The name of the LLM model you want to use.
System Prompt: A base instruction that sets the context or behavior the model applies to every translation.
Temperature: Controls randomness in the output. Lower values make the output more focused and deterministic; higher values make it more creative.
Max Tokens: The maximum number of tokens (roughly, word pieces) the model can generate in a response. Higher limits may cost more.
Top P: Another way to control creativity. It works together with temperature; lower values restrict the response to more likely outcomes.
Frequency Penalty: Reduces the chance of the model repeating the same lines; higher values mean less repetition.
Presence Penalty: Encourages the model to introduce new topics; higher values mean more diverse content.
All of these settings map onto fields of a typical chat-completion request; an example is sketched below.
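The example is a hypothetical sketch that assumes an OpenAI-compatible endpoint and SDK; the application may use a different provider, and the key, model name, and values shown are placeholders rather than recommended settings.

```python
# Hedged sketch: how the settings above typically map onto an
# OpenAI-compatible chat-completions request.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # "API Key" setting (keep it private)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # "Model"
    messages=[
        # "System Prompt": base instruction applied to every translation
        {"role": "system", "content": "You are a professional translator."},
        {"role": "user", "content": "Translate to English: Guten Morgen."},
    ],
    temperature=0.3,        # "Temperature": lower = more deterministic
    max_tokens=512,         # "Max Tokens": upper bound on generated tokens
    top_p=1.0,              # "Top P": nucleus sampling cutoff
    frequency_penalty=0.0,  # "Frequency Penalty": discourages repetition
    presence_penalty=0.0,   # "Presence Penalty": encourages new topics
)
print(response.choices[0].message.content)
```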