# Service Settings

<figure><img src="https://4121582948-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FKz66WcKqTPRwdFrHi4mM%2Fuploads%2FlkIFneNW5uf1tYFZBNF6%2Fmt-settings.jpg?alt=media&#x26;token=51f6db54-4278-4ecb-aa4f-caeb149bf43c" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
To open Service Settings:

* From Launcher: Menubar -> Translator -> Double-click the Service Name
* From Settings: Settings -> Translation Engines -> Click the Service Name
{% endhint %}

### Web Scraping

* **Show Browser**\
  Opens a browser window showing the translation page.
* **User Agent**\
  Specifies the User-Agent string the built-in browser identifies itself with. Changing this is not recommended unless necessary.

### LLM Web

* **Show Browser**\
  Opens a browser window showing the LLM translation page.
* **Combine Prompt with Source Text**\
  When enabled, the source text will be merged with the custom prompt.
* **Prompt**\
  The base prompt used by the LLM.
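
For illustration, the "Combine Prompt with Source Text" option might behave like the sketch below. The exact joining behavior is an assumption, and the function name is hypothetical:

```python
# Hypothetical sketch of "Combine Prompt with Source Text":
# the base prompt and the text to translate are merged into one message.
def combine_prompt(prompt: str, source_text: str) -> str:
    """Merge the base prompt with the source text (illustrative only)."""
    return f"{prompt}\n\n{source_text}"

combined = combine_prompt(
    "Translate the following text into English:",
    "Bonjour le monde",
)
```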

### LLM API

* **API Key**\
  Your personal API key used to authenticate with the LLM service. Keep this private and secure.
* **Model**\
  The name of the LLM model you want to use.
* **System Prompt**\
  A base instruction for the AI model that sets the context or behavior for all translations.
* **Temperature**\
  Controls the randomness in the output. Lower values produce more focused and deterministic results, while higher values produce more creative output.
* **Max Tokens**\
  The maximum number of tokens (sub-word units of text) the model can generate in a single response. Higher limits may increase costs.
* **Top P**\
  An alternative method to control creativity (nucleus sampling). Works alongside temperature; lower values limit responses to more likely outcomes.
* **Frequency Penalty**\
  Reduces the likelihood of the model repeating the same phrases. Higher values result in less repetition.
* **Presence Penalty**\
  Encourages the model to introduce new topics. Higher values result in more diverse content.
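
The settings above map directly onto the fields of a typical chat-completions request. A minimal sketch, assuming an OpenAI-compatible API; the model name and key are placeholders:

```python
import json

# Placeholder values; substitute your own key and model, and keep the key private.
API_KEY = "sk-..."
payload = {
    "model": "gpt-4o-mini",  # Model
    "messages": [
        # System Prompt: sets context/behavior for all translations
        {"role": "system", "content": "You are a professional translator."},
        {"role": "user", "content": "Translate to English: Hola mundo"},
    ],
    "temperature": 0.3,        # lower = more focused, deterministic output
    "max_tokens": 256,         # cap on generated tokens (affects cost)
    "top_p": 1.0,              # nucleus sampling; alternative creativity control
    "frequency_penalty": 0.0,  # higher = less phrase repetition
    "presence_penalty": 0.0,   # higher = more diverse content
}
body = json.dumps(payload)  # sent as the POST body, with the key in an Authorization header
```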
