# LLMs

VNTranslator integrates with popular LLMs either through their APIs (for advanced features) or through its built-in web browser (for free access).

{% embed url="https://youtu.be/LSXqwUIiwpY" %}

## **Free LLM Integration via Web Browser**

#### No API configuration required

Access LLMs directly through the integrated web browser by logging in to each LLM service.\
Please note:

* Some LLM services may require specific configurations or regional access
* Using a VPN may cause websites to fail loading

<table><thead><tr><th width="299">Name</th><th width="158" align="center">Combine Prompt</th><th width="117" align="center">Stream</th><th align="center">Automation</th></tr></thead><tbody><tr><td>ChatGPT Web </td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Gemini Web </td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Claude Web </td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Mistral Web</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>DeepSeek Web</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Grok Web</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr></tbody></table>

***

## Integrating LLM via API

<table><thead><tr><th width="306">Name</th><th width="155" align="center">System Prompt</th><th width="117" align="center">Stream</th><th align="center">Context History</th></tr></thead><tbody><tr><td>OpenAI API</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Gemini API</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Claude API</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Mistral API</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>DeepSeek API</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>Grok API </td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>OpenRouter API</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>-------------------------</td><td align="center">-</td><td align="center">-</td><td align="center">-</td></tr><tr><td>OpenAI Conversation v1.3 (legacy)</td><td align="center">✅</td><td align="center">✅</td><td align="center">✅</td></tr><tr><td>OpenAI Translate v1.3 (legacy)</td><td align="center">❌</td><td align="center">❌</td><td align="center">❌</td></tr><tr><td>GeminiAI v1.1 (legacy)</td><td align="center">❌</td><td align="center">❌</td><td align="center">❌</td></tr></tbody></table>

{% hint style="info" %}
You can set the LLM parameter in the MT Settings. To open MT Settings:

* In the Launcher: Menubar -> Translator -> Double-click on the translator option
  {% endhint %}

<figure><img src="/files/5lQRVHRkbNZZ6F2gXy2M" alt=""><figcaption></figcaption></figure>

### System Prompt

Customize the initial instructions sent to the model to fine-tune translation style and behavior (for example, tone, terminology, or formatting).
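Conceptually, the system prompt is sent as the first message of each API request. A minimal sketch in Python (the model name and prompt text below are illustrative placeholders, not VNTranslator defaults):

```python
# Sketch: how a system prompt shapes an OpenAI-style chat request.
# Model name and prompt wording are placeholder assumptions.

def build_request(system_prompt: str, source_text: str,
                  model: str = "gpt-4o-mini") -> dict:
    """Assemble a chat-completions payload with the system prompt first."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": source_text},
        ],
    }

payload = build_request(
    "You are a visual-novel translator. Translate Japanese to English, "
    "keeping honorifics and character voice.",
    "よろしくお願いします。",
)
print(payload["messages"][0]["role"])  # → system
```

Every provider in the table above accepts this general message layout, which is why a single System Prompt setting can apply across them.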

### Stream

Enable real-time streaming to receive translation results progressively instead of waiting for the entire translation to complete.
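The idea behind streaming can be sketched without a live connection: the client receives small text deltas and repaints the partial translation after each one. Here the deltas are simulated rather than read from a real server-sent-event stream:

```python
# Sketch: assembling a streamed translation from incremental deltas.
# Real APIs deliver these as server-sent events; chunks are simulated here.
from typing import Iterable, Iterator

def stream_partials(deltas: Iterable[str]) -> Iterator[str]:
    """Yield the partial translation after each delta arrives."""
    text = ""
    for delta in deltas:
        text += delta
        yield text  # the UI can display each partial result immediately

deltas = ["Good ", "morning", ", ", "teacher."]
partials = list(stream_partials(deltas))
print(partials[-1])  # → Good morning, teacher.
```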

### Context History

Improve translation accuracy by using context from previously translated text.

* **Context Source**\
  Choose the context source:
  * **New Translation History**: Uses the most recent translations from the current session.
  * **Translation Memory**: Uses stored translation memory entries.
* **Max Context Entries**\
  Set the maximum number of previous entries to use as context.

  **Note:** More entries consume more tokens, which may increase processing time and API costs.
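The settings above can be sketched as follows: keep only the newest *N* (source, translation) pairs and replay them as prior conversation turns. The message layout is an assumption for illustration, not VNTranslator's exact internal format:

```python
# Sketch: feeding the last N translated pairs back to the model as context.
# "max_context_entries" mirrors the Max Context Entries setting.

def build_context(history: list[tuple[str, str]],
                  max_context_entries: int) -> list[dict]:
    """Turn recent (source, translation) pairs into chat context messages."""
    messages = []
    # Slice keeps only the newest entries, bounding token usage.
    for src, tgt in history[-max_context_entries:]:
        messages.append({"role": "user", "content": src})
        messages.append({"role": "assistant", "content": tgt})
    return messages

history = [("A", "a"), ("B", "b"), ("C", "c"), ("D", "d")]
context = build_context(history, max_context_entries=2)
print(len(context))  # → 4 (two pairs, two messages each)
```

This is also why the token-cost note matters: each extra entry adds two full messages to every request.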

### Links

* **OpenAI** - [https://platform.openai.com](https://platform.openai.com/)
  * Models: <https://platform.openai.com/docs/models#models-overview>
* **GeminiAI** - [https://aistudio.google.com](https://aistudio.google.com/)
  * Models: <https://ai.google.dev/gemini-api/docs/models/gemini>
* **ClaudeAI** - [https://console.anthropic.com](https://console.anthropic.com/)
  * Models: <https://docs.anthropic.com/en/docs/about-claude/models>
* **MistralAI** - [https://console.mistral.ai](https://console.mistral.ai/)
  * Models: [https://docs.mistral.ai/getting-started/models/models\_overview](https://docs.mistral.ai/getting-started/models/models_overview/)
* **DeepSeek** - [https://platform.deepseek.com](https://platform.deepseek.com/)
  * Models: <https://api-docs.deepseek.com/quick_start/pricing>
* **GrokAI** - [https://console.x.ai](https://console.x.ai/)
  * Models: <https://docs.x.ai/docs/models>
* **OpenRouter** - [https://openrouter.ai](https://openrouter.ai/)
  * Models: <https://openrouter.ai/models>

***

## Offline (Run LLMs Locally)

<div align="left"><figure><img src="/files/EFmse1iyTHQnl7znOMRq" alt=""><figcaption></figcaption></figure></div>

* ✅ **LM Studio** (Recommended)
  * Download and Install LM Studio from <https://lmstudio.ai/>
  * Download a model and load it in LM Studio
  * Start the server in LM Studio
  * Enter the model name in MT settings
* ✅ **GPT4All**
  * Download and Install GPT4All from <https://www.nomic.ai/gpt4all>
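Once LM Studio's server is running, it exposes an OpenAI-compatible endpoint on `http://localhost:1234/v1` by default. A sketch of what a request to it looks like (the model name is a placeholder and must match the model you actually loaded; the request is built but not sent):

```python
# Sketch: pointing an OpenAI-style chat request at LM Studio's local server.
# Default endpoint is http://localhost:1234/v1; the model name is a placeholder.
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def local_request(model: str, text: str) -> urllib.request.Request:
    """Build (but do not send) a chat request for the local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user",
                      "content": f"Translate to English: {text}"}],
    }).encode("utf-8")
    return urllib.request.Request(
        LMSTUDIO_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = local_request("qwen2.5-7b-instruct", "おはよう")
print(req.full_url)  # → http://localhost:1234/v1/chat/completions
```

To actually send it, pass `req` to `urllib.request.urlopen` while the LM Studio server is running; no API key is needed for a local server.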

***

## **Content Filtering and Restrictions**

LLM services like ChatGPT, GeminiAI, and ClaudeAI have content filters that may block:

* Explicit, harmful, or offensive content
* Text that violates the AI provider's terms of service
* Sensitive or restricted material, including copyrighted content

**What Happens with Restricted Content?**

If your text is flagged:

* The LLM may refuse to translate it
* You may receive an error message or generic response instead of a translation

**Tips to Avoid Issues**

* **Try a different model or version:** If one model flags your text, switch to another.\
  Note that newer versions often have stricter filters.
* **Experiment with prompt engineering:** Search online for effective prompt engineering techniques.\
  Be careful to follow the AI provider's terms of service when doing this.


***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.vntranslator.com/advanced/llms.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
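Since the question goes into a query-string parameter, it must be percent-encoded. A minimal sketch of building such a request URL (the question text is an example):

```python
# Sketch: building the documentation-query URL with an encoded question.
from urllib.parse import urlencode

BASE = "https://docs.vntranslator.com/advanced/llms.md"

def ask_url(question: str) -> str:
    """Return the GET URL for a natural-language documentation query."""
    # urlencode percent-encodes the question (spaces become "+").
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("Which local LLM servers does VNTranslator support?")
print(url.startswith(BASE + "?ask="))  # → True
```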

Use this mechanism when:

* The answer is not explicitly present in the current page
* You need clarification or additional context
* You want to retrieve related documentation sections
