Variables
Configure dynamic variables used in translation requests, including context memory for AI/LLM translations.
Context memory
Maintains context history for AI/LLM translations, allowing the model to reference previous dialogue for improved coherence and accuracy. Customize the system prompt and control how much context is retained.

Configuration
Import from JSON file: Specifies the path to a JSON file containing context history to import.
Context Source: Choose where context is retrieved from:
Translation Memory: Uses recent translations from Translation Memory as context.
New Translation: Starts a fresh conversation for each translation request.
JSON File: Imports context history from the specified JSON file.
Max Context Entries: Sets the maximum number of context entries to store in memory. Older entries are removed when this limit is reached.
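As an illustration, a context-history file for the JSON File source might look like the following. This shape is an assumption inferred from the role/content message templates shown under API Variables, not a documented schema:

[
  { "role": "user", "content": "昨日は雨でした。" },
  { "role": "assistant", "content": "It rained yesterday." }
]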
API Variables
Use the following variable in your API requests to include context memory:
$MT::ConversationMemory.Entries::ToArray()
This variable expands to an array of context entries that can be passed to AI/LLM APIs.
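For example, after one completed translation and with a new request pending, the variable might expand to an array like this (illustrative values only; the exact entries depend on your templates and settings):

[
  { "role": "system", "content": "You will be provided with a sentence in Japanese, and your task is to translate it into English accurately." },
  { "role": "user", "content": "おはようございます。" },
  { "role": "assistant", "content": "Good morning." },
  { "role": "user", "content": "今日はいい天気ですね。" }
]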
Example: OpenAI GPT Integration
Initial prompt:
You will be provided with a sentence in Japanese,
and your task is to translate it into English accurately.
If there are any cultural references or nuances within the text,
kindly provide a brief explanation or context for those as well.
The text is as follows:
System template:
[{"role": "system", "content": "$PROMPT"}]
User template:
[{"role": "user", "content": "$ORIGINAL_TEXT"}]
Assistant template:
[{"role": "assistant", "content": "$TRANSLATED_TEXT"}]
Custom MT Configuration
Complete configuration example for OpenAI GPT with context memory:
{
  "version": "2",
  "service": "openai",
  "lang": {
    "source": [
      { "name": "Custom", "value": "Custom" }
    ],
    "target": [
      { "name": "Custom", "value": "Custom" }
    ]
  },
  "config": {
    "method": "post",
    "encodeURI": false,
    "encodeURIComponent": false,
    "postURL": "https://api.openai.com/v1/chat/completions",
    "postData": {
      "model": "gpt-3.5-turbo",
      "messages": "$MT::ConversationMemory.Entries::ToArray()",
      "temperature": 0.9,
      "max_tokens": 4096,
      "top_p": 1,
      "frequency_penalty": 0.7,
      "presence_penalty": 0.7
    },
    "postOptions": {
      "headers": {
        "Authorization": "Bearer [YOUR_API_KEY_HERE]",
        "Content-Type": "application/json"
      }
    },
    "responseParse": true,
    "responseType": "json",
    "responseQuery": "choices[0].message.content"
  }
}
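To make the responseQuery field concrete, here is a minimal Python sketch of how a query string like choices[0].message.content walks the JSON returned by the endpoint. The extract helper is hypothetical, written for illustration only, and supports just dotted keys with single [index] subscripts; it is not part of the tool:

```python
import re

def extract(response: dict, query: str):
    """Resolve a dotted query like 'choices[0].message.content'
    against a parsed JSON response (hypothetical helper)."""
    value = response
    for part in query.split("."):
        # Each part is a key, optionally followed by one [index] subscript.
        m = re.match(r"(\w+)(?:\[(\d+)\])?$", part)
        key, idx = m.group(1), m.group(2)
        value = value[key]
        if idx is not None:
            value = value[int(idx)]
    return value

# A response in the shape returned by the chat completions endpoint.
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Good morning."}}
    ]
}

print(extract(sample, "choices[0].message.content"))  # → Good morning.
```

This mirrors what the tool does when responseParse is true: the body is parsed as JSON and the translated text is pulled from the first choice's message content.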