# Azure Cloud Vision

## Get started

### **Step 1: Create an Azure Account**

* Visit the [Azure Cloud](https://azure.microsoft.com/free/)
* Click **Try Azure for free**
* Follow the instructions to create an account:
  * Provide basic details like your email address and billing information
  * New users receive **$200 in free credit** for the first 30 days.
* After signing up, you will be redirected to the Azure Portal

### **Step 2: Create a Computer Vision Resource**

<figure><img src="https://4121582948-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FKz66WcKqTPRwdFrHi4mM%2Fuploads%2F71ljY1krHaSsi1JKMB4S%2Fimage.png?alt=media&#x26;token=b38547d8-a09c-4d4d-8f92-3534934ecda1" alt=""><figcaption></figcaption></figure>

* In the [Azure Portal](https://portal.azure.com/):
  * Click **Create a resource** in the left-hand menu
  * Search for **Computer Vision** in the search bar
* Select **Computer Vision** and click **Create**
* Fill in the required details:
  * **Subscription**: Choose your Azure subscription
  * **Resource Group**: Create a new one or use an existing group
  * **Region**: Choose the region closest to your location
  * **Name**: Give your resource a unique name
  * **Pricing Tier**: Select **Free** or another available tier based on your needs
* Click **Review + Create**, then **Create**

### **Step 3: Get Your API Key and Endpoint**

<figure><img src="https://4121582948-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FKz66WcKqTPRwdFrHi4mM%2Fuploads%2FOdWdtlzy0Mq7nXzE9pRz%2Fimage.png?alt=media&#x26;token=a53ad6bd-a3d4-4a5b-9a4d-6e544feb41e0" alt=""><figcaption></figcaption></figure>

* Navigate to your **Computer Vision** resource in the Azure Portal
* Click **Keys and Endpoint** from the left-hand menu
* Copy the **API Key** and **Endpoint** for use in VNTranslator

### **Step 4: Integrate with VNTranslator** <a href="#step-5-integrate-with-vntranslator" id="step-5-integrate-with-vntranslator"></a>

* Go to **Settings -> Modules -> OCR**
* Paste the **API Key 1 or API Key 2** from Step 3 into the **Azure API Key** field
* Configure the **Azure Request URL** as follows:
  * URL format:\
    `https://{endpoint}/vision/v3.2/ocr[?language][&detectOrientation][&model-version]`
  * Replace `{endpoint}` with your Computer Vision resource endpoint. For example:\
    `https://vntranslator.cognitiveservices.azure.com/vision/v3.2/ocr?language=ja&model-version=2022-04-30`
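The request URL above can be assembled programmatically. Below is a minimal Python sketch; the endpoint shown is a placeholder, so substitute the one you copied in Step 3:

```python
from urllib.parse import urlencode

def build_ocr_url(endpoint, language=None, detect_orientation=None, model_version=None):
    """Build an Azure Computer Vision v3.2 OCR request URL.

    Only the parameters you pass are appended; anything omitted
    falls back to the service defaults.
    """
    params = {}
    if language is not None:
        params["language"] = language
    if detect_orientation is not None:
        params["detectOrientation"] = str(detect_orientation).lower()
    if model_version is not None:
        params["model-version"] = model_version
    base = f"{endpoint.rstrip('/')}/vision/v3.2/ocr"
    return f"{base}?{urlencode(params)}" if params else base

# Placeholder endpoint -- use your own resource endpoint here.
url = build_ocr_url("https://vntranslator.cognitiveservices.azure.com",
                    language="ja", model_version="2022-04-30")
print(url)
# https://vntranslator.cognitiveservices.azure.com/vision/v3.2/ocr?language=ja&model-version=2022-04-30
```

VNTranslator only needs the finished URL string pasted into the **Azure Request URL** field, so a helper like this is just a convenient way to double-check the query parameters.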

### Request parameters

```
https://{endpoint}/vision/v3.2/ocr[?language][&detectOrientation][&model-version]
```

* **language (optional)**\
  The BCP-47 language code of the text to be detected in the image. The default value is `unk`, in which case the service auto-detects the language of the text in the image.
* **detectOrientation (optional)**\
  Whether to detect the text orientation in the image. With `detectOrientation=true`, the OCR service tries to detect the image orientation and correct it before further processing (e.g. if the image is upside-down).
* **model-version (optional)**\
  The version of the AI model to use. The default value is `latest`.
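If you call the OCR endpoint directly rather than through VNTranslator, the service returns JSON organized as regions containing lines containing words. The sketch below extracts the recognized text from an illustrative sample response (real responses also carry `boundingBox` coordinates on each region, line, and word):

```python
import json

# Illustrative fragment of a v3.2 OCR response; fields trimmed for brevity.
sample_response = json.loads("""
{
  "language": "en",
  "textAngle": 0.0,
  "orientation": "Up",
  "regions": [
    {"lines": [
      {"words": [{"text": "Hello"}, {"text": "world"}]},
      {"words": [{"text": "Azure"}, {"text": "OCR"}]}
    ]}
  ]
}
""")

def extract_text(ocr_result):
    """Flatten the regions/lines/words hierarchy into one string per line."""
    lines = []
    for region in ocr_result.get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(word["text"] for word in line["words"]))
    return lines

print(extract_text(sample_response))
# ['Hello world', 'Azure OCR']
```

Authenticated requests also need the `Ocp-Apim-Subscription-Key` header set to the API key from Step 3; VNTranslator handles that for you once the key is configured.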


