In this quickstart, you generate video clips using the Azure OpenAI service. The example uses the Sora model, which is a video generation model that creates realistic and imaginative video scenes from text instructions. This guide shows you how to create a video generation job, poll for its status, and retrieve the generated video.
For more information on video generation, see Video generation concepts.
Prerequisites
- An Azure subscription. Create one for free.
- Python 3.8 or later. We recommend Python 3.10 or later, but Python 3.8 is the minimum. If you don't have a suitable version of Python installed, follow the instructions in the VS Code Python Tutorial for the easiest way to install Python on your operating system.
- An Azure OpenAI resource created in one of the supported regions. For more information about region availability, see the models and versions documentation.
- Then, you need to deploy a `sora` model with your Azure OpenAI resource. For more information, see Create a resource and deploy a model with Azure OpenAI, or see the CLI sketch after this list.
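If you prefer the command line, the following is a minimal sketch of deploying a Sora model to an existing Azure OpenAI resource with the Azure CLI. The resource name, resource group, model version, and SKU values are placeholders (assumptions); check the model catalog in the Azure portal for the exact model version and supported SKU for your region before running it.

```bash
# Minimal sketch (placeholder values): deploy a sora model to an existing
# Azure OpenAI resource with the Azure CLI.
az cognitiveservices account deployment create \
  --resource-group <your-resource-group> \
  --name <your-azure-openai-resource> \
  --deployment-name sora \
  --model-name sora \
  --model-version <model-version-from-the-model-catalog> \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1
```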
Microsoft Entra ID prerequisites
For the recommended keyless authentication with Microsoft Entra ID, you need to:
- Install the Azure CLI used for keyless authentication with Microsoft Entra ID.
- Assign the `Cognitive Services User` role to your user account. You can assign roles in the Azure portal under Access control (IAM) > Add role assignment, or from the command line as sketched after this list.
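For example, you can sign in and assign the role from the command line. This is a minimal sketch with placeholder values for your account and resource; the `DefaultAzureCredential` used in the code later in this quickstart can pick up the Azure CLI sign-in.

```bash
# Sign in so that DefaultAzureCredential can use your Azure CLI credentials.
az login

# Minimal sketch (placeholder values): assign the Cognitive Services User role
# to your user account, scoped to your Azure OpenAI resource.
az role assignment create \
  --assignee <your-user-principal-name> \
  --role "Cognitive Services User" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<your-azure-openai-resource>
```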
Set up
Create a new folder `video-generation-quickstart` and go to the quickstart folder with the following command:

```bash
mkdir video-generation-quickstart && cd video-generation-quickstart
```
Create a virtual environment. If you already have Python 3.10 or higher installed, you can create a virtual environment using the following commands:
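For example, on macOS or Linux (on Windows, activate with `.venv\Scripts\activate` instead; these commands assume the `python` command on your path is a suitable Python 3 interpreter):

```bash
# Create the virtual environment in a .venv folder and activate it.
python -m venv .venv
source .venv/bin/activate
```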
Activating the Python environment means that when you run `python` or `pip` from the command line, you use the Python interpreter contained in the `.venv` folder of your application. You can use the `deactivate` command to exit the Python virtual environment, and can later reactivate it when needed.

Tip
We recommend that you create and activate a new Python environment to install the packages you need for this tutorial. Don't install packages into your global Python installation. Always use a virtual or conda environment when installing Python packages; otherwise, you can break your global installation of Python.
For the recommended keyless authentication with Microsoft Entra ID, install the `azure-identity` package with:

```bash
pip install azure-identity
```
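The sample script later in this quickstart also imports the `requests` library. If it isn't already available in your environment, install it as well:

```bash
pip install requests
```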
Retrieve resource information
You need to retrieve the following information to authenticate your application with your Azure OpenAI resource:
| Variable name | Value |
|---|---|
| `AZURE_OPENAI_ENDPOINT` | This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. |
| `AZURE_OPENAI_DEPLOYMENT_NAME` | This value corresponds to the custom name you chose for your deployment when you deployed a model. You can find it under Resource Management > Model Deployments in the Azure portal. |
| `OPENAI_API_VERSION` | Learn more about API Versions. You can change the version in code or use an environment variable. |
Learn more about keyless authentication and setting environment variables.
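For example, in a bash shell you might set the environment variables like this. The values shown are placeholders; use your own resource's endpoint and deployment name.

```bash
# Placeholder values: replace with your resource's endpoint and deployment name.
export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com"
export AZURE_OPENAI_DEPLOYMENT_NAME="sora"
export OPENAI_API_VERSION="preview"
```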
Generate video with Sora
You can generate a video with the Sora model by creating a video generation job, polling for its status, and retrieving the generated video. The following code shows how to do this via the REST API using Python.
Create the `sora-quickstart.py` file with the following code:

```python
import requests
import os
import time
from azure.identity import DefaultAzureCredential

# Set environment variables or edit the corresponding values here.
endpoint = os.environ['AZURE_OPENAI_ENDPOINT']

# Keyless authentication
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

api_version = 'preview'
headers = {
    "Authorization": f"Bearer {token.token}",
    "Content-Type": "application/json"
}

# 1. Create a video generation job
create_url = f"{endpoint}/openai/v1/video/generations/jobs?api-version={api_version}"
body = {
    "prompt": "A cat playing piano in a jazz bar.",
    "width": 480,
    "height": 480,
    "n_seconds": 5,
    "model": "sora"
}
response = requests.post(create_url, headers=headers, json=body)
response.raise_for_status()
print("Full response JSON:", response.json())
job_id = response.json()["id"]
print(f"Job created: {job_id}")

# 2. Poll for job status
status_url = f"{endpoint}/openai/v1/video/generations/jobs/{job_id}?api-version={api_version}"
status = None
while status not in ("succeeded", "failed", "cancelled"):
    time.sleep(5)  # Wait before polling again
    status_response = requests.get(status_url, headers=headers).json()
    status = status_response.get("status")
    print(f"Job status: {status}")

# 3. Retrieve generated video
if status == "succeeded":
    generations = status_response.get("generations", [])
    if generations:
        print("✅ Video generation succeeded.")
        generation_id = generations[0].get("id")
        video_url = f"{endpoint}/openai/v1/video/generations/{generation_id}/content/video?api-version={api_version}"
        video_response = requests.get(video_url, headers=headers)
        if video_response.ok:
            output_filename = "output.mp4"
            with open(output_filename, "wb") as file:
                file.write(video_response.content)
            print(f'Generated video saved as "{output_filename}"')
    else:
        raise Exception("No generations found in job result.")
else:
    raise Exception(f"Job didn't succeed. Status: {status}")
```
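If the job ends in the `failed` state, the job payload also includes a `failure_reason` field (visible in the sample response JSON later in this quickstart). As an optional minimal sketch, you could surface it in the final `else` branch before raising:

```python
# Optional sketch: report why the job failed, using the last polled job payload.
if status == "failed":
    print("Failure reason:", status_response.get("failure_reason"))
```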
Run the Python file.
```bash
python sora-quickstart.py
```
Wait a few moments to get the response.
Output
The output will show the full response JSON from the video generation job creation request, including the job ID and status.
```json
{
"object": "video.generation.job",
"id": "task_01jwcet0eje35tc5jy54yjax5q",
"status": "queued",
"created_at": 1748469875,
"finished_at": null,
"expires_at": null,
"generations": [],
"prompt": "A cat playing piano in a jazz bar.",
"model": "sora",
"n_variants": 1,
"n_seconds": 5,
"height": 480,
"width": 480,
"failure_reason": null
}
```

The generated video will be saved as `output.mp4` in the current directory.

```text
Job created: task_01jwcet0eje35tc5jy54yjax5q
Job status: preprocessing
Job status: running
Job status: processing
Job status: succeeded
✅ Video generation succeeded.
Generated video saved as "output.mp4"
```
Clean up resources
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.
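If you use the Azure CLI, the following is a minimal sketch of deleting the deployment and then the resource. The resource and resource group names are placeholders.

```bash
# Placeholder values: delete the model deployment first, then the resource.
az cognitiveservices account deployment delete \
  --resource-group <your-resource-group> \
  --name <your-azure-openai-resource> \
  --deployment-name sora

az cognitiveservices account delete \
  --resource-group <your-resource-group> \
  --name <your-azure-openai-resource>
```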
Related content
- Learn more about Azure OpenAI deployment types.
- Learn more about Azure OpenAI quotas and limits.