Model Serving limits and regions

This article summarizes the limitations and region availability for Mosaic AI Model Serving and supported endpoint types.

Resource and payload limits

Mosaic AI Model Serving imposes default limits to ensure reliable performance. If you have feedback on these limits, reach out to your Databricks account team.

The following table summarizes resource and payload limitations for model serving endpoints.

Feature | Granularity | Limit
--- | --- | ---
Payload size | Per request | 16 MB. For endpoints serving foundation models, external models, or AI agents, the limit is 4 MB.
Request/response size | Per request | Requests and responses over 1 MB are not logged.
Queries per second (QPS) | Per workspace | 200 by default. Can be increased to 25,000 or more by reaching out to your Databricks account team.
Model execution duration | Per request | 120 seconds
CPU endpoint model memory usage | Per endpoint | 4 GB
GPU endpoint model memory usage | Per endpoint | Greater than or equal to the assigned GPU memory; depends on the GPU workload size
Provisioned concurrency | Per model and per workspace | 200 concurrency. Can be increased by reaching out to your Databricks account team.
Overhead latency | Per request | Less than 50 milliseconds
Init scripts | | Init scripts are not supported.
Foundation Model APIs (pay-per-token) rate limits | Per workspace | See the per-model limits below. If these limits are insufficient for your use case, Databricks recommends using provisioned throughput.
Foundation Model APIs (provisioned throughput) rate limits | Per workspace | 200 queries per second

The Foundation Model APIs (pay-per-token) rate limits per model are:

  • Claude Sonnet 4: 2 queries per second.
  • Claude Opus 4: 2 queries per second.
  • Llama 4 Maverick: 4 queries per second and 2400 queries per hour.
  • Claude 3.7 Sonnet: 4 queries per second and 2400 queries per hour.
  • Llama 3.3 70B Instruct: 4 queries per second and 2400 queries per hour.
  • Llama 3.1 405B Instruct: 1 query per second and 1200 queries per hour.
  • Llama 3.1 8B Instruct: 2 queries per second.
  • GTE Large (En): 150 queries per second.
  • BGE Large (En): 600 queries per second.
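
To illustrate the payload-size and execution-duration limits above, the following is a minimal client-side sketch. The workspace URL, token, endpoint name, and payload are placeholders, not values from this article.

```python
import json
import os

import requests

# Placeholders: set these for your workspace and endpoint.
WORKSPACE_URL = os.environ["DATABRICKS_HOST"]  # e.g. https://adb-1234567890123456.7.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
ENDPOINT_NAME = "my-custom-model"  # hypothetical endpoint name

# 16 MB for custom models; use 4 * 1024 * 1024 for foundation models,
# external models, or AI agents.
MAX_PAYLOAD_BYTES = 16 * 1024 * 1024

payload = {"dataframe_records": [{"feature": 1.0}]}  # placeholder input
body = json.dumps(payload).encode("utf-8")
if len(body) > MAX_PAYLOAD_BYTES:
    raise ValueError(f"Payload is {len(body)} bytes; the serving limit is {MAX_PAYLOAD_BYTES} bytes.")

# Model execution is capped at 120 seconds, so a slightly larger client-side
# timeout distinguishes the server-side cap from a network hang.
response = requests.post(
    f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    data=body,
    timeout=130,
)
response.raise_for_status()
print(response.json())
```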

Networking and security limitations

  • Model Serving endpoints are protected by access control and respect networking-related ingress rules configured on the workspace, like IP allowlists and Private Link.
  • Private connectivity (such as Azure Private Link) is only supported for model serving endpoints that use provisioned throughput or endpoints that serve custom models.
  • By default, Model Serving does not support Private Link to external endpoints (such as Azure OpenAI). Support for this functionality is evaluated and implemented on a per-region basis. Reach out to your Azure Databricks account team for more information.
  • Model Serving does not provide security patches to existing model images because of the risk of destabilization to production deployments. A new model image created from a new model version will contain the latest patches. Reach out to your Databricks account team for more information.

Compliance security profile standards: CPU workloads

The following table lists the compliance security profile standards supported for the core Model Serving functionality on CPU workloads.

Note

These compliance standards require that served containers were built within the most recent 30 days. Databricks automatically rebuilds outdated containers on your behalf. However, if this automated job fails, an event log message like the following appears and provides guidance on how to keep your endpoints within compliance requirements:

"Databricks couldn't complete a scheduled compliance check for model $servedModelName. This can happen if the system can't apply a required update. To resolve, try relogging your model. If the issue persists, contact support@databricks.com."

Region | Location | HIPAA | HITRUST | PCI-DSS | IRAP | CCCS Medium (Protected B) | UK Cyber Essentials Plus
--- | --- | --- | --- | --- | --- | --- | ---
australiacentral | AustraliaCentral | | | | | |
australiacentral2 | AustraliaCentral2 | | | | | |
australiaeast | AustraliaEast | | | | | |
australiasoutheast | AustraliaSoutheast | | | | | |
brazilsouth | BrazilSouth | | | | | |
canadacentral | CanadaCentral | | | | | |
canadaeast | CanadaEast | | | | | |
centralindia | CentralIndia | | | | | |
centralus | CentralUS | | | | | |
chinaeast2 | ChinaEast2 | | | | | |
chinaeast3 | ChinaEast3 | | | | | |
chinanorth2 | ChinaNorth2 | | | | | |
chinanorth3 | ChinaNorth3 | | | | | |
eastasia | EastAsia | | | | | |
eastus | EastUS | | | | | |
eastus2 | EastUS2 | | | | | |
francecentral | FranceCentral | | | | | |
germanywestcentral | GermanyWestCentral | | | | | |
japaneast | JapanEast | | | | | |
japanwest | JapanWest | | | | | |
koreacentral | KoreaCentral | | | | | |
mexicocentral | MexicoCentral | | | | | |
northcentralus | NorthCentralUS | | | | | |
northeurope | NorthEurope | | | | | |
norwayeast | NorwayEast | | | | | |
qatarcentral | QatarCentral | | | | | |
southafricanorth | SouthAfricaNorth | | | | | |
southcentralus | SouthCentralUS | | | | | |
southeastasia | SoutheastAsia | | | | | |
southindia | SouthIndia | | | | | |
swedencentral | SwedenCentral | | | | | |
switzerlandnorth | SwitzerlandNorth | | | | | |
switzerlandwest | SwitzerlandWest | | | | | |
uaenorth | UAENorth | | | | | |
uksouth | UKSouth | | | | | |
ukwest | UKWest | | | | | |
westcentralus | WestCentralUS | | | | | |
westeurope | WestEurope | | | | | |
westindia | WestIndia | | | | | |
westus | WestUS | | | | | |
westus2 | WestUS2 | | | | | |
westus3 | WestUS3 | | | | | |

Foundation Model APIs limits

Note

As part of providing the Foundation Model APIs, Databricks might process your data outside of the region where your data originated, but not outside of the relevant geographical ___location.

For both pay-per-token and provisioned throughput workloads:

  • Only workspace admins can change the governance settings, such as rate limits, for Foundation Model APIs endpoints. To change rate limits, use the following steps (a REST API sketch follows this list):
    1. Open the Serving UI in your workspace to see your serving endpoints.
    2. From the kebab menu on the Foundation Model APIs endpoint you want to edit, select View details.
    3. From the kebab menu on the upper-right side of the endpoint's details page, select Change rate limit.
  • The GTE Large (En) embedding model does not generate normalized embeddings; a normalization sketch follows this list.
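
For automation, rate limits can also be managed over REST instead of the UI. The following is a minimal sketch assuming the AI Gateway configuration endpoint of the Serving API; treat the path and payload schema as assumptions and confirm them against the Serving API reference for your workspace.

```python
import os

import requests

WORKSPACE_URL = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
ENDPOINT_NAME = "databricks-example-endpoint"  # hypothetical endpoint name

# Assumption: rate limits live in the endpoint's AI Gateway settings.
resp = requests.put(
    f"{WORKSPACE_URL}/api/2.0/serving-endpoints/{ENDPOINT_NAME}/ai-gateway",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "rate_limits": [
            # Allow 100 calls per minute across the whole endpoint.
            {"calls": 100, "key": "endpoint", "renewal_period": "minute"},
        ]
    },
)
resp.raise_for_status()
print(resp.json())
```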
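
Because GTE Large (En) does not return unit-length vectors, workloads that compute cosine similarity as a plain dot product can normalize client-side. A minimal NumPy sketch, where `embeddings` stands in for the raw model output:

```python
import numpy as np

# Placeholder for the raw (unnormalized) vectors returned by the model,
# one row per input text.
embeddings = np.array([[0.3, 1.2, -0.5], [2.0, 0.1, 0.7]])

# L2-normalize each row so that dot products equal cosine similarities.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
unit_embeddings = embeddings / norms

print(np.linalg.norm(unit_embeddings, axis=1))  # every row is now unit length
```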

Pay-per-token limits

The following are limits relevant to Foundation Model APIs pay-per-token workloads:

  • Pay-per-token workloads are HIPAA compliant.
    • For customers with the Compliance Security Profile enabled, pay-per-token workloads are available provided that compliance standard HIPAA or None is selected. Other compliance standards are not supported for pay-per-token workloads.
  • The following pay-per-token models are supported only in the Foundation Model APIs pay-per-token supported US regions:
    • Anthropic Claude Sonnet 4
    • Anthropic Claude Opus 4
    • Meta Llama 3.1 405B Instruct
    • BGE Large (En)
  • Anthropic Claude 3.7 Sonnet is available in pay-per-token EU and US supported regions. If your workspace is not in an EU or US region, but is in a supported Model Serving region, you can enable cross-Geo data processing to access this model.
  • If your workspace is in a Model Serving region but not a US or EU region, your workspace must be enabled for cross-Geo data processing. When enabled, your pay-per-token workload is routed to the U.S. Databricks Geo. To see which geographic regions process pay-per-token workloads, see Databricks Designated Services.

Provisioned throughput limits

The following are limits relevant to Foundation Model APIs provisioned throughput workloads:

  • Provisioned throughput supports the HIPAA compliance profile and is recommended for workloads that require compliance certifications.

  • To deploy a Meta Llama model from system.ai in Unity Catalog, you must choose the applicable Instruct version. Base versions of the Meta Llama models are not supported for deployment from Unity Catalog (a deployment sketch follows this list). See [Recommended] Deploy foundation models from Unity Catalog.

  • For provisioned throughput workloads that use Llama 4 Maverick:

    • Support for this model on provisioned throughput workloads is in Public Preview.
    • Autoscaling is not supported.
    • Metrics panels are not supported.
    • Traffic splitting is not supported on an endpoint that serves Llama 4 Maverick, so you cannot serve multiple models on that endpoint.
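
To make the Unity Catalog requirement concrete, the following is a minimal sketch that creates a provisioned throughput endpoint from an Instruct model in system.ai via the Serving REST API. The endpoint name, entity name, version, and throughput band are placeholders; check which models and throughput increments your workspace offers.

```python
import os

import requests

WORKSPACE_URL = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/serving-endpoints",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "llama-provisioned-throughput",  # hypothetical endpoint name
        "config": {
            "served_entities": [
                {
                    # Assumption: an Instruct variant in system.ai; base variants
                    # are not supported for deployment from Unity Catalog.
                    "entity_name": "system.ai.meta_llama_v3_3_70b_instruct",  # placeholder
                    "entity_version": "1",
                    "min_provisioned_throughput": 0,
                    "max_provisioned_throughput": 9500,  # placeholder tokens/second
                }
            ]
        },
    },
)
resp.raise_for_status()
print(resp.json())
```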

Region availability

Note

If you require an endpoint in an unsupported region, reach out to your Azure Databricks account team.

If your workspace is deployed in a region that supports model serving but is served by a control plane in an unsupported region, the workspace does not support model serving. If you attempt to use model serving in such a workspace, you see an error message stating that your workspace is not supported. Reach out to your Azure Databricks account team for more information.

For more information on regional availability of each Model Serving feature, see Model serving regional availability.

For Databricks-hosted foundation model region availability, see Foundation models hosted on Databricks.