This page gives an overview of the tools available on Azure Databricks for building, deploying, and managing generative AI apps.
Serve and query gen AI models
Serve a curated set of gen AI models from LLM providers such as OpenAI and Anthropic and make them available through secure, scalable APIs.
| Feature | Description |
|---|---|
| Foundation Models | Serve gen AI models, including open source and third-party models such as Meta Llama, Anthropic Claude, OpenAI GPT, and more. |
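The sketch below shows one way to query a served foundation model through the workspace's OpenAI-compatible serving endpoints. The workspace URL, endpoint name, and token environment variable are illustrative assumptions, not fixed values.

```python
# Minimal sketch: query a served foundation model through the OpenAI-compatible
# API exposed by Databricks Model Serving. The workspace URL, endpoint name, and
# token environment variable are assumptions -- substitute your own values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],  # Databricks personal access token (assumed env var)
    base_url="https://<your-workspace>.azuredatabricks.net/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct",  # example Foundation Model endpoint name
    messages=[{"role": "user", "content": "Summarize what Unity Catalog does."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```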
Build enterprise-grade AI agents
Build and deploy your own agents, including tool-calling agents, retrieval-augmented generation apps, and multi-agent systems.
| Feature | Description |
|---|---|
| AI Playground (no code) | Prototype and test AI agents in a no-code environment. Quickly experiment with agent behaviors and tool integrations before generating code for deployment. |
| Mosaic AI Agent Framework | Author, deploy, and evaluate agents in Python. Supports agents written with any authoring library, including LangChain, LangGraph, and pure Python code agents. Supports Unity Catalog for governance and MLflow for tracking. |
| Agent Bricks (no code) | Build and optimize ___domain-specific AI agent systems with a simple, no-code interface. Focus on your data and metrics while Agent Bricks streamlines implementation. |
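As a rough illustration of a tool-calling agent, the sketch below runs one tool-call round trip against a served model using the same OpenAI-compatible API. The endpoint name, workspace URL, and `lookup_order_status` tool are assumptions; Mosaic AI Agent Framework supports richer authoring patterns (LangChain, LangGraph, pure Python) with Unity Catalog governance and MLflow tracking.

```python
# Minimal sketch of a single-turn tool-calling agent against a Databricks-served
# model via its OpenAI-compatible API. Endpoint name, URL, and tool are assumptions.
import json
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url="https://<your-workspace>.azuredatabricks.net/serving-endpoints",
)

def lookup_order_status(order_id: str) -> str:
    """Hypothetical tool the agent can call."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order_status",
        "description": "Look up the shipping status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 42?"}]
first = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct", messages=messages, tools=tools
)
call = first.choices[0].message.tool_calls[0]  # assumes the model chose to call the tool
result = lookup_order_status(**json.loads(call.function.arguments))

# Feed the tool result back so the model can produce a final answer.
messages += [first.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": result}]
final = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct", messages=messages, tools=tools
)
print(final.choices[0].message.content)
```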
Evaluate, debug, and optimize agents
Track agent performance, collect feedback, and drive quality improvements with evaluation and tracing tools.
| Feature | Description |
|---|---|
| Agent Evaluation | Use Agent Evaluation and MLflow to measure quality, cost, and latency. Collect feedback from stakeholders and subject matter experts through built-in review apps and use LLM judges to identify and resolve quality issues. |
| MLflow Tracing | Use MLflow Tracing for end-to-end observability. Log every step your agent takes, making it easy to debug, monitor, and audit agent behavior in development and production. |
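The sketch below shows MLflow Tracing at its simplest: decorating agent steps with `@mlflow.trace` records each call as a span that can be inspected in the MLflow UI. The retrieval and generation functions are hypothetical placeholders.

```python
# Minimal sketch of MLflow Tracing: the @mlflow.trace decorator records each
# decorated function call as a span, so agent steps can be inspected and debugged.
# Requires a recent MLflow version; the functions below are purely illustrative.
import mlflow

@mlflow.trace
def retrieve_context(question: str) -> list[str]:
    # Hypothetical retrieval step; a real agent would query a vector index here.
    return [f"Document snippet relevant to: {question}"]

@mlflow.trace
def answer(question: str) -> str:
    context = retrieve_context(question)
    # Hypothetical generation step; a real agent would call an LLM here.
    return f"Answer based on {len(context)} retrieved snippet(s)."

answer("What is Mosaic AI Agent Framework?")  # the resulting trace appears in the MLflow UI
```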
Productionize AI agents
Deploy and manage agents in production with scalable endpoints, observability, and governance built in.
| Task | Description |
|---|---|
| Log and register agents | Log agent code, configuration, and artifacts in Unity Catalog for governance and lifecycle management. |
| Deploy agents | Deploy agents as managed, scalable endpoints. |
| Monitor agents | Use the same evaluation configuration (LLM judges and custom metrics) in offline evaluation and online monitoring. |
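A minimal sketch of the logging and registration step, assuming MLflow with Unity Catalog as the model registry; the `EchoAgent` wrapper and the catalog, schema, and model names are placeholders. Deployment and monitoring then build on the registered model.

```python
# Minimal sketch: log an agent with MLflow and register it in Unity Catalog.
# The PythonModel wrapper and the three-level name below are illustrative
# assumptions; agent deployment itself is handled by Databricks tooling.
import mlflow
from mlflow.pyfunc import PythonModel

class EchoAgent(PythonModel):
    """Placeholder agent used only to illustrate the logging flow."""
    def predict(self, context, model_input):
        # model_input arrives as a pandas DataFrame by default; echo the first column.
        return [f"echo: {value}" for value in model_input.iloc[:, 0]]

mlflow.set_registry_uri("databricks-uc")  # register models in Unity Catalog

with mlflow.start_run():
    info = mlflow.pyfunc.log_model(artifact_path="agent", python_model=EchoAgent())

# Three-level Unity Catalog name: <catalog>.<schema>.<model> (assumed names).
mlflow.register_model(info.model_uri, name="main.agents.echo_agent")
```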