Create and edit prompts

Important

This feature is in Beta.

This guide shows you how to create new prompts and manage their versions in the MLflow Prompt Registry. You'll learn to perform these actions using the MLflow Python SDK.

Prerequisites

  1. Install MLflow and required packages

    pip install --upgrade "mlflow[databricks]>=3.1.0" openai
    
  2. Create an MLflow experiment by following the Set up your environment quickstart.

  3. Access to a Unity Catalog schema with CREATE FUNCTION permission

    • Why? Prompts are stored in Unity Catalog as functions

Step 1. Create a new prompt

Create prompts programmatically with mlflow.genai.register_prompt(). Prompt templates use double-brace syntax ({{variable}}) for variables that are filled in at runtime.

import mlflow

# Replace with a Unity Catalog schema where you have CREATE FUNCTION permission
uc_schema = "workspace.default"
# The prompt will be created in the above UC schema
prompt_name = "summarization_prompt"

# Define the prompt template with variables
initial_template = """\
Summarize content you are provided with in {{num_sentences}} sentences.

Content: {{content}}
"""

# Register a new prompt
prompt = mlflow.genai.register_prompt(
    name=f"{uc_schema}.{prompt_name}",
    template=initial_template,
    # all parameters below are optional
    commit_message="Initial version of summarization prompt",
    tags={
        "author": "data-science-team@company.com",
        "use_case": "document_summarization",
        "task": "summarization",
        "language": "en",
        "model_compatibility": "gpt-4"
    }
)

print(f"Created prompt '{prompt.name}' (version {prompt.version})")

Step 2. Use the prompt in your application

Below is a simple application that uses your prompt template from above.

  1. Load the prompt from the registry
# Load a specific version using URI syntax
prompt = mlflow.genai.load_prompt(name_or_uri=f"prompts:/{uc_schema}.{prompt_name}/1")

# Alternative syntax without URI
prompt = mlflow.genai.load_prompt(name_or_uri=f"{uc_schema}.{prompt_name}")
  2. Use the prompt in your application
import mlflow
from openai import OpenAI

# Enable MLflow's autologging to instrument your application with Tracing
mlflow.openai.autolog()

# Connect to a Databricks LLM via OpenAI using the same credentials as MLflow
# Alternatively, you can use your own OpenAI credentials here
mlflow_creds = mlflow.utils.databricks_utils.get_databricks_host_creds()
client = OpenAI(
    api_key=mlflow_creds.token,
    base_url=f"{mlflow_creds.host}/serving-endpoints"
)

# Use the trace decorator to capture the application's entry point
@mlflow.trace
def my_app(content: str, num_sentences: int):
    # Format with variables
    formatted_prompt = prompt.format(
        content=content,
        num_sentences=num_sentences
    )

    response = client.chat.completions.create(
        # This example uses a Databricks-hosted LLM. You can replace it with
        # any AI Gateway or Model Serving endpoint, or with a valid OpenAI
        # model (e.g., gpt-4o) if you provide your own OpenAI credentials.
        model="databricks-claude-sonnet-4",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant.",
            },
            {
                "role": "user",
                "content": formatted_prompt,
            },
        ],
    )
    return response.choices[0].message.content

result = my_app(
    content=(
        "This guide shows you how to integrate prompts from the MLflow Prompt "
        "Registry into your GenAI applications. You'll learn to load prompts, "
        "format them with dynamic data, and ensure complete lineage by linking "
        "prompt versions to your MLflow Models."
    ),
    num_sentences=1,
)
print(result)

Step 3. Edit the prompt

Prompt versions are immutable once created. To edit a prompt, you create a new version with your changes. This Git-like versioning ensures complete history and enables rollbacks.

Create a new version by calling mlflow.genai.register_prompt() with an existing prompt name:

import mlflow

# Define the improved template
new_template = """\
You are an expert summarizer. Condense the following content into exactly {{num_sentences}} clear and informative sentences that capture the key points.

Content: {{content}}

Your summary should:
- Contain exactly {{num_sentences}} sentences
- Include only the most important information
- Be written in a neutral, objective tone
- Maintain the same level of formality as the original text
"""

# Register a new version
updated_prompt = mlflow.genai.register_prompt(
    name=f"{uc_schema}.{prompt_name}",
    template=new_template,
    commit_message="Added detailed instructions for better output quality",
    tags={
        "author": "data-science-team@company.com",
        "improvement": "Added specific guidelines for summary quality"
    }
)

print(f"Created version {updated_prompt.version} of '{updated_prompt.name}'")

Step 4. Use the new prompt

# Load a specific version using URI syntax
prompt = mlflow.genai.load_prompt(name_or_uri=f"prompts:/{uc_schema}.{prompt_name}/2")

# Or load the same version by passing the version parameter
prompt = mlflow.genai.load_prompt(name_or_uri=f"{uc_schema}.{prompt_name}", version="2")
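
Because versions are immutable, rolling back does not require any special API: you simply load an earlier version and point your application at it. A minimal sketch, assuming the loaded prompt object exposes a template attribute:

# Load both versions to compare their templates
v1 = mlflow.genai.load_prompt(name_or_uri=f"prompts:/{uc_schema}.{prompt_name}/1")
v2 = mlflow.genai.load_prompt(name_or_uri=f"prompts:/{uc_schema}.{prompt_name}/2")

print(v1.template)
print(v2.template)

# "Rolling back" is just using the earlier version again
prompt = v1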

Step 5. Search and discover prompts

Find prompts in your Unity Catalog schema:

# REQUIRED format for Unity Catalog - specify catalog and schema
results = mlflow.genai.search_prompts("catalog = 'workspace' AND schema = 'default'")

# Using variables for your schema
catalog_name = uc_schema.split('.')[0]  # 'workspace'
schema_name = uc_schema.split('.')[1]   # 'default'
results = mlflow.genai.search_prompts(f"catalog = '{catalog_name}' AND schema = '{schema_name}'")

# Limit results
results = mlflow.genai.search_prompts(
    filter_string=f"catalog = '{catalog_name}' AND schema = '{schema_name}'",
    max_results=50
)
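
Once you have results, you can inspect them in a loop. A minimal sketch, assuming the returned collection is iterable and each entry exposes name and tags attributes:

# Print the name and tags of each prompt found in the schema
for p in results:
    print(p.name, p.tags)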

Note: Unity Catalog search currently supports only catalog and schema filtering. Support for name patterns and tag filtering is coming soon.

Next Steps