AutoGen is an open-source framework for building event-driven, distributed, scalable, and resilient AI agent systems.
MLflow Tracing provides automatic tracing for AutoGen, an open-source multi-agent framework. Enable auto tracing
for AutoGen by calling the mlflow.autogen.autolog function; as your agents execute, MLflow captures nested traces and logs them to the active MLflow experiment.
import mlflow
mlflow.autogen.autolog()
MLflow captures the following information about multi-agent execution:
- Which agent is called on each turn
- The messages passed between agents
- The LLM and tool calls made by each agent, organized per agent and per turn
- Latencies
- Any exceptions raised
Prerequisites
To use MLflow Tracing with AutoGen, install MLflow and the pyautogen
library.
Development
For development environments, install the full MLflow package with the Databricks extras:
pip install --upgrade "mlflow[databricks]>=3.1" pyautogen
The full mlflow[databricks]
package includes all features for local development and experimentation on Databricks.
Production
For production deployments, install mlflow-tracing
and pyautogen:
pip install --upgrade mlflow-tracing pyautogen
The mlflow-tracing
package is optimized for production use.
Note
MLflow 3 is strongly recommended for the best tracing experience with AutoGen.
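Since the examples on this page assume MLflow 3, a quick version guard can catch an older install early. This is a minimal stdlib-only sketch; the is_mlflow_3 helper is illustrative and not part of MLflow:

```python
from importlib.metadata import PackageNotFoundError, version


def is_mlflow_3(ver: str) -> bool:
    """Naive major-version check: True for 3.x and later."""
    return int(ver.split(".")[0]) >= 3


try:
    mlflow_version = version("mlflow")
    if not is_mlflow_3(mlflow_version):
        print(f"mlflow {mlflow_version} found; upgrade to >=3 for the best tracing support")
except PackageNotFoundError:
    print("mlflow is not installed")
```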
Before running the examples, configure your environment:
For users outside Databricks notebooks: set the Databricks environment variables:
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_TOKEN="your-personal-access-token"
For users inside Databricks notebooks: these credentials are set automatically for you.
OpenAI API key: set your API key as an environment variable:
export OPENAI_API_KEY="your-openai-api-key"
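Before launching a script, it can help to verify that the variables above are actually set. A minimal stdlib-only check; the missing_vars helper is illustrative, not part of MLflow:

```python
import os

# Names taken from the export commands above
REQUIRED_VARS = ["DATABRICKS_HOST", "DATABRICKS_TOKEN", "OPENAI_API_KEY"]


def missing_vars(required, env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]


missing = missing_vars(REQUIRED_VARS)
if missing:
    print("Missing environment variables:", ", ".join(missing))
```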
Basic example
import os
from typing import Annotated, Literal

from autogen import ConversableAgent

import mlflow

# Ensure your OPENAI_API_KEY (or other LLM provider keys) is set in your environment
# os.environ["OPENAI_API_KEY"] = "your-openai-api-key"  # Uncomment and set if not globally configured

# Turn on auto tracing for AutoGen
mlflow.autogen.autolog()

# Set up MLflow tracking on Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/autogen-tracing-demo")

# Define a simple multi-agent workflow using AutoGen
config_list = [
    {
        "model": "gpt-4o-mini",
        # Please set your OpenAI API Key to the OPENAI_API_KEY env var before running this example
        "api_key": os.environ.get("OPENAI_API_KEY"),
    }
]

Operator = Literal["+", "-", "*", "/"]


def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    if operator == "+":
        return a + b
    elif operator == "-":
        return a - b
    elif operator == "*":
        return a * b
    elif operator == "/":
        return int(a / b)
    else:
        raise ValueError("Invalid operator")


# First define the assistant agent that suggests tool calls.
assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant. "
    "You can help with simple calculations. "
    "Return 'TERMINATE' when the task is done.",
    llm_config={"config_list": config_list},
)

# The user proxy agent is used for interacting with the assistant agent
# and executes tool calls.
user_proxy = ConversableAgent(
    name="Tool Agent",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None
    and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
)

# Register the tool signature with the assistant agent.
assistant.register_for_llm(name="calculator", description="A simple calculator")(
    calculator
)

# Register the tool implementation with the user proxy agent for execution.
user_proxy.register_for_execution(name="calculator")(calculator)

response = user_proxy.initiate_chat(
    assistant, message="What is (44231 + 13312 / (230 - 20)) * 4?"
)
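initiate_chat returns a ChatResult whose chat_history is a list of message dicts with a "content" key. A small helper like the following can pull the final numeric answer out of that history; the helper is an illustrative sketch that assumes this message shape, not part of AutoGen or MLflow:

```python
import re


def final_number(chat_history):
    """Scan an AutoGen-style chat history (list of dicts with a 'content'
    key) from newest to oldest and return the last number mentioned in the
    most recent message that contains one, or None if no number is found."""
    for msg in reversed(chat_history):
        content = msg.get("content") or ""
        numbers = re.findall(r"-?\d+(?:\.\d+)?", content.replace(",", ""))
        if numbers:
            return float(numbers[-1])
    return None
```

After the workflow above, `final_number(response.chat_history)` would extract the assistant's final numeric result.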
Disabling auto tracing
Auto tracing for AutoGen can be disabled globally by calling mlflow.autogen.autolog(disable=True)
or mlflow.autolog(disable=True).